Autoencoders are neural networks that compress data into a smaller "code," enabling dimensionality reduction, data cleaning, and lossy compression by reconstructing original inputs from this code. Advanced autoencoder types, such as denoising, sparse, and variational autoencoders, extend these concepts for applications in generative modeling, interpretability, and synthetic data generation.
Links: Notes and resources at ocdevel.com/mlg/36. Try a walking desk - stay healthy & sharp while you learn & code. Build the future of multi-agent software with AGNTCY. Thanks to T.J. Wilder from intrep.io for recording this episode!
Fundamentals of Autoencoders
Autoencoders are neural networks designed to reconstruct their input data by passing data through a compressed intermediate representation called a "code." The architecture typically follows an hourglass shape: a wide input and output separated by a narrower bottleneck layer that enforces information compression. The encoder compresses input data into the code, while the decoder reconstructs the original input from this code.
Comparison with Supervised Learning
Unlike traditional supervised learning, where the output differs from the input (e.g., image classification), autoencoders use the same vector for both input and output.
Use Cases: Dimensionality Reduction and Representation
Autoencoders perform dimensionality reduction by learning compressed forms of high-dimensional data, making it easier to visualize and process data with many features. The compressed code can be used for clustering, visualization in 2D or 3D graphs, and input into subsequent machine learning models, saving computational resources and improving scalability.
Feature Learning and Embeddings
Autoencoders enable feature learning by extracting abstract representations from the input data, similar in concept to learned embeddings in large language models (LLMs). While effective for many data types, autoencoder-based encodings are less suited to variable-length text than LLM embeddings.
Data Search, Clustering, and Compression
By reducing dimensionality, autoencoders facilitate vector search, efficient clustering, and similarity retrieval. The compressed codes enable lossy compression analogous to audio codecs like MP3, with the difference that autoencoders lack domain-specific optimizations for preserving perceptually important data.
Reconstruction Fidelity and Loss Types
Loss functions in autoencoders compare reconstructed outputs to the original inputs, often using different loss types depending on input variable types (e.g., Boolean vs. continuous). Compression via autoencoders is typically lossy, meaning some information is lost during reconstruction, and which information is lost may not be easily controlled.
Outlier Detection and Noise Reduction
Since reconstruction errors tend to move data toward the mean, autoencoders can be used to reduce noise and identify outliers. Large reconstruction errors can signal atypical or outlier samples in the dataset.
Denoising Autoencoders
Denoising autoencoders are trained to reconstruct clean data from noisy inputs, making them valuable for image and audio denoising as well as signal smoothing. Iterative denoising as a principle forms the basis of diffusion models, where repeated application of a denoising autoencoder can gradually turn random noise into structured output.
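To make the hourglass architecture concrete, here is a minimal, illustrative PyTorch sketch (not code from the episode; the layer sizes and the use of MSE loss are assumptions) showing the encoder, the bottleneck code, the decoder, and how per-sample reconstruction error can double as an outlier score:

```python
# Minimal autoencoder sketch (illustrative; layer sizes are arbitrary).
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim: int = 784, code_dim: int = 32):
        super().__init__()
        # Encoder: wide input squeezed down to the narrow "code" bottleneck.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, code_dim),
        )
        # Decoder: reconstructs the original input from the code.
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = Autoencoder()
x = torch.randn(64, 784)                      # a batch of flattened inputs
recon = model(x)
loss = nn.functional.mse_loss(recon, x)       # the input doubles as the target

# Per-sample reconstruction error doubles as an outlier score:
# unusually large errors flag atypical records.
errors = ((recon - x) ** 2).mean(dim=1)
```

Training would simply minimize the reconstruction loss with any optimizer; the encoder output alone is the compressed code used for clustering, visualization, or as input to downstream models.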
Data Imputation
Autoencoders can aid in data imputation by filling in missing values: training on complete records and reconstructing missing entries for incomplete records using learned code representations. This approach leverages the model's propensity to output "plausible" values learned from the overall data structure.
Cryptographic Analogy
The separation of encoding and decoding draws parallels to encryption and decryption, though autoencoders are not intended or suitable for secure communication due to their inherent lossiness.
Advanced Architectures: Sparse and Overcomplete Autoencoders
Sparse autoencoders use constraints to encourage code representations with only a few active values, increasing interpretability and explainability. Overcomplete autoencoders have a code size larger than the input, often in applications that require extracting distinct, interpretable features from complex model states.
Interpretability and Research Example
Research such as Anthropic's "Towards Monosemanticity" applies sparse autoencoders to the internal activations of language models to identify interpretable features correlated with concrete linguistic or semantic concepts. These models can be used to monitor and potentially control model behavior (e.g., detecting specific language usage or enforcing safety constraints) by manipulating feature activations.
Variational Autoencoders (VAEs)
VAEs extend the autoencoder architecture by encoding inputs as distributions (means and standard deviations) instead of point values, enforcing a continuous, normalized code space. Decoding from sampled points within this space enables synthetic data generation, as any point near the center of the code space corresponds to plausible data according to the model.
VAEs for Synthetic Data and Rare Event Amplification
VAEs are powerful in domains with sparse data or rare events (e.g., healthcare), allowing generation of synthetic samples representing underrepresented cases. They can increase model performance by augmenting datasets without requiring changes to existing model pipelines.
Conditional Generative Techniques
Conditional autoencoders extend VAEs by allowing controlled generation based on specified conditions (e.g., generating a house with a pool), through additional decoder inputs and conditional loss terms.
Practical Considerations and Limitations
Training autoencoders and their variants requires computational resources, and their stochastic training can produce differing code representations across runs. Lossy reconstruction, lack of domain-specific optimizations, and limited code interpretability restrict some use cases, particularly where exact data preservation or meaningful decompositions are required.
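To make the "encode as a distribution, then sample" idea concrete, here is a minimal, illustrative sketch of a VAE-style encoder with the reparameterization trick and KL penalty (again an assumption-laden sketch, not the episode's code; dimensions are arbitrary):

```python
# Minimal VAE-style sketch (illustrative dimensions; not from the episode).
import torch
import torch.nn as nn

class VAEEncoder(nn.Module):
    def __init__(self, input_dim: int = 784, code_dim: int = 16):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, code_dim)       # mean of the code distribution
        self.log_var = nn.Linear(128, code_dim)  # log-variance of the code distribution

    def forward(self, x: torch.Tensor):
        h = self.hidden(x)
        return self.mu(h), self.log_var(h)

def sample_code(mu: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
    # Reparameterization trick: z = mu + sigma * eps, so gradients can flow
    # through the sampling step during training.
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps

def kl_divergence(mu: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
    # KL term pushes codes toward a standard normal, keeping the code space
    # continuous so that decoding a random z ~ N(0, I) yields plausible data.
    return -0.5 * torch.mean(1 + log_var - mu.pow(2) - log_var.exp())
```

A full VAE pairs this encoder with a decoder and trains on reconstruction loss plus the KL term; sampling z from a standard normal and decoding it is what produces synthetic data.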
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
Exploring a Use Case of Artificial Intelligence Assistance with Understanding an Attack
Jennifer Wilson took a weird string found in a recent honeypot sample and worked with ChatGPT to figure out what it is all about. https://isc.sans.edu/diary/%5BGuest%20Diary%5D%20Exploring%20a%20Use%20Case%20of%20Artificial%20Intelligence%20Assistance%20with%20Understanding%20an%20Attack/31980
Ransomware Deployed via SimpleHelp Vulnerabilities
Ransomware actors are using vulnerabilities in SimpleHelp to gain access to victims' networks via MSPs. The exploited vulnerabilities were patched in January. https://news.sophos.com/en-us/2025/05/27/dragonforce-actors-target-simplehelp-vulnerabilities-to-attack-msp-customers/
OS Command Injection in Evertz Equipment
Broadcast equipment manufactured by Evertz is susceptible to an OS command injection vulnerability. Evertz has not responded to researchers reporting the vulnerability so far and there is no patch available. https://www.onekey.com/resource/security-advisory-remote-code-execution-on-evertz-svdn-cve-2025-4009
Where do you create an AI? This week, Technology Now explores the world of AI factories, dedicated spaces for building bespoke artificial intelligence software. We look into what these factories are, how they work, and we examine the importance of them going forward. Iveta Lohovska tells us more. This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week, hosts Michael Bird and Aubrey Lovell look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations and what can be learnt from it.
More about this week's guest: https://www.linkedin.com/in/iveta-lohovska-40210362/?originalSubdomain=at
Energy to train an LLM: https://www.economist.com/technology-quarterly/2024/01/29/data-centres-improved-greatly-in-energy-efficiency-as-they-grew-massively-larger
Today I Learnt: https://www.science.org/doi/10.1126/sciadv.adu9368
This Week in History:
https://eclipse2017.nasa.gov/testing-general-relativity
https://www.amnh.org/exhibitions/einstein/energy/special-relativity
https://web.lemoyne.edu/giunta/ruth1920.html
Segment 1: Erik Bloch Interview The math on SOC AI just isn't adding up. It's not easy to do the math, either, as each SOC automation vendor is tackling alert fatigue and SecOps assistants a bit differently. Fortunately for us and our audience, Erik Bloch met with many of these vendors at RSAC and is going to share what he learned with us! Segment 2: Enterprise Weekly News In this week's enterprise security news, 1. Some interesting new companies getting funding 2. Chainguard isn't unique anymore 3. AI slop coming to open source soon 4. Wiz dominance analysis 5. the IKEA effect in cybersecurity 6. LLM model collapse 7. vulnerabilities 8. DFIR reports 9. and fun with LinkedIn and prompt injection! Segment 3: RSAC Interviews runZero Interview with HD Moore Despite becoming a checkbox feature in major product suites, vulnerability management is fundamentally broken. The few remaining first-wave vulnerability scanners long ago shifted their investments and attention into adjacent markets to maintain growth, bolting on fragmented functionality that's added complexity without effectively securing today's attack surfaces. Meanwhile, security teams are left contending with massive blind spots and disparate tools that collectively fail to detect exposures that are commonly exploited by attackers. Our industry is ready for change. Jeff and HD explore the current state of vulnerability management, what's required to truly prevent real-world incidents, new perspectives that are challenging the status quo, and innovative approaches that are finally overcoming decades old problems to usher in a new era of vulnerability management. Segment Resources: Read more about runZero's recent launch, including new exposure management capabilities: https://www.runzero.com/blog/new-era-exposure-management/ Watch a two-minute summary and deeper dive videos here: https://www.youtube.com/@runZeroInc Tune into runZero's monthly research webcast, runZero Hour, to hear about the team's latest research findings and additional debate on all things exposure management: https://www.runzero.com/research/runzero-hour/ Try runZero free for 21 days by visiting https://securityweekly.com/runzerorsac. After 21 days, the trial converts into a free Community Edition license that is great for small environments and home networks. Imprivata interview with Joel Burleson-Davis Organizations in mission-critical industries are acutely aware of the growing cyber threats, like the Medusa ransomware gang attacking critical US sectors, but are wary that implementing stricter security protocols will slow productivity and create new barriers for employees. This is a valid concern, but organizations should not accept the trade-off between the inevitability of a breach by avoiding productivity-dampening security measures, or the drop in employee productivity and rise in frustration caused by implementing security measures that might mitigate a threat like Medusa. In this conversation, Joel will discuss how organizations can build a robust security strategy that does not impede productivity. He will highlight how Imprivata's partnership with SailPoint enables stronger enterprise identity security while enhancing efficiency—helping organizations strike the right balance. This segment is sponsored by Imprivata. Visit https://securityweekly.com/imprivatarsac to learn more about them! Visit https://www.securityweekly.com/esw for all the latest episodes! Show Notes: https://securityweekly.com/esw-408
I, Stewart Alsop, had a fascinating conversation on this episode of Crazy Wisdom with Mallory McGee, the founder of Chroma, who is doing some really interesting work at the intersection of AI and crypto. We dove deep into how these two powerful technologies might reshape the internet and our interactions with it, moving beyond the hype cycles to what's truly foundational.
Check out this GPT we trained on the conversation
Timestamps
00:00 The Intersection of AI and Crypto
01:28 Bitcoin's Origins and Austrian Economics
04:35 AI's Centralization Problem and the New Gatekeepers
09:58 Agent Interactions and Decentralized Databases for Trustless Transactions
11:11 AI as a Prosthetic Mind and the Interpretability Challenge
15:12 Deterministic Blockchains vs. Non-Deterministic AI Intents
18:44 The Demise of Traditional Apps in an Agent-Driven World
35:07 Property Rights, Agent Registries, and Blockchains as Backends
Key Insights
Crypto's Enduring Fundamentals: Mallory emphasized that while crypto prices are often noise, the underlying fundamentals point to a new, long-term cycle for the Internet itself. It's about decentralizing control, a core principle stemming from Bitcoin's original blend of economics and technology.
AI's Centralization Dilemma: We discussed the concerning trend of AI development consolidating power within a few major players. This, as Mallory pointed out, ironically mirrors the very centralization crypto aims to dismantle, potentially shifting control from governments to a new set of tech monopolies.
Agents are the Future of Interaction: Mallory envisions a future where most digital interactions aren't human-to-LLM, but agent-to-agent. These autonomous agents will require decentralized, trustless platforms like blockchains to transact, hold assets, and communicate confidentially.
Bridging Non-Deterministic AI with Deterministic Blockchains: A fascinating challenge Mallory highlighted is translating the non-deterministic "intents" of AI (e.g., an agent's goal to "get me a good return on spare cash") into the deterministic transactions required by blockchains. This translation layer is crucial for agents to operate effectively on-chain.
The Decline of Traditional Apps: Mallory made a bold claim that traditional apps and web interfaces are on their way out. As AI agents become capable of generating personalized interfaces on the fly, the need for standardized, pre-built apps will diminish, leading to a world where software is hyper-personalized and often ephemeral.
Blockchains as Agent Backbones: We explored the intriguing idea that blockchains might be inherently better suited for AI agents than for direct human use. Their deterministic nature, ability to handle assets, and potential for trustless reputation systems make them ideal backends for an agent-centric internet.
Trust and Reputation for Agents: In a world teeming with AI agents, establishing trust is paramount. Mallory suggested that on-chain mechanisms like reward and slashing systems can be used to build verifiable reputation scores for agents, helping us discern trustworthy actors from malicious ones without central oversight.
The Battle for an Open AI Future: The age-old battle between open and closed source is playing out again in the AI sphere. While centralized players currently seem to dominate, Mallory sees hope in the open-source AI movement, which could provide a crucial alternative to a future controlled by a few large entities.
Contact Information
* Twitter: @McGee_noodle
* Company: Chroma
Why you should listen
Learn how to build your own AI revenue streams now with agents and digital employees.
Get a reality check on where AI adoption is really at—and how it's disrupting mid-tier SaaS faster than you think.
Hear practical agent use cases across sales, support, and delivery—straight from a tech leader who's helped raise $400M+ and worked with Fortune 500s.
If you're still hanging your hat on your SaaS vendor adding AI features, you're already behind. In this episode, I talk with Ken Gavranovic, a tech exec who's been building with AI since before most people could spell LLM. Ken shares how agents (aka "digital employees") are quietly becoming the next big service you should offer—and why relying solely on platform tools like Salesforce's Agent Assist will cost you deals, not win them. We dive into the difference between LLMs and agents, how to package AI advisory work for clients, and the mindset shift SaaS Consulting Partners must make to stay relevant and profitable in 2025.
About Ken Gavranovic
Ken Gavranovic is a seasoned technology executive, global keynote speaker, and member of Thinkers50 and the Forbes Council, renowned for helping brick-and-mortar businesses leverage AI and advanced technologies for immediate, impactful results. With over three decades of experience, Ken has collaborated with top venture firms, guiding over 18 successful exits, 35 mergers and acquisitions, and one IPO. Ken's expertise spans iconic brands like Disney World and 7-Eleven, where he implemented technology solutions that improved customer engagement and operational efficiency.
Resources and Links
Productgenius.guru
Ken's LinkedIn profile
Previous episode: 611 - From CRM to Agents: How SaaS Consultants Can Sell Bigger Deals with AI
Check out more episodes of the Paul Higgins Podcast
Subscribe to our YouTube channel: @PaulHigginsMentoring
Join our newsletter
Suggested resources
News from Google I/O. Jony Ive joins OpenAI. The Anthropic LLM models that blackmail their developers. Magnotta Redux. Darth Vader swears on Fortnite. These and many other tech stories are discussed in this week's episode.
From Digitalia's distributed studio: Franco Solerio, Michele Di Maio, Francesco Facconi
Executive producers: @Akagrinta, Antonio Manna, Davide Tinti, Arzigogolo, Giuseppe Marino, Mauro Boffa, Luca Di Stefano, Alberto Cuffaro, Alessandro Blasi, @Jh4Ckal, Andrea Bottaro, Andrea Delise, Roberto Basile, Paola Bellini, @Stefano, Idle Fellow, Valerio Bendotti, Douglas Whiting, Massimo Pollastri, Fiorenzo Pilla, Marco Chizzini, Giulio Magnifico, Manuel Zavatta, @Geckonode, Fabio Zappa, Pierpaolo Taffarello, Enrico De Anna, Mattia Lanzoni, Antonio Gargiulo, Davide Bellia, Alessandro Lago, Michele Bordoni, Fabio Filisetti, Cristian De Solda, Filippo Brancaleoni, Marcello Marigliano, Ligea Technology Di D'esposito Antonio, Paola Danieli, Christian Schwarz, Paolo Bernardini, Nicola Gabriele Del Popolo, Giuliano Arcinotti, Fabrizio Mele, Simone Magnaschi
Sponsor:
Links:
The 15 biggest announcements at Google I/O 2025
At Google I/O, everything is changing and normal and scary and chill
Google wants $250 (!) per month for its new AI Ultra plan
Kering Eyewear and Google are designing AI smart glasses
We tried on Googles prototype AI smart glasses
Google is baking Gemini AI into Chrome
OpenAI Acquires Jony Ive's io
OpenAI's next big bet won't be a wearable: report
OpenAIs Ambitions Just Became Crystal Clear
Details leak about Jony Ives new screen-free OpenAI device
What Sam Altman Told About the Secret Device He's Making With Jony Ive
Announcing Gemma 3n preview: powerful efficient mobile-first AI
What to expect at WWDC 2025
Techs Trump Whisperer Tim Cook Goes Quiet as His Influence Fades
Mobile (Alabama) - Wikipedia
Anthropics new AI model turns to blackmail
The Anthropic Claude Opus 4 model blackmailed its developers
The empire strikes back with F-bombs: AI Darth Vader goes rogue
Personal context is Googles big advantage in AI
Tech CEOs are using AI to replace themselves
Duolingo CEO says AI is a better teacher than humans
I trapped an AI model inside an art installation
Amazon's AI-generated 'shopping experts' summarize product details
AI-Generated Summer Reading List With Books That Don't Exist
Slop the Presses
Want to Buy Her House in Ireland? You'll Need $7 and Some Luck.
I helped a lost dogs AirTag ping its owner
13,000 euros for the display of a Kia Sportage
Gadgets of the day:
textfiles.com - text files exchanged on the internet between 1980 and 1995
Building the Dune pinball machine
A Brief History of JavaScript - This year, JavaScript turns 30.
Support Digitalia, become an executive producer.
Fresh off Red Hat Summit, Chris is eyeing an exit from NixOS. What's luring him back to the mainstream? Our highlights, and the signal from the noise from open source's biggest event of the year.
Sponsored By:
Tailscale: Tailscale is a programmable networking software that is private and secure by default - get it free on up to 100 devices!
1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps.
Support LINUX Unplugged
Links:
Today's show: Trump takes aim at Apple with a potential 25% iPhone tariff, Anthropic releases two powerful new Claude models claiming top performance in coding tasks, and two founders go head-to-head in the final round of Founder Fridays. In this packed episode, Jason, Alex, and Lon unpack what Trump's latest trade threat really means, why Anthropic's AI progress could reshape the dev landscape, and which startup—an AI ad network or industrial automation platform—comes out on top. A fast-paced look at the volatility, breakthroughs, and ambition defining today's startup ecosystem.Timestamps:(0:00) Episode Teaser(3:41) MORE tariffs, this time on Apple… is this just a message to Tim Cook?(7:08) Potential impact of tariffs on Apple(10:09) Northwest Registered Agent. Form your entire business identity in just 10 clicks and 10 minutes. Get more privacy, more options, and more done—visit northwestregisteredagent.com/twist today!(14:14) Anthropic's new models are topping the LLM charts but are they REALLY self-aware(20:07) LinkedIn Ads - Get a $100 LinkedIn ad credit at http://www.linkedin.com/thisweekinstartups(24:35) Who should buy OnlyFans? Jason, Alex, and Lon each have ideas(30:03) Lemon.io - Get 15% off your first 4 weeks of developer time at https://Lemon.io/twist(35:10) Gut health is important but do we need SMART toilets?(40:34) Which TWiST 500 companies are blowing up on secondary markets?(47:27) The Founder Friday city competition FINAL ROUND: see which startup wins!Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.comCheck out the TWIST500: https://www.twist500.comSubscribe to This Week in Startups on Apple: https://rb.gy/v19fcpLinks from episode:Throne One: https://www.thronescience.com/product?Arcana: https://arcana.ad/TACTUN: https://tactun.com/Follow Lon:X: https://x.com/lonsFollow Alex:X: https://x.com/alexLinkedIn: https://www.linkedin.com/in/alexwilhelmFollow Jason:X: https://twitter.com/JasonLinkedIn: https://www.linkedin.com/in/jasoncalacanisThank you to our partners:(10:09) Northwest Registered Agent. Form your entire business identity in just 10 clicks and 10 minutes. Get more privacy, more options, and more done—visit northwestregisteredagent.com/twist today!(20:07) LinkedIn Ads - Get a $100 LinkedIn ad credit at http://www.linkedin.com/thisweekinstartups(30:03) Lemon.io - Get 15% off your first 4 weeks of developer time at https://Lemon.io/twistGreat TWIST interviews: Will Guidara, Eoghan McCabe, Steve Huffman, Brian Chesky, Bob Moesta, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarlandCheck out Jason's suite of newsletters: https://substack.com/@calacanisFollow TWiST:Twitter: https://twitter.com/TWiStartupsYouTube: https://www.youtube.com/thisweekinInstagram: https://www.instagram.com/thisweekinstartupsTikTok: https://www.tiktok.com/@thisweekinstartupsSubstack: https://twistartups.substack.comSubscribe to the Founder University Podcast: https://www.youtube.com/@founderuniversity1916
Jim talks with Daniel Rodriguez about the state of AI software development and its implementation in industry. They discuss Daniel's background at Microsoft & Anaconda, transformer-based technologies, software engineering as hard vs soft science, vibe coding, barriers to entry in software engineering, cognitive styles needed for programming, Daniel's history with LLMs, unit testing & test-driven development with AI, social aspects of AI adoption, quality concerns & technical debt, style consistency & aesthetics, approaches to steering LLMs through roles & personas, philosophical perspectives on LLM consciousness & intelligence, personification & interaction styles, memory & conversation history in models, agent-based systems & their historical origins, the future of agent frameworks, customer/user interaction within agent ecosystems, distributed systems, future predictions about inference costs & protocols, IDEs & linting tools, and much more. Episode Transcript JRS EP 289 - Adam Levine on AI-Powered Programming for Non-Developers Daniel Rodriguez is Chief Architect and acting Technical Lead at r.Potential, the first enterprise platform for optimizing hybrid teams of humans and digital workers. As the venture's overall technical architect, he designs and integrates a full stack of AI systems, combining Agentforce with advanced data, simulation, and orchestration technologies to bring that vision to life. Before r.Potential, Daniel bootstrapped and scaled retrieval-augmented AI services and agentic infrastructure at Anaconda. Earlier, at Microsoft, he maintained Azure TypeScript SDKs and co-created Visual Studio Code's Jupyter and Data Wrangler extensions, expanding cloud and data-science workflows.
Space: the final frontier for startups. Rob DeMillo is here to talk to us about the Space startup scene. Anthropic releases hybrid LLMs Claude Opus 4 and Claude Sonnet 4. Will Jony Ive's AI device be more successful than Rabbit R1 or the Humane AI pin? And is Google harming its own ability to compete in the LLM space with its confusing naming conventions? Starring Sarah Lane, Robb Dunewood, Rob DeMillo, Roger Chang, Joe. To read the show notes in a separate page click here! Support the show on Patreon by becoming a supporter!
Can Generative AI Be Secured? Amazon's Chief Security Officer Weighs In
In this episode of Eye on AI, Amazon's Chief Security Officer Stephen Schmidt pulls back the curtain on how Amazon is using AI-powered cybersecurity to defend against real-world threats. From global honeypots to intelligent alarm systems and secure AI agent networks, Steve shares never-before-heard details on how Amazon is protecting both its infrastructure and your data in the age of generative AI.
We dive deep into:
Amazon's MadPot honeypot network and how it tracks adversaries in 90 seconds
The role of AI in threat detection, alarm triage, and code validation
Why open-source vs. closed-source models are a real security debate
The critical need for data privacy, secure LLM usage, and agent oversight
Amazon's $5M+ Nova Trusted AI Challenge to battle adversarial code generation
Whether you're building AI tools, deploying models at scale, or just want to understand how the future of cybersecurity is evolving—this episode is a must-listen. Don't forget to like, subscribe, and turn on notifications to stay updated on the latest in AI, security, and innovation.
Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI
(00:00) Preview
(00:52) Stephen Schmidt's Role and Background at Amazon
(02:11) Inside Amazon's Global Honeypot Network (MadPot)
(05:26) How Amazon Shares Threat Intel Through GuardDuty
(08:06) Are Cybercriminals Using AI?
(10:28) Open Source vs Closed Source AI Security Debate
(13:09) What Is Amazon GuardDuty
(17:44) How Amazon Protects Customer Data at Scale
(20:18) Can Autonomous AI Agents Handle Security?
(25:14) How Amazon Empowers SMBs with Agent-Driven Security
(26:18) What Tools Power Amazon's Security Agents?
(29:25) AI Security Basics
(35:34) Securing AI-Generated Code
(37:26) Are Models Learning from Our Queries?
(39:44) Risks of Agent-to-Agent Data Sharing
(42:08) Inside the $5M Nova Trusted AI Security Challenge
(47:01) Supply Chain Attacks and State Actor Tactics
(51:32) How Many True Adversaries Are Out There?
(53:04) What Everyone Needs to Know About AI Security
Guillaume Gorincour is a radiologist, researcher, teacher... and a savvy user of AI. In this episode, he shares his experience in the field, his on-call shifts in the emergency department with algorithms as assistants, and his ethical reflections on augmented medicine. This is not a technophile plea, but a nuanced, precise, and deeply human account.
Key takeaways from this episode:
– AI does not replace the radiologist: it assists them, sometimes in the middle of the night
– False positives are frequent: the human eye remains indispensable
– Training now includes AI, but without neglecting the patient relationship
– AI provides confidence, not necessarily time
– Medical data is precious: its use must be regulated
– Large models (LLMs) raise environmental and ethical questions
– Explaining a diagnosis to a patient remains a skill... that is irreplaceable
– The physician's responsibility must not be delegated to a black box
– AI can help prioritize urgent cases: a gain in relevance more than in speed
– Tomorrow's radiologists will need to be experts... and human
An episode for understanding what AI really changes in radiology, and why clinical judgment remains at the heart of care.
Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.
The DOGGZZONE welcomes back Dennard Dayle to the DOGGZZONE! Quick, name three sexy things. Don't be shy... Correct! Clowns, Christians and beans. Heaving, gelatinous mouthfuls of steaming hot baked beans. BBQ if you're nasty. Now, imagine a book that packs all three of these, (your favorite) erotic ingredients into a violent, (also hot) action/noir. An LLM hallucination that will leave you honking and horny, (on account of those sweet whore beans you see) it's none other than Duane Laflin's titillating tome, "The Party Is Over! (The Detective Was A Clown)"
Dr. Shiv Rao, cardiologist and CEO of Abridge, joins hosts Raj Manrai and Andy Beam on NEJM AI Grand Rounds for an inspiring conversation at the intersection of medicine, technology, and meaning. Shiv shares the origin story of Abridge, reflecting on how a deeply human encounter in clinic sparked the idea for a company now transforming clinical documentation across more than 100 health systems. From his early days programming electronic music to navigating LLM deployment at scale, Shiv offers a rare look into the soul of a founder building not just infrastructure — but a movement. He unpacks how generative AI can be used to restore presence in the clinic, what it takes to earn clinician trust, and why he believes taste, empathy, and curiosity are the real moats in health care AI. Transcript.
Full episode up on Patreon. In this episode, philosophy professor and AI safety researcher Vincent Lê joins us to explore the strange philosophical roots of Effective Altruism, the Z1zians, and Silicon Valley's culture at large. Rather than rehashing the usual narratives around EA, E/Acc, and AI alignment, we dig into their shared intellectual DNA—from the cringeworthy utilitarianism of Peter Singer to the mimetic theories of René Girard, and even The Office as an unlikely ideological text.
We also touch on the Westall UFO sighting and its eerie relevance, Franco "Bifo" Berardi's reflections on cognitive capitalism and collapse, the "infobrain," the cult of rationalism, and the surreal logic of AI-driven extremism as LLMs talk like a therapist. Plus: exit strategies, neocameralist fantasy worlds, and the political theology behind technocratic acceleration.
Subscribe to Vincent's blog here
https://rtrvr.ai/ Join hosts Shashank from Google AI Labs and Mark (currently on hiatus from Amazon) as they interview Arjun and Pavani, the founders of Retriever AI. This episode explores how Retriever's innovative browser extension brings agentic LLM capabilities directly to your desktop browser, offering unique advantages over cloud-based alternatives. Learn how Retriever can automate repetitive tasks, extract data across multiple websites, and interact with your personal accounts while maintaining security and privacy. The founders share their journey from big tech to startup life, demonstrate real-world use cases, and reveal their exciting vision for a federated network of browser agents that could revolutionize how we interact with the web.
Looking for ideas to engage your students in conversation? In this week's quick news roundup, I give you stories about:
The impact of overwork on the teacher's brain,
An idea for an energy drink experiment for science teachers around the chemical "taurine"
NASA and the tectonic plates of Venus
YouTube's new "Peak Points" advertising strategy as an AI article to discuss with students
ChatGPT 4 going away? And how I teach students to test different models of AI and share their results
How some people are installing local LLMs on their machines
New AI guidance for teachers and common patterns I'm noting
Google's AIME and the future of medical chat bots
DuoLingo goes AI, and
A Star-Wars themed personality test gone to the dark side?
Once a week, I work to share news articles and stories with you that I'm using. I want you to have quick ideas for turning headlines into a warm-up, debate, or story starter.
Show notes and links: https://www.coolcatteacher.com/e902
Sponsor: Rise Vision
Do you want to know how I have students share their prompts and test various models of AI? I use my Rise Vision Board! When teaching AI, seeing how each student uniquely interacts with technology is essential. Rise Vision's screen sharing solution turned my aging display into a modern wireless hub without replacement costs. I can now securely moderate which student screens appear—perfect for AI demonstrations and collaborative learning. The Rise Vision system is incredibly user-friendly and costs just a fraction of new interactive displays. I'm saving my school money while enhancing our tech capabilities! Visit Rise Vision to see how you can refresh rather than replace your classroom displays.
Link: https://www.coolcatteacher.com/risevision/
What if we've been approaching AI agents all wrong? While the tech world obsesses over large language models (LLMs) and prompt engineering, there's a foundational approach that could revolutionize how we build trustworthy AI systems: mechanism design.
This episode kicks off an exciting series where we're building AI agents "the hard way"—using principles from game theory and microeconomics to create systems with predictable, governable behavior. Rather than hoping an LLM can magically handle complex multi-step processes like booking travel, Sid and Andrew explore how to design the rules of the game so that even self-interested agents produce optimal outcomes.
Drawing from our conversation with Dr. Michael Zargum (Episode 32), we break down why LLM-based agents struggle with transparency and governance. The "surface area" for errors expands dramatically when you can't explain how decisions are made across multiple steps. Instead, mechanism design creates clear states with defined optimization parameters at each stage—making the entire system more reliable and accountable.
We explore the famous Prisoner's Dilemma to illustrate how individual incentives can work against collective benefits without proper system design. Then we introduce the Vickrey-Clarke-Groves mechanism, which ensures AI agents truthfully reveal preferences and actively participate in multi-step processes—critical properties for enterprise applications.
Beyond technical advantages, this approach offers something profound: a way to preserve humanity in increasingly automated systems. By explicitly designing for values, fairness, and social welfare, we're not just building better agents—we're ensuring AI serves human needs rather than replacing human thought.
Subscribe now to follow our journey as we build an agentic travel system from first principles, applying these concepts to real business challenges. Have questions about mechanism design for AI? Send them our way for future episodes!
What did you think? Let us know.
Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:
LinkedIn - Episode summaries, shares of cited articles, and more.
YouTube - Was it something that we said? Good. Share your favorite quotes.
Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
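For readers unfamiliar with the truthfulness property mentioned in the episode description above, here is a minimal, self-contained sketch (not from the episode; agent names and bid values are made up) of a single-item Vickrey auction, the simplest instance of the Vickrey-Clarke-Groves mechanism: the winner pays the externality it imposes on the other bidders, which makes honest reporting a dominant strategy.

```python
# Sketch of a single-item Vickrey (second-price) auction, the simplest
# special case of the Vickrey-Clarke-Groves mechanism.

def vickrey_auction(reported_values: dict[str, float]) -> tuple[str, float]:
    """Return (winner, payment) for a sealed-bid second-price auction."""
    ranked = sorted(reported_values, key=reported_values.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    # The winner pays the second-highest report, i.e. the value the other
    # agents lose by not winning. Bidding one's true value is then a
    # dominant strategy: overbidding risks overpaying, underbidding risks
    # losing an item worth more than the price.
    return winner, reported_values[runner_up]

# Hypothetical agents competing for a single booking slot.
bids = {"agent_a": 12.0, "agent_b": 9.5, "agent_c": 7.0}
winner, price = vickrey_auction(bids)
print(winner, price)  # agent_a wins and pays 9.5, not its own bid of 12.0
```

The general VCG mechanism extends the same "pay your externality" rule to richer allocation problems, which is what makes it attractive for coordinating self-interested agents.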
Regal AI is revolutionizing the contact center landscape with its voice AI agent platform that's transforming how businesses communicate with customers. With $82 million in funding, Regal has positioned itself at the forefront of the AI revolution in customer service. In this episode of Category Visionaries, I spoke with Alex Levin, CEO and Co-Founder of Regal AI, about the company's journey from building tools to optimize human agent performance to pioneering voice AI agents that can handle customer interactions with unprecedented effectiveness.
Topics Discussed:
Regal AI's pivot from optimizing human agent calls to developing AI agents
The economics of AI agents compared to human agents (10-20¢ per minute vs. $1 per minute)
How AI agents achieve 97% containment rates versus the 20-40% traditional benchmark
The challenges of enterprise sales in the contact center space
The evolution of Regal's go-to-market strategy as AI capabilities have rapidly advanced
The future of voice as the primary channel for brand engagement
GTM Lessons For B2B Founders:
Timing your product evolution is critical: Alex emphasizes the importance of not moving too early or too late when pivoting to new technology. "If you invested too early, it was a waste, but if you wait too late then all the first trials with every company would be with another AI provider, not with us," Alex explains. Their decision to wait until LLMs were capable enough before focusing on AI agents prevented them from wasting resources on soon-to-be-obsolete technology while still allowing them to be early in the market.
Enterprise sales requires embracing the process: When moving upmarket, Alex learned that trying to rush enterprise sales leads to poor outcomes. "If you try to rush it in an enterprise environment, which is possible, you're not going to have a good outcome," he shares. B2B founders should understand the sales timeline for their specific industry and be prepared for longer, more complex sales cycles when targeting enterprise customers.
Build foundational technology that transcends AI hype: Regal's advantage came from building deep platform infrastructure before AI agents were ready. "Most of the companies that exist today, all they've ever built is this thing that interacts with the customer, the agent itself, the voice and the LLM, which is relatively trivial actually," Alex explains. By building integrations with customer data systems, decision engines, and channel management tools first, they created a more comprehensive solution that could quickly incorporate AI advances when the technology matured.
Reconsider conventional marketing channels: Alex notes that traditional B2B marketing approaches are losing effectiveness: "A lot of the traditional channels that used to work just don't work or are not efficient anymore. So paid SEM, traditional sponsorships of online content, writing blog posts in some big paper... a lot of these demand gen channels are just highly ineffective." Founders should prioritize breaking through with authentic founder-led storytelling rather than relying solely on conventional demand generation tactics.
The economics of AI can reverse long-standing business practices: Regal AI's solution flips conventional contact center wisdom on its head. As Alex explains, "Instead of calls being the most expensive thing you have, AI calls are the cheapest channel you have. So you lead with those calls and you do as many calls as possible because it's cheaper than any other channel."
B2B founders should look for opportunities where AI fundamentally changes the cost structure of traditional business operations. // Sponsors: Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co
The After Hours Entrepreneur Social Media, Podcasting, and YouTube Show
Is your business ready for the age of AI, hyper-personalized content, and digital transformation?
Matt Britton is the CEO of Suzy, a real-time consumer insights platform trusted by global giants like Netflix and P&G. He's also the author of Generation AI, a powerful new book exploring how Generation Alpha is the first truly AI-native generation—and what that means for brands, leaders, and innovators.
In this episode, Matt unpacks how AI is transforming marketing, content creation, SEO, and influence. He shares real strategies businesses can use to train custom GPTs, build first-party data systems, and use automation to reclaim time and increase impact.
You'll also learn how the influencer economy is evolving with AI-generated personas, what the future of search looks like, and how companies can futureproof their people and processes in a world changing at exponential speed.
In this conversation, you'll learn:
Why the future of marketing belongs to AI-native generations.
How first-party data and automation are redefining business operations.
Why companies must embrace AI now or risk extinction.
How AI influencers and digital twins are reshaping culture and media.
How to build search-optimized, relevant content that AI models rank.
The role of internal training and curiosity in adapting to AI.
What Google's search drop signals for your brand strategy.
Why building authority, relevance, and video content is key in the LLM era.
Takeaways:
Why businesses must move now to integrate AI or risk falling behind.
How to train custom GPTs using first-party and third-party data.
What the future of content, SEO, and influencer marketing looks like in an AI-dominated world.
Connect with Matt Britton:
Website: https://mattbritton.com/
Book: https://a.co/d/0UEMYxR
Timestamps:
00:00 - Intro: AI's rapid disruption
01:00 - Why companies aren't adapting fast enough
03:00 - Small vs. large business AI adoption
06:00 - Training GPTs with your own data
08:30 - Automating your operations with AI
10:00 - Building custom frameworks in ChatGPT
12:00 - AI's impact on SEO and the fall of Google
15:00 - Content strategy for AI-powered search
18:00 - The rise of AI influencers and virtual IP
22:00 - The future of jobs and creativity in AI
23:00 - About Generation AI
Microsoft finally opens the source of WSL, Paolo Scanferla describes an inherent trade-off in TypeScript's type system, Alberto Fortin is taking a step back from heavy LLM use while coding, a pseudonymous hacker spent two weeks coding from their Android phone, and NLWeb might become the HTML of the open agentic web.
In this episode, Marc Bernstein sits down with Will Bodewes, CEO of Phonely. This one's about architecture, not aspiration. Will breaks down the anatomy of a voice agent, from speech generation and LLM orchestration to real-time prompting and user control. You'll hear why hallucinations aren't what you think, how "agent guidelines" shape human-like behavior, and why flow-based design is the new gold standard for AI in the contact center.
Plus, how Phonely customers go live in 30 days, what really drives conversion, and why the future of AI isn't about doing everything—it's about doing the right things, repeatedly, without fail.
Whether you're building an AI roadmap or just tired of buzzwords, this one's for you.
Timestamps:
00:00 Introduction
01:03 Setting Up LLMs
05:10 Prompts & Giving Users Control
06:47 Setting Up On Phonely
09:50 Agent Guidelines
14:02 First Day, Month, and Year Using Phonely
18:40 AI Agent Use Cases & Limitations
24:13 Customers' Comfort & Learnings
32:10 AI in 3 Years
Scott has spent over 25 years in the IT field, working in disciplines such as systems architecture, software development, team growth, and entrepreneurship. He was a Microsoft MVP for 12 years in ASP.NET and IIS. He's co-authored two books (IIS 7 and IIS 8 Professional), is a Pluralsight author, and has spoken at various conferences, code camps, and user groups. He's now shifted into the AI space, building AI solutions and supporting others in their AI journey. Scott co-founded a new startup, Kitchen Copilot, using all the new GenAI tech to help people cook more at home and eat healthier.
Topics of Discussion:
[1:34] Scott's career journey and how Kitchen Copilot originated.
[7:27] More about Kitchen Copilot's features.
[9:35] How Kitchen helps with meal planning and recipe discovery for all levels of at-home chefs.
[13:17] The technical process of importing recipes.
[15:58] Using Microsoft Document intelligence to read handwriting and take screenshots of cookbook recipes.
[17:13] Deployed pieces of the application.
[20:43] E-mail import and queuing system.
[23:32] Choice of LLM models including Groq.
[29:37] Automated testing approach for LLM-based applications.
[31:36] Hosting details.
Mentioned in this Episode: Clear Measure Way Architect Forum Software Engineer Forum Programming with Palermo — New Video Podcast! Email us at programming@palermo.net. Clear Measure, Inc. (Sponsor) .NET DevOps for Azure: A Developer's Guide to DevOps Architecture the Right Way, by Jeffrey Palermo Flowise Attention is all you need Kitchen Copilot Make Perplexity Claude Groq About Groq Scott on LinkedIn Kitchen Copilot AI Generator
Want to Learn More? Visit AzureDevOps.Show for show notes and additional episodes.
Martin Mao is the co-founder and CEO of Chronosphere, an observability platform built for the modern containerized world. Prior to Chronosphere, Martin led the observability team at Uber, tackling the unique challenges of large-scale distributed systems. With a background as a technical lead at AWS, Martin brings unique experience in building scalable and reliable infrastructure. In this episode, he shares the story behind Chronosphere, its approach to cost-efficient observability, and the future of monitoring in the age of AI.
What you'll learn:
The specific observability challenges that arise when transitioning to containerized environments and microservices architectures, including increased data volume and new problem sources.
How Chronosphere addresses the issue of wasteful data storage by providing features that identify and optimize useful data, ensuring customers only pay for valuable insights.
Chronosphere's strategy for competing with observability solutions offered by major cloud providers like AWS, Azure, and Google Cloud, focusing on specialized end-to-end product.
The innovative ways in which Chronosphere's products, including their observability platform and telemetry pipeline, improve the process of detecting and resolving problems.
How Chronosphere is leveraging AI and knowledge graphs to normalize unstructured data, enhance its analytics engine, and provide more effective insights to customers.
Why targeting early adopters and tech-forward companies is beneficial for product innovation, providing valuable feedback for further improvements and new features.
How observability requirements are changing with the rise of AI and LLM-based applications, and the unique data collection and evaluation criteria needed for GPUs.
Takeaways:
Chronosphere originated from the observability challenges faced at Uber, where existing solutions couldn't handle the scale and complexity of a containerized environment.
Cost efficiency is a major differentiator for Chronosphere, offering significantly better cost-benefit ratios compared to other solutions, making it attractive for companies operating at scale.
The company's telemetry pipeline product can be used with existing observability solutions like Splunk and Elastic to reduce costs without requiring a full platform migration.
Chronosphere's architecture is purposely single-tenanted to minimize coupled infrastructures, ensuring reliability and continuous monitoring even when core components go down.
AI-driven insights for observability may not benefit from LLMs that are trained on private business data, which can be diverse and may cause models to overfit to a specific case.
Many tech-forward companies are using the platform to monitor model training which involves GPU clusters and a new evaluation criterion that is unlike general CPU workload.
The company found a huge potential by scrubbing the diverse data and building knowledge graphs to be used as a source of useful information when problems are recognized.
Subscribe to Startup Project for more engaging conversations with leading entrepreneurs!
→ Email updates: https://startupproject.substack.com/
#StartupProject #Chronosphere #Observability #Containers #Microservices #Uber #AWS #Monitoring #CloudNative #CostOptimization #AI #ArtificialIntelligence #LLM #MLOps #Entrepreneurship #Podcast #YouTube #Tech #Innovation
In episode 258, my guest was Timur Topalgökçeli, Chairman of the Board and CEO of Hello Tomorrow Türkiye. Is Deep Tech's commercialization process really as challenging as people think? What do investment trends in Europe tell entrepreneurs in Türkiye? With Hello Tomorrow Türkiye ambassador Timur Topalgökçeli, we examine the Deep Tech world, from artificial intelligence to new energy technologies, from investor psychology to global competition. In this episode, we shed light on the future of startups founded at the intersection of science and technology. https://hello-tomorrow.org.tr/
(00:00) – Opening
(00:25) – About Hello Tomorrow Türkiye
(03:22) – What is Deep Tech?
(04:25) – According to a McKinsey report, Deep Tech startups have success rates over time similar to those of traditional tech startups. In your experience, are the commercialization and scaling processes of Deep Tech startups really as hard as people think?
(06:50) – According to 2023 data, 20-30% of Deep Tech investments in Europe went to artificial intelligence and 15-20% to new energy. Where do you think the greatest future potential lies for Deep Tech entrepreneurs in Türkiye or Europe?
(09:57) – Innovation and government support...
(15:12) – Deep Tech investments require larger initial capital but deliver higher returns in the long run. What is the hardest part of convincing investors in Deep Tech projects? Do you think this risk perception can be changed?
(18:52) – Deep Tech startups generally address global markets. What strategic steps do you think should be taken so that Deep Tech solutions from Türkiye can compete globally?
(21:30) – What theme is Hello Tomorrow Türkiye organized around this year, and how do you assess the role it plays for Deep Tech entrepreneurship? Do you think such summits fill a gap in carrying scientific breakthroughs to market?
(23:27) – Hello Tomorrow's support program launched specifically for LLM-based AI startups is drawing attention. How do you think large language models and AI are opening the way for Deep Tech startups? Why should entrepreneurs working in these areas turn more to such dedicated programs?
(27:30) – Book recommendations:
Moby Dick - https://www.goodreads.com/book/show/18187101-moby-dick---beyaz-balina?ac=1&from_search=true&qid=0vYoZcnY9U&rank=1
Disciplined Entrepreneurship (24 Steps to a Successful Startup) - https://www.goodreads.com/book/show/39217082-disciplined-entrepreneurship?ac=1&from_search=true&qid=nGNwrUOPyY&rank=1
(31:00) – Our Patreon supporters...
(31:22) – Closing
Have you followed us on social media?
Twitter - https://twitter.com/dunyatrendleri
Instagram - https://www.instagram.com/dunya.trendleri/
Linkedin - https://www.linkedin.com/company/dunyatrendleri/
Youtube - https://www.youtube.com/c/aykutbalcitv
Goodreads - https://www.goodreads.com/user/show/28342227-aykut-balc
aykut@dunyatrendleri.com
Our Patreon account, where you can donate and support us - https://www.patreon.com/dunyatrendleri
Learn more about your ad choices. Visit megaphone.fm/adchoices
An airhacks.fm conversation with Juan Fumero (@snatverk) about: tornadovm as a Java parallel framework for accelerating data parallelization on GPUs and other hardware, first GPU experiences with ELSA Winner and Voodoo cards, explanation of TornadoVM as a plugin to existing JDKs that uses Graal as a library, TornadoVM's programming model with @parallel and @reduce annotations for parallelizable code, introduction of kernel API for lower-level GPU programming, TornadoVM's ability to dynamically reconfigure and select the best hardware for workloads, implementation of LLM inference acceleration with TornadoVM, challenges in accelerating Llama models on GPUs, introduction of tensor types in TornadoVM to support FP8 and FP16 operations, shared buffer capabilities for GPU memory management, comparison of Java Vector API performance versus GPU acceleration, discussion of model quantization as a potential use case for TornadoVM, exploration of Deep Java Library (DJL) and its ND array implementation, potential standardization of tensor types in Java, integration possibilities with Project Babylon and its Code Reflection capabilities, TornadoVM's execution plans and task graphs for defining accelerated workloads, ability to run on multiple GPUs with different backends simultaneously, potential enterprise applications for LLMs in Java including model distillation for domain-specific models, discussion of Foreign Function & Memory API integration in TornadoVM, performance comparison between different GPU backends like OpenCL and CUDA, collaboration with Intel Level Zero oneAPI and integrated graphics support, future plans for RISC-V support in TornadoVM Juan Fumero on twitter: @snatverk
I, Stewart Alsop, welcomed Woody Wiegmann to this episode of Crazy Wisdom, where we explored the fascinating and sometimes unsettling landscape of Artificial Intelligence. Woody, who is deeply involved in teaching AI, shared his insights on everything from the US-China AI race to the radical transformations AI is bringing to education and society at large.
Check out this GPT we trained on the conversation
Timestamps
01:17 The AI "Cold War": Discussing the intense AI development race between China and the US.
03:04 Opaque Models & Education's Resistance: The challenge of opaque AI and schools lagging in adoption.
05:22 AI Blocked in Schools: The paradox of teaching AI while institutions restrict access.
08:08 Crossing the AI Rubicon: How AI users are diverging from non-users into different realities.
09:00 Budgetary Constraints in AI Education: The struggle for resources like premium AI access for students.
12:45 Navigating AI Access for Students: Woody's ingenious workarounds for the premium AI divide.
19:15 Igniting Curiosity with AI: Students creating impressive projects, like catapult websites.
27:23 Exploring Grok and AI Interaction: Debating IP concerns and engaging with AI ("Morpheus").
46:19 AI's Societal Impact: AI girlfriends, masculinity, and the erosion of traditional skills.
Key Insights
The AI Arms Race: Woody highlights a "cold war of nerdiness" where China is rapidly developing AI models comparable to GPT-4 at a fraction of the cost. This competition raises questions about data transparency from both sides and the strategic implications of superintelligence.
Education's AI Resistance: I, Stewart Alsop, and Woody discuss the puzzling resistance to AI within educational institutions, including outright blocking of AI tools. This creates a paradox where courses on AI are taught in environments that restrict its use, hindering practical learning for students.
Diverging Realities: We explore how individuals who have crossed the "Rubicon" of AI adoption are now living in a vastly different world than those who haven't. This divergence is akin to past technological shifts but is happening at an accelerated pace, impacting how people learn, work, and perceive reality.
The Fading Relevance of Traditional Coding: Woody argues that focusing on teaching traditional coding languages like Python is becoming outdated in the age of advanced AI. AI can handle much of the detailed coding, shifting the necessary skills towards understanding AI systems, effective prompting, and higher-level architecture.
AI as the Ultimate Tutor: The advent of AI offers the potential for personalized, one-on-one tutoring for everyone, a far more effective learning method than traditional classroom lectures. However, this potential is hampered by institutional inertia and a lack of resources for tools like premium AI subscriptions for students.
Curiosity as the AI Catalyst: Woody shares anecdotes of students, even those initially disengaged, whose eyes light up when using AI for creative projects, like designing websites on niche topics such as catapults. This demonstrates AI's power to ignite curiosity and intrinsic motivation when paired with focused goals and the ability to build.
AI's Impact on Society and Skills: We touch upon the broader societal implications, including the rise of AI girlfriends addressing male loneliness and providing acceptance.
Simultaneously, there's concern over the potential atrophy of critical skills like writing and debate if individuals overly rely on AI for summarization and opinion generation without deep engagement. Contact Information: * Twitter/X: @RulebyPowerlaw * Listeners can search for Woody Wiegmann's podcast "Courage over convention" * LinkedIn: www.linkedin.com/in/dataovernarratives/
In this episode of The Tech Trek, Amir speaks with Patrick Leung, CTO of Faro Health, about what it takes to lead an engineering organization through a transformation to become an AI-first company. From redefining the product roadmap to managing cultural and technical shifts, Patrick shares practical insights on team structure, skill development, and delivering AI-enabled features in a regulated domain like clinical trials. This is a must-listen for tech leaders navigating similar transitions.
Why is February only 28 days long? Is there a link between red meat, testosterone, and male violence? Is there a 'best' LLM? Can macular degeneration cause you to go blind? What do we know about the big bang? What determines the speed of your internet? Does an autoimmune system attack random parts of the body? Dr Chris Smith and Clarence Ford have all the answers... Like this podcast? Please help us by supporting the Naked Scientists
In this episode, Kane Simms is joined by Katherine Munro, Conversational AI Engineer at Swisscom, for a deep dive into what might sound like an odd pairing: using LLMs to classify customer intents.Large Language Models (LLMs) are powerful, multi-purpose tools. But would you trust one to handle the precision of a classification task?It's an unlikely fit for an LLM. Classifiers typically need to be fast, accurate, and interpretable. LLMs are slow, random black-boxes. Classifiers need to output a single label. LLMs never stop talking.And yet, there are good reasons to use LLMs for such tasks, and emerging architectures and techniques. Many real-world use cases need a classifier, and many data and product development teams will soon find themselves wondering: could GPT handle that?If that sounds like you, then check out this extended episode to explore how Switzerland's largest telecommunications provider tackles this issue while building a next-generation AI assistant. This episode is brought to you by NLX.NLX is a conversational AI platform enabling brands to build and manage chat, voice and multimodal applications. NLX's patented Voice+ technology synchronizes voice with digital channels, making it possible to automate complex use cases typically handled by a human agent. When a customer calls, the voice AI guides them to resolve their inquiry through self-service using the brand's digital asset, resulting in automation and CSAT scores well above industry average. Just ask United Airlines.Shownotes:"The Handbook of Data Science and AI: Generate Value from Data with Machine Learning and Data Analytics" - Available on Amazon: https://a.co/d/3wNN9cvKatherine's website: http://katherine-munro.com/Subscribe to VUX World: https://vuxworld.typeform.com/to/Qlo5aaeWSubscribe to The AI Ultimatum Substack: https://open.substack.com/pub/kanesimmsGet in touch with Kane on LinkedIn: https://www.linkedin.com/in/kanesimms/ Hosted on Acast. See acast.com/privacy for more information.
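As a rough illustration of the pattern discussed above (not Swisscom's actual implementation), one common approach is to constrain the model to a fixed label set in the prompt and then validate its reply. A minimal sketch follows; the intent labels are made up and call_llm is a hypothetical stand-in for whatever model API is in use.

```python
INTENTS = ["billing", "technical_support", "cancel_subscription", "other"]

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; returns a canned reply here.
    return "billing"

def classify_intent(utterance: str) -> str:
    prompt = (
        "Classify the customer message into exactly one of these intents: "
        + ", ".join(INTENTS)
        + ". Reply with the intent label only.\n\nMessage: "
        + utterance
    )
    label = call_llm(prompt).strip().lower()
    # Guardrail: an LLM can ramble, so fall back to 'other' if the reply
    # is not one of the allowed labels.
    return label if label in INTENTS else "other"

print(classify_intent("I was charged twice this month"))
```

The post-hoc label check is what turns a "never stops talking" generator into something a downstream routing system can trust.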
The Net Promoter System Podcast – Customer Experience Insights from Loyalty Leaders
Episode 246: The AI future of customer service is already here—and it's better than most people think. In this episode, Deon Nicholas, President and Executive Chairman of Forethought, joins host Rob Markey to show us how some companies are already using AI to resolve customer issues end-to-end in ways we could barely imagine just a couple of years ago. Deon introduces us to agentic AI: an emerging class of intelligent agents that take real action, integrate across enterprise systems, and adapt to each customer's needs. Drawing on his experience building Forethought's platform, Deon reveals how these systems are resolving issues, improving customer satisfaction, and going live in as little as 1 to 30 days. This isn't a future promise. It's happening now. The episode explores the architecture behind agentic AI, including Forethought's use of multi-agent systems, plain-language Autoflows, and a Discover model that learns company policies from historical tickets and call logs. Rob and Deon dig into risk, hallucination, and data privacy concerns—and how to address them without six-month implementation timelines. A surprising insight? Forethought sometimes adds a delay to its lightning-fast responses. Why? To build trust through operational transparency. Deon explains how even loading dots can reassure customers that the system is working on their behalf. Guest: Deon Nicholas, Founder, President, and Executive Chairman, Forethought Host: Rob Markey, Partner, Bain & Company Give Us Feedback: Help us improve the podcast (feedback link) Want to get in touch? Send a note to host Rob Markey. Key Topics Covered [1:00] Agentic AI vs. traditional chatbots [2:00] Why chatbots fail regarding decision trees and limitations [4:00] Real-time AI issue resolution and automation [7:00] AI integration with enterprise systems [12:00] Fast deployment and autoflow policy learning [15:00] Multi-agent AI systems and scalability [18:00] AI adoption challenges and business integration [22:00] Balancing AI automation and human agent handoffs [27:00] Cost comparison of AI vs. business process outsourcing [30:00] Rapid AI deployment and testing strategies Time-stamped Notable Quotes [4:00] “With an agentic AI, you have something that has learned from your business policies. It's read through hundreds of thousands of past conversations, knows the vernacular, knows how to respond, and knows the business policy. So, instead of getting a menu of items, you just have a conversation.” [13:00] “You probably already have hundreds of thousands of conversations, whether they're sitting in transcripts, support tickets, [or] call recordings. It turns out that is a wealth of data that can train an AI in such a way that you don't need to manually create all these rules and decision trees and workflows.” [ 16:00] “When we first launched our LLM-native AI two years ago, there were some hallucinations. But we've been able to go through, evaluate the model, fine-tune the model, and now we're at the point where it rarely happens. What we typically say to everyone is: ‘Test it. Test it before you launch it, run a 14-day free trial, proof of value, run us against anyone else in the market.'” [21:00] “What's beautiful about all of this is now you get to the point where AI can become embedded into the ecosystem—and, ironically, make all of these human experiences better.” [21:00] “AI is making it so that when it's time to actually hand off to a human agent, you're far less frustrated, or far less likely to be frustrated. 
And then the humans will now be resolving issues that require more judgment and more empathy.”
How do we make artificial intelligence more intelligent? This week, Technology Now dives deep into the world of AI agents and how they interact with large language models. We ask what are some of the current problems with AI, and examine how applying agents can help artificial intelligence to provide better answers to our questions. Jimmy Whitaker, Chief Scientist in the AI Private Cloud Group at HPE, tells us more. This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week, hosts Michael Bird and Aubrey Lovell look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations and what can be learnt from it. Jimmy Whitaker: https://www.linkedin.com/in/jimmymwhitaker/ Sources cited in this week's episode: Today I learned: https://www.simonsfoundation.org/2025/04/29/flares-from-magnetized-stars-can-forge-planets-worth-of-gold-other-heavy-elements/ Anirudh Patel et al., 2025, Direct Evidence for r-process Nucleosynthesis in Delayed MeV Emission from the SGR 1806–20 Magnetar Giant Flare, ApJL 984 L29, DOI 10.3847/2041-8213/adc9b0 This week in history: Strassburg MA. The global eradication of smallpox. Am J Infect Control. 1982 May;10(2):53-9. doi: 10.1016/0196-6553(82)90003-7. PMID: 7044193. Muyembe JJ, et al, 2024, Ebola Outbreak Response in the DRC with r-VSV-ZEBOV-GP Ring Vaccination, The New England Journal of Medicine, 2024;391:2327-2336, VOL. 391 NO.24, https://www.nejm.org/doi/10.1056/NEJMoa1904387 https://www.who.int/health-topics/poliomyelitis#tab=tab_1
Federal Tech Podcast: Listen and learn how successful companies get federal contracts
AFCEA's TechNet Cyber conference, held in Baltimore, Maryland, was the perfect opportunity to sit down with Greg Carl, Principal Technologist from Pure Storage. Pure Storage is used by 175 federal agencies, so it was time to sit down with a subject matter expert and have him explain the company's value proposition. Today's federal government is attempting to accomplish digital modernization through a move to the cloud and, at the same time, reduce staff. Multiplying the risk of this endeavor, we see an increase in cyber attacks on data at rest, in transit, and in use. Greg Carl drills down on how Pure Storage can help federal leaders in several areas, beginning with Retrieval Augmented Generation (RAG). Many people have jumped into AI without knowing how to structure a large language model, the popular LLM. RAG grounds text generation in retrieved data and tries to make sure that data is accurate, relevant, and contextually aware. Pure Storage asks: if RAG protects the results of a query, what protects the "retrieval" part of RAG? We know LLMs are being attacked every day; malicious code could be placed in an LLM, and the RAG system might not know. A decade ago, backups were child's play: a server down the hall, a backup appliance. Today, one needs an agile cloud solution to perform continuous backups in a hybrid world. One way to gain resilience is to use immutable backups, so an attacked system can be restored without losing valuable time. Speed and security in handling important data can reduce costs for federal leaders by improving the accuracy of LLMs and shortening the time to recover after an attack. Connect with John Gilroy on LinkedIn: https://www.linkedin.com/in/john-gilroy/ Want to listen to other episodes? www.Federaltechpodcast.com
In this episode we discuss why and how we built the new sharing experience (see blog post for v1.19). Download the Metacast podcast app for free: iOS: https://apps.apple.com/app/metacast/id6462012536 Android: https://play.google.com/store/apps/details?id=app.metacast.podcast.player Segments: [00:20] Opening [00:57] New weekly update episodes [05:22] New podcast sharing experience in Metacast [06:52] Why we are working on sharing? [09:32] The new sharing experience [14:35] QR codes [16:51] Optimized image loading [17:53] Problems with the old URL sharing [23:04] The technical solution for consistently working app links [24:43] The complexity of the change [28:45] Preview of the next episode about rebranding [33:26] Recommendations Show notes: New sharing experience announcement blog; Latest weekly update episode: Weekly Update (May 6, 2025): New sharing experience, new logo & branding; 67. Attack of LLM bots (podcast) + blog; Episode with Jennie: 42. Equity compensation at a pre-revenue bootstrapped startup with Jennie Buechner; Voices in my head by Falling in Reverse; Get in touch
This episode is sponsored by Oracle. OCI is the next-generation cloud designed for every workload – where you can run any application, including any AI projects, faster and more securely for less. On average, OCI costs 50% less for compute, 70% less for storage, and 80% less for networking. Join Modal, Skydance Animation, and today's innovative AI tech companies who upgraded to OCI…and saved. Offer only for new US customers with a minimum financial commitment. See if you qualify for half off at http://oracle.com/eyeonai In this episode of Eye on AI, host Craig Smith speaks with Prashanth Chandrasekar, CEO of Stack Overflow, about how one of the internet's most trusted platforms for developers is adapting to the era of generative AI. With over 60 million human-curated Q&A pairs, Stack Overflow is now at the center of AI development — not as a competitor to large language models like ChatGPT, but as a foundational knowledge base that powers them. Prashanth breaks down how Stack Overflow is partnering with OpenAI, Google, and other LLM providers to license its data and improve AI accuracy, while also protecting the integrity of its community. He explains the rise of OverflowAI, how Stack Overflow for Teams is fueling enterprise-grade co-pilots, and why developers still rely on expert human input when AI hits its “complexity cliff.” The conversation covers everything from hallucination problems and trust issues in AI-generated code to the monetization of developer data and the evolving interface of the web. If you want to understand the future of developer tools, AI coding assistants, and how human knowledge will coexist with autonomous agents, this episode is a must-listen. Subscribe for more deep dives into how AI is reshaping the world of software, enterprise, and innovation. Stay Updated: Craig Smith on X:https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI (00:00) Intro (02:31) Prashanth's Journey from Developer to CEO (05:18) Why Stack Overflow is Different from GitHub (08:51) The Power of Community and Human-Curated Knowledge (12:53) Stack Overflow's Data Strategy for AI Training (17:26) Why Stack Overflow Isn't Competing with OpenAI (20:36) How Stack Overflow Powers Enterprise AI Agents (26:13) OverflowAI, Gemini, and the Future of Dev Workflows (30:09) Inside Stack Overflow for Teams (33:29) Safeguarding Quality: The Fight Against AI Slop (38:32) Licensing, Attribution, and Protecting the Knowledge Base (43:19) Business Strategy in the Age of Generative AI
* Mailbag!* Tariff talk* Josh is in another play!* Email from Dean the Machine: Any plans for Cathay?* Email from Tibor: Thanks for the podcast!* Group pattern for the Throalic Army?* Email from Paul: Campaign tales!* Email from Justin: Upcoming Roll20 materials?* News about the upcoming Legends of Barsaive: Haven Vol 2* Email from Michael: What's the deal with dwarf sword game stats?* Email from Brian: Inventory spell and ChatGPT* Extensive discussion about LLM "tools" like ChatGPTFind and Follow:Email: edsgpodcast@gmail.comYouTube: https://www.youtube.com/@EDSGPodcastFind and follow Josh: https://linktr.ee/LoreMerchantGet product information, developer blogs, and more at www.fasagames.comFASA Games on Facebook: https://www.facebook.com/fasagamesincOfficial Earthdawn Facebook Group: https://www.facebook.com/groups/officialearthdawnFASA Games Discord Channel: https://discord.gg/uuVwS9uEarthdawn West Marches: https://discord.gg/hhHDtXW
We chat with linguist and cognitive scientist Hagen Blix about his new book Why We Fear AI (co-authored with computer scientist Ingeborg Glimmer) about how the technical qualities of AI – especially LLM chatbots – take the alienation (and seemingly alien power) of capital to the next level. What happens when the social logic of capital — which appears to be a motive force with no motivator — is channeled through generative technologies that appear to be texts with no author? People see an entity that must be feared and worshipped. ••• Why We Fear AI | Hagen Blix & Ingeborg Glimmer https://www.commonnotions.org/why-we-fear-ai ••• If A.I. Systems Become Conscious, Should They Have Rights? https://www.nytimes.com/2025/04/24/technology/ai-welfare-anthropic-claude.html ••• Marx's Comments on James Mill http://marxists.org/archive/marx/works/1844/james-mill/ Standing Plugs: ••• Order Jathan's new book: https://www.ucpress.edu/book/9780520398078/the-mechanic-and-the-luddite ••• Subscribe to Ed's substack: https://substack.com/@thetechbubble ••• Subscribe to TMK on patreon for premium episodes: https://www.patreon.com/thismachinekills Hosted by Jathan Sadowski (bsky.app/profile/jathansadowski.com) and Edward Ongweso Jr. (www.x.com/bigblackjacobin). Production / Music by Jereme Brown (bsky.app/profile/jebr.bsky.social)
Developers are relying on LLMs as coding assistants, so where are the LLM assistants for appsec? The principles behind secure code reviews don't really change based on who writes the code, whether human or AI. But more code means more reasons for appsec to scale its practices and figure out how to establish trust in code, packages, and designs. Rey Bango shares his experience with secure code reviews and where developer education fits in among the adoption of LLMs. As businesses rapidly embrace SaaS and AI-powered applications at an unprecedented rate, many small-to-medium sized businesses (SMBs) struggle to keep up due to complex tech stacks and limited visibility into the skyrocketing app sprawl. These modern challenges demand a smarter, more streamlined approach to identity and access management. Learn how LastPass is reimagining access control through “Secure Access Experiences” - starting with the introduction of SaaS Monitoring capabilities designed to bring clarity to even the most chaotic environments. Secure Access Experiences - https://www.lastpass.com/solutions/secure-access This segment is sponsored by LastPass. Visit https://securityweekly.com/lastpassrsac to learn more about them! Cloud Application Detection and Response (CADR) has burst onto the scene as one of the hottest categories in security, with numerous vendors touting a variety of capabilities and making promises on how bringing detection and response to the application-level will be a game changer. In this segment, Gal Elbaz, co-founder and CTO of Oligo Security, will dive into what CADR is, who it helps, and what the future will look like for this game changing technology. Segment Resources - https://www.oligo.security/company/whyoligo To see Oligo in action, please visit https://securityweekly.com/oligorsac Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-330
Today, I'm chatting with Stuart Winter-Tear about AI product management. We're getting into the nitty-gritty of what it takes to build and launch LLM-powered products for the commercial market that actually produce value. Among other things in this rich conversation, Stuart surprised me with the level of importance he believes UX has in making LLM-powered products successful, even for technical audiences. After spending significant time on the forefront of AI's breakthroughs, Stuart believes many of the products we're seeing today are the result of FOMO above all else. He shares a belief that I've emphasized time and time again on the podcast–product is about the problem, not the solution. This design philosophy has informed Stuart's 20-plus year-long career, and it is pivotal to understanding how to best use AI to build products that meet users' needs. Highlights/ Skip to Why Stuart was asked to speak to the House of Lords about AI (2:04) The LLM-powered products Stuart has been building recently (4:20) Finding product-market fit with AI products (7:44) Lessons Stuart has learned over the past two years working with LLM-powered products (10:54) Figuring out how to build user trust in your AI products (14:40) The differences between being a digital product manager vs. AI product manager (18:13) Who is best suited for an AI product management role (25:42) Why Stuart thinks user experience matters greatly with AI products (32:18) The formula needed to create a business-viable AI product (38:22) Stuart describes the skills and roles he thinks are essential in an AI product team and who he brings on first (50:53) Conversations that need to be had with academics and data scientists when building AI-powered products (54:04) Final thoughts from Stuart and where you can find more from him (58:07) Quotes from Today's Episode “I think that the core dream with GenAI is getting data out of IT hands and back to the business. Finding a way to overlay all this disparate, unstructured data and [translate it] to the human language is revolutionary. We're finding industries that you would think were more conservative (i.e. medical, legal, etc.) are probably the most interested because of the large volumes of unstructured data they have to deal with. People wouldn't expect large language models to be used for fact-checking… they're actually very powerful, especially if you can have your own proprietary data or pipelines. Same with security–although large language models introduce a terrifying amount of security problems, they can also be used in reverse to augment security. There's a lovely contradiction with this technology that I do enjoy.” - Stuart Winter-Tear (5:58) “[LLM-powered products] gave me the wow factor, and I think that's part of what's caused the problem. If we focus on technology, we build more technology, but if we focus on business and customers, we're probably going to end up with more business and customers. This is why we end up with so many products that are effectively solutions in search of problems. We're in this rush and [these products] are [based on] FOMO. We're leaving behind what we understood about [building] products—as if [an LLM-powered product] is a special piece of technology. It's not. It's another piece of technology. [Designers] should look at this technology from the prism of the business and from the prism of the problem. We love to solutionize, but is the problem the problem? What's the context of the problem? What's the problem under the problem?
Is this problem worth solving, and is GenAI a desirable way to solve it? We're putting the cart before the horse.” - Stuart Winter-Tear (11:11) “[LLM-powered products] feel most amazing when you're not a domain expert in whatever you're using it for. I'll give you an example: I'm terrible at coding. When I got my hands on Cursor, I felt like a superhero. It was unbelievable what I could build. Although [LLM products] look most amazing in the hands of non-experts, it's actually most powerful in the hands of experts who do understand the domain they're using this technology. Perhaps I want to do a product strategy, so I ask [the product] for some assistance, and it can get me 70% of the way there. [LLM products] are great as a jumping off point… but ultimately [they are] only powerful because I have certain domain expertise.” - Stuart Winter-Tear (13:01) “We're so used to the digital paradigm. The deterministic nature of you put in X, you get out Y; it's the same every time. Probabilistic changes every time. There is a huge difference between what results you might be getting in the lab compared to what happens in the real world. You effectively find yourself building [AI products] live, and in order to do that, you need good communities and good feedback available to you. You need these fast feedback loops. From a pure product management perspective, we used to just have the [engineering] timeline… Now, we have [the data research timeline]. If you're dealing with cutting-edge products, you've got these two timelines that you're trying to put together, and the data research one is very unpredictable. It's the nature of research. We don't necessarily know when we're going to get to where we want to be.” - Stuart Winter-Tear (22:25) “I believe that UX will become the #1 priority for large language model products. I firmly believe whoever wins in UX will win in this large language model product world. I'm against fully autonomous agents without human intervention for knowledge work. We need that human in the loop. What was the intent of the user? How do we get that right push back from the large language model to understand even the level of the person that they're dealing with? These are fundamental UX problems that are going to push UX to the forefront… This is going to be on UX to educate the user, to be able to inject the user in at the right time to be able to make this stuff work. The UX folk who do figure this out are going to create the breakthrough and create the mass adoption.” - Stuart Winter-Tear (33:42)
Episode Summary: Jeremy Snyder is the co-founder and CEO of FireTail, a company that enables organizations to adopt AI safely without sacrificing speed or innovation. In this conversation, Jeremy shares his deep expertise in API and AI security, highlighting the second wave of cloud adoption and his pivotal experiences at AWS during key moments in its growth from startup onwards. Show Notes: In this episode of The Secure Developer, host Danny Allan sits down with Jeremy Snyder, the Co-founder and CEO of FireTail, to unravel the complexities of API security and explore its critical intersection with the burgeoning field of Artificial Intelligence. Jeremy brings a wealth of experience, tracing his journey from early days in computational linguistics and IT infrastructure, through a pivotal period at AWS during its startup phase, to eventually co-founding FireTail to address the escalating challenges in API security driven by modern, decoupled software architectures. The conversation dives deep into the common pitfalls and crucial best practices for securing APIs. Jeremy clearly distinguishes between authentication (verifying identity) and authorization (defining permissions), emphasizing that failures in authorization are a leading cause of API-related data breaches. He sheds light on vulnerabilities like Broken Object-Level Authorization (BOLA), explaining how seemingly innocuous practices like using sequential integer IDs can expose entire datasets if server-side checks are missed. The discussion also touches on the discoverability of backend APIs and the persistent challenges surrounding multi-factor authentication, including the human element in security weaknesses like SIM swapping. Looking at current trends, Jeremy shares insights from FireTail's ongoing research, including their annual "State of API Security" report, which has uncovered novel attack vectors such as attempts to deploy malware via API calls. A significant portion of the discussion focuses on the new frontier of AI security, where APIs serve as the primary conduit for interaction—and potential exploitation. Jeremy details how AI systems and LLM integrations introduce new risks, citing a real-world example of how a vulnerability in an AI's web crawler API could be leveraged for DDoS attacks. He speculates on the future evolution of APIs, suggesting that technologies like GraphQL might become more prevalent to accommodate the non-deterministic and data-hungry nature of AI agents. Despite the evolving threats, Jeremy concludes with an optimistic view, noting that the gap between business adoption of new technologies and security teams' responses is encouragingly shrinking, leading to more proactive and integrated security practices. Links: FireTail, Rapid7, Snyk - The Developer Security Company. Follow Us: Our Website, Our LinkedIn
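To make the BOLA pattern mentioned above concrete, here is a small sketch (not from the episode) of why sequential integer IDs are only dangerous when the server skips its object-level authorization check; the data model and function names are hypothetical.

```python
# Hypothetical in-memory data store: invoice IDs are sequential integers.
INVOICES = {
    1: {"owner": "alice", "amount": 120},
    2: {"owner": "bob", "amount": 3400},
}

def get_invoice_vulnerable(requesting_user: str, invoice_id: int) -> dict:
    # BOLA: authentication happened upstream, but there is no authorization check,
    # so any logged-in user can enumerate IDs 1, 2, 3, ... and read everyone's data.
    return INVOICES[invoice_id]

def get_invoice_safe(requesting_user: str, invoice_id: int) -> dict:
    invoice = INVOICES.get(invoice_id)
    # Server-side object-level authorization: verify the object belongs to the caller.
    if invoice is None or invoice["owner"] != requesting_user:
        raise PermissionError("not found or not yours")
    return invoice

print(get_invoice_vulnerable("alice", 2))  # leaks bob's invoice
print(get_invoice_safe("alice", 1))        # returns alice's own invoice
```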
Guest: Diana Kelley, CSO at Protect AI Topics: Can you explain the concept of "MLSecOps" as an analogy with DevSecOps, with 'Dev' replaced by 'ML'? This has nothing to do with SecOps, right? What are the most critical steps a CISO should prioritize when implementing MLSecOps within their organization? What gets better when you do it? How do we adapt traditional security testing, like vulnerability scanning, SAST, and DAST, to effectively assess the security of machine learning models? Can we? In the context of AI supply chain security, what is the essential role of third-party assessments, particularly regarding data provenance? How can organizations balance the need for security logging in AI systems with the imperative to protect privacy and sensitive data? Do we need to decouple security from safety or privacy? What are the primary security risks associated with overprivileged AI agents, and how can organizations mitigate these risks? Top differences between LLM/chatbot AI security vs AI agent security? Resources: “Airline held liable for its chatbot giving passenger bad advice - what this means for travellers” “ChatGPT Spit Out Sensitive Data When Told to Repeat ‘Poem' Forever” Secure by Design for AI by Protect AI “Securing AI Supply Chain: Like Software, Only Not” OWASP Top 10 for Large Language Model Applications OWASP Top 10 for AI Agents (draft) MITRE ATLAS “Demystifying AI Security: New Paper on Real-World SAIF Applications” (and paper) LinkedIn Course: Security Risks in AI and ML: Categorizing Attacks and Failure Modes
Show Notes: Gideon Yaffe and his then girlfriend-now wife, Sue Chan, drove across the country after graduation to San Francisco, where they had no jobs or prospects. Gideon had applied to graduate school in philosophy but didn't get in anywhere. They got married and his first job was at a pet store, Gideon worked there for a while, then at a computer magazine. Studying Philosophy at Stanford While hanging out in San Francisco, he started reading Proust's Remembrance of Things Past, which he loved and found to be hugely rewarding. This inspired him to apply to grad school again and this time his application was accepted in a lot of places. He decided to study philosophy at Stanford, where he met Michael Bratman, a professor who worked on philosophy of action and related questions about the nature of action, agency, and intention. Gideon also became interested in the history of philosophy and wrote a dissertation about John Locke and contemporary problems related to the Free Will problem. Gideon went on the job market in academia. He got some interviews but didn't get a job. The following year, he got a one-year job at UCSD. His wife, Sue, was working in the film industry, so they moved to LA and he commuted down to San Diego. Tenure at the University of Southern California He finally secured a tenure track job at the University of Southern California (USC) in 1999 and taught Philosophy of Law classes. He wrote a paper about addiction, which he found interesting due to his knowledge of addicts and interest in freedom and addiction and how addiction would undermine freedom. When a friend of Gideon's was a victim of a carjacking incident, he became interested in the legal problem at the center of many carjacking cases that revolves around intention, so he wrote a paper about conditional intention. Gideon explains how carjacking differs from car theft, and the paper questioned whether a conditional intention was enough for the crime when the statute called for unconditional intention. Gideon felt that philosophy of law was important to work on but he needed to know more about the law, so he coerced USC into giving him a year in the law school. Gideon recounts his experience as a law student and how it led to teaching law in law school. Collaborating on a Neuroscience and Legal Proceedings Think Tank The MacArthur Foundation launched the law and neuroscience project, which aimed to bring together various people from philosophy, law, and neuroscience to discuss the relevance of neuroscience to legal proceedings, particularly in criminal law. Gideon was invited to be part of this think tank. During this time, he collaborated with neuroscientists on various problems and experiments related to neuroscience that could be useful to the legal system. This led to a desire to learn more about neuroscience and he pursued a grant to support the project. He spent another year as a neuroscience student at Cal Tech which allowed him to learn a lot about neuroscience. Gideon also started writing a book about attempted crimes. Gideon talks about the importance of understanding the double failure of attempts, and inherent impossible attempts. He highlights the number of cases where individuals seem incapable of committing crimes. Gideon received a job offer from Yale University after being a part of the MacArthur think tank and his book. He decided to take the position and he and his family moved to New Haven. 
Neuroscience, Law, and GenAI Intentions Gideon talks about the challenges faced by those who attempt to escape the harsh realities of the criminal justice system. His experiences highlight the importance of understanding the factors that contribute to attempted crimes. The conversation turns to Gideon's involvement in generative AI and the potential of AI intentions. He is currently working on a project with neuroscientist Uri Maoz, which aims to understand, for example, the difference between self-driving cars and drones in terms of intentions. Another project involves a group funded by billionaire Sergey Brin, who has a daughter with severe autism. The group aims to build AI models of the brain of a person and use the model to see how it responds to various forms of surgery. AI, Consciousness, and Intentions Organizing Behavior Gideon discusses the concept of AI consciousness. Gideon states that a lot depends on how consciousness is defined. One understanding is that consciousness involves self-representation of certain kinds of thoughts. He suggests that understanding consciousness depends on what one thinks about it. One way of understanding consciousness involves self-representation of certain kinds, such as having a second-order thought about the thought. If that's all that's required for consciousness, then these LLMs can be conscious. Another way of understanding consciousness involves qualia, or ways of feeling, such as experiencing a particular sensation or feeling something. However, he acknowledges that it is difficult to know exactly what it is like to be an LLM or a toaster. He acknowledges that there are some similarities between the two, but acknowledges the challenges in determining their exact roles in AI and neuroscience. Gideon explains that intentions serve to organize behavior in various interesting ways. For example, if an AI has intentions, they can make decisions now so they don't have to think about them later. This is relevant for coordinating behavior with each other, as well as interpersonal organization. The question of whether AIs have intentions is more tractable than the question of whether they are conscious. Intentions play a crucial role in various aspects of law, such as contract interpretation and legal texts. Understanding the intentions of AI and their potential impact on these areas is essential for understanding the future of AI and its applications in various fields. Updating Law to Address AI Intention The conversation explores the need to rewrite laws or update them to address the issue of intention in AI. Gideon states that the intention of a person or AI and the textual language, and the interpretation of the text are all areas that need to be explored. The project Gideon is working on aims to determine the intentions of AI by examining the role their representation plays in guiding their behavior. Gideon suggests that the question is whether inferences can be made about AI's intentions by looking at the role the representations plays from the AI behavior. Gideon talks about a project on criminal activity and neuroscience that he is proud of. Leniency and Child Criminality Gideon goes on to talk about leniency and child criminality. He argues that the reason to give a break to kids who engage in criminal behavior is disenfranchisement, not neural immaturity. He talks about the age of maturity, lack of political participation. Gideon's book about kids was written after completing his studies at Yale. 
He also discusses his personal life, including being in a car accident which resulted in a severe brain injury and how he is immensely grateful for his recovery. Influential Harvard Courses and Professors Gideon mentions a core class on the Baroque period by Simon Schama, which he found to be the most influential. The course focused on famous European paintings from the Baroque period, which he found to be a source of inspiration. Other influential courses include a seminar with Hilary Putnam, Rational Action with Robert Nozick. Timestamps: 04:50: Philosophy Studies and Academic Challenges 11:18: Legal Philosophy and Collaborative Research 22:25: Transition to Yale and Continued Research 27:22: Philosophical Reflections on AI and Consciousness 39:36: Personal Reflections and Career Highlights 49:52: Courses and Professors at Harvard 52:27: Current Work and Future Directions 52:41: Personal Life and Family Links: https://law.yale.edu/gideon-yaffe Featured Non-profit: The featured non-profit of this episode of The 92 Report is recommended by Michael Johnson who reports: “Hi. I'm Michael Johnson, class of 1992. The featured nonprofit of this episode of The 92 Report is Son of a Saint. Son of a Saint provides guidance, mentorship and opportunities to young boys in the New Orleans area who did not have a father in the home, usually due to death or incarceration. Founded in 2011 by Sonny Lee, who lost his own father, a defensive back of the saints from a heart attack at the age of 36, Son of a Saint is making a significant impact on the lives of young boys in the New Orleans area. My wife and I have been supporters for many years, as has my firm advantage capital, which recently endowed a scholarship that will cover high school tuition for two boys from the program. Although my circumstances were much different, having lost my own father when I was five years old, I know firsthand how important a male influence can be on a young boy. I luckily had family members and friends who stepped up from me and hope in some small way, my support of Son of a Saint and the work their mentors do can give the boys and their programs similar help. You can learn more about their work at Son of a Saint.org and now here's Will Bachman with this week's episode.” To learn more about their work, visit: www.sonofasaint.org.
Yasiru Ranaraja discusses whether Pax Americana is really on the decline or not, how Belt & Road connectivity is advancing, BRICS, ASEAN, the geostrategic importance of Sri Lanka, the rising importance of India, flashpoints, and more! Watch on BitChute / Brighteon / Rumble / Substack / YouTube Geopolitics & Empire · Yasiru Ranaraja: Growing Belt & Road Connectivity, Sri Lanka, & Flashpoints #552 *Support Geopolitics & Empire! Become a Member https://geopoliticsandempire.substack.com Donate https://geopoliticsandempire.com/donations Consult https://geopoliticsandempire.com/consultation **Visit Our Affiliates & Sponsors! Above Phone https://abovephone.com/?above=geopolitics easyDNS (15% off with GEOPOLITICS) https://easydns.com Escape Technocracy course (15% off with GEOPOLITICS) https://escapethetechnocracy.com/geopolitics PassVult https://passvult.com Sociatates Civis (CitizenHR, CitizenIT, CitizenPL) https://societates-civis.com Wise Wolf Gold https://www.wolfpack.gold/?ref=geopolitics Websites BRISL https://brisl.org X https://x.com/YRanaraja About Yasiru Ranaraja Yasiru Ranaraja is a researcher on maritime affairs and BRI development. He graduated from Dalian Maritime University, and in 2016 was awarded the Chinese Government Scholarship to complete his LLM at Ocean University of China. He is a Consulting Specialist at China International Economic Consultants Co., Ltd (CIECC) and a Founding Director at Belt & Road Initiative Sri Lanka (BRISL). *Podcast intro music is from the song "The Queens Jig" by "Musicke & Mirth" from their album "Music for Two Lyra Viols": http://musicke-mirth.de/en/recordings.html (available on iTunes or Amazon)
At inference, large language models use in-context learning with zero-, one-, or few-shot examples to perform new tasks without weight updates, and can be grounded with Retrieval Augmented Generation (RAG) by embedding documents into vector databases for real-time factual lookup using cosine similarity. LLM agents autonomously plan, act, and use external tools via orchestrated loops with persistent memory, while recent benchmarks like GPQA (STEM reasoning), SWE Bench (agentic coding), and MMMU (multimodal college-level tasks) test performance alongside prompt engineering techniques such as chain-of-thought reasoning, structured few-shot prompts, positive instruction framing, and iterative self-correction. Links Notes and resources at ocdevel.com/mlg/mlg35 Build the future of multi-agent software with AGNTCY Try a walking desk stay healthy & sharp while you learn & code In-Context Learning (ICL) Definition: LLMs can perform tasks by learning from examples provided directly in the prompt without updating their parameters. Types: Zero-shot: Direct query, no examples provided. One-shot: Single example provided. Few-shot: Multiple examples, balancing quantity with context window limitations. Mechanism: ICL works through analogy and Bayesian inference, using examples as semantic priors to activate relevant internal representations. Emergent Properties: ICL is an "inference-time training" approach, leveraging the model's pre-trained knowledge without gradient updates; its effectiveness can be enhanced with diverse, non-redundant examples. Retrieval Augmented Generation (RAG) and Grounding Grounding: Connecting LLMs with external knowledge bases to supplement or update static training data. Motivation: LLMs' training data becomes outdated or lacks proprietary/specialized knowledge. Benefit: Reduces hallucinations and improves factual accuracy by incorporating current or domain-specific information. RAG Workflow: Embedding: Documents are converted into vector embeddings (using sentence transformers or representation models). Storage: Vectors are stored in a vector database (e.g., FAISS, ChromaDB, Qdrant). Retrieval: When a query is made, relevant chunks are extracted based on similarity, possibly with re-ranking or additional query processing. Augmentation: Retrieved chunks are added to the prompt to provide up-to-date context for generation. Generation: The LLM generates responses informed by the augmented context. Advanced RAG: Includes agentic approaches—self-correction, aggregation, or multi-agent contribution to source ingestion, and can integrate external document sources (e.g., web search for real-time info, or custom datasets for private knowledge). LLM Agents Overview: Agents extend LLMs by providing goal-oriented, iterative problem-solving through interaction, memory, planning, and tool usage. Key Components: Reasoning Engine (LLM Core): Interprets goals, states, and makes decisions. Planning Module: Breaks down complex tasks using strategies such as Chain of Thought or ReAct; can incorporate reflection and adjustment. Memory: Short-term via context window; long-term via persistent storage like RAG-integrated databases or special memory systems. Tools and APIs: Agents select and use external functions—file manipulation, browser control, code execution, database queries, or invoking smaller/fine-tuned models. 
Capabilities: Support self-evaluation, correction, and multi-step planning; allow integration with other agents (multi-agent systems); face limitations in memory continuity, adaptivity, and controllability. Current Trends: Research and development are shifting toward these agentic paradigms as LLM core scaling saturates. Multimodal Large Language Models (MLLMs) Definition: Models capable of ingesting and generating across different modalities (text, image, audio, video). Architecture: Modality-Specific Encoders: Convert raw modalities (text, image, audio) into numeric embeddings (e.g., vision transformers for images). Fusion/Alignment Layer: Embeddings from different modalities are projected into a shared space, often via cross-attention or concatenation, allowing the model to jointly reason about their content. Unified Transformer Backbone: Processes fused embeddings to allow cross-modal reasoning and generates outputs in the required format. Recent Advances: Unified architectures (e.g., GPT-4o) use a single model for all modalities rather than switching between separate sub-models. Functionality: Enables actions such as image analysis via text prompts, visual Q&A, and integrated speech recognition/generation. Advanced LLM Architectures and Training Directions Predictive Abstract Representation: Incorporating latent concept prediction alongside token prediction (e.g., via autoencoders). Patch-Level Training: Predicting larger “patches” of tokens to reduce sequence lengths and computation. Concept-Centric Modeling: Moving from next-token prediction to predicting sequences of semantic concepts (e.g., Meta's Large Concept Model). Multi-Token Prediction: Training models to predict multiple future tokens for broader context capture. Evaluation Benchmarks (as of 2025) Key Benchmarks Used for LLM Evaluation: GPQA (Diamond): Graduate-level STEM reasoning. SWE Bench Verified: Real-world software engineering, verifying agentic code abilities. MMMU: Multimodal, college-level cross-disciplinary reasoning. HumanEval: Python coding correctness. HLE (Humanity's Last Exam): Extremely challenging, multimodal knowledge assessment. LiveCodeBench: Coding with contamination-free, up-to-date problems. MLPerf Inference v5.0 Long Context: Throughput/latency for processing long contexts. MultiChallenge Conversational AI: Multiturn dialogue, in-context reasoning. TAUBench/PFCL: Tool utilization in agentic tasks. TruthfulQA: Measures tendency toward factual accuracy/robustness against misinformation. Prompt Engineering: High-Impact Techniques Foundational Approaches: Few-Shot Prompting: Provide pairs of inputs and desired outputs to steer the LLM. Chain of Thought: Instructing the LLM to think step-by-step, either explicitly or through internal self-reprompting, enhances reasoning and output quality. Clarity and Structure: Use clear, detailed, and structured instructions—task definition, context, constraints, output format, use of delimiters or markdown structuring. Affirmative Directives: Phrase instructions positively (“write a concise summary” instead of “don't write a long summary”). Iterative Self-Refinement: Prompt the LLM to review and improve its prior response for better completeness, clarity, and factuality. System Prompt/Role Assignment: Assign a persona or role to the LLM for tailored behavior (e.g., “You are an expert Python programmer”). Guideline: Regularly consult official prompting guides from model developers as model capabilities evolve (a worked prompt-assembly sketch appears after these notes).
Trends and Research Outlook Inference-time compute is increasingly important for pushing the boundaries of LLM task performance. Agentic LLMs and multimodal reasoning represent the primary frontiers for innovation. Prompt engineering and benchmarking remain essential for extracting optimal performance and assessing progress. Models are expected to continue evolving with research into new architectures, memory systems, and integration techniques.
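Drawing on the RAG workflow outlined in the notes above, here is a minimal sketch of the embed, store, retrieve, and augment loop. The documents, query, and bag-of-words embed function are toy stand-ins for illustration only; a real pipeline would use a learned embedding model (e.g., a sentence transformer) and a vector database such as FAISS or Qdrant.

```python
import numpy as np

# Toy stand-in for a sentence-transformer: hashes words into a fixed-size
# bag-of-words vector. A real RAG pipeline would call a learned embedding model.
def embed(text: str, dim: int = 256) -> np.ndarray:
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

# 1. Embedding + storage: documents become vectors in an in-memory "vector DB".
documents = [
    "TornadoVM accelerates Java workloads on GPUs.",  # hypothetical corpus
    "Retrieval Augmented Generation grounds LLMs in external documents.",
    "Mixture of Experts layers route tokens to specialist sub-networks.",
]
doc_vectors = np.stack([embed(d) for d in documents])

# 2. Retrieval: rank chunks by cosine similarity to the query embedding.
query = "How can I ground an LLM with up-to-date documents?"
scores = doc_vectors @ embed(query)   # vectors are unit-normalized, so dot product = cosine
top_k = np.argsort(scores)[::-1][:2]

# 3. Augmentation: retrieved chunks are prepended to the prompt for generation.
context = "\n".join(documents[i] for i in top_k)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

Because the vectors are unit-normalized, the dot product equals cosine similarity, which is the ranking most vector stores use by default; re-ranking or query rewriting would slot in between steps 2 and 3.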
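As a concrete illustration of the prompt engineering techniques listed above (role assignment, structured few-shot examples, chain-of-thought, explicit output format), here is a small sketch that assembles such a prompt. The worked examples and the call_llm placeholder are hypothetical, not taken from the episode.

```python
# Hypothetical few-shot examples pairing inputs with worked, step-by-step answers.
FEW_SHOT = [
    ("Is 17 prime?",
     "Check divisors up to sqrt(17) ~ 4.1: 2, 3, 4 do not divide 17. Answer: yes."),
    ("Is 21 prime?",
     "Check divisors up to sqrt(21) ~ 4.6: 3 divides 21 (3 x 7). Answer: no."),
]

def build_prompt(question: str) -> str:
    parts = [
        # System prompt / role assignment with a positively phrased instruction.
        "You are a careful math tutor. Think step by step, then give a one-line answer.",
        "### Examples",
    ]
    for q, a in FEW_SHOT:
        parts += [f"Q: {q}", f"A: {a}"]
    # Delimiters separate the examples from the task; the output format is stated explicitly.
    parts += ["### Task", f"Q: {question}", "A:"]
    return "\n".join(parts)

print(build_prompt("Is 51 prime?"))
# A real pipeline would send this string to the model, e.g. call_llm(build_prompt(...)).
```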
Few understand both the promise and limitations of artificial general intelligence better than Jack Clark, co-founder of Anthropic. With a background in journalism and the humanities that sets him apart in Silicon Valley, Clark offers a refreshingly sober assessment of AI's economic impact—predicting growth of 3-5% rather than the 20-30% touted by techno-optimists—based on his firsthand experience of repeatedly underestimating AI progress while still recognizing the physical world's resistance to digital transformation. In this conversation, Jack and Tyler explore which parts of the economy AGI will affect last, where AI will encounter the strongest legal obstacles, the prospect of AI teddy bears, what AI means for the economics of journalism, how competitive the LLM sector will become, why he's relatively bearish on AI-fueled economic growth, how AI will change American cities, what we'll do with abundant compute, how the law should handle autonomous AI agents, whether we're entering the age of manager nerds, AI consciousness, when we'll be able to speak directly to dolphins, AI and national sovereignty, how the UK and Singapore might position themselves as AI hubs, what Clark hopes to learn next, and much more. Read a full transcript enhanced with helpful links, or watch the full video. Recorded March 28th, 2025. Help keep the show ad free by donating today! Other ways to connect Follow us on X and Instagram Follow Tyler on X Follow Jack on X Sign up for our newsletter Join our Discord Email us: cowenconvos@mercatus.gmu.edu Learn more about Conversations with Tyler and other Mercatus Center podcasts here.
Explains advancements in large language models (LLMs). Covers scaling laws - the relationships among model size, data size, and compute - and how emergent abilities such as in-context learning, multi-step reasoning, and instruction following arise once certain scaling thresholds are crossed. Traces the evolution of the transformer architecture with Mixture of Experts (MoE), describes the three-phase training process culminating in Reinforcement Learning from Human Feedback (RLHF) for model alignment, and explores advanced reasoning techniques such as chain-of-thought prompting, which significantly improve complex task performance. Links Notes and resources at ocdevel.com/mlg/mlg34 Build the future of multi-agent software with AGNTCY Try a walking desk - stay healthy & sharp while you learn & code Transformer Foundations and Scaling Laws Transformers: Introduced by the 2017 "Attention is All You Need" paper, transformers allow for parallel training and inference of sequences using self-attention, in contrast to the sequential nature of RNNs. Scaling Laws: Empirical research revealed that LLM performance improves predictably as model size (parameters), data size (training tokens), and compute are increased together, with diminishing returns if only one variable is scaled disproportionately. The "Chinchilla scaling law" (DeepMind, 2022) established the optimal model/data/compute ratio for efficient model performance: earlier large models like GPT-3 were undertrained relative to their size, whereas right-sized models with more training data (e.g., Chinchilla, LLaMA series) proved more compute and inference efficient. Emergent Abilities in LLMs Emergence: When trained beyond a certain scale, LLMs display abilities not present in smaller models, including: In-Context Learning (ICL): Performing new tasks based solely on prompt examples at inference time. Instruction Following: Executing natural language tasks not seen during training. Multi-Step Reasoning & Chain of Thought (CoT): Solving arithmetic, logic, or symbolic reasoning by generating intermediate reasoning steps. Discontinuity & Debate: These abilities appear abruptly in larger models, though recent research suggests that this could result from non-linearities in evaluation metrics rather than innate model properties. Architectural Evolutions: Mixture of Experts (MoE) MoE Layers: Modern LLMs often replace standard feed-forward layers with MoE structures. Composed of many independent "expert" networks specializing in different subdomains or latent structures. A gating network routes tokens to the most relevant experts per input, activating only a subset of parameters—this is called "sparse activation." Enables much larger overall models without proportional increases in compute per inference, but requires the entire model in memory and introduces new challenges like load balancing and communication overhead. Specialization & Efficiency: Experts learn different data/knowledge types, boosting model specialization and throughput, though care is needed to avoid overfitting and underutilization of specialists (a toy routing sketch appears after these notes). The Three-Phase Training Process 1. Unsupervised Pre-Training: Next-token prediction on massive datasets—builds a foundation model capturing general language patterns. 2. Supervised Fine Tuning (SFT): Training on labeled prompt-response pairs to teach the model how to perform specific tasks (e.g., question answering, summarization, code generation). Overfitting and "catastrophic forgetting" are risks if not carefully managed. 3.
Reinforcement Learning from Human Feedback (RLHF): Collects human preference data by generating multiple responses to prompts and then having annotators rank them. Builds a reward model from these rankings, then uses it (typically via PPO) to update the LLM to maximize alignment with human preferences (helpfulness, harmlessness, truthfulness). Introduces complexity and risk of reward hacking (specification gaming), where the model may exploit the reward system in unanticipated ways (a minimal preference-loss sketch appears after these notes). Advanced Reasoning Techniques Prompt Engineering: The art/science of crafting prompts that elicit better model responses, shown to dramatically affect model output quality. Chain of Thought (CoT) Prompting: Guides models to elaborate step-by-step reasoning before arriving at final answers—demonstrably improves results on complex tasks. Variants include zero-shot CoT ("let's think step by step"), few-shot CoT with worked examples, self-consistency (voting among multiple reasoning chains), and Tree of Thought (explores multiple reasoning branches in parallel). Automated Reasoning Optimization: Frontier models selectively apply these advanced reasoning techniques, balancing compute costs with gains in accuracy and transparency. Optimization for Training and Inference Tradeoffs: The optimal balance between model size, data, and compute is determined not only for pretraining but also for inference efficiency, as lifetime inference costs may exceed initial training costs. Current Trends: Efficient scaling, model specialization (MoE), careful fine-tuning, RLHF alignment, and automated reasoning techniques define state-of-the-art LLM development.
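To make the MoE routing idea above concrete, the toy NumPy sketch below shows a gating network that activates only the top-k experts per token (sparse activation). The layer sizes, random weights, and two-expert choice are illustrative assumptions, not the configuration of any real model.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2          # illustrative sizes

# Each "expert" is a small feed-forward layer; the gate scores experts per token.
expert_weights = rng.normal(size=(n_experts, d_model, d_model))
gate_weights = rng.normal(size=(d_model, n_experts))

def moe_layer(tokens: np.ndarray) -> np.ndarray:
    """Sparse MoE forward pass: route each token to its top-k experts."""
    logits = tokens @ gate_weights                      # (n_tokens, n_experts)
    out = np.zeros_like(tokens)
    for i, tok in enumerate(tokens):
        chosen = np.argsort(logits[i])[::-1][:top_k]    # sparse activation: pick top-k experts
        probs = np.exp(logits[i, chosen])
        probs /= probs.sum()                            # softmax over the chosen experts
        for w, e in zip(probs, chosen):
            out[i] += w * (expert_weights[e] @ tok)     # weighted sum of expert outputs
    return out

tokens = rng.normal(size=(3, d_model))                  # three toy token embeddings
print(moe_layer(tokens).shape)                          # (3, 8): same shape, sparse compute
```

Only top_k of the n_experts weight matrices are touched per token, which is why total parameter count can grow without a proportional increase in per-token compute.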
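For the reward-modeling step of RLHF, the sketch below computes the standard pairwise preference loss, -log sigmoid(r_chosen - r_rejected), over a toy linear reward model. In practice the reward model is itself an LLM with a scalar head, and the trained reward signal is then used by PPO to update the policy; the feature vectors and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 16

# Toy stand-in: the reward model is a linear scorer over response features.
reward_weights = rng.normal(size=dim)

def reward(features: np.ndarray) -> float:
    return float(reward_weights @ features)

def preference_loss(chosen: np.ndarray, rejected: np.ndarray) -> float:
    """Pairwise loss: -log sigmoid(r_chosen - r_rejected), small when the ranking is respected."""
    margin = reward(chosen) - reward(rejected)
    return float(np.log1p(np.exp(-margin)))   # numerically stable -log(sigmoid(margin))

# Hypothetical feature vectors for an annotator-ranked response pair.
chosen_features = rng.normal(size=dim)
rejected_features = rng.normal(size=dim)
print(preference_loss(chosen_features, rejected_features))
```

Minimizing this loss over many ranked pairs teaches the reward model to score preferred responses higher, which is the signal the subsequent PPO phase optimizes against.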