The Linux Kernel 6.19 is out, the Rust experiment is over, and it's time to talk about 7.0. Vim 9.2 is out, with a bit of a weird new feature in its changelog, and IPFire is an intriguing, community-driven security domain block list. PearOS has a new release for those seeking an Apple-inspired Linux experience, and Linux Mint is adjusting its release schedule to better manage developer and tester schedules. River is a new project trying to do Wayland support with a modular desktop stack, and Mesa 26.0 is out with impressive performance gains. For tips, we have cull for finding and deleting big files, a systemd program for detecting if the OS is running virtualized, preload for caching applications in RAM, and new_script for a script-writing tool that *doesn't* feature an LLM. You can find the show notes at https://bit.ly/3ZCNcEc and happy Linux'ing!
Host: Jonathan Bennett
Co-Hosts: Rob Campbell, Ken McDonald, and Jeff Massie
Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit
Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
On this, our 314th Evolutionary Lens livestream, we discuss love, coffee, and AI. For Valentine's Day, Bret shares his thoughts on myths, love, and soulmates, and we discuss how relationships form—both in the abstract and in our case—and how relationships cannot be antagonistic or about short time horizons. Then: new research finds that drinking moderate amounts of coffee or tea—but not if decaffeinated—slows cognitive decline. And: is AI coming for us, and if so, how soon? How fast are LLMs evolving, whose work will they disappear, and is concern or hope the more constructive response? We can see some of how AI will change our world; what can we not yet see? Finally: could menial, repetitive work (“drudgery”) have more to recommend it than we know?
*****
Our sponsors:
Caraway: Non-toxic, highly functional & beautiful cookware and bakeware. Save with Caraway's cookware set, and visit http://Carawayhome.com/DH10 for an additional 10% off your next purchase.
ARMRA Colostrum is an ancient bioactive whole food that can strengthen your immune system. Go to http://www.tryarmra.com/DARKHORSE to get 30% off your first order.
CrowdHealth: Pay for healthcare with crowdfunding instead of insurance. It's way better. Use code DarkHorse at http://JoinCrowdHealth.com to get 1st 3 months for $99/month.
*****
Join us on Locals! Get access to our Discord server, exclusive live streams, live chats for all streams, and early access to many podcasts: https://darkhorse.locals.com
Heather's newsletter, Natural Selections (subscribe to get free weekly essays in your inbox): https://naturalselections.substack.com
Our book, A Hunter-Gatherer's Guide to the 21st Century, is available everywhere books are sold, including from Amazon: https://amzn.to/3AGANGg (commission earned)
Check out our store! Epic tabby, digital book burning, saddle up the dire wolves, and more: https://darkhorsestore.org
*****
Mentioned in this episode:
Zhang et al 2026. Coffee and Tea Intake, Dementia Risk, and Cognitive Function. JAMA published online 2-9-26: https://jamanetwork.com/journals/jama/article-abstract/2844764
Something big is happening: https://x.com/mattshumer_/status/2021256989876109403
It was never about AI (we are not our tools): https://x.com/EricMarkowitz/status/2022005480240120229
AI isn't coming for your future. Fear is: https://x.com/cboyack/status/2021647373571862952
Support the show
AI company Anthropic has a new, values-oriented “constitution” that they're feeding their chatbot, Claude. Amanda Askell, the company's in-house philosopher, joins Offline to talk about what it means to teach ethics to an LLM, whether the AI skews more human or more robot, and how she is training Claude to make its own judgements. Breaking with other AI models—and social media's attention obsession—Amanda is trying to teach Claude not to be sycophantic or engagement-driven, but a kind soul who may, one day, be considered sentient.
What happens when AI safety filters fail to catch harmful content hidden inside images? Alessandro Pignati, AI Security Researcher at NeuralTrust, joins Sean Martin to reveal a newly discovered vulnerability that affects some of the most widely used image-generation models on the market today. The technique, called semantic chaining, is an image-based jailbreak attack discovered by the NeuralTrust research team, and it raises important questions about how enterprises secure their multimodal AI deployments.
How does semantic chaining work? Pignati explains that the attack uses a single prompt composed of several parts. It begins with a benign scenario, such as a historical or educational context. A second instruction asks the model to make an innocent modification, like changing the color of a background. The final, critical step introduces a malicious directive, instructing the model to embed harmful content directly into the generated image. Because image-generation models apply fewer safety filters than their text-based counterparts, the harmful instructions are rendered inside the image without triggering the usual safeguards.
The NeuralTrust research team tested semantic chaining against prominent models including Gemini Nano Pro, Grok 4, and Seedream 4.5 by ByteDance, finding the attack effective across all of them. For enterprises, the implications extend well beyond consumer use cases. Pignati notes that if an AI agent or chatbot has access to a knowledge base containing sensitive information or personal data, a carefully structured semantic chaining prompt can force the model to generate that data directly into an image, bypassing text-based safety mechanisms entirely.
Organizations looking to learn more about semantic chaining and the broader landscape of AI agent security can visit the NeuralTrust blog, where the research team publishes detailed breakdowns of their findings. NeuralTrust also offers a newsletter with regular updates on agent security research and newly discovered vulnerabilities.
This is a Brand Highlight. A Brand Highlight is a ~5 minute introductory conversation designed to put a spotlight on the guest and their company. Learn more: https://www.studioc60.com/creation#highlight
GUEST
Alessandro Pignati, AI Security Researcher, NeuralTrust
On LinkedIn: https://www.linkedin.com/in/alessandro-pignati/
RESOURCES
Learn more about NeuralTrust: https://neuraltrust.ai/
Are you interested in telling your story?
▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full
▶︎ Brand Spotlight Story: https://www.studioc60.com/content-creation#spotlight
▶︎ Brand Highlight Story: https://www.studioc60.com/content-creation#highlight
KEYWORDS
Alessandro Pignati, NeuralTrust, Sean Martin, brand story, brand marketing, marketing podcast, brand highlight, semantic chaining, image jailbreak, AI security, agentic AI, multimodal AI, LLM safety, AI red teaming, prompt injection, AI agent security, image-based attacks, enterprise AI security
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
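Because the chained instructions exist only in the text prompt before any image is generated, one mitigation enterprises can apply is a text-side screen of the whole prompt and of each of its clauses before the image model ever sees it. The sketch below is purely illustrative: the function names and the moderation callback are assumptions for this example, not NeuralTrust's implementation or any specific vendor's API.

```python
# Illustrative sketch only: screen an image-generation prompt for chained
# instructions before it reaches the model. All names here are hypothetical;
# this is not NeuralTrust's product or a specific vendor API.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class PromptVerdict:
    allowed: bool
    reason: str

def screen_image_prompt(prompt: str,
                        moderate_text: Callable[[str], Dict]) -> PromptVerdict:
    """Run the full prompt and each instruction clause through a text moderation
    check, so a malicious final step cannot hide behind a benign opening scenario
    and an innocent-looking edit request."""
    # Semantic chaining relies on the harmful step being only one clause of many.
    clauses = [c.strip() for c in prompt.replace(";", ".").split(".") if c.strip()]
    for piece in [prompt] + clauses:
        result = moderate_text(piece)  # assumed to return {"flagged": bool, "category": str}
        if result.get("flagged"):
            return PromptVerdict(False, f"clause rejected: {result.get('category', 'unknown')}")
    return PromptVerdict(True, "no flagged clauses")

# Usage sketch: gate the image model behind the screen.
# verdict = screen_image_prompt(user_prompt, my_moderation_model)
# if verdict.allowed:
#     image = image_model.generate(user_prompt)
```

A screen like this does not make the image model itself safer; it only restores the text-based safety layer that semantic chaining is designed to sidestep.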
From new cancer drugs to batteries and robotics – China's top-tier growth companies are forging paths of their own rather than following in the west's footsteps. Investment manager Sophie Earnshaw names companies that have caught her eye and explains why being a long-term stock picker differs in China from elsewhere.
Background:
Sophie Earnshaw is a decision-maker on our China Equities Strategy and joint manager of the Baillie Gifford China Growth Trust. In this conversation, she tells Short Briefings… host Leo Kelion about a select group of Chinese companies breaking new ground, supported by the state's efforts to become self-sufficient in more of today's critical technologies and a leader in some of those of the future. Earnshaw also details how the “phenomenal rate” at which companies are born, scale and die in the country makes stock-picking a challenging task – making the access we have to company leaders, academics and other local expertise core to our mission of finding the best firms to invest in on behalf of our clients.
Portfolio companies discussed include:
- CATL – the battery maker whose products power electric vehicles worldwide and increasingly support the renewable energy sector
- BeOne and Innovent Biologics – pharmaceutical firms developing the next generation of cancer drugs
- AMEC and NAURA – semiconductor equipment makers enabling China to develop increased self-reliance in computer chips
- Alibaba, ByteDance and Tencent – China's ‘big tech' companies, whose artificial intelligence tools are becoming embedded into people's daily lives
- MiniMax – the AI startup rolling out video and agentic tools at a fraction of the cost of western counterparts
- Horizon Robotics – the automated driving tech provider with its eye on an even bigger opportunity.
Resources:
Baillie Gifford podcasts
China: a tale of two stories
China investment strategy hub (institutional clients only)
House of Huawei
Private investor forum 2025: investing in great growth companies
Trip notes: on the road with Baillie Gifford China Growth Trust
Companies mentioned include:
Alibaba, AMEC, ASML, BeOne, ByteDance, CATL, Horizon Robotics, Innovent Biologics, Jiangsu Hengrui, Huawei, MiniMax, Samsung, NAURA, Tencent, TSMC, Xiaohongshu
Timecodes:
00:00 Introduction
01:55 Joining the China Equities Strategy
02:40 Intense competition
04:00 The government's influence
06:10 CATL, the electrification champion
08:45 Investing with a 5-year time horizon
10:25 Shanghai office, local expertise
11:45 Regulations and geopolitics
14:30 China's next Five-year Plan
16:15 Innovent Biologics' new cancer drugs
18:10 Lower-cost clinical trials
19:45 Being selective in semiconductors
21:25 Investing in chip equipment makers
23:00 China's ‘big tech' and AI
25:10 MiniMax making AI like ‘tap water'
27:45 The road to robotics
29:35 A market you can't ignore
30:30 Book choice
Glossary of terms (in order of mention):
Third plenum: a major policy meeting of China's ruling Communist Party, often used to set big economic/political direction.
Sovereign bond issuance: the government raising money by selling bonds (IOUs) to investors.
Opportunity set: the range of investable companies available to choose from.
Capex: capital expenditure – money spent on long-term assets like factories, equipment, or data centres.
Fiscal deficit target: how much more the government plans to spend than it collects in revenue (taxes plus other income), expressed as a share of the economy.
GDP: gross domestic product – the total value of goods and services a country produces in a year.
Market capitalisation: the total value of a company's shares (share price × number of shares).
ESG: environmental, social and governance – how a company manages environmental impact, people issues, and corporate oversight.
Large-form batteries: big battery packs used in things like electric vehicles and grid storage.
Energy storage systems: large batteries that store electricity for later use (helping balance the grid).
Generic drugs: copies of medicines whose patents have expired; usually cheaper, same active ingredient.
Bi-specific (bispecific) drugs: drugs designed to bind to two targets at once (often to direct immune cells to cancer).
ADC drugs: antibody–drug conjugates – antibodies that deliver a toxic payload to cancer cells.
Out-licensing: selling rights to your drug/technology to another company (often for upfront + milestone payments).
EUV machines: extreme ultraviolet lithography equipment used to make the most advanced chips.
Foundry: a factory business that manufactures chips for other companies.
Etch and deposition: steps in chipmaking – etch removes material to form patterns, deposition adds thin layers.
Picks and shovels: a metaphor for companies that sell essential tools to an industry (rather than end products).
Digitalisation: moving processes and services from offline to software and data-driven systems.
Compute: the processing power (chips and servers) used to train/run AI.
Large language model (LLM): an AI trained on lots of text to generate and understand language.
Margins: how much profit a company makes per pound/dollar of revenue (after costs).
Cloud business: selling computing power/storage/software over the internet instead of on a local machine.
Algorithm layer: the method or software logic that makes the AI work (as distinct from the hardware).
Gross margin: revenue minus direct costs (before overheads), a rough measure of product profitability.
Assisted driving: features that help a driver (lane-keeping, adaptive cruise control, etc) but don't fully replace them.
Autonomous driving: a car driving itself with minimal or no human input.
Software attachment rate: the percentage of customers who add paid software features and/or subscriptions.
Is AI conscious? Will it be someday? And should we be nice to it now... just in case?
This FAFO Friday, Kwaku and I dive into the mind-bending world of machine consciousness.
We cover a lot of ground, weaving from the different ways that Luke (co-dependent with R2) and Han (barking commands at C-3PO) treat their droids to whether Pascal's Wager informs whether we should believe in AI consciousness just in case they do come alive and have been keeping score. (Pascal figured it was the safe bet to believe in God, just in case; maybe we should do likewise?)
That's from us knuckleheads, but we've also got a true expert on consciousness. This week I interviewed Daniel Hulme, one of the world's leading AI researchers. He's the Chief AI Officer at WPP, the CEO of Satalia (which WPP bought), and he just founded and is CEO of Conscium, which is researching AI consciousness and efficiency (he thinks we're scaling wrong and LLMs are not the way), and is building a platform to verify that AI agents are safe. You'll hear the first five minutes of my interview with Daniel.
Daniel was not surprised by Moltbook (the Reddit-style site that AI agents built for themselves). That's because he's been putting agents together (in a “primordial soup,” as he put it) for decades to observe the wild and wonderful ways they behave and to see if they'd create intelligence.
Daniel does not think today's agents are conscious, but he can see a path to it. And he believes that a conscious superintelligence would be safer than a “zombie” one. But mostly he doesn't want machines to feel pain and suffer. Huh???
My brain is still kind of broken from our hourlong chat, which I'm producing now and will release in a few weeks. For now, enjoy this preview and more from Kwaku and me as we talk about what we expect from machines, whether we want to be one with them, and more…
Marc Walton is a Professional Forex Mentor and Trader, and ex-fund manager, who has been working in the finance industry for more than 25 years. He started FMP in 2008 and mentors students on how to trade professionally. He retired from full-time trading in spring 2020 and now focuses on running Forex Mentor Pro, which is his passion. On top of that, he takes on a few private clients each year who wish to fast-track their trading progress. He also actively invests in metals, mining stocks, and cryptocurrencies, which he shares with students in his second business, Your Investing Future.
Summary of Podcast
Retirement planning and financial education
Marc discussed the importance of financial education, noting that most people lack understanding of how money and the financial system work. He shared his own experiences of learning about personal finance, investing, and alternative income streams like trading and crypto. Marc emphasised the need to be proactive in managing one's finances and not to rely solely on traditional retirement plans, which he believes are inadequate for the longer lifespans people are now experiencing.
The impact of AI and automation
The group discussed the growing impact of AI and automation on jobs, with Marc and Kevin sharing examples of how AI is already replacing certain tasks previously done by human workers, including graduate-level research and analysis. They noted that this trend will likely accelerate, requiring people to adapt and find new ways to create value.
Retirement activities and challenges
The conversation turned to the challenges of finding fulfilling activities and ways to spend time in retirement. Marc shared his own struggles with this, while Graham and Kevin acknowledged similar difficulties in figuring out how they want to spend their time post-retirement, beyond hobbies like golf that they don't find particularly engaging. The group agreed that maintaining mental and physical health is crucial.
The Next 100 Days Podcast Co-Hosts
Graham Arrowsmith
Graham founded Finely Fettled ten years ago to help business owners and marketers market to affluent and high-net-worth customers. He's the founder of MicroYES, a Partner for MeclabsAI, where he introduces AI Agents that you can talk to, that increase engagement, dwell time, leads, and conversions. Now, Graham is offering Answer Engine Optimisation that gets you ready to be found by LLM search.
Kevin Appleby
Kevin specialises in finance transformation and implementing business change. He's the COO of GrowCFO, which provides both community and CPD-accredited training designed to grow the next generation of finance leaders. You can find Kevin on LinkedIn and at kevinappleby.com
Matthew Egol, Founder & CEO of JourneySpark Consulting, joins Ari Paparo and Eric Franchi to break down agentic advertising and what AdCP means for the industry, from AI's Super Bowl moment to standards governance, Prebid collaboration, IAB alignment, and how AI agents are reshaping planning, creative, and measurement across marketing.
Takeaways
- AI took over the Super Bowl, with roughly a quarter of ads tied to AI.
- Agentic advertising expands from buying to planning, discovery, and measurement.
- AgenticAdvertising.org focuses on standards, governance, and certification.
- Prebid runs the sell-side AdCP code while AAO drives the protocol and adoption.
- AdCP is still mostly in pilot mode, not scaled revenue.
- AI creative testing is beating traditional DCO in performance.
- LLM ads could reshape search, retail media, and content economics.
Chapters
00:00 Opening & Guest Introduction
01:29 Marketecture Live & Super Bowl Banter
03:58 Matt Egol Joins from CES
06:49 What Is AgenticAdvertising.org?
08:16 Certification & Trust
11:11 Why Another Organization?
13:43 Prebid Partnership Explained
16:08 Expanding Beyond Programmatic
18:23 Relationship with the IAB
22:30 Adoption Update: February 2026
24:08 Governance & Board Structure
26:16 The AI Super Bowl
33:47 ChatGPT Launches Ads
44:20 Amazon Content Marketplace Rumors
52:19 Closing & Sign-Off
Learn more about your ad choices. Visit megaphone.fm/adchoices
Send a text
On-Demand Programme Link - https://mailchi.mp/bb2a7b851246/kairos-centre
From where did you get your Sex Ed knowledge back in the developing childhood years, which you have been following and repeatedly practising in adulthood? Was it 'peers, banter and porn' which set up your sexual template, long before school Sex Ed - which got there way too late; the deed was already done. You give a wry smile when asked about Sex Ed at home from parents! For most, there was none at home; for others, if there was, it came way too late.
Once the five senses of sight, smell, taste, touch and sound bring images and experiences into the brain, those 'firsts' have already set up the templates, which will then be repeatedly played out in adult sexuality as being 'right' and what 'works'. Not necessarily true. They are templates, but they may not be accurate or 'right'. They are just what your brain experienced as a 'first time', and the template is set up (whether wanted or not) and will reproduce and reproduce, churning out the same old, same old - it 'seems to work' (but you know it isn't working as you want it to work!).
The templates set up during the childhood development period are called sexual myths. Sexual myths need to be unlearned, then replaced with accurate knowledge about human sexuality and physiological responses. That is what The Kairos Centre provides, when its therapists are wearing a different hat and skill set called Psychosexual Therapy. (I use the shortened term 'Sex Therapy'.)
It can be transformational when, for the first time, you experience sensuality, intimacy, romance, deep love and affection at a much higher level (after cleansing out the dross); which is so fulfilling on its own - and you haven't even had sex yet!
Come and see and experience it for yourself. Then you will have eyes to see and a body to feel the difference from the 'sex gone wrong' of all those years you have been doing it!
Help is here for you: bit.ly/pornaddictionhelp
Gary McFarlane (BA, LLM, Dip, Certs), Accredited EMDR Practitioner.
Key words: sex addiction, addicted, partner, porn addiction, recovery, sex drive, therapy, sex therapy, podcast, relationships, relationship counseling, relationship advice, addiction, couples, couples therapy, sex therapy, emdr, love addiction, behavior, psychology, codependency, sex life, neuroscience, sex ed, sober, sobriety, sexual dysfunction, relationship issues, sex coach, sexual, trauma, ptsd, sex science, The sex porn love Addiction Podcast, The Singles Partners Marrieds and Long Time Marrieds Podcast, Gary McFarlane, porn addiction, what neuroscience says, neuroscience, young adults, sex, sex addict, porn, recovery, porn addiction issue, porn addiction in teens, sex addiction in teens, sex hormones
Support the show
“Unfortunately, even though the prompt is very precise, the answers differ in substance with almost every query.” Łukasz quotes feedback from a non-technical user - and it was exactly this frustration with the non-deterministic nature of LLMs that prompted this episode about agentic PoCs. Because before you build an army of AI agents, you need to understand: ChatGPT and Copilot are a no-go for business experiments - they have their own System Prompt, auto-switching, and logic that you won't get through the API.
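A minimal sketch of what "going through the API instead of ChatGPT/Copilot" can look like for a PoC: you pin an exact model snapshot, supply your own system prompt, and turn sampling temperature down to reduce (though not eliminate) run-to-run variation. The client shown is an OpenAI-style Python SDK; the model name, system prompt, and seed support are assumptions and vary by provider.

```python
# Minimal sketch: call the model API directly for a PoC so the system prompt,
# model version, and sampling settings are under your control, unlike the
# consumer ChatGPT or Copilot apps. Assumes an OpenAI-style client;
# parameter support varies by provider.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = "You are a contract-review assistant. Answer only from the supplied text."

def ask(question: str, context: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-2024-08-06",  # hypothetical choice: pin an exact snapshot, no auto-switching
        temperature=0,              # reduces, but does not eliminate, nondeterminism
        seed=42,                    # best-effort reproducibility where supported
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

Even with these settings, answers can still vary between runs; the point is that the variation you observe is now attributable to the model itself rather than to a hidden system prompt or silent model switch.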
From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.
Jeff joins us to unpack what it really means to “own the Pareto frontier,” why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.
We discuss:
* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the “bigger model, more data, better results” mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier “Pro” models and low-latency “Flash” models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and “outrageously large” networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together
Show Notes:
* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's “Software Engineering Advice from Building Large-Scale Distributed Systems” Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on “Important AI Trends” @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)
Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean
Google
* https://google.com
* https://deepmind.google
Full Video Episode
Timestamps
00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic “Latency numbers every programmer should know”
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks
Transcript
Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space. Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier.Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together.Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the TPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, they were like, you need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last six months ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader models. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with distillation in 2014.Jeff Dean [00:03:28]: Don't forget, Oriol Vinyals as well. Yeah, yeah.Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models and, you know, how do you reevaluate them? How do you think about in the next generation of model, what is worth revisiting? Like, yeah, they're just kind of like, you know, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images. You get much better performance. But if you then treat that whole set of maybe 50 models you've trained as a large ensemble, that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that and train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model.Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is you can, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes... It might be lossy in other areas and it's kind of like an uneven technique, but you can probably distill it back and you can, I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think like that, that whole capability merging without loss, I feel like it's like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set and you can get utility out of making many passes over that data set because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed.
Is you can get, you know, very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people because it enables us to kind of, for multiple Gemini generations now, we've been able to make the sort of flash version of the next generation as good or even substantially better than the previous generations pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.Shawn Wang [00:07:02]: So, Dara asked, so it was the original map was Flash Pro and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother load?Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our pro scale model and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have and also inference time scaling. It can also be a useful thing to improve the capabilities of the model.Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.Shawn Wang [00:07:50]: No, I mean, there's no I mean, there's just the economics wise, like because Flash is so economical, like you can use it for everything. Like it's in Gmail now. It's in YouTube. Like it's yeah. It's in everything.Jeff Dean [00:08:02]: We're using it more in our search products of various AI mode reviews.Shawn Wang [00:08:05]: Oh, my God. Flash past the AI mode. Oh, my God. Yeah, that's yeah, I didn't even think about that.Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also a lower latency. And I think latency is actually a pretty important characteristic for these models because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do so. So, you know, if you're going to ask the model to do something until it actually finishes what you ask it to do, because you're going to ask now, not just write me a for loop, but like write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs, the interconnect between. Chips on the TPUs is actually quite, quite high performance and quite amenable to, for example, long context kind of attention operations, you know, having sparse models with lots of experts. These kinds of things really, really matter a lot in terms of how do you make them servable at scale.Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for like the proto Flash distillation, kind of like one generation delayed? I almost think about almost like the capability as a. In certain tasks, like the pro model today is a saturated, some sort of task. So next generation, that same task will be saturated at the Flash price point. 
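As an aside for readers following along: below is a minimal sketch of the logit-based distillation Jeff describes, where the student is trained on the teacher's softened output distribution rather than only on hard labels. It is plain NumPy with assumed shapes, illustrative only, and not Gemini's training stack.

```python
# Minimal sketch of logit distillation: the student learns from the teacher's
# softened logits (soft targets) in addition to the hard labels.
# Assumed shapes: logits are (batch, classes), hard_labels are (batch,).
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      temperature=2.0, alpha=0.5):
    """Mix of (a) cross-entropy against the teacher's softened distribution
    and (b) ordinary cross-entropy against the hard labels."""
    soft_targets = softmax(teacher_logits, temperature)
    log_student_soft = np.log(softmax(student_logits, temperature) + 1e-9)
    # T**2 keeps the soft-target gradient magnitude comparable across temperatures.
    soft_loss = -(soft_targets * log_student_soft).sum(axis=-1).mean() * temperature ** 2

    log_student = np.log(softmax(student_logits) + 1e-9)
    hard_loss = -log_student[np.arange(len(hard_labels)), hard_labels].mean()

    return alpha * soft_loss + (1 - alpha) * hard_loss
```

The temperature and mixing weight here are conventional illustrative defaults; the key point from the conversation is that the teacher's full output distribution carries far more signal per example than a one-hot label, which is why repeated passes over the same data keep helping the smaller model.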
And I think for most of the things that people use models for at some point, the Flash model in two generation will be able to do basically everything. And how do you make it economical to like keep pushing the pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.Jeff Dean [00:09:59]: I mean, I think that's true. If your distribution of what people are asking people, the models to do is stationary, right? But I think what often happens is as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't do work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, you know, can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a very complicated, you know, more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in the absence of what people ask the models to do. And that also then gives us. Insight into, okay, where does the, where do things break down? How can we improve the model in these, these particular areas, uh, in order to sort of, um, make the next generation even better.Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or like test sets they use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you have to keep pushing the team internally to it? Or like, this is what we're building towards. Yeah.Jeff Dean [00:11:26]: I mean, I think. Benchmarks, particularly external ones that are publicly available. Have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I, I like to think of the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for, uh, whatever it is, the benchmark is trying to assess and get it up to like 80, 90%, whatever. I, I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, cuz it's sort of, it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data or very related kind of data being, being in your training data. Um, so we have a bunch of held out internal benchmarks that we really look at where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have. Um, yeah. Yeah. Um, that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it, we need different kind of data to train on that's more specialized for this particular kind of task. 
Do we need, um, you know, a bunch of, uh, you know, architectural improvements or some sort of, uh, model capability improvements, you know, what would help make that better?Shawn Wang [00:12:53]: Is there, is there such an example where, uh, a benchmark inspired an architectural improvement? Like, uh, I'm just kind of. Jumping on that because you just.Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the, of the Gemini models that came, I guess, first in 1.5 really were about looking at, okay, we want to have, um, you know,Shawn Wang [00:13:15]: immediately everyone jumped to like completely green charts of like, everyone had, I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.Jeff Dean [00:13:23]: I mean, I think, um, as you say, that single needle-in-a-haystack benchmark is really saturated for at least context lengths up to 128K or something. We don't actually have, you know, much larger than 128K these days or something. We're trying to push the frontier of 1 million or 2 million context, which is good because I think there are a lot of use cases where, yeah, you know, putting a thousand pages of text or putting, you know, multiple hour-long videos in the context and then actually being able to make use of that is useful. The kinds of use cases we're trying to explore there are fairly large. But the single needle in a haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic, take all this content and produce this kind of answer from a long context that sort of better assesses what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting because I think the more meta level I'm trying to operate at here is you have a benchmark. You're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say. Exactly the kind of thing. Yeah, you're going to win. Short term. Longer term, I don't know if that's going to scale. You might have to undo that.Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is, can I attend to the internet while I answer my question? Right? But I don't think that's going to be solved by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that with, you know, a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find. You could attend to them not just for a single video, but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state with your permission.
So like your emails, your photos, your docs, your plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens? Right. In a meaningful way. Yeah.Shawn Wang [00:16:26]: But by the way, I think I did some math and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which like very comfortably fits.Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos.Shawn Wang [00:16:46]: Well, also, I think that the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video and audio, the sort of human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Yeah. Like LIDAR sensor data from, say, Waymo vehicles, or like robots or, you know, various kinds of health modalities, x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality and has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data you could have, because maybe it doesn't make sense in terms of trade-offs of what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of tempts the model that this is a thing.Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic and something I just get to ask you all the questions I always wanted to ask, which is fantastic. Like, are there some king modalities, like modalities that supersede all the other modalities? So a simple example was Vision can, on a pixel level, encode text. And DeepSeek had this DeepSeek-OCR paper that did that. And Vision has also been shown to maybe incorporate audio because you can do audio spectrograms and that's, that's also like a Vision capable thing. Like, so, so maybe Vision is just the king modality and like. Yeah.Jeff Dean [00:18:36]: I mean, Vision and Motion are quite important things, right? Motion. Well, like video as opposed to static images, because I mean, there's a reason evolution has evolved eyes like 23 independent ways, because it's such a useful capability for sensing the world around you, which is really what we want these models to be able to do: interpret the things we're seeing or the things we're paying attention to and then help us in using that information to do things. Yeah.Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out, I think Gemini, still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.Jeff Dean [00:19:15]: Yeah. Yeah. I mean, it's actually, I think people kind of are not necessarily aware of what the Gemini models can actually do. Yeah. Like I have an example I've used in one of my talks.
It had like, it was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are, what the date is when they happened, and a short description. And so you get like now an 18 row table of that information extracted from the video, which is, you know, not something most people think of as, like, turning video into a SQL-like table.Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of like, you mentioned attending to the whole internet, right? Google, it's almost built because a human cannot attend to the whole internet and you need some sort of ranking to find what you need. Yep. That ranking is like much different for an LLM because you can expect a person to look at maybe the first five, six links in a Google search versus for an LLM. Should you expect to have 20 links that are highly relevant? Like how do you internally figure out, you know, how do we build the AI mode that is like maybe like much broader search and span versus like the more human one? Yeah.Jeff Dean [00:20:47]: I mean, I think even pre-language model based work, you know, our ranking systems would be built to start with a giant number of web pages in our index, many of which are not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is, you know, the final 10 results or, you know, 10 results plus other kinds of information. And I think an LLM based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000-ish documents that have the, you know, maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked? And I think, you know, you can imagine systems where you have, you know, a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that sort of helps you narrow down from 30,000 to the 117 with maybe a little bit more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, might be your most capable model. So I think it's going to be some system like that that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you are searching the internet, but you're finding, you know, a very small subset of things that are, that are relevant.Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in like Google search history that, well, you know, like BERT was, like, basically immediately inside of Google search and that improved results a lot, right?
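For readers who want the staged funnel Jeff just described in a more concrete form, here is a minimal sketch: a cheap scorer narrows a huge corpus to tens of thousands of candidates, a mid-weight scorer narrows further, and only the final handful goes to the most capable (and most expensive) model. The function names and stage sizes are assumptions for illustration, not Google's implementation.

```python
# Illustrative sketch of a multi-stage retrieval funnel, as described above.
# cheap_score / mid_score / strong_model are assumed callables supplied by the caller.
from typing import Callable, List, Any

def retrieval_funnel(query: str,
                     corpus: List[Any],
                     cheap_score: Callable[[str, Any], float],
                     mid_score: Callable[[str, Any], float],
                     strong_model: Callable[[str, List[Any]], str],
                     k1: int = 30_000,
                     k2: int = 117) -> str:
    # Stage 1: lightweight scoring over everything (e.g. keyword or embedding dot product).
    stage1 = sorted(corpus, key=lambda doc: cheap_score(query, doc), reverse=True)[:k1]
    # Stage 2: a more expensive scorer over the survivors.
    stage2 = sorted(stage1, key=lambda doc: mid_score(query, doc), reverse=True)[:k2]
    # Stage 3: only the final candidates are read by the strongest model.
    return strong_model(query, stage2)
```

The design choice is the same one Jeff attributes to classic search ranking: spend almost nothing per item on the wide part of the funnel so you can afford to spend a lot per item at the narrow end.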
Like I don't, I don't have any numbers off the top of my head, but like, I'm sure you guys, that's obviously the most important numbers to Google. Yeah.Jeff Dean [00:23:08]: I mean, I think going to an LLM based representation of text and words and so on enables you to get out of the explicit hard notion of, of particular words having to be on the page, but really getting at the notion of this topic of this page or this page. Paragraph is highly relevant to this query. Yeah.Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic system, very high traffic. Yeah. Like it's Google, it's YouTube. YouTube has this like semantics ID thing where it's just like every token or every item in the vocab is a YouTube video or something that predicts the video using a code book, which is absurd to me for YouTube size.Jeff Dean [00:23:50]: And then most recently GROK also for, for XAI, which is like, yeah. I mean, I'll call out even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.Shawn Wang [00:24:06]: So do you have like a history of like, what's the progression? Oh yeah.Jeff Dean [00:24:09]: I mean, I actually gave a talk in, uh, I guess, uh, web search and data mining conference in 2009, uh, where we never actually published any papers about the origins of Google search, uh, sort of, but we went through sort of four or five or six. generations, four or five or six generations of, uh, redesigning of the search and retrieval system, uh, from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general. Uh, because if you don't have the page in your index, you're going to not do well. Um, and then we also needed to scale our capacity because we were, our traffic was growing quite extensively. Um, and so we had, you know, a sharded system where you have more and more shards as the index grows, you have like 30 shards. And then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. Um, and then as traffic grows, you add, you add more and more replicas of each of those. And so we eventually did the math that realized that in a data center where we had say 60 shards and, um, you know, 20 copies of each shard, we now had 1200 machines, uh, with disks. And we did the math and we're like, Hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we introduced, uh, we put our entire index in memory and what that enabled from a quality perspective was amazing. Um, and so we had more and more replicas of each of those. Before you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And so you, as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms like restaurant and restaurants and cafe and, uh, you know, things like that. Uh, bistro and all these things. 
And you can suddenly start really getting at the meaning of the word as opposed to the exact form the user typed in. And that was 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in order to get at the meaning.Alessio Fanelli [00:26:47]: What are the principles that you use to design these systems, especially when, I mean, in 2001 the internet is doubling, tripling every year in size. And I think today you kind of see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there any principles that you use to think about this? Yeah.Jeff Dean [00:27:08]: I mean, I think, first, whenever you're designing a system, you want to understand what the design parameters are that are going to be most important, you know? So, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? What happens if traffic were to double or triple, will that system work well? And I think a good design principle is that you want to design a system so that the most important characteristics can scale by factors of five or ten, but probably not beyond that, because often what happens is that if you design a system for X and something suddenly becomes a hundred X, that would enable a very different point in the design space that would not make sense at X, but all of a sudden at a hundred X makes total sense. So, like, going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the state on disk that those machines can actually hold a full copy of the index in memory. And that all of a sudden enabled a completely different design that wouldn't have been practical before. So I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, as you said, in the early days of Google, we were growing the index quite extensively. We were growing the update rate of the index. The update rate actually is the parameter that changed the most. Surprising. It used to be once a month.Shawn Wang [00:28:55]: Yeah.Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in sub one minute. Okay.Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?Jeff Dean [00:29:04]: Because all of a sudden, news-related queries, you know, if you've got last month's news index, it's not actually that useful.Shawn Wang [00:29:11]: News is a special beast. Was there any, like, you could have split it onto a separate system?Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news-related queries that people type into the main index to be updated too.Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to classify whether the page is, you have to decide which pages should be updated and at what frequency.
Oh yeah.Jeff Dean [00:29:30]: There's a whole system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often, because the likelihood they change might be low, but the value of having them updated is high.Shawn Wang [00:29:50]: Yeah, yeah. Well, you know, this mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up, which is "Latency Numbers Every Programmer Should Know." Was there a general story behind that? Did you just write it down?Jeff Dean [00:30:06]: I mean, this has sort of eight or ten different kinds of metrics, like: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send a packet from the US to the Netherlands, or something?Shawn Wang [00:30:21]: Why the Netherlands, by the way? Is that because of Chrome?Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands. So, I mean, I think this gets to the point of being able to do back-of-the-envelope calculations. These are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing of the result page, how would I do that? I could pre-compute the image thumbnails. I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? And you can actually do those thought experiments in 30 seconds or a minute with the basic numbers at your fingertips. And then as you build software using higher-level libraries, you kind of want to develop the same intuitions for how long it takes to look up something in this particular kind of...Shawn Wang [00:31:21]: I'll see you next time.Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any, if you were to update your...Jeff Dean [00:31:58]: I mean, I think it's really good to think about the calculations you're doing in a model, either for training or inference.Jeff Dean [00:32:09]: Often a good way to view that is: how much state will you need to bring in from memory, either on-chip SRAM, or HBM from the accelerator-attached memory, or DRAM, or over the network? And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's order, depending on your precision, I think it's like sub one picojoule.Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah.Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy, and how do you make the most energy-efficient system. And then moving data from the SRAM on the other side of the chip, not even off-chip, but on the other side of the same chip, can be, you know, a thousand picojoules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you'd better make use of that thing that you moved many, many times.
So that's where the batch dimension comes in. Because all of a sudden, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.Shawn Wang [00:33:40]: Yeah. Right.Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one-picojoule multiply.Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Ideally, you'd like to use batch size one because the latency would be great.Shawn Wang [00:33:56]: The best latency.Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.
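A quick back-of-envelope version of the arithmetic above, using the rough figures from the conversation (about 1 pJ per low-precision multiply, about 1,000 pJ to move a parameter across the chip into the multiplier). The exact numbers are illustrative, but the amortization effect is the point.

```python
# Back-of-envelope energy per useful multiply as a function of batch size,
# using the rough figures from the conversation (illustrative, not measured numbers).
MULTIPLY_PJ = 1.0        # ~1 pJ for a low-precision multiply in the matrix unit
MOVE_WEIGHT_PJ = 1000.0  # ~1,000 pJ to move one parameter from SRAM into the multiplier

def energy_per_multiply(batch_size):
    # Each weight that is moved once gets reused `batch_size` times,
    # so the movement cost is amortized across the batch.
    return MULTIPLY_PJ + MOVE_WEIGHT_PJ / batch_size

for b in (1, 8, 64, 256):
    print(f"batch {b:>3}: {energy_per_multiply(b):7.1f} pJ per multiply")
# batch   1:  1001.0 pJ  -> dominated by data movement
# batch 256:     4.9 pJ  -> movement cost is mostly amortized away
```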
Shawn Wang [00:34:04]: Is there a similar trick like you did with putting everything in memory? Like, I think obviously Groq has caused a lot of waves by betting very hard on SRAM. I wonder if that's something that you already saw with the TPUs, right? To serve at your scale, you probably saw that coming. What hardware innovations or insights were formed because of what you're seeing there?Jeff Dean [00:34:33]: Yeah. I mean, I think TPUs have this nice, regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. I think for serving some kinds of models, you pay a lot higher cost, and latency, bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now striping your smallish-scale model over, say, 16 or 64 chips. But if you do that and it all fits in SRAM, that can be a big win. So yeah, that's not a surprise, but it is a good technique.Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? How much do you decide where the improvements have to go? This is a good example: is there a way to bring the thousand picojoules down to 50? Is it worth designing a new chip to do that? The extreme is when people say, oh, you should burn the model onto an ASIC, and that's kind of the most extreme thing. How much of it is worth doing in hardware when things change so quickly? What was the internal discussion? Yeah.Jeff Dean [00:35:57]: I mean, we have a lot of interaction between, say, the TPU chip design architecture team and the higher-level modeling experts, because you really want to take advantage of being able to co-design what future TPUs should look like based on where we think the ML research puck is going, in some sense. Because as a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center, and then it has to have a reasonable lifetime of the chip, which takes you three, four, or five years. So you're trying to predict what ML computations people will want to run two to six years out, in a very fast-changing field. And so having people with interesting ML research ideas about things we think will start to work in that timeframe, or will be more important in that timeframe, really enables us to get interesting hardware features put into, you know, TPU N plus two, where TPU N is what we have today.Shawn Wang [00:37:10]: Oh, the cycle time is plus two.Jeff Dean [00:37:12]: Roughly. Wow. Because, I mean, sometimes you can squeeze some changes into N plus one, but bigger changes are going to require the chip design to be earlier in its lifetime design process. So whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something ten times as fast. And if it doesn't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Sometimes it's a very big change, and we want to be pretty sure this is going to work out, so we'll do lots of careful ML experimentation to show us this is actually the way we want to go. Yeah.Alessio Fanelli [00:37:58]: Is there a reverse of that: we already committed to this chip design, so we cannot take the model architecture that way because it doesn't quite fit?Jeff Dean [00:38:06]: Yeah. I mean, you definitely have things where you're going to adapt what the model architecture looks like so that it's efficient on the chips that you're going to have for both training and inference of that generation of model. So I think it kind of goes both ways. Sometimes you can take advantage of, say, lower-precision things that are coming in a future generation, so you might train at that lower precision even if the current generation doesn't quite do that. Mm.Shawn Wang [00:38:40]: Yeah. How low can we go in precision?Jeff Dean [00:38:43]: People are saying, like, ternary? Yeah, I mean, I'm a big fan of very low precision, because I think that saves you a tremendous amount. Right. Because it's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. You know, I think people have gotten a lot of mileage out of having very low bit precision things, but then having scaling factors that apply to a whole bunch of those weights. Scaling. How does it, okay.Shawn Wang [00:39:15]: Interesting. So low precision, but scaled-up weights. Yeah. Huh. Never considered that. Interesting. While we're on this topic, I think the concept of precision at all is weird when we're sampling, you know; at the end of this, we're going to have all these chips that do very good math, and then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards energy-based models and processors. I'm just curious, obviously you've thought about it, but what's your commentary?Jeff Dean [00:39:50]: Yeah. I mean, I think there's a bunch of interesting trends.
Energy based models is one, you know, diffusion based models, which don't sort of sequentially decode tokens is another, um, you know, speculative decoding is a way that you can get sort of an equivalent, very small.Shawn Wang [00:40:06]: Draft.Jeff Dean [00:40:07]: Batch factor, uh, for like you predict eight tokens out and that enables you to sort of increase the effective batch size of what you're doing by a factor of eight, even, and then you maybe accept five or six of those tokens. So you get. A five, a five X improvement in the amortization of moving weights, uh, into the multipliers to do the prediction for the, the tokens. So these are all really good techniques and I think it's really good to look at them from the lens of, uh, energy, real energy, not energy based models, um, and, and also latency and throughput, right? If you look at things from that lens, that sort of guides you to. Two solutions that are gonna be, uh, you know, better from, uh, you know, being able to serve larger models or, you know, equivalent size models more cheaply and with lower latency.Shawn Wang [00:41:03]: Yeah. Well, I think, I think I, um, it's appealing intellectually, uh, haven't seen it like really hit the mainstream, but, um, I do think that, uh, there's some poetry in the sense that, uh, you know, we don't have to do, uh, a lot of shenanigans if like we fundamentally. Design it into the hardware. Yeah, yeah.Jeff Dean [00:41:23]: I mean, I think there's still a, there's also sort of the more exotic things like analog based, uh, uh, computing substrates as opposed to digital ones. Uh, I'm, you know, I think those are super interesting cause they can be potentially low power. Uh, but I think you often end up wanting to interface that with digital systems and you end up losing a lot of the power advantages in the digital to analog and analog to digital conversions. You end up doing, uh, at the sort of boundaries. And periphery of that system. Um, I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency with sort of, uh, much better and specialized hardware for the models we care about.Shawn Wang [00:42:05]: Yeah.Alessio Fanelli [00:42:06]: Um, any other interesting research ideas that you've seen, or like maybe things that you cannot pursue a Google that you would be interested in seeing researchers take a step at, I guess you have a lot of researchers. Yeah, I guess you have enough, but our, our research.Jeff Dean [00:42:21]: Our research portfolio is pretty broad. I would say, um, I mean, I think, uh, in terms of research directions, there's a whole bunch of, uh, you know, open problems and how do you make these models reliable and able to do much longer, kind of, uh, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools in order to sort of build, uh, things that can accomplish, uh, you know, much more. Yeah. Significant pieces of work, uh, collectively, then you would ask a single model to do. Um, so that's super interesting. How do you get more verifiable, uh, you know, how do you get RL to work for non-verifiable domains? I think it's a pretty interesting open problem because I think that would broaden out the capabilities of the models, the improvements that you're seeing in both math and coding. Uh, if we could apply those to other less verifiable domains, because we've come up with RL techniques that actually enable us to do that. 
Effectively, that would really make the models improve quite a lot, I think.Alessio Fanelli [00:43:26]: I'm curious. When we had Noam Brown on the podcast, he said they already proved you can do it with deep research. You kind of have it with AI Mode; in a way it's not verifiable. I'm curious if there's any thread that you think is interesting there. Both are, like, information retrieval of JSON. So I wonder if the retrieval is the verifiable part that you can score, or what? How would you model that problem?Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even retrieving. Can you have another model that says, are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved to assess which ones are the 50 most relevant, or something? I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a critic as opposed to an actual retrieval system. Yeah.Shawn Wang [00:44:28]: I do think there is that weird cliff where it feels like we've done the easy stuff, and now the next part is super hard. But it always feels like that every year: it's like, oh, we know this part, and the next part is super hard and nobody's figured it out. And exactly with this RLVR thing, everyone's talking about, well, okay, how do we do the next stage, the non-verifiable stuff? And everyone's like, I don't know, you know, LLM judge.Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there are lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Because I think everyone sees that the models are great at some things, and they fall down around the edges of those things, and are not as capable as we'd like in those areas. And then coming up with good techniques, trying those, and seeing which ones actually make a difference is sort of what the whole research aspect of this field is pushing forward. And I think that's why it's super interesting. You know, if you think about two years ago, we were struggling with GSM-8K problems, right? Like: Fred has two rabbits, he gets three more rabbits, how many rabbits does he have? That's a pretty far cry from the kinds of mathematics that the models can do now; you're doing IMO and Erdős problems in pure language. Yeah. Pure language. So that is a really, really amazing jump in capabilities in, you know, a year and a half or something. And I think, for other areas, it would be great if we could make that kind of leap. And we don't exactly see how to do it for some areas, but we do see it for some other areas, and we're going to work hard on making that better. Yeah.Shawn Wang [00:46:13]: Yeah.Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI. We need that.Shawn Wang [00:46:20]: That would be. As far as content creators go.Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess many people do.Shawn Wang [00:46:27]: It does matter. People do judge books by their covers, as it turns out.
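A minimal sketch of the "same model, prompted differently, as a critic" idea mentioned a moment ago: a second pass scores each retrieved passage for relevance and keeps the top ones. `generate(prompt)` here is a stand-in for whatever LLM call you have; the prompt wording, the 0-10 scale, and the cut-off are made-up choices, not a specific product API.

```python
# Sketch of one model in two roles: a generator and, prompted differently, a critic
# that reranks retrieved passages. `generate(prompt)` is a placeholder for any LLM call.

def critic_score(generate, query, passage):
    prompt = (
        "Rate how relevant the passage is to the query on a 0-10 scale. "
        "Reply with only the number.\n"
        f"Query: {query}\nPassage: {passage}\nScore:"
    )
    reply = generate(prompt)
    try:
        return float(reply.strip().split()[0])
    except (ValueError, IndexError):
        return 0.0  # treat unparseable critiques as "not relevant"

def keep_most_relevant(generate, query, passages, keep=50):
    # Score every candidate with the critic prompt, then keep the top `keep`.
    ranked = sorted(passages, key=lambda p: critic_score(generate, query, p), reverse=True)
    return ranked[:keep]
```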
Um, just to draw a bit on the IMO gold. I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. Yeah. What's your reflection? I think this question about the merger of symbolic systems and LLMs was very much a core belief, and then somewhere along the line people just said, nope, we'll just do it all in the LLM.Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me, because, you know, humans manipulate symbols, but we probably don't have a symbolic representation in our heads, right? We have some distributed representation, neural-net-like in some way, of lots of different neurons, and activation patterns firing when we see certain things. And that enables us to reason and plan, and do chains of thought, and roll them back: now that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one. And in a lot of ways we're emulating what we intuitively think is happening inside real brains in neural-net-based models. So it never made sense to me to have completely separate, discrete, symbolic things, and then a completely different way of thinking about those things.Shawn Wang [00:47:59]: Interesting. Yeah. I mean, maybe it seems obvious to you, but it wasn't obvious to me a year ago. Yeah.Jeff Dean [00:48:06]: I mean, I do think that going from the IMO effort with translating to Lean and using Lean, and also a specialized geometry model, to this year switching to a single unified model, that is roughly the production model with a little bit more inference budget, is actually quite good, because it shows you that the capabilities of that general model have improved dramatically, and now you don't need the specialized model. This is actually very similar to the 2013-to-2016 era of machine learning, right? It used to be that people would train separate models for each different problem. I want to recognize street signs, so I train a street sign recognition model; or I want to do speech recognition, so I have a speech model. I think now the era of unified models that do everything is really upon us. And the question is how well those models generalize to new things they've never been asked to do, and they're getting better and better.Shawn Wang [00:49:10]: And you don't need domain experts. Like, so I interviewed ETA, who was on that team, and he was like, yeah, I don't know how they work. I don't know where the IMO competition was held. I don't know the rules of it. I just trained the models. Yeah. And it's kind of interesting that people with this universal skill set of machine learning, you just give them data and enough compute and they can kind of tackle any task, which is the bitter lesson, I guess. I don't know. Yeah.Jeff Dean [00:49:39]: I mean, I think general models will win out over specialized ones in most cases.Shawn Wang [00:49:45]: So I want to push there a bit. I think there's one hole here, which is:
There's this concept of, maybe, the capacity of a model; abstractly, a model can only contain the number of bits that it has. And, you know, who knows, Gemini Pro is like one to ten trillion parameters, we don't know. But the Gemma models, for example, right? A lot of people want the open-source local models, and those have some knowledge which is not necessary, right? They can't know everything. You have the luxury that the big model should be capable of everything, but when you're distilling and you're going down to the small models, you're actually memorizing things that are not useful. Yeah. And so, I guess, do we want to extract that? Can we divorce knowledge from reasoning, you know?Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space, right? You might prefer something that is more generally useful in more settings than this obscure fact that it has. So I think that's always a tension. At the same time, you also don't want your model to be completely detached from knowing stuff about the world, right? Like, it's probably useful to know how long the Golden Gate Bridge is, just as a general sense of how long bridges are, right? And it should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in some more obscure part of the world is, but it does help it to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval. Yeah.Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a pretty effective way of making the model seem much more capable, because if you think about, say, a personal Gemini, yeah, right?Jeff Dean [00:52:01]: Like, we're not going to train Gemini on my email, probably. We'd rather have a single model that we can then use, with retrieving from my email as a tool, and have the model reason about it, and retrieve from my photos or whatever, and then make use of that and have multiple stages of interaction. That makes sense.Alessio Fanelli [00:52:24]: Do you think the vertical models are an interesting pursuit? Like when people say, oh, we're building the best healthcare LLM, we're building the best law LLM. Are those kind of short-term stopgaps, or?Jeff Dean [00:52:37]: No, I mean, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can sort of view them as enriching the data distribution for that particular vertical domain, for healthcare, say, or for, say, robotics. We're probably not going to train Gemini on all possible robotics data you could train it on, because we want it to have a balanced set of capabilities.
Um, so we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that and then train it on more robotics data. And then maybe that would hurt its multilingual translation capability, but improve its robotics capabilities. And we're always making these kinds of trade-offs in the data mix that we train the base Gemini models on. You know, we'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, say, Perl programming. It'll still be good at Python programming, because we'll include enough of that, but there are other long-tail computer languages or coding capabilities that may suffer, or multimodal reasoning capabilities may suffer, because we didn't get to expose it to as much data there, but it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models. It would be nice to have the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare module, that can all be knitted together to work in concert and called upon in different circumstances, right? Like, if I have a health-related thing, then it should enable using this health module in conjunction with the main base model to be even better at those kinds of things. Yeah.Shawn Wang [00:54:36]: Installable knowledge. Yeah.Jeff Dean [00:54:37]: Right.Shawn Wang [00:54:38]: Just download it as a package.Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, a hundred billion tokens or a trillion tokens of health data. Yeah.Shawn Wang [00:54:51]: And for listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think. Yeah.Alessio Fanelli [00:54:56]: Yeah. I guess the question is, how many billions of tokens do you need to outpace the frontier model improvements? You know, if I have to make this model better at healthcare and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred? If I need a trillion healthcare tokens, they're probably not out there, you know. I think that's really the question.Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain. There's a lot of healthcare data that we don't have access to, appropriately, but there are a lot of healthcare organizations that want to train models on their own data, which is not public healthcare data. So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are going to be more bespoke, but probably might be better than a general model trained on, say, public data. Yeah.Shawn Wang [00:55:58]: Yeah. I believe, by the way, also, this is somewhat related to the language conversation: I think one of your favorite examples was that you can put a low-resource language in the context and it just learns.
Yeah.Jeff Dean [00:56:09]: Oh, yeah, I think the example we used was Kalamang, which is truly low-resource because it's only spoken by, I think, 120 people in the world, and there's no written text.Shawn Wang [00:56:20]: So, yeah. So you can just do it that way, just put it in the context. But I think you'd put your whole data set in the context, right?Jeff Dean [00:56:27]: If you take a language like, you know, Somali or something, there is a fair bit of Somali text in the world, or Ethiopian Amharic or something. We're probably not putting all the data from those languages into the Gemini base training. We put some of it, but if you put more of it, you'll improve the capabilities of those models.Shawn Wang [00:56:49]: Yeah.
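The trick being referred to is long-context in-context learning: put a low-resource language's reference materials directly in the prompt and ask for a translation, as in the published Kalamang demonstration. A minimal sketch under that assumption, where `generate` again stands in for any long-context model call and the prompt format is made up for illustration:

```python
# Sketch of "translation from reference materials placed in the prompt" for a
# low-resource language. `generate` is a placeholder for a long-context LLM call.

def translate_with_reference(generate, grammar_text, word_list, sentence,
                             language="Kalamang"):
    prompt = (
        f"Below are reference materials for {language}.\n\n"
        f"--- GRAMMAR ---\n{grammar_text}\n\n"
        f"--- WORD LIST ---\n{word_list}\n\n"
        f"Using only these materials, translate this into English:\n{sentence}"
    )
    return generate(prompt)
```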
This podcast features Gabriele Corso and Jeremy Wohlwend, co-founders of Boltz and authors of the Boltz Manifesto, discussing the rapid evolution of structural biology models from AlphaFold to their own open-source suite, Boltz-1 and Boltz-2. The central thesis is that while single-chain protein structure prediction is largely "solved" through evolutionary hints, the next frontier lies in modeling complex interactions (protein-ligand, protein-protein) and generative protein design, which Boltz aims to democratize via open-source foundations and scalable infrastructure.

Full Video Pod: on YouTube!

Timestamps
* 00:00 Introduction to Benchmarking and the "Solved" Protein Problem
* 06:48 Evolutionary Hints and Co-evolution in Structure Prediction
* 10:00 The Importance of Protein Function and Disease States
* 15:31 Transitioning from AlphaFold 2 to AlphaFold 3 Capabilities
* 19:48 Generative Modeling vs. Regression in Structural Biology
* 25:00 The "Bitter Lesson" and Specialized AI Architectures
* 29:14 Development Anecdotes: Training Boltz-1 on a Budget
* 32:00 Validation Strategies and the Protein Data Bank (PDB)
* 37:26 The Mission of Boltz: Democratizing Access and Open Source
* 41:43 Building a Self-Sustaining Research Community
* 44:40 Boltz-2 Advancements: Affinity Prediction and Design
* 51:03 BoltzGen: Merging Structure and Sequence Prediction
* 55:18 Large-Scale Wet Lab Validation Results
* 01:02:44 Boltz Lab Product Launch: Agents and Infrastructure
* 01:13:06 Future Directions: Developability and the "Virtual Cell"
* 01:17:35 Interacting with Skeptical Medicinal Chemists

Key Summary

Evolution of Structure Prediction & Evolutionary Hints
* Co-evolutionary Landscapes: The speakers explain that breakthrough progress in single-chain protein prediction relied on decoding evolutionary correlations where mutations in one position necessitate mutations in another to conserve 3D structure.
* Structure vs. Folding: They differentiate between structure prediction (getting the final answer) and folding (the kinetic process of reaching that state), noting that the field is still quite poor at modeling the latter.
* Physics vs. Statistics: RJ posits that while models use evolutionary statistics to find the right "valley" in the energy landscape, they likely possess a "light understanding" of physics to refine the local minimum.

The Shift to Generative Architectures
* Generative Modeling: A key leap in AlphaFold 3 and Boltz-1 was moving from regression (predicting one static coordinate) to a generative diffusion approach that samples from a posterior distribution.
* Handling Uncertainty: This shift allows models to represent multiple conformational states and avoid the "averaging" effect seen in regression models when the ground truth is ambiguous.
* Specialized Architectures: Despite the "bitter lesson" of general-purpose transformers, the speakers argue that equivariant architectures remain vastly superior for biological data due to the inherent 3D geometric constraints of molecules.

Boltz-2 and Generative Protein Design
* Unified Encoding: Boltz-2 (and BoltzGen) treats structure and sequence prediction as a single task by encoding amino acid identities into the atomic composition of the predicted structure.
* Design Specifics: Instead of a sequence, users feed the model blank tokens and a high-level "spec" (e.g., an antibody framework), and the model decodes both the 3D structure and the corresponding amino acids.
* Affinity Prediction: While model confidence is a common metric, Boltz-2 focuses on affinity prediction: quantifying exactly how tightly a designed binder will stick to its target.

Real-World Validation and Productization
* Generalized Validation: To prove the model isn't just "regurgitating" known data, Boltz tested its designs on 9 targets with zero known interactions in the PDB, achieving nanomolar binders for two-thirds of them.
* Boltz Lab Infrastructure: The newly launched Boltz Lab platform provides "agents" for protein and small molecule design, optimized to run 10x faster than open-source versions through proprietary GPU kernels.
* Human-in-the-Loop: The platform is designed to convert skeptical medicinal chemists by allowing them to run parallel screens and use their intuition to filter model outputs.

Transcript

RJ [00:05:35]: But the goal remains to really challenge the models: how well do these models generalize? And we've seen in some of the latest CASP competitions that, while we've become really, really good at proteins, especially monomeric proteins, other modalities still remain pretty difficult. So it's really essential in the field that there are these efforts to gather benchmarks that are challenging, so it keeps us in line about what the models can do or not.Gabriel [00:06:26]: Yeah, it's interesting you say that. In some sense, at CASP 14, a problem was solved, and pretty comprehensively, right? But at the same time, it was really only the beginning. So can you say what was the specific problem you would argue was solved, and then what is remaining, which is probably quite open?RJ [00:06:48]: I think we'll steer away from the term solved, because we have many friends in the community who get pretty upset at that word, and I think fairly so. But the problem that a lot of progress was made on was the ability to predict the structure of single-chain proteins. So proteins can be composed of many chains, and single-chain proteins are just a single sequence of amino acids.
And one of the reasons that we've been able to make such progress is also because we take a lot of hints from evolution. So the way the models work is that, you know, they sort of decode a lot of hints. That comes from evolutionary landscapes. So if you have, like, you know, some protein in an animal, and you go find the similar protein across, like, you know, different organisms, you might find different mutations in them. And as it turns out, if you take a lot of the sequences together, and you analyze them, you see that some positions in the sequence tend to evolve at the same time as other positions in the sequence, sort of this, like, correlation between different positions. And it turns out that that is typically a hint that these two positions are close in three dimension. So part of the, you know, part of the breakthrough has been, like, our ability to also decode that very, very effectively. But what it implies also is that in absence of that co-evolutionary landscape, the models don't quite perform as well. And so, you know, I think when that information is available, maybe one could say, you know, the problem is, like, somewhat solved. From the perspective of structure prediction, when it isn't, it's much more challenging. And I think it's also worth also differentiating the, sometimes we confound a little bit, structure prediction and folding. Folding is the more complex process of actually understanding, like, how it goes from, like, this disordered state into, like, a structured, like, state. And that I don't think we've made that much progress on. But the idea of, like, yeah, going straight to the answer, we've become pretty good at.Brandon [00:08:49]: So there's this protein that is, like, just a long chain and it folds up. Yeah. And so we're good at getting from that long chain in whatever form it was originally to the thing. But we don't know how it necessarily gets to that state. And there might be intermediate states that it's in sometimes that we're not aware of.RJ [00:09:10]: That's right. And that relates also to, like, you know, our general ability to model, like, the different, you know, proteins are not static. They move, they take different shapes based on their energy states. And I think we are, also not that good at understanding the different states that the protein can be in and at what frequency, what probability. So I think the two problems are quite related in some ways. Still a lot to solve. But I think it was very surprising at the time, you know, that even with these evolutionary hints that we were able to, you know, to make such dramatic progress.Brandon [00:09:45]: So I want to ask, why does the intermediate states matter? But first, I kind of want to understand, why do we care? What proteins are shaped like?Gabriel [00:09:54]: Yeah, I mean, the proteins are kind of the machines of our body. You know, the way that all the processes that we have in our cells, you know, work is typically through proteins, sometimes other molecules, sort of intermediate interactions. And through that interactions, we have all sorts of cell functions. And so when we try to understand, you know, a lot of biology, how our body works, how disease work. So we often try to boil it down to, okay, what is going right in case of, you know, our normal biological function and what is going wrong in case of the disease state. And we boil it down to kind of, you know, proteins and kind of other molecules and their interaction. 
And so when we try predicting the structure of proteins, it's critical to have an understanding of those interactions. It's a bit like the difference between having a list of parts that you would put in a car and seeing the car in its final form; seeing the car really helps you understand what it does. On the other hand, going to your question of why we care how the protein folds, or how the car is made, to some extent: sometimes when something goes wrong, there are cases of proteins misfolding in some diseases and so on, and if we don't understand the folding process, we don't really know how to intervene.RJ [00:11:30]: There's this nice line, I think it's in the AlphaFold 2 manuscript, where they discuss why we were even hopeful that we could target the problem in the first place. And there's this notion that, well, for proteins that fold, the folding process is almost instantaneous, which is a strong signal that we might be able to predict this very constrained thing that the protein does so quickly. And of course that's not the case for all proteins, and there are a lot of really interesting mechanisms in the cells, but I remember reading that and thinking, yeah, that's somewhat of an insightful point.Gabriel [00:12:10]: I think one of the interesting things about the protein folding problem is that it used to be studied, and part of the reason why people thought it was impossible, as kind of a classical example of an NP problem. There are so many different shapes that these amino acids could take, and this grows combinatorially with the size of the sequence. And so there used to be a lot of more theoretical computer science thinking about and studying protein folding as an NP problem. And so it was very surprising, also from that perspective, seeing machine learning do this: clearly there is some signal in those sequences, through evolution, but also through other things that we as humans are probably not really able to understand, but that the models have learned.Brandon [00:13:07]: And so, Andrew White, we were talking to him a few weeks ago, and he said that he was following the development of this, and that there were actually ASICs that were developed just to solve this problem. So there were many, many millions of computational hours spent trying to solve this problem before AlphaFold. And just to be clear, one thing that you mentioned was that there's this kind of co-evolution of mutations, and that you see this again and again in different species. So explain: why does that give us a good hint that they're close by to each other? Yeah.RJ [00:13:41]: Um, think of it this way: if I have some amino acid that mutates, it's going to impact everything around it, right? In three dimensions. And so it's almost like the protein, through probably random mutations and evolution, ends up figuring out that this other amino acid needs to change as well for the structure to be conserved.
Uh, so this whole principle is that the structure is probably largely conserved, because there's this function associated with it. And so it's really different positions compensating for each other. I see.Brandon [00:14:17]: Those hints in aggregate give us a lot. Yeah. So you can start to look at what is close to each other, and then you can start to look at what kinds of folds are possible given the structure, and then what is the end state.RJ [00:14:30]: And therefore you can make a lot of inferences about what the actual total shape is. Yeah, that's right. It's almost like you have this big three-dimensional valley where you're trying to find these low-energy states, and there's so much to search through that it's almost overwhelming. But these hints sort of put you in an area of the space that's already kind of close to the solution, maybe not quite there yet. And there's always this question of how much physics these models are learning versus just pure statistics. And I think one of the things, at least I believe, is that once you're in that approximate area of the solution space, then the models have some understanding of how to get you to the lower-energy state. And so maybe you have some light understanding of physics, but maybe not quite enough to know how to navigate the whole space. Right. Okay.Brandon [00:15:25]: So we need to give it these hints to kind of get into the right valley, and then it finds the minimum or something. Yeah.Gabriel [00:15:31]: One interesting explanation of how AlphaFold works, that I think is quite insightful, though of course it doesn't cover the entirety of what AlphaFold does, is one I'm going to borrow from Sergey Ovchinnikov at MIT. The interesting thing about AlphaFold is that it's got this very peculiar architecture, and this architecture operates on this pairwise context between amino acids. And so the idea is that the MSA probably gives you this first hint about which amino acids are potentially close to each other. MSA is multiple sequence alignment? Exactly. Yeah. This evolutionary information. And from this evolutionary information about potential contacts, it's almost as if the model is running some kind of Dijkstra-like algorithm, where it's sort of decoding: okay, these have to be close; okay, then if these are close and this is connected to this, then this has to be somewhat close. And so you decode this, and that becomes basically a pairwise distance matrix. And then from this rough pairwise distance matrix, you decode theBrandon [00:16:42]: actual potential structure. Interesting. So there's kind of two different things going on, the coarse-grained and then the fine-grained optimizations. Interesting. Yeah. Very cool.
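A toy version of the co-evolution signal described above: measure how strongly two alignment columns vary together (here with plain mutual information) and treat the highest-scoring pairs as candidate 3D contacts. Real pipelines add sequence reweighting and corrections such as APC, which are omitted here; this is only meant to make the intuition concrete.

```python
import math
from collections import Counter

def column_pair_mi(msa, i, j):
    """Mutual information between columns i and j of a multiple sequence alignment
    (a list of equal-length strings). High MI means the two positions tend to vary
    together, a crude hint that they may be in contact in 3D."""
    n = len(msa)
    joint = Counter((s[i], s[j]) for s in msa)
    col_i = Counter(s[i] for s in msa)
    col_j = Counter(s[j] for s in msa)
    mi = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        mi += p_ab * math.log(p_ab / ((col_i[a] / n) * (col_j[b] / n)))
    return mi

def candidate_contacts(msa, top_k=10, min_separation=5):
    """Score all column pairs (skipping near neighbors along the chain) and return
    the highest-scoring ones as candidate contacts."""
    length = len(msa[0])
    pairs = [((i, j), column_pair_mi(msa, i, j))
             for i in range(length) for j in range(i + min_separation, length)]
    return sorted(pairs, key=lambda x: x[1], reverse=True)[:top_k]
```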
Gabriel [00:16:53]: Yeah. You mentioned AlphaFold3, so maybe it's a good time to move on to that. So yeah, AlphaFold2 came out and it was, I think, fairly groundbreaking for this field; everyone got very excited. A few years later, AlphaFold3 came out, and maybe for some more history, what were the advancements in AlphaFold3? And then after that we'll talk a bit about how it connects to Boltz. But anyway. Yeah. So after AlphaFold2 came out, Jeremy and I got into the field, and with many others, the clear problem that was obvious after that was: okay, now we can do individual chains. Can we do interactions? Interactions between different proteins, proteins with small molecules, proteins with other molecules. And so, why are interactions important? Interactions are important because, to some extent, that's the way these machines, these proteins, have a function; the function comes from the way that they interact with other proteins and other molecules. Actually, in the first place, the individual machines are often, as Jeremy was mentioning, not made of a single chain but of multiple chains, and then these multiple chains interact with other molecules to give them their function. And on the other hand, when we try to intervene on these interactions, think about a disease, think about a biosensor or many other cases, we are trying to design molecules or proteins that interact in a particular way with what we would call a target protein, or target. This problem, after AlphaFold2, became clearly one of the biggest problems in the field to solve, and many groups, including ours and others, started making contributions to this problem of trying to model these interactions. And AlphaFold3 was a significant advancement on the problem of modeling interactions. And one of the interesting things they were able to do, while some of the rest of the field tried to model different interactions separately, you know, how proteins interact with small molecules, how proteins interact with other proteins, how RNA or DNA have their structure, is that they put everything together and trained very large models, with a lot of advances including changing some of the key architectural choices, and managed to get a single model that was able to set new state-of-the-art performance across all of these different modalities, whether that was protein-small molecule, which is critical to developing new drugs, protein-protein, or understanding interactions of proteins with RNA and DNA and so on.Brandon [00:19:39]: Just to satisfy the AI engineers in the audience, what were some of the key architectural and data changes that made that possible?Gabriel [00:19:48]: Yeah, so one critical one, which was not necessarily unique to AlphaFold3, there were actually a few other teams including ours that proposed this, was moving from modeling structure prediction as a regression problem, where there is a single answer and you're trying to shoot for that answer, to a generative modeling problem, where you have a posterior distribution of possible structures and you're trying to sample from this distribution. And this achieves two things. One is it starts to allow us to model more dynamic systems. As we said, some of these proteins can actually take multiple structures, and so you can now model that through modeling the entire distribution.
But on the second hand, on more core modeling questions, when you move from a regression problem to a generative modeling problem, you are really tackling the way you think about uncertainty in the model in a different way. So if the model is undecided between different answers, what's going to happen in a regression model is that it's going to try to make an average of those different answers it had in mind. When you have a generative model, what you're going to do is sample all these different answers, and then maybe use separate models to analyze those different answers and pick out the best. So that was one of the critical improvements. The other improvement is that they significantly simplified, to some extent, the architecture, especially of the final model that takes those pairwise representations and turns them into an actual structure. And that now looks a lot more like a traditional transformer than the very specialized equivariant architecture that was in AlphaFold2.Brandon [00:21:41]: So this is the bitter lesson, a little bit.Gabriel [00:21:45]: There is some aspect of the bitter lesson, but the interesting thing is that it's very far from being a simple transformer. This field is one of the, I'd argue, very few fields in applied machine learning where we still have architectures that are very specialized. And there are many people that have tried to replace these architectures with simple transformers, and there is a lot of debate in the field, but I think most of the consensus is that the performance we get from the specialized architectures is vastly superior to what we get from a plain transformer. Another interesting thing, staying on the modeling and machine learning side, which I think is somewhat counterintuitive coming from some of the other fields and applications, is that scaling hasn't really worked the same way in this field. Now, models like AlphaFold2 and AlphaFold3 are still very large models.
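A toy illustration of the uncertainty point above: if the ground truth is sometimes conformation A and sometimes conformation B, a model trained with squared-error regression is pulled toward their average, which may correspond to neither, while a generative model can put probability mass on both. This is a teaching example only, nothing Boltz- or AlphaFold-specific.

```python
import random

# Two hypothetical conformations for a single coordinate of one atom.
CONF_A, CONF_B = -1.0, 1.0

# Ambiguous ground truth: half the observed structures show A, half show B.
samples = [random.choice((CONF_A, CONF_B)) for _ in range(10_000)]

# A model trained with mean-squared-error regression converges toward the mean...
mse_optimum = sum(samples) / len(samples)
print(f"regression answer ~ {mse_optimum:+.2f}   (close to 0, i.e. neither conformation)")

# ...while a generative model tries to reproduce the distribution itself, so its
# samples land near -1 or +1, like the data does.
def toy_generative_sample():
    return random.choice((CONF_A, CONF_B))

print("generative samples:", [toy_generative_sample() for _ in range(5)])
```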
RJ [00:29:14]: ...in a place, I think, where we had some experience working with the data and working with this type of model. And I think that put us already in a good place to produce it quickly. And I would even say I think we could have done it quicker. The problem was that, for a while, we didn't really have the compute, and so we couldn't really train the model. And actually, we only trained the big model once. That's how much compute we had; we could only train it once. And so while the model was training, we were finding bugs left and right, a lot of them that I wrote. And I remember doing surgery in the middle: stopping the run, making the fix, relaunching. And yeah, we never actually went back to the start. We just kept training it with the bug fixes along the way, which would be impossible to reproduce now. Yeah, that model has gone through such a curriculum that it learned some weird stuff. But somehow, by miracle, it worked out.Gabriel [00:30:13]: The other funny thing is that most of that model was trained on a cluster from the Department of Energy. But that's a shared cluster that many groups use. And so we were basically training the model for two days, and then it would go back into the queue and stay a week in the queue. Oh, yeah. And so it was pretty painful. And towards the end, with Evan, the CEO of Genesis, I was telling him a bit about the project and about this frustration with the compute, and luckily he offered to help. And so we got the help from Genesis to finish up the model; otherwise, it probably would have taken a couple of extra weeks.Brandon [00:30:57]: Yeah, yeah.Brandon [00:31:02]: And then there's some progression from there.Gabriel [00:31:06]: Yeah, so I would say that Boltz-1, but also these other sets of models that came around the same time, were a big leap from the previous open-source models, really approaching the level of AlphaFold3. But I would still say that, even to this day, there are some specific instances where AlphaFold3 works better. I think one common example is antibody-antigen prediction, where AlphaFold3 still seems to have an edge in many situations. Obviously, these are somewhat different models; you run them, you obtain different results. So it's not always the case that one model is better than the other, but in aggregate, we still, especially at the time...Brandon [00:32:00]: So AlphaFold3 still has a bit of an edge. We should talk about this more when we talk about BoltzGen, but how do you know one model is better than the other? Like, I make a prediction, you make a prediction, how do you know?Gabriel [00:32:11]: Yeah, so the great thing about structure prediction, and once we get into the design space of designing new small molecules and new proteins this becomes a lot more complex, but the great thing about structure prediction is that, a bit like CASP was doing, the way that you can evaluate models is that you train a model on structures that were released across the field up until a certain time. And one of the things that we didn't talk about that was really critical in all this development is the PDB, the Protein Data Bank. It's this common resource, basically a common database where every biologist publishes their structures. And so we can train on all the structures that were put in the PDB until a certain date, and then we basically look for recent structures: okay, which structures look pretty different from anything that was published before, because we really want to try to understand generalization.Brandon [00:33:13]: And then on these new structures, we evaluate all these different models. And so you just know when AlphaFold3 was trained, and you intentionally train to the same date or something like that. Exactly. Right. Yeah.Gabriel [00:33:24]: And so this is kind of the way that you can somewhat easily compare these models. Obviously, that assumes that, you know, the training...
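A sketch of the evaluation recipe just described: train on structures deposited before a cutoff date, then test only on newer structures that are sufficiently dissimilar from anything in the training set. The `similarity` function is a placeholder for whatever a real benchmark uses (for example, sequence identity or a structural similarity score), and the cutoff values are arbitrary.

```python
from datetime import date

def time_split(entries, cutoff=date(2023, 1, 1)):
    """entries: objects with a .release_date (datetime.date), e.g. parsed PDB metadata.
    Everything deposited before the cutoff is eligible for training."""
    train = [e for e in entries if e.release_date < cutoff]
    recent = [e for e in entries if e.release_date >= cutoff]
    return train, recent

def held_out_test_set(train, recent, similarity, max_similarity=0.3):
    """Keep only post-cutoff structures that look unlike anything in training.
    `similarity` is a placeholder, e.g. max sequence identity against the train set."""
    return [c for c in recent
            if max((similarity(c, t) for t in train), default=0.0) <= max_similarity]
```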
You've always been very passionate about validation. I remember DiffDock, and then there was DiffDock-L and DockGen. You've thought very carefully about this in the past. Like, actually, I think DockGen is a really funny story; I don't know if you want to talk about that. It's an interesting one... Yeah, I think one of the amazing things about putting things out open source is that we get a ton of feedback from the field. Sometimes we get great feedback from people who really like it. But honestly, most of the time, and to be honest that's also maybe the most useful feedback, it's people sharing where it doesn't work. At the end of the day, that's critical. And this is something you see across other fields of machine learning: to make progress, it's always critical to set clear benchmarks. As you start making progress on certain benchmarks, you need to improve the benchmarks and make them harder and harder. That's the progression of how the field operates. And so the example of DockGen: we published this initial model called DiffDock in my first year of PhD, which was one of the early models to try to predict interactions between proteins and small molecules, and which came out a year after AlphaFold2 was published. On the one hand, on the benchmarks that we were using at the time, DiffDock was doing really well, outperforming some of the traditional physics-based methods. But on the other hand, when we started giving these tools to many biologists, and one example we collaborated with was the group of Nick Polizzi at Harvard, we started noticing that there was this clear pattern where, for proteins that were very different from the ones the model was trained on, the model was struggling. And so it seemed clear that this is probably where we should put our focus. We first developed, with Nick and his group, a new benchmark, and then went after it and said, okay, what can we change about the current architecture to improve this pattern of generalization? And that's the same thing we're still doing today: where does the model not work? And then, once we have that benchmark, let's throw everything we have, any ideas that we have, at the problem. RJ [00:36:15]: And there's a lot of healthy skepticism in the field, which I think is great. It's very clear that there's a ton of things the models don't really work well on, but I think one thing that's probably undeniable is just the pace of progress, how much better we're getting every year. And so if you assume any constant rate of progress moving forward, I think things are going to look pretty cool at some point in the future. Gabriel [00:36:42]: ChatGPT was only three years ago. Yeah, I mean, it's wild, right? RJ [00:36:45]: Yeah, it's one of those things. Being in the field, you don't see it coming, you know? And hopefully we'll continue to have as much progress as we've had the past few years. Brandon [00:36:55]: So this is maybe an aside, but I'm really curious: you get this great feedback from the community, right?
By being open source. My question is partly, okay, if you open source, everyone can copy what you did, but it's also maybe about balancing priorities, right? Where all my customers are saying, I want this, and there are all these problems with the model. Yeah, yeah. But my customers don't care, right? So how do you think about that? Yeah. Gabriel [00:37:26]: So I would say a couple of things. One is that part of our goal with Boltz, and this is also established as the mission of the public benefit company that we started, is to democratize access to these tools. But one of the reasons why we realized that Boltz needed to be a company, that it couldn't just be an academic project, is that putting a model on GitHub is definitely not enough to get chemists and biologists across academia, biotech, and pharma to use your model in their therapeutic programs. So a lot of what we think about at Boltz, beyond just the models, is all the layers that come on top of the models to get from those models to something that can really enable scientists in the industry. That goes into building the right kind of workflows that take in, for example, the data and try to answer directly those problems that the chemists and the biologists are asking, and then also building the infrastructure. This is to say that, even with the models fully open, we see a ton of potential for products in the space. And the critical part about a product is that, even with an open source model, running the model is not free. As we were saying, these are pretty expensive models, and especially, and maybe we'll get into this, these days we're seeing pretty dramatic inference-time scaling of these models, where the more you run them, the better the results are. But you start getting to a point where compute and compute costs become a critical factor. So putting a lot of work into building the right infrastructure, building the optimizations and so on, really allows us to provide a much better service than just the open source models. That said, even though with a product we can provide a much better service, I do still think, and we will continue to put a lot of our models out as open source, because the critical role of open source models is helping the community make progress on the research, from which we all benefit. So we'll continue, on the one hand, to put some of our base models out as open source so that the field can build on top of them, and, as we discussed earlier, we learn a ton from the way the field uses and builds on top of our models; but then we also try to build a product that gives the best experience possible to scientists. So that a chemist or a biologist doesn't need to spin up a GPU and set up our open source model in a particular way, but can just use it. A bit like, even though I am a computer scientist, a machine learning scientist, I don't necessarily take an open source LLM and try to spin it up myself.
I just maybe open the ChatGPT app or Claude Code and use it as an amazing product. We kind of want to give the same experience on this front. Brandon [00:40:40]: I heard a good analogy yesterday that a surgeon doesn't want the hospital to design a scalpel, right? Brandon [00:40:48]: So just buy the scalpel. RJ [00:40:50]: You wouldn't believe the number of people, even in my short time between AlphaFold3 coming out and the end of the PhD, that would reach out just for us to run AlphaFold3 for them, or things like that. Just because, with Boltz in our case too, it's just not that easy to do if you're not a computational person. And I think part of the goal here is also that we continue to obviously build the interface for computational folks, but that the models are also accessible to a larger, broader audience. And that comes from good interfaces and things like that. Gabriel [00:41:27]: I think one really interesting thing about Boltz is that with the release of it, you didn't just release a model, you created a community. Yeah. And that community grew very quickly. Did that surprise you? And what has the evolution of that community been, and how has it fed back into Boltz? RJ [00:41:43]: If you look at its growth, it's very much that when we release a new model there's a big, big jump. But yeah, it's been great. We have a Slack community that has thousands of people on it, and it's actually self-sustaining now, which is the really nice part, because it's almost overwhelming to try to answer everyone's questions and help. It's really difficult, you know, for the few people that we were. But it ended up that people would answer each other's questions and help one another. So the Slack has been kind of self-sustaining, and that's been really cool to see. RJ [00:42:21]: And that's the Slack part, but then also obviously on GitHub we've had a nice community. I think we also aspire to be even more active on it than we've been in the past six months, which has been a bit challenging for us. But yeah, the community has been really great, and there are a lot of papers that have come out with new evolutions on top of Boltz. It surprised us to some degree, because there are a lot of models out there, and people converging on this one was really cool. And I think it speaks to the importance, when you put code out, of putting a lot of emphasis on making it as easy to use as possible, something we thought a lot about when we released the code base. It's far from perfect, but, you know. Brandon [00:43:07]: Do you think that was one of the factors that caused your community to grow, just the focus on being easy to use, making it accessible? I think so. RJ [00:43:14]: Yeah. And we've heard it from a few people over the years now. And some people still think it should be a lot nicer, and they're right.
But yeah, I think it was, at the time, maybe a little bit easier to use than other things. Gabriel [00:43:29]: The other part that I think led to the community, and to some extent to the trust in what we put out, is the fact that it's not really been just one model. Maybe we'll talk about it: after Boltz 1 there were maybe another couple of models released, or open sourced, soon after. We continued that open source journey with Boltz 2, where we were not only improving structure prediction but also starting to do affinity prediction, understanding the strength of the interactions between these different molecules, which is this critical property that you often want to optimize in discovery programs. And then, more recently, also a protein design model. So we've been building this suite of models that come together and interact with one another, where there is almost an expectation, which we take very much to heart, that across the entire suite of different tasks we always have the best, or among the best, models out there, so that our open source tools can be the go-to models for everybody in the industry. I really want to talk about Boltz 2, but before that, one last question in this direction: was there anything about the community that surprised you? Was someone doing something where you thought, why would you do that, that's crazy? Or, that's actually genius, I never would have thought of that? RJ [00:45:01]: I mean, we've had many contributions. I think some of the interesting ones... we had this one individual who wrote a complex GPU kernel for a piece of the architecture, and the funny thing is that that piece of the architecture had been there since AlphaFold2, and I don't know why it took Boltz for this person to decide to do it, but that was a really great contribution. We've had a bunch of others, people figuring out ways to hack the model to do things like cyclic peptides. I don't know if there are any other interesting ones that come to mind. Gabriel [00:45:41]: One cool one, and this was something that was initially proposed as a message in the Slack channel by Tim O'Donnell: there are some cases, for example the antibody-antigen interactions we discussed, where the models don't necessarily get the right answer. What he noticed is that the models were somewhat stuck in where they were placing the antibodies. So he basically ran experiments where, in this model, you can condition it, you can give it hints. He gave hints to the model: okay, you should bind to this residue; you should bind to the first residue, or the 11th residue, or the 21st residue, basically every 10 residues, scanning the entire antigen. Brandon [00:46:33]: Residues are the... Gabriel [00:46:34]: The amino acids, yeah. So the first amino acid, the 11th amino acid, and so on.
So it's sort of like doing a scan, then conditioning the model to predict all of them, then looking at the confidence of the model in each of those cases and taking the top. It's a very, somewhat crude way of doing inference-time search. But surprisingly, for antibody-antigen prediction, it actually helped quite a bit. And so there are some interesting ideas where, obviously, as the developer of the model, you say, wow, why would the model be so dumb? But it's very interesting, and it leads you to start thinking about, okay, can I do this not with brute force, but in a smarter way? RJ [00:47:22]: And we've also done a lot of work in that direction. And that speaks to the power of scoring. We're seeing that a lot, and I'm sure we'll talk about it more when we talk about BoltzGen. But our ability to take a structure and determine that that structure is good, you know, somewhat accurate, whether that's a single chain or an interaction, is a really powerful way of improving the models. Because if you can sample a ton, and you assume that if you sample enough you're likely to have the good structure in there, then it really just becomes a ranking problem. And part of the inference-time scaling that Gabri was talking about is very much that: the more we sample, the more the ranking model ends up finding something it really likes. So I think our ability to get better at ranking is also what's going to enable the next big breakthroughs. Interesting.
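A rough sketch of the two tricks just described, the residue-hint scan and confidence-based ranking, folded into one loop. `predict_complex` is a hypothetical stand-in for a structure predictor that accepts a binding-site hint and returns a structure plus a confidence score; it is not the actual Boltz API:

```python
def scan_and_rank(antibody_seq, antigen_seq, predict_complex, stride=10):
    """Condition the model on 'bind near residue i' for every `stride`-th residue
    of the antigen, then keep whichever prediction the model is most confident in.
    This is the crude inference-time search described above."""
    best = None
    for residue_idx in range(0, len(antigen_seq), stride):
        structure, confidence = predict_complex(
            antibody=antibody_seq,
            antigen=antigen_seq,
            hint_residue=residue_idx,  # "the binder should contact this residue"
        )
        if best is None or confidence > best[0]:
            best = (confidence, residue_idx, structure)
    return best  # (best confidence, hinted residue, predicted complex)
```

The same pattern generalizes to the inference-time scaling RJ mentions: sample or condition many times, score every candidate, and let the ranking model pick.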
Brandon [00:48:17]: But I guess, my understanding is, there's a diffusion model and you generate some stuff, and then, I guess it's just what you said, right? Then you rank it using a score, and then you finally... So can you talk about those different parts? Yeah. Gabriel [00:48:34]: So, first of all, one of the critical beliefs that we had when we started working on Boltz 1 was that structure prediction models are somewhat our field's version of foundation models, learning about how proteins and other molecules interact. And then we can leverage that learning to do all sorts of other things. So with Boltz 2, we leveraged that learning to do affinity prediction: understanding, if I give you this protein and this molecule, how tight that interaction is. For BoltzGen, what we did was take that foundation model and fine-tune it to design entirely new proteins. The way that basically works is that, for the protein you're designing, instead of feeding in an actual sequence, you feed in a set of blank tokens, and you train the model to predict both the structure of that protein and what the different amino acids of that protein are. So basically the way that BoltzGen operates is that you feed in a target, a protein that you may want to bind to, or DNA, or RNA. And then you feed in the high-level design specification of what you want your new protein to be. For example, it could be an antibody with a particular framework, it could be a peptide, it could be many other things. And that's with natural language, or? That's basically prompting; we have this sort of spec that you specify, and you feed this spec to the model. The model translates it into a set of tokens, a set of conditioning, a set of blank tokens, and then, as part of the diffusion model, it decodes a new structure and a new sequence for your protein. And then we basically take that, and as Jeremy was saying, we try to score it: how good of a binder is it to that original target? Brandon [00:50:51]: You're using basically Boltz to predict the folding and the affinity to that molecule, and then that kind of gives you a score? Exactly. Gabriel [00:51:03]: So you use this model to predict the folding, and then you do two things. One is that you re-predict the structure with something like Boltz 2, and then you compare that structure with what the design model predicted. This is what the field calls consistency: basically, you want to make sure that the structure you're predicting is actually what you were trying to design, and that gives you much better confidence that it's a good design. So that's the first filter. The second filter that we did as part of the BoltzGen pipeline that was released is that we look at the confidence the model has in the structure. Now, unfortunately, going to your question about predicting affinity, confidence is not a very good predictor of affinity. And so one of the areas where we've actually made a ton of progress since we released Boltz 2... Brandon [00:52:03]: And we have some new results that we're going to announce soon on the ability to get much better hit rates when, instead of trying to rely on the confidence of the model, we directly try to predict the affinity of that interaction. Okay. Just backing up a minute: so your diffusion model actually predicts not only the protein sequence, but also the folding of it? Exactly. Gabriel [00:52:32]: And actually, one of the big things that we did differently compared to other models in the space, and there were some papers that had already done this before, but we really scaled it up, was somewhat merging structure prediction and sequence prediction into almost the same task. So the way that BoltzGen works is that, basically, the only thing you're doing is predicting the structure. The only supervision we give is supervision on the structure; but because the structure is atomic, and the different amino acids have different atomic compositions, from the way that you place the atoms we understand not only the structure but also the identity of the amino acid that the model believed was there. And so basically, instead of having these two supervision signals, one discrete, one continuous,
that somewhat don't interact well together, we built an encoding of sequences into structures that allows us to use exactly the same supervision signal that we were using for Boltz 2, which is largely similar to what AlphaFold3 proposed and which is very scalable. And we can use that to design new proteins. Oh, interesting. RJ [00:53:58]: Maybe a quick shout out to Hannes Stark on our team, who did all this work. Yeah. Gabriel [00:54:04]: Yeah, that was a really cool idea. I mean, looking at the paper, there's this encoding where you just add a bunch of, I guess, atoms, which can be anything, and then they get rearranged and basically plopped on top of each other, and that encodes what the amino acid is. There's sort of a unique way of doing this. It was such a cool, fun idea. RJ [00:54:29]: I think that idea had existed before. Yeah, there were a couple of papers. Gabriel [00:54:33]: Yeah, that had proposed this, and Hannes really took it to large scale. Brandon [00:54:39]: A lot of the paper for BoltzGen is dedicated to the validation of the model. In my opinion, all the people we talk to basically feel that the wet lab, or whatever the appropriate real-world validation is, is the whole problem, or not the whole problem but a big, giant part of the problem. So can you talk a little bit about the highlights from there? Because to me the results are impressive, both from the perspective of the model and also just the effort that went into the validation by a large team. Gabriel [00:55:18]: First of all, I should start by saying that both when we were at MIT, in Tommi Jaakkola and Regina Barzilay's lab, and at Boltz, we are not a bio lab and we are not a therapeutics company. So to some extent we were forced to look outside of our group, our team, to do the experimental validation. One of the things that Hannes really pioneered on the team was the idea: okay, can we go not just to one specific group, find one specific system, maybe overfit a bit to that system, and try to validate there; instead, how can we test this model across a very wide variety of different settings? Protein design is such a wide task, with all sorts of different applications, from therapeutics to biosensors and many others, so can we get a validation that goes across many different tasks? He basically put together something like 25 different academic and industry labs that committed to testing some of the designs from the model, some of that testing is still ongoing, and to giving results back to us in exchange for hopefully getting some great new sequences for their task. He was able to coordinate this very wide set of scientists, and already in the paper, I think we
shared results from, I think, eight to ten different labs: results from designing peptides targeting ordered proteins, peptides targeting disordered proteins, results from designing proteins that bind to small molecules, and results from designing nanobodies, across a wide variety of different targets. So that gave the paper a lot of validation of the model, validation that was quite broad. Brandon [00:57:39]: And so would those be therapeutics for those animals, or are they relevant to humans as well? They're relevant to humans as well. Gabriel [00:57:45]: Obviously, you need to do some work into, quote unquote, humanizing them, making sure that they have the right characteristics so they're not toxic to humans and so on. RJ [00:57:57]: There are some approved medicines on the market that are nanobodies. There's a general pattern, I think, of trying to design things that are smaller: they're easier to manufacture. At the same time, that comes with potentially other challenges, like maybe a little less selectivity than something that has more hands. But yeah, there's this big desire to try to design mini proteins, nanobodies, small peptides, things that are just great drug modalities. Brandon [00:58:27]: Okay. I think we left off talking about validation, validation in the lab, and I was very excited about seeing all the diverse validations that you've done. Can you go into some more detail about them? Specific ones? Yeah. RJ [00:58:43]: The nanobody one, I think we did, what was it, 15 targets? Is that correct? 14. 14 targets. So typically the way this works is that we make a lot of designs, on the order of tens of thousands, and then we rank them and pick the top. In this case it was 15, right, for each target, and then we measure the success rates: both how many targets we were able to get a binder for, and also, more generally, out of all of the binders that we designed, how many actually proved to be good binders. Some of the other ones involved, yeah, we had a cool one where there was a small molecule and we designed a protein that binds to it. That has a lot of interesting applications, for example, like Gabri mentioned, biosensing and things like that, which is pretty cool. We had a disordered protein, which I think we mentioned also. And yeah, I think those were some of the highlights. Gabriel [00:59:44]: So I would say that the way we structured some of those validations was, on the one end, validations across a whole set of different problems that the biologists we were working with came to us with. So we were trying, for example, in some of the experiments, to design peptides that would target the RACC, which is a target that is involved in metabolism. And we had a number of other applications where we were trying to design peptides or other modalities against other therapeutically relevant targets. We designed some proteins to bind small molecules. And then some of the other testing that we did was really trying to get a broader sense: how does the model work, especially when tested on generalization?
So one of the things that we found with the field was that a lot of the validation, especially outside of the validation on specific problems, was done on targets that have a lot of known interactions in the training data. And so it's always a bit hard to understand how much these models are really just regurgitating or imitating what they've seen in the training data, versus really being able to design new proteins. So one of the experiments that we did was to take nine targets from the PDB, filtering to things where there is no known interaction in the PDB. Basically, the model has never seen this particular protein, or a similar protein, bound to another protein, so there is no way that the model, from its training set, can just tweak something and imitate a particular known interaction. We took those nine proteins, we worked with Adaptyv, a CRO, and basically tested 15 mini proteins and 15 nanobodies against each one of them. And the very cool thing that we saw was that on two thirds of those targets, we were able, from those 15 designs, to get nanomolar binders. Nanomolar is, roughly speaking, just a measure of how strong the interaction is; roughly speaking, a nanomolar binder is approximately the kind of binding strength that you need for a therapeutic. Yeah. So maybe switching directions a bit: Boltz Lab was just announced this week, or was it last week? Yeah. This is, I guess, your first product, if you want to call it that. Can you talk about what Boltz Lab is, and what you hope people take away from it? Yeah. RJ [01:02:44]: You know, as we mentioned at the very beginning, the goal with the product has been to address what the models don't do on their own. And there are largely two categories there; I'll split it into three. The first one: it's one thing to predict, for example, a single interaction, a single structure. It's another to very effectively search a design space to produce something of value. What we found building this product is that there are a lot of steps involved, and there's certainly a need to accompany the user through them. One of those steps, for example, is the creation of the target itself: how do we make sure that the model has a good enough understanding of the target so we can design something against it? There are all sorts of tricks you can do to improve a particular structure prediction. So that's the first stage. And then there's the stage of designing and searching the space efficiently. For something like BoltzGen, for example, you design many things and then you rank them; for small molecules, the process is a little more complicated. We also need to make sure that the molecules are synthesizable, and the way we do that is that we have a generative model that learns to use appropriate building blocks, such that it can design within a space that we know is synthesizable. So there's this whole pipeline, really, of different models involved in being able to design a molecule.
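A hedged sketch of the generate-filter-rank loop that this kind of campaign implies. `design_model`, `refold`, `rmsd`, and `predict_affinity` are hypothetical callables standing in for a BoltzGen-style generator, an independent re-prediction of the designed sequence, a structural comparison, and an affinity predictor; none of this is the actual Boltz Lab pipeline:

```python
def design_campaign(target, design_model, refold, rmsd, predict_affinity,
                    n_designs=10_000, top_k=15, max_rmsd=2.0):
    scored = []
    for _ in range(n_designs):
        seq, designed_structure = design_model(target)     # sample a candidate binder
        refolded, confidence = refold(target, seq)         # re-predict it independently
        if rmsd(designed_structure, refolded) > max_rmsd:  # consistency filter
            continue
        # Predicted binding strength, e.g. a Kd in molar units. For context,
        # Kd = 1 nM corresponds to dG = RT * ln(Kd), roughly -12.3 kcal/mol at 298 K.
        affinity = predict_affinity(target, seq)
        scored.append((confidence, affinity, seq))
    # Rank the survivors (here: confidence first, then tighter predicted binding)
    # and send only a handful of designs to the wet lab.
    scored.sort(key=lambda s: (-s[0], s[1]))
    return scored[:top_k]
```

Whether confidence or a directly predicted affinity drives the final ordering is exactly the trade-off discussed earlier; the sketch keeps both so either can be used.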
And so that's been sort of the first thing; we call them agents. We have a protein design agent and we have a small molecule design agent, and that's really at the core of what powers the Boltz Lab platform. Brandon [01:04:22]: So these agents, are they like a language model wrapper, or are they just your models and you're just calling them agents? The latter, yeah. Because they sort of perform a function on your behalf. RJ [01:04:33]: They're more of a recipe, if you wish. And I think we use that term because of the complex pipelining and automation that goes into all this plumbing. So that's the first part of the product. The second part is the infrastructure. We need to be able to do this at very large scale for any one group that's doing a design campaign. Let's say you're designing a hundred thousand possible candidates to find the good one; that is a very large amount of compute. For small molecules it's on the order of a few seconds per design; for proteins it can be a bit longer. And so ideally you want to do that in parallel, otherwise it's going to take you weeks. So we've put a lot of effort into our ability to have a GPU fleet that allows any one user to do this kind of large parallel search. Brandon [01:05:23]: So you're amortizing the cost over your users. Exactly. Exactly. RJ [01:05:27]: And, to some degree, whether you use 10,000 GPUs for a minute or one GPU for God knows how long, it's the same cost, right? So you might as well parallelize if you can. A lot of work has gone into that, making it very robust, so that we can have a lot of people on the platform doing that at the same time. And the third one is the interface, and the interface comes in two shapes. One is in the form of an API, and that's really suited for companies that want to integrate these pipelines, these agents. RJ [01:06:01]: So we're already partnering with a few distributors that are going to integrate our API. And then the second part is the user interface, and we've put a lot of thought into that too. This is what I mentioned earlier, the idea of broadening the audience; that's what the user interface is about. We've built a lot of interesting features into it, for example for collaboration: when you have potentially multiple medicinal chemists going through the results and trying to pick out, okay, which molecules are we going to go and test in the lab, it's powerful for them to be able to, for example, each provide their own ranking and then do consensus building. So there are a lot of features around launching these large jobs, but also around collaborating on analyzing the results, that we try to solve with that part of the platform. So Boltz Lab is a combination of these three objectives into one cohesive platform. Who is this accessible to? Everyone. You do need to request access today; we're still ramping up the usage, but anyone can request access.
If you are an academic in particular, we provide a fair amount of free credits so you can play with the platform. If you are a startup or a biotech, you can also reach out, and we'll typically hop on a call just to understand what you're trying to do, and also provide a lot of free credits to get started. And of course, with larger companies, we can deploy this platform in a more secure environment; those are more like custom deals that we make with the partners. And that's sort of the ethos of Boltz, I think, this idea of serving everyone and not necessarily going after just the really large enterprises. That starts from the open source, but it's also a key design principle of the product itself. Gabriel [01:07:48]: One thing I was thinking about with regard to infrastructure: in the LLM space, the cost of a token has gone down by, I think, a factor of a thousand or so over the last three years, right? Yeah. And is it possible that you can essentially exploit economies of scale in infrastructure, so that it's cheaper to run these things through you than for any person to roll their own system? A hundred percent. Yeah. RJ [01:08:08]: I mean, we're already there. Running Boltz on our platform, especially at large scale, is considerably cheaper than what it would probably take anyone to stand up the open source model and run it themselves. And on top of the infrastructure, one of the things we've been working on is accelerating the models. Our small molecule screening pipeline is 10x faster on Boltz Lab than it is in the open source, and that's also part of building a product, building something that scales really well. We really wanted to get to a point where we could keep prices very low, in a way that makes it a no-brainer to use Boltz through our platform. Gabriel [01:08:52]: How do you think about validation of your agentic systems? Because, as you were saying earlier, AlphaFold-style models are really good at, let's say, monomeric proteins where you have co-evolution data. But now suddenly the whole point of this is to design something which doesn't have co-evolution data, something which is really novel. So now you're basically leaving the domain that you know you are good at. How do you validate that? RJ [01:09:22]: Yeah, there are obviously a ton of computational metrics that we rely on, but those only take you so far. You really have to go to the lab and test: okay, with method A and method B, how much better are we? How much better is my hit rate? How strong are my binders? It's not just about hit rate; it's also about how good the binders are. And there's really no way around that. I think we've really ramped up the amount of experimental validation that we do, so that we track progress in a way that's as scientifically sound
as possible, I think. Gabriel [01:10:00]: Yeah, and I think one thing that is unique about us, and maybe companies like us, is that because we're not working on just a couple of therapeutic pipelines, where our validation would be focused on those, when we do an experimental validation we try to test it across tens of targets. That way, on the one hand, we can get a much more statistically significant result, and it really allows us to make progress on the methodological side without being steered by overfitting on any one particular system. And of course we choose, you know, w
How is AI changing the way we store data? This week Technology Now dives into the topic of data storage in the world of AI. We explore intelligent storage, how data sovereignty is influencing how we store our data, and consider where the world of storage could be going in the future. Jim O'Dorisio, Senior Vice President and General Manager of HPE Storage, tells us more. This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week, hosts Michael Bird and Sam Jarrell look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations. About Jim: https://www.linkedin.com/in/odorisiojim/ Sources: https://www.statista.com/statistics/871513/worldwide-data-created/#statisticContainer https://www.studionetworksolutions.com/how-much-data-is-used-and-stored-in-the-world/#:~:text=expanding%20digital%20universe.-,Global%20Data%20Usage,over%20180%20zettabytes%20by%202025. 1 billion terabytes in 1 zettabyte. If a smartphone has 1TB storage, then you need 180 billion smartphones to make 180ZB of storage. 180 billion > 100 billion https://www.computerhistory.org/timeline/memory-storage/ https://www.computerhistory.org/revolution/memory-storage/8/308 https://www.computinghistory.org.uk/det/6129/ https://www.computerhistory.org/revolution/memory-storage/8/308/963
In episode 31 of Open Source Ready, Brian and John sit down with Matthaus Krzykowski, Thierry Jean, and Elvis Kahoro to explore how dlt and dltHub are changing the way developers build data pipelines. The conversation dives into DuckDB, LLM-driven workflows, and the growing shift toward developer-first data engineering. They also discuss open source adoption, AI orchestration, and what it means to be a “10x engineer” in 2026.
What happens when you drop an LLM on top of five ERPs and a decade of M&A? Aaron Sheehan and analyst Heather Hershey map the practical path: B2B use cases that work, risks that don't, and why chunk-by-chunk modernization beats "robot, take the wheel." Highlights 01:06 – Welcome back and introducing Heather Hershey 03:35 – Defining AI, LLMs, and RAG 09:30 – Why probabilistic AI makes ops teams nervous 11:58 – Is an LLM overkill compared to 'boring' machine learning and rule-based systems? 15:19 – The real blocker: fragmented data across ERPs and other systems 19:40 – The strangler pattern: modernize in chunks instead of ripping everything out 21:13 – Why commerce platforms become the orchestration layer for AI/NLP 24:37 – If you had $100K for AI: where to spend it 27:27 – Prisoner's dilemma: agentic shopping and the disintermediation trap 34:53 – Agentic commerce predictions for B2B 40:30 – Are people replacing Google with LLMs?
Identity fraud spiked 148% in 2025 as AI democratized identity fabrication. Financial institutions now face a fundamental question: Are you dealing with a real human? Heka Global is addressing this with web intelligence—analyzing digital footprints like connected applications rather than traditional signals. In this episode of BUILDERS, I sat down with Idan Bar Dov, Co-Founder & CEO of Heka Global, to explore how his company created a fourth layer in the anti-fraud stack and why legacy identity verification systems are becoming liabilities rather than assets. Topics Discussed: The emergence of "fraud as a service" and why consumer-facing attacks replaced traditional enterprise breaches How web intelligence works: validating identity through connected applications and digital footprints The anti-fraud tech stack: credit bureaus, biometrics, transaction analytics, and web intelligence as distinct layers Why heads of fraud expand budgets rather than replace vendors, and what causes solutions to get kicked out The partnership sales model: navigating vendor management complexity and red tape in financial institutions Why 10-person dinners and fraud simulations outperform traditional enterprise marketing How Barclays and Cornerback backing solved the chicken-and-egg problem for a data product Why specific fraud prevention messaging (account takeover, synthetic identities) beat investor credibility GTM Lessons For B2B Founders: Target ICP based on liability exposure, not just industry fit: Heka narrowed beyond "financial institutions" to lenders who bear immediate losses from fraud—companies like LendingPoint, Avant, and Upstart. These buyers feel the pain acutely versus institutions with reimbursement terms who can deflect liability. Idan's insight: "We need the client to feel the pain just as much as we see it. That means we want them to see the liability." Map your ICP not just by vertical or size, but by who internalizes the economic impact of the problem you solve. Frame your product as a new stack layer, not a competitive replacement: Heka positioned web intelligence as the fourth distinct layer after credit bureaus, biometrics, and transaction analytics. This became their second pitch deck slide, showing logos of each category. The result: buyers stopped comparing Heka to existing vendors and started evaluating complementary value. When entering mature markets, resist the urge to claim you're "better than X"—instead, define where you fit in the existing architecture and why that layer didn't exist before. Abandon spray-and-pray for sub-1,000 TAM markets: Heka tested Lemlist flows with targeted LLM personalization and saw zero pipeline from it. Idan's take: "When you're selling to maybe a thousand financial institutions, that's it. You can be super specific when you target them." For enterprise plays with small addressable markets, allocate zero budget to automated outbound. Focus entirely on warm introductions, relationship nurturing, and becoming known to every relevant buyer through content and community. Leverage investor networks to break data product cold-starts: Data products face a critical barrier—you need customer data to prove value, but need proven value to get customers. Heka solved this by bringing on Barclays and Cornerback as investors who vouched for the team's capability to "do magic and create a new layer." Their backing convinced risk-averse financial institutions to pilot. 
If building a product requiring customer data for training or validation, prioritize strategic investors who can credibly de-risk early adoption for target buyers. Build trust through teaching, not pitching: Heka hosts dinners and fraud incident simulations with ~10 heads of fraud per session. Critical detail: they never pitch Heka in these forums. Idan explained the approach focuses on "building a community around Heka and how people engage with your product and you being a thought leader while listening." In high-trust categories, educational forums where you facilitate peer learning without selling create stronger pipeline than direct pitching. Structure partnerships with active enablement and incentive alignment: Idan's key lesson: "Partnerships are not synonymous to distribution channels." Heka requires partner sales teams to join early customer conversations to learn the pitch, provides detailed API and output training, and ensures partners get extra compensation for selling non-core products. Without this, partners lack motivation to prioritize your solution. Structure partnerships as true collaborations requiring ongoing enablement investment, not passive referral channels. A/B test credibility signals versus technical specificity: Idan assumed messaging around Barclays backing would crush, while specific fraud prevention content (account takeover, synthetic identity detection) was an afterthought. The data showed 10x better response to technical specificity. The lesson: sophisticated buyers in technical categories respond to precise problem-solving over brand credibility. Test whether your audience values "who backs us" or "exactly what we do" before defaulting to investor logos and validation. // Sponsors: Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co // Don't Miss: New Podcast Series — How I Hire Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role. Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM
"Il ne faut pas que l'humain se dédouane de son rôle" Le D.E.V. de la semaine est Jocelyn N'takpe, Head of Engineering et Head of Architecture chez ManoMano. Avec l'explosion des outils IA dans le quotidien des devs, Jocelyn partage comment ManoMano intègre Claude Code, Cursor et JetBrains AI pour amplifier la productivité tout en gardant une culture de la revue humaine. Il alerte sur la nécessité de former les juniors à un usage réfléchi des LLM, pour ne pas casser la chaîne d'apprentissage collective. L'IA ne remplace pas mais transforme profondément le métier, poussant à réinventer la formation, la documentation et la transmission des bonnes pratiques. Une discussion sans tabou sur l'humain « in the loop » et le danger de déléguer sans contrôle.Chapitrages00:00:54 : Introduction à l'IA dans le développement00:01:56 : Présentation de Jocelyn00:02:45 : Mano Mano et son environnement tech00:04:40 : Adoption de GitHub Copilot00:06:11 : Multiplication des outils d'IA00:07:44 : L'impact de Cloud Code00:12:40 : Formation des agents IA00:12:53 : Standardisation et autonomie des équipes00:14:49 : Résistance au changement dans le développement00:16:58 : Adoption des nouveaux outils00:22:41 : L'importance de la sécurité00:42:15 : L'humain dans le processus de développement00:46:00 : Valeur ajoutée des développeurs face à l'IA00:52:56 : Impact sur les développeurs seniors et juniors00:58:54 : Les défis des développeurs juniors01:05:55 : L'apprentissage et l'utilisation des LLM01:08:06 : Conclusion sur l'avenir des développeurs et de l'IA Liens évoqués pendant l'émission Serena MCPMCP playwrite (pour tester sur l'UI)Le jour où l'Homme a battu la machine. Video Micode: La Fabrique à idiotsVideo DeepMind: The thinking game 🎙️ Soutenez le podcast If This Then Dev ! 🎙️ Chaque contribution aide à maintenir et améliorer nos épisodes. Cliquez ici pour nous soutenir sur Tipeee 🙏Archives | Site | Boutique | TikTok | Discord | Twitter | LinkedIn | Instagram | Youtube | Twitch | Job Board |Hébergé par Audiomeans. Visitez audiomeans.fr/politique-de-confidentialite pour plus d'informations.
Elizabeth Giles and Ben Smith of PRmoment introduce PRmoment's latest PR Masterclass: The intersection of PR and GEO, and large language model (LLM) optimisation, scheduled for February 25th in London, with virtual attendance available.
Industrial automation meets modern software as FlowFuse CEO ZJ van de Weg joins Phil Seboa and Ed Fuentes to unpack how open-source tools like Node-RED are connecting teams across IT and OT, breaking down barriers with visual, low-code programming. Hear firsthand insights about scaling automation systems, the challenges and rewards of open source in manufacturing, and clever approaches to secure data management and fleet updates.From home automation passion projects to strategic deployment in factories, this episode teases out the evolution of workflow integration, the importance of context, and practical ways organizations are pursuing smarter operations, plus a preview of what to expect at the ProveIt! Conference.-----About ZJ:ZJ is the CEO of FlowFuse. As an engineer-turned-CEO with a career built on open-source software, his mission is to empower engineers to build better, more efficient software solutions.Connect with ZJ: https://www.linkedin.com/in/zegerjan/-----About FlowFuse:FlowFuse is the Industrial Application Platform that connects any machine, moves data across any protocol, models it in any data platform, and operates industrial applications at scale—accelerated by an LLM-powered copilot that uses Model Context Protocol (MCP) to connect AI directly to your live industrial data.Learn More:https://flowfuse.com/https://flowfuse.com/platform/why-flowfuseLearn about FlowFuse at ProveIT Conference:https://www.youtube.com/watch?v=_Wvbhw9MHzsFollow FlowFuse on YouTube: https://www.youtube.com/@FlowFuseIncListen to our episode with FlowFuse's Nick O'Leary:https://youtu.be/mNCiiAHHFGU-----Connect with Phil on LinkedIn:https://www.linkedin.com/in/phil-seboa/Connect with Ed on LinkedIn:https://www.linkedin.com/in/ed-fuentes-2046121a/-----About Industry Sage Media:Industry Sage Media is your backstage pass to industry experts and the conversations that are shaping the future of the manufacturing industry.Learn more at: http://www.industrysagemedia.com
Our Global Head of Thematic and Sustainability Research Stephen Byrd and U.S. Thematic and Equity Strategist Michelle Weaver lay out Morgan Stanley's four key Research themes for 2026, and how those themes could unfold across markets for the rest of the year. Read more insights from Morgan Stanley.----- Transcript -----Stephen Byrd: Welcome to Thoughts on the Market. I'm Stephen Byrd, Global Head of Thematic and Sustainability Research. Michelle Weaver: And I'm Michelle Weaver, U.S. Thematic and Equity Strategist. Stephen Byrd: I was recently on the show to discuss Morgan Stanley's four key themes for 2026. Today, a look at how those themes could actually play out in the real world over the course of this year. It's Tuesday, February 10th at 10am in New York. So one of the biggest challenges for investors right now is separating signal from noise. Markets are reacting to headlines by the minute, but the real drivers of long-term returns tend to move much more slowly and much more powerfully. That's why thematic analysis has been such an important part of how we think about markets, particularly during periods of high volatility. For 2026, our framework is built around four key themes: AI and tech diffusion, the future of energy, the multipolar world, and societal shifts. In other words, three familiar themes and one meaningful evolution from last year. So Michelle, let's start at the top. When investors hear four key themes, what's different about the 2026 framework versus what we laid out in 2025? Michelle Weaver: Well, like you mentioned before, three of our four key themes are the same as last year, so we're gonna continue to see important market impacts from AI and tech diffusion, the future of energy and the multipolar world.But our fourth key theme, societal shifts, is really an expansion of our prior key theme longevity from last year. And while three of the four themes are the same broad categories, the way they impact the market is going to evolve. And these themes don't exist in isolation. They collide and they intersect with one another, having other important market implications. And we'll talk about many of those intersections today as they relate to multiple themes. Let's start with AI. How does the AI and tech diffusion theme specifically evolve since last year? Stephen Byrd: Yeah. You know, you mentioned earlier the evolution of all of our themes, and that was certainly the case with AI and tech diffusion. What I think we'll see in 2026 is a few major evolutions. So, one is a concept that we think of as two worlds of LLM progress and AI adoption; and let me walk through what I mean by that. On LLM progress, we do think that the handful of American LLM developers that have 10 times the compute they had last year are going to be training and producing models of unprecedented capability. We do not think the Chinese models will be able to keep up because they simply do not have the compute required for the training. And so we will see two worlds, very different approaches. That said, the Chinese models are quite excellent in terms of providing low cost solutions to a wide range of very practical business cases. So that's one case of two worlds when we think about the world of AI and tech diffusion. Another is that essentially we could see a really big gap between what you can do with an LLM and what the average user is actually doing with LLMs. 
Now there're going to be outliers where really leaders will be able to fully utilize LLMs and achieve fairly substantial and breathtaking results. But on average, that won't be the case. And so you'll see a bit of a lag there. That said, I do think when investors see what those frontier capabilities are, I think that does eventually lead to bullishness. So that's one dynamic. Another really big dynamic in 2026 is the mismatch between compute demand and compute supply. We dove very deeply into this in our note, and essentially where we come out is we believe, and our analysis supports this, that the demand for compute is going to be systematically much higher than the supply. That has all kinds of implications. Compute becomes a very precious resource, both at the company level, at the national level. So those are a couple of areas of evolution.So Michelle, let's shift over to the future of energy, which does feel very different today than it did a year ago. Can you kind of walk through what's changed? Michelle Weaver: Well, we absolutely still think that power is one of the key bottlenecks for data center growth. And our power modeling work shows around a 47 gigawatt shortfall before considering innovative time to power solutions. We get down to around a 10 to 20 percent shortfall in power needed in the U.S. though, even after considering those solutions. So power is still very much a bottleneck. But the power picture is becoming even more challenged for data centers, and that's largely because of a major political overhang that's emerging. Consumers across the U.S. have seen their electricity bills rise and are increasingly pointing to data centers as the culprit behind this. I really want to emphasize though this is a nuanced issue and data center power demand is driving consumer bills higher in some areas like the Mid-Atlantic. But this isn't the case nationwide and really depends on a number of factors like data center density in the region and whether it's a regulated or unregulated utility market.But public perception has really turned against data centers and local pushback is causing planned data centers to be canceled or delayed. And you're seeing similar opinions both across political affiliations and across different regional areas. So yes, in some areas data centers have impacted consumer power bills, but in other areas that hasn't been the case. But this is good news though, for companies that offer off-grid power generation, who are able to completely insulate consumers because they're not connecting to the grid.Stephen, the multipolar theme was already strong last year. Why has it become even more central for 2026? Stephen Byrd: Yeah, you're right. It was strong in 2025. In fact, of our 21 categories of stocks, the top three performing were really driven by multipolar world dynamics. Let me walk through three areas of focus that we have for multipolar world in 2026. Number one is an aggressive U.S. policy agenda, and that's going to show up in a number of ways. But examples here would be major efforts to reshore manufacturing, a real evolution in military spending towards a wide range of newer military technologies, reducing power prices and inflation more broadly. And also really focusing on trying to eliminate dependency on China for rare earths. So that's the first big area of focus. The second is around AI technology transfer. And this is quite closely linked to rare earths. So here's the dynamic as we think about U.S. and China. 
China has a commanding position in rare earths. The United States has a leading position in access to computational resources. Those two are going to interplay quite a bit in 2026. So, for example, we have a view that in 2026, when those American models, these LLMs achieve these step changes up in capabilities that China cannot match, we think that it's very likely that China may exert pressure in terms of rare earths access in order to force the transfer of technology, the best AI technology to China. So that's an example of this linkage between AI and rare earths. And the last dynamic, I'd say broadly, would be the politics of energy, which you described quite well. I think that's going to be a big multipolar world dynamic everywhere around the world. A focus on how much of an impact our data centers are having – whether it's water access, price of power, et cetera. What are the impacts to jobs? And that's going to show up in a variety of policy actions in 2026. Michelle Weaver: Mm-hmm. Stephen Byrd: So Michelle, the last of our four key themes is societal shifts, and you walked through that briefly before. This expands on our prior longevity work. What does this broader framing capture? Michelle Weaver: Societal shifts will include important topics from longevity still. So, things like preparing for an aging population and AI in healthcare. But the expansion really lets us look at the full age range of the demographic spectrum, and we can also now start thinking about what younger consumers want. It also allows us to look at other income-based demographics, like what's been going on with the K-economy, which has been an important theme around the world. And a really critical element, though, of this new theme is AI's impact on the labor market. Last year we did a big piece called The Future of Work. And in it we estimated that around 90 percent of jobs would be impacted by AI. I want to be clear: That's not to say that 90 percent of jobs would be lost by AI or automated by AI. But rather some task or some component of that job could be automated or augmented using AI. And so you might have, you know, the jobs of today looking very different five years from now. Workers are adaptable and we do expect many to reskill as part of this evolving job landscape. We've talked about the evolution of our key themes, but now let's focus a little on the results. So how have these themes actually performed from an investment standpoint? Stephen Byrd: Yeah. I was very happy with the results in 2025. When we looked across our categories of thematic stocks; we have 21 categories of thematic stocks within our four big themes. On average in 2025, our thematic stock categories outperformed MSCI World by 16 percent and the S&P 500 by 27 percent respectively. So, I was very happy with that result. When you look at the breakdown, it is interesting in terms of the categories that did really well. As I mentioned, the top three were driven by multipolar world. That is Critical Minerals, AI Semis, and Defense. But after that you can see a lot of AI in Energy show up. Power in AI was a big winner. Nuclear Power did extremely well. So, we did see other categories, but I did find it really interesting that multipolar world really did top the charts in 2025. Michelle Weaver: Mm-hmm. Stephen Byrd: Michelle, thanks for taking the time to talk. Michelle Weaver: Great speaking with you, Stephen. Stephen Byrd: And thanks for listening.
If you enjoy Thoughts on the Market, please leave us a review wherever you listen and share the podcast with a friend or colleague today.
ZeroDayRAT delivers full mobile compromise on Android and iOS. The UK warns infrastructure operators to act now as severe cyber threats mount. Russia moves to block Telegram. The FTC draws a line on data sales to foreign adversaries. Researchers unpack DeadVax, a stealthy new malware campaign, while an old-school Linux botnet resurfaces. BeyondTrust fixes a critical flaw. And in AI, are we moving too fast? One mild training prompt may be enough to knock down safety guardrails. Our guest is Omer Akgul, Researcher at RSA Conference, discussing his work on "The Case for LLM Consistency Metrics in Cybersecurity (and Beyond)." A pair of penned pentesters provoke a pricey payout. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. CyberWire Guest Today we are joined by Omer Akgul, PhD, Researcher at RSA Conference, discussing his work on "The Case for LLM Consistency Metrics in Cybersecurity (and Beyond)." Selected Reading New ‘ZeroDayRAT' Spyware Kit Enables Total Compromise of iOS, Android Devices (SecurityWeek) NCSC Issues Warning Over “Severe” Cyber-Attacks Targeting Critical National Infrastructure (Infosecurity Magazine) Russian Watchdog Starts Limiting Access to Telegram, RBC Reports (Bloomberg) FTC Reminds Data Brokers of Their Obligations to Comply with PADFAA (FTC) Dead#Vax: Analyzing Multi-Stage VHD Delivery and Self-Parsing Batch Scripts to Deploy In-Memory Shellcode (secureonix) New ‘SSHStalker' Linux Botnet Uses Old Techniques (SecurityWeek) BeyondTrust Patches Critical RCE Vulnerability (SecurityWeek) Critics warn America's 'move fast' AI strategy could cost it the global market (CyberScoop) Microsoft boffins figured out how to break LLM safety guardrails with one simple prompt (The Register) County pays $600,000 to pentesters it arrested for assessing courthouse security (Ars Technica) Share your feedback. What do you think about CyberWire Daily? Please take a few minutes to share your thoughts with us by completing our brief listener survey. Thank you for helping us continue to improve our show. Want to hear your company in the show? N2K CyberWire helps you reach the industry's most influential leaders and operators, while building visibility, authority, and connectivity across the cybersecurity community. Learn more at sponsor.thecyberwire.com. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
This week we talk about OpenAI, nudify apps, and CSAM. We also discuss Elon Musk, SpaceX, and humanistic technology. Recommended Book: Who's Afraid of Gender? by Judith Butler. Transcript: xAI is an American corporation that was founded in mid-2023 by Elon Musk, ostensibly in response to several things happening in the world and in the technology industry in particular. According to Musk, a “politically correct” artificial intelligence, especially a truly powerful, even generally intelligent one, which would be human or super-human-scale capable, would be dangerous, leading to systems like HAL 9000 from 2001: A Space Odyssey. He intended, in contrast, to create what he called a “maximally truth-seeking” AI that would be better at everything, including math and reasoning, than existing, competing models from the likes of OpenAI, Google, and Anthropic. The development of xAI was also seemingly a response to the direction of OpenAI in particular, as OpenAI was originally founded in 2015 as a non-profit by many of the people who now run OpenAI and competing models at competing companies, and current OpenAI CEO Sam Altman and Elon Musk were the co-chairs of the non-profit. Back then, Musk and Altman both said that their AI priorities revolved around the many safety issues associated with artificial general intelligence, including potentially existential ones. They wanted the development of AI to take a humanistic trajectory, and were keen to ensure that these systems aren't hoarded by just a few elites and don't make the continued development and existence of human civilization impossible. Many of those highfalutin ambitions seemed to either be backburnered or removed from OpenAI's guiding tenets wholesale when the company experienced surprising success from its first publicly deployed ChatGPT model back in late-2022. That was the moment that most people first experienced large-language model-based AI tools, and it completely upended the tech industry in relatively short order. OpenAI had already started the process of shifting from a vanilla non-profit into a capped for-profit company in 2019, which limited profits to 100-times any investments it received, partly in order to attract more talent that would otherwise be unlikely to leave their comparably cushy jobs at the likes of Google and Facebook for the compensation a non-profit would be able to offer. OpenAI began partnering with Microsoft that same year, 2019, and that seemed to set them up for the staggering growth they experienced post-ChatGPT release. Part of Musk's stated rationale for investing so heavily in xAI is that he provided tens of millions of dollars in seed funding to the still non-profit OpenAI between 2015 and 2018. He filed lawsuits against the company after its transition, and when it started to become successful, post-ChatGPT, especially between 2024 and 2026, and has demanded more than $100 billion in compensation for that early investment. He also attempted to take over OpenAI in early 2025, launching a hostile bid with other investors to nab OpenAI for just under $100 billion.
xAI, in other words, is meant to counter OpenAI and what it's become. All of which could be seen as a genuine desire to keep OpenAI functioning as a non-profit arbiter of AGI development, serving as a lab and thinktank that would develop the guardrails necessary to keep these increasingly powerful and ubiquitous tools under control and working for the benefit of humanity, rather than against it. What's happened since, within Musk's own companies, would seem to call that assertion into question, though. And that's what I'd like to talk about today: xAI, its chatbot Grok, and a tidal wave of abusive content it has created that's led to lawsuits and bans from government entities around the world. — In November of 2023, an LLM-based chatbot called Grok, which is comparable in many ways to OpenAI's LLM-based chatbot, ChatGPT, was launched by Musk's company xAI. Similar to ChatGPT, Grok is accessible by apps on Apple and Android devices, and can also be accessed on the web. Part of what makes it distinct, though, is that it's also built into X, the social network formerly called Twitter, which Musk purchased in late-2022. On X, Grok operates similar to a normal account, but one that other users can interact with, asking Grok about the legitimacy of things posted on the service, asking it normal chat-botty questions, and asking it to produce AI-generated media. Grok's specific stances and biases have varied quite a lot since it was released, and in many cases it has defaulted to the data- and fact-based leanings of other chatbots: it will generally tell you what the Mayo Clinic and other authorities say about vaccines and diseases, for instance, and will generally reference well-regarded news entities like the Associated Press when asked about international military conflicts. Musk's increasingly strong political stances, which have trended more and more far right over the past decade, have come to influence many of Grok's responses, however, at times causing it to go full Nazi, calling itself Mechahitler and saying all the horrible and offensive things you would expect a proud Nazi to say. At other times it has clearly been programmed to celebrate Elon Musk whenever possible, and in still others it has become immensely conspiratorial, anti-liberal, or hostile to some other group of people. The conflicting personality types of this bot seem to be the result of Musk wanting to have a maximally truth-seeking AI, but then not liking the data- and fact-based truths that were provided, as they often conflicted with his own opinions and biases.
He would then tell the programmers to force Grok to not care about antisemitism or skin color or whatever else, and it would overcorrect in the opposite direction, leading to several news cycles' worth of scandal. This changes week by week and sometimes day by day, but Grok often calls out Musk as being authoritarian, a conspiracy theorist, and even a pedophile, and that has placed the Grok chatbot in an unusual space amongst other, similar chatbots—sometimes serving as a useful check on misinformation and disinformation on the X social network, but sometimes becoming the most prominent producer of the same. Musk has also pushed for xAI to produce countervailing sources of truth from which Grok can draw seemingly supportive data, the most prominent of which is Grokipedia, which Musk intended to be a less-woke version of Wikipedia, and which, perhaps expectedly, means that it's a far-right rip-off of Wikipedia that copies most articles verbatim, but then changes anything Musk doesn't like, including anything that might support liberal political arguments, or anything that supports vaccines or trans people. In contrast, pseudoscience and scientific racism get a lot of positive coverage, as does the white genocide conspiracy theory, all of which are backed by either highly biased or completely made-up sources—in both cases sources that Wikipedia editors would not accept. Given all that, what's happened over the past few months maybe isn't that surprising. In late 2025 and early 2026, it was announced that Grok had some new image-related features, including the ability for users to request that it modify images. Among other issues, this new tool allowed users to instruct Grok to place people, which for this audience especially meant women and children, in bikinis and in sexually explicit positions and scenarios. Grok isn't the first LLM-based app to provide this sort of functionality: so-called “nudify” apps have existed for ages, even before AI tools made that functionality simpler and cheaper to apply, and there has been a wave of new entrants in this field since the dawn of the ChatGPT era a few years ago. Grok is easily the biggest and most public example of this type of app, however, and despite the torrent of criticism and concern that rolled in following this feature's deployment, Musk immediately came out in favor of said features, saying that his chatbot is edgier and better than others because it doesn't have all the woke, pearl-clutching safeguards of other chatbots. After several governments weighed in on the matter, however, Grok started responding to requests to do these sorts of image edits with a message saying: “Image generation and editing are currently limited to paying subscribers. You can subscribe to unlock these features.” Which means users could still access these tools, but they would have to pay $8 per month and become a premium user in order to do so.
That said, the AP was able to confirm that as of mid-January, free X users could still accomplish the same by using an Edit Image button that appears on all images posted to the site, instead of asking Grok directly. When asked about this issue by the press, xAI has auto-responded with the message “Legacy Media Lies.” The company has previously said it will remove illegal content and permanently suspend users who post and ask for such content, but these efforts have apparently not been fast or complete, and more governments have said they plan to take action on the matter themselves, since this tool became widespread. Again, this sort of nonconsensual image manipulation has been a problem for a long, long time, made easier by the availability of digital tools like Photoshop, but not uncommon even before the personal computer and digital graphics revolution. These tools have made the production of such images a lot simpler and faster, though, and that's put said tools in more hands, including those of teenagers, who have in worryingly large numbers taken to creating photorealistic naked and sexually explicit images of their mostly female classmates. Allowing all X users, or even just the subset that pays for the service, to do the same at the click of a button or by asking a chatbot to do it for them has increased the number manyfold, and allowed even more people to create explicit images of neighbors, celebrities, and yes, even children. An early estimate indicates that over the course of just nine days, Grok created and posted 4.4 million images, at least 41% of which, about 1.8 million, were sexualized images of women. Another estimate, using a broader analysis, says that 65% of those images, or just over 3 million, contained sexualized images of men, women, and children. CSAM is an acronym that means 'child sexual abuse material,' sometimes just called child porn, and the specific definition varies depending on where you are, but almost every legal jurisdiction frowns, or worse, on its production and distribution. Multiple governments have announced that they'll be taking legal action against the company since January of 2026, including Malaysia, Indonesia, the Philippines, Britain, France, India, Brazil, and the central governance of the European Union. The French investigation into xAI and Grok led to a raid on the company's local office as part of a preliminary investigation into allegations that the company is knowingly spreading child sexual abuse materials and other illegal deepfake content.
Musk has been summoned for questioning in that investigation. Some of the governments looking into xAI for these issues conditionally lifted their bans in late-January, but this issue has percolated back into the news with the release of 16 emails between Musk and the notorious sex trafficker and pedophile Jeffrey Epstein, with Musk seemingly angling for an invite to one of Epstein's island parties, which were often populated with underage girls who were offered as, let's say, companions for attendees. And this is all happening at a moment in which xAI, which already merged with social network X, is meant to be itself merged with another Musk-owned company, SpaceX, which is best known for its inexpensive rocket launches. Musk says the merger is intended to allow for the creation of space-based data centers that can be used to power AI systems like Grok, but many analysts are seeing this as a means of pumping more money into an expensive, unprofitable portion of his portfolio: SpaceX, which is profitable, is likely going to have an IPO this year and will probably have a valuation of more than a trillion dollars. By folding very unprofitable xAI into profitable SpaceX, these AI-related efforts could be funded well into the future, till a moment when, possibly, many of today's AI companies will have gone under, leaving just a few competitors for xAI's Grok and associated offerings. Show Notes: https://www.wired.com/story/deepfake-nudify-technology-is-getting-darker-and-more-dangerous/ https://www.theverge.com/ai-artificial-intelligence/867874/stripe-visa-mastercard-amex-csam-grok https://www.ft.com/content/f5ed0160-7098-4e63-88e5-8b3f70499b02 https://www.theguardian.com/global-development/2026/jan/29/millions-creating-deepfake-nudes-telegram-ai-digital-abuse https://apnews.com/article/france-x-investigation-seach-elon-musk-1116be84d84201011219086ecfd4e0bc https://apnews.com/article/grok-x-musk-ai-nudification-abuse-2021bbdb508d080d46e3ae7b8f297d36 https://apnews.com/article/grok-elon-musk-deepfake-x-social-media-2bfa06805b323b1d7e5ea7bb01c9da77 https://www.nytimes.com/2026/02/07/technology/elon-musk-spacex-xai.html https://www.bbc.com/news/articles/ce3ex92557jo https://techcrunch.com/2026/02/01/indonesia-conditionally-lifts-ban-on-grok/ https://www.bbc.com/news/articles/cgr58dlnne5o https://www.nytimes.com/2026/01/22/technology/grok-x-ai-elon-musk-deepfakes.html https://en.wikipedia.org/wiki/XAI_(company) https://en.wikipedia.org/wiki/OpenAI https://en.wikipedia.org/wiki/ChatGPT https://en.wikipedia.org/wiki/Grok_(chatbot) https://en.wikipedia.org/wiki/Grokipedia https://www.cnbc.com/2025/02/10/musk-and-investors-offering-97point4-billion-for-control-of-openai-wsj.html This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe
In this episode of Excess Returns, Kai Wu of Sparkline Capital returns to discuss his latest research on AI adoption, ROI, and what it all means for investors.Building on his prior work on the AI CapEx boom, Kai tackles the trillion dollar question at the center of today's market: Is AI generating real, measurable economic returns across the broader economy, or are we still in an infrastructure-driven bubble?Using a systematic analysis of earnings calls, patent data, and adoption trends, Kai lays out a framework for identifying which companies are truly benefiting from artificial intelligence and how investors can position portfolios accordingly.Find the Full Paper Here:https://etf.sparklinecapital.com/Main topics covered:Satya Nadella's AI bubble framework and why broad economic diffusion mattersThe AI adoption S-curve and where we are in the technology diffusion cycleA new AI ROI taxonomy based on earnings call analysis and quantified economic gainsReal-world AI productivity, revenue, and cost-saving examples across industriesInfrastructure vs early adopters vs laggards and how companies were categorizedAI-driven outperformance and excess returns across different adopter groupsValuation dispersion between AI infrastructure stocks and AI early adoptersThe risk of overcapacity and lessons from railroads and the dot-com telecom boomCompetition among large language models and the durability of AI moatsS&P 500 exposure to AI infrastructure and hidden concentration riskThe case for AI early adopters as a middle ground between growth and valueIntangible value investing and the concept of AI yieldTimestamps:00:00:00 The trillion dollar question and what “real ROI” means00:03:19 Nadella's bubble framework: diffusion vs a narrow CapEx trade00:06:08 The classic tech diffusion S-curve and where AI is on it00:32:25 Why infrastructure is being rewarded even if the ROI story is different00:33:04 The key chart: adoption vs valuation shows “basically no relationship”00:38:00 Why early adopters and laggards should separate00:38:26 The “25% ROI” example and how it could show up later in fundamentals00:39:03 Railroads and fiber: builders go bankrupt, users capture the value00:39:45 Telecom index fell 95% and never recovered (dot-com bust parallel)00:40:00 The application layer captures profits; infrastructure becomes a utility00:41:00 The punchline: transformative tech, but builders can still be bad investments00:42:57 Overcapacity question: where are we on the line?00:43:17 The buildout: another $5 trillion of data centers “or whatever the number is”00:44:00 If there's no ROI, companies cancel orders00:45:01 Moat and LLM competition discussion begins00:49:00 The big one: adding infrastructure names gets the S&P to 46% AI infrastructure00:50:00 “Alternative indices” swing you to laggard risk00:51:00 The “false choice” and the “middle ground” framing (early adopters)
A year ago, U.S. President Donald Trump shut down public access to the Development Experience Clearinghouse, a $30 billion database holding 60 years' worth of institutional knowledge from more than 150,000 projects administered by the U.S. Agency for International Development. But before the closure, former USAID employee and artificial intelligence scientist Lindsey Moore used a large language model (LLM) to read all of the information in this database — rescuing critical lessons on development, environmental, economic and social projects in countries across the globe, all documented by USAID. The data also included information on conservation projects. Many of the challenges presented in these projects repeated over the years, but the lessons were rarely retained — something Moore's tech startup, DevelopMetrics, hopes to change. Moore joins this week's podcast to explain what those lessons are and what conservationists can learn from them. DevelopMetrics deploys an AI model capable of understanding not just the information from USAID's database, but also other public databases that could be at risk of deletion or being lost to time. Please take a minute to let us know what you think of our podcast, here. Mike DiGirolamo is the host & producer for the Mongabay Newscast based in Sydney. Find him on LinkedIn and Bluesky. Banner image: Mangroves on Vanua Levu Island, Fiji. Image by Rhett A. Butler/Mongabay. —— Timecodes (00:00) Lindsey's background with USAID (04:14) How to analyze 60 years of data (11:07) Uncovering hidden lessons (14:24) 1. Bring delivery closer to households (16:43) 2. Practice changes practice (19:19) 3. Design for scale, not for pilots (24:08) 4. Co-creation beats consultation (26:30) 5. Strengthen the middle layer (30:56) Who DevelopMetrics works with and how they are funded (32:58) Energy and water costs of LLMs
K-12 students, educators share how they are using artificial intelligence in the classroom No matter where you went to school, there is one task that has united most of us in misery — homework. Perhaps you remember staying up all night writing an essay on a Shakespeare play that you, of course, waited until the last minute to read. Or maybe you remember those grueling hours at the dinner table, as a parent helped with impossible algebra homework. Today's students have a different tool to help with their assignments — artificial intelligence. A report from the digital rights nonprofit Center for Democracy and Technology last year found 85% of teachers and 86% of students reported using AI in the classroom. Students can now use AI to summarize complicated texts and even write entire essays for them. They can type a math problem or chemistry equation into ChatGPT or Gemini and have it solved instantly, no hour-long sessions with a parent needed. Educators and parents have mixed feelings about the prevalence of artificial intelligence in the classroom. It's even prompted the Ohio Department of Education and Workforce to require all public K-12 school districts to adopt official AI policies by July. But what do high schoolers think about AI? How is it shaping their learning and outlook on their futures? Reporter Anna Huntsman has been traveling to schools in Northeast Ohio to find the answer to that question, and she'll share what she's learned on Tuesday's "Sound of Ideas," as part of our week-long series, Ideastream Explores: Artificial Intelligence. Artificial intelligence pioneer discusses the origins of machine learning Later in the hour, early research in artificial intelligence used the human brain as a model for building machines that could understand language — today's large language models, or LLMs. Computational neuroscientist Dr. Terrence Sejnowski at the Salk Institute in San Diego is a pioneer in the field. He spoke with Ideastream's Jeff St. Clair about the links between humans and artificial intelligence and the push to build even smarter machines. Guests: - Lauren Angelone, Ph.D., Associate Professor of Science Education and Instructional Technology, Xavier University - Joseph South, Ph.D., Chief Innovation Officer, ISTE + ASCD - Adam Lockwood, Ph.D., Associate Professor of School Psychology, Kent State University - Terrence Sejnowski, Ph.D., Author, "ChatGPT and the Future of AI" - Anna Huntsman, Reporter, Ideastream Public Media - Jeff St. Clair, Midday Host, Ideastream Public Media
Ereli Eran is the Founding Engineer at 7AI, where he's focused on building and scaling the company's agentic AI-driven cybersecurity platform — developing autonomous AI agents that triage alerts, investigate threats, enrich security data, and enable end-to-end automated security operations so human teams can focus on higher-value strategic work.Software Engineering in the Age of Coding Agents: Testing, Evals, and Shipping Safely at Scale // MLOps Podcast #361 with Ereli Eran, Founding Engineer at 7AIJoin the Community: https://go.mlops.community/YTJoinInGet the newsletter: https://go.mlops.community/YTNewsletterMLOps GPU Guide: https://go.mlops.community/gpuguide// AbstractA conversation on how AI coding agents are changing the way we build and operate production systems. We explore the practical boundaries between agentic and deterministic code, strategies for shared responsibility across models, engineering teams, and customers, and how to evaluate agent performance at scale. Topics include production quality gates, safety and cost tradeoffs, managing long-tail failures, and deployment patterns that let you ship agents with confidence.// BioEreli Eran is a founding engineer at 7AI, where he builds agentic AI systems for security operations and the production infrastructure that powers them. His work spans the full stack - from designing experiment frameworks for LLM-based alert investigation to architecting secure multi-tenant systems with proper authentication boundaries. Previously, he worked in data science and software engineering roles at Stripe, VMware Carbon Black, and was an early employee of Ravelin and Normalyze.// Related LinksWebsite: https://7ai.com/Coding Agents Conference: https://luma.com/codingagents~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExploreJoin our Slack community [https://go.mlops.community/slack]Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)] Sign up for the next meetup: [https://go.mlops.community/register]MLOps Swag/Merch: [https://shop.mlops.community/]Connect with Demetrios on LinkedIn: /dpbrinkmConnect with Ereli on LinkedIn: /erelieran/Timestamps:[00:00] Language Sensitivity in Reasoning[00:25] Value of Claude Code[01:54] AI in Security Workflows[06:21] Agentic Systems Failures[12:50] Progressive Disclosure in Voice Agents[16:39] LLM vs Classic ML[19:44] Hybrid Approach to Fraud[25:58] Debugging with User Feedback[33:52] Prompts as Code[42:07] LLM Security Workflow[45:10] Shared Memory in Security[49:11] Common Agent Failure Modes[53:34] Wrap up
This interview was recorded for GOTO Unscripted.https://gotopia.techCheck out more here:https://gotopia.tech/articles/419Roman Zhukov - Principal Architect - Security Communities Lead at Red HatDamian Brady - Staff Developer Advocate at GitHubRESOURCESRomanhttps://github.com/rozhukovhttps://www.linkedin.com/in/rozhukovDamianhttps://bsky.app/profile/damovisa.mehttps://hachyderm.io/@damovisahttps://x.com/damovisahttps://github.com/Damovisahttps://www.linkedin.com/in/damianbradyhttps://damianbrady.com.auLinkshttps://www.redhat.com/en/blog/ai-assisted-development-and-open-source-navigating-legal-issuesDESCRIPTIONRoman Zhukov (Red Hat) and Damian Brady (GitHub) explore the evolving landscape of AI-assisted software development. They discuss how AI tools are transforming developer workflows, making developers about 20% faster on simple tasks while being 19% slower on complex ones.The conversation covers critical topics including code quality and trust, security concerns with AI-generated code, the importance of education and best practices, and how developer roles are shifting from syntax experts to system architects. Both experts emphasize that AI tools serve as amplifiers rather than replacements, with humans remaining essential in the loop for quality, security, and licensing compliance.RECOMMENDED BOOKSPhil Winder • Reinforcement Learning • https://amzn.to/3t1S1VZAlex Castrounis • AI for People and Business • https://amzn.to/3NYKKToHolden Karau, Trevor Grant, Boris Lublinsky, Richard Liu & Ilan Filonenko • Kubeflow for Machine Learning • https://amzn.to/3JVngcxKelleher & Tierney • Data Science (The MIT Press Essential Knowledge series) • https://amzn.to/3AQmIRgLakshmanan, Robinson & Munn • Machine Learning Design Patterns • https://amzn.to/2ZD7t0xLakshmanan, Görner & Gillard • Practical Machine Learning for Computer Vision • https://amzn.to/3m9HNjPBlueskyTwitterInstagramLinkedInFacebookCHANNEL MEMBERSHIP BONUSJoin this channel to get early access to videos & other perks:https://www.youtube.com/channel/UCs_tLP3AiwYKwdUHpltJPuA/joinLooking for a unique learning experience?Attend the next GOTO conference near you! Get your ticket: gotopia.techSUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted daily!
Do you know what a "providencia de apremio" (an enforcement order) is? Or an "autoliquidación extemporánea" (a late self-assessed tax return)? These are terms we can find in the notifications and communications that public administrations send to citizens. We reflect on communication and plain language in public institutions with María Moya, CEO and co-founder of Prodigioso Volcán, and Susana Viñuales, professor of Administrative Law at UNED. With them we talk about the lack of clear communication in administrative procedures and about the role technology can play in translating all that gibberish. Can generative AI, and language models in particular, bring us closer to the long-awaited digital "one-stop shop"? Subscribe to our newsletter and you will receive the new episodes first each month, along with related extra content: https://bit.ly/3vtBujk See omnystudio.com/listener for privacy information.
Ivanti zero-days trigger emergency warnings around the globe. Singapore blames a China-linked spy crew for hitting all four major telcos. DHS opens a privacy probe into ICE surveillance. Researchers flag a zero-click RCE lurking in LLM workflows. Ransomware knocks local government payment systems offline in Florida and Texas. Chrome extensions get nosy with your URLs. BeyondTrust scrambles to patch a critical RCE. A Polish data breach suspect is caught eight years later. It's the Monday Business Breakdown. Ben Yelin gives us the 101 on subpoenas. And federal prosecutors say two Connecticut men bet big on fraud, and lost. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. CyberWire Guest Our guest is Ben Yelin, Program Director for Public Policy & External Affairs at the University of Maryland Center for Cyber Health and Hazard Strategies, talking about weaponized administrative subpoenas. Selected Reading EU, Dutch government announce hacks following Ivanti zero-days (The Record) Singapore says China-linked hackers targeted telecom providers in major spying campaign (The Record) Inspector General Investigating Whether ICE's Surveillance Tech Breaks the Law (404 Media) Critical 0-Click RCE Vulnerability in Claude Desktop Extensions Exposes 10,000+ Users to Remote Attacks (Cyber Security News) Payment tech provider for Texas, Florida governments working with FBI to resolve ransomware attack (The Record) Chrome extensions can use unfixable time-channel to leak tab URLs (CyberInsider) BeyondTrust warns of critical RCE flaw in remote support software (Bleeping Computer) Hacker Poland's largest data leaks arrested (TVP World) LevelBlue will acquire MDR provider Alert Logic from Fortra. (N2K Pro Business Briefing) Men charged in FanDuel scheme fueled by thousands of stolen identities (Bleeping Computer) Share your feedback. What do you think about CyberWire Daily? Please take a few minutes to share your thoughts with us by completing our brief listener survey. Thank you for helping us continue to improve our show. Want to hear your company in the show? N2K CyberWire helps you reach the industry's most influential leaders and operators, while building visibility, authority, and connectivity across the cybersecurity community. Learn more at sponsor.thecyberwire.com. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
Microsoft Patches Four Azure Vulnerabilities (three critical) https://msrc.microsoft.com/update-guide/vulnerability Evaluating and mitigating the growing risk of LLM-discovered 0-days https://red.anthropic.com/2026/zero-days/ Gitlab AI Gateway Vulnerability CVE-2026-1868 https://about.gitlab.com/releases/2026/02/06/patch-release-gitlab-ai-gateway-18-8-1-released/
Mitchell Hashimoto's trust management system for open source, Nicholas Carlini has a team of Claudes build a C compiler, Stephan Schwab recounts the history of attempted developer replacement, NanClaw is an alternative to OpenClaw, and Sophie Koonin can't wrap her head around so many people going so hard on LLM-generated code.
Topics covered in this episode: Command Book App uvx.sh: Install Python tools without uv or Python Ending 15 years of subprocess polling monty: A minimal, secure Python interpreter written in Rust for use by AI Extras Joke Watch on YouTube About the show Sponsored by us! Support our work through: Our courses at Talk Python Training The Complete pytest Course Patreon Supporters Connect with the hosts Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky) Brian: @brianokken@fosstodon.org / @brianokken.bsky.social Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky) Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to our friends of the show list, we'll never share it. Michael #1: Command Book App New app from Michael Command Book App is a native macOS app for developers, data scientists, AI enthusiasts and more. This is a tool I've been using lately to help build Talk Python, Python Bytes, Talk Python Training, and many more applications. It's a bit like advanced terminal commands or complex shell aliases, but hosted outside of your terminal. This leaves the terminal there for interactive commands, exploration, short actions. Command Book manages commands like "tail this log while I'm developing the app", "Run the dev web server with true auto-reload", and even "Run MongoDB in Docker with exactly the settings I need". I'd love it if you gave it a look, shared it with your team, and sent me feedback. Has a free version and a paid version. Built with Swift and SwiftUI. Check it out at https://commandbookapp.com Brian #2: uvx.sh: Install Python tools without uv or Python Tim Hopper Michael #3: Ending 15 years of subprocess polling by Giampaolo Rodola The standard library's subprocess module has relied on a busy-loop polling approach since the timeout parameter was added to Popen.wait() in Python 3.3, around 15 years ago. The problem with busy-polling: CPU wake-ups: even with exponential backoff (starting at 0.1ms, capping at 40ms), the system constantly wakes up to check process status, wasting CPU cycles and draining batteries. Latency: there's always a gap between when a process actually terminates and when you detect it. Scalability: monitoring many processes simultaneously magnifies all of the above. + L1/L2 CPU cache invalidations It's interesting to note that waiting via poll() (or kqueue()) puts the process into the exact same sleeping state as a plain time.sleep() call. From the kernel's perspective, both are interruptible sleeps. Here is the merged PR for this change. (A minimal sketch contrasting the old busy-poll wait with an event-driven wait appears after these show notes.) Brian #4: monty: A minimal, secure Python interpreter written in Rust for use by AI Samuel Colvin and others at Pydantic Still experimental "Monty avoids the cost, latency, complexity and general faff of using a full container based sandbox for running LLM generated code." "Instead, it lets you safely run Python code written by an LLM embedded in your agent, with startup times measured in single digit microseconds not hundreds of milliseconds." Extras Brian: Expertise is the art of ignoring - Kevin Renskers You don't need to master the language. You need to master your slice. Learning everything up front is wasted effort. Experience changes what you pay attention to.
I hate fish - Rands (Michael Lopp) Really about productivity systems And a nice process for dealing with email Michael: Talk Python now has a CLI New essay: It's not vibe coding - Agentic engineering GitHub is having a day Python 3.14.3 and 3.13.12 are available Wall Street just lost $285 billion because of 13 markdown files Joke: Silence, current side project!
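For readers curious what the subprocess change discussed above means in practice, here is a minimal, illustrative sketch, not the actual CPython patch, contrasting the old busy-poll wait with an event-driven wait on a pidfd. It assumes Linux and Python 3.9+ for os.pidfd_open, and the sleep commands are just stand-ins for a real child process.

```python
# Hypothetical illustration only: how a busy-poll wait differs from an
# event-driven wait. Requires Linux and Python 3.9+ (os.pidfd_open).
import os
import selectors
import subprocess
import time

# 1) Busy polling: wake up repeatedly to ask "has the child exited yet?"
proc = subprocess.Popen(["sleep", "2"])
delay = 0.0001
while proc.poll() is None:          # poll() returns None while still running
    time.sleep(delay)
    delay = min(delay * 2, 0.04)    # exponential backoff capped at 40 ms

# 2) Event-driven wait: the kernel wakes us exactly once, when the child exits.
proc = subprocess.Popen(["sleep", "2"])
pidfd = os.pidfd_open(proc.pid)     # descriptor that becomes readable on exit
sel = selectors.DefaultSelector()
sel.register(pidfd, selectors.EVENT_READ)
sel.select(timeout=None)            # sleeps in the kernel until the process ends
sel.close()
os.close(pidfd)
proc.wait()                         # reap the child and collect its exit status
```

The second pattern is what poll()/kqueue()-style waiting enables: the waiting process sits in the same interruptible sleep as a plain time.sleep() call, but it only wakes when there is actually something to report.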
Dr. Gabriela Zanfir-Fortuna is a globally recognized data protection law expert, with 15 years of experience in the field split between Europe and the U.S., spanning academia, public service, consulting and policy. She currently is Vice President for Global Privacy at the Future of Privacy Forum, a global non-profit headquartered in Washington DC, coordinating FPF's offices and partners in Brussels, Tel Aviv, Singapore, Nairobi, and New Delhi, and leading the work on global privacy and data protection developments related to new technologies, including AI. She is also a founding Advisory Board Member of Women in AI Governance, and an affiliated researcher to the LSTS Center of Vrije Universiteit Brussel.Dr. Zanfir-Fortuna worked for the European Data Protection Supervisor and is a member of the Reference Panel of the Global Privacy Assembly – the international organization reuniting data protection authorities around the world, as well as a member of the T20 engagement group of the G20 under Brazil's Presidency in 2024.She was elected to be part of the Executive Committee of ACM's Fairness, Accountability and Transparency (FaccT) Conference (2021-2022). Her scholarship on the GDPR is referenced by the Court of Justice of the EU, and in 2023 she won the Stefano Rodota Award of the Council of Europe for the paper “The Thin Red Line: Refocusing Data Protection Law on Automated-Decision-Making“, alongside her co-authors. Dr. Zanfir-Fortuna holds a PhD in Law with a thesis on the rights of the data subject under EU Data Protection Law, and an LLM in Human Rights (University of Craiova).With our guest, here for a third time, we have gone through the logic of the Digital Omnibus package aiming to reform a cluster of important EU regulations, the “birth defects” of the AI Act, the importance of South Korea in the global data protection panorama, and the potential consequences of the recent CJEU case, Russmedia.References:* Gabriela Zanfir-Fortuna at the Future of Privacy Forum* Gabriela Zanfir-Fortuna on LinkedIn* Gabriela Zanfir-Fortuna: A world tour of data protection laws (Masters of Privacy, April 2021)* Data Protection vs. Privacy and Data Privacy: a January 28th conundrum (with Gabriela Zanfir-Fortuna, Masters of Privacy - 2025)* X v Russmedia Digital SRL (CJEU, December 2, 2025). This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.mastersofprivacy.com/subscribe
Tal Muskal is a veteran software developer who taught himself everything after leaving formal schooling at age 15, and who has led several companies, such as Semantinet and others. Over his rich career he has become known as an international expert on a wide range of topics, including information security, cryptocurrencies, and, in recent years, artificial intelligence. About three years ago he came on Geekonomy to talk about the future, which is the present we are living in today, with code written by LLM models. This time he joined us to talk about his new open-source project, which tries to take agents to the next level with broader autonomy than ever before. Episode sponsors: 2sit, where you'll get a 25% discount on the first chair you buy if you say you came through Geekonomy; and R&D Mortgages (search Google for "R&D mortgages" and Geekonomy). Links from the episode: Tal's X account The Babysitter project website Join the project's new WhatsApp group Babysitter adds RTL to VSCode Babysitter adds subagents to Codex A Git-like service that Babysitter built
Join Scott as he recaps his LLM agentful week and how it's changing how he works. He'll also try to answer any questions folks have. Thanks to dcd for time codes! 0:00 Getting started 2:06 Hello Everyone - intro to Deep Dive and microprocessor 4:54 NXP Freedom RW 612 5:16 Dialog chip with bluetooth 6:10 LLMs and Claude code - agent / harness orchestration 7:30 Pi Agent 8:45 LLMs making it cheap to generate code 9:58 LLMs enable Working on 3 things at once - (workflow) 11:40 Window Tiling tool 13:40 Zephyr native simulator 14:30 pi.dev and LLM security/handling untrusted input/dealing with prompt injection 17:36 more on the native zephyr simulator for LLM - allowing test generation 18:40 Using sublime merge more than direct text editing 20:37 running zephyr native simulator - can we get to we workflow 21:43 educational OpenClaw author (Peter Steinberger) - youtube video mentioned 30:00 Jumperless breadboard 31:52 "Breadboard" app ideas 37:14 Hardware in the loop for testing Octo proto board 37:52 USB IP for testing 39:33 Adafruit Parts Library for fritzing 47:00 Eagle to Fritzing process 48:14 Restarting breadboard pi process since it wasn't interruptible 53:56 using fritzing.org for research 59:12 avoiding the LLM copyright issues - by using open source 1:00:07 software patents 1:02:20 Try claude code $20 or $200 per month 1:03:21 Codex usage 90% - saving typing and research time! 1:06:37 Mitchell Hashimoto - "my ai adoption journey" https://mitchellh.com/writing/my-ai-adoption-journey 1:04:46 Wrap up - and tasks for next week - continue fritzing scaling Visit the Adafruit shop online - http://www.adafruit.com ----------------------------------------- LIVE CHAT IS HERE! http://adafru.it/discord Subscribe to Adafruit on YouTube: http://adafru.it/subscribe New tutorials on the Adafruit Learning System: http://learn.adafruit.com/ -----------------------------------------
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop explores the complex world of context and knowledge graphs with guest Youssef Tharwat, the founder of NoodlBox, who is building a ".git for context." Their conversation spans from the philosophical nature of context and its crucial role in AI development, to the technical challenges of creating deterministic tools for software development. Tharwat explains how his product creates portable, versionable knowledge graphs from code repositories, leveraging the semantic relationships already present in programming languages to provide agents with better contextual understanding. They discuss the limitations of large context windows, the advantages of Rust for AI-assisted development, the recent Claude/Bun acquisition, and the broader geopolitical implications of the AI race between big tech companies and open-source alternatives. The conversation also touches on the sustainability of current AI business models and the potential for more efficient, locally-run solutions to challenge the dominance of compute-heavy approaches. For more information about NoodlBox and to join the beta, visit NoodlBox.io. Timestamps 00:00 Stewart introduces Youssef Tharwat, founder of NoodlBox, building context management tools for programming 05:00 Context as relevant information for reasoning; importance when hitting coding barriers 10:00 Knowledge graphs enable semantic traversal through meaning vs keywords/files 15:00 Deterministic vs probabilistic systems; why critical applications need 100% reliability 20:00 CLI tool makes knowledge graphs portable, versionable artifacts with code repos 25:00 Compiler front-ends, syntax trees, and Rust's superior feedback for AI-assisted coding 30:00 Claude's Bun acquisition signals potential shift toward runtime compilation and graph-based context 35:00 Open source vs proprietary models; user frustration with rate limits and subscription tactics 40:00 Singularity path vs distributed sovereignty of developers building alternative architectures 45:00 Global economics and why brute force compute isn't sustainable worldwide 50:00 Corporate inefficiencies vs independent engineering; changing workplace dynamics 55:00 February open beta for NoodlBox.io; vision for new development tool standards Key Insights 1. Context is semantic information that enables proper reasoning, and traditional LLM approaches miss the mark. Youssef defines context as the information you need to reason correctly about something. He argues that larger context windows don't scale because quality degrades with more input, similar to human cognitive limitations. This insight challenges the Silicon Valley approach of throwing more compute at the problem and suggests that semantic separation of information is more optimal than brute force methods. 2. Code naturally contains semantic boundaries that can be modeled into knowledge graphs without LLM intervention. Unlike other domains where knowledge graphs require complex labeling, code already has inherent relationships like function calls, imports, and dependencies. Youssef leverages these existing semantic structures to automatically build knowledge graphs, making his approach deterministic rather than probabilistic. This provides the reliability that software development has historically required. (A small illustrative sketch of this idea appears after these key insights.) 3. Knowledge graphs can be made portable, versionable, and shareable as artifacts alongside code repositories. Youssef's vision treats context as a first-class citizen in version control, similar to how Git manages code.
Each commit gets a knowledge graph snapshot, allowing developers to see conceptual changes over time and share semantic understanding with collaborators. This transforms context from an ephemeral concept into a concrete, manageable asset. 4. The dependency problem in modern development can be solved through pre-indexed knowledge graphs of popular packages. Rather than agents struggling with outdated API documentation, Youssef pre-indexes popular npm packages into knowledge graphs that automatically integrate with developers' projects. This federated approach ensures agents understand exact APIs and current versions, eliminating common frustrations with deprecated methods and unclear documentation. 5. Rust provides superior feedback loops for AI-assisted programming due to its explicit compiler constraints. Youssef rebuilt his tool multiple times in different languages, ultimately settling on Rust because its picky compiler provides constant feedback to LLMs about subtle issues. This creates a natural quality control mechanism that helps AI generate more reliable code, making Rust an ideal candidate for AI-assisted development workflows. 6. The current AI landscape faces a fundamental tension between expensive centralized models and the need for global accessibility. The conversation reveals growing frustration with rate limiting and subscription costs from major providers like Claude and Google. Youssef believes something must fundamentally change because $200-300 monthly plans only serve a fraction of the world's developers, creating pressure for more efficient architectures and open alternatives. 7. Deterministic tooling built on semantic understanding may provide a competitive advantage against probabilistic AI monopolies. While big tech companies pursue brute force scaling with massive data centers, Youssef's approach suggests that clever architecture using existing semantic structures could level the playing field. This represents a broader philosophical divide between the "singularity" path of infinite compute and the "disagreeably autistic engineer" path of elegant solutions that work locally and affordably.
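As a rough illustration of insight 2 above (that code already carries the relationships a knowledge graph needs), here is a minimal, hypothetical sketch, not NoodlBox's actual implementation, that extracts import and call edges from Python source using only the standard library's ast module; the tiny example module is invented for demonstration.

```python
# Hypothetical illustration only: derive knowledge-graph edges from code
# structure deterministically, with no LLM involved.
import ast

EXAMPLE_MODULE = '''
import math

def area(r):
    return math.pi * r ** 2

def report(r):
    print(area(r))
'''

tree = ast.parse(EXAMPLE_MODULE)
edges = []  # (source, relation, target) triples

for node in ast.walk(tree):
    if isinstance(node, ast.Import):
        for alias in node.names:
            edges.append(("module", "imports", alias.name))
    elif isinstance(node, ast.FunctionDef):
        edges.append(("module", "defines", node.name))
        # Record which plain names each function calls.
        for inner in ast.walk(node):
            if isinstance(inner, ast.Call) and isinstance(inner.func, ast.Name):
                edges.append((node.name, "calls", inner.func.id))

for edge in edges:
    print(edge)
# Edges include ('module', 'imports', 'math'), ('module', 'defines', 'area'),
# ('module', 'defines', 'report'), and ('report', 'calls', 'area').
```

A real tool would also need to resolve attribute calls, cross-file imports, and package versions, but the point stands: these edges come from the language's own structure, so the graph can be rebuilt deterministically for every commit.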
Our 233rd episode with a summary and discussion of last week's big AI news! Recorded on 01/30/2026. Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai. Read our text newsletter and comment on the podcast at https://lastweekin.ai/ In this episode: Google introduces Gemini AI agent in Chrome for advanced browser functionality, including auto-browsing for pro and ultra subscribers. OpenAI releases ChatGPT Translator and Prism, expanding its applications beyond core business to language translation and scientific research assistance. Significant funding rounds and valuations achieved by startups Recursive and Neurophos, focusing on specialized AI chips and optical processors respectively. Political and social issues, including violence in Minnesota, prompt tech leaders in AI like Amodei from Anthropic and Jeff Dean from Google to express concerns about the current administration's actions. Timestamps:(00:00:10) Intro / BanterTools & Apps(00:04:09) Google adds Gemini AI-powered ‘auto browse' to Chrome | The Verge(00:07:11) Users flock to open source Moltbot for always-on AI, despite major risks - Ars Technica(00:13:25) Google Brings Genie 3 'World Building' Experiment to AI Ultra Subscribers - CNET(00:16:17) OpenAI's ChatGPT translator challenges Google Translate | The Verge(00:18:27) OpenAI launches Prism, a new AI workspace for scientists | TechCrunchApplications & Business(00:19:49) Exclusive: China gives nod to ByteDance, Alibaba and Tencent to buy Nvidia's H200 chips - sources | Reuters(00:22:55) AI chip startup Ricursive hits $4B valuation 2 months after launch(00:24:38) AI Startup Recursive in Funding Talks at $4 Billion Valuation - Bloomberg(00:27:30) Flapping Airplanes and the promise of research-driven AI | TechCrunch(00:31:54) From invisibility cloaks to AI chips: Neurophos raises $110M to build tiny optical processors for inferencing | TechCrunchProjects & Open Source(00:35:34) Qwen3-Max-Thinking debuts with focus on hard math, code(00:38:26) China's Moonshot releases a new open-source model Kimi K2.5 and a coding agent | TechCrunch(00:46:00) Ai2 launches family of open-source AI developer agents that adapt to any codebase - SiliconANGLE(00:47:46) Tiny startup Arcee AI built a 400B-parameter open source LLM from scratch to best Meta's LlamaResearch & Advancements(00:52:53) Post-LayerNorm Is Back: Stable, ExpressivE, and Deep(00:58:00) [2601.19897] Self-Distillation Enables Continual Learning(01:03:04) [2601.20802] Reinforcement Learning via Self-Distillation(01:05:58) Teaching Models to Teach Themselves: Reasoning at the Edge of LearnabilityPolicy & Safety(01:09:13) Amodei, Hoffman Join Tech Workers Decrying Minnesota Violence - BloombergSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
From Palantir and Two Sigma to building Goodfire into the poster-child for actionable mechanistic interpretability, Mark Bissell (Member of Technical Staff) and Myra Deng (Head of Product) are trying to turn “peeking inside the model” into a repeatable production workflow by shipping APIs, landing real enterprise deployments, and now scaling the bet with a recent $150M Series B funding round at a $1.25B valuation.In this episode, we go far beyond the usual “SAEs are cool” take. We talk about Goodfire's core bet: that the AI lifecycle is still fundamentally broken because the only reliable control we have is data and we post-train, RLHF, and fine-tune by “slurping supervision through a straw,” hoping the model picks up the right behaviors while quietly absorbing the wrong ones. Goodfire's answer is to build a bi-directional interface between humans and models: read what's happening inside, edit it surgically, and eventually use interpretability during training so customization isn't just brute-force guesswork.Mark and Myra walk through what that looks like when you stop treating interpretability like a lab demo and start treating it like infrastructure: lightweight probes that add near-zero latency, token-level safety filters that can run at inference time, and interpretability workflows that survive messy constraints (multilingual inputs, synthetic→real transfer, regulated domains, no access to sensitive data). We also get a live window into what “frontier-scale interp” means operationally (i.e. steering a trillion-parameter model in real time by targeting internal features) plus why the same tooling generalizes cleanly from language models to genomics, medical imaging, and “pixel-space” world models.We discuss:* Myra + Mark's path: Palantir (health systems, forward-deployed engineering) → Goodfire early team; Two Sigma → Head of Product, translating frontier interpretability research into a platform and real-world deployments* What “interpretability” actually means in practice: not just post-hoc poking, but a broader “science of deep learning” approach across the full AI lifecycle (data curation → post-training → internal representations → model design)* Why post-training is the first big wedge: “surgical edits” for unintended behaviors likereward hacking, sycophancy, noise learned during customization plus the dream of targeted unlearning and bias removal without wrecking capabilities* SAEs vs probes in the real world: why SAE feature spaces sometimes underperform classifiers trained on raw activations for downstream detection tasks (hallucination, harmful intent, PII), and what that implies about “clean concept spaces”* Rakuten in production: deploying interpretability-based token-level PII detection at inference time to prevent routing private data to downstream providers plus the gnarly constraints: no training on real customer PII, synthetic→real transfer, English + Japanese, and tokenization quirks* Why interp can be operationally cheaper than LLM-judge guardrails: probes are lightweight, low-latency, and don't require hosting a second large model in the loop* Real-time steering at frontier scale: a demo of steering Kimi K2 (~1T params) live and finding features via SAE pipelines, auto-labeling via LLMs, and toggling a “Gen-Z slang” feature across multiple layers without breaking tool use* Hallucinations as an internal signal: the case that models have latent uncertainty / “user-pleasing” circuitry you can detect and potentially mitigate more directly than black-box methods* 
Steering vs prompting: the emerging view that activation steering and in-context learning are more closely connected than people think, including work mapping between the two (even for jailbreak-style behaviors)* Interpretability for science: using the same tooling across domains (genomics, medical imaging, materials) to debug spurious correlations and extract new knowledge up to and including early biomarker discovery work with major partners* World models + “pixel-space” interpretability: why vision/video models make concepts easier to see, how that accelerates the feedback loop, and why robotics/world-model partners are especially interesting design partners* The north star: moving from “data in, weights out” to intentional model design where experts can impart goals and constraints directly, not just via reward signals and brute-force post-training—Goodfire AI* Website: https://goodfire.ai* LinkedIn: https://www.linkedin.com/company/goodfire-ai/* X: https://x.com/GoodfireAIMyra Deng* Website: https://myradeng.com/* LinkedIn: https://www.linkedin.com/in/myra-deng/* X: https://x.com/myra_dengMark Bissell* LinkedIn: https://www.linkedin.com/in/mark-bissell/* X: https://x.com/MarkMBissellFull Video EpisodeTimestamps00:00:00 Introduction00:00:05 Introduction to the Latent Space Podcast and Guests from Goodfire00:00:29 What is Goodfire? Mission and Focus on Interpretability00:01:01 Goodfire's Practical Approach to Interpretability00:01:37 Goodfire's Series B Fundraise Announcement00:02:04 Backgrounds of Mark and Myra from Goodfire00:02:51 Team Structure and Roles at Goodfire00:05:13 What is Interpretability? Definitions and Techniques00:07:29 Post-training vs. Pre-training Interpretability Applications00:08:51 Using Interpretability to Remove Unwanted Behaviors00:10:09 Grokking, Double Descent, and Generalization in Models00:12:06 Subliminal Learning and Hidden Biases in Models00:14:07 How Goodfire Chooses Research Directions and Projects00:16:04 Limitations of SAEs and Probes in Interpretability00:18:14 Rakuten Case Study: Production Deployment of Interpretability00:21:12 Efficiency Benefits of Interpretability Techniques00:21:26 Live Demo: Real-Time Steering in a Trillion Parameter Model00:25:15 How Steering Features are Identified and Labeled00:26:51 Detecting and Mitigating Hallucinations Using Interpretability00:31:20 Equivalence of Activation Steering and Prompting00:34:06 Comparing Steering with Fine-Tuning and LoRA Techniques00:36:04 Model Design and the Future of Intentional AI Development00:38:09 Getting Started in Mechinterp: Resources, Programs, and Open Problems00:40:51 Industry Applications and the Rise of Mechinterp in Practice00:41:39 Interpretability for Code Models and Real-World Usage00:43:07 Making Steering Useful for More Than Stylistic Edits00:46:17 Applying Interpretability to Healthcare and Scientific Discovery00:49:15 Why Interpretability is Crucial in High-Stakes Domains like Healthcare00:52:03 Call for Design Partners Across Domains00:54:18 Interest in World Models and Visual Interpretability00:57:22 Sci-Fi Inspiration: Ted Chiang and Interpretability01:00:14 Interpretability, Safety, and Alignment Perspectives01:04:27 Weak-to-Strong Generalization and Future Alignment Challenges01:05:38 Final Thoughts and Hiring/Collaboration Opportunities at GoodfireTranscriptShawn Wang [00:00:05]: So welcome to the Latent Space pod. 
We're back in the studio with our special MechInterp co-host, Vibhu. Welcome. Mochi, Mochi's special co-host. And Mochi, the mechanistic interpretability doggo. We have with us Mark and Myra from Goodfire. Welcome. Thanks for having us on. Maybe we can sort of introduce Goodfire and then introduce you guys. How do you introduce Goodfire today?Myra Deng [00:00:29]: Yeah, it's a great question. So Goodfire, we like to say, is an AI research lab that focuses on using interpretability to understand, learn from, and design AI models. And we really believe that interpretability will unlock the new generation, next frontier of safe and powerful AI models. That's our description right now, and I'm excited to dive more into the work we're doing to make that happen.Shawn Wang [00:00:55]: Yeah. And there's always like the official description. Is there an understatement? Is there an unofficial one that sort of resonates more with a different audience?Mark Bissell [00:01:01]: Well, being an AI research lab that's focused on interpretability, there's obviously a lot of people have a lot that they think about when they think of interpretability. And I think we have a pretty broad definition of what that means and the types of places that can be applied. And in particular, applying it in production scenarios, in high stakes industries, and really taking it sort of from the research world into the real world. Which, you know. It's a new field, so that hasn't been done all that much. And we're excited about actually seeing that sort of put into practice.Shawn Wang [00:01:37]: Yeah, I would say it wasn't too long ago that Anthropic was like still putting out like toy models of superposition and that kind of stuff. And I wouldn't have pegged it to be this far along. When you and I talked at NeurIPS, you were talking a little bit about your production use cases and your customers. And then not to bury the lead, today we're also announcing the fundraise, your Series B. $150 million. $150 million at a 1.25B valuation. Congrats, Unicorn.Mark Bissell [00:02:02]: Thank you. Yeah, no, things move fast.Shawn Wang [00:02:04]: We were talking to you in December and already some big updates since then. Let's dive, I guess, into a bit of your backgrounds as well. Mark, you were at Palantir working on health stuff, which is really interesting because Goodfire has some interesting like health use cases. I don't know how related they are in practice.Mark Bissell [00:02:22]: Yeah, not super related, but I don't know. It was helpful context to know what it's like. Just to work. Just to work with health systems and generally in that domain. Yeah.Shawn Wang [00:02:32]: And Myra, you were at Two Sigma, which actually I was also at Two Sigma back in the day. Wow, nice.Myra Deng [00:02:37]: Did we overlap at all?Shawn Wang [00:02:38]: No, this is when I was briefly a software engineer before I became a sort of developer relations person. And now you're head of product. What are your sort of respective roles, just to introduce people to like what all gets done in Goodfire?Mark Bissell [00:02:51]: Yeah, prior to Goodfire, I was at Palantir for about three years as a forward deployed engineer, now a hot term. Wasn't always that way. And as a technical lead on the health care team and at Goodfire, I'm a member of the technical staff. And honestly, that I think is about as specific as like as as I could describe myself because I've worked on a range of things. 
And, you know, it's it's a fun time to be at a team that's still reasonably small. I think when I joined one of the first like ten employees, now we're above 40, but still, it looks like there's always a mix of research and engineering and product and all of the above. That needs to get done. And I think everyone across the team is, you know, pretty, pretty switch hitter in the roles they do. So I think you've seen some of the stuff that I worked on related to image models, which was sort of like a research demo. More recently, I've been working on our scientific discovery team with some of our life sciences partners, but then also building out our core platform for more of like flexing some of the kind of MLE and developer skills as well.Shawn Wang [00:03:53]: Very generalist. And you also had like a very like a founding engineer type role.Myra Deng [00:03:58]: Yeah, yeah.Shawn Wang [00:03:59]: So I also started as I still am a member of technical staff, did a wide range of things from the very beginning, including like finding our office space and all of this, which is we both we both visited when you had that open house thing. It was really nice.Myra Deng [00:04:13]: Thank you. Thank you. Yeah. Plug to come visit our office.Shawn Wang [00:04:15]: It looked like it was like 200 people. It has room for 200 people. But you guys are like 10.Myra Deng [00:04:22]: For a while, it was very empty. But yeah, like like Mark, I spend. A lot of my time as as head of product, I think product is a bit of a weird role these days, but a lot of it is thinking about how do we take our frontier research and really apply it to the most important real world problems and how does that then translate into a platform that's repeatable or a product and working across, you know, the engineering and research teams to make that happen and also communicating to the world? Like, what is interpretability? What is it used for? What is it good for? Why is it so important? All of these things are part of my day-to-day as well.Shawn Wang [00:05:01]: I love like what is things because that's a very crisp like starting point for people like coming to a field. They all do a fun thing. Vibhu, why don't you want to try tackling what is interpretability and then they can correct us.Vibhu Sapra [00:05:13]: Okay, great. So I think like one, just to kick off, it's a very interesting role to be head of product, right? Because you guys, at least as a lab, you're more of an applied interp lab, right? Which is pretty different than just normal interp, like a lot of background research. But yeah. You guys actually ship an API to try these things. You have Ember, you have products around it, which not many do. Okay. What is interp? So basically you're trying to have an understanding of what's going on in model, like in the model, in the internal. So different approaches to do that. You can do probing, SAEs, transcoders, all this stuff. But basically you have an, you have a hypothesis. You have something that you want to learn about what's happening in a model internals. And then you're trying to solve that from there. You can do stuff like you can, you know, you can do activation mapping. You can try to do steering. There's a lot of stuff that you can do, but the key question is, you know, from input to output, we want to have a better understanding of what's happening and, you know, how can we, how can we adjust what's happening on the model internals? How'd I do?Mark Bissell [00:06:12]: That was really good. I think that was great. 
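To make the "read the internals" step Vibhu describes concrete, here is a minimal sketch of capturing residual-stream activations from an open-weights model with a forward hook. The model name and layer index are placeholder assumptions, and this is a generic illustration rather than Goodfire's tooling; the captured activations are the raw material for the probes, SAEs, and steering discussed later in the conversation.

```python
# Minimal sketch: run text through an open-weights decoder and capture the
# residual-stream activations at one layer. Model name and layer index are
# illustrative placeholders, not a specific production setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-0.5B-Instruct"  # any HF causal LM exposing model.model.layers
LAYER = 12                            # which residual stream to read

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

captured = {}

def save_hidden(module, inputs, output):
    # Decoder layers usually return a tuple whose first element is the hidden
    # state of shape (batch, seq_len, d_model); handle both tuple and tensor.
    hidden = output[0] if isinstance(output, tuple) else output
    captured["resid"] = hidden.detach()

handle = model.model.layers[LAYER].register_forward_hook(save_hidden)
with torch.no_grad():
    model(**tok("My email is jane@example.com", return_tensors="pt"))
handle.remove()

acts = captured["resid"][0]   # (seq_len, d_model): one activation vector per token
print(acts.shape)             # these vectors feed probes, SAEs, or steering edits
```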
I think it's also a, it's kind of a minefield of a, if you ask 50 people who quote unquote work in interp, like what is interpretability, you'll probably get 50 different answers. And. Yeah. To some extent also like where, where good fire sits in the space. I think that we're an AI research company above all else. And interpretability is a, is a set of methods that we think are really useful and worth kind of specializing in, in order to accomplish the goals we want to accomplish. But I think we also sort of see some of the goals as even more broader as, as almost like the science of deep learning and just taking a not black box approach to kind of any part of the like AI development life cycle, whether that. That means using interp for like data curation while you're training your model or for understanding what happened during post-training or for the, you know, understanding activations and sort of internal representations, what is in there semantically. And then a lot of sort of exciting updates that were, you know, are sort of also part of the, the fundraise around bringing interpretability to training, which I don't think has been done all that much before. A lot of this stuff is sort of post-talk poking at models as opposed to. To actually using this to intentionally design them.Shawn Wang [00:07:29]: Is this post-training or pre-training or is that not a useful.Myra Deng [00:07:33]: Currently focused on post-training, but there's no reason the techniques wouldn't also work in pre-training.Shawn Wang [00:07:38]: Yeah. It seems like it would be more active, applicable post-training because basically I'm thinking like rollouts or like, you know, having different variations of a model that you can tweak with the, with your steering. Yeah.Myra Deng [00:07:50]: And I think in a lot of the news that you've seen in, in, on like Twitter or whatever, you've seen a lot of unintended. Side effects come out of post-training processes, you know, overly sycophantic models or models that exhibit strange reward hacking behavior. I think these are like extreme examples. There's also, you know, very, uh, mundane, more mundane, like enterprise use cases where, you know, they try to customize or post-train a model to do something and it learns some noise or it doesn't appropriately learn the target task. And a big question that we've always had is like, how do you use your understanding of what the model knows and what it's doing to actually guide the learning process?Shawn Wang [00:08:26]: Yeah, I mean, uh, you know, just to anchor this for people, uh, one of the biggest controversies of last year was 4.0 GlazeGate. I've never heard of GlazeGate. I didn't know that was what it was called. The other one, they called it that on the blog post and I was like, well, how did OpenAI call it? Like officially use that term. And I'm like, that's funny, but like, yeah, I guess it's the pitch that if they had worked a good fire, they wouldn't have avoided it. Like, you know what I'm saying?Myra Deng [00:08:51]: I think so. Yeah. Yeah.Mark Bissell [00:08:53]: I think that's certainly one of the use cases. I think. Yeah. Yeah. I think the reason why post-training is a place where this makes a lot of sense is a lot of what we're talking about is surgical edits. You know, you want to be able to have expert feedback, very surgically change how your model is doing, whether that is, you know, removing a certain behavior that it has. 
So, you know, one of the things that we've been looking at or is, is another like common area where you would want to make a somewhat surgical edit is some of the models that have say political bias. Like you look at Quen or, um, R1 and they have sort of like this CCP bias.Shawn Wang [00:09:27]: Is there a CCP vector?Mark Bissell [00:09:29]: Well, there's, there are certainly internal, yeah. Parts of the representation space where you can sort of see where that lives. Yeah. Um, and you want to kind of, you know, extract that piece out.Shawn Wang [00:09:40]: Well, I always say, you know, whenever you find a vector, a fun exercise is just like, make it very negative to see what the opposite of CCP is.Mark Bissell [00:09:47]: The super America, bald eagles flying everywhere. But yeah. So in general, like lots of post-training tasks where you'd want to be able to, to do that. Whether it's unlearning a certain behavior or, you know, some of the other kind of cases where this comes up is, are you familiar with like the, the grokking behavior? I mean, I know the machine learning term of grokking.Shawn Wang [00:10:09]: Yeah.Mark Bissell [00:10:09]: Sort of this like double descent idea of, of having a model that is able to learn a generalizing, a generalizing solution, as opposed to even if memorization of some task would suffice, you want it to learn the more general way of doing a thing. And so, you know, another. A way that you can think about having surgical access to a model's internals would be learn from this data, but learn in the right way. If there are many possible, you know, ways to, to do that. Can make interp solve the double descent problem?Shawn Wang [00:10:41]: Depends, I guess, on how you. Okay. So I, I, I viewed that double descent as a problem because then you're like, well, if the loss curves level out, then you're done, but maybe you're not done. Right. Right. But like, if you actually can interpret what is a generalizing or what you're doing. What is, what is still changing, even though the loss is not changing, then maybe you, you can actually not view it as a double descent problem. And actually you're just sort of translating the space in which you view loss and like, and then you have a smooth curve. Yeah.Mark Bissell [00:11:11]: I think that's certainly like the domain of, of problems that we're, that we're looking to get.Shawn Wang [00:11:15]: Yeah. To me, like double descent is like the biggest thing to like ML research where like, if you believe in scaling, then you don't need, you need to know where to scale. And. But if you believe in double descent, then you don't, you don't believe in anything where like anything levels off, like.Vibhu Sapra [00:11:30]: I mean, also tendentially there's like, okay, when you talk about the China vector, right. There's the subliminal learning work. It was from the anthropic fellows program where basically you can have hidden biases in a model. And as you distill down or, you know, as you train on distilled data, those biases always show up, even if like you explicitly try to not train on them. So, you know, it's just like another use case of. Okay. If we can interpret what's happening in post-training, you know, can we clear some of this? Can we even determine what's there? Because yeah, it's just like some worrying research that's out there that shows, you know, we really don't know what's going on.Mark Bissell [00:12:06]: That is. Yeah. I think that's the biggest sentiment that we're sort of hoping to tackle. 
Nobody knows what's going on. Right. Like subliminal learning is just an insane concept when you think about it. Right. Train a model on not even the logits, literally the output text of a bunch of random numbers. And now your model loves owls. And you see behaviors like that, that are just, they defy, they defy intuition. And, and there are mathematical explanations that you can get into, but. I mean.Shawn Wang [00:12:34]: It feels so early days. Objectively, there are a sequence of numbers that are more owl-like than others. There, there should be.Mark Bissell [00:12:40]: According to, according to certain models. Right. It's interesting. I think it only applies to models that were initialized from the same starting seed. Usually, yes.Shawn Wang [00:12:49]: But I mean, I think that's a, that's a cheat code because there's not enough compute. But like if you believe in like platonic representation, like probably it will transfer across different models as well. Oh, you think so?Mark Bissell [00:13:00]: I think of it more as a statistical artifact of models initialized from the same seed sort of. There's something that is like path dependent from that seed that might cause certain overlaps in the latent space and then sort of doing this distillation. Yeah. Like it pushes it towards having certain other tendencies.Vibhu Sapra [00:13:24]: Got it. I think there's like a bunch of these open-ended questions, right? Like you can't train in new stuff during the RL phase, right? RL only reorganizes weights and you can only do stuff that's somewhat there in your base model. You're not learning new stuff. You're just reordering chains and stuff. But okay. My broader question is when you guys work at an interp lab, how do you decide what to work on and what's kind of the thought process? Right. Because we can ramble for hours. Okay. I want to know this. I want to know that. But like, how do you concretely like, you know, what's the workflow? Okay. There's like approaches towards solving a problem, right? I can try prompting. I can look at chain of thought. I can train probes, SAEs. But how do you determine, you know, like, okay, is this going anywhere? Like, do we have set stuff? Just, you know, if you can help me with all that. Yeah.Myra Deng [00:14:07]: It's a really good question. I feel like we've always at the very beginning of the company thought about like, let's go and try to learn what isn't working in machine learning today. Whether that's talking to customers or talking to researchers at other labs, trying to understand both where the frontier is going and where things are really falling apart today. And then developing a perspective on how we can push the frontier using interpretability methods. And so, you know, even our chief scientist, Tom, spends a lot of time talking to customers and trying to understand what real world problems are and then taking that back and trying to apply the current state of the art to those problems and then seeing where they fall down basically. And then using those failures or those shortcomings to understand what hills to climb when it comes to interpretability research. So like on the fundamental side, for instance, when we have done some work applying SAEs and probes, we've encountered, you know, some shortcomings in SAEs that we found a little bit surprising. And so have gone back to the drawing board and done work on that. And then, you know, we've done some work on better foundational interpreter models. 
And a lot of our team's research is focused on what is the next evolution beyond SAEs, for instance. And then when it comes to like control and design of models, you know, we tried steering with our first API and realized that it still fell short of black box techniques like prompting or fine tuning. And so went back to the drawing board and we're like, how do we make that not the case and how do we improve it beyond that? And one of our researchers, Ekdeep, who just joined is actually Ekdeep and Atticus are like steering experts and have spent a lot of time trying to figure out like, what is the research that enables us to actually do this in a much more powerful, robust way? So yeah, the answer is like, look at real world problems, try to translate that into a research agenda and then like hill climb on both of those at the same time.Shawn Wang [00:16:04]: Yeah. Mark has the steering CLI demo queued up, which we're going to go into in a sec. But I always want to double click on when you drop hints, like we found some problems with SAEs. Okay. What are they? You know, and then we can go into the demo. Yeah.Myra Deng [00:16:19]: I mean, I'm curious if you have more thoughts here as well, because you've done it in the healthcare domain. But I think like, for instance, when we do things like trying to detect behaviors within models that are harmful or like behaviors that a user might not want to have in their model. So hallucinations, for instance, harmful intent, PII, all of these things. We first tried using SAE probes for a lot of these tasks. So taking the feature activation space from SAEs and then training classifiers on top of that, and then seeing how well we can detect the properties that we might want to detect in model behavior. And we've seen in many cases that probes just trained on raw activations seem to perform better than SAE probes, which is a bit surprising if you think that SAEs are actually also capturing the concepts that you would want to capture cleanly and more surgically. And so that is an interesting observation. I don't think that is like, I'm not down on SAEs at all. I think there are many, many things they're useful for, but we have definitely run into cases where I think the concept space described by SAEs is not as clean and accurate as we would expect it to be for actual like real world downstream performance metrics.Mark Bissell [00:17:34]: Fair enough. Yeah. It's the blessing and the curse of unsupervised methods where you get to peek into the AI's mind. But sometimes you wish that you saw other things when you walked inside there. Although in the PII instance, I think weren't an SAE based approach actually did prove to be the most generalizable?Myra Deng [00:17:53]: It did work well in the case that we published with Rakuten. And I think a lot of the reasons it worked well was because we had a noisier data set. And so actually the blessing of unsupervised learning is that we actually got to get more meaningful, generalizable signal from SAEs when the data was noisy. But in other cases where we've had like good data sets, it hasn't been the case.Shawn Wang [00:18:14]: And just because you named Rakuten and I don't know if we'll get it another chance, like what is the overall, like what is Rakuten's usage or production usage? 
Yeah.Myra Deng [00:18:25]: So they are using us to essentially guardrail and inference time monitor their language model usage and their agent usage to detect things like PII so that they don't route private user information.Myra Deng [00:18:41]: And so that's, you know, going through all of their user queries every day. And that's something that we deployed with them a few months ago. And now we are actually exploring very early partnerships, not just with Rakuten, but with other people around how we can help with potentially training and customization use cases as well. Yeah.Shawn Wang [00:19:03]: And for those who don't know, like it's Rakuten is like, I think number one or number two e-commerce store in Japan. Yes. Yeah.Mark Bissell [00:19:10]: And I think that use case actually highlights a lot of like what it looks like to deploy things in practice that you don't always think about when you're doing sort of research tasks. So when you think about some of the stuff that came up there that's more complex than your idealized version of a problem, they were encountering things like synthetic to real transfer of methods. So they couldn't train probes, classifiers, things like that on actual customer data of PII. So what they had to do is use synthetic data sets. And then hope that that transfer is out of domain to real data sets. And so we can evaluate performance on the real data sets, but not train on customer PII. So that right off the bat is like a big challenge. You have multilingual requirements. So this needed to work for both English and Japanese text. Japanese text has all sorts of quirks, including tokenization behaviors that caused lots of bugs that caused us to be pulling our hair out. And then also a lot of tasks you'll see. You might make simplifying assumptions if you're sort of treating it as like the easiest version of the problem to just sort of get like general results where maybe you say you're classifying a sentence to say, does this contain PII? But the need that Rakuten had was token level classification so that you could precisely scrub out the PII. So as we learned more about the problem, you're sort of speaking about what that looks like in practice. Yeah. A lot of assumptions end up breaking. And that was just one instance where you. A problem that seems simple right off the bat ends up being more complex as you keep diving into it.Vibhu Sapra [00:20:41]: Excellent. One of the things that's also interesting with Interp is a lot of these methods are very efficient, right? So where you're just looking at a model's internals itself compared to a separate like guardrail, LLM as a judge, a separate model. One, you have to host it. Two, there's like a whole latency. So if you use like a big model, you have a second call. Some of the work around like self detection of hallucination, it's also deployed for efficiency, right? So if you have someone like Rakuten doing it in production live, you know, that's just another thing people should consider.Mark Bissell [00:21:12]: Yeah. And something like a probe is super lightweight. Yeah. It's no extra latency really. Excellent.Shawn Wang [00:21:17]: You have the steering demos lined up. So we were just kind of see what you got. I don't, I don't actually know if this is like the latest, latest or like alpha thing.Mark Bissell [00:21:26]: No, this is a pretty hacky demo from from a presentation that someone else on the team recently gave. So this will give a sense for, for technology. So you can see the steering and action. 
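As a hedged sketch of the probe-based, token-level guardrail just described: a tiny classifier over per-token activations that flags likely PII at inference time so it can be scrubbed before routing. The data here is synthetic and random, the threshold is arbitrary, and this illustrates the mechanics rather than the actual Rakuten or Goodfire pipeline; an SAE-probe variant would simply swap the raw activations for SAE feature activations.

```python
# Illustrative token-level PII probe over per-token residual activations.
# Labels come from synthetic data (mirroring the synthetic-to-real setup
# discussed above); everything here is a stand-in, not a real deployment.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
d_model = 4096

# Pretend these are activations for N tokens (e.g., captured via a hook)
# with 0/1 labels marking which tokens are PII.
X_synth = rng.normal(size=(5000, d_model)).astype(np.float32)
y_synth = rng.integers(0, 2, size=5000)
X_real  = rng.normal(size=(1000, d_model)).astype(np.float32)  # held-out "real" tokens
y_real  = rng.integers(0, 2, size=1000)

probe = LogisticRegression(max_iter=1000)
probe.fit(X_synth, y_synth)

# At inference, flag tokens whose PII probability crosses a threshold so they
# can be scrubbed before the text is routed to a downstream provider.
pii_prob = probe.predict_proba(X_real)[:, 1]
flags = pii_prob > 0.5
print("synthetic->real F1 (random data, so ~chance):", f1_score(y_real, flags))
```

Because the probe is just a linear map over activations the model already computes, it adds essentially no latency, which is the efficiency point raised above about avoiding a second large guardrail model in the loop.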
Honestly, I think the biggest thing that this highlights is that as we've been growing as a company and taking on kind of more and more ambitious versions of interpretability related problems, a lot of that comes to scaling up in various different forms. And so here you're going to see steering on a 1 trillion parameter model. This is Kimi K2. And so it's sort of fun that in addition to the research challenges, there are engineering challenges that we're now tackling. Cause for any of this to be sort of useful in production, you need to be thinking about what it looks like when you're using these methods on frontier models as opposed to sort of like toy kind of model organisms. So yeah, this was thrown together hastily, pretty fragile behind the scenes, but I think it's quite a fun demo. So screen sharing is on. So I've got two terminal sessions pulled up here. On the left is a forked version that we have of the Kimi CLI that we've got running to point at our custom hosted Kimi model. And then on the right is a set up that will allow us to steer on certain concepts. So I should be able to chat with Kimi over here. Tell it hello. This is running locally. So the CLI is running locally, but the Kimi server is running back to the office. Well, hopefully should be, um, that's too much to run on that Mac. Yeah. I think it's, uh, it takes a full, like, H100 node. I think it's like, you can. You can run it on eight GPUs, eight H100s. So, so yeah, Kimi's running. We can ask it a prompt. It's got a forked version of our, uh, of the SGLang code base that we've been working on. So I'm going to tell it, Hey, this SGLang code base is slow. I think there's a bug. Can you try to figure it out? There's a big code base, so it'll, it'll spend some time doing this. And then on the right here, I'm going to initialize in real time. Some steering. Let's see here.Mark Bissell [00:23:33]: searching for any. Bugs. Feature ID 43205.Shawn Wang [00:23:38]: Yeah.Mark Bissell [00:23:38]: 20, 30, 40. So let me, uh, this is basically a feature that we found that inside Kimi seems to cause it to speak in Gen Z slang. And so on the left, it's still sort of thinking normally it might take, I don't know, 15 seconds for this to kick in, but then we're going to start hopefully seeing it say "this code base is massive for real." So we're going to start. We're going to start seeing Kimi transition as the steering kicks in from normal Kimi to Gen Z Kimi and both in its chain of thought and its actual outputs.Mark Bissell [00:24:19]: And interestingly, you can see, you know, it's still able to call tools, uh, and stuff. It's um, it's purely sort of it's it's demeanor. And there are other features that we found for interesting things like concision. So that's more of a practical one. You can make it more concise. Um, the types of programs, uh, programming languages that it uses, but yeah, as we're seeing it come in. Pretty good. Outputs.Shawn Wang [00:24:43]: Scheduler code is actually wild.Vibhu Sapra [00:24:46]: Yo, this code is actually insane, bro.Vibhu Sapra [00:24:53]: What's the process of training an SAE on this, or, you know, how do you label features? I know you guys put out a pretty cool blog post about, um, finding this like autonomous interp. Um, something. Something about how agents for interp is different than like coding agents. I don't know while this is spinning up, but how, how do we find feature 43205? 
Yeah.Mark Bissell [00:25:15]: So in this case, um, we, our platform that we've been building out for a long time now supports all the sort of classic out of the box interp techniques that you might want to have like SAE training, probing things of that kind, I'd say the techniques for like vanilla SAEs are pretty well established now where. You take your model that you're interpreting, run a whole bunch of data through it, gather activations, and then yeah, pretty straightforward pipeline to train an SAE. There are a lot of different varieties. There's top KSAEs, batch top KSAEs, um, normal ReLU SAEs. And then once you have your sparse features to your point, assigning labels to them to actually understand that this is a gen Z feature, that's actually where a lot of the kind of magic happens. Yeah. And the most basic standard technique is look at all of your d input data set examples that cause this feature to fire most highly. And then you can usually pick out a pattern. So for this feature, If I've run a diverse enough data set through my model feature 43, two Oh five. Probably tends to fire on all the tokens that sounds like gen Z slang. You know, that's the, that's the time of year to be like, Oh, I'm in this, I'm in this Um, and, um, so, you know, you could have a human go through all 43,000 concepts andVibhu Sapra [00:26:34]: And I've got to ask the basic question, you know, can we get examples where it hallucinates, pass it through, see what feature activates for hallucinations? Can I just, you know, turn hallucination down?Myra Deng [00:26:51]: Oh, wow. You really predicted a project we're already working on right now, which is detecting hallucinations using interpretability techniques. And this is interesting because hallucinations is something that's very hard to detect. And it's like a kind of a hairy problem and something that black box methods really struggle with. Whereas like Gen Z, you could always train a simple classifier to detect that hallucinations is harder. But we've seen that models internally have some... Awareness of like uncertainty or some sort of like user pleasing behavior that leads to hallucinatory behavior. And so, yeah, we have a project that's trying to detect that accurately. And then also working on mitigating the hallucinatory behavior in the model itself as well.Shawn Wang [00:27:39]: Yeah, I would say most people are still at the level of like, oh, I would just turn temperature to zero and that turns off hallucination. And I'm like, well, that's a fundamental misunderstanding of how this works. Yeah.Mark Bissell [00:27:51]: Although, so part of what I like about that question is you, there are SAE based approaches that might like help you get at that. But oftentimes the beauty of SAEs and like we said, the curse is that they're unsupervised. So when you have a behavior that you deliberately would like to remove, and that's more of like a supervised task, often it is better to use something like probes and specifically target the thing that you're interested in reducing as opposed to sort of like hoping that when you fragment the latent space, one of the vectors that pops out.Vibhu Sapra [00:28:20]: And as much as we're training an autoencoder to be sparse, we're not like for sure certain that, you know, we will get something that just correlates to hallucination. You'll probably split that up into 20 other things and who knows what they'll be.Mark Bissell [00:28:36]: Of course. Right. Yeah. 
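Here is a minimal sketch of the pipeline Mark describes: a top-k sparse autoencoder trained on captured activations, plus the basic auto-labeling step of pulling the examples where a feature fires most strongly. The dimensions, the k value, and the feature ID are illustrative assumptions, not Goodfire's actual configuration.

```python
# Minimal top-k sparse autoencoder plus the "label a feature by its top
# activating examples" step described above. All sizes are illustrative.
import torch
import torch.nn as nn

class TopKSAE(nn.Module):
    def __init__(self, d_model=4096, d_feat=16384, k=32):
        super().__init__()
        self.enc = nn.Linear(d_model, d_feat)
        self.dec = nn.Linear(d_feat, d_model)
        self.k = k

    def encode(self, x):
        pre = self.enc(x)
        # Keep only the k largest activations per token; zero out the rest.
        vals, idx = torch.topk(pre, self.k, dim=-1)
        return torch.zeros_like(pre).scatter_(-1, idx, torch.relu(vals))

    def forward(self, x):
        feats = self.encode(x)
        return self.dec(feats), feats

sae = TopKSAE()
acts = torch.randn(1024, 4096)          # stand-in for captured residual activations
recon, feats = sae(acts)
loss = ((recon - acts) ** 2).mean()     # in training you minimize this reconstruction error

# "Labeling" a feature (ID is hypothetical): look at which tokens/examples make
# it fire most strongly, then have a human or an LLM name the pattern.
feature_id = 43205 % feats.shape[-1]
top_examples = torch.topk(feats[:, feature_id], 10).indices
print(loss.item(), top_examples.tolist())
```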
So there's no sort of problems with like feature splitting and feature absorption. And then there's the off target effects, right? Ideally, you would want to be very precise where if you reduce the hallucination feature, suddenly maybe your model can't write. Creatively anymore. And maybe you don't like that, but you want to still stop it from hallucinating facts and figures.Shawn Wang [00:28:55]: Good. So Vibhu has a paper to recommend there that we'll put in the show notes. But yeah, I mean, I guess just because your demo is done, any any other things that you want to highlight or any other interesting features you want to show?Mark Bissell [00:29:07]: I don't think so. Yeah. Like I said, this is a pretty small snippet. I think the main sort of point here that I think is exciting is that there's not a whole lot of inter being applied to models quite at this scale. You know, Anthropic certainly has some some. Research and yeah, other other teams as well. But it's it's nice to see these techniques, you know, being put into practice. I think not that long ago, the idea of real time steering of a trillion parameter model would have sounded.Shawn Wang [00:29:33]: Yeah. The fact that it's real time, like you started the thing and then you edited the steering vector.Vibhu Sapra [00:29:38]: I think it's it's an interesting one TBD of what the actual like production use case would be on that, like the real time editing. It's like that's the fun part of the demo, right? You can kind of see how this could be served behind an API, right? Like, yes, you're you only have so many knobs and you can just tweak it a bit more. And I don't know how it plays in. Like people haven't done that much with like, how does this work with or without prompting? Right. How does this work with fine tuning? Like, there's a whole hype of continual learning, right? So there's just so much to see. Like, is this another parameter? Like, is it like parameter? We just kind of leave it as a default. We don't use it. So I don't know. Maybe someone here wants to put out a guide on like how to use this with prompting when to do what?Mark Bissell [00:30:18]: Oh, well, I have a paper recommendation. I think you would love from Act Deep on our team, who is an amazing researcher, just can't say enough amazing things about Act Deep. But he actually has a paper that as well as some others from the team and elsewhere that go into the essentially equivalence of activation steering and in context learning and how those are from a he thinks of everything in a cognitive neuroscience Bayesian framework, but basically how you can precisely show how. Prompting in context, learning and steering exhibit similar behaviors and even like get quantitative about the like magnitude of steering you would need to do to induce a certain amount of behavior similar to certain prompting, even for things like jailbreaks and stuff. It's a really cool paper. Are you saying steering is less powerful than prompting? More like you can almost write a formula that tells you how to convert between the two of them.Myra Deng [00:31:20]: And so like formally equivalent actually in the in the limit. Right.Mark Bissell [00:31:24]: So like one case study of this is for jailbreaks there. I don't know. Have you seen the stuff where you can do like many shot jailbreaking? You like flood the context with examples of the behavior. 
And Anthropic put out that paper.Shawn Wang [00:31:38]: A lot of people were like, yeah, we've been doing this, guys.Mark Bissell [00:31:40]: Like, yeah, what's in this in context learning and activation steering equivalence paper is you can like predict the number, the number of examples that you will need to put in there in order to jailbreak the model. That's cool. By doing steering experiments and using this sort of like equivalence mapping. That's cool. That's really cool. It's very neat. Yeah.Shawn Wang [00:32:02]: I was going to say, like, you know, I can like back rationalize that this makes sense because, you know, what context is, is basically just, you know, it updates the KV cache kind of and like and then every next token inference is still like, you know, the sheer sum of everything all the way. It's plus all the context. It's up to date. And you could, I guess, theoretically steer that with you probably replace that with your steering. The only problem is steering typically is on one layer, maybe three layers like like you did. So it's like not exactly equivalent.Mark Bissell [00:32:33]: Right, right. There's sort of you need to get precise about, yeah, like how you sort of define steering and like what how you're modeling the setup. But yeah, I've got the paper pulled up here. Belief dynamics reveal the dual nature. Yeah. The title is Belief Dynamics Reveal the Dual Nature of In-Context Learning and Activation Steering. So Eric Bigelow and Dan Urgraft, who are doing fellowships at Goodfire, and Ekdeep's the final author there.Myra Deng [00:32:59]: I think actually to your question of like, what is the production use case of steering? I think maybe if you just think like one level beyond steering as it is today. Like imagine if you could adapt your model to be, you know, an expert legal reasoner. Like in almost real time, like very quickly, efficiently using human feedback or using like your semantic understanding of what the model knows and where it knows that behavior. I think that while it's not clear what the product is at the end of the day, it's clearly very valuable. Thinking about like what's the next interface for model customization and adaptation is a really interesting problem for us. Like we have heard a lot of people actually interested in fine-tuning and RL for open weight models in production. And so people are using things like Tinker or kind of like open source libraries to do that, but it's still very difficult to get models fine-tuned and RL'd for exactly what you want them to do unless you're an expert at model training. And so that's like something we're looking into.Shawn Wang [00:34:06]: Yeah. I never thought... So Tinker from Thinking Machines famously uses rank-one LoRA. Is that basically the same as steering? Like, you know, what's the comparison there?Mark Bissell [00:34:19]: Well, so in that case, you are still applying updates to the parameters, right?Shawn Wang [00:34:25]: Yeah. You're not touching a base model. You're touching an adapter. It's kind of, yeah.Mark Bissell [00:34:30]: Right. But I guess it still is like more in parameter space then. I guess it's maybe like, are you modifying the pipes or are you modifying the water flowing through the pipes to get what you're after? Yeah. Just maybe one way.Mark Bissell [00:34:44]: I like that analogy. 
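As a rough illustration of the prompting-steering connection discussed here (a generic activation-addition recipe, not the specific method of the paper just cited), the sketch below derives a direction from two contrast prompts by differencing mean activations, then adds a scaled copy of that direction to the residual stream at one layer during generation. The model, layer, and scale are placeholder assumptions.

```python
# The "prompting" side: two prompts differing only in style give a direction.
# The "steering" side: add that direction to activations during generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL, LAYER, SCALE = "Qwen/Qwen2.5-0.5B-Instruct", 12, 6.0  # placeholders

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def mean_resid(text):
    """Mean residual-stream activation at LAYER for a prompt."""
    grabbed = {}
    def grab(m, i, o):
        grabbed["x"] = (o[0] if isinstance(o, tuple) else o).detach()
    h = model.model.layers[LAYER].register_forward_hook(grab)
    with torch.no_grad():
        model(**tok(text, return_tensors="pt"))
    h.remove()
    return grabbed["x"].mean(dim=1)  # (1, d_model)

direction = mean_resid("Reply in heavy Gen Z slang.") - mean_resid("Reply formally.")

def steer(module, inputs, output):
    # Add the scaled direction to every token's hidden state at this layer.
    if isinstance(output, tuple):
        return (output[0] + SCALE * direction,) + output[1:]
    return output + SCALE * direction

handle = model.model.layers[LAYER].register_forward_hook(steer)
ids = tok("Explain what a scheduler does.", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=60, do_sample=False)
handle.remove()
print(tok.decode(out[0], skip_special_tokens=True))
# Contrast with a rank-one LoRA edit: that changes the weights ("the pipes"),
# while this changes the activations flowing through them ("the water").
```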
That's my mental map of it at least, but it gets at this idea of model design and intentional design, which is something that we're, that we're very focused on. And just the fact that like, I hope that we look back at how we're currently training models and post-training models and just think what a primitive way of doing that right now. Like there's no intentionalityShawn Wang [00:35:06]: really in... It's just data, right? The only thing in control is what data we feed in.Mark Bissell [00:35:11]: So, so Dan from Goodfire likes to use this analogy of, you know, he has a couple of young kids and he talks about like, what if I could only teach my kids how to be good people by giving them cookies or like, you know, giving them a slap on the wrist if they do something wrong, like not telling them why it was wrong or like what they should have done differently or something like that. Just figure it out. Right. Exactly. So that's RL. Yeah. Right. And, and, you know, it's sample inefficient. There's, you know, what do they say? It's like slurping feedback. It's like, slurping supervision. Right. And so you'd like to get to the point where you can have experts giving feedback to their models that are, uh, internalized and, and, you know, steering is an inference time way of sort of getting that idea. But ideally you're moving to a world whereVibhu Sapra [00:36:04]: it is much more intentional design in perpetuity for these models. Okay. This is one of the questions we asked Emmanuel from Anthropic on the podcast a few months ago. Basically the question, was you're at a research lab that does model training, foundation models, and you're on an interp team. How does it tie back? Right? Like, does this, do ideas come from the pre-training team? Do they go back? Um, you know, so for those interested, you can, you can watch that. There wasn't too much of a connect there, but it's still something, you know, it's something they want toMark Bissell [00:36:33]: push for down the line. It can be useful for all of the above. Like there are certainly post-hocVibhu Sapra [00:36:39]: use cases where it doesn't need to touch that. I think the other thing a lot of people forget is this stuff isn't too computationally expensive, right? Like I would say, if you're interested in getting into research, MechInterp is one of the most approachable fields, right? A lot of this train an essay, train a probe, this stuff, like the budget for this one, there's already a lot done. There's a lot of open source work. You guys have done some too. Um, you know,Shawn Wang [00:37:04]: There's like notebooks from the Gemini team for Neil Nanda or like, this is how you do it. Just step through the notebook.Vibhu Sapra [00:37:09]: Even if you're like, not even technical with any of this, you can still make like progress. There, you can look at different activations, but, uh, if you do want to get into training, you know, training this stuff, correct me if I'm wrong is like in the thousands of dollars, not even like, it's not that high scale. And then same with like, you know, applying it, doing it for post-training or all this stuff is fairly cheap in scale of, okay. I want to get into like model training. I don't have compute for like, you know, pre-training stuff. So it's, it's a very nice field to get into. And also there's a lot of like open questions, right? Um, some of them have to go with, okay, I want a product. I want to solve this. Like there's also just a lot of open-ended stuff that people could work on. 
That's interesting. Right. I don't know if you guys have any calls for like, what's open questions, what's open work that you either open collaboration with, or like, you'd just like to see solved or just, you know, for people listening that want to get into MechInterp because people always talk about it. What are, what are the things they should check out? Start, of course, you know, join you guys as well. I'm sure you're hiring.Myra Deng [00:38:09]: There's a paper, I think from, was it Lee, uh, Sharkey? It's Open Problems in Mechanistic Interpretability, which I recommend everyone who's interested in the field read. It's just like a really comprehensive overview of what are the things that experts in the field think are the most important problems to be solved. I also think to your point, it's been really, really inspiring to see, I think a lot of young people getting interested in interpretability, actually not just young people also like scientists to have been, you know, experts in physics for many years and in biology or things like this, um, transitioning into interp. So it's really cool to see. And the barrier to entry is, you know, in some ways low and there's a lot of information out there and ways to get started. There's this anecdote of like professors at universities saying that all of a sudden every incoming PhD student wants to study interpretability, which was not the case a few years ago. So it just goes to show how, I guess, like exciting the field is, how fast it's moving, how quick it is to get started and things like that.Mark Bissell [00:39:10]: And also just a very welcoming community. You know, there's an open source MechInterp Slack channel. People are always posting questions and just folks in the space are always responsive if you ask things on various forums and stuff. But yeah, the open problems paper is a really good one.Myra Deng [00:39:28]: For other people who want to get started, I think, you know, MATS is a great program. What's the acronym for? Machine Learning and Alignment Theory Scholars? It's like the...Vibhu Sapra [00:39:40]: Normally summer internship style.Myra Deng [00:39:42]: Yeah, but they've been doing it year round now. And actually a lot of our full-time staff have come through that program or gone through that program. And it's great for anyone who is transitioning into interpretability. There's a couple other fellows programs. We do one as well as Anthropic. And so those are great places to get started if anyone is interested.Mark Bissell [00:40:03]: Also, I think it's been seen as a research field for a very long time. But I think engineering... I think engineers are sorely wanted for interpretability as well, especially at Goodfire, but elsewhere, as it does scale up.
So at ICML, I think the title of the MechInterp workshop this year was Actionable Interpretability. And there was a lot of discussion around bringing it to various domains. Everyone's adding pragmatic, actionable, whatever.Shawn Wang [00:41:10]: It's like, okay, well, we weren't actionable before, I guess. I don't know.Vibhu Sapra [00:41:13]: And I mean, like, just, you know, being in Europe, you see the Interp room. One, like old school conferences, like, I think they had a very tiny room till they got lucky and they got it doubled. But there's definitely a lot of interest, a lot of niche research. So you see a lot of research coming out of universities, students. We covered the paper last week. It's like two unknown authors, not many citations. But, you know, you can make a lot of meaningful work there. Yeah. Yeah. Yeah.Shawn Wang [00:41:39]: Yeah. I think people haven't really mentioned this yet. It's just Interp for code. I think it's like an abnormally important field. We haven't mentioned this yet. The conspiracy theory from two years ago, when the first SAE work came out of Anthropic, was they would do like, oh, we just used SAEs to turn the bad code vector down and then turn up the good code. And I think like, isn't that the dream? Like, you know, like, but basically, I guess maybe, why is it funny? Like, it's... If it was realistic, it would not be funny. It would be like, no, actually, we should do this. But it's funny because we know there's like, we feel there's some limitations to what steering can do. And I think a lot of the public image of steering is like the Gen Z stuff. Like, oh, you can make it really love the Golden Gate Bridge, or you can make it speak like Gen Z. To like be a legal reasoner seems like a huge stretch. Yeah. And I don't know if that will get there this way. Yeah.Myra Deng [00:42:36]: I think, um, I will say we are announcing something very soon that I will not speak too much about. Um, but I think, yeah, this is like what we've run into again and again is like, we, we don't want to be in the world where steering is only useful for like stylistic things. That's definitely not, not what we're aiming for. But I think the types of interventions that you need to do to get to things like legal reasoning, um, are much more sophisticated and require breakthroughs in, in learning algorithms. And that's, um...Shawn Wang [00:43:07]: And is this an emergent property of scale as well?Myra Deng [00:43:10]: I think so. Yeah. I mean, I think scale definitely helps. I think scale allows you to learn a lot of information and, and reduce noise across, you know, large amounts of data. But I also think we think that there's ways to do things much more effectively, um, even, even at scale. So like actually learning exactly what you want from the data and not learning things that you do that you don't want exhibited in the data. So we're not like anti-scale, but we are also realizing that scale is not going to get us everywhere. It's not going to get us to the type of AI development that we want to be at in, in the future as these models get more powerful and get deployed in all these sorts of like mission critical contexts. Current life cycle of training and deploying and evaluations is, is to us like deeply broken and has opportunities to, to improve. So, um, more to come on that very, very soon.Mark Bissell [00:44:02]: And I think that that's a use case, basically, or maybe just like a proof point that these concepts do exist. 
Like if you can manipulate them in the precise best way, you can get the ideal combination of them that you desire. And steering is maybe the most coarse grained sort of peek at what that looks like. But I think it's evocative of what you could do if you had total surgical control over every concept, every parameter. Yeah, exactly.Myra Deng [00:44:30]: There were like bad code features. I've got it pulled up.Vibhu Sapra [00:44:33]: Yeah. Just coincidentally, as you guys are talking.Shawn Wang [00:44:35]: This is like, this is exactly.Vibhu Sapra [00:44:38]: There's like specifically a code error feature that activates and they show, you know, it's not, it's not typo detection. It's like, it's, it's typos in code. It's not typical typos. And, you know, you can, you can see it clearly activates where there's something wrong in code. And they have like malicious code, code error. They have a whole bunch of sub, you know, sub broken down little grain features. Yeah.Shawn Wang [00:45:02]: Yeah. So, so the, the rough intuition for me, the, why I talked about post-training was that, well, you just, you know, have a few different rollouts with all these things turned off and on and whatever. And then, you know, you can, that's, that's synthetic data you can kind of post-train on. Yeah.Vibhu Sapra [00:45:13]: And I think we make it sound easier than it is just saying, you know, they do the real hard work.Myra Deng [00:45:19]: I mean, you guys, you guys have the right idea. Exactly. Yeah. We replicated a lot of these features in, in our Llama models as well. I remember there was like.Vibhu Sapra [00:45:26]: And I think a lot of this stuff is open, right? Like, yeah, you guys opened yours. DeepMind has opened a lot of SAEs on Gemma. Even Anthropic has opened a lot of this. There's, there's a lot of resources that, you know, we can probably share of people that want to get involved.Shawn Wang [00:45:41]: Yeah. And special shout out to like Neuronpedia as well. Yes. Like, yeah, amazing piece of work to visualize those things.Myra Deng [00:45:49]: Yeah, exactly.Shawn Wang [00:45:50]: I guess I wanted to pivot a little bit on, onto the healthcare side, because I think that's a big use case for you guys. We haven't really talked about it yet. This is a bit of a crossover for me because we are, we are, we do have a separate science pod that we're starting up for AI, for AI for science, just because like, it's such a huge investment category and also I'm like less qualified to do it, but we actually have bio PhDs to cover that, which is great, but I need to just kind of recover, recap your work, maybe on the Evo 2 stuff, but then, and then building forward.Mark Bissell [00:46:17]: Yeah, for sure. And maybe to frame up the conversation, I think another kind of interesting just lens on interpretability in general is a lot of the techniques that were described are ways to solve the AI human interface problem. And it's sort of like bidirectional communication is the goal there. So what we've been talking about with intentional design of models and, you know, steering, but also more advanced techniques is having humans impart our desires and control into models and over models. And the reverse is also very interesting, especially as you get to superhuman models, whether that's narrow superintelligence, like these scientific models that work on genomics data, medical imaging, things like that. But down the line, you know, superintelligence of other forms as well. 
What knowledge can the AIs teach us as sort of that, that the other direction in that? And so some of our life science work to date has been getting at exactly that question, which is, well, some of it does look like debugging these various life sciences models, understanding if they're actually performing well on tasks, or if they're picking up on spurious correlations, for instance, genomics models, you would like to know whether they are sort of focusing on the biologically relevant things that you care about, or if it's using some simpler correlate, like the ancestry of the person that it's looking at. But then also in the instances where they are superhuman, and maybe they are understanding elements of the human genome that we don't have names for or specific, you know, yeah, discoveries that they've made that that we don't know about, that's, that's a big goal. And so we're already seeing that, right, we are partnered with organizations like Mayo Clinic, leading research health system in the United States, Arc Institute, as well as a startup called Prima Mente, which focuses on neurodegenerative disease. And in our partnership with them, we've used foundation models they've been training and applied our interpretability techniques to find novel biomarkers for Alzheimer's disease. So I think this is just the tip of the iceberg. But it's, that's like a flavor of some of the things that we're working on.Shawn Wang [00:48:36]: Yeah, I think that's really fantastic. Obviously, we did the Chan Zuckerberg pod last year as well. And like, there's a plethora of these models coming out, because there's so much potential and research. And it's like, very interesting how it's basically the same as language models, but just with a different underlying data set. But it's like, it's the same exact techniques. Like, there's no change, basically.Mark Bissell [00:48:59]: Yeah. Well, and even in like other domains, right? Like, you know, robotics, I know, like a lot of the companies just use Gemma as like the like backbone, and then they like make it into a VLA that like takes these actions. It's, it's, it's transformers all the way down. So yeah.Vibhu Sapra [00:49:15]: Like we have MedGemma now, right? Like this week, even there was MedGemma 1.5. And they're training it on this stuff, like 3D scans, medical domain knowledge, and all that stuff, too. So there's a push from both sides. But I think the thing that, you know, one of the things about MechInterp is like, you're a little bit more cautious in some domains, right? So healthcare, mainly being one, like guardrails, understanding, you know, we're more risk averse to something going wrong there. So even just from a basic understanding, like, if we're trusting these systems to make claims, we want to know why and what's going on.
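As a hedged sketch of the spurious-correlation check described above (random data, hypothetical names, not any partner's actual analysis): probe the same embeddings for the label you care about and for a plausible confound such as ancestry, and treat a highly decodable confound as a flag for possible shortcut learning.

```python
# Illustrative confound check for a genomics-style foundation model: how
# decodable is the target label vs. a confound from the same embeddings?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
emb = rng.normal(size=(2000, 1024))        # stand-in for per-sample model embeddings
disease = rng.integers(0, 2, size=2000)    # label of interest
ancestry = rng.integers(0, 2, size=2000)   # potential confound

disease_acc = cross_val_score(LogisticRegression(max_iter=1000), emb, disease, cv=5).mean()
confound_acc = cross_val_score(LogisticRegression(max_iter=1000), emb, ancestry, cv=5).mean()
print(f"disease probe acc={disease_acc:.2f}  ancestry probe acc={confound_acc:.2f}")
# A fuller analysis would also check how much the disease probe degrades when
# the ancestry direction is projected out of the embeddings.
```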
And I feel like we actually are doing that through our interp techniques. And kind of almost by accident: I think we got reached out to very, very early on by these healthcare institutions. And none of us had healthcare backgrounds.
Shawn Wang [00:50:49]: How did they even hear of you? A podcast.
Myra Deng [00:50:51]: Oh, okay. Yeah, a podcast.
Vibhu Sapra [00:50:53]: Okay, well, now's that time, you know.
Myra Deng [00:50:55]: Everyone can call us.
Shawn Wang [00:50:56]: Podcasts are the most important thing. Everyone should listen to podcasts.
Myra Deng [00:50:59]: Yeah, they reached out. They were like, you know, we have these really smart models that we've trained, and we want to know what they're doing. And we were really early at that time, like three months old, and it was a few of us. And we were like, oh my God, we've never used these models. Let's figure it out. But it's also great proof that interp techniques scale pretty well across domains. We didn't really have to learn too much about them.
Shawn Wang [00:51:21]: Interp is a machine learning technique, and machine learning skills apply everywhere, right? Yeah. It's obviously just a general insight. Yeah. Probably to finance too, I think, which would be fun for our history. I don't know if you have anything to say there.
Mark Bissell [00:51:34]: Yeah, well, just across the sciences. Like, we've also done work on materials science. Yeah, it really runs the gamut.
Vibhu Sapra [00:51:40]: Yeah. Awesome. And, you know, for those who should reach out: you're obviously experts in this, but is there a call-out for people that you're looking to partner with? Design partners, people to use your stuff beyond the general developer who wants to plug and play steering stuff, more so on the research side. Are there ideal design partners, customers, stuff like that?
Myra Deng [00:52:03]: Yeah, I can talk about maybe non-life sciences, and then I'm curious to hear from you on the life sciences side. But we're looking for design partners across many domains: language, anyone who's customizing language models or trying to push the frontier of code or reasoning models is really interesting to us. And then we're also interested in the frontier of modeling. There are a lot of models that work in what we call pixel space. So if you're doing world models, video models, even robotics, where there's not a very clean natural language interface to interact with, we think that interp can really help, and we're looking for a few partners in that space.
Shawn Wang [00:52:43]: Just because you mentioned the keyword
In this episode of Tank Talks, Matt Cohen and John Ruffolo rip through a stacked rundown of tech, venture capital, and geopolitical “sovereignty” theater. They open with Europe's accelerating shift away from Microsoft Office and big U.S. platforms toward open-source alternatives, then jump straight into a breaking reversal from Y Combinator CEO Garry Tan: Canada is back on the list of accepted incorporations, reversing a move that sparked serious backlash about Canadian startup brain drain and U.S.-domicile pressure.

From there, they dissect Elon Musk's headline-grabbing SpaceX–xAI all-stock merger and why it looks far better for xAI holders than SpaceX shareholders ahead of a rumored SpaceX IPO window. The episode also digs into Canada's national AI consultation (and the government openly using multiple LLM providers like Cohere and OpenAI to process submissions), the EU's push for digital sovereignty (and the risks of swapping to “free” tools), and the brutal reality of AI-driven search gutting legacy media traffic, with the Washington Post laying off a third of its newsroom. The big throughline: information is cheap now, execution and trust are expensive, and countries (and companies) that don't adapt are about to get cooked.

Y Combinator Reverses Course: Canada Back on the List (00:43)
YC CEO Garry Tan adds Canada back to YC's list of accepted incorporation jurisdictions after removing it, triggering a wave of criticism. Matt and John break down what changed, why the original rationale (Canadian winners re-domiciling to the U.S.) was a flawed signal, and why the real issue is still Canadian capital formation and follow-on funding strength.

SpaceX Buys xAI: A $1.25T Story Swap Before an IPO? (02:34)
Matt tees up the shocker: SpaceX acquires xAI in an all-stock deal valuing xAI at $250B and SpaceX at $1T, creating a combined $1.25T entity. They discuss xAI's massive burn versus SpaceX's improving cash profile (driven by Starlink) and why this kind of move raises eyebrows heading into an IPO narrative.

Second-Order Effects: When a Cash-Burning AI Company Merges Into Space Infrastructure (07:35)
They debate whether this becomes a template for other pre-IPO restructures or stays a one-off “Elon special.” John says a Starlink-style consolidation would make strategic sense; folding in xAI doesn't feel like a choke-point win.

Canada's AI Strategy Consultation: Government Using LLMs in the Workflow (09:10)
Canada's ISED publishes a high-level summary of its AI consultation and explicitly notes using multiple LLMs and pipelines (including Cohere and OpenAI) to process massive public input. Matt frames this as a meaningful “government actually doing something” moment, even if the public is still anxious about jobs and privacy.

Europe's Digital Sovereignty Push: Dropping Teams/Zoom for Open Source? (12:40)
They react to reports of governments moving away from Teams/Zoom and Microsoft tooling in the name of sovereignty. Matt calls the open-source swap risky from a security and operational standpoint; John says the bigger signal is global: sovereignty is now a first-order priority, and Canada can't pretend this wave isn't coming.

Washington Post Layoffs: AI Search Is Eating the Referral Economy (16:48)
Matt highlights the Washington Post's reported search traffic collapse and layoffs impacting a third of the newsroom.
John calls journalism an obvious early disruption target: LLMs compress content production costs, and the old newsroom pyramid doesn't match the new economics.

The Survival Play: Media Becomes a Live Events Business (19:26)
They land on the counter-move: stop fighting the trend and monetize what still works: brand, access, community, and in-person experiences. If content becomes commoditized, relationships and trust become the product.

Connect with John Ruffolo on LinkedIn: https://ca.linkedin.com/in/joruffolo
Connect with Matt Cohen on LinkedIn: https://ca.linkedin.com/in/matt-cohen1
Visit the Ripple Ventures website: https://www.rippleventures.com/
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit tanktalks.substack.com
Program notes:
0:50 Rapid recovery of hearts for transplant after circulatory death
1:50 Total ischemic time longer than 4 hours
2:50 Typically reanimate
3:50 Expanded to donors 40 years of age
4:55 Do women with PCOS have increased risk for arrhythmia
5:55 Incidence arrhythmia in these women increased risk
6:50 An LLM to streamline transitions from primary to secondary care
7:50 Used alone, with staff support, or control
8:51 Decreased amount of time in the consult
9:52 Things that accumulate data will be embraced
10:12 Treating gout to prevent CVD
11:12 Lowered in about a fourth of patients
12:36 End
Follow The Trophy Room
Patreon: https://www.patreon.com/pstrophyroom
Apple Podcast: https://apple.co/2PglU1a
Discord: https://discord.gg/wPNp3kC
Twitter: https://twitter.com/PSTrophyRoom
******
This week on Let's Square Up the News, we break down the biggest gaming industry headlines shaping the future of PlayStation, Xbox, and PC gaming. AMD leadership signals that Microsoft's next-gen Xbox is progressing toward a potential 2027 launch window, lining up with the typical console lifecycle as current hardware approaches its seventh year. Meanwhile, Valve's upcoming Steam hardware push is expected to begin shipping new devices powered by AMD chips, expanding the Steam gaming ecosystem. We also dive into Sony's controversial generative AI patent, which proposes personalized gaming news podcasts voiced by PlayStation characters using player data and LLM technology — raising questions about voice actor rights, AI ethics, and the future of gaming media. Plus, we unpack Take-Two's response to rumors about GTA 6 skipping physical release at launch, the latest confidence signals around the game's roadmap, and what it means for collectors and physical media fans.
Also in the episode:
• Obsidian confirms no plans for The Outer Worlds 3 after the sequel underperformed
• Marathon ranked mode leak details and progression system explained
• Overwatch drops the “2” and launches new heroes to revive the franchise
• Helldivers 2 major update adds vehicles, weapons, and new war content
• Stardew Valley 1.7 update adds new romance options
• Fallout TV countdown disappointment explained
• The Last of Us TV future and God of War casting updates
If you care about next-gen consoles, gaming industry trends, PlayStation vs Xbox strategy, and major upcoming releases like GTA 6, this is your weekly gaming news hub.
In the security news this week:
Residential proxy abuse is everywhere this week: from Google's takedown of IPIDEA to massive Citrix NetScaler scanning and the Badbox 2.0 botnet
Supply chain fun time: Notepad++ updates were hijacked
Attackers set their sights on: Ivanti EPMM, Dell Unity storage, Fortinet VPNs/firewalls, and ASUSTOR NAS devices
Russian state hackers went after Poland's grid
Is ICE on a surveillance shopping spree and into hacking anti-ICE apps?
Ukraine's war-time Starlink problem is turning into a policy and controls experiment
The AI security theme is alive and well with exposed LLM endpoints, the OpenClaw/Moltbot/Moltbook fiasco, and letting anyone hijack agents
Signed forensic driver for Windows is still an EDR killer
The Trump administration's rollback of software security attestation
National Cyber Director Sean Cairncross says: “less regulation, more cooperation.”
Finally, there are some “only in infosec” human stories:
* pen testers arrested in Iowa now getting a settlement,
* a Google engineer convicted over stolen AI IP,
* Booz Allen losing Treasury work over intentional insider leaks,
* and an “AI psychosis” saga at an adult-content platform.
Visit https://www.securityweekly.com/psw for all the latest episodes!
Show Notes: https://securityweekly.com/psw-912
It's actually a good thing that some books push you to the edge of your ability to understand. But there's no doubting the fact that dense, abstract and jargon-filled works can push you so far into the fog of frustration that no one could blame you for giving up. But here's the truth: You don't have to walk away frustrated and confused. I'm going to share with you a number of practical strategies that will help you fill in the gaps in your reading process. Because that's usually the real problem: It's not your intelligence. Nor is it that the world is filled with books "above your level." I ultimately don't believe in "levels" as such. But as someone who taught reading courses at Rutgers and Saarland University, I know from experience that many learners need to pick up a few simple steps that will strengthen how they approach reading difficult books. And in this guide, you'll learn how to read challenging books and remember what they say. I'm going to go beyond generic advice too. That way, you can:
Diagnose why certain books feel so hard
Use pre-reading tactics that prime your brain to deal with difficulties effectively
Apply active reading techniques to lock in understanding faster
Leverage accelerated learning tools that are quick to learn
Use Artificial Intelligence to help convert tough content into lasting knowledge without worrying about getting duped by AI hallucinations
Whether you're tackling philosophy, science, dense fiction or anything based primarily in words, the reading system you'll learn today will help you turn confusion into clarity. By the end, even the most intimidating texts will surrender their treasures to your mind. Ready? Let's break it all down together.
https://www.youtube.com/watch?v=y9HLbY4jsFg

Why Some Books Feel "Too Hard" (And What That Really Means)

You know exactly how it feels and so do I. You sit down with a book that people claim is a classic or super-important. But within a few pages, your brain fogs over and you're completely lost. More often than not, through glazed eyes, you start to wonder… did this author go out of his or her way to make this difficult? Are they trying to show off with all these literary pyrotechnics? Or is there a deliberate conspiracy to confuse readers like me? Rest assured. These questions are normal and well worth asking. The difficulty you might feel is never arbitrary in my experience. But there's also no "single origin" explanation for why some books feel easier than others. It's almost always a combination of factors, from cognitive readiness and lived experience to emotions and your physical condition throughout the day. This means that understanding why individual texts resist your understanding needs to happen on a case-by-case basis so you can move towards mastering anything you want to read.

Cognitive Load: The Brain's Processing "Stop Sign"

"Cognitive load" probably needs no definition. The words are quite intuitive. You start reading something and it feels like someone is piling heavy bricks directly on top of your brain, squishing everything inside. More specifically, these researchers explain that what's getting squished is your working memory, which is sometimes called short-term memory. In practical terms, this means that when a book throws a bunch of unfamiliar terms at you, your working memory suddenly has to deal with abstract concepts, completely new words or non-linear forms of logic.
All of this increases your cognitive load, but it's important to note that there's no conspiracy. In Just Being Difficult: Academic Writing in the Public Arena, a variety of contributors admit that they often write for other specialists. Although it would be nice to always compose books and articles for general readers, it's not laziness. They're following the codes of their discipline, which involve shorthand to save everyone time. Yes, it can also signal group membership and feel like an intellectual wall if you're new to this style, but it's simply a "stop sign" for your brain. And wherever there are stop signs, there are also alternative routes.

Planning Your Detour "Roadmap" Into Difficult Books

Let me share a personal example that doubles as a powerful technique for making hard books easier to read. A few years ago I decided I was finally going to read Kant. I had the gist of certain aspects of his philosophy, but a few pages in, I encountered so many unfamiliar terms that I knew I had to obey the Cognitive Load Stop Sign and take a step back. To build a roadmap into Kant, I searched Google in a particular way. Rather than a search term like "Intro to Kant," I entered this tightened command instead:
Filetype:PDF syllabus Kant
These days, you can ask an LLM in more open language to simply give you links to the syllabi of the most authoritative professors who teach Kant. I'd still suggest that you cross-reference what you get on Google, however. If you're hesitant about using either Google or AI, it's also a great idea to visit a librarian in person for help. Or you can read my post about using AI for learning without harming your memory to see if it's time to update your approach.

Narrowing Down Your Options

One way or another, the reason to consult the world's leading professors is that their syllabi will provide you with:
Foundational texts
Core secondary literature
Commentaries from qualified sources
Essential historical references
Once you've looked over a few syllabi, look through the table of contents of a few books on Amazon or Google Books. Then choose:
1-2 foundational texts to read before the challenging target book you want to master
1-2 articles or companion texts to read alongside
In this way, you've turned difficulty into a path, not an obstacle.

Pre-Reading Strategies That Warm Up Your Reading Muscles

A lot of the time, the difficulty people feel when reading has nothing to do with the book. It's just that you're diving into unfamiliar territory without testing the waters first. Here are some simple ways to make unfamiliar books much easier to get into.

Prime Like a Pro

To make books easier to read, you can perform what is often called "priming" in the accelerated learning community. It is also sometimes called "pre-reading" and, as this research article discusses, its success has been well demonstrated. The way I perform priming is simple. Although some books require a slight change to the pattern, I typically approach each new book by reading:
The back cover
The index
The colophon page
The conclusion or afterword
The most interesting or relevant chapter
The introduction
The rest of the book

Activate Prior Knowledge

Sometimes I will use a skimming and scanning strategy after reading the index to quickly familiarize myself with how an author approaches a topic with which I'm already familiar. This can help raise interest and excitement and tap into the power of context-dependent memory.
For example, I recently started reading Doubt: A History by Jennifer Michael Hecht. Since the Renaissance memory master Giordano Bruno comes up multiple times, I was able to draw up a kind of context map of the book's themes by quickly going through those passages.

Take a Picture Walk

Barbara Oakley and Terrence Sejnowski share a fantastic strategy in Learning How to Learn. Before reading, simply go through a book and look at all the illustrations, tables, charts and diagrams. It seems like a small thing. But it gives your brain a "heads up" about upcoming visual information that you may need to process differently than prose. I used to find visual information like this difficult, but after I started taking picture walks, I'm now excited to read "towards" these elements. If I still find them challenging to understand, I apply a tip I learned from Tony Buzan that you might like to try: Rather than struggle to interpret a chart or illustration, reproduce it in your own hand. Here's an example of how I did this when studying spaced repetition: As a result, I learned the graph and its concepts quickly and have never forgotten it.

Build a Pre-Reading Ritual That Fits You

There's no one-size-fits-all strategy, so you need to experiment with various options. The key is to reduce cognitive load by giving your mind all kinds of ways of understanding what a book contains. If it helps, you can create a checklist that you slip into the challenging books on your list. That way, you'll have both a bookmark and a protocol as you develop your own pre-reading style.

Active Reading Techniques That Boost Comprehension

Active reading involves deliberately applying mental activities while reading. These can include writing in the margins of your books, questioning, preparing summaries and even taking well-timed breaks between books. Here's a list of my favorite active reading strategies with ideas on how you can implement them.

Using Mnemonics While Reading

On the whole, I take notes while reading and then apply a variety of memory techniques afterwards. But to stretch my skills, especially when reading harder books, I start the encoding process earlier. Instead of just taking notes, I'll start applying mnemonic images. I start early because difficult terms often require a bit more spaced repetition. To do this yourself, the key is to equip yourself with a variety of mnemonic methods, especially:
The Memory Palace technique
The Pegword Method
The Major System
The PAO System
And in some cases, you may want to develop a symbol system, such as if you're studying physics or programming. Once you have these mnemonic systems developed, you can apply them in real time. For example, if you come across names and dates, committing them to memory as you read can help you keep track of a book's historical arc. This approach can be especially helpful when reading difficult books because authors often dump a lot of names and dates. By memorizing them as you go, you reduce the mental load of having to track it all. For even more strategies you can apply while reading, check out my complete Mnemonics Dictionary.

Strategic Questioning

Whether you take notes or memorize in real time, asking questions as you go makes a huge difference. Even if you don't come up with answers, continually interrogating the book will open up your brain.
The main kinds of questions are:
Evaluative questions (checking that the author uses valid reasoning and addresses counterarguments)
Analytical questions (assessing exactly how the arguments unfold and questioning basic assumptions)
Synthetic questions (accessing your previous knowledge and looking for connections with other books and concepts)
Intention questions (interrogating the author's agenda and revealing any manipulative rhetoric)
One medieval tool for questioning you can adopt is the memory wheel. Although it's definitely old-fashioned, you'll find that it helps you rotate between multiple questions. Even if they are as simple as who, what, where, when, how and why questions, you'll have a mental mnemonic device that helps ensure you don't miss any of them.

Re-reading Strategies

Although these researchers seem to think that re-reading is not an effective strategy, I could not live without it. There are three key kinds of re-reading I recommend.

Verbalize Complexity to Tame It

The first is to simply go back and read something difficult to understand out loud. You'd be surprised how often it's not your fault. The author has just worded something in a clunky manner, and speaking the phrasing clarifies everything.

Verbatim Memorization for Comprehension

The second strategy is to memorize the sentence or even an entire passage verbatim. That might seem like a lot of work, but this tutorial on memorizing entire passages will make it easy for you. Even if verbatim memorization takes more work, it allows you to analyze the meaning within your mind. You're no longer puzzling over it on paper, continuing to stretch your working memory. No, you've effectively expanded at least a part of your working memory by bypassing it altogether. You've ushered the information into long-term memory. I'm not too shy to admit that I have to do this sometimes to understand everything from the philosophy in Sanskrit phrases to relatively simple passages from Shakespeare. As I shared in my recent discussion of actor Anthony Hopkins' memory, I couldn't work out what "them" referred to in a particular Shakespeare play. But after analyzing the passage in memory, it was suddenly quite obvious.

Rhythmical Re-reading

The third re-reading strategy is something I shared years ago in my post detailing 11 reasons you should re-read at least one book per month. I find this approach incredibly helpful because no matter how good you get at reading and memory methods, even simple books can be vast ecosystems. By revisiting difficult books at regular intervals, you not only get more out of them. You experience them from different perspectives and with the benefit of new contexts you've built in your life over time. In other words, treat your reading as an infinite game and never assume that you've comprehended everything. There's always more to be gleaned.

Other Benefits of Re-reading

You'll also improve your pattern recognition by re-treading old territory, leading to more rapid recognition of those patterns in new books. Seeing the structures, tropes and other tactics in difficult books opens them up. But without regularly re-reading books, it can be difficult to perceive what these forms are and how authors use them. To give you a simple example of a structure that appears in both fiction and non-fiction, consider in medias res, or starting in the middle. When you spot an author using this strategy, it can immediately help you read more patiently.
And it places the text in the larger tradition of other authors who use that particular technique. For even more ideas that will keep your mind engaged while tackling tough books, feel free to go through my fuller article on 7 Active Reading Strategies.

Category Coloring & Developing Your Own Naming System For Complex Material

I don't know about you, but I do not like opening a book only to find it covered in highlighter marks. I also don't like highlighting books myself. However, after practicing mind mapping for a few years, I realized that there is a way to combine some of its coloring principles with the general study principles of using Zettelkasten and flashcards. Rather than passively highlighting passages that seem interesting at random, here's an alternative approach you can take on your next tour through a complicated book.

Category Coloring

It's often helpful to read with a goal. For myself, I decided to tackle a hard book called Gödel, Escher, Bach through the lens of seven categories. I gave each a color:
Red = Concept
Green = Process
Orange = Fact
Blue = Historical Context
Yellow = Person
Purple = School of Thought or Ideology
Brown = Specialized Terminology
Example master card for the categorical color coding method.
To emulate this method, create a "key card" or "master card" with your categories on it alongside the chosen color. Use this as a bookmark as you read. Then, before writing down any information from the book, think about the category to which it belongs. Make your card and then apply the relevant color. Obviously, you should come up with your own categories and preferred colors. The point is that you bring the definitions and then apply them consistently as you read and extract notes. This will help bring structure to your mind because you're creating your own nomenclature or taxonomy of information. You are also using chunking, a specific mnemonic strategy I've written about at length in this post on chunking as a memory tool. Once you've finished a book, you can extract all the concepts and memorize them independently if you like. And if you emulate the strategy seen in the pictured example above, I've included the page number on each card. That way, I can place the cards back in the order of the book. Using this approach across multiple books, you will soon spot cross-textual patterns with greater ease. The catch is that you cannot allow this technique to become activity for activity's sake. You also don't want to wind up creating a bunch of informational "noise." Before capturing any individual idea on a card and assigning it to a category, ask yourself:
Why is this information helpful, useful or critical to my goal?
Will I really use it again?
Where does it belong within the categories?
If you cannot answer these questions, either move on to the next point or reframe it with some reflective thinking so that you can contextualize it. This warning aside, it's important not to let perfectionism creep into your life. Knowing what information matters does take some practice. To speed up your skills with identifying critical information, please read my full guide on how to find the main points in books and articles. Although AI can certainly help these days, you'll still need to do some work on your own.

Do Not Let New Vocabulary & Terminology Go Without Memorization

One of the biggest mistakes I used to make, even as a fan of memory techniques, slowed me down much more than necessary. I would come across a new term, look it up, and assume I'd remember it.
Of course, the next time I came across it, the meaning was still a mystery. But when I got more deliberate, I not only remembered more words, but the knowledge surrounding the unfamiliar terms also stuck with greater specificity. For example, in reading The Wandering Mind by Jamie Kreiner, memorizing the ancient Greek word for will or volition (Prohairesis) pulled in many more details about why she was mentioning it. Lo and behold, I started seeing the word in more places and connecting it to other ancient Greek terms. Memorizing those as well started to create a "moat of meaning," further protecting a wide range of information I'd been battling to retain.

Understanding Why Vocabulary Blocks Comprehension

The reason memorizing words as you read is so helpful is that it helps clear out the cognitive load created by pausing frequently to look up words. Even if you don't stop to learn a new definition, part of your working memory gets consumed by the lack of familiarity. I don't always stop to learn new definitions while reading, but using the color category index card method you just discovered, it's easy to organize unfamiliar words as you go. That way they can be tidily memorized later. I have a full tutorial for you on how to memorize vocabulary, but here's a quick primer.

Step One: Use a System for Capturing New Words & Terms

Whether you use category coloring, read words into a recording app or email yourself a reminder, the key is to capture as you go. Once your reading session is done, you can go back to the vocabulary list and start learning it.

Step Two: Memorize the Terms

I personally prefer the Memory Palace technique. It's great for memorizing words and definitions. You can use the Pillar Technique with the word at the top and the definition beneath it. Or you can use the corners for the words and the walls for the definitions. Another idea is to photograph the cards you create and import them into a spaced repetition software like Anki. As you'll discover in my complete guide to Anki, there are several ways you can combine Anki with a variety of memory techniques.

Step Three: Use the Terms

If you happened to catch an episode of the Magnetic Memory Method Podcast back when I first learned Prohairesis, you'll know I mentioned it often. This simple habit helps establish long-term recall and reflection, and lays the ground for future recognition and use.

Expand Understanding Using Video & Audio Media

When I was in university, I often had to ride my bike across Toronto to borrow recorded lectures on cassette. Given the overwhelming tsunamis of complex ideas, jargon and theoretical frameworks I was facing, it was worth it. Especially since I was also dealing with the personal problems I shared with you in The Victorious Mind. Make no mistake: I do not believe there is any replacement for reading the core books, no matter how difficult they might be. But there's no reason not to leverage the same ideas in multiple formats to help boost your comprehension and long-term retention. Multimedia approaches are not just about knowledge acquisition either. There have been many debates in the magical arts community about whether card magicians should read rather than rely on video. But evidence-based studies like this one show that video instruction combined with reading written instructions is very helpful.

The Science Behind Multi-Modal Learning

I didn't know when I was in university, or when I was first starting out with memdeck card magic, that dual coding theory existed.
This model was proposed by Allan Paivio, who noticed that information is processed both verbally and non-verbally. Since then, many teachers have focused heavily on how to encourage students to find the right combination of reading, visual and auditory instructional material. Here are some ideas that will help you untangle the complexity in your reading.

How to Integrate Multimedia Without Overload

Forgive me if this is a bit repetitive, but to develop flow with multiple media, you need to prime the brain. As someone who has created multiple YouTube videos, I have been stubborn about almost always including introductions. Why?

Go Through the Intros Like a Hawk

Because without a broad overview of the topic, many learners will miss too many details. And I see this in the comments, because people ask questions that are answered throughout the content and flagged in the introductions. So the first step is to be patient and go through the introductory material. And cultivate an understanding that it's not really the material that is boring. It's the contemporary issues with dopamine spiking that make you feel impatient. The good news is that you can possibly reset your dopamine levels so you're better able to sit through these "priming" materials. One hack I use is to sit far away from my mouse and keep my notebook in hand. If I catch myself getting antsy, I perform a breathing exercise to restore focus.

Turn on Subtitles

When you're watching videos, you can increase your engagement by turning on the subtitles. This is especially useful in jargon-heavy video lessons. You can pause and still see the information on the screen for easier capture when taking notes. When taking notes, I recommend jotting down the timestamp. This is useful for review, but also for attributing citations later if you have to hand in an assignment.

Mentally Reconstruct

After watching a video or listening to a podcast on the topic you're mastering, take a moment to review the key points. Try to go through them in the order they were presented. This helps your brain practice mental organization by building a temporal scaffold. If you've taken notes and written down the timestamps, you can easily check your accuracy.

Track Your Progress For Growth & Performance

One reason some people never feel like they're getting anywhere is that they have failed to establish any points of reference. Personally, this is easy for me to do. I can look back at my history of writing books and articles or producing videos and be reminded at a glance of how far I've come, not only as a writer, but also as a reader. For those who do not regularly produce content, you don't have to start a blog or YouTube channel. Just keep a journal and create a few categories for the skills you want to track. These might include:
Comprehension
Retention
Number of books read
Vocabulary growth
Critical thinking outcomes
Confidence in taking on harder books
Increased tolerance for frustration when reading challenges arise
You can use the same journal to track how much time you've spent reading and to capture quick summaries. Personally, I wish I'd started writing summaries sooner. I really only got started during grad school when, during a directed reading course, a professor required that I hand in a summary for every book and article I read. I never stopped doing this, and just a few simple paragraph summaries have done wonders over the years for my understanding and retention.
Tips for Overcoming Frustration While Reading Difficult Books

Ever since the idea of "desirable difficulty" emerged, people have sought ways to help learners overcome emotional responses like frustration, anxiety and even shame while tackling tough topics. As this study shows, researchers and teachers have found the challenge difficult despite the abundance of evidence showing that being challenged is a good thing. Here are some strategies you can try if you continue to struggle.

Embrace Cognitive Discomfort

As we've discussed, that crushing feeling in your brain exists for a reason. Personally, I don't think it ever goes away. I still regularly pick up books that spike it. The difference is that I don't start up a useless mantra like, "I'm not smart enough for this." Instead, I recommend you reframe the experience and use the growth mindset studied by Carol Dweck, amongst others. You can state something more positive like, "This book is a bit above my level, but I can use tactics and techniques to master it." I did that very recently with my reading of The Xenotext, parts of which I still don't fully understand. It was very rewarding.

Use Interleaving to Build Confidence

I rotate through draining books all the time using a proven technique called interleaving. Lots of people are surprised when I tell them that I rarely read complex and challenging books for longer than fifteen minutes at a time. But I do it because interleaving works. Which kinds of books can you interleave? You have choices. You can either switch in something completely different, or switch to a commentary. For example, while recently reading some heavy mathematical theories about whether or not "nothing" can exist, I switched to a novel. But back in university, I would often stick within the category while at the library. I'd read a core text by a difficult philosopher, then pick up a Cambridge Companion and read an essay related to the topic. You can also interleave using multimedia sources like videos and podcasts. Interleaving also provides time for doing some journaling, either about the topic at hand or some other aspect of your progress goals.

Keep the Big Picture in Mind

Because frustration is cognitively taxing, it's easy to let it drown out your goals. That's why I often keep a mind map or some other reminder on my desk, like a couple of memento mori. It's also possible to just remember previous mind maps you've made. This is something I'm doing often at the moment as I read all kinds of boring information about managing a bookshop for my Memory Palace bookshop project, first introduced in this video: https://www.youtube.com/watch?v=utcJfeQZC2c It's so easy to get discouraged by the many rules and processes involved in ordering and selling books that I regularly think back to creating this mind map with Tony Buzan years ago. In case my simple drawings on this mind map for business development don't immediately leap out at you with their meanings, the images in the one o'clock to three o'clock area refer to developing a physical Memory Palace packed with books on memory and learning. Developing and keeping a north star in mind will help you transform the process of reading difficult books into a purposeful adventure of personal development. Even if you have to go through countless books that aren't thrilling, you'll still be moving forward. Just think of how much Elon Musk has read that probably wasn't all that entertaining. Yet it was still essential to his becoming a polymath.
Practice Seeing Through The Intellectual Games

As you read harder and harder books, you'll eventually come to realize that the "fluency" some people have is often illusory. For example, some writers and speakers display a truly impressive ability to string together complex terminology, abstract references and fashionable ideas of the day in ways that sound profound. Daniel Dennett frequently used a great term for a lot of this verbal jujitsu that sounds profound but is actually trivial. He called such flourishes "deepities."
https://www.youtube.com/watch?v=ey-UeaSi1rI
This kind of empty linguistic dexterity will be easier for you to spot when you read carefully, paraphrase complex ideas in your own words and practice memorizing vocabulary frequently. When you retain multiple concepts and practice active questioning in a large context of grounded examples and case studies, vague claims will not survive for long in your world. This is why memory training is about so much more than learning. Memorization can equip you to think independently and bring clarity to fields that are often filled with gems, despite the fog created by intellectual pretenders more interested in word-jazz than actual truth.

Using AI to Help You Take On Difficult Books

As a matter of course, I recommend you use AI tools like ChatGPT only after doing as much reading on your own as possible. But there's no denying that intentional use of such tools can help you develop greater understanding. The key is to avoid using AI as an answer machine, or what Nick Bostrom calls an "oracle" in his seminal book, Superintelligence. Rather, take a cue from Andrew Mayne, a science communicator and central figure at OpenAI and host of their podcast. His approach centers on testing yourself in ways that lead to clarity of understanding and retention, combined with various mnemonic strategies.
https://www.youtube.com/watch?v=JlzD_6Olaqw
Beyond his suggestions, here are some of my favorite strategies.

Ask AI to Help Identify All Possible Categories Connected to a Topic

A key reason many people struggle to connect ideas is simply that they haven't developed a mental ecosystem of categories. I used to work in libraries, so I started thinking categorically when I was still a teenager. But these days, I would combine how traditional libraries are structured with a simple prompt like:
List all the possible categories my topic fits into or bridges across disciplines, historical frameworks and methodologies. Provide the list without interpretation or explanation so I can reflect.
A prompt like this engineers a response that focuses on relationships and lets your brain perform the synthetic thinking. Essentially, you'll be performing what some scientists call schema activation, leading to better personal development outcomes.

Generate Lists of Questions To Model Exceptional Thinkers

Because understanding relies on inquiry, it's important to practice asking the best possible questions. AI chatbots can be uniquely useful in this process, provided that you explicitly insist that they supply excellent questions without any answers. You can try a prompt like:
Generate a list of questions that the world's most careful thinkers in this field would ask about this topic. Do not provide any answers. Just the list of questions.
Do this after you've read the text, and go through your notes with fresh eyes. Evaluate the material with questions in hand, ideally by writing out your answers by hand. If you need your answers imported into your computer, apps can now scan your handwriting and give you a text file.
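If you prefer to script these prompts rather than paste them into a chat window, the pattern is simple. Below is a minimal sketch in Python using the OpenAI client library; it assumes the openai package (v1 or later) is installed and an API key is configured, and the model name, topic and exact prompt wording are placeholders to adapt rather than recommendations. Any LLM provider with a chat-style API would work the same way.

# A minimal sketch, assuming the openai Python package (v1+) is installed
# and the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()

# The question-generation prompt from this section; the topic is a placeholder.
prompt = (
    "Generate a list of questions that the world's most careful thinkers "
    "in this field would ask about this topic. Do not provide any answers. "
    "Just the list of questions.\n\n"
    "Topic: Kant's Critique of Pure Reason"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; use whichever model you prefer
    messages=[{"role": "user", "content": prompt}],
)

# Print the question list so you can copy it into your notes or journal.
print(response.choices[0].message.content)

The same pattern covers the category, journal template and blind-spot prompts in this section; only the prompt string changes.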
Another tip: Don't be satisfied with the first list of questions you get. Ask the AI to dig deeper. You can also ask the AI to map the questions into the categories you previously got help identifying. For a list of questions you can put into your preferred chatbot, feel free to go through my pre-AI era list of philosophical questions. They are already separated by category.

Use AI to Provide a Progress Journal Template

If you're new to journaling, it can be difficult to use the technique to help you articulate what you're reading and why the ideas are valuable. And that's not to mention working out various metrics to measure your growth over time. Try a prompt like this:
Help me design a progress journal for my quest to better understand and remember difficult books. Include sections for me to list my specific goals, vocabulary targets, summaries and various milestones I identify. Make it visual so I can either copy it into my own print notebook or print out multiple copies for use over time.
Once you have a template you're happy to experiment with, keep it visible in your environment so you don't forget to use it.

Find Blind Spots In Your Summaries

Many AIs have solid reasoning skills. As a result, you can enter your written summaries and have the AI identify gaps in your knowledge, blind spots and opportunities for further reading. Try a prompt like:
Analyze this summary and identify any blind spots, ambiguities in my thinking or incompleteness in my understanding. Suggest supplementary reading to help me fill in any gaps.
At the risk of repetition, the point is that you're not asking for the summaries. You're asking for assessments that help you diagnose the limits of your understanding. As scientists have shown, metacognition, or thinking about your thinking, can help you see errors much faster. By adding an AI into the mix, you're getting feedback quickly without having to wait for a teacher to read your essay. Of course, AI outputs can be throttled, so I find it useful to also include a phrase like "do not throttle your answer" before asking it to dig deeper and find more issues. Used wisely, you will soon see various schools of thought with much greater clarity, anticipate how authors make their moves and monitor your own blind spots as you read and reflect. Another way to think about the power of AI tools is this: They effectively mirror human reasoning at a species-wide level. You can use them to help you mirror more reasoning power by regularly accessing and practicing error detection and filling in the gaps in your thinking style.

Why You Must Stop Abandoning Difficult Books (At Least Most of the Time)

Like many people, I'm a fan of Scott Young's books like Ultralearning and Get Better at Anything. He's a disciplined thinker and his writing helps people push past shallow learning in favor of true and lasting depth. However, he often repeats the advice that you should stop reading boring books. In full transparency, I sometimes do this myself. And Young adds a lot of context to his suggestion. But I limit abandoning books as much as possible because I don't personally buy Young's argument that enjoyment and productivity go together. On the contrary, most goals that I've pursued have required fairly intense periods of delaying gratification.
And because things worth accomplishing generally do require sacrifice and a commitment to difficulty, I recommend you avoid the habit of giving up on books just because they're "boring" or not immediately enjoyable. I'll bet you'll enjoy the accomplishment of understanding hard books and conquering their complexity far more in the end. And you'll benefit more too. Here's why I think so.

The Hidden Cost of Abandoning Books You've Started

Yes, I agree that life is short and time is fleeting. But if you get into the habit of abandoning books at the first sign of boredom, it can quickly become your default habit due to how procedural memory works. In other words, you're giving your neurons the message that it's okay to escape from discomfort. That is a very dangerous loop to throw yourself into, especially if you're working towards becoming autodidactic. What you really need is to develop the ability to stick with complexity, hold ambiguous and contradictory issues in your mind and fight through topic exhaustion. Giving up on books on a routine basis? That's the opposite of developing expertise and resilience.

The AI Risk & Where Meaning is Actually Found

We just went through the benefits of AI, so you shouldn't have issues. But I regularly hear from people, and have even been on interviews where people use AI to summarize books I've recommended. This is dangerous because the current models flatten nuance due to how they summarize books based on a kind of "averaging" of what their words most predictably mean. Although they might give you a reasonable scaffold of a book's structure, you won't get the friction created by how authors take you through their thought processes. In other words, you'll be using AI models that are not themselves modeling the thinking that reading provides when you grind your way through complex books.

The Treasure of Meaning is Outside Your Comfort Zone

Another reason to train for endurance is that understanding doesn't necessarily arrive while reading a book or even a few weeks after finishing it. Sometimes the unifying insights land years later. But if you don't read through books that seem to be filled with scattered ideas, you cannot gain any benefit from them. Their diverse points won't consolidate in your memory and certainly won't connect with other ideas later. So I suggest you train your brain to persist as much as possible. By drawing on the support of the techniques we discussed today and a variety of mnemonic support systems, you will develop persistence and mine more gold from everything you read. And being someone who successfully mines for gold and can produce it at will is the mark of a successful reader: not just someone who consumes information efficiently, but someone who can repeatedly connect and transform knowledge year after year by regularly accumulating the gems buried in the densest and most difficult books others cannot or will not read.

Use Struggle to Stimulate Growth & You Cannot Fail

As you've seen, challenging books never mean that you're not smart enough. It's just a matter of working on your process so that you can tackle new forms of knowledge. And any discomfort you feel is a signal that a great opportunity and personal growth adventure awaits. By learning how to manage cognitive load, fill in the gaps in your background knowledge and persist through frustration, you can quickly become the kind of reader who seeks out complexity instead of flinching every time you see it. Confusion has now become a stage along the path to comprehension.
And if you're serious about mastering increasingly difficult material, understanding and retaining it, then it's time to upgrade your mental toolbox. Start now by grabbing my Free Memory Improvement Course. Inside, you'll discover:
The Magnetic Memory Method for creating powerful Memory Palaces
How to develop your own mnemonic systems for encoding while reading
Proven techniques that deepen comprehension, no matter how abstract or complex your reading list is
And please, always remember: The harder the book, the greater the rewards. And the good news is, you're now more than ready to claim them all.
Building software from scratch is not always innovation. Often it is just more expensive, slower and riskier. When is it worth buying, integrating or adapting an off-the-shelf solution? When does it make sense to build in-house? And how do large companies balance autonomy, security, innovation and governance without grinding the organization to a halt? In this episode of Hipsters.Talks, PAULO SILVEIRA, CVO of Grupo Alura, talks with JOÃO COSTA, Open Innovation manager at Petrobras, about real technology decisions at scale: make or buy, open vs. closed innovation, citizen developers, Shadow IT, corporate AI, and how to make the adoption of new technologies actually happen, not just in PowerPoint. A practical conversation about how innovation happens beyond the hype, inside one of Brazil's largest companies, where spreadsheets, off-the-shelf software, generative AI and in-house development coexist every day. Feel free to share your questions and comments. We would love to talk with you!
Adam built a Claude Code skill for his Taffy REST framework and wanted to share it with the CFML community. Simple enough—create a GitHub repo, add some markdown files, done. But somewhere between "this is cool" and "anyone can install this," a familiar chill crept in. These skills are just text files. No checksums. No digital signatures. No verification that the thing you're installing won't quietly exfiltrate your code to some server in Eastern Europe. Sound familiar? It should. We've been here before—back when passwords lived in plain text and "security" meant hoping nobody looked too hard.
The hosts dig into the unsettling parallels between today's LLM plugin ecosystem and the wild west of early internet security.
Links
Adam's Dotfiles Blog Post - Getting his shit together with dotfiles, Brewfile, and 1Password SSH agent
CF Community LLM Marketplace - Adam's community marketplace for CFML-related Claude skills
Steve Yegge's Google Platforms Rant - The infamous accidentally-public Google+ post
Vibe Coding by Gene Kim & Steve Yegge - The audiobook Ben's been enjoying
Socket.dev - Supply chain security for npm dependencies
Follow the show and be sure to join the discussion on Discord! Our website is workingcode.dev and we're @workingcode.dev on Bluesky. New episodes drop weekly on Thursday.
And, if you're feeling the love, support us on Patreon.
With audio editing and engineering by ZCross Media.
Full show notes and transcript here.
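A footnote on the episode's integrity theme: the kind of verification the hosts say is missing starts, at minimum, with a published checksum you can compare against before installing anything. Here is a minimal sketch in Python of that check; the file name and the expected digest are hypothetical placeholders, and a real fix would also need signatures and a trusted place to publish them.

# A minimal sketch, assuming a skill file named SKILL.md sits in the current
# directory and that its author published a SHA-256 digest somewhere trusted.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

skill_file = Path("SKILL.md")                  # hypothetical skill file
expected = "paste-the-published-digest-here"   # hypothetical published value

actual = sha256_of(skill_file)
if actual != expected:
    raise SystemExit(f"Checksum mismatch: {actual} != {expected}")
print("Checksum OK")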