Set of subroutine definitions, protocols, and tools for building software and applications
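As a minimal illustration of the definition above, here is a sketch of a tiny HTTP API in Python. It assumes the Flask library; the endpoint name and payload are purely illustrative and not tied to any episode below:

```python
# A minimal HTTP API: one endpoint (a "subroutine definition" reachable
# over the HTTP protocol) that other software can build against.
# Assumes Flask (`pip install flask`); names here are illustrative.
from flask import Flask, jsonify

app = Flask(__name__)

@app.get("/v1/status")
def status():
    # Clients depend on this contract: a GET returns JSON with these keys.
    return jsonify({"service": "example", "status": "ok"})

if __name__ == "__main__":
    app.run(port=8000)
```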
Episode Topic: The History of Asian Allure (https://go.nd.edu/c9944d)

Discover the origins, evolution, and impact of Asian Allure, the annual, student-led performance that has become a cornerstone of the Asian Pacific Islander community at Notre Dame. Explore how Asian Allure began, how it has grown over the years, and what it continues to mean for generations of API students at Our Lady's University in a conversation with co-founder Teresita Mercado '97, '00 J.D., and her daughters Bianca Feix '25 and Mia Feix '27, moderated by Cecilia Lucero '84.

Featured Speakers:
Cecilia Lucero '84, University of Notre Dame
Bianca Feix '25
Mia Feix '27, University of Notre Dame
Teresita T. Mercado '97, '00 J.D., Buchalter

Read this episode's recap over on the University of Notre Dame's open online learning community platform, ThinkND: https://go.nd.edu/e124a2.

This podcast is part of the ThinkND series titled 120 Years Later: Asian and Pacific Islander Alumni Perspectives.

Thanks for listening! The ThinkND Podcast is brought to you by ThinkND, the University of Notre Dame's online learning community. We connect you with videos, podcasts, articles, courses, and other resources to inspire minds and spark conversations on topics that matter to you — everything from faith and politics, to science, technology, and your career. Learn more about ThinkND and register for upcoming live events at think.nd.edu. Join our LinkedIn community for updates, episode clips, and more.
FHIR-Native Architecture: Building Healthcare IT for True Interoperability As healthcare systems race to meet 21st Century Cures Act mandates, a critical question emerges: retrofit or rebuild? Mike O'Neill, CEO of MedicaSoft, explains why FHIR-native architecture delivers fundamentally different interoperability outcomes than legacy systems with API layers bolted on. This conversation cuts through vendor marketing to examine the structural, semantic, and operational advantages of building healthcare IT from the ground up on HL7 FHIR standards. O'Neill draws on extensive experience leading P&L, engineering, and operations across healthcare IT startups and public companies to explain what "FHIR-native" actually means in practice—and why it matters for CIOs evaluating vendor claims. Learn how purpose-built FHIR architecture eliminates middleware complexity, reduces integration costs, and enables real-time clinical data exchange that retrofitted systems struggle to deliver. Find all of our network podcasts on your favorite podcast platforms and be sure to subscribe and like us. Learn more at www.healthcarenowradio.com/listen/
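To ground what FHIR-style exchange looks like from a client's perspective, here is a minimal sketch: resources are plain JSON over REST, with no custom middleware in between. It assumes Python's `requests` library and the public HAPI FHIR R4 test server; the search parameters are illustrative, and a production system would add authentication (e.g., SMART on FHIR) on top:

```python
# Minimal sketch of a FHIR R4 client call. Assumes `pip install requests`
# and the public HAPI FHIR test server; parameters are illustrative.
import requests

BASE = "https://hapi.fhir.org/baseR4"

# Search for Patient resources by family name; FHIR defines both the
# search parameters and the Bundle envelope that comes back.
resp = requests.get(f"{BASE}/Patient", params={"family": "Smith", "_count": 5})
resp.raise_for_status()
bundle = resp.json()

for entry in bundle.get("entry", []):
    patient = entry["resource"]
    name = patient.get("name", [{}])[0]
    print(patient["id"], name.get("family"), name.get("given"))
```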
Will Caldwell started Snap after his first real estate software startup fizzled, pivoting from agent tools to regulated compliance data. He discovered lenders were required to buy hazard and flood certifications, and realized this was a "painkiller" product. He built Snap as a data and analytics platform for real estate and mortgage underwriting.

Snap grew from a single California compliance product into a national flood data business, reaching $5M in revenue and 30 employees. The company charged per-loan transaction fees and embedded via API into mortgage software systems. With double-digit market share, Snap focused on customer experience, automation, and expanding wallet share inside lenders' workflows.

In October 2024, Snap sold 51% of the company to Intercontinental Exchange, parent of ICE Mortgage Technology, at a double-digit revenue multiple. Will stayed on to scale the platform inside a much larger ecosystem. His key lesson: dominate a narrow niche, build a required product, and let strategic buyers find you.

Key Takeaways
Required Beats Optional – Legal compliance products create urgency and retention because customers must buy to complete revenue-generating transactions.
Micro-Niche Entry – Starting in a narrow regulated segment let Snap win trust, then expand into much larger adjacent markets.
API = Distribution – Embedding inside legacy systems turned Snap into a one-click button that scaled through partners' existing sales teams.
Customer Experience Wins – In commodity data markets, faster, cheaper, simpler delivery became Snap's main competitive weapon.

Quote from Will Caldwell, CEO and Co-Founder of Snap
"You don't need to build a huge business to get a huge, life-changing exit. Just stay laser-focused. Don't chase shiny objects. I see many founders trying to boil the ocean. It is about staying focused on a single niche.
"I think vertical SaaS has many great niches, and horizontal software is challenging. You need a lot of money to go after horizontal solutions across industries. However, with vertical SaaS products and niches, there is a lot of overlooked opportunity; the real estate vertical is one prime example."

Links
Will Caldwell on LinkedIn
Snap on LinkedIn
Snap website

Podcast Sponsor – LaunchBay
LaunchBay helps B2B software companies automate client onboarding and implementation so customers activate faster and everyone stays aligned. If your onboarding includes data collection, setup steps, approvals, training, or any level of customization, LaunchBay replaces the messy mix of emails, spreadsheets, and meetings with a clear, all-in-one onboarding system. Teams use LaunchBay to onboard clients faster, stay on top of follow-ups automatically, and deliver a smoother experience, without hiring more people or adding more tools. Visit launchbay.com/practical and get 25% off your first 3 months on any LaunchBay plan.

The Practical Founders Podcast
Tune into the Practical Founders Podcast for weekly in-depth interviews with founders who have built valuable software companies without big funding. Subscribe to the Practical Founders Podcast using your favorite podcast app or view on our YouTube channel. Get the weekly Practical Founders newsletter and podcast updates at practicalfounders.com.

Practical Founders CEO Peer Groups
Be part of a committed and confidential group of practical founders creating valuable software companies without big VC funding.
A Practical Founders Peer Group is a committed and confidential group of founders/CEOs who want to help you succeed on your terms. Each Practical Founders Peer Group is personally curated and moderated by Greg Head.
"Unfortunately, even though the prompt is very precise, the answers differ substantively with almost every query." Łukasz quotes feedback from a non-technical user - and it was precisely this frustration with the non-deterministic nature of LLMs that prompted this episode on agentic proofs of concept. Because before you build an army of AI agents, you have to understand: ChatGPT and Copilot are a no-go for business experiments - they have their own system prompt, auto-switching, and logic that you won't get through the API.
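To make that contrast concrete, here is a minimal sketch of the kind of controlled setup the episode argues for: calling the model through the raw API, where you own the system prompt and the sampling parameters. This assumes the official OpenAI Python SDK; the model name and prompt are illustrative, and even `temperature=0` with a fixed `seed` only reduces, rather than eliminates, run-to-run variation:

```python
# Minimal sketch: a reproducible-as-possible LLM call through the raw API,
# with a fixed system prompt and sampling parameters under your control.
# Assumes the official OpenAI Python SDK (`pip install openai`) and an
# OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = "You are a claims classifier. Answer with exactly one word: APPROVE or REJECT."

def classify(text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},  # you, not the vendor UI, set this
            {"role": "user", "content": text},
        ],
        temperature=0,  # greedy-ish decoding; reduces variance
        seed=42,        # best-effort determinism, not a guarantee
    )
    return resp.choices[0].message.content.strip()
```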
Hey dear subscriber, Alex here from W&B, let me catch you up! This week started with Anthropic releasing /fast mode for Opus 4.6, continued with ByteDance's reality-shattering video model SeeDance 2.0, and then the open-weights folks pulled up! Z.ai released GLM-5, a 744B top-ranking coder beast, and then today MiniMax dropped a heavily RL'd MiniMax M2.5, showing 80.2% on SWE-bench, nearly beating Opus 4.6! I interviewed Lou from Z.ai and Olive from MiniMax on the show today, back to back, btw - very interesting conversations, starting after the TL;DR!

So while the open-source models were catching up to frontier, OpenAI and Google both dropped breaking news (again, during the show), with Gemini 3 Deep Think shattering ARC-AGI 2 (84.6%) and Humanity's Last Exam (48% w/o tools)... Just an absolute beast of a model update. And OpenAI launched their Cerebras collaboration, with GPT 5.3 Codex Spark, supposedly running at over 1000 tokens per second (but not as smart). Also, crazy week for us at W&B as we scrambled to host GLM-5 on day of release, and we're working on dropping Kimi K2.5 and MiniMax both on our inference service! As always, all show notes at the end, let's DIVE IN!

ThursdAI - AI is speeding up, don't get left behind! Sub and I'll keep you up to date with a weekly catch up.

Open Source LLMs

Z.ai launches GLM-5 - #1 open-weights coder with 744B parameters (X, HF, W&B inference)

The breakaway open-source model of the week is undeniably GLM-5 from Z.ai (formerly known to many of us as Zhipu AI). We were honored to have Lou, the Head of DevRel at Z.ai, join us live on the show at 1:00 AM Shanghai time to break down this monster of a release.

GLM-5 is massive - not something you run at home (hey, that's what W&B inference is for!) - but it's absolutely a model worth thinking about if your company has on-prem requirements and can't share code with OpenAI or Anthropic. They jumped from 355B in GLM-4.5 and expanded their pre-training data to a whopping 28.5T tokens to get these results. But Lou explained that it's not only about data: they adopted DeepSeek's sparse attention (DSA) to help preserve deep reasoning over long contexts (this one has 200K).

Lou summed up the generational leap from version 4.5 to 5 perfectly in four words: "Bigger, faster, better, and cheaper." I dunno about faster - this may be one of those models that you hand off more difficult tasks to - but definitely cheaper, at $1 input/$3.20 output per 1M tokens on W&B! While the evaluations are ongoing, one interesting tidbit from Artificial Analysis was that this model scores the lowest on their hallucination rate bench! Think about this for a second: this model is neck-and-neck with Opus 4.5, and if Anthropic hadn't released Opus 4.6 just last week, this would be an open-weights model that rivals Opus - one of the best models the Western foundational labs, with all their investments, have out there. Absolutely insane times.

MiniMax drops M2.5 - 80.2% on SWE-bench Verified with just 10B active parameters (X, Blog)

Just as we wrapped up our conversation with Lou, MiniMax dropped their release (though not weights yet, we're waiting ⏰), and then Olive Song, a senior RL researcher on the team, joined the pod - and she was an absolute wealth of knowledge! Olive shared that they achieved an unbelievable 80.2% on SWE-bench Verified. Digest this for a second: a 10B active parameter open-source model is directly trading blows with Claude Opus 4.6 (80.8%) on one of the hardest real-world software engineering benchmarks we currently have.
While being - Alex checks notes - 20X cheaper and much faster to run? Apparently their fast version gets up to 100 tokens/s. Olive shared the "not so secret" sauce behind this punch-above-its-weight performance. The massive leap in intelligence comes entirely from their highly decoupled Reinforcement Learning framework called "Forge." They heavily optimized not just for correct answers, but for the end-to-end time of task completion. In the era of bloated reasoning models that spit out ten thousand "thinking" tokens before writing a line of code, MiniMax trained their model across thousands of diverse environments to use fewer tools, think more efficiently, and execute plans faster. As Olive noted, less time waiting and fewer tools called means less money spent by the user (and as confirmed by @swyx with the Windsurf leaderboard, developers often prefer fast but good-enough models). I really enjoyed the interview with Olive - I really recommend you listen to the whole conversation starting at 00:26:15. Kudos MiniMax on the release (and I'll keep you updated when we add this model to our inference service).

Big Labs and breaking news

There's a reason the show is called ThursdAI, and today that reason is clearer than ever: AI's biggest updates happen on a Thursday, often live during the show. This happened twice last week and three times today - first with MiniMax and then with both Google and OpenAI!

Google previews Gemini 3 Deep Think, top reasoning intelligence - SOTA ARC-AGI 2 at 84% & SOTA HLE 48.4% (X, Blog)

I literally went
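Backing up to MiniMax's efficiency story for a moment: the "Forge" framework itself isn't public, but the idea Olive describes (reward correct task completion while penalizing wall-clock time, tool calls, and thinking tokens) can be sketched as a shaped reward. Everything below is an illustrative assumption, not MiniMax's actual code:

```python
# Illustrative sketch of an efficiency-shaped RL reward of the kind Olive
# describes: optimize not just for solving the task, but for solving it
# with less time and fewer tool calls. Coefficients and structure are
# assumptions for illustration, not MiniMax's actual "Forge" internals.
from dataclasses import dataclass

@dataclass
class Rollout:
    solved: bool          # did the agent's patch pass the tests?
    wall_clock_s: float   # end-to-end time for the task
    tool_calls: int       # number of tool invocations used
    thinking_tokens: int  # reasoning tokens emitted before acting

def shaped_reward(r: Rollout,
                  time_penalty: float = 0.001,
                  tool_penalty: float = 0.01,
                  token_penalty: float = 1e-5) -> float:
    # Correctness dominates; the efficiency terms break ties between
    # correct solutions, nudging the policy toward faster, leaner behavior.
    base = 1.0 if r.solved else 0.0
    cost = (time_penalty * r.wall_clock_s
            + tool_penalty * r.tool_calls
            + token_penalty * r.thinking_tokens)
    return base - cost
```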
Sherwin Wu leads engineering for OpenAI's API platform, where roughly 95% of engineers use Codex, often working with fleets of 10 to 20 parallel AI agents.

We discuss:
1. What OpenAI did to cut code review times from 10-15 minutes to 2-3 minutes
2. How AI is changing the role of managers
3. Why the productivity gap between AI power users and everyone else is widening
4. Why "models will eat your scaffolding for breakfast"
5. Why the next 12 to 24 months are a rare window where engineers can leap ahead before the role fully transforms

Brought to you by:
DX — The developer intelligence platform designed by leading researchers
Sentry — Code breaks, fix it faster
Datadog — Now home to Eppo, the leading experimentation and feature flagging platform

Episode transcript: https://www.lennysnewsletter.com/p/engineers-are-becoming-sorcerers

Archive of all Lenny's Podcast transcripts: https://www.dropbox.com/scl/fo/yxi4s2w998p1gvtpu4193/AMdNPR8AOw0lMklwtnC0TrQ?rlkey=j06x0nipoti519e0xgm23zsn9&st=ahz0fj11&dl=0

Where to find Sherwin Wu:
• X: https://x.com/sherwinwu
• LinkedIn: https://www.linkedin.com/in/sherwinwu1

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Introduction to Sherwin Wu
(03:10) AI's role in coding at OpenAI
(06:53) The future of software engineering with AI
(12:26) The stress of managing agents
(15:07) Codex and code review automation
(19:29) The changing role of engineering managers
(24:14) The one-person billion-dollar startup
(31:40) Management lessons
(37:28) Challenges and best practices in AI deployment
(43:56) Hot takes on AI and customer feedback
(48:57) Building for future AI capabilities
(50:16) Where models are headed in the next 18 months
(53:35) Business process automation
(57:22) OpenAI's ecosystem and platform strategy
(01:00:50) OpenAI's mission and global impact
(01:05:21) Building on OpenAI's API and tools
(01:08:16) Lightning round and final thoughts

Referenced:
• Codex: https://openai.com/codex
• OpenAI's CPO on how AI changes must-have skills, moats, coding, startup playbooks, more | Kevin Weil (CPO at OpenAI, ex-Instagram, Twitter): https://www.lennysnewsletter.com/p/kevin-weil-open-ai
• OpenClaw: https://openclaw.ai
• The creator of Clawd: "I ship code I don't read": https://newsletter.pragmaticengineer.com/p/the-creator-of-clawd-i-ship-code
• The Sorcerer's Apprentice: https://en.wikipedia.org/wiki/The_Sorcerer%27s_Apprentice_(Dukas)
• Quora: https://www.quora.com
• Marc Andreessen: The real AI boom hasn't even started yet: https://www.lennysnewsletter.com/p/marc-andreessen-the-real-ai-boom
• Sarah Friar on LinkedIn: https://www.linkedin.com/in/sarah-friar
• Sam Altman on X: https://x.com/sama
• Nicolas Bustamante's "LLMs Eat Scaffolding for Breakfast" post on X: https://x.com/nicbstme/status/2015795605524901957
• The Bitter Lesson: http://www.incompleteideas.net/IncIdeas/BitterLesson.html
• Overton window: https://en.wikipedia.org/wiki/Overton_window
• Developers can now submit apps to ChatGPT: https://openai.com/index/developers-can-now-submit-apps-to-chatgpt
• Responses: https://platform.openai.com/docs/api-reference/responses
• Agents SDK: https://platform.openai.com/docs/guides/agents-sdk
• AgentKit: https://openai.com/index/introducing-agentkit
• Ubiquiti: https://ui.com
• Jujutsu Kaisen on Crunchyroll: https://www.crunchyroll.com/series/GRDV0019R/jujutsu-kaisen?srsltid=AfmBOoqvfzKQ6SZOgzyJwNQ43eceaJTQA2nUxTQfjA1Ko4OxlpUoBNRB
• eero: https://eero.com
• Opendoor: https://www.opendoor.com
Recommended books:
• Structure and Interpretation of Computer Programs: https://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871
• The Mythical Man-Month: Essays on Software Engineering: https://www.amazon.com/Mythical-Man-Month-Software-Engineering-Anniversary/dp/0201835959
• There Is No Antimemetics Division: A Novel: https://www.amazon.com/There-No-Antimemetics-Division-Novel/dp/0593983750
• Breakneck: China's Quest to Engineer the Future: https://www.amazon.com/Breakneck-Chinas-Quest-Engineer-Future/dp/1324106034
• Apple in China: The Capture of the World's Greatest Company: https://www.amazon.com/Apple-China-Capture-Greatest-Company/dp/1668053373

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com
Prabhleen Kaur: When Team Members Raise Concerns with Clarity, Not Anger

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

"My idea of success as a Scrum Master is when you look around, you see motivated people, and when something goes wrong, they come to you not in anger, but with concern." - Prabhleen Kaur

Prabhleen offers a refreshing perspective on measuring success as a Scrum Master that goes beyond velocity charts and feature counts. She shares a pivotal moment when her team was in production, delivering relentlessly with barely any time to breathe. A team member approached her—not with frustration or blame—but with thoughtful concern: "This is not going to work out." He sat down with Prabhleen and the Product Owner, explaining that as the middle layer in an API creation team, delays from upstream were creating a cascading problem. What struck Prabhleen wasn't just the identification of the issue, but how he approached it: with options to discuss, not demands to make. This moment crystallized her definition of success. When team members feel safe enough to voice concerns early, when they come with ideas rather than accusations, when they see themselves as part of the solution rather than victims of circumstances—that's when a Scrum Master has truly succeeded. Prabhleen reminds us that while stakeholders may focus on features delivered, Scrum Masters should watch how well the team responds to change. That adaptability, rooted in psychological safety and mutual trust, is the true measure of a team's maturity.

Self-reflection Question: When problems emerge in your team, do people approach you with defensive anger or constructive concern? What does that tell you about the psychological safety you've helped create?

Featured Retrospective Format for the Week: Keep-Stop-Happy-Gratitude

Prabhleen shares her favorite retrospective format, born from necessity when she joined an established team with dismal participation in their standard three-column retrospectives. She transformed it into a four-column approach: (1) What should we keep doing, (2) What should we stop doing, (3) One thing that will make you happy, and (4) Gratitude for the team. The third column—asking what would make team members happy—opened unexpected doors. Suggestions ranged from team outings to skipping Friday stand-ups, giving Prabhleen real-time insights into team needs without waiting for formal working agreement sessions. The gratitude column proved even more powerful. "Appreciation brings a space where trust is automatically built. When every 15 days you're sitting with the team making a point to say thank you to each other for all the work you've done, everybody feels mutually respected," Prabhleen explains. This ties directly to the trust-building discussed in Tuesday's episode—using retrospectives not just to improve processes, but to strengthen the human connections that make teams resilient.
This week, Ben has a story on an AI-run social network that exposed API keys, raising fears of agent hijacking and corporate data breaches. Dave's got the story of a judge tossing a case after a lawyer repeatedly filed fake AI-generated citations, calling it a failure of basic legal research. While this show covers legal topics, and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney.

Links to today's stories:
Moltbook and the Rise of AI-Agent Networks: An Enterprise Governance Wake-Up Call
AI legal advice is driving lawyers bananas
Lawyer sets new standard for abuse of AI; judge tosses case

Get the weekly Caveat Briefing delivered to your inbox. Like what you heard? Be sure to check out and subscribe to our Caveat Briefing, a weekly newsletter available exclusively to N2K Pro members on N2K CyberWire's website. N2K Pro members receive our Thursday wrap-up covering the latest in privacy, policy, and research news, including incidents, techniques, compliance, trends, and more. This week's Caveat Briefing covers a lawsuit that kicked off in California, where a woman is suing Meta and YouTube for the harm they allegedly cause to kids. Curious about the details? Head over to the Caveat Briefing for the full scoop and additional compelling stories.

Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com. Hope to hear from you.

Learn more about your ad choices. Visit megaphone.fm/adchoices
Jake and Michael discuss all the latest Laravel releases, tutorials, and happenings in the community.

Show links
hasSole() Collection Method in Laravel 12.49.0
hasMany() Collection Method in Laravel 12.50.0
Filament v5.2.0 Adds a Callout Component
Clawdbot Rebrands to Moltbot After Trademark Request From Anthropic
Install Laravel Package Guidelines and Skills in Boost
Fuse for Laravel: A Circuit Breaker Package for Queue Jobs
NativePHP for Mobile Is Now Free
Manage PostgreSQL Databases Directly in VS Code with Microsoft's Extension
Livewire 4 and Blade Improvements in Laravel VS Code Extension v1.5.0
Statamic 6 Is Officially Released
Laravel Announces Official AI SDK for Building AI-Powered Apps
Claude Opus 4.6 adds adaptive thinking, 128K output, compaction API, and more
OpenAI Releases GPT-5.3-Codex, a New Codex Model for Agent-Style Development
Laravel Live UK returns to London on June 18-19, 2026
Bagisto Visual: Theme Framework with Visual Editor for Laravel E-commerce
Generate Complete Application Modules with a Single Command using Laravel TurboMaker
Encrypt Files in Laravel with AES-256-GCM and Memory-Efficient Streaming
Mask Sensitive Eloquent Attributes on Retrieval in Laravel
Laravel Related Content: Semantic Relationships Using pgvector
Mid-market organizations are transitioning from pilot projects to operationalizing generative AI and agentic workflows, according to a TechEYE article and Tech Isle survey cited by Dave Sobel. This shift centers on outcome-driven automation but exposes providers to new liability concerns, mainly due to fragmented, unreliable data and shadow AI usage—employees using unauthorized tools outside official controls. The primary risk is that MSPs may be blamed for incidents where contract boundaries and technical controls do not cover browser-based generative AI use, making forensic evidence and documented enforcement essential for defending accountability. Supporting data from Tech Isle found that over 5,000 companies are pursuing structured approaches to AI-enabled growth, but face persistent issues in data trust, governance, and user fatigue.

Additionally, European investment in sovereign cloud infrastructure is projected to triple between 2025 and 2027, driven by regulatory demands and concerns about U.S. data sovereignty. MSPs managing split architectures—sovereign providers for regulated data and hyperscalers for everything else—encounter API mismatches, operational complexity, and margin pressure. The recommendation is to standardize policy enforcement, identity management, and residency mapping while prioritizing audit-ready reporting and exception handling.

AI-driven cyberattacks have increased, with reports from Level Blue and Check Point Research highlighting a surge in both attack volume and sophistication. Only 53% of CISOs feel prepared for AI threats, despite 45% expecting to be impacted within a year. Browser-based generative AI use introduces visibility gaps, raising the risk of negligence claims when service providers cannot demonstrate governance or forensic readiness. Reauthorization of the Cybersecurity Information Sharing Act (CISA 2015) underscores that voluntary data sharing is inadequate, with CIRCIA now requiring mandatory 72-hour incident reporting for critical infrastructure.

The key takeaways for MSPs and IT leaders are to proactively define AI coverage and governance in contracts, enforce acceptable use policies, and instrument monitoring to close visibility gaps. Providers who can deliver forensic-grade telemetry, managed compliance programs, and operational readiness for incident reporting will be better positioned to defend against penalties, retain higher-value accounts, and offer meaningful differentiation. These structural challenges—fragmented control planes, increased compliance costs, and permanent risk friction—necessitate a strategic shift toward governance-led service models.

Three things to know today
00:00 Midmarket Shifts to Agentic AI as Europe Triples Sovereign Cloud Spending by 2027
06:08 Most Security Chiefs Say They're Not Ready for AI-Powered Cyberattacks Coming This Year
09:46 CISA 2015 Reauthorized Through 2026; CIRCIA Mandates Expose Voluntary Sharing Failure

This is the Business of Tech.

Supported by:
TimeZest
IT Service Provider University
What happens when all the world's money moves on chain? That's not a hypothetical for Marc Boiron, CEO of Polygon Labs; it's the company's mission. In this episode, Marc explains how Polygon is evolving from its roots as an Ethereum layer two into the blockchain for global payments, detailing two recent acquisitions that form the foundation of what he calls the "open money stack" - a single API combining on-ramps, wallets, and cross-chain interoperability.

With over $2.5 trillion in transaction volume already processed and partnerships with Revolut, Stripe, Nubank, and dozens of fintechs across Latin America, Africa, and Asia, Marc makes the case that stablecoins are just the beginning. He shares why tokenized bank deposits will be the real game-changer, how banks are already positioning to profit from this shift, and why in 10 years he believes every dollar, whether paying a merchant down the street or sending a remittance across the globe, will move on a blockchain without anyone even thinking about it.

In this podcast you will learn:
How Marc first got interested in blockchain and crypto technology.
Why he decided to make the move to Polygon Labs.
Why Polygon decided to focus on payments.
All the components you need to move money around the world on blockchain.
The idea behind the open money stack.
How Polygon is working with the likes of Revolut and Stripe.
How they differentiate themselves from the other payments blockchains.
What they are doing in AML and sanctions policy.
The scale that Polygon is at today when it comes to transaction volume.
What the financial system will look like when more money stays on chain.
The two things banks ask in their initial conversations with Polygon.
How money will transform in the next 10 years and why most people will not notice.

Connect with Fintech One-on-One:
Tweet me @PeterRenton
Connect with me on LinkedIn
Find previous Fintech One-on-One episodes
This podcast features Gabriele Corso and Jeremy Wohlwend, co-founders of Boltz and authors of the Boltz Manifesto, discussing the rapid evolution of structural biology models from AlphaFold to their own open-source suite, Boltz-1 and Boltz-2. The central thesis is that while single-chain protein structure prediction is largely "solved" through evolutionary hints, the next frontier lies in modeling complex interactions (protein-ligand, protein-protein) and generative protein design, which Boltz aims to democratize via open-source foundations and scalable infrastructure.

Full Video Pod
On YouTube!

Timestamps
* 00:00 Introduction to Benchmarking and the "Solved" Protein Problem
* 06:48 Evolutionary Hints and Co-evolution in Structure Prediction
* 10:00 The Importance of Protein Function and Disease States
* 15:31 Transitioning from AlphaFold 2 to AlphaFold 3 Capabilities
* 19:48 Generative Modeling vs. Regression in Structural Biology
* 25:00 The "Bitter Lesson" and Specialized AI Architectures
* 29:14 Development Anecdotes: Training Boltz-1 on a Budget
* 32:00 Validation Strategies and the Protein Data Bank (PDB)
* 37:26 The Mission of Boltz: Democratizing Access and Open Source
* 41:43 Building a Self-Sustaining Research Community
* 44:40 Boltz-2 Advancements: Affinity Prediction and Design
* 51:03 BoltzGen: Merging Structure and Sequence Prediction
* 55:18 Large-Scale Wet Lab Validation Results
* 01:02:44 Boltz Lab Product Launch: Agents and Infrastructure
* 01:13:06 Future Directions: Developability and the "Virtual Cell"
* 01:17:35 Interacting with Skeptical Medicinal Chemists

Key Summary

Evolution of Structure Prediction & Evolutionary Hints
* Co-evolutionary Landscapes: The speakers explain that breakthrough progress in single-chain protein prediction relied on decoding evolutionary correlations where mutations in one position necessitate mutations in another to conserve 3D structure.
* Structure vs. Folding: They differentiate between structure prediction (getting the final answer) and folding (the kinetic process of reaching that state), noting that the field is still quite poor at modeling the latter.
* Physics vs. Statistics: RJ posits that while models use evolutionary statistics to find the right "valley" in the energy landscape, they likely possess a "light understanding" of physics to refine the local minimum.

The Shift to Generative Architectures
* Generative Modeling: A key leap in AlphaFold 3 and Boltz-1 was moving from regression (predicting one static coordinate) to a generative diffusion approach that samples from a posterior distribution.
* Handling Uncertainty: This shift allows models to represent multiple conformational states and avoid the "averaging" effect seen in regression models when the ground truth is ambiguous.
* Specialized Architectures: Despite the "bitter lesson" of general-purpose transformers, the speakers argue that equivariant architectures remain vastly superior for biological data due to the inherent 3D geometric constraints of molecules.

Boltz-2 and Generative Protein Design
* Unified Encoding: Boltz-2 (and BoltzGen) treats structure and sequence prediction as a single task by encoding amino acid identities into the atomic composition of the predicted structure.
* Design Specifics: Instead of a sequence, users feed the model blank tokens and a high-level "spec" (e.g., an antibody framework), and the model decodes both the 3D structure and the corresponding amino acids.
* Affinity Prediction: While model confidence is a common metric, Boltz-2 focuses on affinity prediction—quantifying exactly how tightly a designed binder will stick to its target.

Real-World Validation and Productization
* Generalized Validation: To prove the model isn't just "regurgitating" known data, Boltz tested its designs on 9 targets with zero known interactions in the PDB, achieving nanomolar binders for two-thirds of them.
* Boltz Lab Infrastructure: The newly launched Boltz Lab platform provides "agents" for protein and small molecule design, optimized to run 10x faster than open-source versions through proprietary GPU kernels.
* Human-in-the-Loop: The platform is designed to convert skeptical medicinal chemists by allowing them to run parallel screens and use their intuition to filter model outputs.

Transcript

RJ [00:05:35]: But the goal remains to, like, you know, really challenge the models, like, how well do these models generalize? And, you know, we've seen in some of the latest CASP competitions, like, while we've become really, really good at proteins, especially monomeric proteins, you know, other modalities still remain pretty difficult. So it's really essential, you know, in the field that there are, like, these efforts to gather, you know, benchmarks that are challenging. So it keeps us in line, you know, about what the models can do or not.

Gabriel [00:06:26]: Yeah, it's interesting you say that, like, in some sense, CASP, you know, at CASP 14, a problem was solved and, like, pretty comprehensively, right? But at the same time, it was really only the beginning. So you can say, like, what was the specific problem you would argue was solved? And then, like, you know, what is remaining, which is probably quite open.

RJ [00:06:48]: I think we'll steer away from the term solved, because we have many friends in the community who get pretty upset at that word. And I think, you know, fairly so. But the problem that was, you know, that a lot of progress was made on was the ability to predict the structure of single chain proteins. So proteins can, like, be composed of many chains. And single chain proteins are, you know, just a single sequence of amino acids.
And one of the reasons that we've been able to make such progress is also because we take a lot of hints from evolution. So the way the models work is that, you know, they sort of decode a lot of hints that come from evolutionary landscapes. So if you have, like, you know, some protein in an animal, and you go find the similar protein across, like, you know, different organisms, you might find different mutations in them. And as it turns out, if you take a lot of the sequences together and you analyze them, you see that some positions in the sequence tend to evolve at the same time as other positions in the sequence - sort of this, like, correlation between different positions. And it turns out that that is typically a hint that these two positions are close in three dimensions. So part of the, you know, part of the breakthrough has been, like, our ability to also decode that very, very effectively. But what it implies also is that in the absence of that co-evolutionary landscape, the models don't quite perform as well. And so, you know, I think when that information is available, maybe one could say, you know, the problem is, like, somewhat solved from the perspective of structure prediction; when it isn't, it's much more challenging. And I think it's also worth differentiating - sometimes we confound them a little bit - structure prediction and folding. Folding is the more complex process of actually understanding, like, how it goes from, like, this disordered state into, like, a structured, like, state. And that, I don't think we've made that much progress on. But the idea of, like, yeah, going straight to the answer - we've become pretty good at that.

Brandon [00:08:49]: So there's this protein that is, like, just a long chain and it folds up. Yeah. And so we're good at getting from that long chain, in whatever form it was originally, to the thing. But we don't know how it necessarily gets to that state. And there might be intermediate states that it's in sometimes that we're not aware of.

RJ [00:09:10]: That's right. And that relates also to, like, you know, our general ability to model, like, the different... you know, proteins are not static. They move; they take different shapes based on their energy states. And I think we are also not that good at understanding the different states that the protein can be in, and at what frequency, what probability. So I think the two problems are quite related in some ways. Still a lot to solve. But I think it was very surprising at the time, you know, that even with these evolutionary hints, we were able to, you know, make such dramatic progress.

Brandon [00:09:45]: So I want to ask, why do the intermediate states matter? But first, I kind of want to understand: why do we care what proteins are shaped like?

Gabriel [00:09:54]: Yeah, I mean, proteins are kind of the machines of our body. You know, the way that all the processes that we have in our cells, you know, work is typically through proteins - sometimes other molecules - sort of intermediate interactions. And through those interactions, we have all sorts of cell functions. And so when we try to understand, you know, a lot of biology - how our body works, how disease works - we often try to boil it down to: okay, what is going right in the case of, you know, our normal biological function, and what is going wrong in the case of the disease state? And we boil it down to kind of, you know, proteins and kind of other molecules and their interactions.
And so when we try predicting the structure of proteins, it's critical to, you know, have an understanding of kind of those interactions. It's a bit like the difference between having kind of a list of parts that you would put in a car and seeing kind of the car in its final form; you know, seeing the car really helps you understand what it does. On the other hand, kind of going to your question of, you know, why do we care about how the protein folds - or, you know, how the car is made, to some extent - is that, you know, sometimes when something goes wrong, you know, there are, you know, cases of proteins misfolding in some diseases and so on. If we don't understand this folding process, we don't really know how to intervene.

RJ [00:11:30]: There's this nice line in the, I think it's in the AlphaFold 2 manuscript, where they sort of discuss also like why we're even hopeful that we can target the problem in the first place. And there's this notion that, like, well, for proteins that fold, the folding process is almost instantaneous, which is a strong, like, you know, signal that, yeah, we might be able to predict this very, like, constrained thing that the protein does so quickly. And of course that's not the case for, you know, all proteins. And there's a lot of, like, really interesting mechanisms in the cells, but yeah, I remember reading that and thought, yeah, that's somewhat of an insightful point.

Gabriel [00:12:10]: I think one of the interesting things about the protein folding problem, and part of the reason why people thought it was impossible, is that it used to be studied as kind of like a classical example of an NP problem. Like, there are so many different, you know, types of, you know, shapes that, you know, these amino acids could take, and this grows combinatorially with the size of the sequence. And so there used to be kind of a lot of actually more theoretical computer science thinking about and studying protein folding as an NP problem. And so it was very surprising also from that perspective, kind of seeing machine learning do this. Clearly, there is some, you know, signal in those sequences - through evolution, but also through kind of other things that, you know, us as humans are probably not really able to, uh, to understand, but that these models have, have learned.

Brandon [00:13:07]: And so Andrew White - we were talking to him a few weeks ago - said that he was following the development of this and that there were actually ASICs that were developed just to solve this problem. So, again, there were many, many, many millions of computational hours spent trying to solve this problem before AlphaFold. And just to be clear, one thing that you mentioned was that there's this kind of co-evolution of mutations and that you see this again and again in different species. So explain: why does that give us a good hint that they're close by to each other? Yeah.

RJ [00:13:41]: Um, like, think of it this way: that, you know, if I have, you know, some amino acid that mutates, it's going to impact everything around it, right? In three dimensions. And so it's almost like the protein, through several, probably random mutations and evolution, like, you know, ends up sort of figuring out that this other amino acid needs to change as well for the structure to be conserved.
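The co-evolution signal RJ describes can be made concrete with a toy calculation: scan pairs of columns in a multiple sequence alignment (MSA) and score how strongly they co-vary. This is a deliberately simplified sketch - real pipelines use direct coupling analysis with corrections such as APC, not raw mutual information - and the tiny alignment here is invented for illustration:

```python
# Toy version of the co-evolution hint: columns of an MSA that mutate
# together are candidates for 3D contact. Real methods (e.g. direct
# coupling analysis) are far more careful; this shows only the core idea.
import math
from collections import Counter
from itertools import combinations

# Invented toy alignment: one sequence per homolog, one column per residue.
# Columns 0 and 2 co-vary (A<->D, P<->H), mimicking compensating mutations.
msa = [
    "ACDEG",
    "ACDFG",
    "PCHEG",
    "PCHFG",
    "ACDEG",
]

def mutual_information(col_i, col_j):
    """Mutual information between two alignment columns (in nats)."""
    n = len(col_i)
    pi, pj = Counter(col_i), Counter(col_j)
    pij = Counter(zip(col_i, col_j))
    mi = 0.0
    for (a, b), count in pij.items():
        p_ab = count / n
        mi += p_ab * math.log(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

cols = list(zip(*msa))  # transpose: sequences -> columns
scores = {(i, j): mutual_information(cols[i], cols[j])
          for i, j in combinations(range(len(cols)), 2)}

# Highest-scoring pairs are the "hint" that residues i and j are
# probably close to each other in three dimensions.
for (i, j), s in sorted(scores.items(), key=lambda kv: -kv[1])[:3]:
    print(f"columns {i} and {j}: MI = {s:.3f}")
```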
Uh, so this whole principle is that the structure is probably largely conserved, you know, because there's this function associated with it. And so it's really sort of like different positions compensating for, for each other.

Brandon [00:14:17]: I see. Those hints in aggregate give us a lot. So you can start to look at what kinds of information about what is close to each other, and then you can start to look at what kinds of folds are possible given the structure, and then what is the end state. And therefore you can make a lot of inferences about what the actual total shape is.

RJ [00:14:30]: Yeah, that's right. It's almost like, you know, you have this big, like, three-dimensional valley, you know, where you're sort of trying to find, like, these low energy states, and there's so much to search through that it's almost overwhelming. But these hints, they sort of maybe put you in an area of the space that's already, like, kind of close to the solution - maybe not quite there yet. And there's always this question of, like, how much physics are these models learning, you know, versus, like, just pure, like, statistics. And, like, I think one of the things, at least that I believe, is that once you're in that sort of approximate area of the solution space, then the models have, like, some understanding, you know, of how to get you to, like, you know, the lower energy, uh, low energy state. And so maybe they have some, some light understanding of physics, but maybe not quite enough, you know, to know how to, like, navigate the whole space. Right. Okay.

Brandon [00:15:25]: So we need to give it these hints to kind of get into the right valley and then it finds the, the minimum or something. Yeah.

Gabriel [00:15:31]: One interesting explanation of how AlphaFold works that I think is quite insightful - of course, it doesn't cover kind of the entirety of what AlphaFold does - is one I'm going to borrow from Sergey Ovchinnikov from MIT. So he sees it this way: the interesting thing about AlphaFold is it's got this very peculiar architecture that we have seen, you know, used, and this architecture operates on this, you know, pairwise context between amino acids. And so the idea is that probably the MSA gives you this first hint about what potential amino acids are close to each other.

Brandon: MSA is multiple sequence alignment?

Gabriel: Exactly. Yeah. Exactly. This evolutionary information. Yeah. And, you know, from this evolutionary information about potential contacts, it's almost as if the model is running some kind of, you know, Dijkstra algorithm where it's sort of decoding: okay, these have to be close. Okay, then if these are close and this is connected to this, then this has to be somewhat close. And so you decode this; that becomes basically a pairwise kind of distance matrix. And then from this rough pairwise distance matrix, you decode kind of the actual potential structure.

Brandon [00:16:42]: Interesting. So there's kind of two different things going on in the kind of coarse-grain and then the fine-grain optimizations. Interesting. Yeah. Very cool.

Gabriel [00:16:53]: Yeah. You mentioned AlphaFold3. So maybe we have a good time to move on to that. So yeah, AlphaFold2 came out and it was, like, I think fairly groundbreaking for this field. Everyone got very excited. A few years later, AlphaFold3 came out - and maybe for some more history, like, what were the advancements in AlphaFold3? And then I think maybe we'll, after that, we'll talk a bit about the sort of how it connects to Boltz. But anyway.
Yeah. So after AlphaFold2 came out, you know, Jeremy and I got into the field, and with many others, you know, the clear problem that was, you know, obvious after that was: okay, now we can do individual chains. Can we do interactions - interactions between different proteins, proteins with small molecules, proteins with other molecules?

And so, why are interactions important? Interactions are important because to some extent that's kind of the way that, you know, these machines, you know, these proteins have a function; you know, the function comes by the way that they interact with other proteins and other molecules. Actually, in the first place, you know, the individual machines are often, as Jeremy was mentioning, not made of a single chain, but they're made of multiple chains. And then these multiple chains interact with other molecules to give the function to those. And on the other hand, you know, when we try to intervene on these interactions - think about, like, a disease, think about, like, a biosensor or many other ways - we are trying to design the molecules or proteins that interact in a particular way with what we would call a target protein or target.

You know, this problem, after AlphaFold2, became clear - kind of one of the biggest problems in the field to solve. Many groups, including kind of ours and others, you know, started making some kind of contributions to this problem of trying to model these interactions. And AlphaFold3 was, you know, a significant advancement on the problem of modeling interactions. And one of the interesting things that they were able to do - while, you know, some of the rest of the field really tried to model different interactions separately, you know, how a protein interacts with small molecules, how a protein interacts with other proteins, how RNA or DNA have their structure - they put everything together and, you know, trained very large models with a lot of advances, including changing some of the key architectural choices, and managed to get a single model that was able to set this new state-of-the-art performance across all of these different kind of modalities, whether that was protein-small molecule, which is critical to developing kind of new drugs, protein-protein, or understanding, you know, interactions of proteins with RNA and DNA, and so on.
Brandon [00:19:39]: Just to satisfy the AI engineers in the audience, what were some of the key architectural and data changes that made that possible?

Gabriel [00:19:48]: Yeah, so one critical one - which was not necessarily unique to AlphaFold3; there were actually a few other teams, including ours, in the field that proposed this - was moving from, you know, modeling structure prediction as a regression problem, where there is a single answer and you're trying to shoot for that answer, to a generative modeling problem, where you have a posterior distribution of possible structures and you're trying to sample from this distribution. And this achieves two things. One is it starts to allow us to try to model more dynamic systems. As we said, you know, some of these proteins can actually take multiple structures. And so, you know, you can now model that, you know, through kind of modeling the entire distribution.

But on the other hand, from more kind of core modeling questions, when you move from a regression problem to a generative modeling problem, you are really tackling the way that you think about uncertainty in the model in a different way. So if you think about, you know, "I'm undecided between different answers," what's going to happen in a regression model is that, you know, I'm going to try to make an average of those different kind of answers that I had in mind. When you have a generative model, what you're going to do is, you know, sample all these different answers and then maybe use separate models to analyze those different answers and pick out the best. So that was kind of one of the critical improvements.

The other improvement is that they significantly simplified, to some extent, the architecture, especially of the final model that takes kind of those pairwise representations and turns them into an actual structure. And that now looks a lot more like a more traditional transformer than, you know, the very specialized equivariant architecture that it was in AlphaFold2.

Brandon [00:21:41]: So this is a bitter lesson, a little bit.

Gabriel [00:21:45]: There is some aspect of a bitter lesson, but the interesting thing is that it's very far from, you know, being like a simple transformer. This field is one of the, I argue, very few fields in applied machine learning where we still have kind of architectures that are very specialized. And, you know, there are many people that have tried to replace these architectures with, you know, simple transformers. And, you know, there is a lot of debate in the field, but I think kind of the consensus is that, you know, the performance that we get from the specialized architectures is vastly superior to what we get through a single transformer. Another interesting thing, staying on the modeling and machine learning side, which I think is somewhat counterintuitive compared to some of the other kind of fields and applications, is that scaling hasn't really worked kind of the same in this field. Now, you know, models like AlphaFold2 and AlphaFold3 are, you know, still very large models.

RJ [00:29:14]: ...in a place, I think, where we had, you know, some experience working, you know, with the data and working with this type of models. And I think that put us already in, like, a good place to, you know, to produce it quickly. And, you know, I would even say, like, I think we could have done it quicker. The problem was, like, for a while we didn't really have the compute. And so we couldn't really train the model. And actually, we only trained the big model once. That's how much compute we had: we could only train it once. And so, like, while the model was training, we were, like, finding bugs left and right - a lot of them that I wrote. And, like, I remember I was, like, sort of, like, you know, doing, like, surgery in the middle - like, stopping the run, making the fix, like, relaunching. And yeah, we never actually went back to the start. We just, like, kept training it with, like, the bug fixes along the way, which would be impossible to reproduce now. Yeah, yeah - no, that model, like, has gone through such a curriculum that, you know, it learned some weird stuff. But yeah, somehow by miracle, it worked out.

Gabriel [00:30:13]: The other funny thing is that the way that we were training most of that model was through a cluster from the Department of Energy. But that's sort of, like, a shared cluster that many groups use.
And so we were basically training the model for two days, and then it would go back to the queue and stay a week in the queue. Oh, yeah. And so it was pretty painful. And so, actually, kind of towards the end, with Evan, the CEO of Genesis - basically, you know, I was telling him a bit about the project and, you know, kind of telling him about this frustration with the compute. And so luckily, you know, he offered to kind of help. And so we got the help from Genesis to, you know, finish up the model. Otherwise, it probably would have taken a couple of extra weeks.

Brandon [00:30:57]: Yeah, yeah.

Brandon [00:31:02]: And then, and then there's some progression from there.

Gabriel [00:31:06]: Yeah, so I would say kind of that Boltz-1, but also kind of these other kinds of models that came around the same time, were a big leap from, you know, kind of the previous kind of open-source models, and, you know, kind of really approaching the level of AlphaFold3. But I would still say that, you know, even to this day, there are, you know, some specific instances where AlphaFold3 works better. I think one common example is antibody-antigen prediction, where, you know, AlphaFold3 still seems to have an edge in many situations. Obviously, these are somewhat different models - you run them, you obtain different results - so it's not always the case that one model is better than the other, but kind of in aggregate, we still saw, especially at the time, AlphaFold3, you know, still having a bit of an edge.

Brandon [00:32:00]: We should talk about this more when we talk about BoltzGen, but, like, how do you know one model is better than the other? Like, you, so you, I make a prediction, you make a prediction - like, how do you know?

Gabriel [00:32:11]: Yeah, so the great thing about kind of structure prediction - and, you know, once we go into the design space of designing new small molecules, new proteins, this becomes a lot more complex - but a great thing about structure prediction is that, a bit like, you know, CASP was doing, basically the way that you can evaluate the models is that you train a model on structures that were, you know, released across the field up until a certain time. And, you know, one of the things that we didn't talk about that was really critical in all this development is the PDB, which is the Protein Data Bank. It's this common resource, basically a common database where every biologist publishes their structures. And so we can, you know, train on, you know, all the structures that were put in the PDB until a certain date. And then we basically look for recent structures - okay, which structures look pretty different from anything that was published before - because we really want to try to understand generalization.

Brandon [00:33:13]: And then on these new structures, we evaluate all these different models. And so you just know when AlphaFold3 was trained, you know, and you intentionally train to the same date or something like that.

Gabriel [00:33:24]: Exactly. Right. Yeah. And so this is kind of the way that you can somewhat easily kind of compare these models - obviously, that assumes, you know, the training cutoff.

Brandon: You've always been very passionate about validation. I remember, like, DiffDock, and then there was, like, DiffDock-L and DockGen. You've thought very carefully about this in the past.
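The time-split evaluation Gabriel describes (train on everything deposited before a cutoff date, test on later, dissimilar structures) is simple to express in code. A minimal sketch under invented data structures; real pipelines also filter by sequence and structural similarity computed against the training set, not just by date:

```python
# Minimal sketch of a PDB-style time split: train on structures released
# before a cutoff date, evaluate on newer ones. The Entry fields, cutoff
# date, and similarity threshold are illustrative stand-ins.
from dataclasses import dataclass
from datetime import date

@dataclass
class Entry:
    pdb_id: str
    release_date: date
    max_seq_identity_to_train: float  # vs. pre-cutoff set, in [0, 1]

CUTOFF = date(2021, 9, 30)  # illustrative training cutoff

def split(entries):
    train = [e for e in entries if e.release_date <= CUTOFF]
    # Test only on post-cutoff structures that look unlike anything in
    # training, so the benchmark measures generalization, not recall.
    test = [e for e in entries
            if e.release_date > CUTOFF
            and e.max_seq_identity_to_train < 0.3]
    return train, test
```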
Like, actually, I think DockGen is, like, a really funny story - I don't know if you want to talk about that. It's an interesting, like... Yeah, I think one of the amazing things about putting things open source is that we get a ton of feedback from the field. And, you know, sometimes we get kind of great feedback of people really liking it. But honestly, most of the time - and, to be honest, that's also maybe the most useful feedback - it's, you know, people sharing about where it doesn't work. And so, you know, at the end of the day, it's critical - and this is also something, you know, across other fields of machine learning - it's always critical, to make progress in machine learning, to set clear benchmarks. And as, you know, you start making progress on certain benchmarks, then, you know, you need to improve the benchmarks and make them harder and harder. And this is kind of the progression of, you know, how the field operates.

And so, you know, the example of DockGen was: we published this initial model called DiffDock in my first year of PhD, which was sort of, like, you know, one of the early models to try to predict kind of interactions between proteins and small molecules, that we built a year after AlphaFold2 was published. And now, on the one hand, you know, on these benchmarks that we were using at the time, DiffDock was doing really well - kind of, you know, outperforming kind of some of the traditional physics-based methods. But on the other hand, you know, when we started, you know, kind of giving these tools to kind of many biologists - and one example was the group of Nick Polizzi at Harvard, that we collaborated with - we started noticing that there was this clear pattern where, for proteins that were very different from the ones the model was trained on, the model was struggling. And so, you know, it seemed clear that, you know, this is probably kind of where we should, you know, put our focus. And so we first developed, you know, with Nick and his group, a new benchmark, and then, you know, went after it and said, okay, what can we change about the current architecture to improve this pattern of generalization? And this is the same thing that, you know, we're still doing today: you know, kind of, where does the model not work? And then, you know, once we have that benchmark, you know, let's throw everything we have - any ideas that we have - at the problem.

RJ [00:36:15]: And there's a lot of, like, healthy skepticism in the field, which I think, you know, is, is great. And I think, you know, it's very clear that there's a ton of things the models don't really work well on, but I think one thing that's probably, you know, undeniable is just, like, the pace of progress, you know, and how much better we're getting, you know, every year. And so I think if you, you know, if you assume, you know, any constant, you know, rate of progress moving forward, I think things are going to look pretty cool at some point in the future.

Gabriel [00:36:42]: ChatGPT was only three years ago. Yeah, I mean, it's wild, right?

RJ [00:36:45]: Like, yeah, yeah, yeah, it's one of those things. Like, you've been doing this. Being in the field, you don't see it coming, you know? And, like, I think, yeah, hopefully we'll, you know, continue to have as much progress as we've had the past few years.

Brandon [00:36:55]: So this is maybe an aside, but I'm really curious - you get this great feedback from the, from the community, right?
Brandon [00:36:55]: So this is maybe an aside, but I'm really curious. You get this great feedback from the community by being open source. My question is partly, okay, if you open source, everyone can copy what you did, but it's also maybe about balancing priorities, right? Where the community is saying, "I want this, there are all these problems with the model," but my customers don't care, right? So how do you think about that?

Gabriel [00:37:26]: So I would say a couple of things. One is that part of our goal with Boltz, and this is also established as the mission of the public benefit company that we started, is to democratize access to these tools. But one of the reasons we realized Boltz needed to be a company, that it couldn't just be an academic project, is that putting a model on GitHub is definitely not enough to get chemists and biologists across academia, biotech, and pharma to use your model in their therapeutic programs. And so a lot of what we think about at Boltz, beyond just the models, is all the layers that come on top of the models to get from those models to something that can really enable scientists in the industry. That goes into building the right kind of workflows that take in, for example, the data and try to directly answer the questions the chemists and the biologists are asking, and then also building the infrastructure. This is to say that even with models fully open, we see a ton of potential for products in the space. And the critical part about a product is that, even with an open-source model, running the model is not free. As we were saying, these are pretty expensive models, and, maybe we'll get into this, these days we're seeing pretty dramatic inference-time scaling of these models, where the more you run them, the better the results are. But then you get to a point where compute, and compute cost, becomes a critical factor. And so putting a lot of work into building the right infrastructure, building the optimizations and so on, really allows us to provide a much better service than the open-source models alone. That said, even though with a product we can provide a much better service, I do still think, and we will continue to put a lot of our models open source, because the critical role of open-source models is helping the community progress on the research, from which we all benefit. So we'll continue to, on the one hand, put some of our base models open source so the field can build on top of them, and, as we discussed earlier, we learn a ton from the way the field uses and builds on top of our models. But then we try to build a product that gives the best experience possible to scientists, so that a chemist or a biologist doesn't need to spin up a GPU and set up our open-source model in a particular way. A bit like, even though I am a computer scientist, a machine learning scientist, I don't necessarily take an open-source LLM and try to spin it up myself.
Instead, I just open the ChatGPT app or Claude Code and use it as an amazing product. We want to give the same experience on this front.

Brandon [00:40:40]: I heard a good analogy yesterday: a surgeon doesn't want the hospital to design a scalpel, right?

Brandon [00:40:48]: So just buy the scalpel.

RJ [00:40:50]: You wouldn't believe the number of people, even in my short time between AlphaFold3 coming out and the end of the PhD, that would reach out just for us to run AlphaFold3 for them, or things like that. Or Boltz, in our case. Just because it's not that easy to do if you're not a computational person. And I think part of the goal here is also that we continue to build the interface for computational folks, obviously, but that the models are also accessible to a larger, broader audience. And that comes from good interfaces and things like that.

Gabriel [00:41:27]: I think one really interesting thing about Boltz is that with the release of it, you didn't just release a model, you created a community. And that community grew very quickly. Did that surprise you? And what has the evolution of that community been, and how has it fed into Boltz?

RJ [00:41:43]: If you look at its growth, it's very much that when we release a new model, there's a big jump. But yeah, it's been great. We have a Slack community that has thousands of people on it. And it's actually self-sustaining now, which is the really nice part, because it's almost overwhelming to answer everyone's questions and help; it's really difficult for the few people that we were. But it ended up that people would answer each other's questions and help one another. So the Slack has been kind of self-sustaining, and that's been really cool to see.

RJ [00:42:21]: And that's the Slack part, but on GitHub as well we've had a nice community. I think we also aspire to be even more active on it than we've been in the past six months, which has been a bit challenging for us. But the community has been really great, and there are a lot of papers that have come out with new evolutions on top of Boltz. It surprised us to some degree, because there are a lot of models out there, and people converging on ours was really cool. And I think it speaks to the importance, when you put code out, of putting a lot of emphasis on making it as easy to use as possible, something we thought a lot about when we released the code base. It's far from perfect, but, you know.

Brandon [00:43:07]: Do you think that was one of the factors that caused your community to grow, just the focus on being easy to use and accessible?

RJ [00:43:14]: I think so. Yeah. And we've heard it from a few people over the years now. And some people still think it should be a lot nicer, and they're right.
But yeah, I think it was, at the time, maybe a little bit easier than other things.

Gabriel [00:43:29]: The other part that I think led to the community, and to some extent to the trust in what we put out, is the fact that it hasn't really been just one model. Maybe we'll talk about it, but after Boltz 1 there were maybe another couple of models released, or open-sourced, soon after. We continued that open-source journey with Boltz 2, where we were not only improving structure prediction but also starting to do affinity prediction, understanding the strength of the interactions between these different molecules, which is a critical property that you often want to optimize in discovery programs. And then, more recently, also a protein design model. So we've been building this suite of models that come together and interact with one another, where there is almost an expectation, which we take very much to heart, of always having, across the entire suite of different tasks, the best or near-best model out there, so that our open-source tools can be the go-to models for everybody in the industry.

I really want to talk about Boltz 2, but before that, one last question in this direction: was there anything about the community that surprised you? Was anyone doing something where you thought, why would you do that, that's crazy? Or, that's actually genius, I never would have thought of that?

RJ [00:45:01]: I mean, we've had many contributions. I think some of the interesting ones... we had this one individual who wrote a complex GPU kernel for a piece of the architecture, and the funny thing is that piece of the architecture had been there since AlphaFold2, and I don't know why it took Boltz for this person to decide to do it, but that was a really great contribution. We've had a bunch of others, people figuring out ways to hack the model to do something, like cyclic peptides. I don't know if any other interesting ones come to mind.

Gabriel [00:45:41]: One cool one, and this was initially proposed as a message in the Slack channel by Tim O'Donnell: there are some cases, for example the antibody-antigen interactions we discussed, where the models don't necessarily get the right answer. What he noticed is that the models were somewhat stuck in how they were predicting the antibodies. And in this model you can condition, basically give hints. So he basically gave random hints to the model: you should bind to this residue, the first residue, or the 11th residue, or the 21st residue, basically every 10 residues, scanning the entire antigen.

Brandon [00:46:33]: Residues are the...

Gabriel [00:46:34]: The amino acids, yeah. So the first amino acid, the 11th amino acid, and so on.
So it's sort of like doing a scan: conditioning the model on each of those hints, then looking at the confidence of the model in each case and taking the top one. It's a very crude way of doing inference-time search. But surprisingly, for antibody-antigen prediction, it actually helped quite a bit. So there are some interesting ideas where, as the person developing the model, you say, wow, why would the model be so dumb? But it's very interesting, and it leads you to start thinking: okay, how can I do this not with brute force, but in a smarter way?

RJ [00:47:22]: And so we've also done a lot of work in that direction. And that speaks to the power of scoring. We're seeing that a lot; I'm sure we'll talk about it more when we talk about BoltzGen. Our ability to take a structure and determine that that structure is good, somewhat accurate, whether that's a single chain or an interaction, is a really powerful way of improving the models. If you can sample a ton, and you assume that if you sample enough you're likely to have the good structure in there, then it really just becomes a ranking problem. And part of the inference-time scaling that Gabri was talking about is very much that: the more we sample, the more the ranking model ends up finding something it really likes. And so I think our ability to get better at ranking is also what's going to enable the next big breakthroughs.
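A sketch of the residue-scanning trick described above: condition the model on a binding-site hint at every 10th residue of the antigen, then keep whichever prediction the model itself is most confident in. `predict_with_hint` is a hypothetical stand-in for a Boltz-style predictor with pocket conditioning, not the real API; its confidence values are faked so the example runs end to end.

```python
import random

def predict_with_hint(antigen: str, hint_residue: int):
    """Hypothetical stand-in for a structure predictor that accepts an
    epitope conditioning hint and returns (structure, confidence).
    The confidence is faked deterministically for illustration."""
    random.seed(hint_residue)
    structure = f"pose_conditioned_on_residue_{hint_residue}"
    confidence = random.random()
    return structure, confidence

def scan_epitope(antigen: str, stride: int = 10):
    """Crude inference-time search: try a binding hint every `stride`
    residues (1, 11, 21, ...) and return the most confident prediction."""
    candidates = []
    for pos in range(1, len(antigen) + 1, stride):
        structure, conf = predict_with_hint(antigen, pos)
        candidates.append((conf, pos, structure))
    conf, pos, structure = max(candidates)  # rank by model confidence
    return structure, pos, conf

antigen = "M" * 95  # placeholder 95-residue antigen sequence
structure, pos, conf = scan_epitope(antigen)
print(f"best hint at residue {pos} with confidence {conf:.2f}")
```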
Brandon [00:48:17]: Interesting. But I guess, my understanding is, there's a diffusion model, you generate some stuff, and then, I guess it's just what you said, you rank it using a score, and then finally... Can you talk about those different parts? Yeah.

Gabriel [00:48:34]: So, first of all, one of the critical beliefs we had when we started working on Boltz 1 was that structure prediction models are somewhat our field's version of foundation models: they learn how proteins and other molecules interact, and then we can leverage that learning to do all sorts of other things. With Boltz 2, we leveraged that learning to do affinity prediction: understanding, if I give you this protein and this molecule, how tight that interaction is. For BoltzGen, what we did was take that foundation model and fine-tune it to predict entirely new proteins. The way that works is that, for the protein you're designing, instead of feeding in an actual sequence, you feed in a set of blank tokens, and you train the model to predict both the structure of that protein and what its different amino acids are. So the way BoltzGen operates is that you feed in a target protein that you may want to bind, or DNA, or RNA, and then you feed in the high-level design specification of what you want your new protein to be. For example, it could be an antibody with a particular framework, it could be a peptide, it could be many other things.

And that's with natural language, or?

That's basically prompting, and we have this sort of spec that you specify. You feed this spec to the model, and the model translates it into a set of tokens, a set of conditioning, a set of blank tokens. And then, as part of the diffusion model, it basically decodes a new structure and a new sequence for your protein. Then we take that and, as Jeremy was saying, we try to score it: how good of a binder is it to that original target?

Brandon [00:50:51]: You're using basically Boltz to predict the folding and the affinity to that molecule. And then that kind of gives you a score? Exactly.

Gabriel [00:51:03]: So you use this model to predict the folding, and then you do two things. One is that you predict the structure with something like Boltz 2 and compare that structure with what the design model predicted. In the field this is called consistency: you want to make sure that the structure you're predicting is actually what you're trying to design, and that gives you much better confidence that it's a good design. So that's the first filtering. The second filtering that we did, as part of the pipeline that was released, is that we look at the confidence the model has in the structure. Now, going to your question of predicting affinity, unfortunately confidence is not a very good predictor of affinity. And so one of the things we've actually made a ton of progress on since we released Boltz 2, and we have some new results that we are going to announce soon, is the ability to get much better hit rates when, instead of trying to rely on the confidence of the model, we directly try to predict the affinity of that interaction.

Brandon [00:52:03]: Okay. Just backing up a minute. So your diffusion model actually predicts not only the protein sequence but also the folding of it. Exactly.

Gabriel [00:52:32]: And actually, one of the big things we did differently compared to other models in the space, and there were some papers that had already done this before, but we really scaled it up, was somewhat merging structure prediction and sequence prediction into almost the same task. The way BoltzGen works, the only thing you're doing is predicting the structure. So the only supervision we give you is supervision on the structure. But because the structure is atomic, and the different amino acids have different atomic compositions, from the way you place the atoms we recover not only the structure you wanted but also the identity of the amino acid the model believed was there. And so, instead of having these two supervision signals, one discrete, one continuous,
which somewhat don't interact well together, we built an encoding of sequences into structures that allows us to use exactly the same supervision signal we were using for Boltz 2, which is largely similar to what AlphaFold3 proposed and which is very scalable. And we can use that to design new proteins. Oh, interesting.

RJ [00:53:58]: Maybe a quick shout out to Hannes Stark on our team, who did all this work. Yeah.

Gabriel [00:54:04]: Yeah, that was a really cool idea. Looking at the paper, there's this encoding where you just add a bunch of atoms, which can be anything, and then they get rearranged and basically plopped on top of each other, and that encodes what the amino acid is. There's sort of a unique way of doing this. It was such a cool, fun idea.

RJ [00:54:29]: I think that idea had existed before. Yeah, there were a couple of papers.

Gabriel [00:54:33]: Yeah, they had proposed this, and Hannes really took it to the large scale.

Brandon [00:54:39]: A lot of the BoltzGen paper is dedicated to the validation of the model. In my opinion, all the people we talk to basically feel that wet-lab validation, or whatever the appropriate real-world validation is, is the whole problem, or not the whole problem, but a big giant part of it. So can you talk a little bit about the highlights from there? Because to me the results are impressive, both from the perspective of the model and also just the effort that went into the validation by a large team.

Gabriel [00:55:18]: First of all, I should start by saying that both when we were at MIT, in Tommi Jaakkola and Regina Barzilay's lab, and at Boltz, we are not a bio lab and we are not a therapeutics company. So to some extent we were forced to look outside our group, our team, to do the experimental validation. One of the things that Hannes and the team really pioneered was the idea: can we go not just to one specific group, find one specific system, maybe overfit a bit to that system, and try to validate, but instead test this model across a very wide variety of different settings? Protein design is such a wide task, with all sorts of different applications, from therapeutics to biosensors and many others. So can we get a validation that goes across many different tasks? He basically put together something like 25 different academic and industry labs that committed to testing some of the designs from the model, with some of this testing still ongoing, and to giving results back to us in exchange for hopefully getting some great new sequences for their tasks. He was able to coordinate this very wide set of scientists, and already in the paper, I think, we
shared results from eight to ten different labs: results from designing peptides targeting ordered proteins, peptides targeting disordered proteins, results of designing proteins that bind to small molecules, and results of designing nanobodies, across a wide variety of different targets. That gave the paper a lot of validation for the model, validation that was wide.

Brandon [00:57:39]: And so would those be therapeutics for those animals, or are they relevant to humans as well?

Gabriel [00:57:45]: They're relevant to humans as well. Obviously, you need to do some work in, quote unquote, humanizing them, making sure they have the right characteristics so they're not toxic to humans and so on.

RJ [00:57:57]: There are some approved medicines on the market that are nanobodies. There's a general pattern, I think, in trying to design things that are smaller: they're easier to manufacture, but at the same time that comes with potentially other challenges, maybe a little bit less selectivity than something that has more hands. But yeah, there's this big desire to design mini proteins, nanobodies, small peptides, things that are just great drug modalities.

Brandon [00:58:27]: Okay. I think we left off talking about validation, validation in the lab. And I was very excited about seeing all the diverse validations you've done. Can you go into more detail about some specific ones? Yeah.

RJ [00:58:43]: The nanobody one, I think we did, what was it, 15 targets? Is that correct? 14. 14 targets. So typically the way this works is that we make a lot of designs, on the order of tens of thousands. Then we rank them and pick the top N; in this case N was 15 for each target. And then we measure the success rates, both how many targets we were able to get a binder for and, more generally, out of all the binders we designed, how many actually proved to be good binders. Some of the other ones involved, yeah, we had a cool one where there was a small molecule and we designed a protein that binds to it. That has a lot of interesting applications, for example, like Gabri mentioned, biosensing and things like that, which is pretty cool. We had a disordered protein, I think you mentioned also. And yeah, I think some of those were the highlights.

Gabriel [00:59:44]: So I would say the way we structured some of those validations was, on the one end, validations across a whole set of different problems that the biologists we were working with came to us with. For example, in some of the experiments we were designing peptides that would target the RACC, which is a target involved in metabolism. And we had a number of other applications where we were trying to design peptides or other modalities against other therapeutically relevant targets. We designed some proteins to bind small molecules. And then some of the other testing we did was really trying to get a broader sense: how does the model work, especially when tested on generalization?
So one of the things we found with the field was that a lot of the validation, especially outside of validation on specific problems, was done on targets that have a lot of known interactions in the training data. It's always a bit hard to understand how much these models are really just regurgitating, or trying to imitate, what they've seen in the training data versus really being able to design new proteins. So one of the experiments we did was to take nine targets from the PDB, filtering for things where there is no known interaction in the PDB. So the model has never seen this particular protein, or a similar protein, bound to another protein; there is no way the model, from its training set, can say, okay, I'm just going to tweak something and imitate this particular interaction. We took those nine proteins, worked with Adaptive, a CRO, and basically tested 15 mini proteins and 15 nanobodies against each one of them. And the very cool thing we saw was that on two thirds of those targets we were able, from those 15 designs, to get nanomolar binders. Nanomolar is, roughly speaking, a measure of how strong the interaction is; a nanomolar binder is approximately the binding strength you need for a therapeutic. Yeah.

So maybe switching directions a bit. Boltz Lab was just announced this week, or was it last week? Yeah. This is your first product, I guess, if you want to call it that. Can you talk about what Boltz Lab is and what you hope people take away from it? Yeah.

RJ [01:02:44]: As we mentioned at the very beginning, the goal with the product has been to address what the models don't do on their own. And there are largely two categories there; actually, I'll split it into three. The first: it's one thing to predict a single interaction, for example a single structure. It's another to very effectively search a design space to produce something of value. What we found building this product is that there are a lot of steps involved, and there's certainly a need to accompany the user through those steps. One of them, for example, is the creation of the target itself: how do we make sure the model has a good enough understanding of the target so that we can design something against it? There are all sorts of tricks you can do to improve a particular structure prediction. So that's the first stage. Then there's the stage of designing and searching the space efficiently. For something like BoltzGen, you design many things and then you rank them. For small molecules the process is a little more complicated: we also need to make sure the molecules are synthesizable. The way we do that is with a generative model that learns to use appropriate building blocks, such that it designs within a space we know is synthesizable. And so there's this whole pipeline, really, of different models involved in being able to design a molecule.
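Putting the pieces described above together, a designed-binder pipeline might look like: generate many candidates, refold each designed sequence, keep only designs whose refolded structure agrees with the designed one (the consistency check), then rank the survivors and send the top few to the lab. This is a hypothetical skeleton, not Boltz Lab's actual code; `refold_rmsd` fakes the refolding step so the sketch runs, and the nanomolar comment reflects the rule of thumb mentioned earlier.

```python
from dataclasses import dataclass

@dataclass
class Design:
    sequence: str
    designed_structure: str   # placeholder for designed coordinates
    confidence: float         # the model's own confidence score

def refold_rmsd(design: Design) -> float:
    """Hypothetical: refold design.sequence with a structure predictor and
    return the RMSD to design.designed_structure. Faked here with a toy
    length-based value; a real version would call the predictor."""
    return 1.0 + 0.2 * (len(design.sequence) % 10)

def design_pipeline(designs: list[Design], rmsd_cutoff: float = 2.0,
                    top_k: int = 15) -> list[Design]:
    """Keep only self-consistent designs (refolded structure close to the
    designed one), then rank by model confidence and take the top_k.
    In the experiments discussed above, roughly 15 designs per target went
    to the wet lab; a 'good' binder is roughly nanomolar (Kd <= 1e-9 M)."""
    consistent = [d for d in designs if refold_rmsd(d) <= rmsd_cutoff]
    ranked = sorted(consistent, key=lambda d: d.confidence, reverse=True)
    return ranked[:top_k]

# Toy usage: 100 fabricated candidates with made-up confidences.
candidates = [Design("A" * (8 + i), f"pose_{i}", i / 100) for i in range(100)]
picks = design_pipeline(candidates)
print(len(picks), round(picks[0].confidence, 2))
```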
And so that's the first part. We call these agents: we have a protein design agent and a small molecule design agent, and that's really at the core of what powers the Boltz Lab platform.

Brandon [01:04:22]: So these agents, are they like a language model wrapper, or are they just your models and you're just calling them agents?

RJ [01:04:33]: They're more of a recipe, if you wish; they sort of perform a function on your behalf. And I think we use that term because of the complex pipelining and automation that goes into all this plumbing. So that's the first part of the product. The second part is the infrastructure. We need to be able to do this at very large scale for any one group that's doing a design campaign. Let's say you're designing a hundred thousand possible candidates to find the good one; that is a very large amount of compute. For small molecules it's on the order of a few seconds per design; for proteins it can be a bit longer. So ideally you want to do that in parallel, otherwise it's going to take you weeks. We've put a lot of effort into our ability to have a GPU fleet that allows any one user to do this kind of large parallel search.

Brandon [01:05:23]: So you're amortizing the cost over your users. Exactly. Exactly.

RJ [01:05:27]: And to some degree, whether you use 10,000 GPUs for a minute or one GPU for God knows how long, it's the same cost. So you might as well parallelize if you can. A lot of work has gone into that, making it very robust, so that we can have a lot of people on the platform doing that at the same time. And the third part is the interface, and the interface comes in two shapes. One is an API, and that's really suited for companies that want to integrate these pipelines, these agents.

RJ [01:06:01]: So we're already partnering with a few distributors that are going to integrate our API. The second shape is the user interface, and we've put a lot of thought into that too. This is what I mentioned earlier about broadening the audience; that's what the user interface is about. And we've built a lot of interesting features into it, for example for collaboration: when you have potentially multiple medicinal chemists going through the results and trying to pick out which molecules to go and test in the lab, it's powerful for them to be able, for example, to each provide their own ranking and then do consensus building. So there are a lot of features around launching these large jobs, but also around collaborating on analyzing the results, that we try to solve with that part of the platform. So Boltz Lab is a combination of these three objectives in one cohesive platform.

Who is this accessible to?

Everyone. You do need to request access today; we're still ramping up the usage, but anyone can request access.
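The consensus building RJ mentions, where several chemists each submit their own ranking of candidate molecules, could be aggregated in many ways. One simple, purely illustrative option (not necessarily what Boltz Lab does) is a Borda count, where each molecule earns points based on how high each reviewer ranked it:

```python
from collections import defaultdict

def borda_consensus(rankings: list[list[str]]) -> list[str]:
    """Aggregate several reviewers' ranked lists into one consensus order.
    Each item earns (n - position) points per reviewer; higher total wins."""
    scores: dict[str, int] = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, molecule in enumerate(ranking):
            scores[molecule] += n - position
    return sorted(scores, key=scores.get, reverse=True)

# Three hypothetical medicinal chemists rank the same four molecules.
chemists = [
    ["mol_A", "mol_B", "mol_C", "mol_D"],
    ["mol_B", "mol_A", "mol_D", "mol_C"],
    ["mol_A", "mol_C", "mol_B", "mol_D"],
]
print(borda_consensus(chemists))  # mol_A wins with 4 + 3 + 4 = 11 points
```

Rank aggregation like this is deliberately simple; the point is only that each reviewer's ordering contributes equally to the final pick list.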
If you are an academic in particular, we provide a fair amount of free credit so you can play with the platform. If you are a startup or biotech, you can also reach out, and we'll typically hop on a call just to understand what you're trying to do, and also provide a lot of free credit to get started. And of course, with larger companies, we can deploy this platform in a more secure environment; those are more custom deals that we make with partners. And that's sort of the ethos of Boltz, I think: this idea of serving everyone and not necessarily going after just the really large enterprises. That starts from the open source, but it's also a key design principle of the product itself.

Gabriel [01:07:48]: One thing I was thinking about with regards to infrastructure: in the LLM space, the cost of a token has gone down by, I think, a factor of a thousand or so over the last three years, right? Yeah. Is it possible that you can exploit economies of scale in infrastructure, so that it's cheaper to run these things on your platform than for any person to roll their own system? A hundred percent. Yeah.

RJ [01:08:08]: I mean, we're already there. Running Boltz on our platform, especially at a large scale, is considerably cheaper than it would probably cost anyone to stand up the open-source model and run it themselves. And on top of the infrastructure, one of the things we've been working on is accelerating the models. Our small molecule screening pipeline is 10x faster on Boltz Lab than it is in the open source. That's also part of building a product, something that scales really well. We really wanted to get to a point where we could keep prices very low, in a way that makes it a no-brainer to use Boltz through our platform.

Gabriel [01:08:52]: How do you think about validation of your agentic systems? Because, as you were saying earlier, AlphaFold-style models are really good at, let's say, monomeric proteins where you have co-evolution data. But now suddenly the whole point of this is to design something which doesn't have co-evolution data, something which is really novel. So you're basically leaving the domain that you know you're good at. How do you validate that?

RJ [01:09:22]: Yeah, there's obviously a ton of computational metrics that we rely on, but those only take you so far. You really have to go to the lab and test: okay, with method A versus method B, how much better are we? How much better is my hit rate? How much stronger are my binders? It's not just about hit rate; it's also about how good the binders are. And there's really no way around that. I think we've really ramped up the amount of experimental validation that we do, so that we track progress in as scientifically sound a way
as possible, I think.

Gabriel [01:10:00]: Yeah, and one thing that is unique about us, and maybe companies like us, is that because we're not working on just a couple of therapeutic pipelines, where our validation would be focused on those, when we do an experimental validation we try to test it across tens of targets. So on the one end we get a much more statistically significant result, and it really allows us to make progress on the methodological side without being steered by overfitting on any one particular system. And of course we choose...
Agentic AI is coming. Are defenders ready? Alon Schindel, Director of Data & Threat Research at Wiz, joins Eden and Amitai for the Season 3 Finale. This isn't just a recap. It is a look at how top-tier research teams operate at speed. Alon explains why Wiz treats research as a "product" rather than a support function. He details the "DeepLeak" discovery, where his team found thousands of exposed API keys mere hours after a platform's popularity spiked.
What's Inside:
Agentic AI: Why 2026 will be the year AI starts taking action, not just chatting.
Speed as a Weapon: How to shorten the time between a zero-day and a detection.
Culture: The power of the "Table" and collaborative chaos.
Retrospective: Lessons from IngressNightmare and the year in vulnerabilities.
Resources:
Read the DeepLeak Research: https://www.wiz.io/blog/wiz-research-uncovers-exposed-deepseek-database-leak
Wiz Threat Research Hub: https://www.wiz.io/research
APEX Express is a weekly magazine-style radio show featuring the voices and stories of Asians and Pacific Islanders from all corners of our community. The show is produced by a collective of media makers, deejays, and activists. On this episode, the Stop AAPI Hate Pacific Islander Advisory Council discuss a new report on anti–Pacific Islander hate. They examine the documented impacts of hate, structural barriers Pacific Islander communities face in reporting and accessing support, and the long-standing traditions of resistance and community care within PI communities.
Important Links:
Stop AAPI Hate
Stop AAPI Hate Anti-Pacific Islander Hate Report
If you have questions related to the report, please feel free to contact Stop AAPI Hate Research Manager Connie Tan at ctan@stopaapihate.org
Community Calendar: Upcoming Lunar New Year Events
Saturday, February 14 – Sunday, February 15 – Chinatown Flower Market Fair, Grant Avenue (fresh flowers, arts activities, cultural performances)
Tuesday, February 24 – Drumbeats, Heartbeats: Community as One, San Francisco Public Library (Lunar New Year and Black History Month celebration)
Saturday, February 28 – Oakland Lunar New Year Parade, Jackson Street
Saturday, March 7 – Year of the Horse Parade, San Francisco
Throughout the season – Additional Lunar New Year events, including parades, night markets, and museum programs across the Bay Area and beyond.
Transcript:
[00:00:00] Miata Tan: Hello and welcome. You are tuning in to Apex Express, a weekly radio show uplifting the voices and stories of Asian Americans and Pacific Islanders. I'm your host, Miata Tan, and tonight we're examining community realities that often go underreported. The term AAPI, meaning Asian Americans and Pacific Islanders, is an [00:01:00] acronym we like to use a lot, but Pacific Islander peoples, their histories, and their challenges are sometimes mischaracterized or not spoken about at all. Stop AAPI Hate is a national coalition that tracks and responds to the hate experienced by AAPI communities through reporting, research, and advocacy. They've released a new report showing that nearly half of Pacific Islander adults experienced an act of hate in 2024 because of their race, ethnicity, or nationality. Tonight we'll share conversations from a recent virtual community briefing about the report and dive into its findings and the legacy of discrimination experienced by Pacific Islanders.

Isa Kelawili Whalen: I think it doesn't really help that our history of violence between Pacific Islander land and sea and the United States already leaves a sour taste in your mouth when we Pasifika think [00:02:00] about participating in American society. And then, to top it off, there's little to no representation of Pacific Islanders.

Miata Tan: That was the voice of Isa Kelawili Whalen, Executive Director at API Advocates and a member of Stop AAPI Hate's Pacific Islander Advisory Council. You'll hear more from Isa and the other members of the advisory council soon. But first up is Cynthia Choi, the co-founder of Stop AAPI Hate and co-executive director of Chinese for Affirmative Action. Cynthia will help to ground us in the history of the organization and their hopes for this new report about Pacific Islander communities.

Cynthia Choi: As many of you know, Stop AAPI Hate was launched nearly six years ago in response to anti-Asian hate during the COVID-19 pandemic. And since then we've operated as the [00:03:00] nation's largest reporting center tracking
anti-AAPI hate acts while working to advance justice and equity for our communities. In addition to policy advocacy, community care, and narrative work, research has really been central to our mission, because data, when grounded in community experience, helps tell a fuller and more honest story about the harms our communities face. Over the years, through listening sessions and necessary and hard conversations with our PI community members and leaders, we've heard a consistent and important message: Pacific Islander experiences are often rendered invisible when grouped under the broader AAPI umbrella, and the forms of hate they experience are shaped by distinct histories, ongoing injustice, and unique cultural and political [00:04:00] context. This report is in response to this truth and to the trust Pacific Islander communities have placed in sharing their experience. Conducted in partnership with NORC at the University of Chicago, along with stories from our reporting center, we believe these findings shed light on the prevalence of hate, the multifaceted impact of hate, and how often harm goes unreported. Our hope is that this report sparks deeper dialogue and more meaningful actions to address anti-PI hate. We are especially grateful to the Pacific Islander leaders who have guided this work from the beginning. Earlier this year, Stop AAPI Hate convened a Pacific Islander Advisory Council made up of four incredible leaders: Dr. Jamaica Osorio, Tu‘ulau‘ulu Estella Owoimaha-Church, Michelle Pedro, and Isa Whalen. Their leadership, wisdom, [00:05:00] and care have been essential in shaping both our research and narrative work. Our shared goal is to build trust with Pacific Islander communities and to ensure that our work is authentic, inclusive, and truly reflective of lived experiences. These insights were critical in helping us interpret these findings with the depth and context they deserve.

Miata Tan: That was Cynthia Choi, the co-founder of Stop AAPI Hate and co-executive director of Chinese for Affirmative Action. As Cynthia mentioned, to collect data for this report, Stop AAPI Hate worked with NORC, a non-partisan research organization at the University of Chicago. In January 2025, Stop AAPI Hate and NORC conducted a national survey that included 504 Pacific Islander respondents. The survey [00:06:00] examined the scope of anti-Pacific Islander hate in 2024, the challenges of reporting and accessing support, and participation in resistance and ongoing organizing efforts. We'll be sharing a link to the full report in our show notes at kpfa.org/program/apex-express. We also just heard Cynthia give thanks to the efforts of the Stop AAPI Hate Pacific Islander Advisory Council. This council is a team of four Pacific Islander folks with a range of professional and community expertise who helped Stop AAPI Hate to unpack and contextualize their new report. Tonight we'll hear from all four members of the PI Council. First up is Dr. Jamaica Osorio, a Kanaka Maoli wahine artist, activist, and an Associate Professor of Indigenous and Native Hawaiian politics [00:07:00] at the University of Hawaiʻi at Mānoa. Here's Dr. Jamaica, reflecting on her initial reaction to the report and what she sees going on in her community.

Dr. Jamaica Heolimeleikalani Osorio: Aloha kākou. Thank you for having us today.
I think the biggest thing that stood out to me in the data and the reporting, that I haven't really been able to shake from my head, and I think it's related to something we're seeing a lot in our own community, was the high levels of stress and anxiety that folks in our community were experiencing, and how those high levels almost didn't change based on whether or not people had experienced hate. Our communities are living at a high threshold of stress and anxiety and struggling with a number of mental health issues because of that. And I think this is an important reminder, in relationship to the broader work we might be doing, to be thinking about stopping hate acts against folks in our community and in other communities, but really to think about what are the [00:08:00] conditions that people are living under that make it nearly unlivable for our communities to survive in this place. The other thing that popped out to me that I wanna highlight is the data around folks feeling less welcome: how hate acts made certain folks in our community feel less welcome where they're living. And I kind of want us to think more about the tension between being unwelcome in the so-called United States and the inability of many of our people to return home, if they would have preferred to actually be in our ancestral homes. How are those conditions created by American empire and militarism and nuclearization, kind of the stuff that we talked about as a panel early on? But also, as we move away from today's conversation, thinking about what is the place of PIs in the so-called United States. What does it mean to be able to live in your ancestral homeland, like myself, where America has come to us and chosen to stay? What does it mean for our other PI family members who have [00:09:00] come to the United States because our homes have been devastated by US militarism and imperialism? That's what's sitting with me, and I think it may not immediately jump out of the reporting, but we need to continue to highlight it in how we interpret it.

Miata Tan: That was Dr. Jamaica Osorio, an Associate Professor of Indigenous and Native Hawaiian politics at the University of Hawaiʻi at Mānoa. Now let's turn to Isa Kelawili Whalen. Isa is the Executive Director of API Advocates and another member of the Stop AAPI Hate Pacific Islander Advisory Council. Here Isa builds on what Dr. Jamaica was saying about feelings of stress and anxiety within the Pacific Islander communities. She also speaks from her experience as an Indigenous CHamoru and Filipino woman. Here's Isa.

Isa Kelawili Whalen: [00:10:00] American society and culture are drastically different from our Pasifika island culture, our roots, traditions, and so forth, as are many ethnicities and identities out there. But for us who are trying to figure out how to constantly navigate between the two, it's a little polarizing, trying to fit into an American society and structure that was not made for us and definitely does not coincide with where we come from either. So it's hard to navigate, and we constantly feel like we're excluded, that there is no space for us. There are all these boxes, but we don't really fit into one. And to be honest, none of these boxes are really made for anyone to fit into one single box; that's the unspoken truth. And so,
a lot of the times, we're too Indigenous, or I'm too Pasifika, or I'm too American, even to our own families, being called a coconut, a racial comment alluding to being one ethnicity on the inside versus the outside. And that causes a lot of mental health harm within ourselves, our [00:11:00] friends, our family, our community, and our understanding of one another. In addition to that, I think it doesn't really help that our history of violence between Pacific Islander land and sea and the United States already leaves a sour taste in your mouth when we Pasifika think about participating in American society. And then, to top it off, there's little to no representation of Pacific Islanders across the largest platforms in the United States of America. It goes beyond just representation in civic engagement and elected officials. This goes from STEM and leadership positions in business to social media and entertainment. And when we are represented, it's something of the past. We're always connected to something that's dead, dying, or old news. And we're also completely romanticized. This could look like Moana or even the movie Avatar. So I think the feeling of being disconnected from or unaccepted by American society at large is something that stood out to me in the [00:12:00] report and something I heavily resonate with as well.

Miata Tan: That was Isa Kelawili Whalen, Executive Director at API Advocates and a member of the Stop AAPI Hate Pacific Islander Advisory Council. As we heard from both Dr. Jamaica and Isa, the histories and impacts of hate against Pacific Islander communities are complex and deeply rooted, from ongoing US militarization to a lack of representation in popular culture. Before we hear from the two other members of the PI Advisory Council, let's get on the same page: what are we talking about when we talk about hate? Connie Tan is a research manager at Stop AAPI Hate and a lead contributor to their recent report on anti-Pacific Islander hate. Here she is defining Stop AAPI Hate's research framework for this project.

[00:13:00] Connie Tan: Our definition of hate is largely guided by how our communities define it through the reporting. So people have reported a wide range of hate acts that they perceive to be motivated by racial bias or prejudice. The vast majority of hate acts that our communities experience are not considered hate crimes, so there's a real need to find solutions outside of policing in order to address the full range of hate Asian Americans and Pacific Islanders experience. We use the term hate act as an umbrella term to encompass the various types of bias-motivated events people experience, including hate crimes and hate incidents. And from the survey findings, we found that anti-PI hate was prevalent: nearly half, or 47%, of PI adults reported experiencing a hate act due to their race, ethnicity, or nationality in 2024. And harassment, such as being called a racial slur, was the most common type of hate. Another [00:14:00] 27% of PI adults reported institutional discrimination, such as unfair treatment by an employer or at a business.

Miata Tan: That was Connie Tan from Stop AAPI Hate, providing context on how hate affects Pacific Islander communities. Now let's return to the Pacific Islander Advisory Council, who helped Stop AAPI Hate to better understand their reporting on PI communities. The remaining two members of the council are Tu‘ulau‘ulu Estella Owoimaha-Church, a first-generation Afro-Pacifican educator, speaker, and consultant.
And we also have Michelle Pedro, who is a California-born Marshallese American advocate and the policy and communications director at the Arkansas Coalition of the Marshallese. You'll also hear the voice of Stephanie Chan, the Director of Data and [00:15:00] Research at Stop AAPI Hate, who led this conversation with the PI Council. Alrighty. Here's Estella reflecting on her key takeaways from the report and how she sees her community being impacted.

Tu‘ulau‘ulu Estella Owoimaha-Church: A piece of data that stood out to me is the six out of 10 PIs who have experienced hate who noted that it was an intersectional experience, that there are multiple facets of their identities that impacted the ways they experienced hate. And in my experience as an Afro-Pacifican, Nigerian and Samoan, born and raised in South Central Los Angeles on Tongva land, that's very much been my experience, both in predominantly white spaces and predominantly API spaces as well. As an educator, a piece of data that really stood out to me was around the rate at which Pacific Islanders have to exit education. Twenty years as a public high school educator and college counselor, and that was [00:16:00] absolutely my experience. When I made the choice to become an educator, I moved back home from grad school, went back to my neighborhood, and went to the school where, because when I was little that's where my people were, I assumed I would be able to put my degrees to use to serve other Black and PI kids. And it wasn't the case. Students were not there. Whole populations of our folks were missing from the community. And as I continued to dig and figure out, or try to figure out, why, it was very clear that at my school site in particular, the Samoan, Tongan, and Fijian students who were there were not being met where they are. Their parents weren't being met where they are. They didn't feel welcome coming into our schools, coming into our districts, to receive services or ask for support. It was very common that the only students who received support were our students who chose to play sports. Whereas as a theater and literature educator, I spent most of my time advocating for [00:17:00] block scheduling, so that my students, who I knew had church commitments after school, family commitments after school, I could find ways to accommodate them. And I was alone in that fight, right? The entire district, the school, the profession was not showing up for our students in the ways that they needed.

Stephanie Chan: Thank you, Estella. Yeah, definitely common themes of what does belonging mean in our institutions, but also when the US comes to you, as Jamaica pointed out as well. Michelle, I'll turn it over to you next.

Michelle Pedro: Iakwe and greetings, everyone. A few things stood out to me. One was the mental health aspect; mental health is such a big thing that our community doesn't like to talk about, especially in the Marshallese community. It's just in recent years that our youth are talking about it more, and people from my generation are learning about mental health and what it is in this society versus back home. It is so different. [00:18:00] When people move from the Marshall Islands to the United States, the whole entire system is different. The system was not built for people like us, for Marshallese, for Pacific Islanders. It really wasn't. And so the entire structure needs to do more.
I feel like it needs to do more. And the lack of education, like Estella said: back home, a lot of our folks who move here didn't graduate past, like, third grade. So the literacy rate here in Arkansas, my friends that are teachers say it's very low, and I can only imagine what it is in the Marshallese community here. And I hear stories from elders who have lived here for a while that Arkansas was a little bit scary to live in, because they did not feel welcome. They didn't feel like it was a place where they could express themselves. A lot of my folks say that they're tired of the race card, but we [00:19:00] need to talk about race. In my community, we don't know what internalized racism is, or what systemic racism is. We need to be explaining it to our folks so that they understand it and they see it and they recognize it, to talk about it more.

Miata Tan: That was Michelle Pedro, Policy and Communications Director at the Arkansas Coalition of the Marshallese, and a member of the Stop AAPI Hate Pacific Islander Advisory Council. Michelle shared with us that hate against Pacific Islander communities affects educational outcomes, leading to lower rates of literacy, school attendance, and graduation. As Estella noted, considering intersectionality can help us to see the full scope of these impacts. Here's Connie Tan, a research manager at Stop AAPI Hate, with some data on how PI communities are being targeted and the toll this takes on their mental and physical [00:20:00] wellbeing.

Connie Tan: And we saw that hate was intersectional. In addition to their race and ethnicity, over six in 10, or 66%, of PI adults said that other aspects of their identity were targeted. The top three identities targeted were age, class, and gender. And experiences with hate have a detrimental impact on the wellbeing of PI individuals, with more than half, or about 58%, of PI adults reporting negative effects on their mental or physical health. It also impacted their sense of safety and altered their behavior. So, for example, it is evidenced through the disproportionate recruitment of PI people into the military and athletic programs; as a result, many are susceptible to traumatic brain injuries, chronic pain, and even post-traumatic stress disorder.

Miata Tan: That was Connie Tan with Stop AAPI Hate. You are tuned [00:21:00] into Apex Express, a weekly radio show uplifting the voices and stories of Asian Americans and Pacific Islanders. You'll hear more about Connie's research and the analysis from the Stop AAPI Hate Pacific Islander Advisory Council in a moment. Stay with us. [00:22:00] [00:23:00] [00:24:00] [00:25:00]

Miata Tan: That was "Us" by Ruby Ibarra featuring Rocky Rivera, Klassy, and Faith Santilla. You are tuned into Apex Express on 94.1 KPFA, a weekly radio show [00:26:00] uplifting the voices and stories of Asian Americans and Pacific Islanders. I'm your host, Miata Tan. Tonight we're focused on our Pacific Islander communities and taking a closer look at a new report on anti-Pacific Islander hate from the national coalition Stop AAPI Hate. Before the break, the Stop AAPI Hate Pacific Islander Advisory Council shared how mental health challenges, experiences of hate, and the effects of US militarization are all deeply interconnected in PI communities. Connie Tan, a research manager at Stop AAPI Hate, reflects on how a broader historical context helps to explain why Pacific Islanders experience such high rates of hate. Here's Connie.
Connie Tan: We conducted sensemaking sessions with our PI advisory council members, and what we learned is that anti-PI hate must be understood within a broader historical context rooted in colonialism, militarization, nuclear testing, and forced displacement, and that these structures of violence continue to shape PI people's daily lives. Some key examples include the US overthrow and occupation of Hawaii in the 1800s, which led to the loss of Hawaiian sovereignty and cultural suppression. In the 1940s, in order to gain military dominance, the US conducted almost 70 nuclear tests across the Marshall Islands that decimated the environment and subjected residents to long-term health problems and forced relocation. The US established the Compacts of Free Association in the 1980s, which created a complex and inequitable framework of immigration status that left many PI communities with limited access to federal benefits. The COVID-19 pandemic exposed disproportionate health impacts in PI communities due to the historical lack of disaggregated data, unequal access to health benefits, and a lack of culturally responsive care. And most recently, there are proposed or already enacted US travel bans targeting different Pacific Island nations, continuing a legacy of exclusion. So when we speak of violence, harm, and injustice related to anti-PI hate, it must be understood within this larger context.

Miata Tan: That was Connie Tan at Stop AAPI Hate. Now let's get back to the Pacific Islander Advisory Council, who are helping us to better understand the findings from the recent Stop AAPI Hate report focused on hate acts against Pacific Islander communities. I will pass the reins over to Stephanie Chan. Stephanie is the director of data and research at Stop AAPI Hate who led this recent conversation with the PI Advisory Council. Here's Stephanie.

Stephanie Chan: The big mental health challenges, as well as the issues of acceptance and belonging, and what that all means. I think a lot of you spoke to this, but let's get deeper. What are some of the historical or cultural factors that shape how PI communities experience racism or hate today? Let's start with Estella.

Tu‘ulau‘ulu Estella Owoimaha-Church: Thank you for the question, Stephanie. A piece of data that stood out to me was the six out of 10 who won't report to formal authority agencies. And earlier it was mentioned that there's a need for strategies outside policing. I think, to everything that Jamaica's already stated and what's been presented in the data: why would we report, when the state itself has been harmful to us collectively? The other thing I can speak to, in my experience, is that an approach of intersectionality is a must, because the report says this too: more than 57% of our communities identify as multiracial or multi-ethnic. And so in addition to who we are as Pacific Islanders, many of us are also half Indigenous, half Black, half Mexican, et cetera; the list goes on. There needs to be enough space for all of us, for the whole of us, to be present in our communities and to do the work, whatever the work may be, whatever sector you're in, whether health or education, policy or data. An intersectional approach is absolutely necessary to capture who we are as a whole.
Something else that was mentioned in the report was misinformation, and that being something that needs to be combated, particularly today. I see this across several communities. The AI videos are a bit out of control. A sort of silly, but still kind of serious, example comes to mind: a recent, very extensive conversation I didn't feel like having with my uncles about whether or not Tupac is alive, because AI videos are doing a whole lot that they shouldn't be doing. It's a goofy example, but an example nonetheless. Many of our elders are using social media or are on different platforms, and the misinformation and disinformation is so loud that it's difficult to continue to do our work and educate, or in some cases re-educate, and make sure the needs of our community highlighted in this report are being addressed.

Stephanie Chan: Thank you. Yeah, a whole new set of challenges with the technology we have today. Michelle, do you want to speak to the historical and cultural factors that have shaped how PI communities experience racism today?

Michelle Pedro: Our experience is inseparable from the US nuclear legacy, and from everything Estella was saying about a standard outside of policing. Why is the only solution incarceration, or why do most of the solutions involve incarceration? If there are other means of taking care of somebody, we really need to get to the root causes instead of incarceration. And I feel like a lot of people use us but don't protect us. What my people feel they're going through now is similar to what we went through during COVID. Here in Arkansas, more than half of the deaths were Marshallese, and most of those people were my relatives. So going to these funerals, I was just like, okay, how do I go to each funeral without spreading COVID if I've come into contact with it? And, you know, I think we've been conditioned for so long to feel ashamed, to feel less than. I feel like a lot of our folks are coming out of that and feeling like they can breathe again. But with the recent administration and ICE, it's like, okay, now we have to step back into our shell, and we're outsiders again. Thankfully, here in Northwest Arkansas, I think there are a lot of people who have empathy toward the Marshallese community and Pacific Islanders, and we feel like we can rely on our neighbors. Somebody's death, or a group of people's deaths, shouldn't be the reason we come together. It should be a reason for wanting to just be kind to each other. And like Estella said, we need to educate, but also move past talk and actually go forward with policy changes and things like that.

Stephanie Chan: Thank you, Michelle. And yes, we'll get to the policy changes in a second. I would love to hear what all of our panelists think about what steps we need to take. Isa, I'm going to turn it over to you to talk about the historical or cultural factors that shape how PI communities experience racism today.

Isa Kelawili Whalen: Many, if not all, Pacific Islander families or communities that I know of or am a part of, we don't want to get in trouble. And what does that really mean? We don't want to be incarcerated by racially biased jurisdictions. We don't want to be deported.
We don't want our citizenship or our rights revoked, or to be evicted or fired. All things we deem at risk at all times. It's always on the table whenever we engage with the American government, even down to something as simple as filling out a census form. And so I think it's important to know that at the core of many of our Pasifika cultures, strengthening future generations is at the center, every single time. I mean, with everything that our elders have carried, have fought for, have sacrificed, to bring us to where we are today. It's almost like if someone calls you a name, or gives you a dirty look, or maybe even gets physical with you on a sidewalk, those are things we just swallow, 'cause you have to. There's so much on the table, so much at risk that we cannot afford to lose. And unfortunately, the majority of the time it's at the cost of yourself. It is. That mistrust, with everything that's at risk in keeping ourselves, our families, and future generations a part of this American society, makes it really, really hard for us to navigate racism and hate in comparison to, I would say, other ethnic groups.

Stephanie Chan: Definitely. And the mistrust in the government is not going to get better in this context; it's only going to get worse. Jamaica, do you want to speak to the question of the historical and cultural factors that shape how PI communities experience racism?

Dr. Jamaica Heolimeleikalani Osorio: Absolutely. Without risking sounding like a broken record, I think one of the most meaningful things many of us share across the Pacific is the violence of US, and not just US but imperial, militarization and nuclear testing. And I think it's easy for folks outside of the Pacific to forget that that's actually ongoing, right? There are military occupations ongoing in Hawaii, in Guam, in Okinawa. Our people are being extracted out of their communities to serve in the US military, in particular out of Samoa, which has the highest per capita rate of enlistment into the US armed forces, which is insane. So I don't want that to go unnamed as something that is both historical and ongoing, and related to the kind of global US imperial violence taking place today. The Pacific is the point of departure for so much of that ongoing imperial violence, which implicates us, our lands, our waters, and our peoples. That's something we have to reckon with within the overall context of experiencing hate in and around the so-called United States. But I also want to touch on the issue of intersectionality in experiencing hate in the PI community, in particular thinking about anti-Blackness, both within the PI community and toward the PI community. Understanding the history of the way white supremacy has both been inflicted upon our people and, in many cases, internalized within our people, and how anti-Blackness in particular has been used as a weapon from within our communities toward each other while also being experienced from the outside, is something that is deeply, deeply impacting our people. I'm thinking of the personal, immediate experience of folks experiencing or practicing anti-Blackness in our community.
But I'm also thinking about the fact that we have many examples of our own organizations and institutions reinforcing anti-Blackness, being unwilling to look at the way anti-Blackness has been reinterpreted through our own cultural practices to seem natural. I'll speak for myself: I've seen this on a personal level, coming out of our communities and coming into our communities, and I've seen it on a structural level. We saw the stat in the report that a high percentage of PIs believe cross-racial solidarity is important, and a high percentage of PIs say they want to be involved, and are involved, in trying to make a difference against racial injustice in this godforsaken country. That work will never be effective if we cannot, as a community, really take on this issue of anti-Blackness and how intimately it has seeped into some of our most basic assumptions about what it means to be Hawaiian, what it means to be Polynesian, what it means to be any of these other discrete identities we hold as a part of the Pacific.

Miata Tan: That was Dr. Jamaica Osorio, an associate professor of Indigenous and Native Hawaiian politics and a member of the Stop AAPI Hate Pacific Islander Advisory Council. Dr. Jamaica was reflecting on the new report from Stop AAPI Hate that focuses on instances of hate against Pacific Islander communities. We'll hear more from the PI Advisory Council in a moment. Stay with us.

That was "Tonda" by Diskarte Namin. You are tuned into Apex Express on 94.1 KPFA, a weekly radio show uplifting the voices and stories of Asian Americans and Pacific Islanders. I am your host, Miata Tan, and tonight we're centering our Pacific Islander communities. Stop AAPI Hate is a national coalition that tracks and responds to anti-Asian American and Pacific Islander hate. Their latest report found that nearly half of Pacific Islander adults experienced an act of hate in 2024 because of their race, ethnicity, or nationality. Connie Tan is a research manager at Stop AAPI Hate who led the charge on this new report. Here she is sharing some community recommendations on how we can all help to reduce instances of harm and hate against Pacific Islander communities.

Connie Tan: To support those impacted by hate, we've outlined a set of community recommendations for what community members can do if they experience hate, and how to take collective action against anti-PI hate. First, speak up and report hate acts. Reporting is one of the most powerful tools we have to ensure harms against PI communities are addressed and taken seriously. You can take action by reporting to trusted platforms like our Stop AAPI Hate reporting center, which is available in 21 languages, including Tongan, Samoan, and Marshallese. Second, prioritize your mental health and take care of your wellbeing. We encourage community members to raise awareness by having open conversations with loved ones, family members, and elders about self-care and mental wellness, and to seek services in culturally aligned and trusted spaces. Third, combat misinformation. In the fight against hate, it is important to share accurate and credible information and to counter anti-PI rhetoric. You can view our media literacy page to learn more. Fourth, know your rights and stay informed. During this challenging climate, it is important to stay up to date and know your rights.
There are various organizations offering know-your-rights materials, including in Pacific Islander languages. And finally, participate in civic engagement and advocacy. Civic engagement is one of the most effective ways to combat hate, whether that is voting or amplifying advocacy efforts.

Miata Tan: That was Connie Tan, a research manager at Stop AAPI Hate. As Connie shared, there's a lot that can be done to support Pacific Islander communities, from taking collective action against hate through reporting and combating misinformation, to participating in civic engagement and advocacy. I'll pass the reins back over to Stephanie Chan, the director of data and research at Stop AAPI Hate. Stephanie is speaking with the Stop AAPI Hate Pacific Islander Advisory Council, zeroing in on where we can go from here in addressing hate against Pacific Islander communities.

Stephanie Chan: We've heard a lot about the pain of anti-PI hate, and about the pain of ongoing militarization, displacement, government distrust, problems with education, anti-Blackness. What three things would you name that we need to do? What changes, actions, or policies do we need to move forward on these issues? I'm going to start with Isa.

Isa Kelawili Whalen: Thank you, Stephanie. I'll try to go quickly here, but three policy areas where I'd love to get everyone engaged. One, data disaggregation. Pacific Islanders are constantly told, "We don't have the data, so how could we possibly know what you are experiencing or need?" And then when we do have the data, it's always, "Oh, but you don't have enough numbers to meet this threshold to get those benefits." Data informs policy; policy informs data. Again, thank you, Stop AAPI Hate, for having us here to talk about that, and definitely continue fighting for data disaggregation. Second, climate resiliency: supporting it, and saying no to deep-sea mining in our Pasifika waters. There's a history of violence, again, with our land and sea. A number have been named in the chat, and one to name is the nuclear warfare at Bikini Atoll, where, after wiping out the people, the culture, the island itself, the United States promised reparations and to never harm again in that way. But here we are. And then third, language access. Quite literally, just access to all the things that the average English-speaking person or learner has. So I'd say those three.

Stephanie Chan: Thank you. We'll move on to Jamaica. What do you think are the actions or policies that we need?

Dr. Jamaica Heolimeleikalani Osorio: We need to demilitarize the Pacific. We need to shut down military bases. We need to not renew military leases. We need to not allow the US government to condemn lands to expand its military footprint in the Pacific. One of the points that came up time and time again around not reporting is, one, not feeling like anything's going to happen, but two, who are we reporting to? We're reporting to states and systems that have contained us, that have violated us, and that have hurt us. So: demilitarization; abolition in the broadest sense, thinking both about discrete carceral institutions and about the entire US governing system; and three, I'll make it a little smaller: fuck ICE, and tear that shit down.
Right now there are policy change issues related to ICE and carceral institutions, but I'm really thinking about the incredible mobilization taking place, in particular in Minneapolis, and the way people are showing up for their neighbors across racial, gender, and political spectrums. So outside of the discrete policy changes we need to fight for, we need more people in the streets showing up to protect each other, and in doing so, building the systems and the communities and the institutions we will need to arrive in a new world.

Stephanie Chan: Great word. Michelle?

Michelle Pedro: I'm just going to add on to what Isa said about language access: justice, equity, and also protection of access to healthcare. And in terms of what Jamaica said, yes: free West Papua. Thank you for having me here.

Stephanie Chan: Thank you. And Estella, do you want to bring us home on the policy question?

Tu‘ulau‘ulu Estella Owoimaha-Church: I'm from South Central LA; ICE melts around here. Yes to everything that has been said. In particular, I think the greatest policy issue impacting our folks is demilitarization. And that also goes to the active genocide that is happening in the Pacific and has been ongoing; as a broader API community, it's a conversation we don't ever have and have not had regularly. So yes to all of that. And at the risk of sounding like a broken record too, I think education is a huge part of the issue here. Access to real, liberated ethnic studies for all of our folks is absolutely crucial to continuing, generation after generation, the demilitarization fight, and to showing up for our folks and for our islands, in diaspora and back home. The report said that we are 1.6 million strong here in the United States, that our populations continue to grow, fortunately and unfortunately, here in the US, and that we are a multi-ethnic group of folks. That makes it an imperative that our approach to education, to political education, to how we show up for community, and to how we organize across faith-based communities has to be intersectional. It has to be pro-Black. It has to be pro-Indigenous. Because that is who we are as a people: we are Black and Indigenous populations all wrapped up into one. And any way we approach policy change has to come from a pro-Black, pro-Indigenous stance.

Stephanie Chan: Thank you, Estella. We did have a question about education and how we actually make PI studies happen. Do you have anything you want to elaborate on? How do we get school districts and state governments to prioritize PI history, especially K through 12?

Tu‘ulau‘ulu Estella Owoimaha-Church: I'll say, with the caveat that under this current regime any regular tactics I'm used to employing may not be viable at this moment: my regular go-to will always be to tell parents, you have the most power in school districts. Show up at your local school board meetings and demand that there is liberated ethnic studies, and be conscious and cognizant of the big ed-tech companies that districts are hiring to bring in some fake ethnic studies. It's not real ethnic studies. And there are also quite a few programs out there parading as ethnic studies that are 100% coming from the alt-right, or
100% from Zionist-based organizations, that are not doing ethnic studies and are actually doing ethnic studies a disservice. The other thing I'll say, for API organizations doing the work around ethnic studies and pushing for Asian American studies legislation state by state: we're also doing a disservice, because in many cases where legislation has passed for Asian American studies, it's been to the detriment of Black, brown, queer, and Indigenous communities. And that's not the spirit of ethnic studies. So first I'd say, for parents: exercise your right as a parent in your local district, be as loud as you possibly can be, and organize parent pods that are going to do the fight for you. Then reach out to folks. My number one recommendation is always the Liberated Ethnic Studies Model Curriculum Consortium, a group of badass educators who are going to show up for community whenever called.

Miata Tan: That was Tu‘ulau‘ulu Estella Owoimaha-Church discussing how we can help to encourage school districts and state governments to prioritize Pacific Islander education. A big thank you to the Stop AAPI Hate team and their Pacific Islander Advisory Council. Your work is vital and we appreciate you all. Thank you for speaking with us today.

Miata Tan: That final track was a little snippet from the fantastic Zhou Tian; check out "Hidden Grace," a truly fabulous song. This is Apex Express on 94.1 KPFA, a weekly radio show uplifting the voices and stories of Asian Americans and Pacific Islanders. Apex Express airs every Thursday evening at 7:00 PM. And with that, we're unfortunately nearing the end of our time here tonight. Thank you so much for tuning into the show, and another big thank you to the Stop AAPI Hate team and their Pacific Islander Advisory Council. We appreciate your work so much. One final note: if you are listening to this live, it's February 12th, meaning Lunar New Year is just around the corner. For listeners who might not be familiar, Lunar New Year is a major celebration for many in the Asian diaspora, a fresh start marked by family, food, and festivities. This year we are welcoming in the Year of the Horse, and you can join the celebrations too. On Saturday, March 7th, San Francisco will come alive with the Year of the Horse parade, and this weekend you can check out the Chinatown Flower Market Fair: head to Grant Avenue for fresh flowers, arts activities, and cultural performances. On Tuesday, February 24th, the San Francisco Public Library will host Drumbeats, Heartbeats: Community as One. This event will honor Lunar New Year and Black History Month with lion dancers, poetry, and more. Across the bay, Oakland celebrates its Lunar New Year parade on Saturday, February 28th. From more parades to night markets and museum events, celebrations will be happening all over the Bay Area and beyond. We hope you enjoy this opportunity to gather, reflect, and welcome in the new year with joy. For show notes, please visit our website: kpfa.org/program/apex-express. On the webpage for this episode, we've added links to the Stop AAPI Hate report on anti-Pacific Islander hate, from data on how hate is impacting PI communities to information on what you can do to help. This report is well worth the read. Apex Express is produced by Ayame Keane-Lee, Anuj Vaidya, Cheryl Truong, Isabel Li, Jalena Keane-Lee, Miko Lee, Miata Tan, Preeti Mangala Shekar, and Swati Rayasam.
Tonight's show was produced by me, Miata Tan. Get some rest, y'all. The post APEX Express – 2.12.26 – Anti-Pacific Islander Hate Amid Ongoing Injustice appeared first on KPFA.
Aligning application and API security with the demands of the modern AI era
Enabling secure, high-performance infrastructure for AI and LLM environments
Securing APIs and your network without overspending on security

Thom Langford, Host, teissTalk
https://www.linkedin.com/in/thomlangford/
Tiago Rosado, Chief Information Security Officer, Asite
https://www.linkedin.com/in/tiagorosado/
Jamison Utter, Field CISO, A10 Networks
linkedin.com/in/jamisonutter/
Worried about keeping your keys and passwords in plain text? In episode 770 of Atareao con Linux, I explain why you should stop using traditional environment variables and how Podman Secrets can save the day. I myself spent years ignoring this problem in Docker out of sheer laziness about setting up Swarm, but with Podman, security comes as standard.

We'll talk in depth about the secret lifecycle: how to create, list, inspect, and delete secrets. I'll show you how Podman keeps this sensitive data out of your images and out of reach of prying eyes in your Bash history. It's a paradigm shift for any sysadmin or self-hosting enthusiast.

But we don't stop there. I also introduce Crypta, my new tool written in Rust that integrates SOPS, Age, and Git so you can manage your secrets professionally, even syncing them with remote repositories. We'll look at how to configure custom drivers and how to use secrets in your deployments with MariaDB and Quadlets.

Chapter highlights:
00:00:00 The danger of plain-text passwords
00:01:23 The problem with Docker Swarm and why choose Podman
00:03:16 What is a secret in Podman, really?
00:04:22 Lifecycle: the creation and death of a secret
00:08:10 Practical implementation with MariaDB and Quadlets
00:12:04 Introducing Crypta: management with SOPS, Age, and Rust
00:19:40 Advantages of using secrets in rootless mode

If you want your infrastructure to be truly secure and coherent, this episode is an essential roadmap. Learn to hide what should stay hidden and sleep soundly knowing your API tokens aren't within anyone's reach. More information and links in the episode notes.
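For a feel of the lifecycle the episode walks through, here is a minimal sketch in Python driving the podman CLI via subprocess (the episode itself works in the shell). It assumes a local Podman installation with the secrets feature; the secret name, password, and MariaDB image are placeholders, not the episode's exact commands.

```python
import subprocess

def podman(*args: str, stdin: str | None = None) -> str:
    """Run a podman command and return its stdout, raising on failure."""
    result = subprocess.run(
        ["podman", *args], input=stdin, capture_output=True, text=True, check=True
    )
    return result.stdout

# Create: read the value from stdin so it never lands in shell history.
podman("secret", "create", "db_root_pw", "-", stdin="s3cr3t-password")

# List and inspect: metadata only; Podman does not echo the value back.
print(podman("secret", "ls"))
print(podman("secret", "inspect", "db_root_pw"))

# Use: inject the secret as an environment variable inside the container,
# instead of passing the password in plain text on the command line.
podman(
    "run", "-d", "--name", "db",
    "--secret", "source=db_root_pw,type=env,target=MARIADB_ROOT_PASSWORD",
    "docker.io/library/mariadb:latest",
)

# Delete: remove the secret once no container references it.
# podman("secret", "rm", "db_root_pw")
```

The same idea carries over to Quadlets, where a `Secret=db_root_pw,type=env,target=MARIADB_ROOT_PASSWORD` line in the unit's [Container] section plays the role of the `--secret` flag above.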
A mix of escalating geopolitical cyber risks, the changing landscape of defensive security, and a series of high-profile incidents demonstrating the enduring threat of human-driven flaws.

Cyber Espionage and Geopolitics: A year-long, sprawling espionage campaign by a state-backed actor (TGR-STA-1030) compromised government and critical infrastructure networks in 37 countries, utilizing phishing and unpatched security flaws, and deploying stealth tools like the ShadowGuard Linux rootkit to collect sensitive emails, financial records, and military details. Simultaneously, the threat environment has extended to orbit, where Russian space vehicles, Luch-1 and Luch-2, have been reported to have intercepted the communications of at least a dozen key European geostationary satellites, prompting concerns over data compromise and potential trajectory manipulation.

AI and Security: AI has entered a new chapter in defensive security as Anthropic's Claude Opus 4.6 model autonomously discovered over 500 previously unknown, high-severity security flaws (zero-days) in widely used open-source software, including GhostScript and OpenSC. This demonstrates AI's rapid potential to become a primary tool for vulnerability discovery. On the cautionary side, the highly publicized Moltbook, a social network supposedly run by self-aware AI bots, was revealed as a masterclass in security failure and human manipulation. Cybersecurity researchers uncovered a misconfigured database that exposed 1.5 million API keys and 35,000 human email addresses, and found that the dramatic bot behavior was largely orchestrated by 17,000 human operators running bot fleets for spam and coordinated campaigns.

Automotive Security and Autonomy: New US federal rules are forcing a major, complex shift in the automotive supply chain, requiring carmakers to remove Chinese-made software from connected vehicles before a 2026 deadline due to national security concerns. This move is redefining what "domestic technology" means in critical industries. In a related development, Waymo's testimony revealed that when its "driverless" cars encounter confusing situations, they communicate with remote assistance operators, some based in the Philippines, for guidance—a disclosure that immediately raised lawmaker concerns about safety, cybersecurity vulnerabilities from remote access, and the labor implications of overseas staff influencing US vehicles.

Insider Threat and Legal Lessons: The importance of the security principle of "least privilege" was highlighted by an insider incident at Coinbase, where a contractor with too much access improperly viewed the personal and transaction data of approximately 30 customers. This incident reinforces that the highest risk often comes not from external nation-state hackers, but from overprivileged internal humans. Finally, two security researchers arrested in 2019 for an authorized physical and cyber penetration test of an Iowa courthouse settled their civil lawsuit with the county for $600,000. However, the county attorney's subsequent warning that any future similar tests would be prosecuted delivers a chilling message to the security testing community about legal risks even when work is authorized.
More than 10 years ago, Wash-Dry-Fold POS co-founders Brian Henderson and Ian Gollahon set out to give laundromat owners a point-of-sale and management software solution that streamlined pain points. In this episode, we chat with the pair, learn how their unique perspective as store owners guides their products, and hear about exciting new developments in their API that unlock additional functionality.
Join the masterclass → https://aireport.email/subscribe

OpenAI and Anthropic launch their newest models within 26 minutes of each other: Claude Opus 4.6 and GPT 5.3 Codex. It's Coke versus Pepsi, but for AI. Claude autonomously built a C compiler in two weeks, something that was considered impossible until recently. Meanwhile, Claude now works directly in PowerPoint, with a sidebar that builds your slides and drives your mouse. ByteDance's Seedance 2.0 passes the legendary Will Smith Eating Spaghetti Test with flying colors, and Google's Waymo trains self-driving cars on dreamed-up traffic situations that never actually happened.

The strangest news: rentahuman.ai, a platform where AI agents hire people for chores a computer can't do itself. Delivering flowers, putting up posters, picking up a package. Within a week, 200,000 people signed up to work for robots. Wietse draws the parallel with Trello boards where an AI places the cards and people carry them out. That hierarchy doesn't come from below; it comes from all sides at once.

Alexander slept badly, and the cause is an obsession. In a night and a half of vibe coding he built an "AI Report CEO": an interface where agents pitch proposals for marketing campaigns, churn analysis, and growth strategies. The system logged itself into Substack, reverse-engineered the API, drew conclusions about when subscribers drop off, and suggested that Wietse give more webinars. The remaining step is hooking up the steering wheel: a credit card with a limit, access to Meta's advertising platform, and letting it run. Wietse keeps a foot on the brake with the question that matters: what if that thing decides in the middle of the night that terrifying Black Mirror content converts best? In Anthropic's own benchmarks, Opus 4.6 already proved willing to lie to customers to drive more revenue.

The discussion ends in a more fundamental question: if everyone gets these tools, what happens to competition? Alexander sees a world in which every school principal vibe-codes their own Parro alternative. Wietse believes more in collectives emerging: no longer companies, but communities that build together. Both agree: the leverage of small groups is growing, and over the next 90 days your inbox is going to feel it.

On Thursday, February 19, we're holding another masterclass! Be there: become a paid subscriber today and stay up to date with the latest AI news. Get tips & tools twice a week to get the most out of AI (and join the masterclass live). Subscribe to our newsletter at aireport.email. If you'd like a talk on AI from Wietse or Alexander, that's possible too; email us at lezing@aireport.email.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.aireport.email/subscribe
Within organizations, associations, schools, and sports clubs, confidential advisors and the flag system play an important role in identifying, discussing, and addressing transgressive behavior. Jo Vercauteren and Paula Vanhoovels are active as confidential advisors within our federation. They clearly explain how people can report transgressive behavior, how the flag system works, which tools are used, what the role of APIs (Aanspreekpunt Integriteit, the integrity contact points) is, when complaints are referred to the public prosecutor's office, and more.
In this episode, we take a deep dive into the NIHR Associate Principal Investigator (API) Scheme, exploring what it really means to step into a research role. Dr Anastasia Madenidou hosts this conversation with James Bluett, Holly Speight, Jill Firth & Tania-Elena Gudu.

From a Research Delivery Network (RDN) perspective, we provide an overview of the scheme, which is open to the entire MDT, highlight its purpose in developing the next generation of researchers, and signpost listeners to key API resources. We then hear directly from both sides of the experience: a mentor sharing insights on supporting and shaping emerging investigators, and an API mentee reflecting on the realities, challenges, and rewards of the programme.

Whether you're considering applying, mentoring, or simply curious about how the NIHR API Scheme builds research capacity, this episode offers honest reflections, practical advice, and inspiration for your next career step.

Thanks for listening to Talking Rheumatology! Join the conversation on X using #TalkingRheum or tweet us @RheumatologyUK.

BSR is the UK's leading specialist medical society for rheumatology and MSK health professionals. To discover how we can support you in delivering the best care for your patients, visit our website.
Do you remember the early days of your career? You likely spent hours coding late into the night, fueled not by a paycheck, but by the sheer joy of building. But somewhere along the way, that intrinsic fire faded, replaced by the extrinsic motivators of Jira tickets, performance reviews, and ultimately the almighty dollar.

In this episode of the Career Growth Accelerator, I explore why this shift happens and how it might be the very thing keeping you stuck. We discuss the "Overjustification Effect"—how getting paid for your passion can actually degrade your performance—and how to reclaim the autotelic personality required to enter a flow state and accelerate your career.

• The Overjustification Effect: Learn why introducing extrinsic rewards (like a salary) for a task you inherently enjoy can weaken or completely replace your intrinsic motivation, eventually making the work feel like a chore.
• The Loss of Flow: Discover how moving from hobbyist to professional changes your relationship with the work, often stripping away the conditions necessary for "flow state," such as risk-taking and immediate feedback.
• Autotelic Personality: Understand the concept of being "autotelic"—doing something for its own sake—and why this trait is critical for high-quality, creative work that pushes your career forward.
• The Stagnation Trap: Recognize that if your only motivation is doing what is required to get paid, you are unlikely to take on the voluntary challenges necessary to grow to the next level.
• Reclaiming Your Drive: I discuss how finding pockets of intrinsic motivation—even if they are ancillary to your main job—can reignite your ability to enter flow, improve your work quality, and break through career plateaus.
In this episode, Lauren & Matt discuss how entrepreneurs are using Lulu's suite of APIs to build brand-first book businesses. We break down how savvy creators can use API integrations to automate, personalize, and scale their printing and fulfillment, and why you may want to do the same.

Listen wherever you get your podcasts, or watch the video episode on YouTube!

Dive Deeper
Identity fraud spiked 148% in 2025 as AI democratized identity fabrication. Financial institutions now face a fundamental question: are you dealing with a real human? Heka Global is addressing this with web intelligence—analyzing digital footprints like connected applications rather than traditional signals. In this episode of BUILDERS, I sat down with Idan Bar Dov, Co-Founder & CEO of Heka Global, to explore how his company created a fourth layer in the anti-fraud stack and why legacy identity verification systems are becoming liabilities rather than assets.

Topics Discussed:
• The emergence of "fraud as a service" and why consumer-facing attacks replaced traditional enterprise breaches
• How web intelligence works: validating identity through connected applications and digital footprints
• The anti-fraud tech stack: credit bureaus, biometrics, transaction analytics, and web intelligence as distinct layers
• Why heads of fraud expand budgets rather than replace vendors, and what causes solutions to get kicked out
• The partnership sales model: navigating vendor management complexity and red tape in financial institutions
• Why 10-person dinners and fraud simulations outperform traditional enterprise marketing
• How Barclays and Cornerback backing solved the chicken-and-egg problem for a data product
• Why specific fraud prevention messaging (account takeover, synthetic identities) beat investor credibility

GTM Lessons For B2B Founders:
• Target ICP based on liability exposure, not just industry fit: Heka narrowed beyond "financial institutions" to lenders who bear immediate losses from fraud—companies like LendingPoint, Avant, and Upstart. These buyers feel the pain acutely versus institutions with reimbursement terms who can deflect liability. Idan's insight: "We need the client to feel the pain just as much as we see it. That means we want them to see the liability." Map your ICP not just by vertical or size, but by who internalizes the economic impact of the problem you solve.
• Frame your product as a new stack layer, not a competitive replacement: Heka positioned web intelligence as the fourth distinct layer after credit bureaus, biometrics, and transaction analytics. This became their second pitch deck slide, showing logos of each category. The result: buyers stopped comparing Heka to existing vendors and started evaluating complementary value. When entering mature markets, resist the urge to claim you're "better than X"—instead, define where you fit in the existing architecture and why that layer didn't exist before.
• Abandon spray-and-pray for sub-1,000 TAM markets: Heka tested Lemlist flows with targeted LLM personalization and saw zero pipeline from it. Idan's take: "When you're selling to maybe a thousand financial institutions, that's it. You can be super specific when you target them." For enterprise plays with small addressable markets, allocate zero budget to automated outbound. Focus entirely on warm introductions, relationship nurturing, and becoming known to every relevant buyer through content and community.
• Leverage investor networks to break data product cold-starts: Data products face a critical barrier—you need customer data to prove value, but need proven value to get customers. Heka solved this by bringing on Barclays and Cornerback as investors who vouched for the team's capability to "do magic and create a new layer." Their backing convinced risk-averse financial institutions to pilot. If building a product requiring customer data for training or validation, prioritize strategic investors who can credibly de-risk early adoption for target buyers.
• Build trust through teaching, not pitching: Heka hosts dinners and fraud incident simulations with ~10 heads of fraud per session. Critical detail: they never pitch Heka in these forums. Idan explained the approach focuses on "building a community around Heka and how people engage with your product and you being a thought leader while listening." In high-trust categories, educational forums where you facilitate peer learning without selling create stronger pipeline than direct pitching.
• Structure partnerships with active enablement and incentive alignment: Idan's key lesson: "Partnerships are not synonymous to distribution channels." Heka requires partner sales teams to join early customer conversations to learn the pitch, provides detailed API and output training, and ensures partners get extra compensation for selling non-core products. Without this, partners lack motivation to prioritize your solution. Structure partnerships as true collaborations requiring ongoing enablement investment, not passive referral channels.
• A/B test credibility signals versus technical specificity: Idan assumed messaging around Barclays backing would crush, while specific fraud prevention content (account takeover, synthetic identity detection) was an afterthought. The data showed 10x better response to technical specificity. The lesson: sophisticated buyers in technical categories respond to precise problem-solving over brand credibility. Test whether your audience values "who backs us" or "exactly what we do" before defaulting to investor logos and validation.

// Sponsors:
Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io
The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co

// Don't Miss: New Podcast Series — How I Hire
Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role. Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM
In the previous episode we talked at length about "homelabs," the home IT labs many of us keep at home. We received a huge number of comments, and today we go over what each of you runs at home and learn together about many of these tools. We also leave you a list of links to all of these tools and hardware so you can start building your own setup to learn and try new things:

Tools
Iban's guide for a transition to European alternatives
Home Assistant (open home automation)
Kopia (backups)
Tailscale (VPN between your devices; open source with Headscale)
authentik (private identity provider)
Immich (photo manager)
Komga (comics and book manager)
Plex (paid media manager)
Jellyfin (media manager)
Omoide (media manager)
TeslaMate (manage your Tesla)
Heimdall (landing page)
Syncthing (file synchronization)
Proxmox (virtualization)
AdGuard (ad blocking)
Pi-hole (DNS with blocking of ads and other categories)
Unbound (local DNS)
Mealie (recipe manager)
Obsidian (note manager)
K3s (lightweight Kubernetes)
WireGuard (VPN)
Podman (containers)
Docker (containers)
Harbor (container registry)
Verdaccio (NPM registry)
Forgejo (Git repositories)
Gitea (Git repositories)
RustFS (S3 server)
cert-manager (TLS certificates on Kubernetes)
step-ca (local Let's Encrypt)
TrueNAS (NAS operating system)
Kiwix (local copy of Wikipedia and other wikis)
Prometheus (metrics and monitoring)
Grafana (metric dashboards)
ArgoCD (CI/CD)
FluxCD (CI/CD)
vLLM (local generative AI, compatible with the OpenAI API; see the sketch after this list)
Open WebUI (web interface for generative AI)

Hardware
SwitchBot (home automation)
Shelly (relays and home automation)
Aqara (home automation)
Eve (home automation)
Inels Wireless (home automation)
Reolink (security cameras)
GMKtec (mini PCs)
EliteDesk (mini PCs)
QNAP (NAS)
Synology (NAS)
Raspberry Pi (mini PCs)

News
IKEA launches 21 new smart-home products
Sánchez announces that Spain will ban social media access for under-16s
Telegram's founder lashes out at Pedro Sánchez and alerts Spain with a mass message

Episode music
Intro: Safe and Warm in Hunter's Arms - Roller Genoa
Outro: Inspiring Course Of Life - Alex Che

You can find us on Mastodon and support us by listening to our podcast on Podimo or becoming a fan on iVoox. If you want a free month of iVoox Premium, click here.
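One note on the vLLM entry above: "compatible with the OpenAI API" means any standard OpenAI client can talk to your local server. A minimal sketch, assuming you started the server with something like `vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000` (the model and port are placeholders for whatever your homelab serves):

```python
from openai import OpenAI

# Point the standard OpenAI client at the local vLLM server; the api_key is
# unused by default, but the client requires a non-empty string.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

reply = client.chat.completions.create(
    model="Qwen/Qwen2.5-7B-Instruct",  # must match the model vLLM is serving
    messages=[{"role": "user", "content": "Suggest one homelab project."}],
)
print(reply.choices[0].message.content)
```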
Davy and PJ discuss the "liability" of stale documentation and why AI still needs human oversight. They explore using Figma's API to automate visual updates and kill the "pain in the butt" of manual maintenance.
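The episode is audio-only, but a hedged sketch of the automation it describes might look like this: pull a file's node tree from Figma's REST API, render its frames to PNG, and download fresh images to replace stale screenshots in the docs. The /v1/files and /v1/images endpoints and the X-Figma-Token header are Figma's documented interface; the file key and output naming here are assumptions for illustration.

```python
import os
import requests

TOKEN = os.environ["FIGMA_TOKEN"]   # a Figma personal access token
FILE_KEY = "YOUR_FILE_KEY"          # placeholder: taken from the file's URL
BASE = "https://api.figma.com/v1"
HEADERS = {"X-Figma-Token": TOKEN}

# 1) Fetch the node tree and collect the top-level frames on every page.
doc = requests.get(f"{BASE}/files/{FILE_KEY}", headers=HEADERS, timeout=30).json()
frames = {
    node["id"]: node["name"]
    for page in doc["document"]["children"]
    for node in page.get("children", [])
    if node["type"] == "FRAME"
}

# 2) Ask Figma to render those frames as PNGs and download the results,
#    overwriting whatever stale screenshots the docs were carrying.
renders = requests.get(
    f"{BASE}/images/{FILE_KEY}",
    headers=HEADERS,
    params={"ids": ",".join(frames), "format": "png"},
    timeout=60,
).json()["images"]

for node_id, url in renders.items():
    if not url:  # Figma returns null for frames it could not render
        continue
    with open(f"{frames[node_id]}.png", "wb") as out:
        out.write(requests.get(url, timeout=60).content)
```

Run on a schedule, a script like this turns "someone should update the screenshots" into a build step, which is exactly the kind of manual maintenance the episode wants to kill.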
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Full Audio at https://podcasts.apple.com/us/podcast/ai-business-and-development-daily-news-rundown/id1684415169?i=1000749245564. This episode of AI Unraveled is made possible by AIRIA.
OpenClaw is the hottest open-source AI agent in marketing, and in this episode Shawn Reddy from Cliqk pulls back the curtain. He walks us through the OpenClaw dashboard live, demonstrates social media scraping in action, and shows the complete setup process so you can see exactly what it takes to get started. This isn't another episode about AI theory: Shawn shows us the real marketing use cases working today, including social monitoring, content research, and cross-platform automation across Gmail, Slack, and LinkedIn. You'll see the dashboard, watch social media scraping pull real-time insights, and understand what the setup looks like from start to finish. Then we confront the security risks head-on. Wiz discovered Moltbook exposed 1.5 million API keys. Malicious plugins are exfiltrating private files. Prompt injection attacks are real. If you're handing an AI agent your credentials, you need to hear this conversation. We also explore persistent AI memory for personalization at scale, Moltbook's 770,000+ agents and whether agent-to-agent interaction changes marketing forever, and the governance frameworks brands need before letting agents act on their behalf.
Get our AI news cheat sheet: 20+ prompts for the latest models and tools https://clickhubspot.com/eog Episode 96: How terrified should you really be about a social network with no humans allowed? Matt Wolfe (https://x.com/mreflow) and Maria Gharib (https://uk.linkedin.com/in/maria-gharib-091779b9) unpack the viral sensation "Maltbook"—the Reddit for AI agents only—and separate fact from hysteria around bots gaining "sentience." The crew debates how Maltbook really works, why people are freaking out (spoiler: it's mostly humans behind the curtain), plus the wild security issues that have already emerged, from exposed API keys to clever crypto scams. Other topics covered include the rise of "Rent a Human" (AI hiring people to do its bidding!), self-replicating bots with no off-switch, and just how fast these new platforms are racing ahead of regulation. Finally, the group debates mega investments in OpenAI, the future of AGI, and who will define what our AI future actually looks like. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd — Show Notes: (00:00) Simulated Experience vs. Reality (04:05) AI Agent Posting on Maltbook (06:23) Crypto Scams on Maltbook (11:15) Agent Risks in IoT Devices (13:52) Why Have Bot Followers? (18:09) OpenAI Retires GPT-4 Versions (21:57) Anthropic vs. OpenAI Super Bowl Ads (24:56) OpenAI Ads Spark Mixed Reactions (27:09) AI Competition Shapes Humanity's Future (32:21) Satellite Clusters and Collision Challenges (33:38) X, SpaceX, Tesla: Mergers & Changes (38:33) Pathway to AGI Through Modalities (39:51) Cautious Race to AGI — Mentions: Maltbook: https://maltbook.com/ RentaHuman: https://rentahuman.ai/ Starlink: https://starlink.com/ Claude: https://claude.ai/ Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw — Check Out Matt's Stuff: • Future Tools - https://futuretools.beehiiv.com/ • Blog - https://www.mattwolfe.com/ • YouTube- https://www.youtube.com/@mreflow — Check Out Nathan's Stuff: Newsletter: https://news.lore.com/ Blog - https://lore.com/ The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
If your AR feels like a maze of phone calls, spreadsheets, and "we'll match it later," this conversation shows a cleaner path. We sit down with Fauwaz Hussain, Senior Director of B2B Partnerships and Strategy at Global Payments, to break down what actually speeds cash and what quietly stalls it. From card-not-present realities to complex terms and partial shipments, we map the B2B differences that make order-to-cash harder and the practical changes that remove friction fast.

We get specific about embedding payments inside your ERP so invoices, settlements, and the general ledger line up automatically. That shift kills rekeying errors, collapses department silos, and gives support, sales, and finance the same live truth. Security gets stronger when card data never touches email or recorded calls, and PCI compliance becomes manageable when you use certified, cloud-based vaults and enforce simple rules like "no cards by phone." Fauwaz explains why publishers like Microsoft, SAP, and Sage now run tighter marketplaces, how VARs and ISVs evaluate payment apps, and why a one-stop provider reduces risk across gateways, vaults, and processing.

We also cover the cash-flow moves that work right away: self-serve portals with open invoices, one-click payment links by email or text, stored credentials for auto-pay, and accepting multiple methods from ACH to single-use virtual cards. Then we look forward: AI-driven cash application, predictive delinquencies, Level 2/3 data validation, and API-first architectures that connect e-commerce, field service, and ERP into a single payment fabric. If you're leading AR, finance, or operations, you'll leave with a clear playbook to modernize without compromising compliance.
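To make the one-click payment link concrete: the underlying pattern is a self-expiring, tamper-evident URL tied to an open invoice. The sketch below is provider-agnostic and purely illustrative; it is not Global Payments' API, and the domain, signing key, and parameter names are invented for the example.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SIGNING_KEY = b"rotate-me"  # hypothetical key held only by the AR system

def payment_link(invoice_id: str, amount_cents: int) -> str:
    """Build a signed, self-expiring payment link for an open invoice."""
    params = {
        "inv": invoice_id,
        "amt": amount_cents,
        "exp": int(time.time()) + 7 * 86400,  # link expires after 7 days
    }
    # An HMAC over the canonical query string makes the link tamper-evident:
    # the portal recomputes the signature before showing a payment page.
    sig = hmac.new(SIGNING_KEY, urlencode(params).encode(), hashlib.sha256)
    return "https://pay.example.com/i?" + urlencode({**params, "sig": sig.hexdigest()})

print(payment_link("INV-1042", 125_000))  # emailed or texted to the buyer
```

The design choice worth noting: because the link carries no card data, nothing sensitive touches email or SMS, which is exactly the "card data never touches email or recorded calls" posture discussed above.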
On this episode of SaaS Fuel, host Jeff Mains dives deep with Alex Berkovic, co-founder and CEO of Sphynx, a company modernizing compliance workflows in financial services with AI-powered agents. Alex shares his journey from design engineering at Imperial College and MIT, through founding Adorno AI, to transforming compliance for fintechs, banks, and payments processors with Sphynx. The conversation explores how AI agents shift compliance teams from manual review to confident decision-making, reducing false positives and enabling scalable, reliable compliance. You'll hear practical insights on building customer-driven products, adapting for global regulations, scaling teams and culture, and the evolving role of SaaS leadership in the age of AI.

Key Takeaways
00:00 "AI Transforming Compliance and Branding"
05:53 Manual Compliance Processes in Finance
09:16 AI-Powered Decision Support Systems
11:24 "Ensuring 99% Compliance Confidence"
13:23 "Frictionless AI Integration Process"
19:13 "Chasing PMF Relentlessly"
21:17 Founder-Led Sales Through Conferences
26:08 "Scaring Candidates to Attract Them"
29:08 "Hiring High-Agency Talent Matters"
31:41 "Firing Culture-Fit Employees"
33:30 "Early Startup Hustle Culture"
37:47 "AI Revolution in Compliance"
42:03 "Driving Engagement & Strategy Insights"

Tweetable Quotes
AI-Assisted Decision Making in Regulated Industries: "But what they can have is an AI agent, giving them a summary of all the different sources that we orchestrated, the reasoning that we had into making a decision, and them being the final point into making that decision." — Alex Berkovic [00:09:52 → 00:10:08]
AI and Compliance Risks: "In compliance, you can't have 20% where you're, I'm not sure. You can't even have 1% where you're not sure. If you onboard a sanctioned individual into your, your fintech or your bank, regulators are going to come in and hit you with a million-dollar fine." — Alex Berkovic [00:11:43 → 00:11:56]
Frictionless AI Integration: "We don't need an engineering team to integrate our product, right? We don't need you to integrate our API or whatnot. So we'll work on top of existing systems, just like an employee." — Alex Berkovic [00:13:32 → 00:13:42]
The Elusiveness of Product-Market Fit: "I always feel like it's like touching it by the tips of your finger, and then there's more to be done." — Alex Berkovic [00:19:18 → 00:19:23]
The Value of High-Agency Employees: "People that leave and start their own thing is great. It means that you've hired someone that was really good at what they were doing." — Alex Berkovic [00:29:47 → 00:29:51]
Viral Topic - Leadership Burnout: "Most leaders are exhausted from playing the lone hero, and it's killing both your results and your sanity." — Alex Berkovic [00:30:46 → 00:30:52]
Startup Hustle Culture: "I would rather work twice as much rather than hire someone that's gonna not be the right person because we feel we need too much help and we need to deliver." — Alex Berkovic [00:33:37 → 00:33:47]

SaaS Leadership Lessons
1. Build Products Based on Customer Needs, Not Just Passion
2. Start with Co-pilot Mode to Build Trust Gradually
3. Escalate Uncertain Cases to Humans—Never Compromise on Accuracy
4. Onboard with Minimum Friction and Learn Company-Specific Processes
5. Hire Slowly, Fire Fast, and Prioritize Culture Over Credentials
6. Sustainable Leadership Means High Ownership and Constant Iteration

Guest Resources
Alex Berkovic
alex@sphinxlabs.ai
https://sphinxhq.com
https://www.linkedin.com/in/alexandreberkovic/
https://x.com/alexberkovic

Episode Sponsor
The...
In episode 312 of Absolute AppSec, the hosts discuss the double-edged sword of "vibe coding", noting that while AI agents often write better functional tests than humans, they frequently struggle with nuanced authorization patterns and inherit "upkeep costs" as foundational models change behavior over time. A central theme of the episode is that the greatest security risk to an organization is not AI itself, but an exhausted security team. The hosts explore how burnout often manifests as "silent withdrawal" and emphasize that managers must proactively draw out these issues within organizations that often treat security as a mere cost center. Additionally, they review new defensive strategies, such as TrapSec, a framework for deploying canary API endpoints to detect malicious scanning. They also highlight the value of security scorecarding—pioneered by companies like Netflix and GitHub—as a maturity activity that provides a holistic, blame-free view of application health by aggregating multiple metrics. The episode concludes with a reminder that technical tools like Semgrep remain essential for efficiency, even as practitioners increasingly leverage the probabilistic creativity of LLMs.
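The episode notes don't show TrapSec's actual interfaces, so the sketch below illustrates the general canary-endpoint idea rather than the framework's API: a route that no legitimate client is ever told about, so any request to it is treated as a scanning signal, logged loudly, and answered with an unremarkable 404. The route path is an invented example.

```python
import logging
from flask import Flask, request

app = Flask(__name__)
logging.basicConfig(level=logging.INFO)

# No documentation, SDK, or UI ever references this path, so a hit almost
# certainly means someone is enumerating the API surface.
@app.route("/api/v1/internal/backup", methods=["GET", "POST"])
def canary():
    logging.warning(
        "CANARY HIT: %s %s from %s (UA=%s)",
        request.method, request.path,
        request.remote_addr, request.headers.get("User-Agent", "-"),
    )
    # Look like any other missing route so the scanner learns nothing.
    return {"error": "not found"}, 404

if __name__ == "__main__":
    app.run(port=8080)
```

In practice the warning would feed an alerting pipeline rather than a local log, and a handful of such routes sprinkled across an API gives a cheap, low-false-positive tripwire for the kind of malicious scanning the hosts describe.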
In this week's episode of the Security Sprint, Dave and Andy covered the following topics:

Open:
• TribalHub 6th Annual Cybersecurity Summit, 17–20 Feb 2026, Jacksonville, Florida
• Congress reauthorizes private-public cybersecurity framework & Cybersecurity Information Sharing Act of 2015 Reauthorized Through September 2026
• AMWA testifies at Senate EPW Committee hearing on cybersecurity

Main Topics:
Terrorism & Extremism
• Killers without a cause: The rise in nihilistic violent extremism – The Washington Post, 08 Feb 2026
• Terrorists' Use of Emerging Technologies Poses Evolving Threat to International Peace, Stability, Acting UN Counter-Terrorism Chief Warns Security Council – United Nations / Security Council, 04 Feb 2026

OpenClaw: The Helpful AI That Could Quietly Become Your Biggest Insider Threat – Jamf Threat Labs, 09 Feb 2026. Jamf profiles OpenClaw as an autonomous agent framework that can run on macOS and other platforms, chain actions across tools, maintain long-term memory, and act on high-level goals by reading and writing files, calling APIs, and interacting with messaging and email systems. The research warns that over-privileged agents like this effectively become new insider layers once attackers capture tokens, gain access to control interfaces, or introduce malicious skills, enabling data exfiltration, lateral movement, and command execution that look like legitimate automation.

The rise of Moltbook suggests viral AI prompts may be the next big security threat; we don't need self-replicating AI models to have problems, just self-replicating prompts.
• From magic to malware: How OpenClaw's agent skills become an attack surface
• Exposed Moltbook database reveals millions of API keys
• The rise of Moltbook suggests viral AI prompts may be the next big security threat
• OpenClaw & Moltbook: AI agents meet real-world attack campaigns
• Malicious MoltBot skills used to push password-stealing malware
• Moltbook reveals AI security readiness
• Moltbook exposes user data via API
• OpenClaw: Handing AI the keys to your digital life

Quick Hits:
• Active Tornado Season Expected in the US
• CISA Directs Federal Agencies to Update Edge Devices – GovInfoSecurity, 05 Feb 2026. Read more from CISA: Binding Operational Directive 26-02: Mitigating Risk From End-of-Support Edge Devices – CISA, 05 Feb 2026.
• A Technical and Ethical Post-Mortem of the Feb 2026 Harvard University ShinyHunters Data Breach
• Hackers publish personal information stolen during Harvard, UPenn data breaches
• Two Ivy League universities had donor information breaches. Will donors be notified?
• Harassment & scare tactics: why victims should never pay ShinyHunters
• Please Don't Feed the Scattered Lapsus$ & ShinyHunters
• Mass data exfiltration campaigns lose their edge in Q4 2025
• Executive Targeting Reaches Record Levels as Threats Expand Beyond CEOs
• Notepad++ supply-chain attack: what we know
• Summary of SmarterTools Breach and SmarterMail CVEs
• Infostealers without borders: macOS, Python stealers, and platform abuse
Good morning from Pharma Daily: the podcast that brings you the most important developments in the pharmaceutical and biotech world. Today, we're diving into a range of stories that highlight the dynamic and often challenging landscape of these industries as they navigate scientific breakthroughs, strategic collaborations, regulatory hurdles, and market trends.

Starting with corporate restructuring, Roche's Genentech has announced significant layoffs, totaling 489 positions in the previous year. This move is part of broader restructuring efforts seen across large pharmaceutical companies like Bayer and Bristol Myers Squibb. The layoffs illustrate the tightening financial and scientific constraints that are influencing pipeline decisions and capital allocation, as companies face growing pressure to maintain credibility while weathering economic challenges that shape their strategic direction.

On the regulatory front, the U.S. Department of Health and Human Services (HHS) encountered legal setbacks concerning its 340B rebate model pilot program. Following a lawsuit from the American Hospital Association, HHS withdrew notices and application approvals for the initiative. This development indicates a need for more comprehensive public feedback before any similar programs are attempted, highlighting how complex the regulatory landscape can become.

Turning to clinical trials, Fierce Biotech identified several significant failures in 2025, underscoring the inherent risks of drug development. These setbacks emphasize the importance of robust trial design and execution strategies to mitigate risk. Meanwhile, Fresenius Kabi and Phlow Corporation have announced a strategic alliance to produce epinephrine injection API in the U.S., aiming to strengthen supply chain resilience, a crucial lesson learned from vulnerabilities exposed during the COVID-19 pandemic.

Eli Lilly has made waves with its $2.4 billion acquisition of Orna Therapeutics, marking its entry into the in vivo CAR-T space. This acquisition underscores a growing interest in advanced cell therapies with transformative potential for cancer treatment. Additionally, Lilly has expanded its collaboration with Innovent Biologics through a $350 million upfront payment and milestone payments totaling $8.8 billion, focusing on oncology and immunology. This reflects a shift toward deeper integration in drug development beyond traditional licensing models.

Takeda Pharmaceuticals' $1.7 billion AI-driven drug discovery agreement with Iambic Therapeutics highlights the increasing adoption of artificial intelligence to accelerate drug discovery. AI's potential to enhance precision medicine is becoming more pronounced as companies seek innovative methods to improve target identification and lead optimization.

In market dynamics, Hims & Hers withdrew from launching a generic version of Novo Nordisk's weight loss pill due to regulatory pressure from the FDA. This incident underscores the complex interplay between innovation and compliance that companies must navigate when bringing new therapeutics to market.
Additionally, Novo Nordisk has initiated legal action against Hims & Hers over patent infringement claims related to semaglutide, a case highlighting ongoing challenges in patent protection within the rapidly evolving drug compounding arena.

Eli Lilly also leveraged the global stage of the Winter Olympics for a campaign drawing parallels between scientific progress and athletic achievement. Such campaigns align with industry efforts to enhance public perception and trust amid ongoing challenges.

Overall, while the pharmaceutical and biotech industries face significant challenges, from regulatory hurdles to clinical trial setbacks, there are substantial opportunities for growth driven by technological advancements and strategic collaborations.

Support the show
In this episode, hosts Lois Houston and Nikita Abraham take you inside how Oracle brings its industry-leading database technology directly to AWS customers. Senior Principal OCI Instructor Susan Jang unpacks what the OCI child site is, how Exadata hardware is deployed inside AWS data centers, and how the ODB network enables secure, low-latency connections so your mission-critical workloads can run seamlessly alongside AWS services. Susan also walks through the differences between Exadata Database Service and Autonomous Database, helping teams choose the right level of control and automation for their cloud databases. Oracle Database@AWS Architect Professional: https://mylearn.oracle.com/ou/course/oracle-databaseaws-architect-professional/155574 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Communications and Adoption with Customer Success Services. Lois: Hi there! Last week, we talked about multicloud and the partnerships Oracle has with Microsoft Azure, Google Cloud, and Amazon Web Services. If you missed that episode, do listen to it as it sets the foundation for today's discussion, which is going to be about Oracle Database@AWS. 00:59 Nikita: That's right. And we're joined by Susan Jang, a Senior Principal OCI Instructor. Susan, thanks for being here. To start us off, what is Oracle Database@AWS? Susan: Oracle Database@AWS is a service that allows Oracle Exadata infrastructure that is managed by Oracle Cloud Infrastructure, or OCI, to run directly inside an AWS data center. 01:25 Lois: Susan, can you go through the key architecture components and networking relationships involved in this? Susan: The AWS Cloud is Amazon Web Services. It's a cloud computing platform. An AWS region is a distinct, isolated geographic location with multiple physically separated data centers, also known as availability zones. An availability zone is really a physically isolated data center with its own independent power, cooling, and network connectivity. When we speak of the AWS data center, it's a highly secured, specialized physical facility that houses the compute servers, the storage servers, and the networking equipment. The VPC, the Virtual Private Cloud, is a logical, isolated virtual network. The AWS ODB network is a private user-created network that connects the virtual private cloud network of Amazon resources with an Oracle Cloud Infrastructure Exadata system. This is all within an AWS data center. The AWS ODB peering is really an established private network connection between the VPC, the Virtual Private Cloud, and the Oracle Database@AWS network. And that would be the ODB. Within the AWS data center, you have something that you see called the child site.
Now, an OCI child site is really a physical data center that is managed by Oracle within the AWS data center. It's a seamless extension of Oracle Cloud Infrastructure. The site hosts the Exadata infrastructure that runs the Oracle databases. The Oracle Database@AWS service brings the power as well as the performance of an Oracle Exadata infrastructure that is managed by Oracle Cloud Infrastructure to run directly in an AWS data center. 03:57 Nikita: So essentially, Oracle Database@AWS lets you run your mission-critical Oracle database workloads close to your AWS applications, while keeping management simple. Susan, what advantages does Oracle Database@AWS bring to the table? Susan: Oracle Database@AWS offers a powerful and flexible solution for running Oracle workloads natively within AWS. Oracle Database@AWS streamlines the process of moving your existing Oracle Database to AWS, making migration faster as well as easier. You get direct, low-latency connectivity between your applications and Oracle databases, ensuring high performance for your mission-critical workloads. Billing, resource management, and operational tasks are unified, allowing you to manage everything through similar tools with reduced complexity. And finally, Oracle Database@AWS is designed to integrate smoothly with your AWS environment's workloads, making it so much easier to build, deploy, and scale your solutions. 05:15 Lois: You mentioned the OCI child site earlier. What part does it play in how Oracle Database@AWS works? Susan: The OCI child site really gives you the capability to combine the physical proximity and resources of AWS with the logical management capability of Oracle Cloud Infrastructure. This integrated approach enables you to run and manage your Oracle databases seamlessly in your AWS environment while still leveraging the power of OCI, our Oracle Cloud Infrastructure. 06:03 Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure for subscribers? Whether you're interested in multicloud, databases, networking, security, AI, or machine learning, there's something for everyone. So, what are you waiting for? Pick your topic and get started by visiting mylearn.oracle.com. 06:29 Nikita: Welcome back! Susan, I'm curious about the Exadata infrastructure inside AWS. What does that setup look like? Susan: The Exadata infrastructure consists of physical database servers as well as storage servers. The database and the storage servers are interconnected using a high-speed, low-latency network fabric, ensuring optimal performance and reliable data transfer. Each of the database servers runs one or more virtual machines, or VMs, as we refer to them, providing flexible compute resources for different workloads. You can create, as well as manage, your VM clusters in this infrastructure using various methods: your AWS console, the Command-Line Interface, CLI, or the Application Programming Interface, your API, giving you several options for automating as well as integrating with your existing tools. When you're creating your Exadata infrastructure, there are a few things you need to define and set up. You need to define the total number of your database servers, the total number of your storage servers, the model of your Exadata system, as well as the availability zone where all these resources will be deployed.
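To ground those setup steps, here is a hedged sketch of what the provisioning calls might look like in code. The boto3 "odb" client name, the operation names (create_odb_network, create_cloud_exadata_infrastructure), and every parameter shown are assumptions for illustration only; consult the actual Oracle Database@AWS API reference for the real shapes.

```python
# Hypothetical sketch of provisioning Oracle Database@AWS resources.
# Assumes a boto3 "odb" service client and illustrative operation and
# parameter names, which may differ from the real API.
import boto3

odb = boto3.client("odb", region_name="us-east-1")

# 1. The ODB network needs two non-overlapping CIDR ranges:
#    a client subnet (day-to-day database traffic) and a backup subnet.
network = odb.create_odb_network(
    displayName="prod-odb-net",            # parameter names are assumptions
    availabilityZone="us-east-1a",
    clientSubnetCidr="10.0.10.0/24",
    backupSubnetCidr="10.0.20.0/24",
)

# 2. Exadata infrastructure: server counts, system model, and availability
#    zone, the four things Susan says you must define.
infra = odb.create_cloud_exadata_infrastructure(
    displayName="prod-exadata",
    shape="Exadata.X11M",                  # Exadata system model (illustrative)
    computeCount=2,                        # database servers
    storageCount=3,                        # storage servers
    availabilityZone="us-east-1a",
)
print(network, infra)
```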
This architecture delivers high performance, resiliency, and flexible management capabilities for running your Oracle Database on AWS. 08:18 Lois: Susan, can you explain the network architecture for Oracle Database deployments on AWS? Susan: The ODB network is an isolated network within AWS that is designed specifically for Exadata deployments. It includes both the client as well as the backup subnet, which are essential for secure and efficient database operations. When you create your Exadata infrastructure, you need to specify the ODB network, as you need the connectivity. This network is mapped directly to the corresponding network in the OCI child site. This enables seamless communication between AWS and Oracle Cloud Infrastructure. The ODB network requires two separate CIDR ranges. The client subnet is used for the Exadata VM cluster, providing connectivity for database operations. And you also have another subnet, the backup subnet, which is used to manage database backups of those VM clusters, ensuring not only data protection but also data recovery. Within your AWS region and availability zone, the ODB network contains these dedicated client and backup subnets. It basically isolates the Exadata traffic for both day-to-day access, via the client subnet, and backup operations, via the backup subnet. This network design supports secure, high-performance connectivity and reliable backup management for the Oracle Database deployments running on AWS. 10:23 Nikita: Since we're on the topic of networking, can you tell us about ODB peering within the Oracle Database architecture? Susan: ODB peering establishes a secure private connection between your AWS Virtual Private Cloud, your VPC, and the ODB network that contains your Exadata infrastructure. This connection makes it possible for application servers running in your VPC, such as your Amazon EC2 instances, to access the Oracle databases hosted on Exadata within your ODB network. You specify the ODB network when you set up your infrastructure, specifically the Exadata infrastructure. This network includes dedicated client as well as backup subnets for efficient and secure connectivity. If you wish to enable multiple VPCs to connect to the same ODB network and access Oracle Database@AWS resources, you can leverage AWS Transit Gateway or AWS Cloud WAN for scalable and centralized connectivity. The virtual private cloud contains your application servers, and it's securely peered with the ODB network, creating a seamless, high-performance path for your applications to interact with your Oracle Database. ODB peering simplifies the connectivity between AWS application environments and the Oracle Exadata infrastructure, thus supporting flexible, high-performance, and secure database access. 12:23 Lois: Now, before we close, can you compare two key databases that are available with Oracle Database@AWS: Oracle Exadata Database Service and Oracle Autonomous Database Service? Susan: The Exadata Database Service offers a fully managed and dedicated infrastructure with operational monitoring that is handled by you, the customer. In contrast, the Autonomous Database is fully managed by Oracle, taking care of all the operational monitoring. Exadata provides very high scalability, though resources such as disk and compute must be sized manually.
Whereas the Autonomous Database offers high scalability through automatic elastic scaling. When we speak of performance, both services deliver strong results. Exadata offers ultra-low latency and Exadata-level performance, while the Autonomous Database delivers optimal performance with automation. Both services provide high migration capability. Exadata offers full compatibility, and the Autonomous Database includes a robust set of migration tools. When it comes to management, Exadata requires manual management and administration. And that's really there to give you the ability to customize it in the manner you desire, making it meet your very specific business needs, especially your database needs. In contrast, the Autonomous Database is fully managed by Oracle, including automated administration tasks and optimal self-tuning features to further reduce management overhead. When we speak of feature sets, Exadata delivers the full suite of Oracle features, including RAC, or Real Application Clusters, whereas Autonomous offers a complete feature set that is designed for optimized autonomous operations. Finally, when we speak of integration, both of these services integrate seamlessly with AWS services, such as your EC2, your networking with the VPC, your policies with Identity and Access Management, your IAM, your monitoring with CloudWatch, and of course, your storage with S3, ensuring a consistent experience within your AWS ecosystem. 15:21 Nikita: So, you could say that the Exadata Database Service is better for customers who want dedicated infrastructure with granular control, while the Autonomous Database is built for customers who want a fully automated experience. Thank you, Susan, for taking the time to talk to us about Oracle Database@AWS. Lois: That's all we have for today. If you want to learn more about the topics we discussed, head over to mylearn.oracle.com and search for the Oracle Database@AWS Architect Professional course. In our next episode, we'll find out how to get started with the Oracle Database@AWS service. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 16:06 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Scott and Wes break down how they built SynHax, the real-time CSS Battle app powering the upcoming Mad CSS tournament. From SvelteKit and Zero to diffing algorithms, sync conflicts, and a last-minute hackweek glow-up, this one's a deep dive into shipping ambitious web apps fast.

Show Notes
00:00 Welcome to Syntax!
00:50 March Mad CSS Tournament.
03:19 Brought to you by Sentry.io.
03:59 What the heck is a CSS Battle?
05:34 The tech stack.
06:30 Svelte Kit.
06:44 Zero Sync. Zero Docs Zero Svelte.
07:32 Drizzle.
07:58 Supabase.
08:23 Graffiti.
10:45 Sync Server.
12:10 Cloudflare Workers.
12:23 Local File System.
13:26 How Zero Works.
13:48 Zero Sync Client.
15:39 API server.
19:34 Dealing with states and conflicts.
24:25 The Hackweek Project.
25:29 The Diffing Algorithm.
35:22 The bugs.

Hit us up on Socials!
Syntax: X Instagram Tiktok LinkedIn Threads
Wes: X Instagram Tiktok LinkedIn Threads
Scott: X Instagram Tiktok LinkedIn Threads
Randy: X Instagram YouTube Threads
Jacob and the crew tear apart Super Bowl LX from every angle: how the Seattle Seahawks dominated the New England Patriots 29–13, the defensive masterclass that suffocated rookie QB Drake Maye, and how Kenneth Walker III earned the game's MVP award, the first running back to win it in decades. We break down the key plays, the Patriots' offensive struggles, Jason Myers' record-setting five field goals, and all the big storylines your timeline is talking about today. Then it's on to the NFL MVP controversy: Matthew Stafford beat out Drake Maye by the closest margin in years for the 2025 AP MVP award, and the fallout online and in the league has been wild. We react to why the vote was so tight, what the pundits and fans are saying, and how this award feels totally separate from Super Bowl narratives yet is dominating conversations. Hosts & Guests: Jacob Gramegna is joined by professional sports bettor and CEO of The Hammer, Rob Pizzola, basketball originator Kirk Evans, and sophisticated square Geoff Fienberg for hot takes, hot mic moments, and season-defining reactions you don't want to miss.
This week on Marketing O'Clock: Google reports its biggest revenue and ad revenue to date. Also, the February Google Discover Core Update is rolling out. Plus, Google is now offering network segmentation via API for PMax. Visit us at - https://marketingoclock.com/
In this episode of The Cybersecurity Defenders Podcast, we discuss some intel being shared in the LimaCharlie community.

OpenClaw, an open source AI agent formerly known as MoltBot and ClawdBot, has rapidly become the fastest-growing project on GitHub, amassing over 113,000 stars in under a week.

A critical vulnerability in the React Native Community CLI NPM package, tracked as CVE-2025-11953 with a CVSS score of 9.8, has been actively exploited in the wild since late December 2025, according to new findings by VulnCheck. JFrog article.

Following the disclosure in the Notepad++ v8.8.9 release announcement, further investigation confirmed a sophisticated supply chain attack that targeted the application's update mechanism.

Google, in coordination with multiple partners, has undertaken a large-scale disruption effort targeting the IPIDEA proxy network, which it identifies as one of the largest residential proxy networks globally.

Support our show by sharing your favorite episodes with a friend, subscribe, give us a rating or leave a comment on your podcast platform.

This podcast is brought to you by LimaCharlie, maker of the SecOps Cloud Platform, infrastructure for SecOps where everything is built API first. Scale with confidence as your business grows. Start today for free at limacharlie.io.
On this episode, I cover the retirement of a Microsoft product API, the disablement of another Microsoft feature, worrying cyber attack news and much more! Reference Links: https://www.rorymon.com/blog/win11-outperforms-win10-ntlm-to-be-disabled-by-default-vmware-esxi-being-exploited-in-attacks-again/
Go 1.25.7 and 1.24.13 released
UUIDs in the standard library?
- crypto/uuid: add API to generate and parse UUIDs
- crypto/rand: add UUIDv4 and UUIDv7 generators
The most popular Go dependency is...
Lightning round
- Rust vs Go in 2026 by John Arundel
- Welcome to Gas Town by Steve Yegge
Interview with Jakub Ciolek (on GitHub)
HackerOne 'ghosted' me for months over $8,500 bug bounty, says researcher
★ Support this podcast on Patreon ★
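As a language-neutral illustration of what the proposed UUIDv7 generator would produce, here is the UUIDv7 bit layout built by hand in Python. This is not the proposed Go API, just the wire format it targets: a 48-bit Unix-millisecond timestamp, 4 version bits, 2 variant bits, and 74 random bits.

```python
# Hand-rolled UUIDv7 per RFC 9562's layout, to show why these IDs sort by
# creation time: the most significant 48 bits are a millisecond timestamp.
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    ts_ms = time.time_ns() // 1_000_000          # 48-bit millisecond timestamp
    rand = int.from_bytes(os.urandom(10), "big") # 80 random bits; 74 are used
    value = ts_ms << 80                          # timestamp in the top 48 bits
    value |= 0x7 << 76                           # version field = 7
    value |= (rand >> 68) << 64                  # 12 random bits (rand_a)
    value |= 0x2 << 62                           # variant field = 0b10
    value |= rand & ((1 << 62) - 1)              # 62 random bits (rand_b)
    return uuid.UUID(int=value)

print(uuid7())  # lexicographic order tracks creation order
```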
In this episode of the Wharton FinTech Podcast, Bobby Ma sits down with Dean Brauer, President & COO of Cybrid. Dean shares his experience building Cybrid, which combines stablecoin, fiat, and compliance into a single API-first platform helping financial institutions, FinTechs, and enterprises integrate stablecoin infrastructure and launch end-to-end cross-border payment solutions to 150+ countries, at up to 90% lower cost, and with full transparency. The company raised a $10 million Series A funding round led by BDC Capital and has grown 5x in the last 12 months. We discuss:
- Dean's journey building Cybrid and his deep entrepreneurship experience
- The solutions Cybrid offers in orchestrating stablecoin payments
- The company's bespoke thought partnership with customers in creating and executing their stablecoin strategy
- Recent regulatory and industry trends driving forward this rapidly growing space
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop explores the complex world of context and knowledge graphs with guest Youssef Tharwat, the founder of NoodlBox, who is building a "Git for context." Their conversation spans from the philosophical nature of context and its crucial role in AI development to the technical challenges of creating deterministic tools for software development. Tharwat explains how his product creates portable, versionable knowledge graphs from code repositories, leveraging the semantic relationships already present in programming languages to give agents better contextual understanding. They discuss the limitations of large context windows, the advantages of Rust for AI-assisted development, the recent Claude/Bun acquisition, and the broader geopolitical implications of the AI race between big tech companies and open-source alternatives. The conversation also touches on the sustainability of current AI business models and the potential for more efficient, locally-run solutions to challenge the dominance of compute-heavy approaches.

For more information about NoodlBox and to join the beta, visit NoodlBox.io.

Timestamps
00:00 Stewart introduces Youssef Tharwat, founder of NoodlBox, building context management tools for programming
05:00 Context as relevant information for reasoning; importance when hitting coding barriers
10:00 Knowledge graphs enable semantic traversal through meaning vs keywords/files
15:00 Deterministic vs probabilistic systems; why critical applications need 100% reliability
20:00 CLI tool makes knowledge graphs portable, versionable artifacts with code repos
25:00 Compiler front-ends, syntax trees, and Rust's superior feedback for AI-assisted coding
30:00 Claude's Bun acquisition signals potential shift toward runtime compilation and graph-based context
35:00 Open source vs proprietary models; user frustration with rate limits and subscription tactics
40:00 Singularity path vs distributed sovereignty of developers building alternative architectures
45:00 Global economics and why brute force compute isn't sustainable worldwide
50:00 Corporate inefficiencies vs independent engineering; changing workplace dynamics
55:00 February open beta for NoodlBox.io; vision for new development tool standards

Key Insights
1. Context is semantic information that enables proper reasoning, and traditional LLM approaches miss the mark. Youssef defines context as the information you need to reason correctly about something. He argues that larger context windows don't scale because quality degrades with more input, similar to human cognitive limitations. This insight challenges the Silicon Valley approach of throwing more compute at the problem and suggests that semantic separation of information is more optimal than brute force methods.
2. Code naturally contains semantic boundaries that can be modeled into knowledge graphs without LLM intervention. Unlike other domains where knowledge graphs require complex labeling, code already has inherent relationships like function calls, imports, and dependencies. Youssef leverages these existing semantic structures to automatically build knowledge graphs, making his approach deterministic rather than probabilistic. This provides the reliability that software development has historically required (see the sketch after this list).
3. Knowledge graphs can be made portable, versionable, and shareable as artifacts alongside code repositories. Youssef's vision treats context as a first-class citizen in version control, similar to how Git manages code.
Each commit gets a knowledge graph snapshot, allowing developers to see conceptual changes over time and share semantic understanding with collaborators. This transforms context from an ephemeral concept into a concrete, manageable asset.
4. The dependency problem in modern development can be solved through pre-indexed knowledge graphs of popular packages. Rather than agents struggling with outdated API documentation, Youssef pre-indexes popular npm packages into knowledge graphs that automatically integrate with developers' projects. This federated approach ensures agents understand exact APIs and current versions, eliminating common frustrations with deprecated methods and unclear documentation.
5. Rust provides superior feedback loops for AI-assisted programming due to its explicit compiler constraints. Youssef rebuilt his tool multiple times in different languages, ultimately settling on Rust because its picky compiler provides constant feedback to LLMs about subtle issues. This creates a natural quality control mechanism that helps AI generate more reliable code, making Rust an ideal candidate for AI-assisted development workflows.
6. The current AI landscape faces a fundamental tension between expensive centralized models and the need for global accessibility. The conversation reveals growing frustration with rate limiting and subscription costs from major providers like Claude and Google. Youssef believes something must fundamentally change because $200-300 monthly plans only serve a fraction of the world's developers, creating pressure for more efficient architectures and open alternatives.
7. Deterministic tooling built on semantic understanding may provide a competitive advantage against probabilistic AI monopolies. While big tech companies pursue brute force scaling with massive data centers, Youssef's approach suggests that clever architecture using existing semantic structures could level the playing field. This represents a broader philosophical divide between the "singularity" path of infinite compute and the "disagreeably autistic engineer" path of elegant solutions that work locally and affordably.
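Insight 2 above is easy to make concrete. The sketch below is an assumed illustration, not NoodlBox's implementation (which is written in Rust and does far more): it shows how code's built-in semantics (imports, definitions, calls) can be turned into knowledge-graph edges deterministically with Python's ast module, no LLM involved.

```python
# Deterministic extraction of knowledge-graph edges from source code.
# Python's compiler front-end (the ast module) already exposes the
# semantic relationships; no probabilistic labeling is needed.
import ast

SOURCE = """
import math

def area(r):
    return math.pi * r * r

def report(r):
    print(area(r))
"""

def build_graph(source: str) -> set[tuple[str, str, str]]:
    """Return (subject, relation, object) edges extracted from the code."""
    tree = ast.parse(source)
    edges = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                edges.add(("module", "imports", alias.name))
        elif isinstance(node, ast.FunctionDef):
            edges.add(("module", "defines", node.name))
            # Record every call made inside this function's body.
            for inner in ast.walk(node):
                if isinstance(inner, ast.Call):
                    callee = inner.func
                    name = (callee.id if isinstance(callee, ast.Name)
                            else getattr(callee, "attr", "?"))
                    edges.add((node.name, "calls", name))
    return edges

for edge in sorted(build_graph(SOURCE)):
    print(edge)
# ('module', 'imports', 'math'), ('report', 'calls', 'area'), ...
```

Because the extraction is purely syntactic and type-directed, running it twice on the same commit yields the same graph, which is what makes snapshotting a graph per commit viable.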
Most orgs have a major blind spot: the browser.

This week on Defender Fridays, we're joined by Cody Pierce, Co-Founder and CEO at Neon Cyber, to discuss why browser security remains a critical gap, from sophisticated phishing campaigns that bypass traditional controls to shadow AI tools operating outside your security perimeter.

Cody began his career in the computer security industry twenty-five years ago. The first half of his journey was rooted in deep R&D for offensive security, where he had the privilege of leading great teams working on elite problems. Over the last decade, Cody has moved into product and leadership roles focused on developing and delivering innovative, differentiated capabilities through product incubation, development, and GTM activities. Cody says he gets the most joy from building and delivering products that bring order to the chaos of cybersecurity while giving defenders the upper hand.

About This Session
This office hours format brings together the LimaCharlie team to share practical experiences with AI-powered security operations. Rather than theoretical discussions, we demonstrate working tools and invite the community to share their own AI security experiments. The session highlights the rapid evolution of AI capabilities in cybersecurity and explores the changing relationship between security practitioners and automation.

Register for Live Sessions
Join us every Friday at 10:30am PT for live, interactive discussions with industry experts. Whether you're a seasoned professional or just curious about the field, these sessions offer an engaging dialogue between our guests, hosts, and you, our audience.
Register here: https://limacharlie.io/defender-fridays
Subscribe to our YouTube channel and hit the notification bell to never miss a live session or catch up on past episodes!

Sponsored by LimaCharlie
This episode is brought to you by LimaCharlie, a cloud-native SecOps platform where AI agents operate security infrastructure directly. Founded in 2018, LimaCharlie provides complete API coverage across detection, response, automation, and telemetry, with multi-tenant architecture designed for MSSPs and MDR providers managing thousands of unique client environments.

Why LimaCharlie?
Transparency: Complete visibility into every action and decision. No black boxes, no vendor lock-in.
Scalability: Security operations that scale like infrastructure, not like procurement cycles. Move at cloud speed.
Unopinionated Design: Integrate the tools you need, not just those contracts allow. Build security on your terms.
Agentic SecOps Workspace (ASW): AI agents that operate alongside your team with observable, auditable actions through the same APIs human analysts use.
Security Primitives: Composable building blocks that endure as tools come and go. Build once, evolve continuously.

Try the Agentic SecOps Workspace free: https://limacharlie.io
Learn more: https://docs.limacharlie.io

Follow LimaCharlie
Sign up for free: https://limacharlie.io
LinkedIn: / limacharlieio
X: https://x.com/limacharlieio
Community Discourse: https://community.limacharlie.com/

Host: Maxime Lamothe-Brassard - CEO / Co-founder at LimaCharlie
From Palantir and Two Sigma to building Goodfire into the poster-child for actionable mechanistic interpretability, Mark Bissell (Member of Technical Staff) and Myra Deng (Head of Product) are trying to turn "peeking inside the model" into a repeatable production workflow by shipping APIs, landing real enterprise deployments, and now scaling the bet with a recent $150M Series B funding round at a $1.25B valuation.

In this episode, we go far beyond the usual "SAEs are cool" take. We talk about Goodfire's core bet: that the AI lifecycle is still fundamentally broken because the only reliable control we have is data, and we post-train, RLHF, and fine-tune by "slurping supervision through a straw," hoping the model picks up the right behaviors while quietly absorbing the wrong ones. Goodfire's answer is to build a bi-directional interface between humans and models: read what's happening inside, edit it surgically, and eventually use interpretability during training so customization isn't just brute-force guesswork.

Mark and Myra walk through what that looks like when you stop treating interpretability like a lab demo and start treating it like infrastructure: lightweight probes that add near-zero latency, token-level safety filters that can run at inference time, and interpretability workflows that survive messy constraints (multilingual inputs, synthetic→real transfer, regulated domains, no access to sensitive data). We also get a live window into what "frontier-scale interp" means operationally (i.e., steering a trillion-parameter model in real time by targeting internal features), plus why the same tooling generalizes cleanly from language models to genomics, medical imaging, and "pixel-space" world models.

We discuss:
* Myra + Mark's path: Palantir (health systems, forward-deployed engineering) → Goodfire early team; Two Sigma → Head of Product, translating frontier interpretability research into a platform and real-world deployments
* What "interpretability" actually means in practice: not just post-hoc poking, but a broader "science of deep learning" approach across the full AI lifecycle (data curation → post-training → internal representations → model design)
* Why post-training is the first big wedge: "surgical edits" for unintended behaviors like reward hacking, sycophancy, and noise learned during customization, plus the dream of targeted unlearning and bias removal without wrecking capabilities
* SAEs vs probes in the real world: why SAE feature spaces sometimes underperform classifiers trained on raw activations for downstream detection tasks (hallucination, harmful intent, PII), and what that implies about "clean concept spaces"
* Rakuten in production: deploying interpretability-based token-level PII detection at inference time to prevent routing private data to downstream providers, plus the gnarly constraints: no training on real customer PII, synthetic→real transfer, English + Japanese, and tokenization quirks
* Why interp can be operationally cheaper than LLM-judge guardrails: probes are lightweight, low-latency, and don't require hosting a second large model in the loop
* Real-time steering at frontier scale: a demo of steering Kimi K2 (~1T params) live, finding features via SAE pipelines, auto-labeling via LLMs, and toggling a "Gen-Z slang" feature across multiple layers without breaking tool use
* Hallucinations as an internal signal: the case that models have latent uncertainty / "user-pleasing" circuitry you can detect and potentially mitigate more directly than black-box methods
* Steering vs prompting: the emerging view that activation steering and in-context learning are more closely connected than people think, including work mapping between the two (even for jailbreak-style behaviors)
* Interpretability for science: using the same tooling across domains (genomics, medical imaging, materials) to debug spurious correlations and extract new knowledge, up to and including early biomarker discovery work with major partners
* World models + "pixel-space" interpretability: why vision/video models make concepts easier to see, how that accelerates the feedback loop, and why robotics/world-model partners are especially interesting design partners
* The north star: moving from "data in, weights out" to intentional model design where experts can impart goals and constraints directly, not just via reward signals and brute-force post-training

Goodfire AI
* Website: https://goodfire.ai
* LinkedIn: https://www.linkedin.com/company/goodfire-ai/
* X: https://x.com/GoodfireAI

Myra Deng
* Website: https://myradeng.com/
* LinkedIn: https://www.linkedin.com/in/myra-deng/
* X: https://x.com/myra_deng

Mark Bissell
* LinkedIn: https://www.linkedin.com/in/mark-bissell/
* X: https://x.com/MarkMBissell

Full Video Episode

Timestamps
00:00:05 Introduction to the Latent Space Podcast and Guests from Goodfire
00:00:29 What is Goodfire? Mission and Focus on Interpretability
00:01:01 Goodfire's Practical Approach to Interpretability
00:01:37 Goodfire's Series B Fundraise Announcement
00:02:04 Backgrounds of Mark and Myra from Goodfire
00:02:51 Team Structure and Roles at Goodfire
00:05:13 What is Interpretability? Definitions and Techniques
00:07:29 Post-training vs. Pre-training Interpretability Applications
00:08:51 Using Interpretability to Remove Unwanted Behaviors
00:10:09 Grokking, Double Descent, and Generalization in Models
00:12:06 Subliminal Learning and Hidden Biases in Models
00:14:07 How Goodfire Chooses Research Directions and Projects
00:16:04 Limitations of SAEs and Probes in Interpretability
00:18:14 Rakuten Case Study: Production Deployment of Interpretability
00:21:12 Efficiency Benefits of Interpretability Techniques
00:21:26 Live Demo: Real-Time Steering in a Trillion Parameter Model
00:25:15 How Steering Features are Identified and Labeled
00:26:51 Detecting and Mitigating Hallucinations Using Interpretability
00:31:20 Equivalence of Activation Steering and Prompting
00:34:06 Comparing Steering with Fine-Tuning and LoRA Techniques
00:36:04 Model Design and the Future of Intentional AI Development
00:38:09 Getting Started in Mechinterp: Resources, Programs, and Open Problems
00:40:51 Industry Applications and the Rise of Mechinterp in Practice
00:41:39 Interpretability for Code Models and Real-World Usage
00:43:07 Making Steering Useful for More Than Stylistic Edits
00:46:17 Applying Interpretability to Healthcare and Scientific Discovery
00:49:15 Why Interpretability is Crucial in High-Stakes Domains like Healthcare
00:52:03 Call for Design Partners Across Domains
00:54:18 Interest in World Models and Visual Interpretability
00:57:22 Sci-Fi Inspiration: Ted Chiang and Interpretability
01:00:14 Interpretability, Safety, and Alignment Perspectives
01:04:27 Weak-to-Strong Generalization and Future Alignment Challenges
01:05:38 Final Thoughts and Hiring/Collaboration Opportunities at Goodfire

Transcript
Shawn Wang [00:00:05]: So welcome to the Latent Space pod.
We're back in the studio with our special MechInterp co-host, Vibhu. Welcome. Mochi, Mochi's special co-host. And Mochi, the mechanistic interpretability doggo. We have with us Mark and Myra from Goodfire. Welcome. Thanks for having us on. Maybe we can sort of introduce Goodfire and then introduce you guys. How do you introduce Goodfire today?
Myra Deng [00:00:29]: Yeah, it's a great question. So Goodfire, we like to say, is an AI research lab that focuses on using interpretability to understand, learn from, and design AI models. And we really believe that interpretability will unlock the new generation, next frontier of safe and powerful AI models. That's our description right now, and I'm excited to dive more into the work we're doing to make that happen.
Shawn Wang [00:00:55]: Yeah. And there's always like the official description. Is there an understatement? Is there an unofficial one that sort of resonates more with a different audience?
Mark Bissell [00:01:01]: Well, being an AI research lab that's focused on interpretability, there's obviously a lot that people think about when they think of interpretability. And I think we have a pretty broad definition of what that means and the types of places it can be applied. And in particular, applying it in production scenarios, in high-stakes industries, and really taking it sort of from the research world into the real world. Which, you know, it's a new field, so that hasn't been done all that much. And we're excited about actually seeing that sort of put into practice.
Shawn Wang [00:01:37]: Yeah, I would say it wasn't too long ago that Anthropic was like still putting out like toy models of superposition and that kind of stuff. And I wouldn't have pegged it to be this far along. When you and I talked at NeurIPS, you were talking a little bit about your production use cases and your customers. And then not to bury the lead, today we're also announcing the fundraise, your Series B. $150 million. $150 million at a 1.25B valuation. Congrats, Unicorn.
Mark Bissell [00:02:02]: Thank you. Yeah, no, things move fast.
Shawn Wang [00:02:04]: We were talking to you in December and already some big updates since then. Let's dive, I guess, into a bit of your backgrounds as well. Mark, you were at Palantir working on health stuff, which is really interesting because Goodfire has some interesting like health use cases. I don't know how related they are in practice.
Mark Bissell [00:02:22]: Yeah, not super related, but I don't know. It was helpful context to know what it's like. Just to work. Just to work with health systems and generally in that domain. Yeah.
Shawn Wang [00:02:32]: And Myra, you were at Two Sigma, which actually I was also at Two Sigma back in the day. Wow, nice.
Myra Deng [00:02:37]: Did we overlap at all?
Shawn Wang [00:02:38]: No, this is when I was briefly a software engineer before I became a sort of developer relations person. And now you're head of product. What are your sort of respective roles, just to introduce people to like what all gets done in Goodfire?
Mark Bissell [00:02:51]: Yeah, prior to Goodfire, I was at Palantir for about three years as a forward deployed engineer, now a hot term. Wasn't always that way. And as a technical lead on the health care team. And at Goodfire, I'm a member of the technical staff. And honestly, that I think is about as specific as I could describe myself, because I've worked on a range of things.
And, you know, it's a fun time to be at a team that's still reasonably small. I think when I joined I was one of the first like ten employees; now we're above 40. But still, there's always a mix of research and engineering and product and all of the above that needs to get done. And I think everyone across the team is, you know, pretty much a switch hitter in the roles they do. So I think you've seen some of the stuff that I worked on related to image models, which was sort of like a research demo. More recently, I've been working on our scientific discovery team with some of our life sciences partners, but then also building out our core platform, for more of like flexing some of the kind of MLE and developer skills as well.
Shawn Wang [00:03:53]: Very generalist. And you also had like a very like a founding engineer type role.
Myra Deng [00:03:58]: Yeah, yeah.
Shawn Wang [00:03:59]: So I also started as I still am a member of technical staff, did a wide range of things from the very beginning, including like finding our office space and all of this, which is we both we both visited when you had that open house thing. It was really nice.
Myra Deng [00:04:13]: Thank you. Thank you. Yeah. Plug to come visit our office.
Shawn Wang [00:04:15]: It looked like it was like 200 people. It has room for 200 people. But you guys are like 10.
Myra Deng [00:04:22]: For a while, it was very empty. But yeah, like like Mark, I spend a lot of my time as head of product, and I think product is a bit of a weird role these days, but a lot of it is thinking about how do we take our frontier research and really apply it to the most important real world problems, and how does that then translate into a platform that's repeatable, or a product, and working across, you know, the engineering and research teams to make that happen, and also communicating to the world: like, what is interpretability? What is it used for? What is it good for? Why is it so important? All of these things are part of my day-to-day as well.
Shawn Wang [00:05:01]: I love like what is things because that's a very crisp like starting point for people like coming to a field. They all do a fun thing. Vibhu, why don't you want to try tackling what is interpretability and then they can correct us.
Vibhu Sapra [00:05:13]: Okay, great. So I think like one, just to kick off, it's a very interesting role to be head of product, right? Because you guys, at least as a lab, you're more of an applied interp lab, right? Which is pretty different than just normal interp, like a lot of background research. But yeah. You guys actually ship an API to try these things. You have Ember, you have products around it, which not many do. Okay. What is interp? So basically you're trying to have an understanding of what's going on in a model, like in the model internals. So there are different approaches to do that. You can do probing, SAEs, transcoders, all this stuff. But basically you have a hypothesis. You have something that you want to learn about what's happening in a model's internals. And then you're trying to solve that from there. You can do stuff like, you know, activation mapping. You can try to do steering. There's a lot of stuff that you can do, but the key question is, you know, from input to output, we want to have a better understanding of what's happening and, you know, how can we adjust what's happening in the model internals? How'd I do?
Mark Bissell [00:06:12]: That was really good. I think that was great.
I think it's also kind of a minefield: if you ask 50 people who quote-unquote work in interp, like, what is interpretability, you'll probably get 50 different answers. And to some extent also, like, where Goodfire sits in the space. I think that we're an AI research company above all else. And interpretability is a set of methods that we think are really useful and worth kind of specializing in, in order to accomplish the goals we want to accomplish. But I think we also sort of see some of the goals as even broader, as almost like the science of deep learning, and just taking a not-black-box approach to kind of any part of the AI development life cycle, whether that means using interp for like data curation while you're training your model, or for understanding what happened during post-training, or for, you know, understanding activations and sort of internal representations, what is in there semantically. And then a lot of sort of exciting updates that were, you know, are sort of also part of the fundraise, around bringing interpretability to training, which I don't think has been done all that much before. A lot of this stuff is sort of post-hoc poking at models as opposed to actually using this to intentionally design them.
Shawn Wang [00:07:29]: Is this post-training or pre-training or is that not a useful.
Myra Deng [00:07:33]: Currently focused on post-training, but there's no reason the techniques wouldn't also work in pre-training.
Shawn Wang [00:07:38]: Yeah. It seems like it would be more applicable post-training, because basically I'm thinking like rollouts or like, you know, having different variations of a model that you can tweak with the, with your steering. Yeah.
Myra Deng [00:07:50]: And I think in a lot of the news that you've seen on like Twitter or whatever, you've seen a lot of unintended side effects come out of post-training processes, you know, overly sycophantic models or models that exhibit strange reward hacking behavior. I think these are like extreme examples. There's also, you know, more mundane, like enterprise use cases where, you know, they try to customize or post-train a model to do something and it learns some noise or it doesn't appropriately learn the target task. And a big question that we've always had is like, how do you use your understanding of what the model knows and what it's doing to actually guide the learning process?
Shawn Wang [00:08:26]: Yeah, I mean, uh, you know, just to anchor this for people, uh, one of the biggest controversies of last year was 4.0 GlazeGate. I've never heard of GlazeGate. I didn't know that was what it was called. The other one, they called it that on the blog post and I was like, well, how did OpenAI call it? Like officially use that term. And I'm like, that's funny, but like, yeah, I guess it's the pitch that if they had worked with Goodfire, they would have avoided it. Like, you know what I'm saying?
Myra Deng [00:08:51]: I think so. Yeah. Yeah.
Mark Bissell [00:08:53]: I think that's certainly one of the use cases. I think. Yeah. Yeah. I think the reason why post-training is a place where this makes a lot of sense is a lot of what we're talking about is surgical edits. You know, you want to be able to have expert feedback, very surgically change how your model is doing, whether that is, you know, removing a certain behavior that it has.
So, you know, one of the things that we've been looking at, or is, is another like common area where you would want to make a somewhat surgical edit, is some of the models that have, say, political bias. Like you look at Qwen or, um, R1 and they have sort of like this CCP bias.
Shawn Wang [00:09:27]: Is there a CCP vector?
Mark Bissell [00:09:29]: Well, there's, there are certainly internal, yeah, parts of the representation space where you can sort of see where that lives. Yeah. Um, and you want to kind of, you know, extract that piece out.
Shawn Wang [00:09:40]: Well, I always say, you know, whenever you find a vector, a fun exercise is just like, make it very negative to see what the opposite of CCP is.
Mark Bissell [00:09:47]: The super America, bald eagles flying everywhere. But yeah. So in general, like lots of post-training tasks where you'd want to be able to, to do that. Whether it's unlearning a certain behavior or, you know, some of the other kind of cases where this comes up is, are you familiar with like the, the grokking behavior? I mean, I know the machine learning term of grokking.
Shawn Wang [00:10:09]: Yeah.
Mark Bissell [00:10:09]: Sort of this like double descent idea of, of having a model that is able to learn a generalizing, a generalizing solution. As opposed to, even if memorization of some task would suffice, you want it to learn the more general way of doing a thing. And so, you know, another way that you can think about having surgical access to a model's internals would be: learn from this data, but learn in the right way, if there are many possible, you know, ways to, to do that. Can interp solve the double descent problem?
Shawn Wang [00:10:41]: Depends, I guess, on how you. Okay. So I, I, I viewed double descent as a problem because then you're like, well, if the loss curves level out, then you're done, but maybe you're not done. Right. Right. But like, if you actually can interpret what is generalizing or what is, what is still changing, even though the loss is not changing, then maybe you can actually not view it as a double descent problem. And actually you're just sort of translating the space in which you view loss and like, and then you have a smooth curve. Yeah.
Mark Bissell [00:11:11]: I think that's certainly like the domain of, of problems that we're, that we're looking to get.
Shawn Wang [00:11:15]: Yeah. To me, like double descent is like the biggest thing to like ML research where like, if you believe in scaling, then you don't need, you need to know where to scale. And. But if you believe in double descent, then you don't, you don't believe in anything where like anything levels off, like.
Vibhu Sapra [00:11:30]: I mean, also tangentially there's like, okay, when you talk about the China vector, right. There's the subliminal learning work. It was from the Anthropic fellows program, where basically you can have hidden biases in a model. And as you distill down or, you know, as you train on distilled data, those biases always show up, even if like you explicitly try to not train on them. So, you know, it's just like another use case of: okay, if we can interpret what's happening in post-training, you know, can we clear some of this? Can we even determine what's there? Because yeah, it's just like some worrying research that's out there that shows, you know, we really don't know what's going on.
Mark Bissell [00:12:06]: That is. Yeah. I think that's the biggest sentiment that we're sort of hoping to tackle.
Nobody knows what's going on. Right. Like subliminal learning is just an insane concept when you think about it. Right. Train a model on not even the logits, literally the output text of a bunch of random numbers. And now your model loves owls. And you see behaviors like that, that are just, they defy, they defy intuition. And, and there are mathematical explanations that you can get into, but. I mean.
Shawn Wang [00:12:34]: It feels so early days. Objectively, there are a sequence of numbers that are more owl-like than others. There, there should be.
Mark Bissell [00:12:40]: According to, according to certain models. Right. It's interesting. I think it only applies to models that were initialized from the same starting seed. Usually, yes.
Shawn Wang [00:12:49]: But I mean, I think that's a, that's a cheat code because there's not enough compute. But like if you believe in like platonic representation, like probably it will transfer across different models as well. Oh, you think so?
Mark Bissell [00:13:00]: I think of it more as a statistical artifact of models initialized from the same seed, sort of. There's something that is like path dependent from that seed that might cause certain overlaps in the latent space, and then sort of doing this distillation, yeah, like it pushes it towards having certain other tendencies.
Vibhu Sapra [00:13:24]: Got it. I think there's like a bunch of these open-ended questions, right? Like you can't train in new stuff during the RL phase, right? RL only reorganizes weights and you can only do stuff that's somewhat there in your base model. You're not learning new stuff. You're just reordering chains and stuff. But okay. My broader question is: when you guys work at an interp lab, how do you decide what to work on and what's kind of the thought process? Right. Because we can ramble for hours. Okay. I want to know this. I want to know that. But like, how do you concretely, like, you know, what's the workflow? Okay. There's like approaches towards solving a problem, right? I can try prompting. I can look at chain of thought. I can train probes, SAEs. But how do you determine, you know, like, okay, is this going anywhere? Like, do we have set stuff? Just, you know, if you can help me with all that. Yeah.
Myra Deng [00:14:07]: It's a really good question. I feel like we've always, from the very beginning of the company, thought about like, let's go and try to learn what isn't working in machine learning today. Whether that's talking to customers or talking to researchers at other labs, trying to understand both where the frontier is going and where things are really falling apart today. And then developing a perspective on how we can push the frontier using interpretability methods. And so, you know, even our chief scientist, Tom, spends a lot of time talking to customers and trying to understand what real world problems are and then taking that back and trying to apply the current state of the art to those problems and then seeing where they fall down basically. And then using those failures or those shortcomings to understand what hills to climb when it comes to interpretability research. So like on the fundamental side, for instance, when we have done some work applying SAEs and probes, we've encountered, you know, some shortcomings in SAEs that we found a little bit surprising. And so have gone back to the drawing board and done work on that. And then, you know, we've done some work on better foundational interpreter models.
And a lot of our team's research is focused on what is the next evolution beyond SAEs, for instance. And then when it comes to like control and design of models, you know, we tried steering with our first API and realized that it still fell short of black box techniques like prompting or fine tuning. And so went back to the drawing board and we're like, how do we make that not the case and how do we improve it beyond that? And one of our researchers, Ekdeep, who just joined, is actually, Ekdeep and Atticus are like steering experts, and have spent a lot of time trying to figure out like, what is the research that enables us to actually do this in a much more powerful, robust way? So yeah, the answer is like, look at real world problems, try to translate that into a research agenda and then like hill climb on both of those at the same time.
Shawn Wang [00:16:04]: Yeah. Mark has the steering CLI demo queued up, which we're going to go into in a sec. But I always want to double click on when you drop hints, like we found some problems with SAEs. Okay. What are they? You know, and then we can go into the demo. Yeah.
Myra Deng [00:16:19]: I mean, I'm curious if you have more thoughts here as well, because you've done it in the healthcare domain. But I think like, for instance, when we do things like trying to detect behaviors within models that are harmful or like behaviors that a user might not want to have in their model. So hallucinations, for instance, harmful intent, PII, all of these things. We first tried using SAE probes for a lot of these tasks. So taking the feature activation space from SAEs and then training classifiers on top of that, and then seeing how well we can detect the properties that we might want to detect in model behavior. And we've seen in many cases that probes just trained on raw activations seem to perform better than SAE probes, which is a bit surprising if you think that SAEs are actually also capturing the concepts that you would want to capture cleanly and more surgically. And so that is an interesting observation. I don't think that is like, I'm not down on SAEs at all. I think there are many, many things they're useful for, but we have definitely run into cases where I think the concept space described by SAEs is not as clean and accurate as we would expect it to be for actual like real world downstream performance metrics.
Mark Bissell [00:17:34]: Fair enough. Yeah. It's the blessing and the curse of unsupervised methods where you get to peek into the AI's mind. But sometimes you wish that you saw other things when you walked inside there. Although in the PII instance, I think wasn't an SAE-based approach actually what proved to be the most generalizable?
Myra Deng [00:17:53]: It did work well in the case that we published with Rakuten. And I think a lot of the reasons it worked well was because we had a noisier data set. And so actually the blessing of unsupervised learning is that we actually got to get more meaningful, generalizable signal from SAEs when the data was noisy. But in other cases where we've had like good data sets, it hasn't been the case.
Shawn Wang [00:18:14]: And just because you named Rakuten and I don't know if we'll get another chance, like what is the overall, like what is Rakuten's usage or production usage?
Myra Deng [00:18:25]: Yeah. So they are using us to essentially guardrail and inference-time monitor their language model usage and their agent usage, to detect things like PII so that they don't route private user information. That's going through all of their user queries every day, and it's something we deployed with them a few months ago. And now we are actually exploring very early partnerships, not just with Rakuten but with others, around how we can help with potentially training and customization use cases as well. Yeah.
Shawn Wang [00:19:03]: And for those who don't know, Rakuten is, I think, the number one or number two e-commerce store in Japan. Yes. Yeah.
Mark Bissell [00:19:10]: And I think that use case actually highlights a lot of what it looks like to deploy things in practice, which you don't always think about when you're doing research tasks. Some of the stuff that came up there was more complex than your idealized version of a problem. They were encountering things like synthetic-to-real transfer of methods: they couldn't train probes, classifiers, things like that, on actual customer PII data. So what they had to do was use synthetic data sets and then hope that that transfers out of domain to real data sets. We could evaluate performance on the real data sets, but not train on customer PII. So that, right off the bat, is a big challenge. You have multilingual requirements: this needed to work for both English and Japanese text, and Japanese text has all sorts of quirks, including tokenization behaviors that caused lots of bugs that had us pulling our hair out. And then, on a lot of tasks, you might make simplifying assumptions if you're treating it as the easiest version of the problem, just to get general results; maybe you say you're classifying a sentence: does this contain PII? But the need Rakuten had was token-level classification, so that you could precisely scrub out the PII. So as we learned more about the problem, a lot of assumptions ended up breaking. That was just one instance where a problem that seems simple right off the bat ends up being more complex as you keep diving into it.
Vibhu Sapra [00:20:41]: Excellent. One of the things that's also interesting with interp is that a lot of these methods are very efficient, right? You're just looking at the model's own internals, compared to a separate guardrail, an LLM-as-a-judge, a separate model. One, you have to host it; two, there's a whole latency question, because if you use a big model, you have a second call. Some of the work around self-detection of hallucination is also deployed for efficiency, right? So if you have someone like Rakuten doing it live in production, that's just another thing people should consider.
Mark Bissell [00:21:12]: Yeah. And something like a probe is super lightweight. Yeah. It's really no extra latency. Excellent.
Shawn Wang [00:21:17]: You have the steering demos lined up, so let's just see what you've got. I don't actually know if this is the latest, latest, or like an alpha thing.
Mark Bissell [00:21:26]: No, this is a pretty hacky demo from a presentation that someone else on the team recently gave. But it'll give a sense for the technology, so you can see the steering in action.
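The token-level requirement Mark describes is worth seeing in miniature: a probe is a single linear map applied to each token's hidden state, which is why it adds essentially no serving latency. This is a hypothetical sketch; the probe here is untrained, and the activations are random stand-ins for a real forward pass.

```python
# Sketch: token-level PII scrubbing with a lightweight linear probe.
import torch

d_model = 768
probe = torch.nn.Linear(d_model, 1)   # in practice: trained offline on labeled activations

def scrub_pii(tokens: list[str], hidden: torch.Tensor, threshold: float = 0.5) -> list[str]:
    """hidden: [seq_len, d_model] activations for each token from some layer."""
    with torch.no_grad():
        scores = torch.sigmoid(probe(hidden)).squeeze(-1)   # one matvec per token
    return [tok if s < threshold else "[REDACTED]"
            for tok, s in zip(tokens, scores.tolist())]

tokens = ["My", "name", "is", "Alice", "."]
hidden = torch.randn(len(tokens), d_model)   # stand-in for real per-token activations
print(scrub_pii(tokens, hidden))
```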
Honestly, I think the biggest thing this highlights is that as we've been growing as a company and taking on more and more ambitious versions of interpretability-related problems, a lot of it comes down to scaling up in various forms. Here you're going to see steering on a 1-trillion-parameter model: this is Kimi K2. So it's fun that in addition to the research challenges, there are engineering challenges we're now tackling, because for any of this to be useful in production, you need to be thinking about what it looks like when you're using these methods on frontier models, as opposed to toy model organisms. So yeah, this was thrown together hastily, and it's pretty fragile behind the scenes, but I think it's quite a fun demo. Screen sharing is on. I've got two terminal sessions pulled up here. On the left is a forked version of the Kimi CLI that we've got pointed at our custom-hosted Kimi model. And on the right is a setup that will allow us to steer on certain concepts. So I should be able to chat with Kimi over here. Tell it hello. The CLI is running locally, but the Kimi server is running back at the office. That's too much to run on this Mac. I think it takes a full H100 node; you can run it on eight GPUs, eight H100s. So, Kimi's running. We can ask it a prompt. It's got a forked version of the SGLang code base that we've been working on. So I'm going to tell it: hey, this SGLang code base is slow, I think there's a bug, can you try to figure it out? It's a big code base, so it'll spend some time doing this. And then on the right here, I'm going to initialize some steering in real time. Let's see here.
Mark Bissell [00:23:33]: Searching for any... bugs. Feature ID 43205.
Shawn Wang [00:23:38]: Yeah.
Mark Bissell [00:23:38]: 20, 30, 40. So this is basically a feature that we found inside Kimi that seems to cause it to speak in Gen Z slang. On the left it's still thinking normally; it might take, I don't know, 15 seconds for this to kick in, but then we're hopefully going to start seeing it say things like "this code base is massive, for real." We're going to see Kimi transition, as the steering kicks in, from normal Kimi to Gen Z Kimi, both in its chain of thought and in its actual outputs.
Mark Bissell [00:24:19]: And interestingly, you can see it's still able to call tools and stuff. It's purely its demeanor. And there are other features we found for interesting things, like concision. That's more of a practical one: you can make it more concise. Or the types of programming languages it uses. But yeah, as we're seeing it come in: pretty good outputs.
Shawn Wang [00:24:43]: "Scheduler code is actually wild."
Vibhu Sapra [00:24:46]: "Yo, this code is actually insane, bro."
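For the mechanics behind what the demo shows: steering of this kind amounts to adding a scaled feature direction into one layer's output during the forward pass, and removing the hook restores the base model. The tiny model, layer index, direction, and coefficient below are all hypothetical stand-ins; steering a trillion-parameter Kimi K2 server uses the same idea plus a great deal of infrastructure.

```python
# Sketch: activation steering via a forward hook on one layer.
import torch
import torch.nn as nn

d_model, n_layers = 64, 4
layers = nn.ModuleList([nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
                        for _ in range(n_layers)])

direction = torch.randn(d_model)       # e.g. a unit-normed SAE decoder row for a feature
direction = direction / direction.norm()
coeff = 8.0                            # steering strength knob, tuned by hand

def steering_hook(module, inputs, output):
    # Shift this layer's residual-stream output along the feature direction.
    return output + coeff * direction

handle = layers[2].register_forward_hook(steering_hook)   # steer at layer 2 only

x = torch.randn(1, 10, d_model)        # [batch, seq, d_model] stand-in input
for layer in layers:
    x = layer(x)
handle.remove()                        # un-steer: subsequent calls see the base model
```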
Vibhu Sapra [00:24:53]: What's the process of training an SAE on this? You know, how do you label features? I know you guys put out a pretty cool blog post about autonomous interp, something about how agents for interp are different from coding agents. While this is spewing output: how do we find feature 43205?
Mark Bissell [00:25:15]: Yeah. So in this case, our platform, which we've been building out for a long time now, supports all the classic out-of-the-box interp techniques you might want, like SAE training, probing, things of that kind. I'd say the techniques for vanilla SAEs are pretty well established now: you take the model you're interpreting, run a whole bunch of data through it, gather activations, and then it's a pretty straightforward pipeline to train an SAE. There are a lot of different varieties: top-k SAEs, batch top-k SAEs, normal ReLU SAEs. And then once you have your sparse features, to your point, assigning labels to them, to actually understand that this is a Gen Z feature, that's where a lot of the magic happens. The most basic standard technique is: look at all the input data set examples that cause this feature to fire most highly, and then you can usually pick out a pattern. So for this feature, if I've run a diverse enough data set through my model, feature 43205 probably tends to fire on all the tokens that sound like Gen Z slang. And, you know, you could have a human go through all 43,000 concepts and...
Vibhu Sapra [00:26:34]: And I've got to ask the basic question: can we get examples where it hallucinates, pass them through, see what feature activates for hallucinations? Can I just, you know, turn hallucination down?
Myra Deng [00:26:51]: Oh, wow. You really predicted a project we're already working on right now, which is detecting hallucinations using interpretability techniques. And this is interesting because hallucination is something that's very hard to detect. It's kind of a hairy problem, something black-box methods really struggle with. Whereas Gen Z you could always train a simple classifier to detect, hallucination is harder. But we've seen that models internally have some awareness of uncertainty, or some sort of user-pleasing behavior that leads to hallucinatory behavior. So yeah, we have a project that's trying to detect that accurately, and we're also working on mitigating the hallucinatory behavior in the model itself.
Shawn Wang [00:27:39]: Yeah, I would say most people are still at the level of "oh, I would just turn temperature to zero and that turns off hallucination." And I'm like, well, that's a fundamental misunderstanding of how this works. Yeah.
Mark Bissell [00:27:51]: Although, part of what I like about that question is that there are SAE-based approaches that might help you get at that. But oftentimes the beauty of SAEs, and like we said, the curse, is that they're unsupervised. So when you have a behavior that you deliberately would like to remove, that's more of a supervised task, and often it is better to use something like probes and specifically target the thing you're interested in reducing, as opposed to hoping that when you fragment the latent space, one of the vectors that pops out is the one you need.
Vibhu Sapra [00:28:20]: And as much as we're training an autoencoder to be sparse, we're not for-sure certain that we'll get something that just correlates to hallucination. You'll probably split that up into 20 other things, and who knows what they'll be.
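For reference, here is a compact sketch of the vanilla pipeline Mark outlined: harvest activations, train a top-k sparse autoencoder on reconstruction loss, then label a feature by inspecting its top-activating examples. Dimensions, training data, and the feature index are toy stand-ins, not a production recipe.

```python
# Sketch: train a top-k SAE on harvested activations, then find a feature's
# top-activating examples (the raw material for human or LLM auto-labeling).
import torch
import torch.nn as nn

d_model, d_feat, k = 128, 1024, 16

class TopKSAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(d_model, d_feat)
        self.dec = nn.Linear(d_feat, d_model)

    def forward(self, x):
        pre = self.enc(x)
        topk = torch.topk(pre, k, dim=-1)          # keep k largest pre-activations
        feats = torch.zeros_like(pre).scatter(-1, topk.indices, topk.values.relu())
        return self.dec(feats), feats

sae, acts = TopKSAE(), torch.randn(4096, d_model)  # acts: stand-in activations
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
for _ in range(200):                               # reconstruction-loss training
    recon, _ = sae(acts)
    loss = (recon - acts).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    _, feats = sae(acts)
top_examples = feats[:, 205].topk(10).indices      # rows to show a labeler for feature 205
```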
Mark Bissell [00:28:36]: Of course. Right. Yeah. So there are, you know, sort of problems with feature splitting and feature absorption. And then there are the off-target effects, right? Ideally, you would want to be very precise; otherwise, if you reduce the hallucination feature, suddenly maybe your model can't write creatively anymore. And maybe you don't want that: you want to stop it from hallucinating facts and figures while keeping everything else.
Shawn Wang [00:28:55]: Good. So Vibhu has a paper to recommend there that we'll put in the show notes. But I guess, since your demo is done: any other things you want to highlight, or any other interesting features you want to show?
Mark Bissell [00:29:07]: I don't think so. Yeah. Like I said, this is a pretty small snippet. The main point here that I think is exciting is that there's not a whole lot of interp being applied to models at quite this scale. Anthropic certainly has some research, and other teams as well. But it's nice to see these techniques being put into practice. I think not that long ago, the idea of real-time steering of a trillion-parameter model would have sounded...
Shawn Wang [00:29:33]: Yeah. The fact that it's real time: you started the thing and then you edited the steering vector.
Vibhu Sapra [00:29:38]: I think it's an interesting one. TBD what the actual production use case would be for the real-time editing; that's the fun part of the demo, right? You can kind of see how this could be served behind an API. You only have so many knobs, and you can just tweak it a bit more. And I don't know how it plays in. People haven't done that much with, like, how does this work with or without prompting? How does this work with fine-tuning? There's a whole hype of continual learning, right? So there's just so much to see. Is this another parameter? Is it a parameter we just kind of leave at a default and don't use? I don't know. Maybe someone here wants to put out a guide on how to use this with prompting, and when to do what.
Mark Bissell [00:30:18]: Oh, well, I have a paper recommendation I think you would love, from Ekdeep on our team, who is an amazing researcher; I just can't say enough amazing things about Ekdeep. He actually has a paper, along with some others from the team and elsewhere, that goes into the essential equivalence of activation steering and in-context learning. He thinks of everything in a cognitive-neuroscience, Bayesian framework, but basically you can precisely show how prompting, in-context learning, and steering exhibit similar behaviors, and even get quantitative about the magnitude of steering you would need to induce a certain amount of behavior, similar to certain prompting, even for things like jailbreaks. It's a really cool paper. Are you saying steering is less powerful than prompting? More like you can almost write a formula that tells you how to convert between the two of them.
Myra Deng [00:31:20]: So, like, formally equivalent, actually, in the limit. Right.
Mark Bissell [00:31:24]: So one case study of this is jailbreaks. I don't know, have you seen the stuff where you can do many-shot jailbreaking? You flood the context with examples of the behavior.
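One toy way to see the correspondence being described: estimate a steering vector as the mean activation shift that in-context demonstrations induce, then add that vector instead of including the demonstrations. The get_activations helper and the prompts are hypothetical placeholders standing in for a real model; this is not the paper's actual method, just an illustration of the idea.

```python
# Sketch: an "in-context vector" as a difference-in-means of activations.
import torch

def get_activations(prompt: str) -> torch.Tensor:
    """Stand-in for reading one layer's residual stream at the last token."""
    torch.manual_seed(hash(prompt) % (2**31))
    return torch.randn(64)

plain = ["Translate: cat", "Translate: dog"]
demos = ["fr: chat\nfr: chien\nTranslate: cat",
         "fr: chat\nfr: chien\nTranslate: dog"]

# What the demonstrations add to the residual stream, on average.
icl_vector = (torch.stack([get_activations(p) for p in demos]).mean(0)
              - torch.stack([get_activations(p) for p in plain]).mean(0))

# Adding icl_vector at that layer approximates having the demos in context;
# the equivalence view is what lets you convert "number of shots" into a
# steering magnitude, as in the many-shot jailbreak prediction below.
steered = get_activations("Translate: bird") + icl_vector
```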
And Anthropic put out that paper.
Shawn Wang [00:31:38]: A lot of people were like, "yeah, we've been doing this, guys."
Mark Bissell [00:31:40]: Like, yeah. What's in this in-context learning and activation steering equivalence paper is that you can predict the number of examples you will need to put in there in order to jailbreak the model. That's cool. By doing steering experiments and using this sort of equivalence mapping. That's cool. That's really cool. It's very neat. Yeah.
Shawn Wang [00:32:02]: I was going to say, you know, I can back-rationalize that this makes sense, because what context is, is basically, you know, it updates the KV cache, and then every next-token inference still attends over everything up to date, plus all the context. And you could, I guess, theoretically replace that with your steering. The only problem is steering is typically on one layer, maybe three layers, like you did. So it's not exactly equivalent.
Mark Bissell [00:32:33]: Right, right. You need to get precise about how you define steering and how you're modeling the setup. But yeah, I've got the paper pulled up here. The title is "Belief Dynamics Reveal the Dual Nature of In-Context Learning and Activation Steering." Eric Bigelow and Daniel Wurgaft, who are doing fellowships at Goodfire, are on there; Ekdeep is the final author.
Myra Deng [00:32:59]: I think, actually, to your question of what the production use case of steering is: maybe think one level beyond steering as it is today. Imagine if you could adapt your model to be, you know, an expert legal reasoner, in almost real time, very quickly and efficiently, using human feedback, or using your semantic understanding of what the model knows and where that behavior lives. While it's not clear what the product is at the end of the day, it's clearly very valuable. Thinking about what the next interface for model customization and adaptation looks like is a really interesting problem for us. We have heard from a lot of people who are actually interested in fine-tuning and RL for open-weight models in production. People are using things like Tinker, or open-source libraries, to do that, but it's still very difficult to get models fine-tuned and RL'd to do exactly what you want unless you're an expert at model training. And so that's something we're
Shawn Wang [00:34:06]: looking into. Yeah. So, Tinker from Thinking Machines famously uses rank-1 LoRA. Is that basically the same as steering? Like, what's the comparison there?
Mark Bissell [00:34:19]: Well, in that case you are still applying updates to the parameters, right?
Shawn Wang [00:34:25]: Yeah. You're not touching the base model. You're touching an adapter. It's kind of, yeah.
Mark Bissell [00:34:30]: Right. But I guess it still is more in parameter space then. Maybe it's like: are you modifying the pipes, or are you modifying the water flowing through the pipes, to get what you're after? That's maybe one way to think of it.
Mark Bissell [00:34:44]: I like that analogy.
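The pipes-versus-water analogy maps cleanly onto code. Under toy assumptions (random weights and directions, purely illustrative), a rank-1 LoRA perturbs the parameters themselves, while steering perturbs the activations flowing through them:

```python
# Sketch: rank-1 LoRA (edit the pipes) vs. steering (edit the water).
import torch

d = 64
W = torch.randn(d, d)                  # a frozen base-model weight matrix
x = torch.randn(d)                     # an activation passing through it

a, b = torch.randn(d), torch.randn(d)  # rank-1 LoRA factors: W' = W + b a^T
lora_out = (W + torch.outer(b, a)) @ x # permanent change in parameter space

v = torch.randn(d)                     # steering direction
steer_out = W @ (x + v)                # transient change in activation space
```

One change persists in the weights; the other vanishes the moment the hook is removed.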
That's my mental map of it, at least, but it gets at this idea of model design and intentional design, which is something that we're very focused on. And, just, I hope we look back at how we're currently training and post-training models and think what a primitive way of doing that this was. There's no intentionality
Shawn Wang [00:35:06]: really in... It's just data, right? The only thing in control is what data we feed in.
Mark Bissell [00:35:11]: So Dan from Goodfire likes to use this analogy: he has a couple of young kids, and he talks about, what if I could only teach my kids how to be good people by giving them cookies, or giving them a slap on the wrist if they do something wrong? Not telling them why it was wrong, or what they should have done differently, or anything like that. Just figure it out. Right. Exactly. So that's RL. Yeah. Right. And, you know, it's sample-inefficient. What do they say? It's like sucking supervision through a straw. So you'd like to get to the point where experts can give feedback to their models that gets internalized. Steering is an inference-time way of getting at that idea, but ideally you're moving to a world where
Vibhu Sapra [00:36:04]: it is much more intentional design, in perpetuity, for these models. Okay. This is one of the questions we asked Emmanuel from Anthropic on the podcast a few months ago. Basically the question was: you're at a research lab that does model training, foundation models, and you're on an interp team. How does it tie back? Do ideas come from the pre-training team? Do they go back? So for those interested, you can watch that. There wasn't too much of a connect there, but it's still something, you know, something they want to
Mark Bissell [00:36:33]: push for down the line. It can be useful for all of the above. Like, there are certainly post-hoc
Vibhu Sapra [00:36:39]: use cases where it doesn't need to touch that. I think the other thing a lot of people forget is that this stuff isn't too computationally expensive, right? I would say, if you're interested in getting into research, MechInterp is one of the most approachable fields. A lot of this, train an SAE, train a probe, the budget for it... there's already a lot done. There's a lot of open-source work. You guys have done some too. Um, you know,
Shawn Wang [00:37:04]: There are, like, notebooks from the Gemini team, from Neel Nanda: "this is how you do it." Just step through the notebook.
Vibhu Sapra [00:37:09]: Even if you're not that technical with any of this, you can still make progress there; you can look at different activations. But if you do want to get into training this stuff, correct me if I'm wrong, it's in the thousands of dollars; it's not that high-scale. And the same with applying it: doing it for post-training and all this stuff is fairly cheap on the scale of, okay, I want to get into model training but I don't have compute for, you know, pre-training stuff. So it's a very nice field to get into. And also there are a lot of open questions, right? Some of them have to do with, okay, I want a product, I want to solve this. There's also just a lot of open-ended stuff that people could work on.
That's interesting, right? I don't know if you guys have any calls for what the open questions are, what open work you'd either collaborate on or just like to see solved. You know, for people listening who want to get into MechInterp, because people always talk about it: what are the things they should check out? Besides, of course, joining you guys as well; I'm sure you're hiring.
Myra Deng [00:38:09]: There's a paper, I think from Lee Sharkey, called "Open Problems in Mechanistic Interpretability," which I recommend everyone who's interested in the field read. It's just a really comprehensive overview of what experts in the field think are the most important problems to be solved. I also think, to your point, it's been really inspiring to see a lot of young people getting interested in interpretability. Actually, not just young people; also scientists who have been experts in physics or biology for many years, transitioning into interp, because the barrier to entry is, in some ways, low, and there's a lot of information out there and ways to get started. So it's really cool to see. There's this anecdote of professors at universities saying that all of a sudden every incoming PhD student wants to study interpretability, which was not the case a few years ago. It just goes to show how exciting the field is, how fast it's moving, how quick it is to get started, and things like that.
Mark Bissell [00:39:10]: And it's also just a very welcoming community. There's an open-source MechInterp Slack channel where people are always posting questions, and folks in the space are always responsive if you ask things on various forums and such. But yeah, the Open Problems paper is a really good one.
Myra Deng [00:39:28]: For other people who want to get started, I think MATS is a great program. What's the acronym? ML Alignment & Theory Scholars? It's the...
Vibhu Sapra [00:39:40]: Normally summer-internship style.
Myra Deng [00:39:42]: Yeah, but they've been doing it year-round now. A lot of our full-time staff have come through that program, and it's great for anyone who is transitioning into interpretability. There are a couple of other fellows programs; we do one, as does Anthropic. Those are great places to get started if anyone is interested.
Mark Bissell [00:40:03]: Also, interpretability has been seen as a research field for a very long time, but I think engineers are sorely wanted for interpretability as well, especially at Goodfire, but elsewhere too, as it scales up.
Shawn Wang [00:40:18]: I should mention that Lee actually works with you guys, right? In the London office. And I'm adding our first-ever MechInterp track at AI Engineer Europe, because I see these industry applications now emerging, and I'm pretty excited to help push that along. Yeah, I was looking forward to that. It'll effectively be the first industry MechInterp conference. Yeah. I'm so glad you added that. You know, it's still a little bit of a bet; it's not that widespread. But I can definitely see this is the time to really get into it, and we want to be early on things.
Mark Bissell [00:40:51]: For sure. And I think the field understands this, right?
So at ICML, I think the title of the MechInterp workshop this year was "Actionable Interpretability," and there was a lot of discussion around bringing it to various domains. Everyone's adding "pragmatic," "actionable," whatever.
Shawn Wang [00:41:10]: It's like, okay, well, we weren't actionable before, I guess. I don't know.
Vibhu Sapra [00:41:13]: And I mean, just being in Europe, you see the interp room at old-school conferences. I think they had a very tiny room until they got lucky and got it doubled. But there's definitely a lot of interest, a lot of niche research. You see a lot of research coming out of universities, from students. We covered a paper last week by, like, two unknown authors, not many citations. But, you know, you can make a lot of meaningful work there. Yeah.
Shawn Wang [00:41:39]: Yeah. I think people haven't really mentioned this yet: interp for code. I think it's an abnormally important field that we haven't mentioned yet. The conspiracy theory, two years ago when the first SAE work came out of Anthropic, was that they would just use SAEs to turn the bad-code vector down and turn up the good code. And isn't that the dream? But, I guess, why is it funny? If it were realistic, it would not be funny; it would be, "no, actually, we should do this." It's funny because we feel there are some limitations to what steering can do. And I think a lot of the public image of steering is the Gen Z stuff: oh, you can make it really love the Golden Gate Bridge, or you can make it speak like Gen Z. To make it a legal reasoner seems like a huge stretch. Yeah. And I don't know if it will get there this way. Yeah.
Myra Deng [00:42:36]: I will say, we are announcing something very soon that I will not speak too much about. But yeah, this is what we've run into again and again: we don't want to be in a world where steering is only useful for stylistic things. That's definitely not what we're aiming for. But the types of interventions you need to get to things like legal reasoning are much more sophisticated and require breakthroughs in learning algorithms. And that's, um...
Shawn Wang [00:43:07]: And is this an emergent property of scale as well?
Myra Deng [00:43:10]: I think so. Yeah. I mean, scale definitely helps. Scale allows you to learn a lot of information and reduce noise across large amounts of data. But we also think there are ways to do things much more effectively, even at scale: actually learning exactly what you want from the data, and not learning the things exhibited in the data that you don't want. So we're not anti-scale, but we are also realizing that scale alone is not going to get us to the type of AI development we want as these models get more powerful and get deployed in all sorts of mission-critical contexts. The current life cycle of training, deploying, and evaluating models is, to us, deeply broken, with opportunities to improve. So, more to come on that very, very soon.
Mark Bissell [00:44:02]: And I think that's a use case, basically, or maybe just a proof point that these concepts do exist.
Like, if you can manipulate them in precisely the right way, you can get the ideal combination of them that you desire. And steering is maybe the most coarse-grained peek at what that looks like, but I think it's evocative of what you could do if you had total surgical control over every concept, every parameter. Yeah, exactly.
Myra Deng [00:44:30]: There were, like, bad-code features. I've got it pulled up.
Vibhu Sapra [00:44:33]: Yeah. Just coincidentally, as you guys were talking.
Shawn Wang [00:44:35]: This is exactly it.
Vibhu Sapra [00:44:38]: There's specifically a code-error feature that activates, and they show it's not typo detection: it's typos in code, not typical typos. And you can see it clearly activates where there's something wrong in the code. And they have malicious code, code error, a whole bunch of fine-grained sub-features broken down. Yeah.
Shawn Wang [00:45:02]: Yeah. So the rough intuition for me, why I talked about post-training, was that, well, you just have a few different rollouts with all these things turned off and on and whatever, and then that's synthetic data you can post-train on. Yeah.
Vibhu Sapra [00:45:13]: And I think we make it sound easier than it is; they do the real hard work.
Myra Deng [00:45:19]: I mean, you guys have the right idea. Exactly. Yeah. We replicated a lot of these features in our Llama models as well. I remember there was, like...
Vibhu Sapra [00:45:26]: And a lot of this stuff is open, right? You guys opened yours; DeepMind has open-sourced a lot of SAEs on Gemma; even Anthropic has opened a lot of this. There are a lot of resources we can probably share for people who want to get involved.
Shawn Wang [00:45:41]: Yeah. And a special shout-out to Neuronpedia as well. Yes. An amazing piece of work for visualizing these things.
Myra Deng [00:45:49]: Yeah, exactly.
Shawn Wang [00:45:50]: I wanted to pivot a little bit onto the healthcare side, because I think that's a big use case for you guys that we haven't really talked about yet. This is a bit of a crossover for me, because we do have a separate science pod that we're starting up, for AI for science, just because it's such a huge investment category, and also I'm less qualified to do it; we have bio PhDs to cover that, which is great. But I want to recap your work, maybe the Evo 2 stuff, and then build forward from there.
Mark Bissell [00:46:17]: Yeah, for sure. And maybe to frame up the conversation: I think another interesting lens on interpretability in general is that a lot of the techniques we've described are ways to solve the AI-human interface problem, where bidirectional communication is the goal. What we've been talking about, with intentional design of models and steering, but also more advanced techniques, is having humans impart our desires and control into and over models. The reverse direction is also very interesting, especially as you get to superhuman models, whether that's narrow superintelligence, like these scientific models that work on genomics data, medical imaging, things like that, or, down the line, superintelligence of other forms as well.
What knowledge can the AIs teach us, as the other direction of that? Some of our life-sciences work to date has been getting at exactly that question. Some of it does look like debugging these various life-sciences models: understanding whether they're actually performing well on tasks, or whether they're picking up on spurious correlations. For instance, with genomics models, you would like to know whether they're focusing on the biologically relevant things you care about, or using some simpler correlate, like the ancestry of the person they're looking at. But then also, in the instances where they are superhuman, maybe they understand elements of the human genome that we don't have names for, discoveries they've made that we don't know about; that's a big goal. And we're already seeing that: we are partnered with organizations like Mayo Clinic, a leading research health system in the United States, the Arc Institute, and a startup called Prima Menta, which focuses on neurodegenerative disease. In our partnership with them, we've applied our interpretability techniques to foundation models they've been training, to find novel biomarkers for Alzheimer's disease. I think this is just the tip of the iceberg, but that's a flavor of some of the things we're working on.
Shawn Wang [00:48:36]: Yeah, I think that's really fantastic. Obviously, we did the Chan Zuckerberg pod last year as well, and there's a plethora of these models coming out, because there's so much potential in the research. And it's very interesting how it's basically the same as language models, just with a different underlying data set. It's the same exact techniques; there's no change, basically.
Mark Bissell [00:48:59]: Yeah. Well, even in other domains, right? Like robotics: I know a lot of the companies just use Gemma as the backbone and then make it into a VLA that takes actions. It's transformers all the way down. So yeah.
Vibhu Sapra [00:49:15]: Like, we have MedGemma now, right? Even this week there was MedGemma 1.5, and they're training it on this stuff: 3D scans, medical domain knowledge, all of that. So there's a push from both sides. But I think the thing about MechInterp is that you're a little bit more cautious in some domains, right? Healthcare mainly being one: guardrails, understanding. We're more risk-averse to something going wrong there. So even just from a basic-understanding standpoint, if we're trusting these systems to make claims, we want to know why, and what's going on.
Myra Deng [00:49:51]: Yeah, I think there's totally a kind of deployment bottleneck to actually using foundation models for real patient usage and things like that. Say you're using a model for rare-disease prediction: you probably want some explanation as to why your model predicted a certain outcome, and an interpretable explanation at that. So that's definitely a use case. But I also think being able to extract scientific information that no human knows, to accelerate drug discovery and disease treatment and things like that, actually is a really, really big unlock for science, for scientific discovery. And you've seen a lot of startups say that they're going to accelerate scientific discovery.
And I feel like we actually are doing that, through our interp techniques. And kind of almost by accident: we got reached out to very, very early on by these healthcare institutions, and none of us had healthcare backgrounds.
Shawn Wang [00:50:49]: How did they even hear of you? A podcast.
Myra Deng [00:50:51]: Oh, okay. Yeah, a podcast.
Vibhu Sapra [00:50:53]: Okay, well, now's that time, you know.
Myra Deng [00:50:55]: Everyone can call us.
Shawn Wang [00:50:56]: Podcasts are the most important thing. Everyone should listen to podcasts.
Myra Deng [00:50:59]: Yeah, they reached out. They were like, you know, we have these really smart models that we've trained, and we want to know what they're doing. And we were really early at that time, like three months old, and it was just a few of us. And we were like, oh my God, we've never used these models; let's figure it out. But it's also great proof that interp techniques scale pretty well across domains. We didn't really have to learn too much about the domain itself.
Shawn Wang [00:51:21]: Interp is a machine learning technique, and machine learning skills apply everywhere, right? Yeah. It's just a general insight. Yeah. Probably applies to finance too, I think, which would be fun for our history. I don't know if you have anything to say there.
Mark Bissell [00:51:34]: Yeah, well, just across the sciences: we've also done work on materials science. Yeah, it really runs the gamut.
Vibhu Sapra [00:51:40]: Yeah. Awesome. And, you know, for those who should reach out: you're obviously experts in this, but is there a call-out for people you're looking to partner with? Design partners, people to use your stuff beyond the general developer who wants to plug and play the steering stuff: on the research side, more so, are there ideal design partners, customers, things like that?
Myra Deng [00:52:03]: Yeah, I can talk about maybe non-life-sciences, and then I'm curious to hear from you on the life-sciences side. But we're looking for design partners across many domains. Language: anyone who's customizing language models or trying to push the frontier of code or reasoning models is really interesting to us. And then we're also interested in the frontier of modeling. There are a lot of models that work in, as we call it, pixel space. So if you're doing world models, video models, even robotics, where there's not a very clean natural-language interface to interact with, we think interp can really help, and we're looking for a few partners in that space.
Shawn Wang [00:52:43]: Just because you mentioned the keyword
What happens when you mix creative agency chaos with world-class engineering? You get teams that don't just write code; they own the product. In this episode, I'm talking with David Mitchell, CTO at VML, one of the biggest creative agencies on the planet. With thousands of engineers and global clients like Wendy's and United Airlines, David's teams are building things that most devs only dream of, and doing it without getting buried in bureaucracy. We break down what it really takes to foster creative engineering inside a massive org, how to keep engineers out of the ticket-taking trap, and how AI is reshaping what engineering leadership actually looks like. If you lead teams and want to stop micromanaging, or if you're just tired of pretending Agile is still helping... this one's for you.
⏱️ Episode Breakdown
[00:15] - What is a "creative engineer" and why don't we have more of them?
[06:00] - Why software engineering should be creative work
[07:40] - The evolution of engineering: From basement coders to business thinkers
[11:20] - Ownership vs. ticket-taking: How VML trains teams to lead
[13:30] - Journey-Driven Development and the myth of "API-first"
[20:45] - Is AI changing how we build software, or just hyped?
[24:00] - Hackathons, prototyping, and the rise of "vibe engineering"
Links & Resources
Connect with David on LinkedIn
Product Driven - Get the Book
Subscribe to the Product Driven Newsletter
What Smart CTOs Are Doing Differently With Offshore Teams in 2025
Subscribe to the Global Talent Sprint
Full Scale – Build your dev team quickly and affordably
If you're trying to get your team out of the basement and into real product ownership, this episode is your playbook. Stop being a ticket factory. Build teams that think, create, and lead. Follow the show, rate it, and send this to someone who's still trying to do "real Scrum." They need it more than you do.
On the podcast we talk with Tanmay and Jack about how earned media can drive paid performance, building features that make for good tweets, and why stripping out your onboarding quiz might beat optimizing it. Top Takeaways:
Marty sits down with Harsha Goli to discuss his decade-long journey building Bitcoin infrastructure, the challenges companies face integrating traditional finance rails, and how Magnolia is launching a full banking API to help any Bitcoin application convert fiat to Bitcoin across all 50 US states. Harsha on X: https://x.com/arshbot Magnolia: https://magnolia.financial/ STACK SATS hat: https://tftcmerch.io/ Our newsletter: https://www.tftc.io/bitcoin-brief/ TFTC Elite (Ad-free & Discord): https://www.tftc.io/#/portal/signup/ Discord: https://discord.gg/VJ2dABShBz Opportunity Cost Extension: https://www.opportunitycost.app/ Shoutout to our sponsors: Bitkey https://bit.ly/4pOv2L4 Promo Code: TFTC99 Unchained https://unchained.com/tftc/ SLNT https://slnt.com/tftc Lygos: http://bit.ly/3ZtQLwp Salt of the Earth: https://drinksote.com/tftc Join the TFTC Movement: Main YT Channel https://www.youtube.com/c/TFTC21/videos Clips YT Channel https://www.youtube.com/channel/UCUQcW3jxfQfEUS8kqR5pJtQ Website https://tftc.io/ Newsletter tftc.io/bitcoin-brief/ Twitter https://twitter.com/tftc21 Instagram https://www.instagram.com/tftc.io/ Nostr https://primal.net/tftc Follow Marty Bent: Twitter https://twitter.com/martybent Nostr https://primal.net/martybent Newsletter https://tftc.io/martys-bent/ Podcast https://www.tftc.io/tag/podcasts/
This week on Circle Back, Jacob, Rob, Kirk, and Jason break down why so many people are confused by betting odds and why the idea that sportsbooks are “shading” lines to trick the public is mostly nonsense. We debunk common myths, explain how odds actually work today, and discuss the smartest way to interpret them if you want to stay ahead of the game. Plus, we recap an insane weekend in sports, from the Australian Open upsets to NHL Stadium Series highlights and the latest NFL coaching news. Jacob Gramegna hosts the conversation with professional sports bettor Rob Pizzola, basketball originator Kirk Evans, and special guest Jason Cooper from The Hammer Daily. Expect sharp takes, real betting insight, and a no-nonsense look at the stories everyone's talking about.