Magnitude of velocity
AI isn't about productivity. It's about presence. In this special episode, the tables turn and I'm interviewed by Sham Colegado about my new book, Artificial Organizations. We explore why 95% of AI projects fail, why executives don't want more tools — they want their life back — and how the real competitive edge isn't automation, but judgment at speed.
If you've been overwhelmed by the explosion of AI tools or unsure where to start, this episode will help you reframe the conversation. This isn't about doing more. It's about deciding better — faster, with clarity and confidence — by combining human instinct with machine intelligence.
Key Takeaways
AI Used Only for Productivity Fails: When AI is treated as a cost-cutting tool instead of a transformation system, it rarely creates lasting value.
Presence Is the Real Advantage: The goal isn't more output. It's showing up calmer, clearer, and better prepared — so decisions improve.
Decision Velocity + Decision Advantage Wins: Make decisions faster and with better information. Speed without clarity is noise. Clarity without speed is stagnation.
The Future Belongs to Human + Machine Judgment: Executives who combine instinct with machine intelligence will outperform those relying on either alone.
Additional Insights
Executives Don't Want More Tools — They Want Their Life Back: Leaders aren't overwhelmed by lack of tools. They're overwhelmed by fragmented workflows, constant context switching, and decision fatigue. AI must reduce cognitive load, not add to it.
Presence Drives Performance: When AI handles capture and synthesis, leaders show up calmer, more prepared, and more focused. Productivity improves — but performance and clarity are the real unlock.
The Identity Threat of AI: Many executives privately fear incompetence. They don't want to look behind or uninformed. That hesitation often shows up as skepticism or avoidance.
Decision Velocity Is the New Differentiator: Artificial organizations move faster because they reduce decision latency. Meetings become focused. Context is pre-loaded. Choices are made with confidence.
Traits + Tasks + Tools (T3 Model): Start with how you naturally work best. Then amplify your highest-leverage tasks with the right tools.
Capture, Transcribe, Synthesize, Act: A simple workflow that turns every conversation into a reusable data asset. This loop compounds judgment and accelerates learning over time.
Episode Highlights
00:00 – Episode Recap
Barry explains why AI used purely for productivity fails — and why the real advantage comes from transforming how leaders make decisions.
02:58 – Guest Introduction: Sham Colegado
Barry welcomes Sham Colegado, a key member of the Artificial Organizations team, who interviews Barry about the book and its core ideas.
03:32 – "Executives Don't Want More AI Tools"
Barry shares the personal burnout moment that sparked a shift from productivity chasing to rethinking how he works.
06:02 – AI's Real Promise: Presence Over Productivity
Why performance and clarity matter more than output — and how AI can make leaders calmer and more focused.
09:30 – The Identity Threat of AI
Executives reveal a hidden fear of incompetence and why one-on-one learning environments matter.
12:26 – Decision Velocity & Decision Advantage
The two engines of artificial organizations and how reducing decision latency compounds competitive advantage.
15:15 – The Traits, Tasks, Tools Flywheel
How aligning natural strengths with high-leverage work determines which AI tools actually create impact.
19:01 – What the Best AI...
Nicolai Tangen is the CEO of Norges Bank Investment Management, the world's largest sovereign wealth fund. He is responsible for managing $2.1 trillion. That's roughly 1.7% of every listed company on earth. In this episode, we explore the intersection of massive wealth, high-speed decision-making, and the psychological traits required to survive the AI revolution.
-----
Approximate Timestamps:
(00:00) Introduction (01:09) What Are You Leaning Against? (03:17) Tech Sector Evolution (04:15) The AI Bubble (05:44) Will AI Replace Humans in Investing? (06:24) Lessons on Listening (09:15) American vs. European Mindset (12:09) Prime Minister For a Day (14:27) Most Important Data (16:00) Speed and Agility (17:05) Ad Break (18:35) Using Urgency as a Tool (20:12) Can You Teach People to Change Their Minds? (22:14) Positive and Negative Comments (22:56) Testing Assumptions Before a Big Investment (25:07) Attitude Towards Risk (28:33) What's Gotten Harder in Investing? (29:07) The Rise of Passive Investing (33:42) Why Did You Take This Job? (35:04) Ad Break (36:14) Sovereign Wealth Funds (38:24) Voting Against Elon Musk's Pay Package (39:08) Building Long-Term Thinking (43:17) Slowing Down Decisions (45:13) Seeking Out Disagreement (48:08) Hiring Checklist (49:15) 140 Conversations To Prepare For A Huge Role (53:33) CEO Evaluation (01:01:25) What is Success For You?
------
Newsletter: The Brain Food newsletter delivers actionable insights and thoughtful ideas every Sunday. It takes 5 minutes to read, and it's completely free. Learn more and sign up at fs.blog/newsletter
------
Follow Shane Parrish:
X: https://x.com/shaneparrish
Insta: https://www.instagram.com/farnamstreet/
LinkedIn: https://www.linkedin.com/in/shane-parrish-050a2183/
Follow Nicolai Tangen:
LinkedIn: https://www.linkedin.com/in/nicolai-tangen/?originalSubdomain=no
Learn More: https://www.nbim.no/en/about-us/leader-group/leadergroup-persons/nicolai-tangen/
------
Thank you to the sponsors for this episode:
+Granola AI, The AI notepad for people in back-to-back meetings: https://www.granola.ai/shane Check out the Granola Notes
+Download The League App today and find your perfect match!
+Shopify: https://shopify.com/knowledgeproject
Learn more about your ad choices. Visit megaphone.fm/adchoices
Romain Grosjean and Alex Rossi's friendship has come a long way over the years, so Romain was nice enough to come on Off Track to talk about his plans for the 2026 season with Rossi.
+++
Off Track is part of the SiriusXM Sports Podcast Network. If you enjoyed this episode and want to hear more, please give a 5-star rating and leave a review. Subscribe today wherever you stream your podcasts.
Want some Off Track swag? Check out our store!
Check out our website, www.askofftrack.com
Subscribe to our YouTube Channel.
Want some advice? Send your questions in for Ask Alex to AskOffTrack@gmail.com
Follow us on Twitter at @askofftrack. Or individually at @Hinchtown, @AlexanderRossi, and @TheTimDurham.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In this episode of The Speed of Culture, Matt Britton sits down with Milo Speranzo, Chief Marketing Officer for Lenovo North America, live from CES 2026 in Las Vegas. Milo breaks down what it took to deliver Lenovo's sold-out Sphere showcase, the product story behind AI PCs, servers, wearables, and Motorola, and why edge computing and AI privacy now shape a new hardware refresh cycle. The conversation also explores the Lenovo FIFA World Cup partnership, FootballAI analytics, and Milo's leadership mantra for 2026: learn, iterate, and be a goldfish.
Follow Suzy on Twitter: @AskSuzyBiz
Follow Milo Speranzo on LinkedIn
Subscribe to The Speed of Culture on your favorite podcast platform.
And if you have a question or suggestions for the show, send us an email at suzy@suzy.com
Hosted on Acast. See acast.com/privacy for more information.
The hosts open with a sweeping look at the week's most consequential retail developments before heading live to the Narvar Podcast Studio at the NRF Big Show for a deep dive into AI, agentic commerce, and the evolving post-purchase customer journey.
The news segment explores Saks Global's decision to close nine full-line stores, underscoring ongoing consolidation in the luxury industry and challenges in multi-line retail. The hosts examine luxury's continued bifurcation, with Kering struggling while Hermès thrives, reinforcing that luxe positioning alone isn't enough — execution matters.
In specialty retail, the "collapse of the unremarkable middle" continues as Toys "R" Us Canada, Francesca's, and Eddie Bauer face significant retrenchment if not extinction, while Tractor Supply and Aritzia aggressively expand. Kroger appoints its first external CEO, Greg Boren, signaling operational rigor ahead, while Costco once again posts remarkable sales growth. Meanwhile, Target begins meaningful leadership restructuring — a foundational step in what is likely a multi-year turnaround. On the radar: AI-powered retail crime prevention at Bunnings and the imminent opening of the Gordie Howe International Bridge, a major infrastructure development for North American trade.
The featured interview brings Henry Spear, SVP Digital North America, JD Sports, and David Morin, VP Customer Strategy for Narvar, to the mic for a timely discussion on agentic commerce and how leveraging product returns can create competitive differentiation.
About Us
Steve Dennis is a strategic advisor and keynote speaker focused on growth and innovation, who has also been named one of the world's top retail influencers. He is the bestselling author of two books: Leaders Leap: Transforming Your Company at the Speed of Disruption and Remarkable Retail: How To Win & Keep Customers in the Age of Disruption. Steve regularly shares his insights in his role as a Forbes senior retail contributor and on social media.
Michael LeBlanc is a senior retail advisor, keynote speaker and media entrepreneur. Michael has delivered keynotes, hosted fireside discussions, and interviewed senior retail executives on stage in 1:1 interviews worldwide. Michael produces and hosts a network of leading retail trade podcasts, including The Remarkable Retail Podcast, The Voice of Retail, The Food Professor, The FEED powered by Loblaw, and the Global eCommerce Leaders podcast. He has been recognized by the NRF as a global Top Retail Voice for 2025 and continues to be a ReThink Retail Top Retail Expert for the fifth year in a row.
Frank Weisser is a two-time Blue Angels Pilot who was deployed in combat three separate times, including to Afghanistan and Iraq. He has accumulated more than 5,000 flight hours and nearly 500 carrier arrested landings. His decorations include multiple Meritorious Service Medals, Strike Flight Air Medals and various personal and unit awards. Because of his experience flying at extreme low altitudes and inverted, he was the pilot for the most complex and memorable air combat scenes in Top Gun: Maverick. His book is titled Lead Solo: Learning Life's Vectors from an F/A-18 Blue Angel Aviator.
Summary
Frank Weisser's career sits at a rare intersection: Navy fighter pilot, two-time Blue Angels pilot, combat deployments, and the low-altitude stunt flying that helped make Top Gun: Maverick feel real. Early in the conversation, Frank reframes what looks like "crazy risk" from the outside into a disciplined craft: aviation is inherently dangerous, but the real skill is identifying known risks and systematically mitigating them through the right people, the right preparation, and the right standards.
The episode then rewinds to a formative disappointment: Frank entered the Naval Academy intent on becoming a SEAL, didn't get selected, and had to confront a painful identity-level failure. What changed his trajectory wasn't a new goal, but a reframing of motivation from "how I serve" to "that I serve," even if the role wasn't what he originally wanted. That lesson becomes a through-line for everything that follows: mission first, ego second, and meaning found in the sacrifice itself.
From there, Frank breaks down what makes the Blue Angels such a high-functioning team. "Glad to be here" isn't a slogan; it's a culture-building mechanism rooted in gratitude, humility, and the idea that the work is bigger than the individual. The team reinforces that culture through small, repeatable behaviors and through the "great equalizer" effect of an environment that demands confidence without cockiness.
Finally, the conversation translates elite aviation into practical leadership. Frank shares specific approaches to focus (compartmentalizing distractions), decision-making under pressure (the discipline to "underreact in the extreme"), trust-building (earned trust through vulnerability and consistency), and learning velocity (a debrief culture that prioritizes what went wrong so tomorrow gets better). Woven through it all is his definition of excellence: pushing past comfort, taking measured risks, being willing to fail, and then rebuilding smarter.
Takeaways
· Risk isn't eliminated in high-stakes work; it's acknowledged upfront and managed through preparation, expertise, and process.
· "Mission first" is a practical operating system, not a motivational poster. It keeps ego from quietly taking over.
· Gratitude can be engineered into culture through small rituals, and those rituals compound into trust and performance.
· Confidence is required, but cockiness is actively corrected by a team that refuses to let anyone go rogue.
· Compartmentalization is a skill: name the distraction, surface it with the team when needed, then "do not disturb" your mind for the task.
· When you're solo, focus comes back to priorities: stop saying "I didn't have time" and tell the truth about what wasn't a priority.
· Fear shrinks when you're properly prepared: know the systems, memorize the critical failures, rehearse in simulation, then execute.
· The best operators train themselves to underreact. Even one second of composure can be the difference between solving the right problem and making it worse.
· Trust is built fastest through earned vulnerability and consistency, not "blind trust."
· Excellence, in Frank's words, is helping yourself and others attempt what feels out of reach, being willing to fail, and restarting with better intelligence.
Notes:
Book: Lead Solo: Learning Life's Vectors from an F/A-18 Blue Angel Aviator
Frank Weisser leadership consulting: https://frankweisser.net/
Get our AI Video Guide: https://clickhubspot.com/dth
Episode 97: How close are we to a world where AI-generated videos are indistinguishable from reality? Matt Wolfe (https://x.com/mreflow) and Joe Fier (linkedin.com/in/joefier) dive deep into Seedance 2.0—ByteDance's new AI video model that could outpace giants like Sora and Veo. Joe, a marketing and business expert known for his hands-on approach and insights into AI's rapid evolution, helps break down the five most fascinating developments in the AI space this week. They tackle game-changing AI advances: Seedance 2.0's mind-blowing video generation for ads and motion graphics, the rollout of Google's Veo 3.1 in Google Ads, the GPT-5.3 Codex Spark coding model built on specialized inference chips, Gemini's DeepThink model for scientific research, and the early rollout of ChatGPT ads.
Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd
—
Show Notes: (00:00) Seedance 2.0 arrives – AI video generation blurs reality, ad creation moves fast. (03:03) Google's Veo 3.1 powers video ads, advertisers can now generate clips directly from image uploads. (05:33) Comparison of Runway, Kling, Veo, and Sora—head-to-head prompt showdown. (07:00) Motion graphics and explainers—AI's take on the creative industry. (08:35) US vs. China—Copyright, IP, and training data debates. (12:10) Deepfake and video authenticity—why we now default to skepticism. (13:30) Google's edge in visual AI via YouTube's massive corpus. (14:39) The next frontier: Longer, more consistent video generation. (15:14) Where do humans fit in? Taste, storytelling, and creative direction. (18:30) GPT-5.3 Codex Spark—coding models on Cerebras inference chips, demo generating a website in 18 seconds. (24:34) AI tool comparisons—Codex vs. Cursor vs. Claude Code. (25:12) Speed as the key bottleneck breaker in creative and technical workflows. (28:02) Google's Gemini DeepThink—state-of-the-art research, advanced coding and physics capabilities. (32:52) Gemini demo attempt—3D-printable STL file and solving the three-body problem. (33:20) ChatGPT rolls out ads—impact on monetization and user trust. (40:02) Google's ad history—how "sponsored" is becoming harder to distinguish. (44:02) Democratizing AI access via ad-supported models. (45:03) Matt Schumer's viral article—why AI is moving even faster than most people realize. (51:11) Tools that build tools—AGI's path and the new role for humans. (53:12) Real-world skills and taste—where humanity still wins (for now). (54:01) Final thoughts—wake up, pay attention, and stay on the leading edge.
—
Mentions:
Seedance 2.0: https://www.seedance.com/
ByteDance: https://www.bytedance.com/
CapCut: https://www.capcut.com/
Veo: https://deepmind.google/models/veo/
Runway: https://runwayml.com/
ChatGPT Codex: https://chatgpt.com/codex
Matt Schumer's Viral Article: https://www.mattshumer.com/blog/ai-changes-everything
Super Bowl Claude Commercial: https://www.anthropic.com/news/super-bowl-ad
Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw
—
Check Out Matt's Stuff:
• Future Tools - https://futuretools.beehiiv.com/
• Blog - https://www.mattwolfe.com/
• YouTube - https://www.youtube.com/@mreflow
—
Check Out Nathan's Stuff:
Newsletter: https://news.lore.com/
Blog - https://lore.com/
The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
Like the aroma of our favorite delicacies, we are drawn to feel the “intention” of intention before the mind has the opportunity to distill it. Learn more about your ad choices. Visit megaphone.fm/adchoices
This episode, recorded live from the Oregon HFMA Winter Conference, reflects on how change shows up across healthcare revenue cycle operations and how augmented intelligence has become a recurring thread connecting those experiences. We'll explore why some parts of healthcare evolve quickly while others resist change, and what that means for the people navigating it every day.
Brought to you by www.infinx.com
Episode Info
Wayne Slavin is the CEO and Co-Founder of Sure, a VC-backed insurtech startup. Prior to Sure, he was the VP of Product Management at Tapingo, TechCrunch's Most Innovative Company of 2013. His other past projects and companies include NetStumbler, a consumer app with more than 1.5 billion downloads, the Barnes & Noble Nook eBook reader, Buddy Media (now part of Salesforce), and BackupRight, the enterprise SaaS company he sold in 2012. He has a master's degree from Columbia University. You can also see Wayne's earlier appearance on the show in April of 2024, in the final episode of Season 5.
Episode Overview:
SURE's Role: SURE provides the technology infrastructure and services that enable large brands, including Fortune 500 companies and major auto manufacturers, to launch and manage their own digital insurance businesses. This allows these brands to control the customer experience and build long-term, durable insurance operations.
Embedded Insurance: The trend of "embedded insurance" is driven by the fact that insurance is often a necessary component of a core product (like cars or homes) or can be a friction point in a sale. Companies are recognizing the value of offering insurance directly to their customers to enhance the overall experience and capture economic benefits.
The "One-Stop-Shop" Vision: Many large consumer brands aim to be a comprehensive provider for their customers, whether it's for car ownership, homeownership, or financial well-being. Insurance is a natural extension of this strategy, allowing them to create a complete ecosystem around their core offerings.
Structural Advantage: Brands that already have a customer base have a significant advantage. Acquiring these customers for insurance purposes costs them next to nothing, giving them better economics than external insurance providers.
Evolution of SURE: Over the past year, SURE has focused on helping its partners achieve "permanence" in their insurance offerings. This means enabling them to build stable, long-term insurance programs that are not subject to the fluctuating appetites or market conditions of traditional insurers.
Challenges for Traditional Insurers: The existing insurance industry has had ample opportunity to improve its technology and customer experience but has largely failed to do so. This has created an opening for new models.
The "Build vs. Buy" Dilemma: While some companies attempt to build their own insurance carriers, this is capital-intensive and distracts from their core business. Partnering with a third-party carrier often results in a loss of control over customer experience and technology, leading to suboptimal outcomes.
SURE's Sweet Spot: SURE offers a middle ground, enabling brands to have their own differentiated insurance programs with control and economic upside without the need to become full-fledged insurers or rely on inadequate partnerships with traditional carriers.
Speed to Market: SURE can bring partners to market with approved insurance products in as little as 90 days, or even faster for simpler offerings, demonstrating a significant advantage over the lengthy internal development times typical for such initiatives.
Industry Inertia: The insurance industry often suffers from a lack of incentive for long-term growth and innovation. Decisions are often based on avoiding blame (omission vs. commission) rather than proactively pursuing new opportunities. This makes it difficult for established players to adapt to new models.
The Future of Insurance Distribution: The future will likely involve insurance being more deeply integrated into the customer journey, moving away from discrete purchases and towards seamless, embedded solutions. The current models of comparison engines and traditional carrier partnerships are becoming less relevant.
Investor Appetite: There is a significant appetite from investors like private equity and sovereign wealth funds for insurance-like returns, especially for well-defined, scalable programs that leverage existing customer bases.
This episode is brought to you by The Future of Insurance book series (future-of-insurance.com) from Bryan Falchuk.
Follow the podcast at future-of-insurance.com/podcast for more details and other episodes.
Music courtesy of Hyperbeat Music, available to stream or download on Spotify, Apple Music, and Amazon Music and more.
What if the reason your easy runs feel hard is because you're doing too much, too soon?
Marathon training can feel overwhelming when every run type seems important and everything feels hard at once. In this episode, I break down the simple structure behind effective marathon training and explain why most runners struggle not because they lack effort, but because they train in the wrong order. I walk through the three run types that quietly build most of your fitness, why they matter more than speed work early on, and how following the right sequence helps you stay healthy, consistent, and confident all the way to race day.
Key Takeaways
Marathon training works best when it follows a clear order, starting with easy runs, long runs, and threshold work before adding speed. Skipping this order is one of the fastest ways to get injured or burned out.
Easy runs and long runs are not filler workouts. They build the aerobic base that lets your body recover, adapt, and handle harder training later.
Speed work only helps once your foundation is solid. Without a base, harder workouts create damage faster than your body can rebuild.
Timestamps
[00:34] What You'll Learn
[01:47] The Problem
[04:10] Use This To Crush Your Next Marathon
[05:17] The Solution: How To Sequence The 16 Weeks
[07:35] Run Type #1: The Easy Run
[09:41] Run Type #2: The Long Run
[14:05] Run Type #3: The Threshold Run
[18:27] Find Out What Level Marathoner You Are
Links & Learnings
The Institute of Internal Auditors Presents: All Things Internal Audit In this episode, Adam Ross is joined by Filipe Ribeiro and Julien Perreault to discuss how supply chain risk has evolved into an interconnected, enterprisewide challenge. They discuss where organizations underestimate exposure, how risks quietly accumulate across the value chain, and why internal audit is uniquely positioned to identify blind spots before disruptions escalate. The conversation spans real-world examples from agriculture and highly regulated industries, third-party risk, continuous monitoring, and the growing impact of automation and AI on supply chains. HOST: Adam Ross, CIA, CISA Partner, Grant Thornton Advisors LLC GUEST: Filipe Ribeiro, CIA, CRMA, CFE Group Internal Audit Manager, Aldar Julien Perreault, CPIM, MBA Experienced Manager, Sourcing and Supply Chain Advisory, Grant Thornton Advisors LLC KEY POINTS: Introduction to Modern Supply Chain Risk [00:00:02–00:01:22] From Operational Inconvenience to Strategic Risk [00:01:22–00:02:24] Why Supply Chain Risk Is Now Systemic and Enterprisewide [00:02:32–00:03:11] Where Organizations Commonly Underestimate Exposure [00:03:22–00:04:33] When "Green Dashboards" Mask Emerging Risk [00:03:33–00:05:08] How Informal Workarounds Quietly Accumulate Enterprise Risk [00:05:08–00:05:47] Agricultural Case Study: How Small Upstream Delays Become Major Downstream Failures [00:05:53–00:07:52] Using Continuous Monitoring to Detect Hidden Timing and Dependency Risks [00:07:57–00:12:26] Supply Chain Risk in Remote, Capital-Intensive, and Highly Regulated Environments [00:12:54–00:15:25] Balancing Regulatory Compliance and Operational Efficiency [00:15:42–00:18:55] Procure-to-Pay Risk and the Rise of Operational "Noise" [00:19:14–00:21:01] When Exceptions Become the Normal Operating Model [00:21:01–00:23:15] Third-Party Risk as a Business Resilience Issue [00:25:15–00:27:12] Governance, Speed of Business, and Supplier Ecosystems [00:27:12–00:30:25] Managing Supplier Concentration Risk Without Sacrificing Resilience [00:31:20–00:35:04] Geographic and Cultural Complexity as an Underestimated Risk Driver [00:35:36–00:37:27] How Internal Audit Can Add Value Without Compromising Independence [00:38:35–00:41:29] Emerging Risks: Automation, AI, Data Quality, and Governance Lag [00:41:47–00:46:18] Final Thoughts on the Future of Supply Chain Risk [00:46:29–00:47:05] Visit The IIA's website or YouTube channel for related topics and more. IIA RELATED CONTENT: Interested in this topic? Visit the links below for more resources: Global Internal Audit Standards Third-Party Topical Requirement Continuous Auditing and Monitoring, 3rd Edition Boardroom: Breaks in the Chain GAM 2026 Follow All Things Internal Audit: Apple Podcasts Spotify Libsyn Deezer
The use of stimulants during WWII is no secret, but in the last decade, there has been a lot of discussion and analysis of it. Just how significant was drug use in Nazi Germany, and how did the Allies compare? Research: Ackermann, Paul. “Les soldats nazis dopés à la méthamphétamine pour rester concentrés.” HuffPost France. June 4, 2013. https://www.huffingtonpost.fr/actualites/article/les-soldats-nazis-dopes-a-la-methamphetamine-pour-rester-concentres_19714.html Andreas, Peter. “How Methamphetamine Became a Key Part of Nazi Military Strategy.” Time. Jan. 7, 2020. https://time.com/5752114/nazi-military-drugs/ Blakemore, Erin. “A Speedy History of America’s Addiction to Amphetamine.” Smithsonian. Oct. 27, 2017. https://www.smithsonianmag.com/history/speedy-history-americas-addiction-amphetamine-180966989/ Boeck, Gisela, and Vera Koester. “Who Was the First to Synthesize Methamphetamine?” Chemistry Views. https://www.chemistryviews.org/9-who-first-synthesized-methamphetamine/ “Ephedra.” National Center for Complementary and Integrative Health.” https://www.nccih.nih.gov/health/ephedra Eghigian, Greg, PhD. “A Methamphetamine Dictatorship? Hitler, Nazi Germany, and Drug Abuse.” Psychiatric Times. June 23, 2016. https://www.psychiatrictimes.com/view/methamphetamine-dictatorship-hitler-nazi-germany-and-drug-abuse Garber, Megan, “‘Pilot’s Salt’: The Third Reich Kept Its Soldiers Alert With Meth.” The Atlantic. May 31, 2013. https://www.theatlantic.com/technology/archive/2013/05/pilots-salt-the-third-reich-kept-its-soldiers-alert-with-meth/276429/ Gifford, Bill. “The Scientific AmericanGuide to Cheating in the Olympics.” Scientific American. August 5, 2016. https://www.scientificamerican.com/article/the-scientific-american-guide-to-cheating-in-the-olympics/ Gorvett, Zaria. “The Drug Pilots Take to Stay Awake.” BBC. March 14, 2024. https://www.bbc.com/future/article/20240314-the-drug-pilots-take-to-stay-awake Grinspoon, Lester. “The speed culture : amphetamine use and abuse in America.” Harvard University Press. 1975. Accessed online: https://archive.org/details/speedcultureamph0000grin_n3i0/mode/1up Gupta, Raghav et al. “Understanding the Influence of Parkinson Disease on Adolf Hitler's Decision-Making during World War II.” World Neurosurgery. Volume 84, Issue 5. 2015. Pages 1447-1452. https://doi.org/10.1016/j.wneu.2015.06.014. Hurst, Fabienne. “The German Granddaddy of Crystal Meth.” Spiegel. Dec. 23, 2013. https://www.spiegel.de/international/germany/crystal-meth-origins-link-back-to-nazi-germany-and-world-war-ii-a-901755.html Isenberg, Madison. “Volksdrogen: The Third Reich Powered by Methamphetamine.” The Macksey Journal. University of Texas at Tyler. Volume 4, Article 21. 2023. https://scholarworks.uttyler.edu/cgi/viewcontent.cgi?article=1001&context=senior_projects Laskow, Sarah. “Brewing Bad: The All-Natural Origins of Meth.” The Atlantic. Oct. 3, 2014. https://www.theatlantic.com/technology/archive/2014/10/brewing-bad-the-all-natural-origins-of-meth/381045/ Lee, Ella. “Fact check: Cocaine in Coke? Soda once contained drug but likely much less than post claims.” USA Today. July 25, 2021. https://www.usatoday.com/story/news/factcheck/2021/07/25/fact-check-coke-once-contained-cocaine-but-likely-less-than-claimed/8008325002/ Leite, Fagner Carvalho et al. “Curine, an alkaloid isolated from Chondrodendron platyphyllum inhibits prostaglandin E2 in experimental models of inflammation and pain.” Planta medica 80,13 (2014): 1072-8. doi:10.1055/s-0034-1382997 Meyer, Ulrich. 
“Fritz Hauschild (1908-1974) and drug research in the 'German Democratic Republic' (GDR).” Die Pharmazie 60 6 (2005): 468-72. Natale, Fabian. “Pervitin: how drugs transformed warfare in 1939-45.” Security Distillery. May 6, 2020. https://thesecuritydistillery.org/all-articles/pervitin-how-drugs-transformed-warfare-in-1939-45 Ohler, Norman. “Blitzed: Drugs in the Third Reich.” Houghton Mifflin Harcourt. 2017. Rasmussen, Nicolas. “Medical Science and the Military: The Allies’ Use of Amphetamine during World War II.” The Journal of Interdisciplinary History, vol. 42, no. 2, 2011, pp. 205–33. JSTOR, http://www.jstor.org/stable/41291190 “Reich Minister of Health Dr. Leonardo Conti Speaks with Hitler’s Personal Physician, Dr. Karl Brandt (August 1, 1942).” German History in Documents and Images. https://germanhistorydocs.org/en/nazi-germany-1933-1945/reich-minister-of-health-dr-leonardo-conti-speaks-with-hitler-s-personal-physician-dr-karl-brandt-august-1-1942 Schwarcz, Joe. “The Right Chemistry: Once a weapon, methamphetamine is now a target.” Oct. 1, 2021. https://montrealgazette.com/opinion/columnists/the-right-chemistry-once-a-weapon-methamphetamine-is-now-a-target Snelders, Stephen and Toine Pieters. “Speed in the Third Reich: Metamphetamine (Pervitin) Use and a Drug History From Below.” Social History of Medicine. Volume 24, Issue 3. December 2011. Pages 686–699. https://doi.org/10.1093/shm/hkq101 “Stimulant Pervitin.” Deutschland Museum. https://www.deutschlandmuseum.de/en/collection/stimulant-pervitin/ Tinsley, Grant. “Ephedra (Ma Huang): Weight Loss, Dangers, and Legal Status.” Healthline. March 14, 2019. https://www.healthline.com/nutrition/ephedra-sinica See omnystudio.com/listener for privacy information.
On an all-new Speed Dates episode, host Joel Kim Booster sits down with the hilarious Jon Daly (Kroll Show, Hail, Caesar!, Big Mouth) to talk about his role on the hit series Fallout, his journey from drama school to comedy and back to dramatic roles, falling in love during the pandemic, and why Billy Joel's "New York State Of Mind" depicts a perfect relationship (between Billy and the city of New York, obvs). Plus: Peaches is a very good dog, and this should be commemorated.
Subscribe to our YouTube Channel for full episodes. Merch available at SiriusXMStore.com/BadDates.
Joel Kim Booster: Psychosexual, Fire Island, Loot Season 3
Jon Daly: Fallout Seasons 1 and 2 are streaming now! Check out The Fallout Fake Talkshow!
Subscribe to SiriusXM Podcasts+ to listen to new episodes of Bad Dates ad-free. Start a free trial now on Apple Podcasts or by visiting siriusxm.com/podcastsplus.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Are we getting too lazy to think without AI?
You use it for emails, reports, research. It saves time. But every shortcut you take, every task you hand over, you feel a quiet trade-off happening. Efficiency for autonomy. Speed for depth. Convenience for critical thinking.
In this episode:
Why AI acts as a cosmic mirror that reflects our worst habits back at us
How laziness becomes the trap when machines can outthink, outwork, and outlast us
What happens when humans drift into digital dependency instead of staying grounded
Why short-term pain might be necessary for long-term transformation
How to decide which tasks to outsource and which require you to stay sharp
What the hero's journey teaches us about navigating AI's crucible
Guest: Jeff Burningham, author of The Last Book Written by a Human and former gubernatorial candidate. He believes AI is forcing humanity to confront an uncomfortable question: Are we ready to evolve, or will we choose the easy path and lose ourselves in the process?
In this week's One Vision Podcast, we welcome back Sandeep Mangaraj, now co-founder of Aileron Group, to discuss lessons from building a company in a crowded AI market and why AI success is more about people and process than technology. The conversation covers executives rushing into GenAI due to FOMO, the importance of starting with desired outcomes rather than applying GenAI everywhere, and why mid-market institutions may benefit from faster time-to-decision, less technical debt, and faster learning cycles. The key question is whether firms can afford the downside of waiting. And the answer is increasingly: No.
00:00 Welcome Back: Meet Sandeep Mangaraj
00:55 Building Aileron: Lessons From a Crowded AI Market
03:49 The AI ROI Myth: What's Actually Being Measured?
05:28 FOMO vs Outcomes: Picking the Right AI Use Cases
10:09 Falling Costs & Speed of Learning
12:40 Who Owns AI Outcomes and Who's Accountable When It Breaks?
14:55 Third-Party Risk: Regulators, Vendors, and Dependencies
19:51 "Can You Afford to Wait?"
23:02 Closing: Act With Purpose in 2026
Hot take: The barrier to AI success isn't technology, it's people and process. #AI #Fintech #GenerativeAI #DigitalTransformation
Hosted on Acast. See acast.com/privacy for more information.
Every leadership team craves alignment; no one wants the meetings. When executives hear "RevOps for Clients," they may picture more red tape and overhead. Courtney Baker, David DeWolf, and Mohan Rao argue that the right rigor doesn't slow business down—it slows bad decisions down. They unpack the "Minimum Viable Cadence," swapping hours of reactive fire drills for a single 30-minute triage, and discuss why exposing "dirty data" is the only path to shared accountability. Courtney also sits down with Alyssa Nolte of Ology to discuss AI in Customer Experience. Alyssa shares why the data you need is already there—just trapped in silos—and offers a "Kobe Bryant" approach to mastering the unsexy fundamentals of change management. All that, plus Pete Buer analyzes IBM's move to package its internal AI efficiency tool, IBM Consulting Advantage, as a client-facing product. Is it the ultimate example of productizing services? Get the resources to build your own RevOps for Clients discipline at our 2/25 webinar: www.knownwell.com/revops Watch the full episode on YouTube: https://www.youtube.com/watch?v=gwdcu54dfR4
In this episode of Take the Stage InSights, Brad Bialy sits down to interview ChatGPT (yes...the AI-powered LLM) to unpack the real state of staffing, the rise of AI, and what firms must do now to stay relevant in a rapidly shifting talent market.
About the Guest
For the sake of this conversation, ChatGPT was primed to be an AI strategist and staffing industry analyst specializing in the intersection of talent markets, technology, and future disruption. With a data-driven lens and objective insights, ChatGPT explores how automation, workforce trends, and evolving recruiter roles are reshaping the future of staffing.
Key Takeaways
Transactional recruiting is dying; consultative partnership is winning.
AI will eliminate tasks, not the need for trust.
Specialization builds authority; dilution breeds confusion.
Data is no longer optional—it's your competitive edge.
Speed matters, but humanity closes the deal.
Timestamps
[00:01] – Resetting the staffing narrative
[01:52] – The uncomfortable truth about talent shortages
[03:34] – Why recruiters must become career architects
[06:32] – Specialize or slowly disappear
[07:49] – Selling roles vs. solving business problems
[08:22] – Designing candidate experience that actually wins
[12:02] – When AI becomes a gatekeeper (and how to stop it)
[14:26] – The legacy mindset killing growth
[17:43] – The questions that elevate you to strategic partner
[21:53] – What AI will automate first — and fast
[25:48] – Fewer recruiters. Bigger results. Here's why.
[29:35] – The AI blind spot redefining success
About the Host
Brad Bialy is a trusted voice and highly sought-after speaker in the staffing and recruiting industry, known for helping firms grow through integrated marketing, sales, and recruiting strategies. With over 13 years at Haley Marketing and a proven track record guiding hundreds of firms, Brad brings deep expertise and a fresh, actionable perspective to every engagement. He's the host of Take the Stage and InSights, two of the staffing industry's leading podcasts with more than 200,000 downloads.
Sponsors
InSights is presented by Haley Marketing. For a limited time, we're offering 50% off a brand new staffing website. Just message Brad Bialy on LinkedIn and mention the Crazy Website Promo.
Book a 30-minute business and marketing consultation with host Brad Bialy: https://bit.ly/Bialy30
This episode is brought to you by FoxHire. If you're looking for an Employer of Record partner that helps recruiters confidently grow contract placements and build recurring revenue without taking on extra risk, FoxHire is perfect for you. Learn more at FoxHire.com/Haley
In this episode, Steve Fretzin and Ted DeBettencourt discuss:
Making prospects feel heard before they hire
Fixing intake as the real growth lever
Optimizing website conversion channels
Adopting a builder's mindset in business development
Key Takeaways:
Emotional needs matter as much as legal expertise when someone is choosing a lawyer. If prospects do not feel listened to or cared for, they continue shopping. Human connection is often the deciding factor in a crowded legal marketplace.
Many firms invest heavily in marketing but lose revenue at the intake stage. Speed of response determines success, as delays of even 24 hours can cost the case. Clear criteria for qualified leads protect attorney time and improve conversion rates.
Law firm websites must make contact effortless through visible phone numbers, live chat, text, and forms. Human-powered chat and SMS are increasingly driving higher engagement and signed cases. Firms that reduce friction in communication dramatically improve conversion outcomes.
Opportunities rarely arrive on their own, even with strong credentials. Growth begins when professionals stop waiting and start creating value. Taking initiative and solving real problems can open entirely new career paths.
"Running a law firm 101: don't answer your own phone, because you're never getting any work done." — Ted DeBettencourt
Check out my new show, Be That Lawyer Coaches Corner, and get the strategies I use with my clients to win more business and love your career again.
Ready to go from good to GOAT in your legal marketing game? Don't miss PIMCON—where the brightest minds in professional services gather to share what really works. Lock in your spot now: https://www.pimcon.org/
Thank you to our Sponsors!
Rankings.io: https://rankings.io/
Lawyer.com: https://www.lawyer.com
Ready to grow your law practice without selling or chasing? Book your free 30-minute strategy session now—let's make this your breakout year: https://fretzin.com/
About Ted DeBettencourt: Ted DeBettencourt is the founder and CEO of Juvo Leads, a human-powered intake and chat service helping law firms convert more website visitors into signed clients. With a JD/MBA background, Ted shifted from pursuing traditional legal roles to building solutions that improve law firm marketing and intake performance. He focuses on speed, connection, and ensuring prospects feel heard — proving that human engagement remains a powerful differentiator in a digital world.
Connect with Ted DeBettencourt:
Website: https://juvoleads.com/
Connect with Steve Fretzin:
LinkedIn: Steve Fretzin
Twitter: @stevefretzin
Instagram: @fretzinsteve
Facebook: Fretzin, Inc.
Website: Fretzin.com
Email: Steve@Fretzin.com
Book: Legal Business Development Isn't Rocket Science and more!
YouTube: Steve Fretzin
Call Steve directly at 847-602-6911
Hey there, Welcome to Living Word! We're so glad you're here with us. If you find this message inspiring, don't forget to hit that like button and subscribe for more amazing content. We've got a lineup of guest speakers, pastors, and engaging discussions with our awesome community members coming your way. Let's dive in together!
Our Links–
• Join The Prayer Movement!: https://theprayermovement.com
• Instagram: https://www.instagram.com/livingwordmn
• Facebook: https://www.facebook.com/livingwordmn
• Stay up to date with all things LWCC at https://www.LWCC.org
• Join our Online Church community here: https://www.lwcc.org/onlinechurch
• Give online: https://www.lwcc.org/give/
• If you recently committed your life to God, we'd like to give you a free eBook to help you in your spiritual journey. Click here to download: https://www.lwcc.org/nextsteps/
#LivingWord #ChurchSermon #Worship
A last-lap crash led to a marquee owner's first victory at NASCAR's biggest race. Correspondent Gethin Coolbaugh reports.
A prison expansion is on such a fast track, it prompted officials to ask if compromises are being made. Papers show the Hawke's Bay Regional Prison project is using a one-off "untested" design to speed it up. Phil Pennington reports.
In this edition of Canucks Talk with Thomas Drance and Landon Ferraro, Olympic hockey rolls on as Team Canada's women open against Switzerland, with questions about whether they have the pace to match Team USA on the smaller international rink. The tournament has leaned heavily toward dump-and-chase hockey, exposing teams like Sweden whose blue line lacks true pace and transition ability. We also react to Jeff Skinner being placed on waivers for mutual contract termination and what it signals about aging NHL scorers like Skinner and Kane searching for their next opportunity. The episode breaks down how style, speed, and roster construction are shaping this year's Olympic tournament.
This podcast is produced by Dominic Sramaty and Elan Chark.
The views and opinions expressed in this podcast are those of the hosts and guests and do not necessarily reflect the position of Rogers Media Inc. or any affiliate.
She feels guilty but.. is the grass always greener on the other side?See omnystudio.com/listener for privacy information.
App Masters - App Marketing & App Store Optimization with Steve P. Young
AI has fundamentally changed app development. With vibe coding, agentic workflows, and modern AI development tools, what used to take 6 months to build can now be shipped in weeks. Costs are lower than ever. Speed is no longer the bottleneck.
But here's the problem: faster development hasn't led to better products or stronger app businesses.
In this livestream, we're joined by Chaim Sajnovsky, Founder of B7 Dev, a development studio that has helped startups go from idea to launch for over a decade.
Chaim brings real-world experience helping founders navigate AI-powered app development, outsourced development teams, and early-stage decision-making. Together, we'll break down why planning, positioning, and distribution now matter more than code, and why so many founders still fail even when building faster than ever.
You will discover:
✅ How AI reduced app development time and cost by 80%
✅ Why "what to build" is now more important than "how to build"
✅ How to define your main hypothesis before writing code
✅ Why most of these features don't drive product-market fit
✅ AI tools founders are using right now (including Claude Code)
Learn More:
https://www.linkedin.com/in/chaim-sajnovsky-5003494/
http://www.b7dev.com/
You can also watch this video here: https://youtube.com/live/unb8p-5MkuA
*********************************************
SPONSORS
Still designing, resizing, and uploading screenshots manually? AppScreens lets you pick from hundreds of high-converting templates, generate for every device size and language in minutes, and upload automatically and directly to App Store Connect and Google Play Console. Trusted by more than 100K developers and ASO experts worldwide.
Try it free: https://appscreens.com/?via=am
*********************************************
Got tons of freemium users who won't upgrade? Encore turns free users into paying customers and reduces churn by adding smart, curated affiliate offers at key user moments. Everyone wins with Encore.
Learn more at https://encorekit.com/
*********************************************
If you're advertising your growing mobile app, you need a measurement partner you can actually rely on — and that's where AppsFlyer comes in.
It gives you a clear view of your entire funnel — from the first impression all the way to the install, in-app events, and user LTV. You'll know what's driving real results, and what's just noise.
What teams love about it? It's stable, accurate, and built to handle everything the mobile world throws at you — privacy changes, creative optimization, you name it.
And when you need help? Their global support team is there 24/7 — not just to fix things, but to help you grow.
If you're ready to level up your mobile marketing and make smarter decisions, check out AppsFlyer.com
*********************************************
Follow us:
YouTube: AppMasters.com/YouTube
Instagram: @App Masters
Twitter: @App Masters
TikTok: @stevepyoung
Facebook: App Masters
*********************************************
Black, Death, Speed, Thrash, Doom, Folk, Shred, Power, Prog & Traditional Metal
Playlists: https://spinitron.com/WSCA/show/160737/Black-Night-Meditations
WSCA 106.1 FM is non-commercial and non-profit.
Invariably at an Olympics, we're packing so much in that at some point we break. At Milano Cortina 2026, it's Day 8, when Alison tells Jill about a very special friendship. In other Olympic fun, Jill went out to Bormio to catch the men's Alpine giant slalom--and Brazil's first-ever Winter Olympics gold medal. Alison went to speed skating for Jordan Stolz' second gold medal of the Games and the wildness of the team pursuit. Sports on today's program: Alpine skiing Biathlon Cross-country skiing Curling Freestyle skiing Ice hockey Short track speed skating Skeleton Ski jumping Speed skating Keep our Flame Alive! We podcast about the Games all year. If you appreciate the independent voice that we provide, please consider supporting us today. Go to http://flamealivepod.com/support to learn about our one-time and ongoing patronage options (as well as the bonus content for our patrons). For a transcript of this episode, please visit http://flamealivepod.com. Thanks so much for listening, and until next time, keep the flame alive! *** Keep the Flame Alive: Obsessed with the Olympics and Paralympics? Just curious about how Olympic and Paralympic sports work? You've found your people! Join your hosts, Olympic aunties Alison Brown and Jill Jaracz for smart, fun, and down-to-earth interviews with athletes, coaches, and the unsung heroes behind the Games. Get the stories you don't find anywhere else. Tune in weekly all year-round, and daily during the Olympics and Paralympics. We're your cure for your Olympic Fever! Call us: (208) FLAME-IT. *** Support the show: http://flamealivepod.com/support Bookshop.org store: https://bookshop.org/shop/flamealivepod Become a patron and get bonus content: http://www.patreon.com/flamealivepod Buy merch here: https://flamealivepod.dashery.com Hang out with us online: Facebook: https://www.facebook.com/flamealivepod Insta: http://www.instagram.com/flamealivepod Facebook Group: https://www.facebook.com/groups/flamealivepod Newsletter: Sign up at https://flamealivepod.substack.com/subscribe VM/Text: (208) FLAME-IT / (208) 352-6348
In honor of the world's most prestigious winter sporting event, the Lutheran Ladies have embarked upon their own Winter Hymnastics series. Over three — make that four — consecutive episodes, they'll laugh, they'll cry, they'll sweat (sometimes literally), and above all, they'll sing as they celebrate some of the greatest hymns and hymnwriters past, present, and even yet to come. In this second of four episodes, Erin challenges Sarah and Rachel to a series of hymn-related challenges, which they tackle together as a team. Can they name the hymn based on a single measure of the tune? Speed read lyrics without one stumble? Remember every single word to a few beloved hymns? Choose hymns that are objectively beautiful in every way? Joining the Ladies halfway through are celebrity judges Deaconess Cara Patton (coordinator for LCMS Worship Ministry) and Kantor Christina Roberts (Our Savior Lutheran Church, Grand Rapids, Michigan). Connect with the Lutheran Ladies on social media in The Lutheran Ladies' Lounge Facebook discussion group (facebook.com/groups/LutheranLadiesLounge) and on Instagram @lutheranladieslounge. Follow Sarah (@hymnnerd), Rachel (@rachbomberger), and Erin (@erinaltered) on Instagram! Sign up for the Lutheran Ladies' Lounge monthly e-newsletter here, and email the Ladies at lutheranladies@kfuo.org.
For more thoughts, clips, and updates, follow Avetis Antaplyan on Instagram: https://www.instagram.com/avetisantaplyan
In this episode of The Tech Leader's Playbook, Avetis Antaplyan sits down with Kylee Ingram, a decision science expert and co-founder of Wizer, a platform built to help leaders design better decision-making rooms at scale. Kylee's journey began in sports television and documentary work before pivoting into interactive media and ultimately decision intelligence—a shift inspired by her desire to remove industry gatekeepers and build systems that empower diverse thinking.
Kylee unpacks the science behind why good leaders still make bad decisions, revealing how cognitive diversity—not just demographic diversity—is the missing ingredient in most executive teams. She breaks down the three hidden biases that compromise leadership groups (social, information, and capacity bias), why "smart people in the room" isn't enough, and how decision profiles dramatically change communication, hiring, fundraising, and strategic alignment.
Through research from Dr. Juliet Burke and real-world examples from organizations like Enron, Kylee illustrates how teams drift toward sameness as companies scale, quietly erasing the diversity of thought needed for innovation. She also shares practical tactics for CEOs to improve decision quality—without slowing down execution—and how leaders can tailor communication to different decision styles for more buy-in, clarity, and outcomes.
This episode is a masterclass on designing better rooms, better conversations, and ultimately, better decisions.
Takeaways
Cognitive diversity—not demographic diversity alone—is what prevents bad decisions in leadership teams.
Most CEOs fall into just two decision-making styles, which creates blind spots and groupthink at scale.
The "hippo effect" (highest-paid person's opinion) strongly influences decisions unless leaders intentionally speak last.
Independence is critical in decision design; decisions made before people enter the room create false consensus.
Structured diversity in decision profiles can reduce decision error by 30% and increase innovation by 20%.
Decision profiles offer a practical way to identify missing perspectives (e.g., risk-focused, analytical, visionary).
Leaders should audit each decision by asking: "Who is missing from this room?"
Communication should match decision styles; most organizations inadvertently ignore analyzers, achievers, and risk-oriented leaders.
Designing rooms—not relying on gut instinct—is the most reliable way to scale high-quality decisions.
Chapters
00:00 The Hidden Problem in Leadership Decisions
01:12 Kylee's Journey: From TV to Decision Intelligence
03:07 Early Wins & The Birth of Wizer
04:45 When Gut Instinct Isn't Enough
05:40 The Three Biases Undermining Every Leadership Team
09:17 The Hippo Effect & Room Dynamics
12:22 Cognitive Overload & Oversimplification
14:16 Speed vs. Quality: Avoiding Paralysis by Analysis
17:38 Cognitive Skew & The Enron Example
19:07 The Seven Decision Profiles
22:47 Small Teams & Practical Application
25:55 Why Personality Tests Don't Work
30:34 Cognitive Drift in Scaling Companies
33:10 Conflict Entrepreneurs & Modern Culture
34:08 Why the Wrong People Keep Making the Decisions
36:00 Designing Better Interviews & Panels
37:29 Messaging & Decision Styles
41:27 Tailoring Communication Without Manipulation
43:07 One Thing CEOs Should Implement This Week
45:15 Mapping Your Organization with Wizer
47:30 Kylee's Aha Moments & Reflections
49:06 Closing Thoughts & What's Next
Kylee Ingram's Social Media Link: https://www.linkedin.com/in/kyleeingram/
Resources and Links:
https://www.hireclout.com
https://www.podcast.hireclout.com
https://www.linkedin.com/in/hirefasthireright
BJ has been invited to join a speed puzzling team! Have you ever heard of speed puzzling?? Apparently, Colorado ranks in the top 5 states in the country for speed puzzling!
In Part 1 of this in-depth conversation, Steven sits down with Brandon to break down the true foundations of sprint speed. The discussion centers on acceleration mechanics, horizontal force production, and why the broad jump may be one of the most underrated indicators of sprint performance.
Brandon shares his philosophy on training sprinters based on their natural tendencies—whether they're twitchy, elastic athletes or strength-dominant performers—and explains how resistance training, box squats, sumo deadlifts, and bounding can be used to build explosive power without sacrificing efficiency. Real coaching case studies highlight dramatic improvements in 200-meter times, state championship performances, and how mastering the first three steps can unlock an athlete's speed ceiling.
This episode is a deep dive into individualized sprint training, balancing technical execution with physical preparation, and understanding how force application and elasticity drive performance.
https://youtube.com/@platesandpancakes4593
https://instagram.com/voodoo4power?igshid=YmMyMTA2M2Y=
https://voodoo4ranch.com/
To possibly be a guest or support the show email Voodoo4ranch@gmail.com
https://www.paypal.com/paypalme/voodoo4ranch
What if sustainable growth isn't about becoming the biggest player in the market? Agencies triple in size and lose their soul. They chase national footprints and abandon the community relationships that made them valuable. Meanwhile, regional firms with deep roots are building something holding companies can't replicate.
In this episode, host Dan Nestle sits down with Jennifer Kaplan, founder and president of Evolve PR and Marketing—Arizona's largest PR firm. Since 2010, Jen has grown the firm to 28 full-time publicists serving more than 140 clients. She's earned Copper Anvil's 2025 Agency of the Year and PR News' Top Women in PR—but the real story is how she built an agency that's simultaneously deeply local and nationally sophisticated.
Dan and Jen explore why relationship-driven business is becoming a competitive moat in the AI era, the tension between AI efficiency and human authenticity, and what two decades of agency leadership teaches you about leading through chaos without losing what makes you special.
Listen in and hear about...
Why local authority often delivers more impact than national vanity coverage
Balancing AI tool adoption with authentic client and media relationships
Building agency culture through purpose, consistency, and trust
The power of niche specialization over broad service sprawl
Leadership lessons from 20 years of agency growth—including when to stop trying to do it all
Notable Quotes from Jennifer Kaplan
"My mom would say that I was born doing pr, and it is something that I love so much because I feel like it's who I am. So when I talk to people, I do say, you know, look at something that isn't just a job." [00:04:07 – 00:04:23]
"With any trend, I feel in any business you have to be on it, you have to be aware, but don't lose your identity and don't let it replace things that it shouldn't replace." [00:11:19 – 00:11:34]
"You have to be in tune with AI, but I wouldn't let it take the place of so many wonderful things that we can offer as individuals and that are afforded us in, in the world." [00:12:45 – 00:13:02]
"Go be you. You know, don't let AI or all the things that are around us take away from who you are and lose who you know what makes you special." [01:04:02 – 01:04:15]
Resources and Links
Dan Nestle
Inquisitive Communications | Website
The Trending Communicator | Website
Communications Trends from Trending Communicators | Dan Nestle's Substack
Dan Nestle | LinkedIn
Jennifer Kaplan
Evolve PR and Marketing | Website
Jennifer Kaplan | LinkedIn
Timestamps
0:00:00 Introduction: Rethinking Growth in Communications
0:07:44 Importance of Networking, Building Trust, and Relationships
0:13:09 Navigating AI's Impact on Communications and Authentic Content
0:18:32 Evolve PR's Niche Approach and Building Media Relationships
0:24:03 Local vs. National Recognition: Awards, Impact, and Credibility
0:31:17 Challenges of Speed and Building Relationships in PR
0:36:30 Integrating Traditional and Tech Approaches in PR
0:43:38 Measuring Success and the Role of Tools like AI in Agency Work
0:45:49 Purpose-Driven Work and Letting Team Guide Agency Culture
0:51:01 Agency Values: Neutrality, Team Dynamics, and Audience Focus
0:56:12 Leadership Lessons: Don't Try to Do It All, Embrace Mistakes
1:04:02 Closing Thoughts: "Go Be You" and Staying Authentic
(Notes co-created by Human Dan, Claude, and Castmagic)
Learn more about your ad choices. Visit megaphone.fm/adchoices
Speed skating is an intense Olympic sport. Joe wants to see checking in speed skating. Would fans boo the Steelers taking Ty Simpson? Joe thinks Steelers fans would cheer any quarterback! Donny thinks it's very possible that Steelers fans would boo.
KC Boutiette is a four-time Olympic speed skater. He is also a competitor on the TV show American Ninja Warrior. Come join this fun conversation about his early life and his journey to the world's biggest stage. Please like, follow, and share. Also, please subscribe to The RMFJ Podcast YouTube channel.
It's Day 6 of the Milano Cortina 2026 Winter Olympics, and we're having new adventures! Jill learned some scary new Italian words trying to get out to the Tesero Cross-Country Stadium to see TKFLASTANI Bruna Moura compete in the 10K freestyle race. She made it to the stadium, but more importantly, did she make it back to Milano in time to record the episode? Listen to find out! Meanwhile, Alison had a full day in Milano and hit up nearly every venue. Sports on today's program: · Alpine skiing · Cross-country skiing · Curling · Freestyle skiing · Ice hockey · Luge · Short track speed skating · Skeleton · Snowboard · Speed skating Plus, what did we forget to tell you yesterday? And many mascots! Keep our Flame Alive! We podcast about the Games all year. If you appreciate the independent voice that we provide, please consider supporting us today. Go to http://flamealivepod.com/support to learn about our one-time and ongoing patronage options (as well as the bonus content for our patrons). For a transcript of this episode, please visit http://flamealivepod.com. *** Keep the Flame Alive: Obsessed with the Olympics and Paralympics? Just curious about how Olympic and Paralympic sports work? You've found your people! Join your hosts, Olympic aunties Alison Brown and Jill Jaracz for smart, fun, and down-to-earth interviews with athletes, coaches, and the unsung heroes behind the Games. Get the stories you don't find anywhere else. Tune in weekly all year round, and daily during the Olympics and Paralympics. We're the cure for your Olympic fever! Call us: (208) FLAME-IT. *** Support the show: http://flamealivepod.com/support Bookshop.org store: https://bookshop.org/shop/flamealivepod Become a patron and get bonus content: http://www.patreon.com/flamealivepod Buy merch here: https://flamealivepod.dashery.com Hang out with us online: Facebook: https://www.facebook.com/flamealivepod Insta: http://www.instagram.com/flamealivepod Facebook Group: https://www.facebook.com/groups/flamealivepod Newsletter: Sign up at https://flamealivepod.substack.com/subscribe VM/Text: (208) FLAME-IT / (208) 352-6348
With testing in Sebring and cars on track between now and St. Pete, we're calling it: the off season is done! Hallelujah! Tim's getting his daughter ready for school while we record, so he's gone; the guys cover AI images of Off Track, what testing in Sebring means or doesn't mean, a possible new qualifying format, and more! +++ Off Track is part of the SiriusXM Sports Podcast Network. If you enjoyed this episode and want to hear more, please give a 5-star rating and leave a review. Subscribe today wherever you stream your podcasts. Want some Off Track swag? Check out our store! Check out our website, www.askofftrack.com Subscribe to our YouTube Channel. Want some advice? Send your questions in for Ask Alex to AskOffTrack@gmail.com Follow us on Twitter at @askofftrack. Or individually at @Hinchtown, @AlexanderRossi, and @TheTimDurham. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Fresh off a gruelling 20-plus hour travel day from Tasmania, Pinkbike's own social media correspondent Dan Wolfe sits down with Matt Beer and me to deconstruct the 2026 edition of Red Bull Hardline Tasmania. While the finals were ultimately rained out, Dan provides some on-the-ground insights into the "new school" of freeride-DH builds, noting that the days of "bro-science" are over, replaced by precise measurements and "green-light" marshals. The crew dives into the technical nuances of hitting 70+ km/h features, why Aaron Gwin looked surprisingly dangerous on his new ride, and the rumours surrounding a potential 2026 Canadian stop at Cypress Mountain. Featuring a rotating cast of the editorial team and other guests, the Pinkbike podcast is a weekly update on all the latest stories from around the world of mountain biking, as well as some frank discussion about tech, racing, and everything in between.
These are the headlines you NEED to know about!
Full show - Wednesday | Dumb thing you believed as a kid | News or Nope - James Van Der Beek, Love Is Blind, and speed puzzling | Erica made people on TikTok mad | OPP - I stalked my boyfriend | How many times have you been in love? | Being a good aunt is hard | Is this show member's kid...CHEATING!? | Sleep hygiene and dinner routines | Stupid stories www.instagram.com/theslackershow www.instagram.com/ericasheaaa www.instagram.com/thackiswack www.instagram.com/radioerin
This podcast features Gabriele Corso and Jeremy Wohlwend, co-founders of Boltz and authors of the Boltz Manifesto, discussing the rapid evolution of structural biology models from AlphaFold to their own open-source suite, Boltz-1 and Boltz-2. The central thesis is that while single-chain protein structure prediction is largely "solved" through evolutionary hints, the next frontier lies in modeling complex interactions (protein-ligand, protein-protein) and generative protein design, which Boltz aims to democratize via open-source foundations and scalable infrastructure.
Full Video Pod
On YouTube!
Timestamps
* 00:00 Introduction to Benchmarking and the "Solved" Protein Problem
* 06:48 Evolutionary Hints and Co-evolution in Structure Prediction
* 10:00 The Importance of Protein Function and Disease States
* 15:31 Transitioning from AlphaFold 2 to AlphaFold 3 Capabilities
* 19:48 Generative Modeling vs. Regression in Structural Biology
* 25:00 The "Bitter Lesson" and Specialized AI Architectures
* 29:14 Development Anecdotes: Training Boltz-1 on a Budget
* 32:00 Validation Strategies and the Protein Data Bank (PDB)
* 37:26 The Mission of Boltz: Democratizing Access and Open Source
* 41:43 Building a Self-Sustaining Research Community
* 44:40 Boltz-2 Advancements: Affinity Prediction and Design
* 51:03 BoltzGen: Merging Structure and Sequence Prediction
* 55:18 Large-Scale Wet Lab Validation Results
* 01:02:44 Boltz Lab Product Launch: Agents and Infrastructure
* 01:13:06 Future Directions: Developability and the "Virtual Cell"
* 01:17:35 Interacting with Skeptical Medicinal Chemists
Key Summary
Evolution of Structure Prediction & Evolutionary Hints
* Co-evolutionary Landscapes: The speakers explain that breakthrough progress in single-chain protein prediction relied on decoding evolutionary correlations where mutations in one position necessitate mutations in another to conserve 3D structure.
* Structure vs. Folding: They differentiate between structure prediction (getting the final answer) and folding (the kinetic process of reaching that state), noting that the field is still quite poor at modeling the latter.
* Physics vs. Statistics: RJ posits that while models use evolutionary statistics to find the right "valley" in the energy landscape, they likely possess a "light understanding" of physics to refine the local minimum.
The Shift to Generative Architectures
* Generative Modeling: A key leap in AlphaFold 3 and Boltz-1 was moving from regression (predicting one static coordinate) to a generative diffusion approach that samples from a posterior distribution.
* Handling Uncertainty: This shift allows models to represent multiple conformational states and avoid the "averaging" effect seen in regression models when the ground truth is ambiguous.
* Specialized Architectures: Despite the "bitter lesson" of general-purpose transformers, the speakers argue that equivariant architectures remain vastly superior for biological data due to the inherent 3D geometric constraints of molecules.
Boltz-2 and Generative Protein Design
* Unified Encoding: Boltz-2 (and BoltzGen) treats structure and sequence prediction as a single task by encoding amino acid identities into the atomic composition of the predicted structure.
* Design Specifics: Instead of a sequence, users feed the model blank tokens and a high-level "spec" (e.g., an antibody framework), and the model decodes both the 3D structure and the corresponding amino acids.
* Affinity Prediction: While model confidence is a common metric, Boltz-2 focuses on affinity prediction—quantifying exactly how tightly a designed binder will stick to its target.
Real-World Validation and Productization
* Generalized Validation: To prove the model isn't just "regurgitating" known data, Boltz tested its designs on 9 targets with zero known interactions in the PDB, achieving nanomolar binders for two-thirds of them.
* Boltz Lab Infrastructure: The newly launched Boltz Lab platform provides "agents" for protein and small molecule design, optimized to run 10x faster than open-source versions through proprietary GPU kernels.
* Human-in-the-Loop: The platform is designed to convert skeptical medicinal chemists by allowing them to run parallel screens and use their intuition to filter model outputs.
Transcript
RJ [00:05:35]: But the goal remains to, like, you know, really challenge the models, like, how well do these models generalize? And, you know, we've seen in some of the latest CASP competitions, like, while we've become really, really good at proteins, especially monomeric proteins, you know, other modalities still remain pretty difficult. So it's really essential, you know, in the field that there are, like, these efforts to gather, you know, benchmarks that are challenging. So it keeps us in line, you know, about what the models can do or not.
Gabriel [00:06:26]: Yeah, it's interesting you say that, like, in some sense, CASP, you know, at CASP 14, a problem was solved and, like, pretty comprehensively, right? But at the same time, it was really only the beginning. So you can say, like, what was the specific problem you would argue was solved? And then, like, you know, what is remaining, which is probably quite open.
RJ [00:06:48]: I think we'll steer away from the term solved, because we have many friends in the community who get pretty upset at that word. And I think, you know, fairly so. But the problem that was, you know, that a lot of progress was made on was the ability to predict the structure of single chain proteins. So proteins can, like, be composed of many chains. And single chain proteins are, you know, just a single sequence of amino acids.
And one of the reasons that we've been able to make such progress is also because we take a lot of hints from evolution. So the way the models work is that, you know, they sort of decode a lot of hints that come from evolutionary landscapes. So if you have, like, you know, some protein in an animal, and you go find the similar protein across, like, you know, different organisms, you might find different mutations in them. And as it turns out, if you take a lot of the sequences together, and you analyze them, you see that some positions in the sequence tend to evolve at the same time as other positions in the sequence, sort of this, like, correlation between different positions. And it turns out that that is typically a hint that these two positions are close in three dimensions. So part of the, you know, part of the breakthrough has been, like, our ability to also decode that very, very effectively. But what it implies also is that in absence of that co-evolutionary landscape, the models don't quite perform as well. And so, you know, I think when that information is available, maybe one could say, you know, the problem is, like, somewhat solved. From the perspective of structure prediction, when it isn't, it's much more challenging. And I think it's also worth also differentiating the, sometimes we confound a little bit, structure prediction and folding. Folding is the more complex process of actually understanding, like, how it goes from, like, this disordered state into, like, a structured, like, state. And that I don't think we've made that much progress on. But the idea of, like, yeah, going straight to the answer, we've become pretty good at.
Brandon [00:08:49]: So there's this protein that is, like, just a long chain and it folds up. Yeah. And so we're good at getting from that long chain in whatever form it was originally to the thing. But we don't know how it necessarily gets to that state. And there might be intermediate states that it's in sometimes that we're not aware of.
RJ [00:09:10]: That's right. And that relates also to, like, you know, our general ability to model, like, the different, you know, proteins are not static. They move, they take different shapes based on their energy states. And I think we are, also not that good at understanding the different states that the protein can be in and at what frequency, what probability. So I think the two problems are quite related in some ways. Still a lot to solve. But I think it was very surprising at the time, you know, that even with these evolutionary hints that we were able to, you know, to make such dramatic progress.
Brandon [00:09:45]: So I want to ask, why do the intermediate states matter? But first, I kind of want to understand, why do we care what proteins are shaped like?
Gabriel [00:09:54]: Yeah, I mean, the proteins are kind of the machines of our body. You know, the way that all the processes that we have in our cells, you know, work is typically through proteins, sometimes other molecules, sort of intermediate interactions. And through those interactions, we have all sorts of cell functions. And so when we try to understand, you know, a lot of biology, how our body works, how diseases work. So we often try to boil it down to, okay, what is going right in case of, you know, our normal biological function and what is going wrong in case of the disease state. And we boil it down to kind of, you know, proteins and kind of other molecules and their interaction.
And so when we try predicting the structure of proteins, it's critical to, you know, have an understanding of kind of those interactions. It's a bit like seeing the difference between... Having kind of a list of parts that you would put in a car and seeing kind of the car in its final form, you know, seeing the car really helps you understand what it does. On the other hand, kind of going to your question of, you know, why do we care about, you know, how the protein folds or, you know, how the car is made to some extent is that, you know, sometimes when something goes wrong, you know, there are, you know, cases of, you know, proteins misfolding. In some diseases and so on, if we don't understand this folding process, we don't really know how to intervene.
RJ [00:11:30]: There's this nice line in the, I think it's in the AlphaFold 2 manuscript, where they sort of discuss also like why we're even hopeful that we can target the problem in the first place. And then there's this notion that like, well, for proteins that fold, the folding process is almost instantaneous, which is a strong, like, you know, signal that like, yeah, like we should, we might be... able to predict that this very like constrained thing that, that the protein does so quickly. And of course that's not the case for, you know, for, for all proteins. And there's a lot of like really interesting mechanisms in the cells, but yeah, I remember reading that and thought, yeah, that's somewhat of an insightful point.
Gabriel [00:12:10]: I think one of the interesting things about the protein folding problem is that it used to be actually studied. And part of the reason why people thought it was impossible, it used to be studied as kind of like a classical example of, like, an NP problem. Uh, like there are so many different, you know, type of, you know, shapes that, you know, this amino acid could take. And so, this grows combinatorially with the size of the sequence. And so there used to be kind of a lot of actually kind of more theoretical computer science thinking about and studying protein folding as an NP problem. And so it was very surprising also from that perspective, kind of seeing machine learning do this. So clearly there is some, you know, signal in those sequences, through evolution, but also through kind of other things that, you know, us as humans, we're probably not really able to, uh, to understand, but that these models have, have learned.
Brandon [00:13:07]: And so Andrew White, we were talking to him a few weeks ago and he said that he was following the development of this and that there were actually ASICs that were developed just to solve this problem. So, again, that there were many, many, many millions of computational hours spent trying to solve this problem before AlphaFold. And just to be clear, one thing that you mentioned was that there's this kind of co-evolution of mutations and that you see this again and again in different species. So explain why does that give us a good hint that they're close by to each other? Yeah.
RJ [00:13:41]: Um, like think of it this way that, you know, if I have, you know, some amino acid that mutates, it's going to impact everything around it. Right. In three dimensions. And so it's almost like the protein through several, probably random mutations and evolution, like, you know, ends up sort of figuring out that this other amino acid needs to change as well for the structure to be conserved.
Uh, so this whole principle is that the structure is probably largely conserved, you know, because there's this function associated with it. And so it's really sort of like different positions compensating for, for each other. I see.
Brandon [00:14:17]: Those hints in aggregate give us a lot. Yeah. So you can start to look at what kinds of information about what is close to each other, and then you can start to look at what kinds of folds are possible given the structure and then what is the end state.
RJ [00:14:30]: And therefore you can make a lot of inferences about what the actual total shape is. Yeah, that's right. It's almost like, you know, you have this big, like three-dimensional valley, you know, where you're sort of trying to find like these like low energy states and there's so much to search through. That's almost overwhelming. But these hints, they sort of maybe put you in an area of the space that's already like, kind of close to the solution, maybe not quite there yet. And, and there's always this question of like, how much physics are these models learning, you know, versus like, just pure like statistics. And like, I think one of the things, at least I believe, is that once you're in that sort of approximate area of the solution space, then the models have like some understanding, you know, of how to get you to like, you know, the lower energy, uh, low energy state. And so maybe you have some, some light understanding of physics, but maybe not quite enough, you know, to know how to like navigate the whole space. Right. Okay.
Brandon [00:15:25]: So we need to give it these hints to kind of get into the right valley and then it finds the, the minimum or something. Yeah.
Gabriel [00:15:31]: One interesting explanation about how AlphaFold works, that I think is quite insightful, of course, doesn't cover kind of the entirety of, of what AlphaFold does, is, um, borrowed from, uh, Sergio Chinico from MIT. So he sees kind of AlphaFold, the interesting thing about AlphaFold is it's got this very peculiar architecture that we have seen, you know, used, and this architecture operates on this, you know, pairwise context between amino acids. And so the idea is that probably the MSA gives you this first hint about what potential amino acids are close to each other. MSA is multiple sequence alignment? Exactly. Yeah. Exactly. This evolutionary information. Yeah. And, you know, from this evolutionary information about potential contacts, then it's almost as if the model is sort of running some kind of, you know, Dijkstra algorithm where it's sort of decoding, okay, these have to be close. Okay. Then if these are close and this is connected to this, then this has to be somewhat close. And so you decode this, that becomes basically a pairwise kind of distance matrix. And then from this rough pairwise distance matrix, you decode kind of the actual potential structure.
Brandon [00:16:42]: Interesting. So there's kind of two different things going on in the kind of coarse grain and then the fine grain optimizations. Interesting. Yeah. Very cool.
Gabriel [00:16:53]: Yeah. You mentioned AlphaFold3. So maybe we have a good time to move on to that. So yeah, AlphaFold2 came out and it was like, I think fairly groundbreaking for this field. Everyone got very excited. A few years later, AlphaFold3 came out and maybe for some more history, like what were the advancements in AlphaFold3? And then I think maybe we'll, after that, we'll talk a bit about the sort of how it connects to Boltz. But anyway.
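Before the conversation moves on to AlphaFold3, here is a minimal, purely illustrative sketch of the co-evolution idea described in the exchange above: columns of a multiple sequence alignment (MSA) that tend to mutate together are often close in 3D. This is not the actual AlphaFold or Boltz pipeline (real systems use much stronger statistics such as direct coupling analysis and learned pair features); the toy MSA, function names, and mutual-information scoring here are assumptions chosen only to make the idea concrete.

```python
# Toy sketch: score pairs of MSA columns by how strongly they co-vary.
# High co-variation is a crude hint that the two positions touch in 3D.
from collections import Counter
from itertools import combinations
from math import log

def column_pair_mutual_information(msa: list[str], i: int, j: int) -> float:
    """Mutual information between alignment columns i and j (higher = more co-variation)."""
    col_i = [seq[i] for seq in msa]
    col_j = [seq[j] for seq in msa]
    n = len(msa)
    p_i = Counter(col_i)
    p_j = Counter(col_j)
    p_ij = Counter(zip(col_i, col_j))
    mi = 0.0
    for (a, b), count in p_ij.items():
        p_ab = count / n
        mi += p_ab * log(p_ab / ((p_i[a] / n) * (p_j[b] / n)))
    return mi

def contact_hints(msa: list[str], top_k: int = 3) -> list[tuple[int, int, float]]:
    """Rank residue pairs by co-variation as rough hints that they are close in 3D."""
    length = len(msa[0])
    scores = [(i, j, column_pair_mutual_information(msa, i, j))
              for i, j in combinations(range(length), 2)]
    return sorted(scores, key=lambda t: t[2], reverse=True)[:top_k]

# Hypothetical mini-MSA: positions 1 and 4 tend to swap together (A<->T),
# mimicking two residues that compensate for each other to conserve the fold.
msa = ["MAKLTE", "MTKLAE", "MAKLTE", "MTKLAE", "MAKLSE"]
print(contact_hints(msa))
```

In a real pipeline this kind of pairwise signal is only the "coarse grain" starting point; the model then refines it into an actual 3D structure, as Gabriel describes next.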
Yeah. So after AlphaFold2 came out, you know, Jeremy and I got into the field and with many others, you know, the clear problem that, you know, was, you know, obvious after that was, okay, now we can do individual chains. Can we do interactions, interaction, different proteins, proteins with small molecules, proteins with other molecules. And so. So why are interactions important? Interactions are important because to some extent that's kind of the way that, you know, these machines, you know, these proteins have a function, you know, the function comes by the way that they interact with other proteins and other molecules. Actually, in the first place, you know, the individual machines are often, as Jeremy was mentioning, not made of a single chain, but they're made of the multiple chains. And then these multiple chains interact with other molecules to give the function to those. And on the other hand, you know, when we try to intervene on these interactions, think about like a disease, think about like a, a biosensor or many other ways we are trying to design the molecules or proteins that interact in a particular way with what we would call a target protein or target. You know, this problem after AlphaFold2, you know, became clear, kind of one of the biggest problems in the field to, to solve. Many groups, including kind of ours and others, you know, started making some kind of contributions to this problem of trying to model these interactions. And AlphaFold3 was, you know, was a significant advancement on the problem of modeling interactions. And one of the interesting things that they were able to do, while, you know, some of the rest of the field really tried to model different interactions separately, you know, how protein interacts with small molecules, how protein interacts with other proteins, how RNA or DNA have their structure, they put everything together and, you know, trained very large models with a lot of advances, including kind of changing some of the key architectural choices, and managed to get a single model that was able to set this new state-of-the-art performance across all of these different kind of modalities, whether that was protein-small molecule, which is critical to developing kind of new drugs, protein-protein, understanding, you know, interactions of, you know, proteins with RNA and DNA and so on.
Brandon [00:19:39]: Just to satisfy the AI engineers in the audience, what were some of the key architectural and data, data changes that made that possible?
Gabriel [00:19:48]: Yeah, so one critical one that was not necessarily just unique to AlphaFold3, but there were actually a few other teams, including ours in the field that proposed this, was moving from, you know, modeling structure prediction as a regression problem. So where there is a single answer and you're trying to shoot for that answer to a generative modeling problem where you have a posterior distribution of possible structures and you're trying to sample this distribution. And this achieves two things. One is it starts to allow us to try to model more dynamic systems. As we said, you know, some of these structures can actually take multiple structures. And so, you know, you can now model that, you know, through kind of modeling the entire distribution.
But on the second hand, from more kind of core modeling questions, when you move from a regression problem to a generative modeling problem, you are really tackling the way that you think about uncertainty in the model in a different way. So if you think about, you know, I'm undecided between different answers, what's going to happen in a regression model is that, you know, I'm going to try to make an average of those different kind of answers that I had in mind. When you have a generative model, what you're going to do is, you know, sample all these different answers and then maybe use separate models to analyze those different answers and pick out the best. So that was kind of one of the critical improvements. The other improvement is that they significantly simplified, to some extent, the architecture, especially of the final model that takes kind of those pairwise representations and turns them into an actual structure. And that now looks a lot more like a more traditional transformer than, you know, like the very specialized equivariant architecture that it was in AlphaFold2.
Brandon [00:21:41]: So this is a bitter lesson, a little bit.
Gabriel [00:21:45]: There is some aspect of a bitter lesson, but the interesting thing is that it's very far from, you know, being like a simple transformer. This field is one of the, I argue, very few fields in applied machine learning where we still have kind of architectures that are very specialized. And, you know, there are many people that have tried to replace these architectures with, you know, simple transformers. And, you know, there is a lot of debate in the field, but I think kind of that most of the consensus is that, you know, the performance... that we get from the specialized architecture is vastly superior to what we get through a single transformer. Another interesting thing, staying on the modeling, machine learning side, which I think is somewhat counterintuitive seeing some of the other kind of fields and applications, is that scaling hasn't really worked kind of the same in this field. Now, you know, models like AlphaFold2 and AlphaFold3 are, you know, still very large models.
RJ [00:29:14]: in a place, I think, where we had, you know, some experience working in, you know, with the data and working with this type of models. And I think that put us already in like a good place to, you know, to produce it quickly. And, you know, and I would even say, like, I think we could have done it quicker. The problem was like, for a while, we didn't really have the compute. And so we couldn't really train the model. And actually, we only trained the big model once. That's how much compute we had. We could only train it once. And so like, while the model was training, we were like, finding bugs left and right. A lot of them that I wrote. And like, I remember like, I was like, sort of like, you know, doing like, surgery in the middle, like stopping the run, making the fix, like relaunching. And yeah, we never actually went back to the start. We just like kept training it with like the bug fixes along the way, which was impossible to reproduce now. Yeah, yeah, no, that model is like, has gone through such a curriculum that, you know, learned some weird stuff. But yeah, somehow by miracle, it worked out.
Gabriel [00:30:13]: The other funny thing is that the way that we were training, most of that model was through a cluster from the Department of Energy. But that's sort of like a shared cluster that many groups use.
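Stepping back briefly to the regression-versus-generative point Gabriel made a moment ago, here is a tiny 1-D toy that makes the "averaging" effect concrete. It is only a stand-in under simplified assumptions (a coordinate that is genuinely ambiguous between two values), not the actual AlphaFold3/Boltz diffusion setup.

```python
# Toy illustration: when the ground truth is ambiguous (a coordinate is -1.0 or
# +1.0 with equal probability, i.e. two conformations), the MSE-optimal
# regression answer is the mean (0.0), a state that never actually occurs.
# A generative model instead samples candidates from both modes and leaves the
# choice to a separate scoring/ranking step.
import random

random.seed(0)
ground_truth_samples = [random.choice([-1.0, 1.0]) for _ in range(10_000)]

# "Regression": the best constant prediction under squared error is the average.
regression_prediction = sum(ground_truth_samples) / len(ground_truth_samples)
print(f"regression answer ~ {regression_prediction:.3f}  (an averaged, non-physical state)")

# "Generative": draw samples from a learned two-mode distribution instead of averaging.
def toy_generative_sample() -> float:
    return random.choice([-1.0, 1.0]) + random.gauss(0.0, 0.05)

samples = [toy_generative_sample() for _ in range(5)]
print("generative samples:", [round(s, 2) for s in samples])
```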
And so we were basically training the model for two days, and then it would go back to the queue and stay a week in the queue. Oh, yeah. And so it was pretty painful. And so we actually kind of towards the end with Evan, the CEO of Genesis, and basically, you know, I was telling him a bit about the project and, you know, kind of telling him about this frustration with the compute. And so luckily, you know, he offered to kind of help. And so we, we got the help from Genesis to, you know, finish up the model. Otherwise, it probably would have taken a couple of extra weeks.
Brandon [00:30:57]: Yeah, yeah.
Brandon [00:31:02]: And then, and then there's some progression from there.
Gabriel [00:31:06]: Yeah, so I would say kind of that Boltz-1, but also kind of these other sets of models that came around the same time, were a big leap from, you know, kind of the previous kind of open source models, and, you know, kind of really kind of approaching the level of AlphaFold 3. But I would still say that, you know, even to this day, there are, you know, some... specific instances where AlphaFold 3 works better. I think one common example is antibody-antigen prediction, where, you know, AlphaFold 3 still seems to have an edge in many situations. Obviously, these are somewhat different models. They are, you know, you run them, you obtain different results. So it's, it's not always the case that one model is better than the other, but kind of in aggregate, we still, especially at the time.
Brandon [00:32:00]: So AlphaFold 3 is, you know, still having a bit of an edge. We should talk about this more when we talk about BoltzGen, but like, how do you know one is, one model is better than the other? Like you, so you, I make a prediction, you make a prediction, like, how do you know?
Gabriel [00:32:11]: Yeah, so easily, you know, the, the great thing about kind of structural prediction and, you know, once we're going to go into the design space of designing new small molecules, new proteins, this becomes a lot more complex. But a great thing about structural prediction is that a bit like, you know, CASP was doing, basically the way that you can evaluate them is that, you know, you train... You know, you train a model on the structures that were, you know, released across the field up until a certain time. And, you know, one of the things that we didn't talk about that was really critical in all this development is the PDB, which is the Protein Data Bank. It's this common resource, basically a common database where every biologist publishes their structures. And so we can, you know, train on, you know, all the structures that were put in the PDB until a certain date. And then... And then we basically look for recent structures, okay, which structures look pretty different from anything that was published before, because we really want to try to understand generalization.
Brandon [00:33:13]: And then on this new structure, we evaluate all these different models. And so you just know when AlphaFold3 was trained, you know, when you're, you intentionally trained to the same date or something like that. Exactly. Right. Yeah.
Gabriel [00:33:24]: And so this is kind of the way that you can somewhat easily kind of compare these models, obviously, that assumes that, you know, the training. You've always been very passionate about validation. I remember like DiffDock, and then there was like DiffDock-L and DockGen. You've thought very carefully about this in the past.
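Before the DockGen story continues, here is a minimal sketch of the time-split evaluation Gabriel just described: train on PDB entries released before a cutoff date, then evaluate only on later entries that look sufficiently different from everything in training. The `Entry` record, the placeholder `similarity` function, and the threshold are assumptions for illustration; real pipelines use sequence-identity and structural clustering tools (e.g. MMseqs2 or TM-align-style comparisons) against the full training set.

```python
# Sketch of a PDB-style time split: train on pre-cutoff structures, test only on
# post-cutoff structures that are dissimilar to everything seen in training.
from dataclasses import dataclass
from datetime import date

@dataclass
class Entry:
    pdb_id: str
    release_date: date
    sequence: str

def similarity(a: str, b: str) -> float:
    """Placeholder similarity; real setups use proper sequence/structure comparison tools."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

def time_split(entries: list[Entry], cutoff: date, max_sim: float = 0.3):
    train = [e for e in entries if e.release_date < cutoff]
    test = [
        e for e in entries
        if e.release_date >= cutoff
        and all(similarity(e.sequence, t.sequence) <= max_sim for t in train)
    ]
    return train, test

entries = [
    Entry("1ABC", date(2020, 5, 1), "MKTAYIAKQR"),
    Entry("7XYZ", date(2023, 8, 9), "MKTAYIAKQR"),   # near-duplicate of training: excluded from test
    Entry("8QRS", date(2024, 2, 2), "GGSVLQWERTY"),  # novel fold: kept as a generalization test
]
train, test = time_split(entries, cutoff=date(2021, 9, 30))
print([e.pdb_id for e in train], [e.pdb_id for e in test])
```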
Like, actually, I think DockGen is like a really funny story that I think, I don't know if you want to talk about that. It's an interesting like... Yeah, I think one of the amazing things about putting things open source is that we get a ton of feedback from the field. And, you know, sometimes we get kind of great feedback of people. Really like... But honestly, most of the times, you know, to be honest, that's also maybe the most useful feedback is, you know, people sharing about where it doesn't work. And so, you know, at the end of the day, it's critical. And this is also something, you know, across other fields of machine learning. It's always critical, to do progress in machine learning, to set clear benchmarks. And as, you know, you start doing progress on certain benchmarks, then, you know, you need to improve the benchmarks and make them harder and harder. And this is kind of the progression of, you know, how the field operates. And so, you know, the example of DockGen was, you know, we published this initial model called DiffDock in my first year of PhD, which was sort of like, you know, one of the early models to try to predict kind of interactions between proteins and small molecules, that we put out a year after AlphaFold2 was published. And now, on the one hand, you know, on these benchmarks that we were using at the time, DiffDock was doing really well, kind of, you know, outperforming kind of some of the traditional physics-based methods. But on the other hand, you know, when we started, you know, kind of giving these tools to kind of many biologists, and one example we collaborated with was the group of Nick Polizzi at Harvard, we noticed, started noticing that there was this clear pattern where, for proteins that were very different from the ones the model was trained on, the model was, was struggling. And so, you know, that seemed clear that, you know, this is probably kind of where we should, you know, put our focus on. And so we first developed, you know, with Nick and his group, a new benchmark, and then, you know, went after and said, okay, what can we change about the current architecture to improve this kind of generalization. And this is the same that, you know, we're still doing today, you know, kind of, where does the model not work, you know, and then, you know, once we have that benchmark, you know, let's try to throw everything we have, any ideas that we have, at the problem.
RJ [00:36:15]: And there's a lot of like healthy skepticism in the field, which I think, you know, is, is, is great. And I think, you know, it's very clear that there's a ton of things, the models don't really work well on, but I think one thing that's probably, you know, undeniable is just like the pace of, pace of progress, you know, and how, how much better we're getting, you know, every year. And so I think if you, you know, if you assume, you know, any constant, you know, rate of progress moving forward, I think things are going to look pretty cool at some point in the future.
Gabriel [00:36:42]: ChatGPT was only three years ago. Yeah, I mean, it's wild, right?
RJ [00:36:45]: Like, yeah, yeah, yeah, it's one of those things. Like, you've been doing this. Being in the field, you don't see it coming, you know? And like, I think, yeah, hopefully we'll, you know, we'll, we'll continue to have as much progress as we've had the past few years.
Brandon [00:36:55]: So this is maybe an aside, but I'm really curious, you get this great feedback from the, from the community, right?
By being open source. My question is partly like, okay, yeah, if you open source and everyone can copy what you did, but it's also maybe balancing priorities, right? Where you, like all my customers are saying. I want this, there's all these problems with the model. Yeah, yeah. But my customers don't care, right? So like, how do you, how do you think about that? Yeah.
Gabriel [00:37:26]: So I would say a couple of things. One is, you know, part of our goal with Boltz and, you know, this is also kind of established as kind of the mission of the public benefit company that we started is to democratize the access to these tools. But one of the reasons why we realized that Boltz needed to be a company, it couldn't just be an academic project is that putting a model on GitHub is definitely not enough to get, you know, chemists and biologists, you know, across, you know, both academia, biotech and pharma to use your model to, in their therapeutic programs. And so a lot of what we think about, you know, at Boltz beyond kind of the, just the models is thinking about all the layers. The layers that come on top of the models to get, you know, from, you know, those models to something that can really enable scientists in the industry. And so that goes, you know, into building kind of the right kind of workflows that take in kind of, for example, the data and try to answer kind of directly those problems that, you know, the chemists and the biologists are asking, and then also kind of building the infrastructure. And so this to say that, you know, even with models fully open. You know, we see a ton of potential for, you know, products in the space and the critical part about a product is that even, you know, for example, with an open source model, you know, running the model is not free, you know, as we were saying, these are pretty expensive models and especially, and maybe we'll get into this, you know, these days we're seeing kind of pretty dramatic inference time scaling of these models where, you know, the more you run them, the better the results are. But there, you know, you see. You start getting into a point that compute and compute costs become a critical factor. And so putting a lot of work into building the right kind of infrastructure, building the optimizations and so on really allows us to provide, you know, a much better service, potentially, than the open source models. That to say, you know, even though, you know, with a product, we can provide a much better service. I do still think, and we will continue to put a lot of our models open source, because the critical kind of role, I think, of open source models is, you know, helping kind of the community progress on the research and, you know, from which we, we all benefit. And so, you know, we'll continue to on the one hand, you know, put some of our kind of base models open source so that the field can, can build on top of it. And, you know, as we discussed earlier, we learn a ton from, you know, the way that the field uses and builds on top of our models, but then, you know, try to build a product that gives the best experience possible to scientists. So that, you know, like a chemist or a biologist doesn't need to, you know, spin up a GPU and, you know, set up, you know, our open source model in a particular way, but can just, you know, a bit like, you know, I, even though I am a computer scientist, machine learning scientist, I don't necessarily, you know, take an open source LLM and try to kind of spin it off.
But, you know, I just maybe open a ChatGPT app or Claude Code and just use it as an amazing product. We kind of want to give the same experience on this front.
Brandon [00:40:40]: I heard a good analogy yesterday that a surgeon doesn't want the hospital to design a scalpel, right?
Brandon [00:40:48]: So just buy the scalpel.
RJ [00:40:50]: You wouldn't believe like the number of people, even like in my short time, you know, between AlphaFold3 coming out and the end of the PhD, like the number of people that would like reach out just for like us to like run AlphaFold3 for them, you know, or things like that. Just because like, you know, Boltz in our case, you know, just because it's like. It's like not that easy, you know, to do that, you know, if you're not a computational person. And I think like part of the goal here is also that, you know, we continue to obviously build the interface with computational folks, but that, you know, the models are also accessible to like a larger, broader audience. And then that comes from like, you know, good interfaces and stuff like that.
Gabriel [00:41:27]: I think one like really interesting thing about Boltz is that with the release of it, you didn't just release a model, but you created a community. Yeah. Did that community, it grew very quickly. Did that surprise you? And like, what is the evolution of that community and how has that fed into Boltz?
RJ [00:41:43]: If you look at its growth, it's like very much like when we release a new model, it's like, there's a big, big jump, but yeah, it's, I mean, it's been great. You know, we have a Slack community that has like thousands of people on it. And it's actually like self-sustaining now, which is like the really nice part because, you know, it's, it's almost overwhelming, I think, you know, to be able to like answer everyone's questions and help. It's really difficult, you know. The, the few people that we were, but it ended up that like, you know, people would answer each other's questions and like, sort of like, you know, help one another. And so the Slack, you know, has been like kind of, yeah, self, self-sustaining and that's been, it's been really cool to see.
RJ [00:42:21]: And, you know, that's, that's for like the Slack part, but then also obviously on GitHub as well. We've had like a nice, nice community. You know, I think we also aspire to be even more active on it, you know, than we've been in the past six months, which has been like a bit challenging, you know, for us. But. Yeah, the community has been, has been really great and, you know, there's a lot of papers also that have come out with like new evolutions on top of Boltz and it's surprised us to some degree because like there's a lot of models out there. And I think like, you know, sort of people converging on that was, was really cool. And, you know, I think it speaks also, I think, to the importance of like, you know, when, when you put code out, like to try to put a lot of emphasis on like making it like as easy to use as possible and something we thought a lot about when we released the code base. You know, it's far from perfect, but, you know.
Brandon [00:43:07]: Do you think that that was one of the factors that caused your community to grow is just the focus on easy to use, make it accessible? I think so.
RJ [00:43:14]: Yeah. And we've, we've heard it from a few people over the, over the, over the years now. And, you know, and some people still think it should be a lot nicer and they're, and they're right. And they're right.
But yeah, I think it was, you know, at the time, maybe a little bit easier than, than other things.
Gabriel [00:43:29]: The other thing that, I think, led to, to the community and to some extent, I think, you know, like the somewhat the trust in the community. Kind of what we, what we put out is the fact that, you know, it's not really been kind of, you know, one model, but, and maybe we'll talk about it, you know, after Boltz 1, you know, there were maybe another couple of models kind of released, you know, or open source kind of soon after. We kind of continued kind of that open source journey, at least with Boltz 2, where we are not only improving kind of structure prediction, but also starting to do affinity predictions, understanding kind of the strength of the interactions between these different molecules, which is this critical property that you often want to optimize in discovery programs. And then, you know, more recently also kind of protein design models. And so we've sort of been building this suite of, of models that come together, interact with one another, where, you know, kind of, there is almost an expectation that, you know, we, we take very much to heart, of, you know, always having, across kind of the entire suite of different tasks, the best model out there, so that it's sort of like our open source tool can be kind of the go-to model for everybody in the, in the industry. I really want to talk about Boltz 2, but before that, one last question in this direction, was there anything about the community which surprised you? Were there any, like, someone was doing something and you're like, why would you do that? That's crazy. Or that's actually genius. And I never would have thought about that.
RJ [00:45:01]: I mean, we've had many contributions. I think like some of the interesting ones, like, I mean, we had, you know, this one individual who like wrote like a complex GPU kernel, you know, for part of the architecture, and the funny thing is like that piece of the architecture had been there since AlphaFold 2, and I don't know why it took Boltz for this, you know, for this person to, you know, to decide to do it, but that was like a really great contribution. We've had a bunch of others, like, you know, people figuring out like ways to, you know, hack the model to do something like cyclic peptides, like, you know, there's, I don't know if there's any other interesting ones come to mind.
Gabriel [00:45:41]: One cool one, and this was, you know, something that initially was proposed as, you know, as a message in the Slack channel by Tim O'Donnell was basically, he was, you know, there are some cases, especially, for example, we discussed, you know, antibody-antigen interactions where the models don't necessarily kind of get the right answer. What he noticed is that, you know, the models were somewhat stuck into predicting kind of the antibodies. And so he basically ran these experiments. In this model, you can condition, basically, you can give hints. And so he basically gave, you know, random hints to the model, basically, okay, you should bind to this residue, you should bind to the first residue, or you should bind to the 11th residue, or you should bind to the 21st residue, you know, basically every 10 residues scanning the entire antigen.
Brandon [00:46:33]: Residues are the...
Gabriel [00:46:34]: The amino acids. The amino acids, yeah. So the first amino acid, the 11th amino acid, and so on.
So it's sort of like doing a scan, and then, you know, conditioning the model to predict all of them, and then looking at the confidence of the model in each of those cases and taking the top. And so it's sort of like a very somewhat crude way of doing kind of inference time search. But surprisingly, you know, for antibody-antigen prediction, it actually kind of helped quite a bit. And so there's some, you know, interesting ideas that, you know, obviously, as kind of developing the model, you say kind of, you know, wow, why would the model, you know, be so dumb. But, you know, it's very interesting. And that, you know, leads you to also kind of, you know, start thinking about, okay, how do I, can I do this, you know, not with this brute force, but, you know, in a smarter way.
RJ [00:47:22]: And so we've also done a lot of work in that direction. And that speaks to, like, the, you know, the power of scoring. We're seeing that a lot. I'm sure we'll talk about it more when we talk about BoltzGen. But, you know, our ability to, like, take a structure and determine that that structure is, like... Good. You know, like, somewhat accurate. Whether that's a single chain or, like, an interaction is a really powerful way of improving, you know, the models. Like, sort of like, you know, if you can sample a ton and you assume that, like, you know, if you sample enough, you're likely to have, like, you know, the good structure. Then it really just becomes a ranking problem. And, you know, now we're, you know, part of the inference time scaling that Gabby was talking about is very much that. It's like, you know, the more we sample, the more we, like, you know, the ranking model. The ranking model ends up finding something it really likes. And so I think our ability to get better at ranking, I think, is also what's going to enable sort of the next, you know, next big, big breakthroughs. Interesting.
Brandon [00:48:17]: But I guess there's a, my understanding, there's a diffusion model and you generate some stuff and then you, I guess, it's just what you said, right? Then you rank it using a score and then you finally... And so, like, can you talk about those different parts? Yeah.
Gabriel [00:48:34]: So, first of all, like, the... One of the critical kind of, you know, beliefs that we had, you know, also when we started working on Boltz 1 was sort of like the structure prediction models are somewhat, you know, our field's version of some foundation models, you know, learning about kind of how proteins and other molecules interact. And then we can leverage that learning to do all sorts of other things. And so with Boltz 2, we leverage that learning to do affinity predictions. So understanding kind of, you know, if I give you this protein, this molecule, how tight is that interaction? For BoltzGen, what we did was taking kind of that foundation model and then fine-tuning it to predict kind of entire new proteins. And so the way basically that that works is sort of like, for the protein that you're designing, instead of feeding in an actual sequence, you feed in a set of blank tokens. And you train the models to, you know, predict both the structure of kind of that protein and also what the different amino acids of that protein are. And so basically the way that BoltzGen operates is that you feed a target protein that you may want to kind of bind to or, you know, another DNA, RNA.
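Backing up to the residue-scanning trick and the sample-and-rank idea described just above, here is a minimal sketch of that kind of inference-time search: try a conditioning hint at every Nth antigen residue, then keep the prediction the model is most confident about. The function names (especially `predict_with_contact_hint`) are hypothetical stand-ins, not the real Boltz or AlphaFold API; they only capture the loop structure of "condition, predict, score, take the top."

```python
# Crude inference-time search: scan candidate binding residues, keep the most
# confident prediction. Any model that accepts a contact/pocket hint and returns
# (structure, confidence) could plug into `predict_with_contact_hint`.
from typing import Callable, Optional, Tuple

Prediction = Tuple[object, float]  # (structure, confidence)

def scan_binding_site(
    antigen_sequence: str,
    binder_sequence: str,
    predict_with_contact_hint: Callable[[str, str, int], Prediction],
    stride: int = 10,
) -> Prediction:
    """Try a contact hint at every `stride`-th antigen residue; return the most confident result."""
    best: Optional[Prediction] = None
    for residue_index in range(0, len(antigen_sequence), stride):
        structure, confidence = predict_with_contact_hint(
            antigen_sequence, binder_sequence, residue_index
        )
        if best is None or confidence > best[1]:
            best = (structure, confidence)
    return best

# Usage sketch with a dummy predictor (a real one would be a structure-prediction model call):
def dummy_predictor(antigen: str, binder: str, hint: int) -> Prediction:
    return (f"pose_near_residue_{hint}", 1.0 / (1 + abs(hint - 42)))  # toy confidence peak near residue 42

pose, conf = scan_binding_site("A" * 120, "Q" * 110, dummy_predictor)
print(pose, round(conf, 3))
```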
And then you feed the high level kind of design specification of, you know, what you want your new protein to be. For example, it could be like an antibody with a particular framework. It could be a peptide. It could be many other things. And that's with natural language or? And that's, you know, basically, you know, prompting. And we have kind of this sort of like spec that you specify. And, you know, you feed kind of this spec to the model. And then the model translates this into, you know, a set of, you know, tokens, a set of conditioning to the model, a set of, you know, blank tokens. And then, you know, basically it decodes, as part of the diffusion model, it decodes a new structure and a new sequence for your protein. And, you know, basically, then we take that. And as Jeremy was saying, we are trying to score it on, you know, how good of a binder it is to that original target.
Brandon [00:50:51]: You're using basically Boltz to predict the folding and the affinity to that molecule. So and then that kind of gives you a score? Exactly.
Gabriel [00:51:03]: So you use this model to predict the folding. And then you do two things. One is that you predict the structure with something like Boltz2, and then you basically compare that structure with what the design model predicted. And this is sort of like, in the field, called consistency. It's basically you want to make sure that, you know, the structure that you're predicting is actually what you're trying to design. And that gives you a much better confidence that, you know, that's a good design. And so that's the first filtering. And the second filtering that we did as part of kind of the Boltz2 pipeline that was released is that we look at the confidence that the model has in the structure. Now, unfortunately, kind of going to your question of, you know, predicting affinity, unfortunately, confidence is not a very good predictor of affinity. And so that is one of the things that we've actually made a ton of progress on, you know, since we released Boltz2.
Brandon [00:52:03]: And kind of, we have some new results that we are going to kind of announce soon on, you know, the ability to get much better hit rates when, instead of, you know, trying to rely on confidence of the model, we are actually directly trying to predict the affinity of that interaction. Okay. Just backing up a minute. So your diffusion model actually predicts not only the protein sequence, but also the folding of it. Exactly.
Gabriel [00:52:32]: And actually, you can... One of the big different things that we did compared to other models in the space, and, you know, there were some papers that had already kind of done this before, but we really scaled it up, was, you know, basically somewhat merging kind of the structure prediction and the sequence prediction into almost the same task. And so the way that Boltz2 works is that basically the only thing that you're doing is predicting the structure. So the only sort of supervision is, we give you supervision on the structure, but because the structure is atomic and, you know, the different amino acids have a different atomic composition, basically from the way that you place the atoms, we also understand not only kind of the structure that you wanted, but also the identity of the amino acid that, you know, the model believed was there. And so we've basically, instead of, you know, having these two supervision signals, you know, one discrete, one continuous.
That somewhat, you know, don't interact well together. We sort of like build kind of like an encoding of, you know, sequences in structures that allows us to basically use exactly the same supervision signal that we were using for Boltz2, which is, you know, largely similar to what AlphaFold3 proposed, which is very scalable. And we can use that to design new proteins. Oh, interesting.
RJ [00:53:58]: Maybe a quick shout out to Hannes Stark on our team who like did all this work. Yeah.
Gabriel [00:54:04]: Yeah, that was a really cool idea. I mean, like looking at the paper, there's this encoding where you just add a bunch of, I guess, kind of atoms, which can be anything, and then they get sort of rearranged and then basically plopped on top of each other, and then that encodes what the amino acid is. And there's sort of like a unique way of doing this. That was like such a cool, fun idea.
RJ [00:54:29]: I think that idea had existed before. Yeah, there were a couple of papers.
Gabriel [00:54:33]: Yeah, I had proposed this, and Hannes really took it to the large scale.
Brandon [00:54:39]: In the paper, a lot of the paper for BoltzGen is dedicated to actually the validation of the model. In my opinion, all the people we basically talk to feel that this sort of like in the wet lab or whatever the appropriate, you know, sort of like in real world validation is the whole problem or not the whole problem, but a big giant part of the problem. So can you talk a little bit about the highlights from there? Because to me, the results are impressive, both from the perspective of the, you know, the model and also just the effort that went into the validation by a large team.
Gabriel [00:55:18]: First of all, I think I should start saying is that both when we were at MIT in Tommi Jaakkola and Regina Barzilay's lab, as well as at Boltz, you know, we're not a biolab and, you know, we are not a therapeutic company. And so to some extent, you know, we were first forced to, you know, look outside of, you know, our group, our team to do the experimental validation. One of the things that Hannes on the team really pioneered was the idea, OK, can we go not only to, you know, maybe a specific group and, you know, trying to find a specific system and, you know, maybe overfit a bit to that system and trying to validate. But how can we test this model across a very wide variety of different settings so that, you know, it's useful to anyone in the field? And, you know, protein design is, you know, such a kind of wide task with all sorts of different applications, from therapeutics to, you know, biosensors and many others. So can we get a validation that kind of goes across many different tasks? And so he basically put together, you know, I think it was something like, you know, 25 different, you know, academic and industry labs that committed to, you know, testing some of the designs from the model and some of this testing is still ongoing and, you know, giving results kind of back to us in exchange for, you know, hopefully getting some, you know, new great sequences for their task. And he was able to, you know, coordinate this, you know, very wide set of, you know, scientists and already in the paper, I think we
shared results from, I think, eight to 10 different labs kind of showing results from, you know, designing peptides to target, you know, ordered proteins, peptides targeting disordered proteins, there are results, you know, of designing proteins that bind to small molecules, there are results of, you know, designing nanobodies, and across a wide variety of different targets. And so that sort of, that gave the paper a lot of, you know, validation to the model, a lot of validation that was kind of wide.
Brandon [00:57:39]: And so those would be therapeutics for those animals or are they relevant to humans as well? They're relevant to humans as well.
Gabriel [00:57:45]: Obviously, you need to do some work into, quote unquote, humanizing them, making sure that, you know, they have the right characteristics so they're not toxic to humans and so on.
RJ [00:57:57]: There are some approved medicines on the market that are nanobodies. There's a general, general pattern, I think, in like in trying to design things that are smaller, you know, like it's easier to manufacture; at the same time, like that comes with like potentially other challenges, like maybe a little bit less selectivity than like if you have something that has like more hands, you know, but, yeah, there's this big desire to, you know, try to design mini proteins, nanobodies, small peptides, you know, that are just great drug modalities.
Brandon [00:58:27]: Okay. I think where we left off, we were talking about validation. Validation in the lab. And I was very excited about seeing like all the diverse validations that you've done. Can you go into some more detail about them? Yeah. Specific ones. Yeah.
RJ [00:58:43]: The nanobody one. I think we did. What was it? 15 targets. Is that correct? 14. 14 targets. Testing. So we typically the way this works is like we make a lot of designs. All right. On the order of like tens of thousands. And then we like rank them and we pick like the top. And in this case, n was 15, right, for each target, and then we like measure sort of like the success rates, both like how many targets we were able to get a binder for and then also like more generally, like out of all of the binders that we designed, how many actually proved to be good binders. Some of the other ones I think involved like, yeah, like we had a cool one where there was a small molecule and we designed a protein that binds to it. That has a lot of like interesting applications, you know, for example. Like Gabri mentioned, like biosensing and things like that, which is pretty cool. We had a disordered protein, I think you mentioned also. And yeah, I think some of those were some of the highlights. Yeah.
Gabriel [00:59:44]: The way we structured those validations was, on one end, validations across a whole set of problems that the biologists we were working with brought to us. For example, in some of the experiments we designed peptides targeting RACC, a target involved in metabolism, and we had a number of other applications designing peptides or other modalities against other therapeutically relevant targets. We also designed some proteins to bind small molecules. The rest of the testing was aimed at getting a broader sense of how the model performs, especially under generalization. One of the things we found with the field was that a lot of the validation, especially outside of validation on specific problems, was done on targets that have many known interactions in the training data. So it's always a bit hard to tell how much these models are just regurgitating or imitating what they've seen in training versus really being able to design new proteins. One of the experiments we did was to take nine targets from the PDB, filtering for cases with no known interaction in the PDB: the model has never seen this particular protein, or a similar protein, bound to another protein, so there is no way it can just tweak something from its training set and imitate that interaction. We took those nine proteins, worked with Adaptyv, a CRO, and tested 15 mini proteins and 15 nanobodies against each one of them. The very cool thing we saw was that on two thirds of those targets, those 15 designs gave us nanomolar binders. Nanomolar is, roughly speaking, just a measure of how strong the interaction is, and a nanomolar binder has approximately the binding strength you need for a therapeutic.
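For context on the "nanomolar binder" bar Gabriel mentions, the dissociation constant Kd translates into a binding free energy through the standard relation dG = RT ln(Kd). The sketch below just tabulates that conversion at a few illustrative affinities; the "therapeutic-grade" label is a common rule of thumb, not a figure from the paper.

```python
import math

# Back-of-the-envelope context for "nanomolar binder": the dissociation
# constant Kd maps to a binding free energy via dG = RT * ln(Kd / c0),
# with c0 = 1 M as the standard-state reference. Numbers below are
# illustrative anchor points only.

R = 1.987e-3   # kcal / (mol K)
T = 298.0      # K

def delta_g_kcal(kd_molar: float) -> float:
    return R * T * math.log(kd_molar)   # standard state c0 = 1 M

for label, kd in [("millimolar (weak, transient)", 1e-3),
                  ("micromolar (typical first hit)", 1e-6),
                  ("nanomolar (therapeutic-grade rule of thumb)", 1e-9)]:
    print(f"{label:45s} Kd = {kd:.0e} M  ->  dG = {delta_g_kcal(kd):6.1f} kcal/mol")
```

Each factor of 1,000 tightening in Kd is worth roughly 4 kcal/mol of binding free energy at room temperature, which is why a micromolar hit and a nanomolar lead are treated as qualitatively different results.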
So maybe switching directions a bit: Boltz Lab was just announced this week, or was it last week? This is, I guess, your first product, if you want to call it that. Can you talk about what Boltz Lab is and what you hope people take away from it?

RJ [01:02:44]: As we mentioned at the very beginning, the goal with the product has been to address what the models don't do on their own, and there are largely two categories there; actually, I'll split it into three. The first: it's one thing to predict a single interaction, a single structure. It's another to effectively search a design space to produce something of value. What we found building this product is that there are a lot of steps involved, and we need to accompany the user through them. One of those steps is the creation of the target itself: how do we make sure the model has a good enough understanding of the target so we can design against it? There are all sorts of tricks you can use to improve a particular structure prediction, so that's the first stage. Then there's the stage of designing and searching the space efficiently. For something like BoltzGen, you design many things and then rank them; for small molecules the process is a bit more complicated, because we also need to make sure the molecules are synthesizable. The way we do that is with a generative model that learns to use appropriate building blocks, so that it designs within a space we know is synthesizable. So there's really a whole pipeline of different models involved in designing a molecule. That's the first part: what we call agents. We have a protein design agent and a small molecule design agent, and they are really at the core of what powers the Boltz Lab platform.

Brandon [01:04:22]: So these agents, are they a language model wrapper, or are they just your models and you're calling them agents because they perform a function on your behalf?

RJ [01:04:33]: They're more of a recipe, if you wish. I think we use that term because of the complex pipelining and automation, all the plumbing, that goes into this. So that's the first part of the product. The second part is the infrastructure. We need to be able to do this at very large scale for any one group running a design campaign. Say you're designing a hundred thousand possible candidates to find the good one: that is a very large amount of compute. For small molecules it's on the order of a few seconds per design; for proteins it can be a bit longer. Ideally you want to do that in parallel, otherwise it's going to take you weeks. So we've put a lot of effort into having a GPU fleet that lets any one user run this kind of large parallel search.

Brandon [01:05:23]: So you're amortizing the cost over your users.

RJ [01:05:27]: Exactly. And to some degree, using 10,000 GPUs for a minute is the same cost as using one GPU for God knows how long, so you might as well parallelize if you can. A lot of work has gone into that, and into making it robust, so that we can have a lot of people on the platform doing this at the same time. The third part is the interface, and it comes in two shapes. One is an API, which is really suited for companies that want to integrate these pipelines, these agents, and we're already partnering with a few distributors that are going to integrate our API. The second is the user interface, and we've put a lot of thought into that too. This is the idea of broadening the audience I mentioned earlier; that's what the user interface is about. We've built a lot of interesting features into it, for example for collaboration: when you have multiple medicinal chemists going through the results and trying to pick which molecules to go test in the lab, it's powerful for each of them to provide their own ranking and then do consensus building. So there are features around launching these large jobs, but also around collaborating on analyzing the results. Boltz Lab is the combination of these three parts into one cohesive platform.
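The consensus-building feature RJ describes, where several medicinal chemists each submit a ranking and the team converges on a shared shortlist, can be approximated with a simple rank-aggregation scheme such as a Borda count. Nothing in the conversation says Boltz Lab uses this particular method, and the molecule IDs below are made up; it is only meant to show one way per-reviewer rankings can be merged.

```python
from collections import defaultdict

# Borda count: each reviewer's top pick earns the most points, their last pick
# the fewest, and molecules are sorted by total points across reviewers.
# Illustrative aggregation scheme only; reviewer names and molecule IDs are invented.

rankings = {
    "chemist_a": ["mol_17", "mol_03", "mol_42", "mol_08"],
    "chemist_b": ["mol_03", "mol_17", "mol_08", "mol_42"],
    "chemist_c": ["mol_03", "mol_42", "mol_17", "mol_08"],
}

def borda_consensus(rankings: dict[str, list[str]]) -> list[tuple[str, int]]:
    scores: dict[str, int] = defaultdict(int)
    for ranked in rankings.values():
        n = len(ranked)
        for position, mol in enumerate(ranked):
            scores[mol] += n - position          # top of the list earns the most points
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for mol, score in borda_consensus(rankings):
    print(f"{mol}: {score}")
# mol_03 comes out on top (11 points), followed by mol_17 (9)
```

One appeal of rank aggregation here is that only each reviewer's ordering matters, so there is no need to calibrate one chemist's raw scores against another's.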
Who is this accessible to? Everyone. You do need to request access today, since we're still ramping up usage, but anyone can request access. If you are an academic in particular, we provide a fair amount of free credit so you can play with the platform. If you are a startup or biotech, you can also reach out; we'll typically hop on a call to understand what you're trying to do and also provide a lot of free credit to get started. And with larger companies we can deploy the platform in a more secure environment, which is more of a customized deal we work out with those partners. That's the ethos of Boltz: this idea of serving everyone, not just going after the really large enterprises. That starts with the open source, but it's also a key design principle of the product itself.

Gabriel [01:07:48]: One thing I was thinking about with regards to infrastructure: in the LLM space, the cost of a token has gone down by, I think, a factor of a thousand or so over the last three years, right? Is it possible that you can exploit economies of scale in infrastructure and make it cheaper to run these things yourselves than for anyone to roll their own system?

RJ [01:08:08]: A hundred percent. I mean, we're already there. Running Boltz on our platform, especially for a large screen, is considerably cheaper than it would take anyone to stand up the open-source model and run it themselves. And on top of the infrastructure, one of the things we've been working on is accelerating the models: our small molecule screening pipeline is 10x faster on Boltz Lab than it is in the open source. That's also part of building a product that scales really well. We really wanted to get to a point where we could keep prices low enough that it's a no-brainer to use Boltz through our platform.

Gabriel [01:08:52]: How do you think about validation of your agentic systems? Because, as you were saying earlier, AlphaFold-style models are really good at, let's say, monomeric proteins where you have co-evolution data. But the whole point of this is to design something that doesn't have co-evolution data, something that is really novel. So you're leaving the domain you know you're good at. How do you validate that?

RJ [01:09:22]: There are obviously a ton of computational metrics that we rely on, but those only take you so far. You really have to go to the lab and test: with method A versus method B, how much better are we? How much better is my hit rate? How much stronger are my binders? It's not just about hit rate; it's also about how good the binders are. There's really no way around that. We've really ramped up the amount of experimental validation we do so that we track progress as scientifically soundly as possible.
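RJ's "method A versus method B, how much better is my hit rate?" question is ultimately a statistical one. As a hedged sketch, a Fisher exact test on the wet-lab hit counts is one standard way to ask whether an observed gap in hit rates could be noise; the counts below are invented, and nothing here implies Boltz uses this exact test.

```python
from scipy.stats import fisher_exact

# Comparing two design methods by their wet-lab hit rates. The counts are
# hypothetical: e.g. 14 targets x 15 designs tested per method.

tested_per_method = 210
hits_a, hits_b = 18, 41          # hypothetical wet-lab binders per method

table = [[hits_a, tested_per_method - hits_a],
         [hits_b, tested_per_method - hits_b]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")

print(f"method A hit rate: {hits_a / tested_per_method:.1%}")
print(f"method B hit rate: {hits_b / tested_per_method:.1%}")
print(f"Fisher exact p-value: {p_value:.4f}")   # small p -> the gap is unlikely to be chance
```

Hit rate is only half the picture, as RJ notes; a full comparison would also look at the distribution of binder strengths (Kd values) among the hits, not just their count.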
Gabriel [01:10:00]: Yeah, I think one thing that is unique about us, and maybe about companies like us, is that because we're not working on just a couple of therapeutic pipelines where our validation would be focused, when we do an experimental validation we try to test across tens of targets. On the one hand that gives us a much more statistically significant result, and it really allows us to make progress on the methodological side without being steered by overfitting to any one particular system. And of course we choose, you know, w
In this detailed conversation, Alex Harvey discusses his recent impressive half marathon in Japan, the skepticism around his progression, and his transparent use of Strava to share his full training. Alex breaks down his early marathon times and steady improvement through consistent training. He covers marathon challenges (fueling and getting intensity right), how business and family life fits around training, and his aspirations heading into Tokyo Marathon. He also shares why racing without a strict time goal can be valuable, plus how context-specific training has helped him progress. We also get into his preference for training alone, keeping training efficient, and his approach to diet - along with why he largely avoids strength training and cross training. Follow Alex Instagram: https://www.instagram.com/alexxharvey/ Strava: https://www.strava.com/athletes/46089368/ Work With / Follow Matt Coaching: https://www.sweatelitecoaching.com/coaching-2026 Shareholders Club / Private Feed: https://www.sweatelite.co/shareholders Instagram: https://www.instagram.com/mattinglisfox/ Strava: https://www.strava.com/athletes/6248359 Contact: matt@sweatelite.co Topics 00:00 Introduction and Recent Achievements 00:10 Addressing Skepticism and Progression 02:42 Early Running Experiences 03:51 Transition to Serious Training 05:05 High School and Early Twenties 07:35 Inspiration to Start Running 09:38 Recent Race Highlights 12:59 Training Philosophy and Volume 17:48 Training Alone and Flexibility 19:42 Speed Work and Coaching 24:28 Long Runs and Marathon Preparation 25:03 Training in the Heat: Adapting to Queensland's Climate 26:02 Key Training Sessions: Building Endurance and Speed 27:09 Mental Strategies for Pacing and Performance 30:21 Fueling Challenges and Solutions 33:34 Balancing Life: Business, Family, and Running 38:28 Speed and Distance: Exploring Potential and Preferences 41:59 Diet and Weight Management for Optimal Performance 44:45 Cross Training and Strength Training Insights 47:16 Final Thoughts and Where to Follow
This talk was given by Gil Fronsdal on 2026.02.11 at the Insight Meditation Center in Redwood City, CA. ******* A machine generated transcript of this talk is available. It has not been edited by a human, so errors will exist. Download Transcript: https://www.audiodharma.org/transcripts/24436/download ******* For more talks like this, visit AudioDharma.org ******* If you have enjoyed this talk, please consider supporting AudioDharma with a donation at https://www.audiodharma.org/donate/. ******* This talk is licensed by a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License
Rob is in Florida this week with Dallas kicking off his 2026 racing season at Auburndale Speedway in Winter Haven. It's a five-day national Legends tournament — practice, qualifying, and racing every single day — with Dallas running the Semi-Pro class in the white #13. The goal this season is simple: run the biggest Legends events possible, graduate up and out of the class, and take the next step forward in his racing career. While that action unfolds in real time, we're setting the stage for Daytona week the best way we know how. To celebrate the Daytona 500, we're revisiting our full review of Days of Thunder — still the greatest racing movie ever made. Yes, it's “Top Gun in a stock car,” and that's exactly why it works. We break down the racing roots of Cole Trickle, the Harry Hogge mentorship arc, the real NASCAR influences behind the film, and why quotes like “Rubbin', son, is racin'” still live rent-free in every race fan's brain. Released in 1990, directed by Tony Scott, and starring Tom Cruise, Robert Duvall, Nicole Kidman, and Michael Rooker, the film earned nearly $158 million worldwide and remains a cornerstone of modern racing pop culture. If it's Daytona week, it's time for Days of Thunder. The post K&F Show #356: Dallas Race Week, Florida Speed, and Daytona Fever // NASCAR Movie Review + Days of Thunder first appeared on The Muscle Car Place.
For the past 20 years, subscription streaming has produced an outcome that still gets overlooked. The category winners weren't the big tech giants or the major studios. In music, Spotify became the default. In premium video, Netflix did the same. In this episode, we break down how pure-play focus, faster decision-making, and a single retention-driven scoreboard create compounding advantages that big tech's money and bundling can't easily copy. CHAPTERS 01:03 Why Spotify and Netflix Succeeded 06:13 Pure-Play Edge 10:36 Speed & Ownership 14:29 Survivorship Reality SPONSORS Chartmetric: Listen in for our Stat of the Week Symphonic: Distribute your music to one of the largest networks in the industry. Symphonic delivers your music to over 200 digital service providers ensuring that you're monetizing every stream and use of your music on Spotify, TikTok, YouTube, and more TRAPITAL Where technology shapes culture. New episodes and memos every week. Sign up here for free.
Want to work directly with me to close more deals? Go Here: https://www.titaniumu.com
Want the Closer's Formula sales process I've used to close 2,000+ deals (FREE)? Go Here: https://www.kingclosersformula.com/close
If you're new to my channel, my name is RJ Bates III. Myself and my partner Cassi DeHaas are the founders of Titanium Investments. We are nationwide virtual wholesalers, and on this channel we share EVERYTHING that we do inside our business. So if you're looking to close more deals - at higher assignments - anywhere in the country… You're in the right place.
Who is Titanium Investments and What Have We Accomplished?
Over 10 years in the real estate investing business
Closed deals in all 50 states
Owned rentals in 12 states
Flipped houses in 11 states
Closed on over 2,000 properties
125 contracts in 50 days (all live on YouTube)
Back to back Closers Olympics Champion
Trained thousands of wholesalers to close more deals
_________________________________
With over 2,000 Videos, this is the #1 channel on YouTube for all things Virtual Wholesaling. SUBSCRIBE NOW! https://www.youtube.com/@RJBatesIII
_________________________________
RESOURCES FOR YOU:
If you want my team and I to walk you through how to build or scale your virtual wholesaling business from A to Z, click here to learn more about Titanium University: https://www.titaniumu.com
(FREE) If you want to learn how to close deals just like me, The King Closer, then download the free King Closer Formula PDF: https://www.kingclosersformula.com/close
(FREE) Click here to grab our Titanium fleet free PDF & training: Our battle tested strategies and tools that we actually use… and are proven to work: https://www.kingclosersformula.com/fleet
Grab the King Closer Blueprint: My Step by Step Sales Process for closing over 2,000 deals (Only $37): https://www.kingclosersformula.com/kcblueprint
Grab Titanium Profits: Our exact system we use to comp and underwrite deals in only 4 minutes. (Only $99) https://www.kingclosersformula.com/titaniumprofits
Support the show
The guys catch up after a weekend of football and ads, then they get into some interesting questions about IndyCar: what's more important to a successful IndyCar career, timing or talent? Plus, what rule would they change if they were in charge?
+++
Off Track is part of the SiriusXM Sports Podcast Network. If you enjoyed this episode and want to hear more, please give a 5-star rating and leave a review. Subscribe today wherever you stream your podcasts.
Want some Off Track swag? Check out our store!
Check out our website, www.askofftrack.com
Subscribe to our YouTube Channel.
Want some advice? Send your questions in for Ask Alex to AskOffTrack@gmail.com
Follow us on Twitter at @askofftrack. Or individually at @Hinchtown, @AlexanderRossi, and @TheTimDurham.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The Dadley Boyz preview tonight's NXT and discuss...
What next for NXT Champion Joe Hendry?
Jaida Parker wants REVENGE!
Swipe Right vs. Hank & Tank!
The Speed tournament continues!
Can ZaRuca co-exist?!
ENJOY!
Follow us on Twitter:
@AdamWilbourn
@MSidgwick
@MichaelHamflett
@WhatCultureWWE
For more awesome content, check out: whatculture.com/wwe
Hosted on Acast. See acast.com/privacy for more information.