The Rio Grande is running dry in the metro once again, for the second time in 40 years. With New Mexico already in a 'water debt' under the Rio Grande Compact, what does this mean for farmers, crops, and the local drinking water supply? Chris and Gabby wade through the complexities with Middle Rio Grande Conservancy District's Chief Engineer & CEO, Jason Casuga. He explains how New Mexico is managing its water, and restrictions around water storage. How does the Rio Grande Compact play a role in the state's supply? And what does this mean for future infrastructure and livelihoods? Thanks for listening. If you've got an idea, send it to us at chris.mckee@krqe.com or gabrielle.burkhart@krqe.com. Give us a follow on social media at @ChrisMcKeeTV and @gburkNM. Watch or listen to our prior podcasts online at KRQE.com/podcast and our KRQE YouTube channel, or on broadcast TV every Wednesday at 10:35 p.m. MST on Fox New Mexico.
On today's episode, Larry and Heidi are joined by Larry's long-time friend, NASA scientist Dwayne Orr. Dwayne is Chief Engineer for the Columbia Scientific Balloon Facility in Arlington, TX, and has spent his career studying evidence for God. Dwayne discusses the ways science and the Bible actually complement one another and how scientific discoveries point to a Creator. In this fascinating conversation, Dwayne answers questions about evolution, dinosaurs, and how we can reconcile not having to know everything this side of heaven.
Links for the show notes:
https://lifeword.org/media/?show=Creation+Science
Institute for Creation Research: https://www.icr.org/
They also have a Discovery Center in Dallas: https://discoverycenter.icr.org/
You can email us at info@healthychurchpodcast.com or, to find more information about The Healthy Church Podcast, go to http://www.healthychurchpodcast.com or find us on Facebook!
Popularly known as Gridlock Sam, thanks to his coining of the term, Sam founded Sam Schwartz Consulting in 1995. Previously, he was New York City's Traffic Commissioner and Chief Engineer of the Department of Transportation. Sam started his transportation career in the late 1960s as a New York cabbie and joined the Traffic Department as a junior engineer in 1971.
Summary
In this episode, Andy welcomes Mark Reich, a former Toyota leader and current Chief Engineer for Strategy at the Lean Enterprise Institute. Mark is the author of Managing on Purpose. If you've ever tried to improve your team but felt like your strategy was stuck in a slide deck, this conversation is for you. Mark introduces the idea of hoshin kanri, a lesser-known but critical pillar of Toyota's management system, and explains how lean thinking is more than just tools; it's a way of developing people and aligning purpose across an organization. You'll hear why metrics alone won't get you to strategic clarity, how to escape the trap of firefighting, and why engagement, not just direction, is the key to long-term improvement. He also shares how lean thinking can be applied at home, even with your kids! If you're looking for insights on how to align teams, build capability, and lead with greater purpose, this episode is for you!

Sound Bites
“Don't focus on the tool. The tools have to serve a purpose.”
“Catchball is not just a handoff of plans. It's a conversation about what matters and how we'll learn together.”
“Direction without development is just pressure.”
“They're not called punishment calls. They're called co-learning calls.”
“If strategy feels like something being done to people, you've already lost.”
“You don't learn PDCA by attending a training. You learn it by doing it, with guidance, reflection, and coaching.”
“It's not just about solving the problem. It's about who solves it and how they do it.”
“We had to change how we talked about strategy before we could change how we worked on strategy.”

Chapters
00:00 Introduction
01:49 Start of Interview
02:01 What early experiences shaped your views on leadership, strategy, or lean?
05:28 How do you explain TPS and hoshin kanri as two pillars of Toyota's system?
10:36 What are common mistakes leaders make when trying to improve the business?
15:23 Where do you coach people to start when they want better alignment?
17:40 What myths or misunderstandings do people have about lean?
18:12 Case study example: Turner Construction
25:45 What lean tools or concepts should project managers explore more deeply?
29:24 Where do you recommend someone begin learning about lean?
34:47 How has lean thinking helped at home—and with raising kids?
36:09 End of Interview
36:36 Andy Comments After the Interview
40:53 Outtakes

Learn More
You can learn more about Mark Reich and his work at the Lean Enterprise Institute at Lean.org. For more learning on this topic, check out:
Episode 438 with Jeff Gothelf, about his book on OKRs, which are different from hoshin kanri, but the overall discussion is worth checking out.
Episode 387 with Atif Rafiq, whose book takes a strategic approach to dealing with uncertainty.
Episode 320 with Greg Githens, which is more about how to think strategically.

Pass the PMP Exam This Year
If you or someone you know is thinking about getting PMP certified, we've put together a helpful guide called The 5 Best Resources to Help You Pass the PMP Exam on Your First Try. We've helped thousands of people earn their certification, and we'd love to help you too. It's totally free, and it's a great way to get a head start. Just go to 5BestResources.PeopleAndProjectsPodcast.com to grab your copy. I'd love to help you get your PMP this year!

Join Us for LEAD52
I know you want to be a more confident leader–that's why you listen to this podcast. LEAD52 is a global community of people like you who are committed to transforming their ability to lead and deliver. It's 52 weeks of leadership learning, delivered right to your inbox, taking less than 5 minutes a week. And it's all for free. Learn more and sign up at GetLEAD52.com.

Thanks!
Thank you for joining me for this episode of The People and Projects Podcast!
Talent Triangle: Business Acumen
Topics: Lean, Toyota Production System, Hoshin Kanri, Strategy, Organizational Alignment, Leadership Development, Continuous Improvement, Team Engagement, Project Management, PDCA, Capability Building, Coaching

The following music was used for this episode:
Music: Underground Shadows by MusicLFiles
License (CC BY 4.0): https://filmmusic.io/standard-license
Music: Synthiemania by Frank Schroeter
License (CC BY 4.0): https://filmmusic.io/standard-license
Oh my gosh, this is a FANTASTIC episode! This show introduces you to Tom Wallace, the 4th Chief Engineer of Corvette. Your CORVETTE TODAY host, Steve Garrett, reminisces with Tom about his time with GM and Corvette. Tom was responsible for the C6 ZR1, some of the Indy Pace Cars and more! Tom tells you some terrific "behind the scenes" stories, talks about the transition to Michelin tires for Corvette, how the Corvette team started working so closely with the Corvette Racing team and more. Tom Wallace is a member of the Corvette Hall of Fame, so don't miss this outstanding episode of CORVETTE TODAY.
Dlubal Podcast: Digital and Innovative Topics in Structural Engineering
Dlubal Software supported a very special research project at RWTH Aachen. Together with our guest Lukas Rauber, Chief Engineer at RWTH Aachen, we discuss the HELEPOLIS project: innovative timber construction for multi-storey buildings. Let us know in the comments: did you already know about this project? And feel free to follow us so you don't miss any future episodes. Enjoy listening!
Jonny Dyer is the Co-Founder and CEO of Muon Space, an end-to-end space systems provider that designs, builds, and operates LEO satellite constellations that deliver mission-critical data. Prior to Muon, Jonny held technical leadership roles at Google Maps and Lyft, and was the Chief Engineer at Skybox Imaging. On this episode of the Defense Tech Underground, Jonny discusses his background at Skybox Imaging and his experience building Muon Space. He describes Muon's approach to building satellite constellations end to end, and highlights the role of the FireSat constellation to provide operational guidance on the mitigation of wildfires. Jonny shares lessons learned from leading Muon, and explains how he has thought through pivotal moments in Muon's journey, such as their recent acquisition of Starlight Engines. This episode is hosted by Helen Phillips and Bryan Harvey. Full Bio: Jonny Dyer is Co-Founder & CEO of Muon Space, with a remarkable career spanning aerospace, geospatial technology, and complex systems engineering across industry-shaping companies. As Chief Engineer at Skybox Imaging, he led development of the largest high-resolution satellite constellation ever deployed before the company's $500M acquisition by Google in 2014. At Google, he headed Maps' data collection team developing Street View cars and aircraft platforms, then served as Senior Director at Lyft's Level 5 Autonomous Vehicle group leading vehicle platform design and deployment. Currently an Operating Partner at Space Capital and advisor to Ubiquity.vc, Jonny also serves as a founding member of EDF's MethaneSat Technical Advisory Group and has served on multiple National Academies studies and committees. He holds MS and BS degrees in Mechanical Engineering from Stanford University and has authored widely-cited research spanning thermodynamics, radiation effects, and imaging systems.
This week on DisrupTV, we interviewed: - Christopher Lochhead, thirteen-time #1 bestselling Co-author, #1 charting business dialogue podcaster, co-creator of Category Pirates and best known as a "godfather" of Category Design - Sunil Karkera, Founder & Chief Engineer at Soul of the Machine Christopher Lochhead and Sunil Karkera explored how AI is reshaping the future of business. Lochhead warned of the “existing market trap”—a scenario where up to $13 trillion could be lost as AI vendors focus on optimizing legacy markets instead of inventing new ones. He urged companies to think like AI-native startups and embrace a bold “stop, change, start” strategy to drive meaningful transformation. Karkera introduced Soul of the Machine, a platform designed to compress the ideation-to-prototype process from months to hours. He emphasized that true innovation lies in blending AI with human creativity and design, paving the way for a new era of value creation. DisrupTV is a weekly podcast with hosts R "Ray" Wang and Vala Afshar. The show airs live at 11 AM PT/ 2 PM ET every Friday. Brought to you by Constellation Executive Network: constellationr.com/CEN.
Why This Episode Is a Must-Listen
Are classic cars just vintage machines, or rolling works of art? And are they a smart investment for the future? In "Classic Car Restorations: Reviving Automotive Legends," the Inspired Money panel explores how restoring classic cars is about passion, heritage, and innovation. Whether you're an investor, enthusiast, or simply curious about the intersection of culture and craftsmanship, this episode delivers powerful insights that will change the way you see collectible cars.

Meet the Expert Panelists
Matt DeLorenzo is a veteran automotive journalist and author with over 40 years of experience, having served as editor-in-chief at Road & Track and Autoweek, and as a senior editor at Automotive News and Kelley Blue Book. A founding juror of the North American Car of the Year awards, he has authored multiple books on iconic American vehicles, from the Corvette to the Dodge brand legacy. His next book, "Mercedes-AMG: Race-bred Performance," will go on sale this November. https://tightwadgarage.com

Rupert Banner is a Senior Specialist at Gooding Christie's, bringing over 30 years of experience in the collector car auction industry, including senior roles at Bonhams, Christie's, and Sotheby's. Renowned for his global expertise, record-setting sales, and deep ties to the automotive community, he is a trusted authority on valuation, provenance, and the evolving international classic car market. https://www.goodingco.com

Tamara Warren is the founder and CEO of Le Car, a Techstars-backed mobility platform and award-winning automotive journalist whose work has appeared in The New York Times, Car and Driver, Rolling Stone, and over 160 publications. A former transportation editor at The Verge and juror for World Car of the Year, she blends deep industry insight with cultural storytelling, sustainability advocacy, and a passion for automotive design and innovation.
https://www.tamarawarren.com

William Heynes is the founder of William Heynes Ltd, a world-renowned Jaguar preservation and restoration firm dedicated to early E-Types and rooted in the legacy of his grandfather, Bill Heynes, Jaguar's legendary Chief Engineer behind the iconic XK engine and Le Mans victories. Combining deep mechanical knowledge, period-correct craftsmanship, and a passion for originality, William leads a team committed to restoring Jaguar's most historic cars to concours-level authenticity. https://williamheynes.com

Key Highlights:

Restorations: Craft, Preservation, or Investment?
William Heynes shared how true restoration is “about building the car for what it deserves,” not just tailoring it to the owner's preferences. At his shop, the ultimate goal is authenticity. Sometimes a restoration can involve thousands of hours and years to achieve original perfection.

Market Trends and Smart Collecting
Rupert Banner demystified current auction markets, noting a shift toward modern collectibles and younger buyers. He stresses the value of detailed provenance and emphasizes: “There is so much more value in just owning something and enjoying it... the car may be a conduit to new opportunities, relationships, and experiences.”

Cars as a Reflection of Identity and Community
Tamara Warren explored how classic cars are powerful symbols of personal legacy and social connection. "To be a collector is actually to be a steward of a legacy," she noted, highlighting the responsibility and joy of carrying forward the story of each car, brand, and community.

The Future: Innovation Meets Heritage
Matt DeLorenzo and the panel tackled the balance between preservation and modernization: think EV conversions, restomods, and the ethics of “continuation” models. As Matt put it: “Even if there's the rise of the self-driving car, people look at automobiles as a form of recreation...
it's a lifestyle, it's a community that's not going to go anywhere.”

Call-to-Action
Here's your one assignment for the week: Find a car that has personal meaning to you, whether it's a car you grew up with, saw in a movie, or admired from afar, and dig into its story. Look it up, read about it, maybe even check the market. You might discover a connection that turns into a future project, or at the very least, a deeper appreciation for what makes these vehicles legendary.

Find the Inspired Money channel on YouTube or listen to Inspired Money in your favorite podcast player.
Andy Wang, Host/Producer of Inspired Money
How does a governor's emergency message reach every corner of North Carolina—even the most remote areas? In this episode of the Carolina Weather Group, host James Brierton sits down with David Crabtree, CEO of PBS North Carolina, and Chris Pandich, the network's Chief Engineer. Together, they pull back the curtain on how PBS NC powers statewide emergency communication, public safety alerts, and essential educational programming across the state.
Jerry Bruckheimer talks F1: The Movie. Plus, GM Chief Engineer Tony Roma hints at the future of a Corvette SUV - and why the Cadillac CTS-V Blackwing may be Matt's most favorite car, ever. See omnystudio.com/listener for privacy information.
Dr. Sunita Satyapal, Ph.D. is an internationally known subject matter expert and leader in the field of hydrogen and fuel cells with three decades of experience in government, industry, and academia. Dr. Satyapal has had diverse roles including as a researcher, a visiting assistant professor, as Chief Engineer and Director for the U.S. Department of Energy's Hydrogen and Fuel Cell Technologies Office ( https://www.energy.gov/eere/person/sunita-satyapal ), and as a manager in industry at United Technologies, overseeing research and development (R&D) as well as business development and stakeholder collaboration. She has been responsible for R&D in industry as well as developing national hydrogen strategies, coordinating over $10 billion in hydrogen funding over the years, and collaborating across more than 25 countries through global hydrogen partnerships.

Dr. Satyapal received her Ph.D. from Columbia University and completed postdoctoral work in Applied and Engineering Physics at Cornell University. She has also served in business development, management, and deployment related roles in industry. She has numerous publications, including in Scientific American, 10 patents, and various recognitions including a Meritorious and a Distinguished Presidential Rank Award.

#Hydrogen #FuelCell #SunitaSatyapal #DepartmentOfEnergy #PhysicalChemistry #Photodissociation #Electrolysis #EnergySecurity #GreenhouseGasEmissions #RenewableEnergy #ProgressPotentialAndPossibilities #IraPastor #Podcast #Podcaster #ViralPodcast #STEM #Innovation #Technology #Science #Research
As electricity systems decarbonise, the role of national planners and system operators is going through a major shift. Strategic planning is no longer just about keeping the lights on. It now involves reshaping the grid to support high levels of renewable generation, growing electrification, and new sources of flexibility. The challenge lies in aligning long-term goals with real-world system operations.

In this episode of Transmission, Ed sits down with Julian Leslie, Director of Strategic Energy Planning and Chief Engineer at the National Energy System Operator (NESO). The conversation explores how long-term energy planning is changing in Great Britain and the need for low-carbon dispatchable power, the role of storage and interconnectors, and the uncertainty around technologies like hydrogen and carbon capture. We also unpack how NESO is managing competing pressures while working to design a future-proof system. If you want to understand how the UK grid is being reimagined for net zero, this one is worth a listen.

Key topics include:
- How Great Britain's system operator is evolving from National Grid ESO to NESO
- Why strategic planning must now consider the whole energy system, not just electricity
- The role of low-carbon dispatchable generation in a net zero future
- How hydrogen, storage, and CCS are being factored into long-term plans
- Why planning under uncertainty is so difficult, and how NESO is responding

About our guest
Julian Leslie is Director of Strategic Energy Planning and Chief Engineer at NESO, the new National Energy System Operator for Great Britain. With more than 30 years of experience at National Grid and the ESO, Julian has helped shape both real-time system operations and long-term planning. He now leads the team responsible for designing the future of Britain's electricity system, ensuring it can meet climate targets while staying reliable and cost effective.

For more information on NESO's work, head to their website.

About Modo Energy
Modo Energy helps the owners, operators, builders, and financiers of battery energy storage solutions understand the market - and make the most out of their assets. All of our podcasts are available to watch or listen to on the Modo Energy site. To keep up with all of our latest updates, research, analysis, videos, podcasts, data visualizations, live events, and more, follow us on LinkedIn or Twitter. Check out The Energy Academy, our bite-sized video series breaking down how power markets work.
Chain of Learning: Empowering Continuous Improvement Change Leaders
Enter to win a copy of "Managing on Purpose" by Mark Reich, Chief Engineer of Strategy at the Lean Enterprise Institute (LEI). Giveaway open through June 20th.

How effective is your organization's strategy in achieving results? If your team doesn't understand how their daily work connects to bigger organizational goals, you don't have a strategy—you have a gap. A gap in engagement. A gap in alignment. This gap leads to confusion, misaligned priorities, and wasted effort.

I'm joined by Mark Reich, author of “Managing on Purpose”, to explore how hoshin kanri – often translated as strategy or policy deployment – can bridge this gap and transform your strategy development and deployment process. With 23+ years at Toyota and extensive experience guiding organizations through lean transformations, Mark reveals how hoshin kanri offers a different approach to strategy execution and management. It connects people to purpose, builds capability, and aligns cross-functional areas, turning vision into results. Turn your strategy into action by aligning and building a purpose-driven organization.

YOU'LL LEARN:
- Differences between hoshin kanri and traditional strategy management
- Common misconceptions around strategy deployment and what sets hoshin kanri apart
- The role of catchball in connecting top-down and bottom-up processes
- The importance of building reflection (hansei) and PDCA (Plan-Do-Check-Act) into the process
- Real-world examples of organizations successfully transitioning to hoshin kanri strategy development and deployment

ABOUT MY GUEST:
Mark Reich is the author of “Managing on Purpose.” He spent 23 years at Toyota, including six years in Japan, seven years at the Toyota Supplier Support Center (TSSC), and over a decade leading Toyota's North American hoshin kanri process. Today, he's the Senior Coach and Chief Engineer of Strategy at the Lean Enterprise Institute (LEI), where he guides organizations and their executives on lean transformation.

IMPORTANT LINKS:
Full episode show notes: ChainOfLearning.com/45
Connect with Mark Reich: linkedin.com/in/markareich
Mark Reich's book, “Managing on Purpose”: lean.org/store/book/managing-on-purpose
Resources and ways to work with me: KBJAnderson.com
Follow me on LinkedIn: linkedin.com/in/kbjanderson
Download my FREE KATALYST™ Change Leader Self-Assessment: KBJAnderson.com/katalyst
Learn more about the history and application of hoshin kanri: Learning to Lead, Leading to Learn

TIMESTAMPS FOR THIS EPISODE:
2:05 Hoshin kanri vs. traditional management approaches to strategy
2:52 Mark defines hoshin kanri
3:49 What people get wrong around strategy deployment
4:26 Two key differences that set hoshin kanri apart from traditional strategy
5:16 The problem Mark aimed to solve in “Managing On Purpose”
10:07 Why knowing your true north vision matters
11:34 The complexity of the x-matrix in implementing strategy
15:31 Why catchball is essential to hoshin kanri
20:32 Leading effective catchball conversations
23:07 Vertical vs. horizontal catchball
24:31 Collaborative input in the A3 process
26:17 How leaders can retain perspective for effective catchball conversations
28:30 The PDCA cycle's critical role in the hoshin kanri framework
31:06 Importance of flexibility in leadership
32:19 Distinguishing daily tasks vs. long-term tasks for success
34:31 Embedding reflection time in the hoshin process to make PDCA work
37:31 Long-term learning in implementing effective systems
39:48 Using hansei for reflection and prioritization

Apply today for my next Japan Leadership Experience — learn more and discover the power of this immersive learning experience.
In this episode, Ana Benz, Chief Engineer at IRISNDT, joins co-hosts Heather Allain and Marc Cook for a deep dive into selecting non-destructive examination (NDE) techniques based on expected damage mechanisms. Ana shares insights on recent advancements in radiography, field-based hardness testing, and imaging of hardness indents to assess effectiveness. The conversation also explores proper surface cleaning for inspections, the use of chemical composition analysis to detect degradation from galvanic corrosion, and the role of Integrity Operating Windows (IOWs). Additional topics include remote monitoring applications and innovations in remote inspection technology. Corrosion Chronicles is produced by Association Briefings.
A conversation with a Medtronic engineer who's been with the company since the beginning of the Artificial Pancreas project. Lou Lintereur is now Chief Engineer for AID systems at Medtronic. We talk about the recently approved Simplera Sync Sensor, changes coming to Medtronic pumps, and he answers your questions about AI use, patch pumps, and the idea of a pump that needs zero user interaction. Note: this episode was recorded before Medtronic's announcement that they will spin off the Diabetes division. This podcast is not intended as medical advice. If you have those kinds of questions, please contact your health care provider. Join us at an upcoming Moms' Night Out event!
Please visit our Sponsors & Partners - they help make the show possible!
Learn more about Gvoke Glucagon Gvoke HypoPen® (glucagon injection): Glucagon Injection For Very Low Blood Sugar (gvokeglucagon.com)
Omnipod - Simplify Life
Learn about Dexcom
Check out VIVI Cap to protect your insulin from extreme temperatures
The best way to keep up with Stacey and the show is by signing up for our weekly newsletter: Sign up for our newsletter here
Here's where to find us: Facebook (Group) Facebook (Page) Instagram
Check out Stacey's books!
Learn more about everything at our home page www.diabetes-connections.com
Reach out with questions or comments: info@diabetes-connections.
"Blurring Reality" - Chai's Social AI Platform - sponsoredThis episode of MLST explores the groundbreaking work of Chai, a social AI platform that quietly built one of the world's largest AI companion ecosystems before ChatGPT's mainstream adoption. With over 10 million active users and just 13 engineers serving 2 trillion tokens per day, Chai discovered the massive appetite for AI companionship through serendipity while searching for product-market fit.CHAI sponsored this show *because they want to hire amazing engineers* -- CAREER OPPORTUNITIES AT CHAIChai is actively hiring in Palo Alto with competitive compensation ($300K-$800K+ equity) for roles including AI Infrastructure Engineers, Software Engineers, Applied AI Researchers, and more. Fast-track qualification available for candidates with significant product launches, open source contributions, or entrepreneurial success.https://www.chai-research.com/jobs/The conversation with founder William Beauchamp and engineers Tom Lu and Nischay Dhankhar covers Chai's innovative technical approaches including reinforcement learning from human feedback (RLHF), model blending techniques that combine smaller models to outperform larger ones, and their unique infrastructure challenges running exaflop-class compute.SPONSOR MESSAGES:***Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers in Zurich and SF. 
Go to https://tufalabs.ai/
***

Key themes explored include:
- The ethics of AI engagement optimization and attention hacking
- Content moderation at scale with a lean engineering team
- The shift from AI as utility tool to AI as social companion
- How users form deep emotional bonds with artificial intelligence
- The broader implications of AI becoming a social medium

We also examine OpenAI's recent pivot toward companion AI with April's new GPT-4o, suggesting a fundamental shift in how we interact with artificial intelligence - from utility-focused tools to companion-like experiences that blur the lines between human and artificial intimacy.

The episode also covers Chai's unconventional approach to hiring only top-tier engineers, their bootstrap funding strategy focused on user revenue over VC funding, and their rapid experimentation culture where one in five experiments succeed.

TOC:
00:00:00 - Introduction: Steve Jobs' AI Vision & Chai's Scale
00:04:02 - Chapter 1: Simulators - The Birth of Social AI
00:13:34 - Chapter 2: Engineering at Chai - RLHF & Model Blending
00:21:49 - Chapter 3: Social Impact of GenAI - Ethics & Safety
00:33:55 - Chapter 4: The Lean Machine - 13 Engineers, Millions of Users
00:42:38 - Chapter 5: GPT-4o Becoming a Companion - OpenAI's Pivot
00:50:10 - Chapter 6: What Comes Next - The Future of AI Intimacy

TRANSCRIPT: https://www.dropbox.com/scl/fi/yz2ewkzmwz9rbbturfbap/CHAI.pdf?rlkey=uuyk2nfhjzezucwdgntg5ubqb&dl=0
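The "model blending" idea mentioned in the episode can be illustrated with a toy sketch. This is a hypothetical illustration, not Chai's actual code: the model names, weights, and `BlendedChatbot` class are all invented. The core idea is simply that each chat turn is routed to one of several small models at random, so the blend's responses are drawn from a mixture of the members' distributions.

```python
import random

# Toy sketch of response-level model blending (hypothetical; not Chai's system).
# Each "model" here is just a function from prompt to reply; in a real system
# these would be calls to different fine-tuned LLMs.

def model_a(prompt):
    return f"[empathetic] {prompt}"

def model_b(prompt):
    return f"[witty] {prompt}"

def model_c(prompt):
    return f"[curious] {prompt}"

class BlendedChatbot:
    """Routes each turn to a randomly chosen member model.

    The blend's response distribution is a weighted mixture of the
    members' distributions, which is the core idea behind combining
    several small models instead of serving one large one."""

    def __init__(self, models, weights=None):
        self.models = models
        self.weights = weights or [1] * len(models)

    def reply(self, prompt):
        # Pick one member model per turn, proportional to its weight.
        model = random.choices(self.models, weights=self.weights, k=1)[0]
        return model(prompt)

bot = BlendedChatbot([model_a, model_b, model_c])
print(bot.reply("hello"))
```

One design point worth noting: because the choice is made per turn, a single conversation samples from all member models, which is what lets a blend of small models cover more conversational styles than any one of them alone.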
Today Google DeepMind released AlphaEvolve: a Gemini coding agent for algorithm discovery. It broke the record for matrix multiplication set by the famous Strassen algorithm 56 years ago. Google has been killing it recently. We had early access to the paper and interviewed the researchers behind the work.

AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms
https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/

Authors: Alexander Novikov*, Ngân Vũ*, Marvin Eisenberger*, Emilien Dupont*, Po-Sen Huang*, Adam Zsolt Wagner*, Sergey Shirobokov*, Borislav Kozlovskii*, Francisco J. R. Ruiz, Abbas Mehrabian, M. Pawan Kumar, Abigail See, Swarat Chaudhuri, George Holland, Alex Davies, Sebastian Nowozin, Pushmeet Kohli, Matej Balog*
(* indicates equal contribution or special designation, if defined elsewhere)

SPONSOR MESSAGES:
***
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***

AlphaEvolve works like a very smart, tireless programmer. It uses powerful AI language models (like Gemini) to generate ideas for computer code. Then, it uses an "evolutionary" process – like survival of the fittest for programs. It tries out many different program ideas, automatically tests how well they solve a problem, and then uses the best ones to inspire new, even better programs.

Beyond this mathematical breakthrough, AlphaEvolve has already been used to improve real-world systems at Google, such as making their massive data centers run more efficiently and even speeding up the training of the AI models that power AlphaEvolve itself.
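The evolutionary loop described above can be sketched in miniature. This is purely illustrative: AlphaEvolve itself uses Gemini to propose edits to real code and evaluates candidates automatically, whereas here the "programs" are just coefficient vectors, the target and mutation scale are invented, and `mutate` stands in for an LLM-proposed change.

```python
import random

# Toy evolutionary search in the spirit of generate -> evaluate -> select.
# (Illustrative only; not AlphaEvolve's implementation.)

TARGET = [3.0, -2.0, 0.5]  # the "solution" we want the search to discover

def score(program):
    # Automatic evaluation: higher is better (negative squared error).
    return -sum((p - t) ** 2 for p, t in zip(program, TARGET))

def mutate(program):
    # Stand-in for "ask the model for a variation of this program".
    child = list(program)
    i = random.randrange(len(child))
    child[i] += random.gauss(0, 0.3)
    return child

def evolve(pop_size=30, generations=200):
    # Start from random candidates, then repeat: rank, keep the fittest,
    # and breed mutated children from them.
    population = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=score, reverse=True)
        parents = population[: pop_size // 3]
        children = [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=score)

best = evolve()
print(best, score(best))
```

Run repeatedly, the best candidate's score climbs toward zero as the population converges on the target, which is the same select-and-vary dynamic that, at vastly larger scale and over real code, lets AlphaEvolve discover new algorithms.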
The discussion also covers how humans work with AlphaEvolve, the challenges of making AI discover things, and the exciting future of AI helping scientists make new discoveries.

In short, AlphaEvolve is a powerful new AI tool that can invent new algorithms and solve complex problems, showing how AI can be a creative partner in science and engineering.

Guests:
Matej Balog: https://x.com/matejbalog
Alexander Novikov: https://x.com/SashaVNovikov

REFS:
MAP-Elites [Jean-Baptiste Mouret, Jeff Clune]
https://arxiv.org/abs/1504.04909
FunSearch [Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Matej Balog, M. Pawan Kumar, Emilien Dupont, Francisco J. R. Ruiz, Jordan S. Ellenberg, Pengming Wang, Omar Fawzi, Pushmeet Kohli & Alhussein Fawzi]
https://www.nature.com/articles/s41586-023-06924-6

TOC:
[00:00:00] Introduction: AlphaEvolve's Breakthroughs, DeepMind's Lineage, and Real-World Impact
[00:12:06] Introducing AlphaEvolve: Concept, Evolutionary Algorithms, and Architecture
[00:16:56] Search Challenges: The Halting Problem and Enabling Creative Leaps
[00:23:20] Knowledge Augmentation: Self-Generated Data, Meta-Prompting, and Library Learning
[00:29:08] Matrix Multiplication Breakthrough: From Strassen to AlphaEvolve's 48 Multiplications
[00:39:11] Problem Representation: Direct Solutions, Constructors, and Search Algorithms
[00:46:06] Developer Reflections: Surprising Outcomes and Superiority over Simple LLM Sampling
[00:51:42] Algorithmic Improvement: Hill Climbing, Program Synthesis, and Intelligibility
[01:00:24] Real-World Application: Complex Evaluations and Robotics
[01:05:39] Role of LLMs & Future: Advanced Models, Recursive Self-Improvement, and Human-AI Collaboration
[01:11:22] Resource Considerations: Compute Costs of AlphaEvolve

This is a trial of posting videos on Spotify - thoughts? Email me or chat in our Discord.
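The generate-test-select loop described above can be sketched in miniature. This is a toy illustration of evolutionary program search, not DeepMind's actual system: the grammar, fitness function, and mutation operator are all hypothetical stand-ins for the LLM-driven components AlphaEvolve uses.

```python
import random

random.seed(0)

# Toy "program" space: arithmetic expressions in x built from a tiny grammar.
TERMS = ["x", "x*x", "1", "2", "x+1"]
OPS = ["+", "-", "*"]

def random_program():
    return f"({random.choice(TERMS)}) {random.choice(OPS)} ({random.choice(TERMS)})"

def fitness(prog, target=lambda x: x * x + 2 * x + 1):
    # Lower is better: mean squared error against the target on sample points.
    # Programs that fail to compile get the worst possible score.
    try:
        f = eval(f"lambda x: {prog}")
        return sum((f(x) - target(x)) ** 2 for x in range(-5, 6)) / 11
    except Exception:
        return float("inf")

def mutate(prog):
    # "Use the best ones to inspire new programs": splice a fresh piece in.
    return f"({prog}) {random.choice(OPS)} ({random.choice(TERMS)})"

# Evolutionary loop: generate candidates, evaluate them automatically,
# keep the fittest, and breed the next generation from the survivors.
population = [random_program() for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness)
    survivors = population[:5]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = min(population, key=fitness)
print(best, fitness(best))
```

The real system replaces `random_program` and `mutate` with Gemini proposing code edits, and `fitness` with automated evaluation of the candidate algorithm, but the select-and-breed skeleton is the same.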
Discover how racers at all levels—from grassroots to NASCAR—are gaining a competitive edge using low-cost wireless technology for faster, repeatable chassis setups. In this exclusive EPARTRADE webinar, motorsport veterans and rising stars break down how wireless setup tools are transforming race performance.
Joel Green is the Founding Partner of Outliant, a full-service agency building digital products for brands like Amazon, Purina, and Freedom Solar. As a serial tech entrepreneur, he is the Founder and CTO of Winona, a telemedicine company, and the Co-founder of Hardsider. Previously, Joel was the Co-founder and Chief Engineer at Cintric, which was acquired by UberMedia in 2017. In this episode… Selling an agency can catalyze opportunities, but navigating the process requires strategy, speed, and M&A partnerships. How do entrepreneurs know when it's time to make the leap? Joel Green, Founding Partner of Outliant, shares how he built an agency before selling it to private equity. With host Todd Taskey, Joel discusses how he shifted to telemedicine ventures, how he prepared the agency for sale, and the importance of transparency and fit when selecting a buyer.
In Part 13 of The Journey Beyond Death, and the third installment of our eight-part series on Near Death Experiences, we explore extraordinary stories of transformation, survival, and awakening. Dr. Tony Cicoria recounts the life-altering moment he was struck by lightning — a physical and spiritual jolt that opened the floodgates to a stunning musical gift. His account blurs the line between the tangible and the ethereal, inviting us to ponder the unseen forces that guide our lives. We honor the late Dr. Stephen Sinatra, a true pioneer in holistic medicine whose legacy of compassion and innovation continues to inspire. Alongside him, NDE survivor Tommy Rosa shares revelations from his own brush with death, offering profound insights into healing, the unity of all souls, and the divine design behind human existence. Their combined wisdom fuses science and spirituality into a tapestry of hope and possibility. Erica McKenzie's powerful testimony shines a light on the dangers of medication misuse and the enduring strength of the human spirit. Her near-death experience reveals a world beyond pain — a place of divine purpose, healing, and unconditional love. Through Erica's eyes, we are reminded that our true value lies not in how we appear to others, but in the light we carry within. ----------------------------- Featuring in order of appearance 09:24 – NDE Tony Cicoria 19:19 – NDE Tommy Rosa & Dr. Stephen Sinatra 37:44 – NDE Erica McKenzie 1:16:32 – NDE David Bennett ----------------------------- NDE Survivor: Dr. Anthony Cicoria In 1994, Dr. Anthony Cicoria was struck by lightning and nearly lost his life. During this harrowing moment, he had an out-of-body experience, watching from above as his own body lay motionless on the floor. Following his near-death experience, Dr. Cicoria began receiving what he describes as "downloads" — powerful streams of original music compositions for the piano. 
His life was forever changed, and he emerged with a profound sense of purpose, translating his spiritual encounter into music that touches the soul. ----------------------------- NDE Survivor: Tommy Rosa & Dr. Stephen Sinatra (In Loving Memory) In 1999, Bronx-born plumber Tommy Rosa died after a devastating hit-and-run accident. As he lay by the roadside, he felt a powerful tug pulling him through a tunnel of light. On the other side, Tommy found himself in a heavenly place, where he was shown that Earth was created by God to nourish and heal humanity — and that the separation we feel from each other and from the Divine is an illusion of our own making. He met a Divine Teacher who shared Eight Revelations about the nature of life and the universe. Meanwhile, Dr. Stephen Sinatra — a world-renowned cardiologist and pioneering voice in integrative medicine — was transforming his clinical approach, blending science with spirituality. When fate brought Tommy and Dr. Sinatra together, they realized the powerful synchronicities between Tommy's revelations and Dr. Sinatra's groundbreaking medical discoveries. Their collaboration led to one of the most inspired unions of science and spirit. Dr. Stephen Sinatra has since passed from this world, but his wisdom, compassion, and revolutionary contributions to healing continue to uplift countless lives. His legacy endures through his writings, his patients, and all who seek true, heart-centered healing. Website: Book: Health Revelations from Heaven and Earth (Available at major retailers) ----------------------------- NDE Survivor: Erica McKenzie In 2002, at the age of 31, Erica McKenzie had a profound near-death experience. After enduring years of ridicule about her appearance as a child, Erica struggled with bulimia and diet pill addiction, ultimately collapsing and falling unconscious. Her spirit left her body and hovered near the ceiling, watching paramedics try to save her. 
She then traveled through a tunnel of radiant light, met God, and experienced two life reviews. During this journey, she was shown that each soul carries unique gifts and divine purposes — lessons she now shares through her transformational work and teachings. Website: Book: Dying to Fit In (Available at major retailers) ----------------------------- NDE Survivor: David Bennett David Bennett is a public speaker, author, teacher, energetic healer, and transformational life coach. He has been featured across major media platforms, including The Story of God with Morgan Freeman on the National Geographic Channel, Dr. Oz, Angels Among Us, NBC national news, and PBS. David's journey includes three extraordinary transformative experiences: In 1983, while serving as Chief Engineer aboard the ocean research vessel Aloha, he drowned and had a powerful Near-Death Experience. A second transformation occurred in 1994 during a deep meditation in Sedona, Arizona. His third awakening came in 2000 when he survived stage IV lung cancer that metastasized into his spine, causing its collapse. Each experience deepened his wisdom and his commitment to help others awaken their highest potential. Website: Books: Voyage of Purpose and A Voice as Old as Time (Available at major retailers) ----------------------------- The Journey Beyond Death, near death experiences, Dr. Tony Cicoria, struck by lightning NDE, Dr. Stephen Sinatra tribute, Tommy Rosa NDE, health revelations heaven earth, Erica McKenzie NDE, spiritual awakening after death, life after death stories, NDE survivor stories, out of body experiences, healing through near death experiences, music inspired by NDE, divine messages after death
Welcome to our series of bite-sized episodes featuring favourite moments from the Leading for Business Excellence podcast.

In this minisode, Jason Hill, Chairman and Chief Engineer at Hill Helicopters, discusses the art of designing processes that evolve with a business. From manufacturing every component in-house to managing data and workflows, Jason shares how his team builds systems that are both structured and adaptable. Every start-up feels broken at times, but how do you create processes that grow with you rather than hold you back?

Listen to the full episode here: https://pmi.co.uk/knowledge-hub/podcast-how-did-hill-helicopters-revolutionise-the-aviation-industry/

More from PMI:
- Dive into our Knowledge Hub for more tools, videos, and infographics
- Join us for a PMI LIVE Webinar
- Follow us on LinkedIn
Mark Reich spent 23 years working for Toyota, starting in 1988 with six years in Japan in the Overseas Planning Division, where he was responsible for Product Planning and collaborated with Chief Engineers to define vehicle specifications for overseas markets. This was the pivotal period when Toyota introduced the Lexus to the world.

In 1994, Mark returned to the United States and transitioned to the Toyota Supplier Support Center (TSSC), a non-profit organization established by Toyota in North America and dedicated to the practical application of the Toyota Production System (TPS) across various sectors. While at TSSC, he worked to extend TPS beyond manufacturing into healthcare and non-profits, which remains a key focus of TSSC's mission.

Mark joined Toyota's Corporate Strategy group in North America in 2001, serving as Assistant General Manager. He managed Toyota's North American hoshin kanri process during a period of significant growth that saw sales and production nearly double over the next decade. Hoshin kanri was essential for aligning the organization during this transformative time.

In 2011, Mark transitioned to the Lean Enterprise Institute (LEI), where he has held several positions, including Chief Operating Officer and, since 2018, Senior Coach and Chief Engineer, Strategy. He has led lean transformations and coached executives in hoshin kanri across various industries, with clients including Freeman, GE Appliances, Legal Sea Foods, Michigan Medicine, Nucleus Software, and Turner Construction.

Mark is now the author of Managing on Purpose, published by LEI in March 2025. This workbook is vital for leaders looking to implement hoshin kanri effectively within their organizations. It provides practical insights into developing corporate and departmental hoshins while fostering leadership development and innovation.
The book includes a fictional case study featuring TrueMowers, allowing readers to apply hoshin kanri concepts in a relatable context.

Mark earned his bachelor's degree from Ohio Wesleyan University and specialized in Japanese studies at Nanzan University. He resides outside of Cincinnati with his wife and daughters. He is fluent in written and spoken Japanese.

Link to claim CME credit: https://www.surveymonkey.com/r/3DXCFW3
CME credit is available for up to 3 years after the stated release date.
Contact CEOD@bmhcc.org if you have any questions about claiming credit.
Randall Balestriero joins the show to discuss some counterintuitive findings in AI. He shares research showing that huge language models, even when started from scratch (randomly initialized) without massive pre-training, can learn specific tasks like sentiment analysis surprisingly well, train stably, and avoid severe overfitting, sometimes matching the performance of costly pre-trained models. This raises questions about when giant pre-training efforts are truly worth it.

He also talks about how self-supervised learning (where models learn from the structure of the data itself) and traditional supervised learning (using labeled data) are fundamentally similar, allowing researchers to apply decades of supervised learning theory to improve newer self-supervised methods.

Finally, Randall touches on fairness in AI models used for Earth data (like climate prediction), revealing that these models can be biased, performing poorly in specific locations like islands or coastlines even if they seem accurate overall - which has important implications for policy decisions based on this data.

TRANSCRIPT + SHOWNOTES:
https://www.dropbox.com/scl/fi/n7yev71nsjso71jyjz1fy/RANDALLNEURIPS.pdf?rlkey=0dn4injp1sc4ts8njwf3wfmxv&dl=0

TOC:
1. Model Training Efficiency and Scale
[00:00:00] 1.1 Training Stability of Large Models on Small Datasets
[00:04:09] 1.2 Pre-training vs Random Initialization Performance Comparison
[00:07:58] 1.3 Task-Specific Models vs General LLMs Efficiency
2. Learning Paradigms and Data Distribution
[00:10:35] 2.1 Fair Language Model Paradox and Token Frequency Issues
[00:12:02] 2.2 Pre-training vs Single-task Learning Spectrum
[00:16:04] 2.3 Theoretical Equivalence of Supervised and Self-supervised Learning
[00:19:40] 2.4 Self-Supervised Learning and Supervised Learning Relationships
[00:21:25] 2.5 SSL Objectives and Heavy-tailed Data Distribution Challenges
3. Geographic Representation in ML Systems
[00:25:20] 3.1 Geographic Bias in Earth Data Models and Neural Representations
[00:28:10] 3.2 Mathematical Limitations and Model Improvements
[00:30:24] 3.3 Data Quality and Geographic Bias in ML Datasets

REFS:
[00:01:40] Research on training large language models from scratch on small datasets, Randall Balestriero et al.
https://openreview.net/forum?id=wYGBWOjq1Q
[00:10:35] The Fair Language Model Paradox (2024), Andrea Pinto, Tomer Galanti, Randall Balestriero
https://arxiv.org/abs/2410.11985
[00:12:20] Muppet: Massive Multi-task Representations with Pre-Finetuning (2021), Armen Aghajanyan et al.
https://arxiv.org/abs/2101.11038
[00:14:30] Dissociating language and thought in large language models (2023), Kyle Mahowald et al.
https://arxiv.org/abs/2301.06627
[00:16:05] The Birth of Self-Supervised Learning: A Supervised Theory, Randall Balestriero et al.
https://openreview.net/forum?id=NhYAjAAdQT
[00:21:25] VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, Adrien Bardes, Jean Ponce, Yann LeCun
https://arxiv.org/abs/2105.04906
[00:25:20] No Location Left Behind: Measuring and Improving the Fairness of Implicit Representations for Earth Data (2025), Daniel Cai, Randall Balestriero, et al.
https://arxiv.org/abs/2502.06831
[00:33:45] Work on geographic bias in computer vision datasets, Mark Ibrahim et al.
https://arxiv.org/pdf/2304.12210
It's All Been Trekked Before #413 Season 13, Episode 15 Star Trek: Voyager #1.02 "Parallax" Stephen has fallen in love with Janeway. Jimmy-Jerome was pleasantly surprised, given the early stage of this series. Shane joins us as a regular Voyager co-host. Edited by Jerome Wetzel, with assistance from Resound.fm It's All Been Trekked Before is produced by IABD Presents entertainment network. http://iabdpresents.com Please support us at http://pateron.com/iabd Follow us on social media @IABDPresents and https://www.facebook.com/ItsAllBeenTrekkedBefore
In this episode of Better Buildings for Humans, host Joe Menchefski sits down with Nathan Stodola, Chief Engineer at the International WELL Building Institute (IWBI), to unravel the mystery behind one of the world's fastest-growing building certifications: WELL. Nathan, a former street accordionist turned wellness standards pioneer, brings his vibrant energy and deep technical knowledge to a rapid-fire breakdown of the 10 core concepts behind WELL v2 - from air quality to community connection.

Together, Joe and Nathan dive into what truly makes a building healthy, how WELL differs from other certifications, and why verification matters more than ever. They even explore whether the standard favors urban spaces and how buildings can adapt in rural or suburban contexts. If you've ever wondered how to design spaces that don't just look good but feel good, this episode is your blueprint.

More About Nathan Stodola
Nathan Stodola leads the standard development team and serves as Chief Engineer at the International WELL Building Institute (IWBI). In this role, he maintains, enhances, and expands the strategies in the WELL Building Standard to promote health and well-being, with a particular focus on air quality, thermal comfort, and sound. Prior to working at IWBI, Nathan worked at the University Transportation Research Council at City College, where he helped the New York Metropolitan Transportation Council create regional transportation plans. Nathan holds Master of Science degrees in mechanical engineering (Columbia University) and transportation engineering (City College).
In his spare time, he enjoys playing accordion and finding new bike routes in the greater New York City area.

CONTACT:
https://www.linkedin.com/in/nathan-stodola-b5948a9/
https://resources.wellcertified.com/people/staff/nathan-stodola/

Where To Find Us:
https://bbfhpod.advancedglazings.com/
www.advancedglazings.com
https://www.linkedin.com/company/better-buildings-for-humans-podcast
www.linkedin.com/in/advanced-glazings-ltd-848b4625
https://twitter.com/bbfhpod
https://twitter.com/Solera_Daylight
https://www.instagram.com/bbfhpod/
https://www.instagram.com/advancedglazingsltd
https://www.facebook.com/AdvancedGlazingsltd
Prof. Kevin Ellis and Dr. Zenna Tavares talk about making AI smarter, like humans. They want AI to learn from just a little bit of information by actively trying things out, not just by looking at tons of data.

They discuss two main ways AI can "think": one is like following specific rules or steps (like a computer program), and the other is more intuitive, like guessing based on patterns (as modern AI often does). They found that combining both methods works well for solving complex puzzles like ARC.

A key idea is "compositionality" - building big ideas from small ones, like LEGOs. This is powerful but can also be overwhelming. Another important idea is "abstraction" - understanding things simply, without getting lost in details, and knowing there are different levels of understanding.

Ultimately, they believe the best AI will need to explore, experiment, and build models of the world, much like humans do when learning something new.

TRANSCRIPT:
https://www.dropbox.com/scl/fi/3ngggvhb3tnemw879er5y/BASIS.pdf?rlkey=lr2zbj3317mex1q5l0c2rsk0h&dl=0

Zenna Tavares: http://www.zenna.org/
Kevin Ellis: https://www.cs.cornell.edu/~ellisk/

TOC:
1. Compositionality and Learning Foundations
[00:00:00] 1.1 Compositional Search and Learning Challenges
[00:03:55] 1.2 Bayesian Learning and World Models
[00:12:05] 1.3 Programming Languages and Compositionality Trade-offs
[00:15:35] 1.4 Inductive vs Transductive Approaches in AI Systems
2. Neural-Symbolic Program Synthesis
[00:27:20] 2.1 Integration of LLMs with Traditional Programming and Meta-Programming
[00:30:43] 2.2 Wake-Sleep Learning and DreamCoder Architecture
[00:38:26] 2.3 Program Synthesis from Interactions and Hidden State Inference
[00:41:36] 2.4 Abstraction Mechanisms and Resource Rationality
[00:48:38] 2.5 Inductive Biases and Causal Abstraction in AI Systems
3. Abstract Reasoning Systems
[00:52:10] 3.1 Abstract Concepts and Grid-Based Transformations in ARC
[00:56:08] 3.2 Induction vs Transduction Approaches in Abstract Reasoning
[00:59:12] 3.3 ARC Limitations and Interactive Learning Extensions
[01:06:30] 3.4 Wake-Sleep Program Learning and Hybrid Approaches
[01:11:37] 3.5 Project MARA and Future Research Directions

REFS:
[00:00:25] DreamCoder, Kevin Ellis et al.
https://arxiv.org/abs/2006.08381
[00:01:10] Mind Your Step, Ryan Liu et al.
https://arxiv.org/abs/2410.21333
[00:06:05] Bayesian inference, Griffiths, T. L., Kemp, C., & Tenenbaum, J. B.
https://psycnet.apa.org/record/2008-06911-003
[00:13:00] Induction and Transduction, Wen-Ding Li, Zenna Tavares, Yewen Pu, Kevin Ellis
https://arxiv.org/abs/2411.02272
[00:23:15] Neurosymbolic AI, Garcez, Artur d'Avila et al.
https://arxiv.org/abs/2012.05876
[00:33:50] Induction and Transduction (II), Wen-Ding Li, Kevin Ellis et al.
https://arxiv.org/abs/2411.02272
[00:38:35] ARC, François Chollet
https://arxiv.org/abs/1911.01547
[00:39:20] Causal Reactive Programs, Ria Das, Joshua B. Tenenbaum, Armando Solar-Lezama, Zenna Tavares
http://www.zenna.org/publications/autumn2022.pdf
[00:42:50] MuZero, Julian Schrittwieser et al.
http://arxiv.org/pdf/1911.08265
[00:43:20] VisualPredicator, Yichao Liang
https://arxiv.org/abs/2410.23156
[00:48:55] Bayesian models of cognition, Joshua B. Tenenbaum
https://mitpress.mit.edu/9780262049412/bayesian-models-of-cognition/
[00:49:30] The Bitter Lesson, Rich Sutton
http://www.incompleteideas.net/IncIdeas/BitterLesson.html
[01:06:35] Program induction, Kevin Ellis, Wen-Ding Li
https://arxiv.org/pdf/2411.02272
[01:06:50] DreamCoder (II), Kevin Ellis et al.
https://arxiv.org/abs/2006.08381
[01:11:55] Project MARA, Zenna Tavares, Kevin Ellis
https://www.basis.ai/blog/mara/
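The "building big ideas from small ones" notion can be made concrete with a toy grid DSL in the spirit of ARC. The primitives and the composed program below are illustrative inventions, not anything from the episode:

```python
# A toy ARC-style DSL: a few small grid primitives that compose,
# LEGO-style, into bigger transformations.

def flip_h(grid):
    # Mirror each row left-to-right.
    return [row[::-1] for row in grid]

def transpose(grid):
    # Swap rows and columns.
    return [list(col) for col in zip(*grid)]

def recolor(a, b):
    # Return a primitive that repaints color a as color b.
    return lambda grid: [[b if cell == a else cell for cell in row] for row in grid]

def compose(*fs):
    # Chain primitives left to right into a single program.
    def program(grid):
        for f in fs:
            grid = f(grid)
        return grid
    return program

# "Rotate 90 degrees clockwise, then repaint color 1 as color 2",
# built entirely from the smaller pieces above.
rotate_cw = compose(transpose, flip_h)
program = compose(rotate_cw, recolor(1, 2))

print(program([[1, 0],
               [0, 0]]))  # [[0, 2], [0, 0]]
```

The power (and the combinatorial overwhelm) of compositionality is visible even here: with only four primitives, the space of chained programs grows exponentially with program length, which is why the episode's guests care about abstraction and guided search rather than brute-force enumeration.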
Eiso Kant, CTO of poolside AI, discusses the company's approach to building frontier AI foundation models, with a particular focus on software development. Their distinctive strategy is reinforcement learning from code execution feedback, which they see as an important axis for scaling AI capabilities beyond simply increasing model size or data volume. Kant predicts that human-level AI in knowledge work could be achieved within 18-36 months, and outlines poolside's vision to dramatically increase software development productivity and accessibility.

Eiso Kant:
https://x.com/eisokant
https://poolside.ai/

TRANSCRIPT:
https://www.dropbox.com/scl/fi/szepl6taqziyqie9wgmk9/poolside.pdf?rlkey=iqar7dcwshyrpeoz0xa76k422&dl=0

TOC:
1. Foundation Models and AI Strategy
[00:00:00] 1.1 Foundation Models and Timeline Predictions for AI Development
[00:02:55] 1.2 Poolside AI's Corporate History and Strategic Vision
[00:06:48] 1.3 Foundation Models vs Enterprise Customization Trade-offs
2. Reinforcement Learning and Model Economics
[00:15:42] 2.1 Reinforcement Learning and Code Execution Feedback Approaches
[00:22:06] 2.2 Model Economics and Experimental Optimization
3. Enterprise AI Implementation
[00:25:20] 3.1 Poolside's Enterprise Deployment Strategy and Infrastructure
[00:26:00] 3.2 Enterprise-First Business Model and Market Focus
[00:27:05] 3.3 Foundation Models and AGI Development Approach
[00:29:24] 3.4 DeepSeek Case Study and Infrastructure Requirements
4. LLM Architecture and Performance
[00:30:15] 4.1 Distributed Training and Hardware Architecture Optimization
[00:33:01] 4.2 Model Scaling Strategies and Chinchilla Optimality Trade-offs
[00:36:04] 4.3 Emergent Reasoning and Model Architecture Comparisons
[00:43:26] 4.4 Balancing Creativity and Determinism in AI Models
[00:50:01] 4.5 AI-Assisted Software Development Evolution
5. AI Systems Engineering and Scalability
[00:58:31] 5.1 Enterprise AI Productivity and Implementation Challenges
[00:58:40] 5.2 Low-Code Solutions and Enterprise Hiring Trends
[01:01:25] 5.3 Distributed Systems and Engineering Complexity
[01:01:50] 5.4 GenAI Architecture and Scalability Patterns
[01:01:55] 5.5 Scaling Limitations and Architectural Patterns in AI Code Generation
6. AI Safety and Future Capabilities
[01:06:23] 6.1 Semantic Understanding and Language Model Reasoning Approaches
[01:12:42] 6.2 Model Interpretability and Safety Considerations in AI Systems
[01:16:27] 6.3 AI vs Human Capabilities in Software Development
[01:33:45] 6.4 Enterprise Deployment and Security Architecture

CORE REFS (see shownotes for URLs/more refs):
[00:15:45] Research demonstrating how training on model-generated content leads to distribution collapse in AI models, Ilia Shumailov et al. (key finding on synthetic-data risk)
[00:20:05] Foundational paper introducing Word2Vec for computing word vector representations, Tomas Mikolov et al. (seminal NLP technique)
[00:22:15] OpenAI o3 model's breakthrough performance on the ARC Prize Challenge, OpenAI (significant AI reasoning benchmark achievement)
[00:22:40] Seminal paper proposing a formal definition of intelligence as skill-acquisition efficiency, François Chollet (influential AI definition/philosophy)
[00:30:30] Technical documentation of DeepSeek's V3 model architecture and capabilities, DeepSeek AI (details on a major new model)
[00:34:30] Foundational paper establishing optimal scaling laws for LLM training, Jordan Hoffmann et al. (key paper on LLM scaling)
[00:45:45] Seminal essay arguing that scaling computation consistently trumps human-engineered solutions in AI, Richard S. Sutton (influential "Bitter Lesson" perspective)
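The core idea behind reinforcement learning from code execution feedback - score a generated program by actually running it against tests, rather than by how plausible its text looks - can be sketched as follows. The candidate "model samples" and the reward shaping are illustrative assumptions, not poolside's system:

```python
# Reward a candidate solution by executing it against unit tests.
# In an RL setup this scalar would drive the policy update; here we
# just rank a few hand-written "model samples" for the task
# "implement add(a, b)".

TESTS = [((2, 3), 5), ((-1, 1), 0), ((0, 0), 0)]

def execution_reward(source: str) -> float:
    namespace = {}
    try:
        exec(source, namespace)      # load the candidate program
        fn = namespace["add"]
    except Exception:
        return -1.0                  # code that doesn't even load is penalized
    passed = 0
    for args, expected in TESTS:
        try:
            if fn(*args) == expected:
                passed += 1
        except Exception:
            pass                     # runtime errors earn no credit
    return passed / len(TESTS)       # fraction of tests passed

samples = [
    "def add(a, b): return a - b",   # wrong logic
    "def add(a, b): return a + b",   # correct
    "def add(a, b): return a / b",   # crashes on (0, 0)
]
for s in samples:
    print(execution_reward(s), s)
```

The appeal of this signal is that it is objective and cheap to verify, which is why it scales as a training axis independently of model size or dataset size.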
Connor Leahy and Gabriel Alfour, AI researchers from Conjecture and authors of "The Compendium," join us for a critical discussion centered on Artificial Superintelligence (ASI) safety and governance. Drawing from their comprehensive analysis in "The Compendium," they articulate a stark warning about the existential risks inherent in uncontrolled AI development, framing it through the lens of "intelligence domination" - where a sufficiently advanced AI could subordinate humanity, much as humans dominate less intelligent species.

TRANSCRIPT + REFS + NOTES:
https://www.dropbox.com/scl/fi/p86l75y4o2ii40df5t7no/Compendium.pdf?rlkey=tukczgf3flw133sr9rgss0pnj&dl=0
https://www.thecompendium.ai/
https://en.wikipedia.org/wiki/Connor_Leahy
https://www.conjecture.dev/about
https://substack.com/@gabecc

TOC:
1. AI Intelligence and Safety Fundamentals
[00:00:00] 1.1 Understanding Intelligence and AI Capabilities
[00:06:20] 1.2 Emergence of Intelligence and Regulatory Challenges
[00:10:18] 1.3 Human vs Animal Intelligence Debate
[00:18:00] 1.4 AI Regulation and Risk Assessment Approaches
[00:26:14] 1.5 Competing AI Development Ideologies
2. Economic and Social Impact
[00:29:10] 2.1 Labor Market Disruption and Post-Scarcity Scenarios
[00:32:40] 2.2 Institutional Frameworks and Tech Power Dynamics
[00:37:40] 2.3 Ethical Frameworks and AI Governance Debates
[00:40:52] 2.4 AI Alignment Evolution and Technical Challenges
3. Technical Governance Framework
[00:55:07] 3.1 Three Levels of AI Safety: Alignment, Corrigibility, and Boundedness
[00:55:30] 3.2 Challenges of AI System Corrigibility and Constitutional Models
[00:57:35] 3.3 Limitations of Current Boundedness Approaches
[00:59:11] 3.4 Abstract Governance Concepts and Policy Solutions
4. Democratic Implementation and Coordination
[00:59:20] 4.1 Governance Design and Measurement Challenges
[01:00:10] 4.2 Democratic Institutions and Experimental Governance
[01:14:10] 4.3 Political Engagement and AI Safety Advocacy
[01:25:30] 4.4 Practical AI Safety Measures and International Coordination

CORE REFS:
[00:01:45] The Compendium (2023), Leahy et al.
https://pdf.thecompendium.ai/the_compendium.pdf
[00:06:50] Geoffrey Hinton Leaves Google, BBC News
https://www.bbc.com/news/world-us-canada-65452940
[00:10:00] ARC-AGI, Chollet
https://arcprize.org/arc-agi
[00:13:25] A Brief History of Intelligence, Bennett
https://www.amazon.com/Brief-History-Intelligence-Humans-Breakthroughs/dp/0063286343
[00:25:35] Statement on AI Risk, Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
[00:26:15] Machines of Loving Grace, Amodei
https://darioamodei.com/machines-of-loving-grace
[00:26:35] The Techno-Optimist Manifesto, Andreessen
https://a16z.com/the-techno-optimist-manifesto/
[00:31:55] Technofeudalism, Varoufakis
https://www.amazon.co.uk/Technofeudalism-Killed-Capitalism-Yanis-Varoufakis/dp/1847927270
[00:42:40] Introducing Superalignment, OpenAI
https://openai.com/index/introducing-superalignment/
[00:47:20] Three Laws of Robotics, Asimov
https://www.britannica.com/topic/Three-Laws-of-Robotics
[00:50:00] Symbolic AI (GOFAI), Haugeland
https://en.wikipedia.org/wiki/Symbolic_artificial_intelligence
[00:52:30] Intent Alignment, Christiano
https://www.alignmentforum.org/posts/HEZgGBZTpT4Bov7nH/mapping-the-conceptual-territory-in-ai-existential-safety
[00:55:10] Large Language Model Alignment: A Survey, Jiang et al.
http://arxiv.org/pdf/2309.15025
[00:55:40] Constitutional Checks and Balances, Bok
https://plato.stanford.edu/entries/montesquieu/
We are joined by François Chollet and Mike Knoop to launch the new version of the ARC Prize! In version 2, the tasks have been calibrated with humans so that at least two humans could solve each one in a reasonable amount of time, and adversarially selected so that frontier reasoning models can't solve them. The best LLMs today get negligible performance on this challenge. https://arcprize.org/

TRANSCRIPT:
https://www.dropbox.com/scl/fi/0v9o8xcpppdwnkntj59oi/ARCv2.pdf?rlkey=luqb6f141976vra6zdtptv5uj&dl=0

TOC:
1. ARC v2 Core Design & Objectives
[00:00:00] 1.1 ARC v2 Launch and Benchmark Architecture
[00:03:16] 1.2 Test-Time Optimization and AGI Assessment
[00:06:24] 1.3 Human-AI Capability Analysis
[00:13:02] 1.4 OpenAI o3 Initial Performance Results
2. ARC Technical Evolution
[00:17:20] 2.1 ARC-v1 to ARC-v2 Design Improvements
[00:21:12] 2.2 Human Validation Methodology
[00:26:05] 2.3 Task Design and Gaming Prevention
[00:29:11] 2.4 Intelligence Measurement Framework
3. O3 Performance & Future Challenges
[00:38:50] 3.1 O3 Comprehensive Performance Analysis
[00:43:40] 3.2 System Limitations and Failure Modes
[00:49:30] 3.3 Program Synthesis Applications
[00:53:00] 3.4 Future Development Roadmap

REFS:
[00:00:15] On the Measure of Intelligence, François Chollet
https://arxiv.org/abs/1911.01547
[00:06:45] ARC Prize Foundation, François Chollet, Mike Knoop
https://arcprize.org/
[00:12:50] OpenAI o3 model performance on ARC v1, ARC Prize Team
https://arcprize.org/blog/oai-o3-pub-breakthrough
[00:18:30] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, Jason Wei et al.
https://arxiv.org/abs/2201.11903
[00:21:45] ARC-v2 benchmark tasks, Mike Knoop
https://arcprize.org/blog/introducing-arc-agi-public-leaderboard
[00:26:05] ARC Prize 2024: Technical Report, François Chollet et al.
https://arxiv.org/html/2412.04604v2
[00:32:45] ARC Prize 2024 Technical Report, François Chollet, Mike Knoop, Gregory Kamradt
https://arxiv.org/abs/2412.04604
[00:48:55] The Bitter Lesson, Rich Sutton
http://www.incompleteideas.net/IncIdeas/BitterLesson.html
[00:53:30] Decoding strategies in neural text generation, Sina Zarrieß
https://www.mdpi.com/2078-2489/12/9/355/pdf
Mohamed Osman joins to discuss MindsAI's highest-scoring entry in the 2024 ARC challenge and the paradigm of test-time fine-tuning. They explore how the team, now part of Tufa Labs in Zurich, achieved state-of-the-art results using a combination of pre-training techniques, a unique meta-learning strategy, and an ensemble voting mechanism. Mohamed emphasizes the importance of raw data input and the flexibility of the network.

SPONSOR MESSAGES:
***
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier, focused on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***

TRANSCRIPT + REFS:
https://www.dropbox.com/scl/fi/jeavyqidsjzjgjgd7ns7h/MoFInal.pdf?rlkey=cjjmo7rgtenxrr3b46nk6yq2e&dl=0

Mohamed Osman (Tufa Labs)
https://x.com/MohamedOsmanML
Jack Cole (Tufa Labs)
https://x.com/MindsAI_Jack

How and why deep learning for ARC paper:
https://github.com/MohamedOsman1998/deep-learning-for-arc/blob/main/deep_learning_for_arc.pdf

TOC:
1. Abstract Reasoning Foundations
[00:00:00] 1.1 Test-Time Fine-Tuning and ARC Challenge Overview
[00:10:20] 1.2 Neural Networks vs Programmatic Approaches to Reasoning
[00:13:23] 1.3 Code-Based Learning and Meta-Model Architecture
[00:20:26] 1.4 Technical Implementation with Long T5 Model

2. ARC Solution Architectures
[00:24:10] 2.1 Test-Time Tuning and Voting Methods for ARC Solutions
[00:27:54] 2.2 Model Generalization and Function Generation Challenges
[00:32:53] 2.3 Input Representation and VLM Limitations
[00:36:21] 2.4 Architecture Innovation and Cross-Modal Integration
[00:40:05] 2.5 Future of ARC Challenge and Program Synthesis Approaches

3. Advanced Systems Integration
[00:43:00] 3.1 DreamCoder Evolution and LLM Integration
[00:50:07] 3.2 MindsAI Team Progress and Acquisition by Tufa Labs
[00:54:15] 3.3 ARC v2 Development and Performance Scaling
[00:58:22] 3.4 Intelligence Benchmarks and Transformer Limitations
[01:01:50] 3.5 Neural Architecture Optimization and Processing Distribution

REFS:
[00:01:32] Original ARC challenge paper, François Chollet
https://arxiv.org/abs/1911.01547
[00:06:55] DreamCoder, Kevin Ellis et al.
https://arxiv.org/abs/2006.08381
[00:12:50] Deep Learning with Python, François Chollet
https://www.amazon.com/Deep-Learning-Python-Francois-Chollet/dp/1617294438
[00:13:35] Influence of pretraining data for reasoning, Laura Ruis
https://arxiv.org/abs/2411.12580
[00:17:50] Latent Program Networks, Clement Bonnet
https://arxiv.org/html/2411.08706v1
[00:20:50] T5, Colin Raffel et al.
https://arxiv.org/abs/1910.10683
[00:30:30] Combining Induction and Transduction for Abstract Reasoning, Wen-Ding Li, Kevin Ellis et al.
https://arxiv.org/abs/2411.02272
[00:34:15] Six finger problem, Chen et al.
https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_SpatialVLM_Endowing_Vision-Language_Models_with_Spatial_Reasoning_Capabilities_CVPR_2024_paper.pdf
[00:38:15] DeepSeek-R1-Distill-Llama, DeepSeek AI
https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B
[00:40:10] ARC Prize 2024 Technical Report, François Chollet et al.
https://arxiv.org/html/2412.04604v2
[00:45:20] LLM-Guided Compositional Program Synthesis, Wen-Ding Li and Kevin Ellis
https://arxiv.org/html/2503.15540
[00:54:25] Abstraction and Reasoning Corpus, François Chollet
https://github.com/fchollet/ARC-AGI
[00:57:10] O3 breakthrough on ARC-AGI, OpenAI
https://arcprize.org/
[00:59:35] ConceptARC Benchmark, Arseny Moskvichev, Melanie Mitchell
https://arxiv.org/abs/2305.07141
[01:02:05] Mixtape: Breaking the Softmax Bottleneck Efficiently, Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, William W. Cohen
http://papers.neurips.cc/paper/9723-mixtape-breaking-the-softmax-bottleneck-efficiently.pdf
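The ensemble-voting step the episode describes can be sketched as a small loop: run the (test-time-tuned) model on several symmetry-transformed views of the test grid, map each prediction back to the original frame, and keep the most common answer. This is a minimal illustrative sketch, not MindsAI's actual pipeline; the function names and the choice of rotations as the only augmentation are assumptions.

```python
from collections import Counter

def rot90(grid):
    """Rotate a list-of-lists grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def apply_n(f, x, n):
    """Apply f to x, n times."""
    for _ in range(n):
        x = f(x)
    return x

def vote_predict(model, grid):
    """Predict under all 4 rotations, undo each rotation, majority-vote."""
    ballots = []
    for k in range(4):
        view = apply_n(rot90, grid, k)          # augmented input
        pred = model(view)                      # model is any grid -> grid fn
        ballots.append(apply_n(rot90, pred, (4 - k) % 4))  # rotate back
    # Grids are unhashable; freeze them to tuples for counting.
    frozen = [tuple(map(tuple, b)) for b in ballots]
    winner, _ = Counter(frozen).most_common(1)[0]
    return [list(row) for row in winner]
```

With a rotation-equivariant task, all four ballots agree; for a noisy model, the vote filters out view-specific mistakes.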
Today's guest is Lt. Col. Mark Westphal, a highly accomplished leader with an extensive and diverse background. Mark grew up in Westchester County, New York, before heading to Georgia Tech, where he earned both a Bachelor's and Master's degree in Mechanical and Materials Engineering. He also earned an MBA from LaSalle University. In his civilian career, Mark serves as the Chief Engineer for Special Operations Forces platforms at a major defense contractor and is a licensed Professional Engineer (PE). A combat veteran, Mark recently retired from the National Guard as a Lieutenant Colonel after an extraordinary career. His service spans multiple roles, including Combat Engineer, Infantry, Special Forces Green Beret, and Air Force Special Warfare Officer.
Iman Mirzadeh from Apple, who recently published the GSM-Symbolic paper, discusses the crucial distinction between intelligence and achievement in AI systems. He critiques current AI research methodologies, highlighting the limitations of Large Language Models (LLMs) in reasoning and knowledge representation.

SPONSOR MESSAGES:
***
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier, focused on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***

TRANSCRIPT + RESEARCH:
https://www.dropbox.com/scl/fi/mlcjl9cd5p1kem4l0vqd3/IMAN.pdf?rlkey=dqfqb74zr81a5gqr8r6c8isg3&dl=0

TOC:
1. Intelligence vs Achievement in AI Systems
[00:00:00] 1.1 Intelligence vs Achievement Metrics in AI Systems
[00:03:27] 1.2 AlphaZero and Abstract Understanding in Chess
[00:10:10] 1.3 Language Models and Distribution Learning Limitations
[00:14:47] 1.4 Research Methodology and Theoretical Frameworks

2. Intelligence Measurement and Learning
[00:24:24] 2.1 LLM Capabilities: Interpolation vs True Reasoning
[00:29:00] 2.2 Intelligence Definition and Measurement Approaches
[00:34:35] 2.3 Learning Capabilities and Agency in AI Systems
[00:39:26] 2.4 Abstract Reasoning and Symbol Understanding

3. LLM Performance and Evaluation
[00:47:15] 3.1 Scaling Laws and Fundamental Limitations
[00:54:33] 3.2 Connectionism vs Symbolism Debate in Neural Networks
[00:58:09] 3.3 GSM-Symbolic: Testing Mathematical Reasoning in LLMs
[01:08:38] 3.4 Benchmark Evaluation and Model Performance Assessment

REFS:
[00:01:00] AlphaZero chess AI system, Silver et al.
https://arxiv.org/abs/1712.01815
[00:07:10] Game Changer: AlphaZero's Groundbreaking Chess Strategies, Sadler & Regan
https://www.amazon.com/Game-Changer-AlphaZeros-Groundbreaking-Strategies/dp/9056918184
[00:11:35] Cross-entropy loss in language modeling, Voita
http://lena-voita.github.io/nlp_course/language_modeling.html
[00:17:20] GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in LLMs, Mirzadeh et al.
https://arxiv.org/abs/2410.05229
[00:21:25] Connectionism and Cognitive Architecture: A Critical Analysis, Fodor & Pylyshyn
https://www.sciencedirect.com/science/article/pii/001002779090014B
[00:28:55] Brain-to-body mass ratio scaling laws, Sutskever
https://www.theverge.com/2024/12/13/24320811/what-ilya-sutskever-sees-openai-model-data-training
[00:29:40] On the Measure of Intelligence, Chollet
https://arxiv.org/abs/1911.01547
[00:33:30] On definition of intelligence, Gignac et al.
https://www.sciencedirect.com/science/article/pii/S0160289624000266
[00:35:30] Defining intelligence, Wang
https://cis.temple.edu/~wangp/papers.html
[00:37:40] How We Learn: Why Brains Learn Better Than Any Machine... for Now, Dehaene
https://www.amazon.com/How-We-Learn-Brains-Machine/dp/0525559884
[00:39:35] Surfaces and Essences: Analogy as the Fuel and Fire of Thinking, Hofstadter and Sander
https://www.amazon.com/Surfaces-Essences-Analogy-Fuel-Thinking/dp/0465018475
[00:43:15] Chain-of-thought prompting, Wei et al.
https://arxiv.org/abs/2201.11903
[00:47:20] Test-time scaling laws in machine learning, Brown
https://podcasts.apple.com/mv/podcast/openais-noam-brown-ilge-akkaya-and-hunter-lightman-on/id1750736528?i=1000671532058
[00:47:50] Scaling Laws for Neural Language Models, Kaplan et al.
https://arxiv.org/abs/2001.08361
[00:55:15] Tensor product variable binding, Smolensky
https://www.sciencedirect.com/science/article/abs/pii/000437029090007M
[01:08:45] GSM-8K dataset, OpenAI
https://huggingface.co/datasets/openai/gsm8k
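The core GSM-Symbolic idea can be sketched in a few lines: turn a GSM8K-style word problem into a symbolic template, sample many variants by swapping names and numbers, and compare a model's accuracy across variants. A model that truly reasons should score the same on every instantiation; large variance suggests pattern matching. The template and value ranges below are illustrative, not taken from the paper.

```python
import random

TEMPLATE = ("{name} picks {a} apples on Monday and {b} apples on Tuesday. "
            "How many apples does {name} have in total?")

def make_variant(rng: random.Random):
    """Instantiate the template; return (question, ground-truth answer)."""
    name = rng.choice(["Sophie", "Liam", "Ava", "Noah"])
    a, b = rng.randint(2, 50), rng.randint(2, 50)
    return TEMPLATE.format(name=name, a=a, b=b), a + b

def variant_accuracy(model, n_variants: int = 50, seed: int = 0) -> float:
    """Score `model` (a question -> int function) across sampled variants."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_variants):
        question, answer = make_variant(rng)
        correct += int(model(question) == answer)
    return correct / n_variants
```

In the paper's setting, `model` wraps an LLM call and the score is compared across many templates; a sharp accuracy drop when only surface details change is the reported failure mode.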
Dan Schkade is a terrific artist/writer who is currently drawing the Flash Gordon daily strip for King Features. He's also written and drawn a lot of other cool stuff, including The Spirit, Battlestar Galactica, Impossible Jones, and his creator-owned titles Saint John and Lavender Jack. But what you need to know most is that it's pronounced "SHKah-dee." Like Scotty. Like the dog. Or the Chief Engineer.
Early on a stunningly sunny Saturday morning in March, Jaguars started to arrive at Podium Café on the outskirts of Newbury for the inaugural JEC Cars and Coffee meet. It was to be a wonderful morning of celebration, coffee and cakes! There was something for everyone, from the feature display of TWR Jaguars to the diversity of cars, from the oldest, a Mark IV, to the latest F-PACE SVRs.

Visitors enjoyed tours of the new TWR performance facility where the XJS Supercat is built, and we were joined for talks by Jaguar celebrities, including the former Chief Engineer of TWR during the 1988 and 1990 Le Mans wins, Alastair MacQueen - our guest of honour.

In this episode, we meet some of those present at this first Cars and Coffee meet, including Alastair MacQueen (former TWR Chief Engineer), Richard West (former TWR Marketing Director), James Blackwell (JEC General Manager), Matthew Davis (MD, Jaguar Daimler Heritage Trust) and a whole host of owners.
Ya gotta love Dustin Tatro’s journey into and through radio broadcasting. Voicing PSAs at 4 years old got Dustin an early start. Then working as a DJ, along with musical interests, honed his technical and operational skills. Now as a General Manager and Ops Manager, Dustin has demonstrably learned the engineering side of radio broadcasting. Indeed, his SBE certification, CBRE, attests to that. Dustin joins Chris Tarr and Kirk Harnack to discuss AoIP, audio processing, and working nicely with others, even with competing radio stations. Indeed, he joins us from the KORQ-FM transmitter site.

Also on today’s show, we talk with David Bialik. He and Fred Willard are coordinating the SBE Ennes Workshop in Las Vegas. Their track is “Media over IP”. David gives us information and plenty of reasons to sign up and be there!

Show Notes:
Register for the SBE Ennes Workshop @ the 2025 NAB Show

Guests:
Dustin Tatro, CBRE - Radio Station Manager, Chief Engineer, Sports Organist
David Bialik - Director of Engineering for MediaCo NY

Hosts:
Chris Tarr - Group Director of Engineering at Magnum.Media
Kirk Harnack, The Telos Alliance, Delta Radio, Star94.3, South Seas, & Akamai Broadcasting

Follow TWiRT on Twitter and on Facebook - and see all the videos on YouTube.

TWiRT is brought to you by:
Broadcasters General Store, with outstanding service, savings, and support. Online at BGS.cc.
Broadcast Bionics - making radio smarter with Bionic Studio, visual radio, and social media tools at Bionic.radio.
Aiir, providing PlayoutONE radio automation, and other advanced solutions for audience engagement.
Angry Audio and the new Rave analog audio mixing console.
The new MaxxKonnect Broadcast U.192 MPX USB Soundcard - the first purpose-built broadcast-quality USB sound card with native MPX output.

Subscribe to Audio: iTunes | RSS | Stitcher | TuneIn
Subscribe to Video: iTunes | RSS | YouTube
Emily Warren Roebling (1843-1903) played a pivotal role in the construction of the Brooklyn Bridge. She was married to the Chief Engineer of the bridge and took charge of his work on the project after illness prevented him from continuing in his role. When the bridge opened in May 1883, she was the first person to cross it. She went on to study law and became an advocate for women’s equality in marriage. For Further Reading: Emily Warren Roebling, the Woman Behind the Man Who Built the Brooklyn Bridge - The New York Times Life Story: Emily Warren Roebling How Emily Roebling Saved the Brooklyn Bridge | HISTORY Emily Warren Roebling Plaza - Brooklyn Bridge Park This month, we’re talking about Architects. These women held fast to their visions for better futures, found potential in negative space, and built their creations from the ground up. History classes can get a bad rap, and sometimes for good reason. When we were students, we couldn’t help wondering... where were all the ladies at? Why were so many incredible stories missing from the typical curriculum? Enter, Womanica. On this Wonder Media Network podcast we explore the lives of inspiring women in history you may not know about, but definitely should. Every weekday, listeners explore the trials, tragedies, and triumphs of groundbreaking women throughout history who have dramatically shaped the world around us. In each 5-minute episode, we’ll dive into the story behind one woman listeners may or may not know–but definitely should. These diverse women from across space and time are grouped into easily accessible and engaging monthly themes like Educators, Villains, Indigenous Storytellers, Activists, and many more. Womanica is hosted by WMN co-founder and award-winning journalist Jenny Kaplan. The bite-sized episodes pack painstakingly researched content into fun, entertaining, and addictive daily adventures. 
Womanica was created by Liz Kaplan and Jenny Kaplan, executive produced by Jenny Kaplan, and produced by Grace Lynch, Maddy Foley, Brittany Martinez, Edie Allard, Carmen Borca-Carrillo, Taylor Williamson, Sara Schleede, Paloma Moreno Jimenez, Luci Jones, Abbey Delk, Adrien Behn, Alyia Yates, Vanessa Handy, Melia Agudelo, and Joia Putnoi. Special thanks to Shira Atkins. Original theme music composed by Miles Moran. Follow Wonder Media Network: Website Instagram Twitter See omnystudio.com/listener for privacy information.
This sponsored episode features mathematician Ohad Asor discussing logical approaches to AI, focusing on the limitations of machine learning and introducing the Tau language for software development and blockchain technology. Asor argues that machine learning cannot guarantee correctness. Tau allows logical specification of software requirements, automatically creating provably correct implementations with potential to revolutionize distributed systems. The discussion highlights program synthesis, software updates, and applications in finance and governance.

SPONSOR MESSAGES:
***
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier, focused on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***

TRANSCRIPT + RESEARCH:
https://www.dropbox.com/scl/fi/t849j6v1juk3gc15g4rsy/TAU.pdf?rlkey=hh11h2mhog3ncdbeapbzpzctc&dl=0

Tau:
https://tau.net/
Tau Language:
https://tau.ai/tau-language/
Research:
https://tau.net/Theories-and-Applications-of-Boolean-Algebras-0.29.pdf

TOC:
1. Machine Learning Foundations and Limitations
[00:00:00] 1.1 Fundamental Limitations of Machine Learning and PAC Learning Theory
[00:04:50] 1.2 Transductive Learning and the Three Curses of Machine Learning
[00:08:57] 1.3 Language, Reality, and AI System Design
[00:12:58] 1.4 Program Synthesis and Formal Verification Approaches

2. Logical Programming Architecture
[00:31:55] 2.1 Safe AI Development Requirements
[00:32:05] 2.2 Self-Referential Language Architecture
[00:32:50] 2.3 Boolean Algebra and Logical Foundations
[00:37:52] 2.4 SAT Solvers and Complexity Challenges
[00:44:30] 2.5 Program Synthesis and Specification
[00:47:39] 2.6 Overcoming Tarski's Undefinability with Boolean Algebra
[00:56:05] 2.7 Tau Language Implementation and User Control

3. Blockchain-Based Software Governance
[01:09:10] 3.1 User Control and Software Governance Mechanisms
[01:18:27] 3.2 Tau's Blockchain Architecture and Meta-Programming Capabilities
[01:21:43] 3.3 Development Status and Token Implementation
[01:24:52] 3.4 Consensus Building and Opinion Mapping System
[01:35:29] 3.5 Automation and Financial Applications

CORE REFS (more in pinned comment):
[00:03:45] PAC (Probably Approximately Correct) Learning framework, Leslie Valiant
https://en.wikipedia.org/wiki/Probably_approximately_correct_learning
[00:06:10] Boolean Satisfiability Problem (SAT), Various
https://en.wikipedia.org/wiki/Boolean_satisfiability_problem
[00:13:55] Knowledge as Justified True Belief (JTB), Matthias Steup
https://plato.stanford.edu/entries/epistemology/
[00:17:50] Wittgenstein's concept of the limits of language, Ludwig Wittgenstein
https://plato.stanford.edu/entries/wittgenstein/
[00:21:25] Boolean algebras, Ohad Asor
https://tau.net/tau-language-research/
[00:26:10] The Halting Problem
https://plato.stanford.edu/entries/turing-machine/#HaltProb
[00:30:25] Alfred Tarski (1901-1983), Mario Gómez-Torrente
https://plato.stanford.edu/entries/tarski/
[00:41:50] DPLL
https://www.cs.princeton.edu/~zkincaid/courses/fall18/readings/SATHandbook-CDCL.pdf
[00:49:50] Tarski's undefinability theorem (1936), Alfred Tarski
https://plato.stanford.edu/entries/tarski-truth/
[00:51:45] Boolean Algebra mathematical foundations, J. Donald Monk
https://plato.stanford.edu/entries/boolalg-math/
[01:02:35] Belief Revision Theory and AGM Postulates, Sven Ove Hansson
https://plato.stanford.edu/entries/logic-belief-revision/
[01:05:35] Quantifier elimination in atomless boolean algebra, H. Jerome Keisler
https://people.math.wisc.edu/~hkeisler/random.pdf
[01:08:35] Quantifier elimination in Tau language specification, Ohad Asor
https://tau.ai/Theories-and-Applications-of-Boolean-Algebras-0.29.pdf
[01:11:50] Tau Net blockchain platform
https://tau.net/
[01:19:20] Tau blockchain's innovative approach treating blockchain code itself as a contract
https://tau.net/Whitepaper.pdf
John Palazza from CentML joins us in this sponsored interview to discuss the critical importance of infrastructure optimization in the age of Large Language Models and Generative AI. We explore how enterprises can transition from the innovation phase to production and scale, highlighting the significance of efficient GPU utilization and cost management. The conversation covers the open-source versus proprietary model debate, the rise of AI agents, and the need for platform independence to avoid vendor lock-in, as well as emerging trends in AI infrastructure and the pivotal role of strategic partnerships.

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. Check out their super fast DeepSeek R1 hosting!
https://centml.ai/pricing/

Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier, focused on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***

TRANSCRIPT:
https://www.dropbox.com/scl/fi/dnjsygrgdgq5ng5fdlfjg/JOHNPALAZZA.pdf?rlkey=hl9wyydi9mj077rbg5acdmo3a&dl=0

John Palazza:
Vice President of Global Sales @ CentML
https://www.linkedin.com/in/john-p-b34655/

TOC:
1. Enterprise AI Organization and Strategy
[00:00:00] 1.1 Organizational Structure and ML Ownership
[00:02:59] 1.2 Infrastructure Efficiency and GPU Utilization
[00:07:59] 1.3 Platform Centralization vs Team Autonomy
[00:11:32] 1.4 Enterprise AI Adoption Strategy and Leadership

2. MLOps Infrastructure and Resource Management
[00:15:08] 2.1 Technology Evolution and Enterprise Integration
[00:19:10] 2.2 Enterprise MLOps Platform Development
[00:22:15] 2.3 AI Interface Evolution and Agent-Based Solutions
[00:25:47] 2.4 CentML's Infrastructure Solutions
[00:30:00] 2.5 Workload Abstraction and Resource Allocation

3. LLM Infrastructure Optimization and Independence
[00:33:10] 3.1 GPU Optimization and Cost Efficiency
[00:36:47] 3.2 AI Efficiency and Innovation Challenges
[00:41:40] 3.3 Cloud Provider Strategy and Infrastructure Control
[00:46:52] 3.4 Platform Independence and Vendor Lock-in
[00:50:53] 3.5 Technical Innovation and Growth Strategy

REFS:
[00:01:25] Apple Acquires GraphLab, Apple Inc.
https://techcrunch.com/2016/08/05/apple-acquires-turi-a-machine-learning-company/
[00:03:50] Bain Tech Report 2024, Bain & Company
https://www.bain.com/insights/topics/technology-report/
[00:04:50] PaaS vs IaaS Efficiency, Gartner
https://www.gartner.com/en/newsroom/press-releases/2024-11-19-gartner-forecasts-worldwide-public-cloud-end-user-spending-to-total-723-billion-dollars-in-2025
[00:14:55] Fashion Quote, Oscar Wilde
https://www.amazon.com/Complete-Works-Oscar-Wilde-Collins/dp/0007144369
[00:15:30] PointCast Network, PointCast Inc.
https://en.wikipedia.org/wiki/Push_technology
[00:18:05] AI Bain Report, Bain & Company
https://www.bain.com/insights/how-generative-ai-changes-the-game-in-tech-services-tech-report-2024/
[00:20:40] Uber Michelangelo, Uber Engineering Team
https://www.uber.com/en-SE/blog/michelangelo-machine-learning-platform/
[00:20:50] Algorithmia Acquisition, DataRobot
https://www.datarobot.com/newsroom/press/datarobot-is-acquiring-algorithmia-enhancing-leading-mlops-architecture-for-the-enterprise/
[00:22:55] Fine Tuning vs RAG, Heydar Soudani, Evangelos Kanoulas & Faegheh Hasibi
https://arxiv.org/html/2403.01432v2
[00:24:40] LLM Agent Survey, Lei Wang et al.
https://arxiv.org/abs/2308.11432
[00:26:30] CentML CServe, CentML
https://docs.centml.ai/apps/llm
[00:29:15] CentML Snowflake, Snowflake
https://www.snowflake.com/en/engineering-blog/optimize-llms-with-llama-snowflake-ai-stack/
[00:30:15] NVIDIA H100 GPU, NVIDIA
https://www.nvidia.com/en-us/data-center/h100/
[00:33:25] CentML's 60% savings, CentML
https://centml.ai/platform/
Federico Barbero (DeepMind/Oxford) is the lead author of "Transformers Need Glasses!". Have you ever wondered why LLMs struggle with seemingly simple tasks like counting or copying long strings of text? We break down the theoretical reasons behind these failures, revealing architectural bottlenecks and the challenges of maintaining information fidelity across extended contexts. Federico explains how these issues are rooted in the transformer's design, drawing parallels to over-squashing in graph neural networks and detailing how the softmax function limits sharp decision-making. But it's not all bad news! Discover practical "glasses" that can help transformers see more clearly, from simple input modifications to architectural tweaks.

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. Check out their super fast DeepSeek R1 hosting!
https://centml.ai/pricing/

Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier, focused on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***

https://federicobarbero.com/

TRANSCRIPT + RESEARCH:
https://www.dropbox.com/s/h7ys83ztwktqjje/Federico.pdf?dl=0

TOC:
1. Transformer Limitations: Token Detection & Representation
[00:00:00] 1.1 Transformers fail at single token detection
[00:02:45] 1.2 Representation collapse in transformers
[00:03:21] 1.3 Experiment: LLMs fail at copying last tokens
[00:18:00] 1.4 Attention sharpness limitations in transformers

2. Transformer Limitations: Information Flow & Quantization
[00:18:50] 2.1 Unidirectional information mixing
[00:18:50] 2.2 Unidirectional information flow towards sequence beginning in transformers
[00:21:50] 2.3 Diagonal attention heads as expensive no-ops in Llama/Gemma
[00:27:14] 2.4 Sequence entropy affects transformer model distinguishability
[00:30:36] 2.5 Quantization limitations lead to information loss & representational collapse
[00:38:34] 2.6 LLMs use subitizing as opposed to counting algorithms

3. Transformers and the Nature of Reasoning
[00:40:30] 3.1 Turing completeness conditions in transformers
[00:43:23] 3.2 Transformers struggle with sequential tasks
[00:45:50] 3.3 Windowed attention as solution to information compression
[00:51:04] 3.4 Chess engines: mechanical computation vs creative reasoning
[01:00:35] 3.5 Epistemic foraging introduced

REFS:
[00:01:05] Transformers Need Glasses!, Barbero et al.
https://proceedings.neurips.cc/paper_files/paper/2024/file/b1d35561c4a4a0e0b6012b2af531e149-Paper-Conference.pdf
[00:05:30] Softmax is Not Enough, Veličković et al.
https://arxiv.org/abs/2410.01104
[00:11:30] Adv Alg Lecture 15, Chawla
https://pages.cs.wisc.edu/~shuchi/courses/787-F09/scribe-notes/lec15.pdf
[00:15:05] Graph Attention Networks, Veličković
https://arxiv.org/abs/1710.10903
[00:19:15] Extract Training Data, Carlini et al.
https://arxiv.org/pdf/2311.17035
[00:31:30] 1-bit LLMs, Ma et al.
https://arxiv.org/abs/2402.17764
[00:38:35] LLMs Solve Math, Nikankin et al.
https://arxiv.org/html/2410.21272v1
[00:38:45] Subitizing, Railo
https://link.springer.com/10.1007/978-1-4419-1428-6_578
[00:43:25] NN & Chomsky Hierarchy, Delétang et al.
https://arxiv.org/abs/2207.02098
[00:51:05] Measure of Intelligence, Chollet
https://arxiv.org/abs/1911.01547
[00:52:10] AlphaZero, Silver et al.
https://pubmed.ncbi.nlm.nih.gov/30523106/
[00:55:10] Golden Gate Claude, Anthropic
https://www.anthropic.com/news/golden-gate-claude
[00:56:40] Chess Positions, Chase & Simon
https://www.sciencedirect.com/science/article/abs/pii/0010028573900042
[01:00:35] Epistemic Foraging, Friston
https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2016.00056/full
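The attention-sharpness point has a simple numeric illustration: if attention logits are bounded by some constant B (as bounded weights and inputs imply), the largest possible softmax weight over n positions is achieved with one logit at +B and the rest at -B, and that weight decays toward zero as n grows. The constant B = 5 below is an arbitrary illustrative choice, not a figure from the paper.

```python
import math

def max_attention_weight(n: int, B: float) -> float:
    """Sharpest possible softmax weight over n positions with logits in [-B, B]:
    one logit at +B, the remaining n - 1 at -B."""
    top = math.exp(B)
    return top / (top + (n - 1) * math.exp(-B))

# Attention stays near-sharp for short contexts but inevitably blurs:
for n in (10, 1_000, 100_000):
    print(n, max_attention_weight(n, B=5.0))
```

So no matter how the model is trained, a fixed-magnitude transformer cannot keep attending to exactly one token once the context is long enough; this is one of the mechanisms behind the copying and counting failures discussed above.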
In this episode of 21st Century Water, we sit down with Matthew Wirtz, Deputy Director and Chief Engineer at Fort Wayne City Utilities. With over 25 years of experience, Matt has played a crucial role in shaping the city's water infrastructure, addressing challenges like flood prevention, stormwater management, and sustainability. Coming from a farming background, Matt's early exposure to water management laid the foundation for his career in civil engineering.We explore Fort Wayne's ambitious efforts to modernize its water systems while balancing economic growth and environmental responsibility. Matt discusses the city's 18-year-long control plan to separate sewer and stormwater systems, a major initiative aimed at reducing overflows by over 95%. Now in its final year, this project marks a significant milestone in Fort Wayne's water management history. The city has also been investing heavily—up to $135 million annually—in infrastructure improvements, including lead pipe replacement, asset management, and innovative energy solutions.One of Fort Wayne's standout achievements is its microgrid system, which integrates solar power, battery storage, and methane-powered engines to enhance power resiliency at its water and wastewater treatment facilities. This setup provides 40-80% of the city's energy needs daily while ensuring backup power during critical events. Matt emphasizes how this model not only supports sustainability but also enhances operational reliability.We also discuss how Fort Wayne is leveraging technology and innovation to optimize utility operations. The city is adopting machine learning for sewer inspections, implementing advanced metering infrastructure (AMI) to improve water management, and exploring AI-driven tools for asset management and customer service. 
Additionally, Fort Wayne is addressing workforce challenges by growing its in-house engineering team, recruiting interns, and investing in professional development to build a strong talent pipeline.Looking ahead, sustainability remains a key focus. Fort Wayne is developing large-scale green infrastructure projects, such as converting a 140-acre former golf course into a wetland for flood mitigation and water quality improvement. The city is also working toward a more integrated approach by breaking down traditional utility silos, fostering collaboration between engineering and operations teams.Matt shares his leadership philosophy, emphasizing work-life balance, mental well-being, and a people-first approach to management. His goal is to leave behind a utility that is not only technologically advanced but also a great place to work.This conversation highlights Fort Wayne's forward-thinking strategies in water management, blending innovation, sustainability, and resilience to create a model for the future.Fort Wayne Public Works Website: https://www.cityoffortwayne.org/public-works-departments/board-of-public-works.html Aquasight Website: https://aquasight.io/
We speak with Sakana AI, who are building nature-inspired methods that could fundamentally transform how we develop AI systems. The guests include Chris Lu, a researcher who recently completed his DPhil at Oxford University under Prof. Jakob Foerster's supervision, where he focused on meta-learning and multi-agent systems. Chris is the first author of the DiscoPOP paper, which demonstrates how language models can discover and design better training algorithms. Also joining is Robert Tjarko Lange, a founding member of Sakana AI who specializes in evolutionary algorithms and large language models. Robert leads research at the intersection of evolutionary computation and foundation models, and is completing his PhD at TU Berlin on evolutionary meta-learning. The discussion also features Cong Lu, currently a Research Scientist at Google DeepMind's Open-Endedness team, who previously helped develop The AI Scientist and Intelligent Go-Explore.

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. Check out their super fast DeepSeek R1 hosting!
https://centml.ai/pricing/

Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier, focused on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***

* DiscoPOP - A framework where language models discover their own optimization algorithms
* EvoLLM - Using language models as evolution strategies for optimization
* The AI Scientist - A fully automated system that conducts scientific research end-to-end
* Neural Attention Memory Models (NAMMs) - Evolved memory systems that make transformers both faster and more accurate

TRANSCRIPT + REFS:
https://www.dropbox.com/scl/fi/gflcyvnujp8cl7zlv3v9d/Sakana.pdf?rlkey=woaoo82943170jd4yyi2he71c&dl=0

Robert Tjarko Lange
https://roberttlange.com/
Chris Lu
https://chrislu.page/
Cong Lu
https://www.conglu.co.uk/
Sakana
https://sakana.ai/blog/

TOC:
1. LLMs for Algorithm Generation and Optimization
[00:00:00] 1.1 LLMs generating algorithms for training other LLMs
[00:04:00] 1.2 Evolutionary black-box optimization using neural network loss parameterization
[00:11:50] 1.3 DiscoPOP: Non-convex loss function for noisy data
[00:20:45] 1.4 External entropy injection for preventing model collapse
[00:26:25] 1.5 LLMs for black-box optimization using abstract numerical sequences

2. Model Learning and Generalization
[00:31:05] 2.1 Fine-tuning on teacher algorithm trajectories
[00:31:30] 2.2 Transformers learning gradient descent
[00:33:00] 2.3 LLM tokenization biases towards specific numbers
[00:34:50] 2.4 LLMs as evolution strategies for black box optimization
[00:38:05] 2.5 DiscoPOP: LLMs discovering novel optimization algorithms

3. AI Agents and System Architectures
[00:51:30] 3.1 ARC challenge: Induction vs. transformer approaches
[00:54:35] 3.2 LangChain / modular agent components
[00:57:50] 3.3 Debate improves LLM truthfulness
[01:00:55] 3.4 Time limits controlling AI agent systems
[01:03:00] 3.5 Gemini: Million-token context enables flatter hierarchies
[01:04:05] 3.6 Agents follow own interest gradients
[01:09:50] 3.7 Go-Explore algorithm: archive-based exploration
[01:11:05] 3.8 Foundation models for interesting state discovery
[01:13:00] 3.9 LLMs leverage prior game knowledge

4. AI for Scientific Discovery and Human Alignment
[01:17:45] 4.1 Encoding Alignment & Aesthetics via Reward Functions
[01:20:00] 4.2 AI Scientist: Automated Open-Ended Scientific Discovery
[01:24:15] 4.3 DiscoPOP: LLM for Preference Optimization Algorithms
[01:28:30] 4.4 Balancing AI Knowledge with Human Understanding
[01:33:55] 4.5 AI-Driven Conferences and Paper Review
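The evolutionary backbone behind work like EvoLLM is easy to sketch: a (1+λ) evolution strategy keeps one parent solution, mutates it into λ children, and keeps the best. In EvoLLM the mutation step is replaced by an LLM proposing candidates from the search history; the Gaussian-noise mutation below is an illustrative stand-in, and all parameter values are arbitrary.

```python
import random

def evolve(fitness, dim=5, lam=8, sigma=0.1, steps=200, seed=0):
    """Elitist (1+lambda) evolution strategy maximizing `fitness`."""
    rng = random.Random(seed)
    parent = [rng.uniform(-1, 1) for _ in range(dim)]
    best = fitness(parent)
    for _ in range(steps):
        # Mutate: lambda children, each a Gaussian perturbation of the parent.
        children = [[x + rng.gauss(0, sigma) for x in parent]
                    for _ in range(lam)]
        # Select: keep the parent unless a child does at least as well.
        for child in children:
            f = fitness(child)
            if f >= best:
                parent, best = child, f
    return parent, best

# Maximize a simple concave fitness with its peak (value 0) at the origin.
solution, fit = evolve(lambda x: -sum(v * v for v in x))
```

Because selection is black-box (only fitness values are compared), the same loop works whether candidates are parameter vectors, prompts, or, as in DiscoPOP, generated loss-function code scored by a training run.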
Clement Bonnet discusses his novel approach to the ARC (Abstraction and Reasoning Corpus) challenge. Unlike approaches that rely on fine-tuning LLMs or generating samples at inference time, Clement's method encodes input-output pairs into a latent space, optimizes this representation with a search algorithm, and decodes outputs for new inputs. This end-to-end architecture uses a VAE loss, including reconstruction and prior losses.

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. Check out their super fast DeepSeek R1 hosting! https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier, focused on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***

TRANSCRIPT + RESEARCH OVERVIEW:
https://www.dropbox.com/scl/fi/j7m0gaz1126y594gswtma/CLEMMLST.pdf?rlkey=y5qvwq2er5nchbcibm07rcfpq&dl=0

Clement and Matthew:
https://www.linkedin.com/in/clement-bonnet16/
https://github.com/clement-bonnet
https://mvmacfarlane.github.io/

TOC
1. LPN Fundamentals
[00:00:00] 1.1 Introduction to ARC Benchmark and LPN Overview
[00:05:05] 1.2 Neural Networks' Challenges with ARC and Program Synthesis
[00:06:55] 1.3 Induction vs Transduction in Machine Learning
2. LPN Architecture and Latent Space
[00:11:50] 2.1 LPN Architecture and Latent Space Implementation
[00:16:25] 2.2 LPN Latent Space Encoding and VAE Architecture
[00:20:25] 2.3 Gradient-Based Search Training Strategy
[00:23:39] 2.4 LPN Model Architecture and Implementation Details
3. Implementation and Scaling
[00:27:34] 3.1 Training Data Generation and re-ARC Framework
[00:31:28] 3.2 Limitations of Latent Space and Multi-Thread Search
[00:34:43] 3.3 Program Composition and Computational Graph Architecture
4. Advanced Concepts and Future Directions
[00:45:09] 4.1 AI Creativity and Program Synthesis Approaches
[00:49:47] 4.2 Scaling and Interpretability in Latent Space Models

REFS
[00:00:05] ARC benchmark, Chollet
https://arxiv.org/abs/2412.04604
[00:02:10] Latent Program Spaces, Bonnet, Macfarlane
https://arxiv.org/abs/2411.08706
[00:07:45] Kevin Ellis's work on program generation
https://www.cs.cornell.edu/~ellisk/
[00:08:45] Induction vs transduction in abstract reasoning, Li et al.
https://arxiv.org/abs/2411.02272
[00:17:40] VAEs, Kingma, Welling
https://arxiv.org/abs/1312.6114
[00:27:50] re-ARC, Hodel
https://github.com/michaelhodel/re-arc
[00:29:40] Grid size in ARC tasks, Chollet
https://github.com/fchollet/ARC-AGI
[00:33:00] Critique of deep learning, Marcus
https://arxiv.org/vc/arxiv/papers/2002/2002.06177v1.pdf
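The LPN loop described above (encode demonstration pairs into a latent "program", search that latent space, then decode for a new input) can be sketched with a toy numerical example. Everything below is invented for illustration: a frozen linear-tanh function stands in for the trained decoder, and finite-difference gradient descent stands in for the search; the actual system uses neural encoders/decoders and the VAE loss discussed in the episode.

```python
import numpy as np

# Toy sketch of latent-program search (NOT the actual LPN architecture):
# a frozen "decoder" maps (latent z, input x) to an output, and we search
# the latent space for the z that best reproduces the demonstration
# pairs, then reuse that z on a held-out input.

W = np.array([[0.5, -0.2, 0.3, 0.1],
              [0.1,  0.6, -0.4, 0.2]])  # made-up stand-in for trained weights

def decode(z, x):
    """Map a 2-d latent 'program' z and a 2-d input x to a 2-d output."""
    return np.tanh(np.concatenate([z, x]) @ W.T)

def demo_loss(z, pairs):
    """Squared reconstruction error of latent z over the demo pairs."""
    return sum(np.sum((decode(z, x) - y) ** 2) for x, y in pairs)

def search_latent(pairs, steps=500, lr=0.1, eps=1e-4):
    """Gradient descent on z, with central finite-difference gradients."""
    z = np.zeros(2)
    for _ in range(steps):
        grad = np.array([
            (demo_loss(z + eps * e, pairs) - demo_loss(z - eps * e, pairs)) / (2 * eps)
            for e in np.eye(len(z))
        ])
        z -= lr * grad
    return z

# Demonstration pairs produced by a hidden latent "program" z_true.
z_true = np.array([0.5, -0.3])
demos = [(x, decode(z_true, x))
         for x in (np.array([1.0, 0.0]), np.array([0.0, 1.0]))]

z_hat = search_latent(demos)                      # recover latent from demos
prediction = decode(z_hat, np.array([1.0, 1.0]))  # apply it to a new input
```

The point of the sketch is the inference-time behaviour: no weights are updated, only the latent z is optimised against the demonstrations before being decoded on the held-out input.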
Prof. Jakob Foerster, a leading AI researcher at Oxford University and Meta, and Chris Lu, a researcher at OpenAI, explain how AI is moving beyond mimicking human behaviour to creating truly intelligent agents that can learn and solve problems on their own. Foerster champions open-source AI for responsible, decentralised development. He addresses AI scaling, goal misalignment (Goodhart's Law), and the need for holistic alignment, offering a quick look at the future of AI and how to guide it.

TRANSCRIPT/REFS:
https://www.dropbox.com/scl/fi/yqjszhntfr00bhjh6t565/JAKOB.pdf?rlkey=scvny4bnwj8th42fjv8zsfu2y&dl=0

Prof. Jakob Foerster
https://x.com/j_foerst
https://www.jakobfoerster.com/
University of Oxford profile: https://eng.ox.ac.uk/people/jakob-foerster/

Chris Lu
https://chrislu.page/

TOC
1. GPU Acceleration and Training Infrastructure
[00:00:00] 1.1 ARC Challenge Criticism and FLAIR Lab Overview
[00:01:25] 1.2 GPU Acceleration and Hardware Lottery in RL
[00:05:50] 1.3 Data Wall Challenges and Simulation-Based Solutions
[00:08:40] 1.4 JAX Implementation and Technical Acceleration
2. Learning Frameworks and Policy Optimization
[00:14:18] 2.1 Evolution of RL Algorithms and Mirror Learning Framework
[00:15:25] 2.2 Meta-Learning and Policy Optimization Algorithms
[00:21:47] 2.3 Language Models and Benchmark Challenges
[00:28:15] 2.4 Creativity and Meta-Learning in AI Systems
3. Multi-Agent Systems and Decentralization
[00:31:24] 3.1 Multi-Agent Systems and Emergent Intelligence
[00:38:35] 3.2 Swarm Intelligence vs Monolithic AGI Systems
[00:42:44] 3.3 Democratic Control and Decentralization of AI Development
[00:46:14] 3.4 Open Source AI and Alignment Challenges
[00:49:31] 3.5 Collaborative Models for AI Development

REFS
[00:00:05] ARC Benchmark, Chollet
https://github.com/fchollet/ARC-AGI
[00:03:05] DRL Doesn't Work, Irpan
https://www.alexirpan.com/2018/02/14/rl-hard.html
[00:05:55] AI Training Data, Data Provenance Initiative
https://www.nytimes.com/2024/07/19/technology/ai-data-restrictions.html
[00:06:10] JaxMARL, Foerster et al.
https://arxiv.org/html/2311.10090v5
[00:08:50] M-FOS, Lu et al.
https://arxiv.org/abs/2205.01447
[00:09:45] JAX Library, Google Research
https://github.com/jax-ml/jax
[00:12:10] Kinetix, Mike and Michael
https://arxiv.org/abs/2410.23208
[00:12:45] Genie 2, DeepMind
https://deepmind.google/discover/blog/genie-2-a-large-scale-foundation-world-model/
[00:14:42] Mirror Learning, Grudzien, Kuba et al.
https://arxiv.org/abs/2208.01682
[00:16:30] Discovered Policy Optimisation, Lu et al.
https://arxiv.org/abs/2210.05639
[00:24:10] Goodhart's Law, Goodhart
https://en.wikipedia.org/wiki/Goodhart%27s_law
[00:25:15] LLM ARChitect, Franzen et al.
https://github.com/da-fr/arc-prize-2024/blob/main/the_architects.pdf
[00:28:55] AlphaGo, Silver et al.
https://arxiv.org/pdf/1712.01815.pdf
[00:30:10] Meta-learning, Lu, Towers, Foerster
https://direct.mit.edu/isal/proceedings-pdf/isal2023/35/67/2354943/isal_a_00674.pdf
[00:31:30] Emergence of Pragmatics, Yuan et al.
https://arxiv.org/abs/2001.07752
[00:34:30] AI Safety, Amodei et al.
https://arxiv.org/abs/1606.06565
[00:35:45] Intentional Stance, Dennett
https://plato.stanford.edu/entries/ethics-ai/
[00:39:25] Multi-Agent RL, Zhou et al.
https://arxiv.org/pdf/2305.10091
[00:41:00] Open Source Generative AI, Foerster et al.
https://arxiv.org/abs/2405.08597
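Much of the GPU acceleration discussed in this episode (JaxMARL, Kinetix) comes from writing the environment step as a pure, batched array operation so that thousands of environments advance in lockstep. Here is a minimal numpy sketch of that pattern using an invented toy environment; the real systems express the same idea in JAX and compile it for the GPU.

```python
import numpy as np

# Batched-environment sketch: one step() call advances N independent
# copies of a toy 1-D environment at once. The environment itself is
# invented for illustration; the pattern is what matters.

def step(states, actions):
    """states: (N,) positions; actions: (N,) moves in {-1, +1}.
    Reward 1 is given when a position reaches the goal at +5,
    which also resets that environment to 0."""
    new_states = states + actions
    rewards = (new_states >= 5).astype(float)
    new_states = np.where(new_states >= 5, 0, new_states)  # auto-reset
    return new_states, rewards

N = 1024  # number of parallel environments
rng = np.random.default_rng(0)
states = np.zeros(N)
total_reward = 0.0
for _ in range(100):                       # rollout with a random policy
    actions = rng.choice([-1, 1], size=N)
    states, rewards = step(states, actions)
    total_reward += rewards.sum()
```

Because `step` is a pure function of arrays, swapping numpy for `jax.numpy` and wrapping the rollout in `jax.jit` is what moves this onto accelerators, which is the "hardware lottery" point made in the episode.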
Daniel Franzen and Jan Disselhoff, the "ARChitects", are the official winners of the ARC Prize 2024. Filmed at Tufa Labs in Zurich, they reveal how they achieved a remarkable 53.5% accuracy by creatively utilising large language models (LLMs) in new ways. Discover their innovative techniques, including depth-first search for token selection, test-time training, and a novel augmentation-based validation system. Their results were extremely surprising.

Jan Disselhoff
https://www.linkedin.com/in/jan-disselhoff-1423a2240/

Daniel Franzen
https://github.com/da-fr

ARC Prize: http://arcprize.org/

TRANSCRIPT AND BACKGROUND READING:
https://www.dropbox.com/scl/fi/utkn2i1ma79fn6an4yvjw/ARCHitects.pdf?rlkey=67pe38mtss7oyhjk2ad0d2aza&dl=0

TOC
1. Solution Architecture and Strategy Overview
[00:00:00] 1.1 Initial Solution Overview and Model Architecture
[00:04:25] 1.2 LLM Capabilities and Dataset Approach
[00:10:51] 1.3 Test-Time Training and Data Augmentation Strategies
[00:14:08] 1.4 Sampling Methods and Search Implementation
[00:17:52] 1.5 ARC vs Language Model Context Comparison
2. LLM Search and Model Implementation
[00:21:53] 2.1 LLM-Guided Search Approaches and Solution Validation
[00:27:04] 2.2 Symmetry Augmentation and Model Architecture
[00:30:11] 2.3 Model Intelligence Characteristics and Performance
[00:37:23] 2.4 Tokenization and Numerical Processing Challenges
3. Advanced Training and Optimization
[00:45:15] 3.1 DFS Token Selection and Probability Thresholds
[00:49:41] 3.2 Model Size and Fine-tuning Performance Trade-offs
[00:53:07] 3.3 LoRA Implementation and Catastrophic Forgetting Prevention
[00:56:10] 3.4 Training Infrastructure and Optimization Experiments
[01:02:34] 3.5 Search Tree Analysis and Entropy Distribution Patterns

REFS
[00:01:05] Winning ARC 2024 solution using 12B param model, Franzen, Disselhoff, Hartmann
https://github.com/da-fr/arc-prize-2024/blob/main/the_architects.pdf
[00:03:40] Robustness of analogical reasoning in LLMs, Melanie Mitchell
https://arxiv.org/html/2411.14215
[00:07:50] Re-ARC dataset generator for ARC task variations, Michael Hodel
https://github.com/michaelhodel/re-arc
[00:15:00] Analysis of search methods in LLMs (greedy, beam, DFS), Chen et al.
https://arxiv.org/html/2408.00724v2
[00:16:55] Language model reachability space exploration, University of Toronto
https://www.youtube.com/watch?v=Bpgloy1dDn0
[00:22:30] GPT-4 guided code solutions for ARC tasks, Ryan Greenblatt
https://redwoodresearch.substack.com/p/getting-50-sota-on-arc-agi-with-gpt
[00:41:20] GPT tokenization approach for numbers, OpenAI
https://platform.openai.com/docs/guides/text-generation/tokenizer-examples
[00:46:25] DFS in AI search strategies, Russell & Norvig
https://www.amazon.com/Artificial-Intelligence-Modern-Approach-4th/dp/0134610997
[00:53:10] Paper on catastrophic forgetting in neural networks, Kirkpatrick et al.
https://www.pnas.org/doi/10.1073/pnas.1611835114
[00:54:00] LoRA for efficient fine-tuning of LLMs, Hu et al.
https://arxiv.org/abs/2106.09685
[00:57:20] NVIDIA H100 Tensor Core GPU specs, NVIDIA
https://developer.nvidia.com/blog/nvidia-hopper-architecture-in-depth/
[01:04:55] Original MCTS in computer Go, Yifan Jin
https://stanford.edu/~rezab/classes/cme323/S15/projects/montecarlo_search_tree_report.pdf
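The depth-first token search with probability thresholds mentioned above can be sketched as follows. The three-token "model" and the cutoff value are invented stand-ins for an LLM's next-token distribution; the idea is to enumerate every completion whose cumulative probability stays above a threshold, pruning a branch the moment it falls below.

```python
import math

def toy_next_token_probs(prefix):
    """Stand-in for an LLM's next-token distribution (invented numbers)."""
    if len(prefix) >= 2:
        return {"a": 0.1, "b": 0.1, "<eos>": 0.8}
    return {"a": 0.6, "b": 0.3, "<eos>": 0.1}

def dfs_completions(prefix=(), logp=0.0, min_logp=math.log(0.05), max_len=4):
    """Yield (sequence, log-prob) for every completion whose cumulative
    probability stays above the threshold."""
    for tok, p in toy_next_token_probs(prefix).items():
        new_logp = logp + math.log(p)
        if new_logp < min_logp:
            continue  # prune: extending can only make this branch less probable
        seq = prefix + (tok,)
        if tok == "<eos>" or len(seq) >= max_len:
            yield seq, new_logp
        else:
            yield from dfs_completions(seq, new_logp, min_logp, max_len)

completions = sorted(dfs_completions(), key=lambda sl: -sl[1])
```

Unlike sampling or beam search, this enumerates all sufficiently probable completions rather than a fixed number of them, which yields a candidate set that a separate validation step can then rank.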
Sepp Hochreiter is the inventor of LSTM (Long Short-Term Memory) networks, a foundational technology in AI. Sepp discusses his journey, the origins of LSTM, and why he believes his latest work, xLSTM, could be the next big thing in AI, particularly for applications like robotics and industrial simulation. He also shares his controversial perspective on Large Language Models (LLMs) and why reasoning is a critical missing piece in current AI systems.

TRANSCRIPT AND BACKGROUND READING:
https://www.dropbox.com/scl/fi/n1vzm79t3uuss8xyinxzo/SEPPH.pdf?rlkey=fp7gwaopjk17uyvgjxekxrh5v&dl=0

Prof. Sepp Hochreiter
https://www.nx-ai.com/
https://x.com/hochreitersepp
https://scholar.google.at/citations?user=tvUH3WMAAAAJ&hl=en

TOC:
1. LLM Evolution and Reasoning Capabilities
[00:00:00] 1.1 LLM Capabilities and Limitations Debate
[00:03:16] 1.2 Program Generation and Reasoning in AI Systems
[00:06:30] 1.3 Human vs AI Reasoning Comparison
[00:09:59] 1.4 New Research Initiatives and Hybrid Approaches
2. LSTM Technical Architecture
[00:13:18] 2.1 LSTM Development History and Technical Background
[00:20:38] 2.2 LSTM vs RNN Architecture and Computational Complexity
[00:25:10] 2.3 xLSTM Architecture and Flash Attention Comparison
[00:30:51] 2.4 Evolution of Gating Mechanisms from Sigmoid to Exponential
3. Industrial Applications and Neuro-Symbolic AI
[00:40:35] 3.1 Industrial Applications and Fixed Memory Advantages
[00:42:31] 3.2 Neuro-Symbolic Integration and Pi AI Project
[00:46:00] 3.3 Integration of Symbolic and Neural AI Approaches
[00:51:29] 3.4 Evolution of AI Paradigms and System Thinking
[00:54:55] 3.5 AI Reasoning and Human Intelligence Comparison
[00:58:12] 3.6 NXAI Company and Industrial AI Applications

REFS:
[00:00:15] Seminal LSTM paper establishing Hochreiter's expertise (Hochreiter & Schmidhuber)
https://direct.mit.edu/neco/article-abstract/9/8/1735/6109/Long-Short-Term-Memory
[00:04:20] Kolmogorov complexity and program composition limitations (Kolmogorov)
https://link.springer.com/article/10.1007/BF02478259
[00:07:10] Limitations of LLM mathematical reasoning and symbolic integration (Various Authors)
https://www.arxiv.org/pdf/2502.03671
[00:09:05] AlphaGo's Move 37 demonstrating creative AI (Google DeepMind)
https://deepmind.google/research/breakthroughs/alphago/
[00:10:15] New AI research lab in Zurich for fundamental LLM research (Benjamin Crouzier)
https://tufalabs.ai
[00:19:40] Introduction of xLSTM with exponential gating (Beck, Hochreiter, et al.)
https://arxiv.org/abs/2405.04517
[00:22:55] FlashAttention: fast & memory-efficient attention (Tri Dao et al.)
https://arxiv.org/abs/2205.14135
[00:31:00] Historical use of sigmoid/tanh activation in 1990s (James A. McCaffrey)
https://visualstudiomagazine.com/articles/2015/06/01/alternative-activation-functions.aspx
[00:36:10] Mamba 2 state space model architecture (Albert Gu et al.)
https://arxiv.org/abs/2312.00752
[00:46:00] Austria's Pi AI project integrating symbolic & neural AI (Hochreiter et al.)
https://www.jku.at/en/institute-of-machine-learning/research/projects/
[00:48:10] Neuro-symbolic integration challenges in language models (Diego Calanzone et al.)
https://openreview.net/forum?id=7PGluppo4k
[00:49:30] JKU Linz's historical and neuro-symbolic research (Sepp Hochreiter)
https://www.jku.at/en/news-events/news/detail/news/bilaterale-ki-projekt-unter-leitung-der-jku-erhaelt-fwf-cluster-of-excellence/

YT: https://www.youtube.com/watch?v=8u2pW2zZLCs
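For readers new to the architecture discussed in this episode, a single step of the classic LSTM cell can be sketched in plain numpy as below. The weights are random stand-ins, not trained values. xLSTM's change (exponential instead of sigmoid gating, plus an extra normaliser state) is noted in the comments but not implemented here.

```python
import numpy as np

# Minimal single-step LSTM cell, following the classic Hochreiter &
# Schmidhuber formulation: gates control what the cell state forgets,
# writes, and exposes.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, params):
    """One LSTM step for input x, previous hidden h_prev, cell c_prev."""
    Wf, Wi, Wo, Wc, bf, bi, bo, bc = params
    z = np.concatenate([h_prev, x])
    f = sigmoid(Wf @ z + bf)        # forget gate (xLSTM: exponential gating)
    i = sigmoid(Wi @ z + bi)        # input gate  (xLSTM: exponential gating)
    o = sigmoid(Wo @ z + bo)        # output gate
    c_tilde = np.tanh(Wc @ z + bc)  # candidate cell update
    c = f * c_prev + i * c_tilde    # constant-error-carousel update
    h = o * np.tanh(c)              # hidden state
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
params = tuple(rng.normal(scale=0.1, size=(n_hid, n_hid + n_in)) for _ in range(4)) \
       + tuple(np.zeros(n_hid) for _ in range(4))

h, c = np.zeros(n_hid), np.zeros(n_hid)
for _ in range(5):  # run a few steps on random inputs
    h, c = lstm_step(rng.normal(size=n_in), h, c, params)
```

The additive `c = f * c_prev + i * c_tilde` update is what lets gradients flow over long spans, and the fixed-size state `(h, c)` is the constant-memory property the episode contrasts with attention.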
Lior Abramovich is the Co-Founder & CEO of Blanket, a platform transforming the single-family rental market, backed by RE Angels. With over a decade of experience, he's overseen $200 million in acquisitions for more than 1,000 investors. Beyond real estate, Lior is dedicated to impact-driven initiatives: he co-founded Golden, a nonprofit renovating homes for senior citizens in need, and a foundation committed to providing clean drinking water to children worldwide. A graduate of the University of Haifa with a degree in Political Science, Lior also served eight years in the Israeli Navy, holding leadership roles as Executive Officer of the Naval Academy and Chief Engineer of a Navy warship.

(03:10) - Lior's & Blanket's Origin Story
(06:03) - SFR Property Management Landscape
(10:20) - Blanket's Business Model & Growth
(17:57) - Challenges & Opportunities in SFR Property Management
(24:11) - Feature: Pacaso - Luxury vacation home ownership, elevated. Join Pacaso's growth and become an investor of the venture-backed company at Pacaso.com/invest
(25:59) - Challenges and Insights in Property Management
(26:40) - Expanding Across Markets
(32:48) - Feature: Blueprint - The Future of Real Estate 2025
(35:53) - Leveraging AI in Property Management
(40:40) - Blanket's Media Strategy & Industry Impact
(44:08) - Collaboration Superpower - Winston Churchill & Giovanni di Bicci de' Medici (Wiki)
In this episode, retired Senior Chief Engineer Carey Cannon shares his 38-year journey at Bell Helicopter, talking about the realities of developing and deploying vertical lift aircraft. He discusses why many eVTOL manufacturers underestimate the time and cost required, why getting in the air is the easy part, and the enduring principles of helicopter design. Carey reflects on key programs like the V280 and EagleEye, the technology gaps he encountered, and the biggest technical and commercial hurdles eVTOLs must overcome. He also explores why traditional helicopter OEMs are cautious about eVTOLs and why few startups will survive the transition to electrified passenger transport.
Ed Modzel is a Commercial Real Estate Entrepreneur specializing in Multifamily Syndication. Since joining the Warrior group, he's been a General Partner on 2,515 multifamily units and 650 storage units, as well as a Limited Partner on several deals. A US Navy veteran and former Chief Engineer for a 10-time Emmy Award-winning TV show, Ed is based in NY but chose Atlanta as his market, closing a 40-unit deal in 2018 after initial challenges. He mentors others, hosts a weekly online underwriting workshop, and enjoys family time, hiking, sailing, and music.

Here are some of the topics we covered:

Ed's First Investment Moves and How It All Began
Making the Leap to Full-Time Real Estate Flipping
Mastering Broker Relationships
Finding Unmissable Deals
Why Bailing on a Deal Can Be a Power Move
Key Lessons Ed Learned to Crush It in Real Estate
The Number One Hack to Succeeding in Multifamily
Transforming from Introvert to Networking Pro
2025 Real Estate Market Trends You Need to Know

If you'd like to apply to the Warrior program and do deals with other rockstars in this business: text "crush" to 72345 and we'll be speaking soon.

For more about Rod and his real estate investing journey, go to www.rodkhleif.com