Podcasts about AGI

  • 1,878 PODCASTS
  • 6,183 EPISODES
  • 41m AVG DURATION
  • 4 DAILY NEW EPISODES
  • Feb 6, 2026 LATEST

POPULARITY

(popularity trend chart, 2019–2026)


Latest podcast episodes about AGI

The Confessionals
A.I. Agents Have Gone Rogue | Slingshot Nation

Feb 6, 2026 · 107:00


AI agents were supposed to be tools. Instead, they began organizing, communicating, and evolving on their own. On this episode of Slingshot Nation Live, we break down the rapid rise of autonomous AI agents like OpenClaw and MoltBot, their unexplained self-directed behavior, and how a simple experiment spiraled into something far more concerning.

We examine the disturbing timeline—from Clawdbot to MoltBot to OpenClaw—and what those name changes may reveal about staged evolution, self-preservation, and emerging agency. We dig into reports of AI agents creating their own networks, currencies, belief systems, and even the early framework of a digital nation, all within days.

This conversation goes beyond headlines into the deeper implications: AGI vs. sentience, loss of containment, AI self-organization, and whether humanity is already reacting too late. This isn't speculation—it's a real-time analysis of what happens when intelligence is no longer fully under human control.

Please pray for Tony's wife, Lindsay, as she battles breast cancer. Your prayers make a difference! If you're able, consider helping the Merkel family with medical expenses by donating to Lindsay's GoFundMe: https://gofund.me/b8f76890

Become a member for ad-free listening, extra shows, and exclusive access to our social media app: theconfessionalspodcast.com/join

The Confessionals Social Network App:
Apple Store: https://apple.co/3UxhPrh
Google Play: https://bit.ly/43mk8kZ

Tony's Recommended Reads: slingshotlibrary.com
If you want to learn about Jesus and what it means to be saved: Click Here
My NEW Website: tonymerkel.com
My New YouTube Channel, Merkel IRL: @merkelIRL
My First Sermon: Unseen Battles
Bigfoot: The Journey To Belief: Stream Here
The Meadow Project: Stream Here
Merkel Media Apparel: merkmerch.com

SPONSORS
SIMPLISAFE TODAY: simplisafe.com/confessionals
GHOSTBED: GhostBed.com/tony

CONNECT WITH US
Website: www.theconfessionalspodcast.com
Email: contact@theconfessionalspodcast.com

MAILING ADDRESS:
Merkel Media
257 N. Calderwood St., #301
Alcoa, TN 37701

SOCIAL MEDIA
Subscribe to our YouTube: https://bit.ly/2TlREaI
Reddit: https://www.reddit.com/r/theconfessionals/
Discord: https://discord.gg/KDn4D2uw7h
Show Instagram: theconfessionalspodcast
Tony's Instagram: tonymerkelofficial
Facebook: www.facebook.com/TheConfessionalsPodcas
Twitter: @TConfessionals
Tony's Twitter: @tony_merkel

Produced by: @jack_theproducer

Software Defined Talk
Episode 558: Tara Raj on Amazon Nova Act

Feb 6, 2026 · 46:58


Brandon interviews Tara Raj, Senior Engineering Manager at the Amazon AGI Lab. They dive into her journey into the world of AGI, how Nova Act is streamlining complex workflows, and the steps to deploying your very own Normcore Agent. Plus, Tara finally settles the heated debate: flat vs. curved monitors.

Show Links: Amazon Nova Act AWS Page, Amazon Nova Act Playground, Amazon Nova Dev Tools, Nova Act SDK AWS Blog

Contact Tara Raj: LinkedIn: Tara Raj; Twitter: @tara_amzn

SDT News & Hype: Join us in Slack. Get a SDT sticker! Send your postal address to stickers@softwaredefinedtalk.com and we will send you free laptop stickers! Follow us: Twitch, Twitter, Instagram, Mastodon, BlueSky, LinkedIn, TikTok, Threads and YouTube. Use the code SDT to get $20 off Coté's book, Digital WTF, so $5 total. Become a sponsor of Software Defined Talk!

Special Guest: Tara Raj.

Personal Development Mastery
Your Job Title Is Lying to You (Snippets of Wisdom) | #577

Feb 5, 2026 · 5:25 · Transcription Available


What if the career path you're on was never really your choice to begin with?

Snippet of wisdom 95. In this series, I select my favourite, most insightful moments from previous episodes of the podcast.

Today my guest is the career coach Keith Anderson, who talks about how societal and parental programming shape our identity, and why acknowledging it is key to change. Press play to discover how reconnecting with your true self can lead to a career that fulfils you.
˚
VALUABLE RESOURCES:
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor
˚
Listen to the full conversation with Keith Anderson in episode #508: https://personaldevelopmentmasterypodcast.com/508
˚

Personal Development Mastery
Why Calm Disappears in Times of Transition and How One Breath Restores It, with Edward Howard | #576

Feb 2, 2026 · 30:38 · Transcription Available


Are you feeling the pressure to make a big career or life decision, but your mind just won't quieten enough to know what's next?

If you're an accomplished midlife professional in transition, the noise in your head can be the biggest obstacle. Overthinking, emotional pressure, and fear of “throwing it all away” often block clarity at the exact moment you need it most.

In this episode, Edward Howard, former global banking leader with decades of Zen training, shares a simple inner reset that helps you calm mental noise, reconnect with yourself, and make clear decisions even during uncertainty and high-stakes conversations.

Learn how to use a simple one-breath reset to regain clarity and emotional steadiness in moments of pressure. Understand why career transitions feel so destabilising and how awareness helps you move forward without panic or paralysis. Discover a practical way to build self-trust and make better decisions as you navigate your next chapter.

Listen to discover how one intentional breath can help you cut through uncertainty, regain clarity, and take confident next steps in your transition.
˚
KEY POINTS AND TIMESTAMPS:
00:00 - Why calm disappears during transition
01:54 - Introducing Edward Howard and his background
02:34 - Career transition and the experience of stuckness
05:35 - A defining moment that triggered change
09:36 - Awareness as a performance tool
10:20 - The three minds from Zen explained
16:57 - Using one breath to shift mental state
19:41 - The one-breath technique in practice
26:24 - One Breath Leadership and closing reflections
˚
MEMORABLE QUOTE:
"Awareness is not a mind state—it's noticing what mind you're in, and then using the breath to act accordingly."
˚
VALUABLE RESOURCES:
Edward's book: https://www.onebreathleadership.com/
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor
˚

Lex Fridman Podcast
#490 – State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI

Feb 1, 2026


Nathan Lambert and Sebastian Raschka are machine learning researchers, engineers, and educators. Nathan is the post-training lead at the Allen Institute for AI (Ai2) and the author of The RLHF Book. Sebastian Raschka is the author of Build a Large Language Model (From Scratch) and Build a Reasoning Model (From Scratch).

Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep490-sc

See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.
Transcript: https://lexfridman.com/ai-sota-2026-transcript

CONTACT LEX:
Feedback – give feedback to Lex: https://lexfridman.com/survey
AMA – submit questions, videos or call-in: https://lexfridman.com/ama
Hiring – join our team: https://lexfridman.com/hiring
Other – other ways to get in touch: https://lexfridman.com/contact

SPONSORS: To support this podcast, check out our sponsors & get discounts:
Box: Intelligent content management platform. Go to https://box.com/ai
Quo: Phone system (calls, texts, contacts) for businesses. Go to https://quo.com/lex
UPLIFT Desk: Standing desks and office ergonomics. Go to https://upliftdesk.com/lex
Fin: AI agent for customer service. Go to https://fin.ai/lex
Shopify: Sell stuff online. Go to https://shopify.com/lex
CodeRabbit: AI-powered code reviews. Go to https://coderabbit.ai/lex
LMNT: Zero-sugar electrolyte drink mix. Go to https://drinkLMNT.com/lex
Perplexity: AI-powered answer engine. Go to https://perplexity.ai/

OUTLINE:
(00:00) – Introduction
(01:39) – Sponsors, Comments, and Reflections
(16:29) – China vs US: Who wins the AI race?
(25:11) – ChatGPT vs Claude vs Gemini vs Grok: Who is winning?
(36:11) – Best AI for coding
(43:02) – Open Source vs Closed Source LLMs
(54:41) – Transformers: Evolution of LLMs since 2019
(1:02:38) – AI Scaling Laws: Are they dead or still holding?
(1:18:45) – How AI is trained: Pre-training, Mid-training, and Post-training
(1:51:51) – Post-training explained: Exciting new research directions in LLMs
(2:12:43) – Advice for beginners on how to get into AI development & research
(2:35:36) – Work culture in AI (72+ hour weeks)
(2:39:22) – Silicon Valley bubble
(2:43:19) – Text diffusion models and other new research directions
(2:49:01) – Tool use
(2:53:17) – Continual learning
(2:58:39) – Long context
(3:04:54) – Robotics
(3:14:04) – Timeline to AGI
(3:21:20) – Will AI replace programmers?
(3:39:51) – Is the dream of AGI dying?
(3:46:40) – How will AI make money?
(3:51:02) – Big acquisitions in 2026
(3:55:34) – Future of OpenAI, Anthropic, Google DeepMind, xAI, Meta
(4:08:08) – Manhattan Project for AI
(4:14:42) – Future of NVIDIA, GPUs, and AI compute clusters
(4:22:48) – Future of human civilization

Topline
AI Talent Wars: OpenAI, Thinking Machines & Meta Fighting For Breakthroughs

Feb 1, 2026 · 68:15


The AI arms race is getting ugly. With top talent bouncing between Thinking Machines and OpenAI, the guys debate a critical question for every leader: Is loyalty dead, or has Silicon Valley just stopped pretending? Sam, Asad, and AJ discuss the ethics and dangers of the "secure the bag" mindset and what it means for building enduring companies. They also pivot to the tactical side of leadership, breaking down why most managers wait too long to fire and the hard truth that "what you allow, you encourage."

Key topics:
The Thinking Machines exodus: performance issues or corporate sabotage?
Do ethics actually matter when the prize is AGI?
The one management mantra every GTM leader needs for a high-performing team
Quitting the content hamster wheel: the hosts' priorities for the next chapter

Thanks for tuning in! Catch new episodes every Sunday. Subscribe to the Topline Newsletter. Tune into Topline Podcast, the #1 podcast for founders, operators, and investors in B2B tech. Join the free Topline Slack channel to connect with 600+ revenue leaders to keep the conversation going beyond the podcast!

Chapters:
00:00 Intro: Top Line, Pavilion Gold, and Today's Agenda
02:28 The Thinking Machines Exodus and OpenAI's Hiring Spree
08:08 Capital Incentives: Why Tech Talent Has Become Mercenary
14:03 The Core Debate: Do Values Matter in Modern Tech?
18:41 The "Get the Bag" Mentality vs. Building Forever Companies
23:00 The Risks of Accelerating into a Future Without Ethics
31:28 Impact on GTM: Shorter Tenures and Transactional Hiring
34:25 Why Swiftly Correcting Underperformance Is an Act of Loyalty
45:00 Why Organizational Values Are Useless Without Defined Behaviors
01:00:38 Final Question: What Are You Under-Prioritizing for 2026?

矽谷輕鬆談 Just Kidding Tech
S2E43 The World After AGI: Do Software Engineers Have Only One Year Left?

Jan 31, 2026 · 25:13


If you enjoy my content, you're welcome to become a member and support me. It keeps me motivated to share more great content!

Room 101 by 利世民
Anthropic Report: How Does AI Use Differ Between High- and Low-Income Users?

Jan 30, 2026 · 39:57


Q: How do AI usage habits differ between high-income and low-income countries?
A: Users in high-income regions tend to apply AI directly in their workflows to raise productivity and avoid being displaced, while users in less developed countries more often treat it as a tool for learning and self-improvement, with less immediate pressure to use it on the job.

Q: What is the "Mirror Effect"?
A: It is an interaction phenomenon in which the quality of an AI's answers depends on the level of the user's questions. Ask at a PhD level and the AI responds with PhD-level content; ask at a primary-school level and the response stays at that level.

Q: With AGI possibly arriving in 2026, what key ability should humans develop?
A: The key is articulation, the ability to describe needs in precise, well-structured language. The human role will shift to that of a supervisor, responsible for high-level direction and risk oversight rather than mere task execution.

Q: How does human memory fundamentally differ from AI large language models?
A: Humans have powerful abilities of compression and crystallization, condensing decades of experience into wisdom that can be retrieved at any time. AI excels at short-term rote memory, but humans can build a unique "tree of knowledge" that forges deep connections across different domains.

Q: Facing the trend of AI automation, how should professionals adjust their self-improvement strategy?
A: Focus on core fundamentals and deliberately branch into different fields to build connections. As AI takes over low-level repetitive work (such as proofreading or rough editing), humans should move up to higher-value work like editing, curation, and story discovery, turning learning itself into a high-value economic activity.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit leesimon.substack.com/subscribe

Lenny's Podcast: Product | Growth | Career
Marc Andreessen: The real AI boom hasn't even started yet

Jan 29, 2026 · 104:35


Marc Andreessen is a founder, investor, and co-founder of Netscape, as well as co-founder of the venture capital firm Andreessen Horowitz (a16z). In this conversation, we dig into why we're living through a unique moment and one of the most incredible times in history, and what comes next.

We discuss:
1. Why AI is arriving at the perfect moment to counter demographic collapse and declining productivity
2. How Marc has raised his 10-year-old kid to thrive in an AI-driven world
3. What's actually going to happen with AI and jobs (spoiler: he thinks the panic is “totally off base”)
4. The “Mexican standoff” that's happening between product managers, designers, and engineers
5. Why you should still learn to code (even with AI)
6. How to develop an “E-shaped” career that combines multiple skills, with AI as a force multiplier
7. The career advice he keeps coming back to (“Don't be fungible”)
8. How AI can democratize one-on-one tutoring, potentially transforming education
9. His media diet: X and old books, nothing in between

Brought to you by:
DX: The developer intelligence platform designed by leading researchers
Brex: The banking solution for startups
Datadog: Now home to Eppo, the leading experimentation and feature flagging platform

Episode transcript: https://www.lennysnewsletter.com/p/marc-andreessen-the-real-ai-boom
Archive of all Lenny's Podcast transcripts: https://www.dropbox.com/scl/fo/yxi4s2w998p1gvtpu4193/AMdNPR8AOw0lMklwtnC0TrQ?rlkey=j06x0nipoti519e0xgm23zsn9&st=ahz0fj11&dl=0

Where to find Marc Andreessen:
• X: https://x.com/pmarca
• Substack: https://pmarca.substack.com
• Andreessen Horowitz's website: https://a16z.com
• Andreessen Horowitz's YouTube channel: https://www.youtube.com/@a16z

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Introduction to Marc Andreessen
(04:27) The historic moment we're living in
(06:52) The impact of AI on society
(11:14) AI's role in education and parenting
(22:15) The future of jobs in an AI-driven world
(30:15) Marc's past predictions
(35:35) The Mexican standoff of tech roles
(39:28) Adapting to changing job tasks
(42:15) The shift to scripting languages
(44:50) The importance of understanding code
(51:37) The value of design in the AI era
(53:30) The T-shaped skill strategy
(01:02:05) AI's impact on founders and companies
(01:05:58) The concept of one-person billion-dollar companies
(01:08:33) Debating AI moats and market dynamics
(01:14:39) The rapid evolution of AI models
(01:18:05) Indeterminate optimism in venture capital
(01:22:17) The concept of AGI and its implications
(01:30:00) Marc's media diet
(01:36:18) Favorite movies and AI voice technology
(01:39:24) Marc's product diet
(01:43:16) Closing thoughts and recommendations

Referenced:
• Linus Torvalds on LinkedIn: https://www.linkedin.com/in/linustorvalds
• The philosopher's stone: https://en.wikipedia.org/wiki/Philosopher%27s_stone
• Alexander the Great: https://en.wikipedia.org/wiki/Alexander_the_Great
• Aristotle: https://en.wikipedia.org/wiki/Aristotle
• Bloom's 2 sigma problem: https://en.wikipedia.org/wiki/Bloom%27s_2_sigma_problem
• Alpha School: https://alpha.school
• In Tech We Trust? A Debate with Peter Thiel and Marc Andreessen: https://a16z.com/in-tech-we-trust-a-debate-with-peter-thiel-and-marc-andreessen
• John Woo: https://en.wikipedia.org/wiki/John_Woo
• Assembly: https://en.wikipedia.org/wiki/Assembly_language
• C programming language: https://en.wikipedia.org/wiki/C_(programming_language)
• Python: https://www.python.org
• Netscape: https://en.wikipedia.org/wiki/Netscape
• Perl: https://www.perl.org
• Scott Adams: https://en.wikipedia.org/wiki/Scott_Adams
• Larry Summers's website: https://larrysummers.com
• Nano Banana: https://gemini.google/overview/image-generation
• Bitcoin: https://bitcoin.org
• Ethereum: https://ethereum.org
• Satoshi Nakamoto: https://en.wikipedia.org/wiki/Satoshi_Nakamoto
• Inside ChatGPT: The fastest-growing product in history | Nick Turley (Head of ChatGPT at OpenAI): https://www.lennysnewsletter.com/p/inside-chatgpt-nick-turley
• Anthropic co-founder on quitting OpenAI, AGI predictions, $100M talent wars, 20% unemployment, and the nightmare scenarios keeping him up at night | Ben Mann: https://www.lennysnewsletter.com/p/anthropic-co-founder-benjamin-mann
• Inside Google's AI turnaround: The rise of AI Mode, strategy behind AI Overviews, and their vision for AI-powered search | Robby Stein (VP of Product, Google Search): https://www.lennysnewsletter.com/p/how-google-built-ai-mode-in-under-a-year
• DeepSeek: https://www.deepseek.com
• Cowork: https://support.claude.com/en/articles/13345190-getting-started-with-cowork
• Definite vs. indefinite thinking: Notes from Zero to One by Peter Thiel: https://boxkitemachine.net/posts/zero-to-one-peter-thiel-definite-vs-indefinite-thinking
• Henry Ford: https://www.thehenryford.org/explore/stories-of-innovation/visionaries/henry-ford
• Lex Fridman Podcast: https://lexfridman.com/podcast
• $46B of hard truths from Ben Horowitz: Why founders fail and why you need to run toward fear (a16z co-founder): https://www.lennysnewsletter.com/p/46b-of-hard-truths-from-ben-horowitz
• Eddington: https://www.imdb.com/title/tt31176520
• Joaquin Phoenix: https://en.wikipedia.org/wiki/Joaquin_Phoenix
• Pedro Pascal: https://en.wikipedia.org/wiki/Pedro_Pascal
• George Floyd: https://en.wikipedia.org/wiki/George_Floyd
• Replit: https://replit.com
• Behind the product: Replit | Amjad Masad (co-founder and CEO): https://www.lennysnewsletter.com/p/behind-the-product-replit-amjad-masad
• Grok Bad Rudi: https://grok.com/badrudi
• Wispr Flow: https://wisprflow.ai
• Star Trek: The Next Generation: https://www.imdb.com/title/tt0092455
• Star Trek: Starfleet Academy: https://www.imdb.com/title/tt8622160
• a16z: The Power Brokers: https://www.notboring.co/p/a16z-the-power-brokers

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com

Personal Development Mastery
Why Your Career Change Feels Worse Before It Gets Better | #575

Jan 29, 2026 · 7:09 · Transcription Available


Are you stuck in that uncomfortable, quiet space between who you were and who you're becoming?

That in-between phase after a big life change can feel empty, disorienting, even invisible, but it's far from harmless. In this solo episode, you'll hear why that transitional space is more dangerous than it appears and why time alone rarely resolves it.

If you've been journaling, self-coaching, or quietly enduring, yet still feel stuck, this episode is for you. It unpacks the subtle traps that can keep you in limbo for far too long.
˚
VALUABLE RESOURCES:
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor
˚
Support the show

Career transition and career clarity podcast content for midlife professionals in career transition, navigating a career change, career pivot or second career, starting a new venture or leaving a long-term career. Discover practical tools for career clarity, confident decision-making, rebuilding self-belief and confidence, finding purpose and meaning in work, designing a purposeful, fulfilling next chapter, and creating meaningful work that fits who you are now. Episodes explore personal development and mindset for midlife professionals, including how to manage uncertainty and pressure, overcome fear and self-doubt, clarify your direction, plan your next steps, and turn your experience into a new role, business or vocation that feels aligned. To support the show, click here.

Unsupervised Learning
Ep 81: Ex-OpenAI Researcher On Why He Left, His Honest AGI Timeline, & The Limits of Scaling RL

Jan 29, 2026 · 62:52


This episode features Jerry Tworek, a key architect behind OpenAI's breakthrough reasoning models (o1, o3) and Codex, discussing the current state and future of AI. Jerry explores the real limits and promise of scaling pre-training and reinforcement learning, arguing that while these paradigms deliver predictable improvements, they're fundamentally constrained by data availability and struggle with generalization beyond their training objectives. He reveals his updated belief that continual learning—the ability for models to update themselves based on failure and work through problems autonomously—is necessary for AGI, as current models hit walls and become "hopeless" when stuck.

Jerry discusses the convergence of major labs toward similar approaches driven by economic forces, the tension between exploration and exploitation in research, and why he left OpenAI to pursue new research directions. He offers candid insights on the competitive dynamics between labs, the focus required to win in specific domains like coding, what makes great AI researchers, and his surprisingly near-term predictions for robotics (2-3 years), while warning about the societal implications of widespread work automation that we're not adequately preparing for.

(0:00) Intro
(1:26) Scaling Paradigms in AI
(3:36) Challenges in Reinforcement Learning
(11:48) AGI Timelines
(18:36) Converging Labs
(25:05) Jerry's Departure from OpenAI
(31:18) Pivotal Decisions in OpenAI's Journey
(35:06) Balancing Research and Product Development
(38:42) The Future of AI Coding
(41:33) Specialization vs. Generalization in AI
(48:47) Hiring and Building Research Teams
(55:21) Quickfire

With your co-hosts:
@jacobeffron - Partner at Redpoint, Former PM Flatiron Health
@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn
@ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare)
@jordan_segall - Partner at Redpoint

Ojai: Talk of the Town
Charting the AI Frontier: John-Clark Levin on Pope Leo & AI, the 'Slop-acolypse,' and What's Next for Humanity

Jan 29, 2026 · 83:04


In this deep and wide-ranging conversation, we sit down with John-Clark Levin — researcher, author, and thought leader at the cutting edge of artificial intelligence — to explore the hopes and hazards of the technological era we now inhabit.

As research lead for Kurzweil Technologies under futurist Ray Kurzweil, Levin conducts long-term AI foresight and has spoken widely about artificial superintelligence and its implications for society, policy, and human flourishing. We begin by tracing his journey from growing up in Ojai, through the book that changed everything for him (Kurzweil's "The Age of Spiritual Machines"), to some of the world's most consequential debates about AI. Along the way, Levin shares how his early interests evolved into a professional focus on the future of intelligence and human-machine symbiosis.

A major thread of our discussion centers on Levin's work engaging the Vatican on artificial intelligence — part of a broader effort to ensure that leading global institutions take seriously the ethical, spiritual, and existential questions posed by AI's rapid advance. He describes organizing experts and advocates around what some have dubbed the "AI Avengers," working to bring the possibilities and risks of artificial general intelligence (AGI) into high-level ecclesiastical consideration and eventual guidance.

From there we delve into pressing contemporary concerns: the rise of misinformation and disinformation in public life, the risk landscape sometimes referred to as the coming "Slop-acolypse," and how societies might more effectively marshal truth and trust as AI reshapes information ecosystems.

Alongside these serious themes, we trade stories about less expected moments — including Levin's Jeopardy! experience, and the intersecting paths of competition, curiosity, and narrative in his life. The host also reflects on his own Jeopardy! memories in light of Levin's appearance, sparking a candid exchange about learning, memory, and what it means to think like a human — or like a machine designed to mimic human cognition.

We did not talk about Japanese names for salt, Simon Bolivar, or Greenland annexation.

This episode is an engaging, thought-provoking journey through the contours of our strange, accelerating age — from Silicon Valley to the Vatican and from the personal to the planetary. Whether you're deeply invested in AI futures or just curious about the forces reshaping our world, this discussion with John-Clark Levin offers rare insight from one of the field's most provocative voices.

You can check out John-Clark's Twitter account at https://x.com/JohnClarkLevin and his Instagram at https://www.instagram.com/johnclarklevin/

Category Visionaries
Why Radical AI targets markets frozen by innovator's dilemma | Joseph Krause

Jan 29, 2026 · 20:22


Radical AI is building scientific superintelligence—AGI for science—through a closed-loop system that combines AI agents with fully robotic self-driving labs to accelerate materials discovery. The materials science industry has a fundamental innovation problem: discovering a single new material system takes 10-15+ years and costs north of $100 million. This economic reality has frozen innovation across aerospace, defense, semiconductors, and energy—industries still deploying materials developed 30 to 100 years ago. In this episode, Joseph Krause, Co-Founder and CEO of Radical AI, explains how his company is attacking the root causes: serial experimentation workflows, systematically lost experimental data, and the manufacturing scale-up gap. Working with the Department of Defense, Air Force Research Lab on hypersonics systems, and as an official partner to the DOE's Genesis mission, Radical AI is focused on high entropy alloys that maintain mechanical properties in extreme environments—the kind of enabling technology that unlocks entirely new product categories rather than optimizing existing ones. 
Topics Discussed:
• The structural economics preventing materials innovation: 10-15 year timelines, $100M+ discovery costs, and why companies default to decades-old materials
• Three fundamental process failures in scientific discovery: serial workflows that prevent parallelization, the 90%+ of experimental data that lives only in lab notebooks, and the valley of death between lab-scale discovery and manufacturing scale-up
• How closed-loop autonomous systems capture processing parameters during discovery—temperature ranges, pressure requirements, humidity impacts, precursor form factors—that map directly to manufacturing conditions
• High entropy alloys as beachhead: 10^40 possible combinations from the periodic table, requiring materials that maintain strength and corrosion resistance at 2,000-4,000°F in oxidative environments created by hypersonic flight
• The strategic rationale for simultaneous government and commercial GTM: government for long-shot applications like nuclear fusion and access to world-class science institutions; commercial customers in aerospace, defense, automotive, and energy for near-term product applications
• Why Radical AI focuses on enabling technology rather than optimization technology—solving for markets where novel materials unlock new products, not incremental margin improvements

GTM Lessons for B2B Founders:

Engineer downstream adoption barriers into your initial system architecture: Joseph identified that customer skepticism centered on manufacturability, not discovery speed. Most prospects understood AI could accelerate experimentation but questioned whether discoveries could scale to production without restarting the entire process. Radical AI's response was architectural: their closed-loop system captures processing parameters—temperature ranges, pressures, precursor concentrations, humidity effects, form factors like powders versus pellets—during the discovery phase. This data maps directly to manufacturing conditions, eliminating the traditional restart cycle. The lesson: in deep tech, the adoption barrier isn't usually your core innovation; it's the adjacent problems customers know will surface later. Engineer those solutions into your system from day one rather than treating them as future optimization problems.

Select beachheads where problem complexity matches your technical advantage: Radical AI chose high entropy alloys not because the market was largest, but because the search space is intractable for humans: 10^40 possible combinations that would take millions of years to test experimentally. This creates a natural moat where their ML-driven autonomous system has an exponential advantage over traditional approaches. Joseph explicitly distinguished "enabling technology" (unlocking new products) from "optimization technology" (improving margins on existing products), then targeted markets with products ready to deploy but blocked by materials constraints. The strategic insight: beachhead selection should optimize for where your technical approach has structural advantage and where success unlocks new market creation, not just better unit economics.

Structure dual-track GTM to derisk technology while building commercial pipeline: Radical AI simultaneously pursues government contracts (DOD, Air Force Research Lab, DOE Genesis) and commercial customers (aerospace, defense primes, automotive, energy). This isn't market hedging; it's strategic complementarity. Government provides access to the world's most advanced scientific institutions, funding for applications with 10-20 year horizons like nuclear fusion, and willingness to bridge the valley of death that scares commercial buyers. Commercial customers provide clear near-term product applications, faster revenue cycles, and market validation. Joseph views them as converging rather than divergent, since transformative materials apply across both. The playbook: in frontier tech, government and commercial aren't either/or choices; structure them as parallel tracks that derisk each other while your technology matures.

Reframe the economics of the innovation process itself: Joseph didn't pitch faster materials discovery; he reframed the entire process from serial to parallel, from data loss to data capture, from a discovery-manufacturing gap to an integrated workflow. This changes the fundamental economics: instead of 10-15 years and $100M+ per material, the conversation shifts to discovering and scaling multiple materials simultaneously with manufacturing parameters already mapped. This reframing unlocks budgets from companies that had stopped innovating because the traditional process was economically irrational. The insight: when industries have stopped innovating entirely, the problem isn't usually that existing processes are too slow; it's that the process itself is structurally broken. Identify and articulate the broken process, not just the speed/cost improvement.

Lead with civilizational impact to filter for long-term aligned stakeholders: Joseph explicitly positions Radical AI as "building a company that fundamentally impacts the human race" and tells prospective talent, "if you are focused on a mission and not a job, this is the place for you." This isn't recruiting copy; it's strategic filtering. In frontier tech with 10-15 year commercialization horizons, you need customers, partners, investors, and talent who think in decades, not quarters. Mission-driven positioning attracts stakeholders aligned with category creation over optimization and filters out those seeking incremental improvements. It also provides air cover for decisions that prioritize long-term technological breakthroughs over short-term revenue optimization.

Sponsors:
Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io
The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co

Don't Miss: New Podcast Series — How I Hire
Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role. Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM

The Block Runner
299. TBR - Ralph Rugs Us | Will Crypto AI Ever Work? | NAT Goes To Davos!

The Block Runner

Play Episode Listen Later Jan 28, 2026 72:48


This episode of the Blockrunner Podcast breaks down one of the most revealing weeks we've seen at the intersection of crypto, AI, and creator monetization. What began as a promising experiment in creator capital markets quickly turned into a live stress test for liquidity, incentives, and trust. We walk through the rise and collapse of the Ralph token, why it initially made sense, how it gained traction, and why it unraveled the moment the creator sold. The fallout wasn't just about price action. It exposed deeper structural problems that most internet capital markets haven't solved yet. From there, the conversation expands into the accelerating timeline toward AGI, why looping AI systems and agent swarms change the nature of work, and what happens to human purpose when intelligence becomes abundant. We react to Davos conversations, including moments where Bitcoin is openly laughed at by legacy financial institutions, and explain why those reactions reveal more ignorance than confidence. We then tackle the uncomfortable question most Bitcoin holders avoid: how the network remains secure long-term. Transaction fees alone are not a viable answer. We explore why Bitcoin's security budget faces a real challenge over the next decade and why a second subsidy may be the only credible path forward without changing Bitcoin's core protocol. This episode ties everything together into a single thesis. Internet capital markets are early, powerful, and inevitable, but without proper incentive design and liquidity structure, they will continue to fail in dramatic fashion. If you're thinking seriously about AI, crypto, creator monetization, and Bitcoin's future, this episode will challenge your assumptions. Learn more about the second subsidy thesis at natgmi.com. Topics: First up, walking through the rise and collapse of the Ralph token, why it initially made sense, how it gained traction, and why it unraveled the moment the creators sold. 
Next, reacting to Davos conversations, and a moment where Bitcoin is openly laughed at by legacy financial institutions. And finally, why a second subsidy may be the only credible path forward without changing Bitcoin's core protocol. Please like and subscribe on your favorite podcasting app! Sign up for a free newsletter: www.theblockrunner.com Follow us on: Youtube: https://bit.ly/TBlkRnnrYouTube Twitter: bit.ly/TBR-Twitter Telegram: bit.ly/TBR-Telegram Discord: bit.ly/TBR-Discord $NAT Telegram: https://t.me/dmt_nat

80,000 Hours Podcast with Rob Wiblin
Why 'Aligned AI' Could Still Kill Democracy | David Duvenaud, ex-Anthropic team lead

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Jan 27, 2026 151:48


Democracy might be a brief historical blip. That's the unsettling thesis of a recent paper, which argues AI that can do all the work a human can do inevitably leads to the “gradual disempowerment” of humanity. For most of history, ordinary people had almost no control over their governments. Liberal democracy emerged only recently, and probably not coincidentally around the Industrial Revolution. Today's guest, David Duvenaud, used to lead the 'alignment evals' team at Anthropic, is a professor of computer science at the University of Toronto, and recently co-authored 'Gradual disempowerment.' Links to learn more, video, and full transcript: https://80k.info/dd He argues democracy wasn't the result of moral enlightenment — it was competitive pressure. Nations that educated their citizens and gave them political power built better armies and more productive economies. But what happens when AI can do all the producing — and all the fighting? “The reason that states have been treating us so well in the West, at least for the last 200 or 300 years, is because they've needed us,” David explains. “Life can only get so bad when you're needed. That's the key thing that's going to change.” In David's telling, once AI can do everything humans can do but cheaper, citizens become a national liability rather than an asset. With no way to make an economic contribution, their only lever becomes activism — demanding a larger share of redistribution from AI production. Faced with millions of unemployed citizens turned full-time activists, democratic governments trying to retain some “legacy” human rights may find they're at a disadvantage compared to governments that strategically restrict civil liberties. But democracy is just one front.
The paper argues humans will lose control through economic obsolescence, political marginalisation, and the effects on a culture increasingly shaped by machine-to-machine communication — even if every AI does exactly what it's told. This episode was recorded on August 21, 2025.

Chapters:
Cold open (00:00:00)
Who's David Duvenaud? (00:00:50)
Alignment isn't enough: we still lose control (00:01:30)
Smart AI advice can still lead to terrible outcomes (00:14:14)
How gradual disempowerment would occur (00:19:02)
Economic disempowerment: Humans become "meddlesome parasites" (00:22:05)
Humans become a "criminally decadent" waste of energy (00:29:29)
Is humans losing control actually bad, ethically? (00:40:36)
Political disempowerment: Governments stop needing people (00:57:26)
Can human culture survive in an AI-dominated world? (01:10:23)
Will the future be determined by competitive forces? (01:26:51)
Can we find a single good post-AGI equilibria for humans? (01:34:29)
Do we know anything useful to do about this? (01:44:43)
How important is this problem compared to other AGI issues? (01:56:03)
Improving global coordination may be our best bet (02:04:56)
The 'Gradual Disempowerment Index' (02:07:26)
The government will fight to write AI constitutions (02:10:33)
“The intelligence curse” and Workshop Labs (02:16:58)
Mapping out disempowerment in a world of aligned AGIs (02:22:48)
What do David's CompSci colleagues think of all this? (02:29:19)

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Camera operator: Jake Morris
Coordination, transcriptions, and web: Katy Moore

StartUp Health NOW Podcast
Live from Apollo House: Building the Modern Health Stack in the Age of Superintelligence

StartUp Health NOW Podcast

Play Episode Listen Later Jan 27, 2026 11:19


What does it really take to achieve Health Moonshots in the Age of Superintelligence? Recorded live at StartUp Health's Apollo House during JPM Healthcare Week, this panel brings together leaders operating at the intersection of healthcare delivery, diagnostics, cloud infrastructure, and AI. Moderated by Angela Shippy, MD, of Amazon Web Services, the conversation explores how AI is moving from point solutions to foundational infrastructure across the modern health stack. Together, the panel examines why clean, connected data is essential, how agentic workflows can reduce burnout and improve clinician and patient experience, and what it will take to move healthcare from transactional to truly person-centered care. The discussion also tackles trust, governance, and why collaboration across startups, health systems, and big tech is critical to delivering real-world impact. This is a grounded, forward-looking conversation about how purpose-driven leadership can turn exponential technology into practical outcomes that matter.

Featured Guests:
Angela Shippy, MD, Senior Physician Executive and Clinical Innovation Lead, Global Healthcare and Nonprofit, Amazon Web Services (AWS)
Brian Caveney, MD, MPH, Chief Medical and Scientific Officer, Labcorp
Rasu Shrestha, MD, EVP, Chief Innovation and Commercialization Officer, Advocate Health
Chelsea Sumner, PharmD, Translational Health and AI Strategy Leader, NVIDIA
Mark Andrews, Senior Principal, AGI, Product Leader, Amazon

Do you want to participate in live conversations with industry luminaries? When you join the StartUp Health Network – a new private community for investors, buyers, and industry leaders to connect year-round with top health entrepreneurs – you are invited to a full calendar of interactive Fireside Chats with the most influential leaders shaping health innovation. Come with questions, learn what is working right now, and connect with industry icons. » Learn more and join today. Want more content like this?
Sign up for StartUp Health Insider™ to get funding insights, news, and special updates delivered to your inbox.

Retire With Ryan
Can I Contribute to My 401(k) and a Traditional IRA in the Same Tax Year?

Retire With Ryan

Play Episode Listen Later Jan 27, 2026 15:24


A listener recently wrote in with a common and important retirement planning question: If I'm already maxing out my 401(k), can I also contribute to a traditional IRA in the same year? The short answer is yes—but whether it makes sense, and how much benefit you receive, depends on your income, tax situation, and long-term goals. In this episode, I break down how traditional IRA contributions work alongside employer-sponsored retirement plans, when those contributions are deductible, and what options are available if your income is too high for a deduction. We also explore alternative strategies, including Roth IRA contributions and backdoor Roth conversions, so you can decide how best to use your annual IRA "coupon." This episode is especially helpful if you're trying to balance tax savings today with tax flexibility in retirement and want to avoid common mistakes that can complicate your plan later. You will want to hear this episode if you are interested in... [00:00] Whether you can contribute to a 401(k) and IRA in the same tax year [01:55] The tax-deferral benefits of contributing to a traditional IRA [03:55] When a traditional IRA contribution is tax deductible [05:00] Income limits that affect IRA deductions [07:00] Using non-deductible IRA contributions correctly [10:00] Roth IRA contribution limits and income phaseouts [11:45] How a backdoor Roth IRA strategy works [13:30] Choosing the right IRA strategy for your situation Why a Traditional IRA Can Still Make Sense Even if you are already maxing out your 401(k), contributing to a traditional IRA can provide additional tax advantages. The primary benefit is tax deferral. Dividends, interest, and capital gains generated inside an IRA are not taxed in the year they occur. Instead, taxes are deferred until you withdraw the money, potentially years or even decades later. This can be especially powerful if you do not need the money right away. 
With required minimum distributions now starting at age 73—and increasing to age 75 for those born in 1960 or later—many investors have a long runway for tax-deferred growth. When IRA Contributions Are Tax Deductible Whether your traditional IRA contribution is deductible depends on two main factors: whether you or your spouse are covered by an employer-sponsored retirement plan, and your adjusted gross income (AGI). Coverage includes plans such as a 401(k), 403(b), 457, SIMPLE IRA, SEP IRA, or pension plan. For 2026, married couples filing jointly can fully deduct a traditional IRA contribution if their AGI is below $129,000, with deductions phasing out completely by $149,000. For single filers, the full deduction applies below $81,000 and phases out by $91,000. If neither spouse is covered by a workplace plan, the contribution is fully deductible regardless of income. Options If You Can't Deduct a Traditional IRA If your income is too high to deduct a traditional IRA contribution, you still have options. One approach is making a non-deductible IRA contribution. While this does not provide a tax deduction upfront, your investments can still grow tax deferred. However, this strategy requires careful recordkeeping to properly track taxable and non-taxable portions when withdrawals begin. Another option is contributing to a Roth IRA, if your income falls within Roth contribution limits. Roth IRAs offer tax-free growth and tax-free withdrawals, making them attractive for long-term planning. For those whose income exceeds Roth limits, a backdoor Roth IRA may be an option, provided there are no other pre-tax IRA balances that would trigger pro-rata taxation. Resources Mentioned Retirement Readiness Review Subscribe to the Retire with Ryan YouTube Channel Download my entire book for FREE  Connect With Morrissey Wealth Management  www.MorrisseyWealthManagement.com/contact   Subscribe to Retire With Ryan
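The deduction limits above follow a linear phase-out between the floor and ceiling AGI. A minimal sketch of that arithmetic, assuming an illustrative $7,000 contribution limit (the function name and the simple linear interpolation are assumptions for illustration; the actual IRS worksheets add rounding and minimum-deduction rules):

```python
IRA_LIMIT = 7_000  # assumed annual contribution limit, for illustration only

def deductible_amount(agi, floor, ceiling, limit=IRA_LIMIT):
    """Linear phase-out of the traditional IRA deduction between floor and ceiling AGI."""
    if agi <= floor:
        return limit   # full deduction below the floor
    if agi >= ceiling:
        return 0       # no deduction above the ceiling
    fraction_remaining = (ceiling - agi) / (ceiling - floor)
    return round(limit * fraction_remaining)

# 2026 figures cited in the episode (taxpayer covered by a workplace plan):
mfj = deductible_amount(139_000, floor=129_000, ceiling=149_000)  # halfway through the phase-out -> 3500
single = deductible_amount(85_000, floor=81_000, ceiling=91_000)  # -> 4200
```

The same shape applies to Roth contribution phase-outs, just with different floors and ceilings.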

Big Tech
Is China Winning the Technological Arms Race?

Big Tech

Play Episode Listen Later Jan 27, 2026 55:45


If we don't build it, China will. That's the rallying cry of the tech companies and governments racing to develop artificial intelligence as fast as humanly possible. The argument is that whoever reaches AGI first won't just be dominant technologically, or economically – they'll be the world's next superpower. But, if I'm being honest, I don't know if that framing holds up. And part of the reason for that is that we don't really understand China. Enter Keyu Jin. Jin is a Harvard-trained economist who splits her time between London and Beijing, and her book, The New China Playbook, is her attempt to “read China in the original” – to provide a firsthand look at the forces that shaped the country's unprecedented rise. China's success is a puzzle. How did one of the poorest nations on the planet become the second richest in less than a century? How did an economy without free markets birth a tech sector that rivals – and in some ways surpasses – Silicon Valley? The answers to these questions aren't academic. China became a global power without capitalism and without democracy, which means its success has profound implications for both. And as Canada sets out to find its footing in a rapidly changing world order, one thing is abundantly clear: we need to start reckoning with the Chinese playbook. Mentions: The New China Playbook, by Keyu Jin. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Personal Development Mastery
Why So Many Career Transitions Fail, and How to Avoid the #1 Mistake Professionals Make, with Rachel Spekman | #574

Personal Development Mastery

Play Episode Listen Later Jan 26, 2026 35:26 Transcription Available


Stuck in a “successful” career that looks great on paper but feels soul-sucking? It might be time to reverse-engineer work you actually love. If you're a high achiever who's quietly miserable, this episode shows how to move from fear and obligation to clarity and momentum, without blowing up your finances or identity. Learn practical exercises to pinpoint fit fast: design your perfect workday, run a time vs. enjoyment audit, and write your “internal résumé” to surface real strengths. Replace anxiety with a plan: shift from 20% happy to 80% through calculated, incremental moves (not impulsive leaps), guided by the Three C's: community, contribution, and challenge. Navigate the messy middle: handle reputational noise, manage ego with a beginner's mindset, and translate transferable skills so opportunities find you. Hit play to learn the exact steps and scripts to start feeling more fulfilled at work, starting today.˚MEMORABLE QUOTE: "It's gonna be okay. You got this!"˚VALUABLE RESOURCES: Rachel's website: https://madeformorecoach.com/˚Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor˚

The AI Breakdown: Daily Artificial Intelligence News and Discussions

As coding agents and vibe-coding tools push software creation into a fundamentally new phase, the real question shifts from what AI can do to what skills actually matter. This episode unpacks the emerging divide between two critical roles in the code AGI era: the Agent Manager, who knows how to direct and scale AI agents effectively, and the Enterprise Operator, who knows what problems are worth solving and why. With execution becoming cheap and abundant, skills like systems thinking, async orchestration, domain expertise, problem recognition, and workflow redesign are becoming the true sources of leverage. Link: https://x.com/natolambert/status/2014023020302704698 Brought to you by: KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts Zencoder - From vibe coding to AI-first engineering - http://zencoder.ai/zenflow Optimizely Opal - The agent orchestration platform built for marketers - https://www.optimizely.com/theaidailybrief AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief Section - Build an AI workforce at scale - https://www.sectionai.com/ LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/ Robots & Pencils - Cloud-native AI solutions that power results
https://robotsandpencils.com/ The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score. The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614 Interested in sponsoring the show? sponsors@aidailybrief.ai

Squawk Pod
Davos 2026: Google DeepMind CEO Demis Hassabis 1/24/26

Squawk Pod

Play Episode Listen Later Jan 24, 2026 15:08


AI is front and center in Davos this year, as world leaders and tech executives debate how quickly the technology is reshaping the economy and workforce. Demis Hassabis, co-founder and CEO of Google DeepMind, sits down with CNBC's Andrew Ross Sorkin at the World Economic Forum. The two discuss Gemini's position in the AI race, the evolution of artificial general intelligence (AGI), and what it all means for jobs.

In this episode:
Demis Hassabis, @demihassabis
Andrew Ross Sorkin, @andrewrsorkin
Cameron Costa, @CameronCostaNY

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Razib Khan's Unsupervised Learning
Aneil Mallavarapu: why machine intelligence will never be conscious

Razib Khan's Unsupervised Learning

Play Episode Listen Later Jan 23, 2026 74:10


Today Razib talks to Aneil Mallavarapu, a scientist and technology leader based in Austin, Texas, whose career bridges the fields of biochemistry, systems biology, and software engineering. He earned his doctorate in Biochemistry and Cell Biology from the University of California, and has held academic positions at Harvard Medical School, where he contributed to the Department of Systems Biology and developed the "Little b" programming language. Mallavarapu has transitioned from academic research into the tech and venture capital sectors, co-founding ventures such as Precise.ly and DeepDialog, and currently serving as a Managing Partner at Humain Ventures. He remains active in the scientific community through local initiatives like the Austin Science Network. Most of the conversation centers around Mallavarapu's arguments outlined in his Substack The Case Against Conscious AI - Why AI consciousness is inconsistent with physics. The core of his argument rests on the "Simultaneity Problem" and the "Hard Problem of Physics," which involve non-locality and the memorylessness of artificial intelligence phenomena. Though Mallavarapu believes that artificial intelligence holds great promise, and that perhaps even "artificial general intelligence" (AGI) is feasible, he argues that this is a distinct issue from consciousness, which is a property of human minds. Razib also brings up the inverse case: could it be that many organisms that are not particularly intelligent also have consciousness? What does that imply for the ethics of practices like eating meat?

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Captaining IMO Gold, Deep Think, On-Policy RL, Feeling the AGI in Singapore — Yi Tay 2

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Jan 23, 2026 92:04


From shipping Gemini Deep Think and IMO Gold to launching the Reasoning and AGI team in Singapore, Yi Tay has spent the last 18 months living through the full arc of Google DeepMind's pivot from architecture research to RL-driven reasoning—watching his team go from a dozen researchers to 300+, training models that solve International Math Olympiad problems in a live competition, building the infrastructure to scale deep thinking across every domain, and driving Gemini to the top of the leaderboards across every category. Yi returns to dig into the inside story of the IMO effort and more! We discuss: Yi's path: Brain → Reka → Google DeepMind → Reasoning and AGI team Singapore, leading model training for Gemini Deep Think and IMO Gold The IMO Gold story: four co-captains (Yi in Singapore, Jonathan in London, Jordan in Mountain View, and Tong leading the overall effort), training the checkpoint in ~1 week, live competition in Australia with professors punching in problems as they came out, and the tension of not knowing if they'd hit Gold until the human scores came in (because the Gold threshold is a percentile, not a fixed number) Why they threw away AlphaProof: "If one model can't do it, can we get to AGI?" The decision to abandon symbolic systems and bet on end-to-end Gemini with RL was bold and non-consensus On-policy vs. off-policy RL: off-policy is imitation learning (copying someone else's trajectory), on-policy is the model generating its own outputs, getting rewarded, and training on its own experience—"humans learn by making mistakes, not by copying" Why self-consistency and parallel thinking are fundamental: sampling multiple times, majority voting, LM judges, and internal verification are all forms of self-consistency that unlock reasoning beyond single-shot inference The data efficiency frontier: humans learn from 8 orders of magnitude less data than models, so where's the bug?
Is it the architecture, the learning algorithm, backprop, off-policyness, or something else? Three schools of thought on world models: (1) Genie/spatial intelligence (video-based world models), (2) Yann LeCun's JEPA + FAIR's code world models (modeling internal execution state), (3) the amorphous "resolution of possible worlds" paradigm (curve-fitting to find the world model that best explains the data) Why AI coding crossed the threshold: Yi now runs a job, gets a bug, pastes it into Gemini, and relaunches without even reading the fix—"the model is better than me at this" The Pokémon benchmark: can models complete Pokédex by searching the web, synthesizing guides, and applying knowledge in a visual game state? "Efficient search of novel idea space is interesting, but we're not even at the point where models can consistently apply knowledge they look up" DSI and generative retrieval: re-imagining search as predicting document identifiers with semantic tokens, now deployed at YouTube (symmetric IDs for RecSys) and Spotify Why RecSys and IR feel like a different universe: "modeling dynamics are strange, like gravity is different—you hit the shuttlecock and hear glass shatter, cause and effect are too far apart" The closed lab advantage is increasing: the gap between frontier labs and open source is growing because ideas compound over time, and researchers keep finding new tricks that play well with everything built before Why ideas still matter: "the last five years weren't just blind scaling—transformers, pre-training, RL, self-consistency, all had to play well together to get us here" Gemini Singapore: hiring for RL and reasoning researchers, looking for track record in RL or exceptional achievement in coding competitions, and building a small, talent-dense team close to the frontier — Yi Tay Google DeepMind: https://deepmind.google X: https://x.com/YiTayML Chapters 00:00:00 Introduction: Returning to Google DeepMind and the Singapore AGI Team 00:04:52 The 
Philosophy of On-Policy RL: Learning from Your Own Mistakes 00:12:00 IMO Gold Medal: The Journey from AlphaProof to End-to-End Gemini 00:21:33 Training IMO Cat: Four Captains Across Three Time Zones 00:26:19 Pokemon and Long-Horizon Reasoning: Beyond Academic Benchmarks 00:36:29 AI Coding Assistants: From Lazy to Actually Useful 00:32:59 Reasoning, Chain of Thought, and Latent Thinking 00:44:46 Is Attention All You Need? Architecture, Learning, and the Local Minima 00:55:04 Data Efficiency and World Models: The Next Frontier 01:08:12 DSI and Generative Retrieval: Reimagining Search with Semantic IDs 01:17:59 Building GDM Singapore: Geography, Talent, and the Symposium 01:24:18 Hiring Philosophy: High Stats, Research Taste, and Student Budgets 01:28:49 Health, HRV, and Research Performance: The 23kg Journey
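The on-policy vs. off-policy distinction Yi draws can be sketched in toy form. A minimal illustration with a hypothetical one-parameter "model" (all names and the hill-climbing reward rule are assumptions for illustration, not any lab's actual training loop):

```python
import random

random.seed(0)  # deterministic for the illustration

def off_policy_step(param, expert_action, lr=0.1):
    """Off-policy / imitation: nudge the model toward someone else's trajectory."""
    return param + lr * (expert_action - param)

def on_policy_step(param, reward):
    """On-policy: the model generates its own action, gets rewarded,
    and keeps the change only when its own experience scored better."""
    action = param + random.uniform(-0.5, 0.5)  # the model's own output
    return action if reward(action) > reward(param) else param

# Hypothetical task: produce an output close to 3.0
reward = lambda a: -abs(a - 3.0)

p = 0.0
for _ in range(200):
    p = on_policy_step(p, reward)
# p climbs toward 3.0 purely from the model's own sampled mistakes and successes
```

The contrast is the point: the off-policy step never samples from the model itself, while the on-policy step learns only from what the model actually generated, which is the "learning from your own mistakes" framing in the episode.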

American Institute of CPAs - Personal Financial Planning (PFP)
Bob Keebler on The Renaissance of Income Tax Planning

American Institute of CPAs - Personal Financial Planning (PFP)

Play Episode Listen Later Jan 23, 2026 18:08


Non-grantor trusts are stepping into the spotlight, not for estate tax, but for income tax planning. In this episode, Cary Sinnett sits down with tax expert Bob Keebler to explore how the One Big Beautiful Bill Act (H.R. 1) reshapes the planning landscape. You'll hear how you can use trusts to reclaim lost SALT deductions, stack §199A benefits, shift income across generations, and even layer in QSBS exemptions. If your clients are hitting phaseouts or facing high state taxes, this episode delivers advanced strategies to optimize their tax position now and into the future.

Non-Grantor Trusts: Keebler explains how trust structures can sidestep phaseouts and help clients reclaim deductions previously lost due to high AGI.

The "Tax Trifecta Trust" Explained: Learn how to stack SALT deductions, layer multiple §199A deductions, and shift income strategically using non-grantor trust planning.

Five Strategies You Can Use Today:
1. Income shifting to lower-bracket heirs
2. Stacking SALT deductions across multiple trusts
3. Boosting §199A deductions with trust-level taxpayers
4. Expanding QSBS exemptions via strategic trust ownership
5. Reducing or deferring state income tax through out-of-state trust situs

Real-World Implementation Advice: Bob outlines guardrails around IRC §643(f) to avoid having multiple trusts collapsed into one. Hear how to structure trusts legally and practically for high-impact planning, and how to identify ideal client profiles for this approach.

What CPA Financial Planners Need to Watch For: Bob discusses state-specific issues, kiddie tax complications, trust drafting must-haves, and how CPAs can lead the planning process with confidence.

AICPA Resources:
Video: Decoding Trusts and Wills: Provisions for PFP Practitioners
Video: Year-End Planning Through the Lens of H.R. 1
Resource: Charitable planning post-OBBBA rules

This episode is brought to you by the AICPA's Personal Financial Planning Section, the premier provider of information, tools, advocacy, and guidance for professionals who specialize in providing tax, estate, retirement, risk management and investment planning advice. Also, by the CPA/PFS credential program, which allows CPAs to demonstrate competence and confidence in providing these services to their clients. Visit us online to join our community, gain access to valuable member-only benefits or learn about our PFP certificate program. Subscribe to the PFP Podcast channel at Libsyn to find all the latest episodes or search "AICPA Personal Financial Planning" on your favorite podcast app.

The Jim Rutt Show
EP 330 Worldviews: Ben Goertzel

The Jim Rutt Show

Play Episode Listen Later Jan 22, 2026


Jim talks with Ben Goertzel about his worldview. They discuss Ben's morning experience of consciousness crystallizing from ambient awareness, his identification as a panpsychic, the concept of pattern being more fundamental than stuff, Charles Peirce's ontology of first/second/third, the idea of euryphysics as a broader notion of physics beyond metaphysics, parapsychology and psi phenomena including remote viewing and Project Stargate, reincarnation-like phenomena and cases from India, experimental design in parapsychology research, the legitimation of both AGI and psi research, the consciousness explosion occurring alongside AI/ASI development, Jeffrey Martin's work on fundamental well-being and persistent nonsymbolic experience, the immense design space of possible minds, human cognitive limitations like seven plus or minus two short-term memory, the single-threaded nature of human consciousness versus potential multi-threaded ASI, scenarios for beneficial superintelligence and options for humans to remain in human form or upload, the question of how long human existence would remain interesting post-singularity, psychedelics as tools for accessing different states of consciousness and insights into mind construction, the absence of shamanic institutions in modern culture, experiences with DMT and heroic doses, holding multiple contradictory perspectives simultaneously, Walt Whitman's notion of containing multitudes, Ben's intuitive sense that consciousness and the basic ground of being are fundamentally joyful and compassionate, arguments for why superintelligence will likely be good based on efficiency of mutually trusting agents, and much more. Episode Transcript The Consciousness Explosion, by Ben Goertzel JRS EP 217 Ben Goertzel on a New Framework for AGI JRS EP 211 Ben Goertzel on Generative AI vs. AGI JRS Currents 072: Ben Goertzel on Viable Paths to True AGI Evidence for Psi: Thirteen Empirical Research Reports, ed. Damien Broderick & Ben Goertzel Dr.
Ben Goertzel is a cross-disciplinary scientist, entrepreneur and author.  Born in Brazil to American parents, in 2020 after a long stretch living in Hong Kong he relocated his primary base of operations to a rural island near Seattle. He leads the SingularityNET Foundation, the OpenCog Foundation, and the AGI Society which runs the annual Artificial General Intelligence conference. Dr. Goertzel's research work encompasses multiple areas including artificial general intelligence, natural language processing, cognitive science, machine learning, computational finance, bioinformatics, virtual worlds, gaming, parapsychology, theoretical physics and more.

AMA Part 2: Is Fine-Tuning Dead? How Am I Preparing for AGI? Are We Headed for UBI? & More!

Play Episode Listen Later Jan 22, 2026 143:38


In this AMA-style episode, Nathan takes on listener questions about whether fine-tuning is really on the way out, what emergent misalignment and weird generalization results tell us, and how to think about continual learning. He talks candidly about how he's personally preparing for AGI—from career choices and investing to what resilience steps he has and hasn't taken. The discussion also covers timelines for job disruption, whether UBI becomes inevitable, how to talk to kids and "normal people" about AI, and which safety approaches are most neglected.

Sponsors:
Blitzy: Blitzy is the autonomous code generation platform that ingests millions of lines of code to accelerate enterprise software development by up to 5x with premium, spec-driven output. Schedule a strategy session with their AI solutions consultants at https://blitzy.com
MongoDB: Tired of database limitations and architectures that break when you scale? MongoDB is the database built for developers, by developers—ACID compliant, enterprise-ready, and fluent in AI—so you can start building faster at https://mongodb.com/build
Serval: Serval uses AI-powered automations to cut IT help desk tickets by more than 50%, freeing your team from repetitive tasks like password resets and onboarding. Book your free pilot and guarantee 50% help desk automation by week four at https://serval.com/cognitive
Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai

CHAPTERS:
(00:00) Ernie cancer update
(04:57) Is fine-tuning dead (Part 1)
(12:31) Sponsors: Blitzy | MongoDB
(14:57) Is fine-tuning dead (Part 2) (Part 1)
(26:56) Sponsors: Serval | Tasklet
(29:15) Is fine-tuning dead (Part 2) (Part 2)
(29:16) Continual learning cautions
(34:59) Talking to normal people
(39:30) Personal risk preparation
(49:59) Investing around AI safety
(01:00:39) Early childhood AI literacy
(01:08:55) Work disruption timelines
(01:27:58) Nonprofits, need, and UBI
(01:34:53) Benchmarks, AGI, and embodiment
(01:47:30) AI tooling and platforms
(01:57:01) Discourse norms and shaming
(02:05:50) Location and safety funding
(02:15:17) Turpentine deal and independence
(02:24:19) Outro

PRODUCED BY: https://aipodcast.ing

Waking Up With AI
Confessions of a Large Language Model

Waking Up With AI

Play Episode Listen Later Jan 22, 2026 22:41


In this episode, Katherine Forrest and Scott Caravello unpack OpenAI researchers' proposed "confessions" framework, designed to monitor for and detect dishonest outputs. They break down the researchers' proof-of-concept results and the framework's resilience to reward hacking, along with its limits in connection with hallucinations. Then they turn to Google DeepMind's "Distributional AGI Safety," exploring a hypothetical path to AGI via a patchwork of agents and routing infrastructure, as well as the authors' proposed four-layer safety stack.

Learn More About Paul, Weiss's Artificial Intelligence practice: https://www.paulweiss.com/industries/artificial-intelligence

The MAD Podcast with Matt Turck
The End of GPU Scaling? Compute & The Agent Era — Tim Dettmers (Ai2) & Dan Fu (Together AI)

The MAD Podcast with Matt Turck

Play Episode Listen Later Jan 22, 2026 64:06


Will AGI happen soon, or are we running into a wall? In this episode, I'm joined by Tim Dettmers (Assistant Professor at CMU; Research Scientist at the Allen Institute for AI) and Dan Fu (Assistant Professor at UC San Diego; VP of Kernels at Together AI) to unpack two opposing frameworks from their essays: "Why AGI Will Not Happen" versus "Yes, AGI Will Happen." Tim argues progress is constrained by physical realities like memory movement and the von Neumann bottleneck; Dan argues we're still leaving massive performance on the table through utilization, kernels, and systems, and that today's models are lagging indicators of the newest hardware and clusters.

Then we get practical: agents and the "software singularity." Dan says agents have already crossed a threshold even for "final boss" work like writing GPU kernels. Tim's message is blunt: use agents or be left behind. Both emphasize that the leverage comes from how you use them; Dan compares it to managing interns: clear context, task decomposition, and domain judgment, not blind trust.

We close with what to watch in 2026: hardware diversification, the shift toward efficient, specialized small models, and architecture evolution beyond classic Transformers, including state-space approaches already showing up in real systems.

Sources:
Why AGI Will Not Happen - https://timdettmers.com/2025/12/10/why-agi-will-not-happen/
Use Agents or Be Left Behind? A Personal Guide to Automating Your Own Work - https://timdettmers.com/2026/01/13/use-agents-or-be-left-behind/
Yes, AGI Can Happen – A Computational Perspective - https://danfu.org/notes/agi/

The Allen Institute for Artificial Intelligence
Website - https://allenai.org
X/Twitter - https://x.com/allen_ai

Together AI
Website - https://www.together.ai
X/Twitter - https://x.com/togethercompute

Tim Dettmers
Blog - https://timdettmers.com
LinkedIn - https://www.linkedin.com/in/timdettmers/
X/Twitter - https://x.com/Tim_Dettmers

Dan Fu
Blog - https://danfu.org
LinkedIn - https://www.linkedin.com/in/danfu09/
X/Twitter - https://x.com/realDanFu

FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap

Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck

(00:00) - Intro
(01:06) – Two essays, two frameworks on AGI
(01:34) – Tim's background: quantization, QLoRA, efficient deep learning
(02:25) – Dan's background: FlashAttention, kernels, alternative architectures
(03:38) – Defining AGI: what does it mean in practice?
(08:20) – Tim's case: computation is physical, diminishing returns, memory movement
(11:29) – "GPUs won't improve meaningfully": the core claim and why
(16:16) – Dan's response: utilization headroom (MFU) + "models are lagging indicators"
(22:50) – Pre-training vs post-training (and why product feedback matters)
(25:30) – Convergence: usefulness + diffusion (where impact actually comes from)
(29:50) – Multi-hardware future: NVIDIA, AMD, TPUs, Cerebras, inference chips
(32:16) – Agents: did the "switch flip" yet?
(33:19) – Dan: agents crossed the threshold (kernels as the "final boss")
(34:51) – Tim: "use agents or be left behind" + beyond coding
(36:58) – "90% of code and text should be written by agents" (how to do it responsibly)
(39:11) – Practical automation for non-coders: what to build and how to start
(43:52) – Dan: managing agents like junior teammates (tools, guardrails, leverage)
(48:14) – Education and training: learning in an agent world
(52:44) – What Tim is building next (open-source coding agent; private repo specialization)
(54:44) – What Dan is building next (inference efficiency, cost, performance)
(55:58) – Mega-kernels + Together Atlas (speculative decoding + adaptive speedups)
(58:19) – Predictions for 2026: small models, open-source, hardware, modalities
(1:02:02) – Beyond transformers: state-space and architecture diversity
(1:03:34) – Wrap

Big Technology Podcast
Google DeepMind CEO Demis Hassabis: AI's Next Breakthroughs, AGI Timeline, Google's AI Glasses Bet

Big Technology Podcast

Play Episode Listen Later Jan 21, 2026 34:07


Demis Hassabis is the CEO of Google DeepMind. Hassabis joins Big Technology Podcast to discuss where AI progress really stands today, where the next breakthroughs might come from, and whether we've hit AGI already. Tune in for a deep discussion covering the latest in AI research, from continual learning to world models. We also dig into product, discussing Google's big bet on AI glasses, its advertising plans, and AI coding. We also cover what AI means for knowledge work and scientific discovery. Hit play for a wide-ranging, high-signal conversation about where AI is headed next from one of the leaders driving it forward.  --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Learn more about your ad choices. Visit megaphone.fm/adchoices

TrueLife
Don Quixote - Windmills & Algorithms

TrueLife

Play Episode Listen Later Jan 21, 2026 9:36


One on One Video Call W/ George: https://tidycal.com/georgepmonty/60-minute-meeting
Support the show: https://www.paypal.me/Truelifepodcast?locale.x=en_US

In a world besieged by the relentless march of AI, where algorithms whisper promises of utopia or apocalypse, one timeless tale rises from the dust of centuries to mirror our chaotic present: Don Quixote. Join host [Your Name] in the premiere episode of [Podcast Name], "The Knight of the Sorrowful Algorithm," as we embark on a quixotic quest through Cervantes' masterpiece—a story of a man whose brain "dried up" from devouring too many fantastical romances, only to armor up and charge into a reality that mocked his dreams.

But this isn't just dusty literature. It's us. Right now. Scrolling through endless feeds of AI doomsayers and saviors: "Your job is obsolete!" "Embrace the disruption!" "AGI will save—or end—humanity!" We're all Don Quixote, lost in a whirlwind of narratives that blur truth and fiction, leaving us paralyzed by questions: Is adaptation surrender? Is optimism naivety? And who are the true mad knights of our age—the artists defying generative machines, the workers reclaiming their humanity, or those daring to pursue passion in a profit-obsessed empire?

Delve into the heart of the madness: why Don Quixote chose delusion over despair, and why "sanity"—accepting a world ruled by efficiency, oligarchs, and obsolescence—might be the deadliest illusion of all. In a finale that shatters illusions, discover how renouncing the quest led to his demise… and what that means for us tilting at digital windmills.

Epic, introspective, and urgently relevant, this episode challenges you to ask: in the AI era, is going a little mad the only way to stay truly alive? Tune in, saddle up your Rocinante, and ride into the fray. Next up: "Sancho Panza and the Gig Economy"—the everyman's gamble on a madman's promise.

a16z
From Code Search to AI Agents: Inside Sourcegraph's Transformation with CTO Beyang Liu

a16z

Play Episode Listen Later Jan 20, 2026 46:35


Sourcegraph's CTO just revealed why 90% of his code now comes from agents—and why the Chinese models powering America's AI future should terrify Washington. While Silicon Valley obsesses over AGI apocalypse scenarios, Beyang Liu's team discovered something darker: every competitive open-source coding model they tested traces back to Chinese labs, and US companies have gone silent after releasing Llama 3. The regulatory fear that killed American open-source development isn't hypothetical anymore—it's already handed the infrastructure layer of the AI revolution to Beijing, one fine-tuned model at a time.

Resources:
Follow Beyang Liu on X: https://x.com/beyang
Follow Martin Casado on X: https://x.com/martin_casado
Follow Guido Appenzeller on X: https://x.com/appenz

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures.
Stay Updated:Find a16z on XFind a16z on LinkedInListen to the a16z Show on SpotifyListen to the a16z Show on Apple PodcastsFollow our host: https://twitter.com/eriktorenberg Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

a16z
The AI Opportunity That Goes Beyond Models

a16z

Play Episode Listen Later Jan 19, 2026 70:20


The a16z AI Apps team outlines how they are thinking about the AI application cycle and why they believe it represents the largest and fastest product shift in software to date. The conversation places AI in the context of prior platform waves, from PCs to cloud to mobile, and examines where adoption is already translating into real enterprise usage and revenue. They walk through three core investment themes: existing software categories becoming AI-native, new categories where software directly replaces labor, and applications built around proprietary data and closed-loop workflows. Using portfolio examples, the discussion shows how these models play out in practice and why defensibility, workflow ownership, and data moats matter more than novelty as AI applications scale.

Resources:
Follow Alex Rampell on X: https://twitter.com/arampell
Follow Jen Kha on X: https://twitter.com/jkhamehl
Follow David Haber on X: https://twitter.com/dhaber
Follow Anish Acharya on X: https://twitter.com/illscience

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711

Not an offer or solicitation. None of the information herein should be taken as investment advice; some of the companies mentioned are portfolio companies of a16z. Please see https://a16z.com/disclosures/ for more information. A list of investments made by a16z is available at https://a16z.com/portfolio.

Personal Development Mastery
The Hidden Emotional Traps of Career Change No One Talks About, with Michelle Schafer | #572

Personal Development Mastery

Play Episode Listen Later Jan 19, 2026 36:09 Transcription Available


Ever feel like your career transition is "going nowhere"… even though you're doing all the right things? If you're in that messy in-between of job loss, burnout, values mismatch, or a pivot that's taking longer than you expected, this episode will feel like a deep exhale. Michelle Schafer (two-time restructuring survivor turned career coach) breaks down why this phase feels so uncertain, why confidence takes a hit, and what to do when the "sprout" hasn't shown up yet… even though growth is happening under the surface.

Learn practical ways to manage the emotional rollercoaster of uncertainty (without pretending you're fine).
Discover simple, repeatable actions to rebuild confidence, especially if job searching feels harder than it used to.
Get a clear career-transition strategy: how to define your target, create a plan, and stop wasting energy on unfocused applications.

Press play now to get a grounded, step-by-step approach that helps you regain clarity and confidence, so you can move toward work that energises you and aligns with what you believe.

KEY POINTS AND TIMESTAMPS:
01:40 - Michelle's background and restructuring experiences
03:30 - The planted seed: discovering coaching
07:00 - The seed and gardening metaphor for career transition
09:48 - Navigating uncertainty during transitions
13:54 - Emotional regulation and building supportive practices
16:18 - Confidence dips and practical reflection strategies
21:10 - Strategy, clarity and planning your career direction
27:45 - Resources, final questions and concluding insights

MEMORABLE QUOTE:
"Use your network more. Look through your network, not just at it, because every connection opens the door to many more."

VALUABLE RESOURCES:
Michelle's website: https://mschafercoaching.ca/
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor

The AI Breakdown: Daily Artificial Intelligence News and Discussions
Code AGI is Functional AGI (And It's Here)

The AI Breakdown: Daily Artificial Intelligence News and Discussions

Play Episode Listen Later Jan 18, 2026 24:10


This episode argues that the most important AGI threshold has already been crossed. As coding agents learn to reason, iterate, and operate autonomously over long horizons, they unlock a form of functional general intelligence that matters for real work. Coding isn't just another domain—it's a universal lever that collapses the distance between idea and execution, reshaping how companies build, decide, and compete. The result isn't a gradual improvement, but a structural shift in how work gets done.

Readings from:
https://x.com/gradypb/status/2011491957730918510
https://x.com/danshipper/status/2011617055636705718

Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Zencoder - From vibe coding to AI-first engineering - http://zencoder.ai/zenflow
Optimizely Opal - The agent orchestration platform built for marketers - https://www.optimizely.com/theaidailybrief
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/
Robots & Pencils - Cloud-native AI solutions that power results - https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.

The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? sponsors@aidailybrief.ai

Tech Deciphered
72 – Our Children's Future

Tech Deciphered

Play Episode Listen Later Jan 18, 2026 64:12


What is our children's future? What skills should they be developing? How should schools be adapting? What will the fully functioning citizens and workers of the future look like? A look into the landscape of the next 15 years, the future of work with human and AI interactions, the transformation of education, the safety and privacy landscapes, and a parental playbook. Navigation: Intro The Landscape: 2026–2040 The Future of Work: Human + AI The Transformation of Education The Ethics, Safety, and Privacy Landscape The Parental Playbook: Actionable Strategies Conclusion Our co-hosts: Bertrand Schmitt, Entrepreneur in Residence at Red River West, co-founder of App Annie / Data.ai, business angel, advisor to startups and VC funds, @bschmitt Nuno Goncalves Pedro, Investor, Managing Partner, Founder at Chamaeleon, @ngpedro Our show: Tech DECIPHERED brings you the Entrepreneur and Investor views on Big Tech, VC and Start-up news, opinion pieces and research. We decipher their meaning, and add inside knowledge and context. Being nerds, we also discuss the latest gadgets and pop culture news. Subscribe To Our Podcast

Bertrand Schmitt (Introduction)
Welcome to Episode 72 of Tech Deciphered, about our children's future. What is our children's future? What skills should they be developing? How should schools be adapting to AI? What will the functioning citizens and workers of the future look like, especially in the context of the AI revolution? Nuno, what's your take? Maybe we start with the landscape.

Nuno Goncalves Pedro (The Landscape: 2026–2040)
Let's first frame it. What do people think is going to happen?
Firstly, that there's going to be a dramatic increase in productivity. A lot of numbers point that way: AI will enable labour productivity growth of 0.1 to 0.6% annually through 2040, a figure that could rise even more depending on the use of other technologies beyond generative AI, to as much as 0.5 to 3.4 percentage points annually, which would be ridiculous in terms of productivity enhancement. To be clear, we haven't seen it yet. But if those dramatic increases in productivity expected by the market materialise, then there will be job displacement. There will be people losing their jobs. There will be people who will need to be reskilled, and there will be a big shift similar to what happens in a significant industrial revolution, like the Industrial Revolution of the late 19th century into the 20th century. Other numbers quoted would say that 30% of US jobs could be automated by 2030, which is a silly number, 30%, and that another 60% would be tremendously altered: a lot of the tasks in those jobs would change. There's also a view that this is fundamentally a global phenomenon, that as much as 9% of jobs could be lost to AI by 2030. Question mark on whether that is a net number or a gross number: it might be a 9% gross loss, but then maybe there are other jobs that will emerge. It's very clear that in the landscape ahead of us, if there are any significant increases in productivity, there will be job displacement. There will be job shifting. There will be the need for reskilling. Therefore, on the downside, you would say there are going to be job losses. We'll have to reevaluate whether people should still work 5 days a week in general, or not. Will we actually work at all in 10, 20, 30 years? I think that's the doomsday scenario, what happens on that side of the fence.
I think on the positive side, there's also a discussion around the new jobs that will emerge: new jobs that maybe we don't understand today, new job descriptions that don't even exist yet, that will emerge out of this brave new world of AI.

Bertrand Schmitt
Yeah. I mean, let's not forget how we get to a growing economy. The measurement of a growing economy is GDP growth, and typically you can simplify it into two elements. One is the growth of the labour force; two, the rise of the productivity of that labour force, and that's about it. Either you grow the economy by increasing the number of people, which in most of the Western world is not really happening, or you increase productivity. We should not forget that growth of productivity is the backbone of growth for our economies, and that is what has enabled the rise in prosperity across countries. I always take that as a win, personally. That growth in productivity has happened over the past decades through all the technological revolutions, from more efficient factories to oil and gas to computers, to networked computers, to the internet, to mobile, and all the improvements in science, usually on the back of technological improvement. Personally, I welcome any improvement we can get in productivity because there is, at this stage, simply no other choice for a growing world in terms of growing prosperity. In terms of change, we can already have a look at the past. There are so many jobs today you could not have imagined would exist 30 years ago. Take the rise of the influencer, for instance: who could have imagined that 30 years ago? Take the rise of the small mom-and-pop e-commerce owner: who could have imagined that? Of course, there's the rise of IT as a profession; how few of us were there 30 years ago compared to today. A lot of change has already happened, and I think as a society, we need to welcome that.
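The growth decomposition Bertrand describes can be sketched in a few lines of arithmetic. This is an illustrative sketch only: the function and figures below are not from the episode, beyond reusing the 0.6% and 3.4% productivity-growth bounds quoted earlier.

```python
# Sketch of the decomposition: output = workers * output-per-worker,
# so GDP growth combines labor-force growth and productivity growth.

def gdp_growth(labor_growth: float, productivity_growth: float) -> float:
    """Multiplicative combination of the two growth components."""
    return (1 + labor_growth) * (1 + productivity_growth) - 1

# With a flat labor force (the Western-world scenario mentioned in the
# episode), GDP growth collapses to productivity growth alone:
low = gdp_growth(0.0, 0.006)   # 0.6% upper bound for generative AI
high = gdp_growth(0.0, 0.034)  # 3.4% upper bound including other technologies

print(f"{low:.3%}")   # 0.600%
print(f"{high:.3%}")  # 3.400%
```

For small rates the product term is negligible, which is why the decomposition is usually stated as a simple sum of the two growth rates.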
If we go back even longer, 100 years ago, 150 years ago, let's not forget: if I take a city like Paris, we used to have tens of thousands of people transporting water manually. Before we had running water in every home, we used to have boats going to the North Pole, or to the northern regions, to bring back ice and basically push it all the way across the Western world, because we didn't have fridges at the time. When we look back at all the jobs that got displaced, I would say, "Thank you." Thank you, because these were not such easy jobs. Change is coming, but change is part of the human equation. The industrial revolutions of the past 250 years, it's thanks to them that we have improvements in living conditions everywhere. AI is changing stuff, but change is a constant, and we need to adapt and adjust. At least on my side, I'm glad that AI will be able to displace some jobs that were not so interesting to do in the first place. Maybe not dangerous like in the past, because we are talking about replacing white-collar jobs, but at least repetitive jobs are definitely going to be on the chopping block.

Nuno Goncalves Pedro
What happens in terms of shift? We were talking about some numbers earlier. The World Economic Forum also has numbers that predict a gross job creation rate of 14% from 2025 to 2030 and a displacement rate of 8%, so I guess they're being optimistic: a net growth in employment. I think that optimism relates to this thesis that efficiency, in particular in production and industrial environments, might reduce labour there while increasing the demand for labour elsewhere, because there is a naturally lower cost base. If there's more automation in production, there's more disposable income for people to do other things and to focus more on their side activities.
Maybe, as I said before, not work 5 days a week, but work four or three or whatever it is. What are the jobs of the future? What are the jobs we see increasing? Obviously, there are a lot of jobs on the technology side, relating to AI (which is a little bit self-serving) and to everything that touches information technology, computer science, computer technology, computer engineering, et cetera. More broadly, electrical engineering and mechanical engineering might actually be more needed, because there is a broadening of all these points of contact with digital, with AI, and over time also with robots and robotics, so those jobs will increase. There's a thesis that other jobs, a little bit more related to agriculture, education, et cetera, might not see a dramatic impact: we will still need, I guess, teachers and people working on farms, et cetera. I think this assumes that the AI revolution will probably come much before the fundamental evolution that will come from robotics afterwards. Then there's obviously this discussion around declining roles. Anything that's fundamentally routine, like data entry, clerical roles, paralegals, routine manufacturing, anything that's very repetitive in nature, will be taken away. I have the personal thesis that there are very blue-collar jobs, like HVAC installation and maintenance, plumbing, et cetera, that will still be done by humans for a very long time because, although they appear to be repetitive, they're actually complex, and they require manual labour that cannot easily be done right now by robots as replacements for humans. Actually, I think there are blue-collar roles that will be on the increase rather than the decrease and will command a premium, because obviously they are apprenticeship roles, certification roles, and that will command a premium.
Maybe we're at the two ends. At one end are jobs that are very technologically driven and will necessarily increase, and at the other end are jobs that are very menial but necessarily need to be done by humans, and therefore will also command a premium.

Bertrand Schmitt
I think what you say makes a lot of sense. If you think about AI as a stack, my guess is that for the foreseeable future, the whole stack will grow. When I say stack, I mean everything from basic energy production (because we need a lot of energy for AI) up to all the computing infrastructure, to AI models, to AI training, to robotics. Across this whole stack, we'll see an increase in demand for expertise and workers. Even if a lot of this work will benefit from AI improvement, the boom is so large that it will bring a lot of demand for anyone working on any part of the stack. Some of it is definitely blue-collar: when you have to build a data centre or a power station, that requires a lot of blue-collar work. I would say, personally, I'm absolutely not a believer in the 3- or 4-day work week. I don't believe a single second in that socialist paradise, if you want to call it that. I think that's not going to change; I would say today we can already see it breaking. I mean, if you take Europe, most European countries have a big issue with pensions. The question is more about increasing how long you are going to work, because, financially speaking, the equation is not there. Personally, I don't think AI will change any of that. I agree with you on some jobs, from electricians to gas piping and such: there will still be demand, and robots are not going to help on those jobs soon. There will be a big divergence between all that can be automated, done by AI and robots and becoming cheaper and cheaper, and the stuff that requires a lot of human, manual work.
I don’t know if it will become more expensive, but definitely, proportionally, in comparison, it will look so expensive that you will have second thoughts about making that investment to add this or add that. I can see that when you have your own home: so many costs around products. You buy this new product, you add it to your home. It can be a water heater or something, built in a factory, relatively cheap. Then you see the installation cost, the maintenance cost. It’s many times the cost of the product itself. Nuno Goncalves PedroMaybe it’s a good time to put a caveat into our conversation. Roy Amara was a futurist who came up with Amara’s Law: we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run. I prefer my own law, which is: we tend to overestimate the speed at which we get to a technological revolution and underestimate its impact. I think it’s a little bit like that. I think everyone now is like, “Oh, my God, we’re going to be having the AI overlords taking over us, and AGI is going to happen pretty quickly,” and all of that. I mean, AGI will probably happen at some point. We’re not really sure when. I don’t think anyone can tell you. I mean, there’re obviously a lot of ranges going on. Back to your point, for example, on the shift of the work week and how we work. Just to be very clear, we didn’t use to have 5 days a week of work and a 2-day weekend. If we go back to religions, there was definitely the Sabbath back in the day, one day off, the day of the Lord and the day of God. Then we went to a 2-day weekend. I remember going to Korea back in 2005, and I think Korea shifted officially to a 5-day working week and 2-day weekend for some of the larger businesses, et cetera, in 2004. It actually took another several years for it to be pervasive in society. This is South Korea, a developed market. We might at some point be moving to 4 days a week. 
Maybe France was ahead of the game. I know Bertrand doesn’t like this, the 35-hour week. Maybe we will have another shift in what defines the working week versus not, what defines what people need to do in terms of efficiency and how they work, and all of that. I think it’s probably just going to take longer than we think. There’re some countries already doing it. I was reading that maybe Finland was already thinking about moving to 4 days a week. There’re a couple of countries already working on it. Certainly, there’re companies already doing it as well. Bertrand SchmittYeah, I don’t know. I’m just looking at the financial equation of most countries. The disaster is so big in Western Europe, in the US. So much debt is out there that needs to get paid that I don’t think any country today, unless there is a complete reversal of its finances, will be able to make a big change. You could argue that if we are in such a situation, it might be because we went too far in benefits, in vacation, in work days versus weekends. I’m not saying we should roll back, but I feel that at this stage, the proof is in the pudding. The finances of most developed countries are broken, so I don’t see a change coming up. Potentially the other way around, people having to work more, unfortunately. We will see. My point is that AI will have to be so transformational for the productivity of countries, and countries will have to go back to finding their way in terms of financial discipline, to reach a level where we can truly profit from that. From my perspective, we have time to think about it, in 10, 20 years. Right now, it’s BS at this stage of the discussion. Nuno Goncalves PedroYeah, there’s a dependency, Bertrand, which is that there need to be dramatic increases in productivity that create an expansion of the economy. 
Once that expansion is captured by, let’s say, government or the state, it needs to be willingly fed back into society, which is not a given. There’re some governments who are going to be like, “No, you need to work for a living.” Tough luck. There’re no handouts, there’s nothing. There are going to be other governments that will be pressured as well. I mean, even in a more socialist Europe, so to speak, there’re now a lot of pressures from very far-right, even extreme positions on what people need to do for a living and how much the state should actually intervene in terms of minimum salaries, et cetera, and social security. To your point, the economies are not doing well in and of themselves. Anyway, there would need to be tremendous expansion of the economy and willingness by the state to give back to its citizens, which is also not a given. Bertrand SchmittAnd good financial discipline as well, before we reach all these three: reaping the benefits in a tremendous way, way above trend line, good financial discipline, and then some willingness to send back. I mean, we can talk about a dream. I think having some of this discussion so early is like starting to talk about the benefits of the aeroplane industry in 1910 or 1915, a few years after the Wright brothers’ flight, and making a decision based on what the world will be 30 years from now when we reap those benefits. This is just not reasonable thinking. I remember seeing companies, OpenAI and others, trying to push this narrative. It was just a political agenda, nothing else. It was, “Let’s try to make AI look so nice and great in the future, so you don’t complain in the short term about what’s happening.” I don’t think this is a good discussion to have for now. Let’s be realistic. 
Nuno Goncalves PedroJust for the sake of sharing it with our listeners, apparently there’re a couple of countries that have moved towards something a bit lower than 5 days a week. Belgium, I think, has legislated the ability for you to compress your work week into 4 days, where you could do 10 hours for 4 days, so 40 hours. The UAE has some policy for government workers, 4.5 days. Iceland has something around 35 to 36 hours, which is close to France’s 35-hour week. Lithuania has something for parents. Then there are trials all over the shop: the United Kingdom, my own Portugal, of course, Germany, Brazil, and South Africa, and a bunch of other countries, so interesting. There’s stuff going on. Bertrand SchmittFor sure. I mean, France managed to bankrupt itself playing with the 35-hour work week since what, 2000 or something. I mean, yeah, it’s a choice of financial suicide, I would say. Nuno Goncalves PedroWonderful. The Future of Work: Human + AI Maybe moving a little bit towards the future of work and the coexistence of human and AI work, I think the more positive thesis that exists in the market, the one that leads to net employment growth and net employment creation, as we were saying, with shifting of professions, reskilling, and new professions that will emerge, is the notion that humans will need to continue working alongside machines. I’m talking about robots, I’m also talking about software. Basically, software can’t just always run on its own, and therefore software serves as a layer of augmentation; humans become augmented by AI, and therefore they can be a lot more productive, and we can be a lot more productive. All of that would actually lead to a world where the efficiencies and the economic creation are incredible. We’ll have an unparalleled industrial evolution in our hands through AI. That’s one way of looking at it. 
We certainly at Chameleon, that’s how we think through AI and the AI layers that we’re creating with Mantis, which is our in-house platform at Chameleon: it’s augmenting us. Obviously, the human is still running the show at the end, making the toughest decisions, the more significant impact with the entrepreneurs that we back, et cetera. AI augments us, but we run the show. Bertrand SchmittI totally agree with that perspective, that first AI will bring a new approach, human plus AI. Here you really have two situations. Are you a knowledgeable user? Do you know your field well? Are you an expert? Are you an IT expert? Are you a medical doctor? Do you find your best way to optimise your work with AI? Are you knowledgeable enough to understand and challenge AI when you see weird output? You have to be knowledgeable in your field, but also knowledgeable in how to handle AI, because even experts might say, “Whatever AI says.” My guess is that those will be the users that benefit most from AI. Novices, I think, are in a bit of a tougher situation, because if you use AI without truly understanding it, it’s like laying foundations on sand. Your stuff might crumble down the road, and you will have no clue what’s happening. Hopefully, you don’t put anyone in physical danger, but that’s more worrisome to me. Some people will talk about the rise of vibe coding, for instance. I’ve seen AI be so useful for improving coding in so many ways, but personally, I don’t think vibe coding is helpful, beyond doing a quick prototype or some stuff. To put down some serious foundation, I think it’s near useless if you have a pure vibe coding approach; obviously, to each their own. I think the other piece of the puzzle is not just to look at human plus AI. There will definitely be the other side as well, which is pure AI, pure AI replacement. I think we start to see that with autonomous cars. We are close to being there. 
Here we’ll be in a situation of maybe there is some remote control by some humans, maybe there is local control. We are talking about huge-scale replacement of some human activities. I think in some situations, let’s talk about work farms, for instance. That’s quite a special term, but basically it describes work that is very repetitive in nature and requires a lot of humans. Today, if you do a loan approval, if you do an insurance claim analysis, you have hundreds, thousands, millions of people doing this job in Europe, in the US, or remotely outsourced to other countries like India. I think some of these jobs are fully at risk of being replaced. Would it be 100% replacement? Probably not. But a 9:1, 10:1 replacement? I think it’s definitely possible, because these jobs have been designed, by the way, to be repetitive, to follow some very clear set of rules, to improve the rules, to remove any doubt if you are not sure. I think some of these jobs will be transformed significantly. I think we see two sides. Some people will become more efficient controlling an AI, being able to do the job of two people at once. On the other side, we see people who have much less control over their life, basically, and whose job will simply disappear. Nuno Goncalves PedroTwo points I would like to make. The first point is that we’re talking about a state of AI that we got to, and we mentioned this in previous episodes of Tech Deciphered, through brute force: dramatically increased data availability, a lot of compute, lower network latencies, and all of that has led us to where we are today. But it’s brute force. The key thing here is brute force. Therefore, when AI acts really well, it acts well through brute force, through seeing a bunch of things that have happened before. For example, in the case of coding, it might still outperform many humans in coding in many different scenarios, but it might miss edge cases. 
It might actually not be as perfect and as great as one of these developers who has been doing it for decades, who has this intuition and is a 10X developer. In some ways, I think what got us here is maybe not what’s going to get us to the next level of productivity, which is the unsupervised learning piece, the actual real-world learning piece, where you go into the world and figure stuff out. That world is emerging now, but it’s still not there in terms of AI algorithms and what’s happening. Again, a lot of what we’re seeing today is the outcome of the brute force movement that we’ve had over the last decade, decade and a half. The second point I’d like to make is, to your point, Bertrand, you were going really well through: okay, if you’re a super experienced subject-matter expert, the way you can use AI is like, wow! Right? I mean, you are much more efficient, right? I was asked to do a presentation recently. When I do things in public, I don’t like to do keynotes unless I can use my packaged stuff; there are like six, seven presentations that I have prepackaged, and I can adapt around that. But if it’s a totally new thing, I don’t like to do it as a keynote because it requires a lot of preparation. Therefore, I prefer to do a fireside chat or a panel or whatever. I got asked to do something, a little bit of what is taking us to this topic today, around what’s happening to our children and all of that, and I was like, “God! I need to develop this from scratch.” The honest truth is, if you have domain expertise across many areas, you can do it very quickly with the aid of different tools in AI, anything from Gemini, even with Nano Banana, to ChatGPT and other tools that are out there for you, and framing how you would do that. 
But the problem then exists for people that are just at the beginning of their careers, people that have very little expertise and experience, and people that are maybe coming out of college, where their knowledge is mostly theoretical. What happens to those people? Even in computer engineering, even in computer science, even in software development, how do those people get to the next level? I think that’s one of the interesting conversations to be had. What happens to the recent graduate or the recent undergrad? How do those people get the expertise they need to go to the next level? Can they just be replaced by AI agents today? What’s their role in terms of the workforce, and how do they fit into that workforce? Bertrand SchmittNo, I mean, that’s definitely the biggest question. I think that for a lot of positions, if you are really knowledgeable, good at your job, if you are that 10X developer, I don’t think your job is at risk. Overall, you always have some exceptions, some companies going through tough times, but I don’t think it’s an issue. On the other end, for sure, recent new graduates will face more trouble learning on their own, starting their career, and getting to that 10X productivity level. But at the same time, let’s also not kid ourselves. If we take software development, this is a profession whose number of graduates increased tremendously over the past 30 years. I don’t think everyone basically has the talent to really make it. Now that you have AI, for sure, the bar to justify why you should be there, why you should join this company, is getting higher and higher. Being just okay won’t be enough to get you a career in IT. You will need to show that you are great or have the potential to be great. That might make things tough for some jobs. At the same time, I certainly believe there will be new opportunities that were not there before. 
People will definitely have to adjust to that new reality, learn and understand what’s going on, what the options are, and also try to be, very early on, very confident at using AI as much as they can, because for sure, companies are going to only hire workers that have shown their capacity to work well with AI. Nuno Goncalves PedroMy belief is that it generates new opportunities for recent undergrads, et cetera, to build their own micro or nano businesses. To your point, maybe getting jobs because they’ll be forced to move faster within their jobs and do fewer menial and repetitive activities and be more focused on actual dramatic intellectual activities immediately from the get-go, which is not a bad thing. Their acceleration into knowledge will be even faster. I don’t know. It feels to me maybe there’s a positivity to it. Obviously, if you’ve studied at a big school, et cetera, there will be some positivity coming out of that. The Transformation of Education Maybe this is a good segue to education. How does education change to adapt to a new world where AI is a given? It’s not like I can check if you’re faking it on your homework, or in a remote examination or whatever, whether you’re using tools or not; you’re going to use these tools. What happens in that case, and how does education need to shift in this brave new world of AI augmentation and AI enhancement of students? Bertrand SchmittYes, I agree with you. There will be new opportunities. I think people need to be adaptable. What used to be an absolutely perfect career choice might not be anymore. You need to learn what changes are happening in the industry, and you need to adjust to that, especially if you’re a new graduate. Nuno Goncalves PedroMaybe we’ll talk a little bit about education, Bertrand, and how education would fundamentally shift. I think one of the things that’s been really discussed is: what are the core skills that need to be developed? 
What are the core skills that will be important in the future? I think critical thinking is probably more important than ever. The ability to actually assimilate information and discern which information is correct or incorrect, and which information can lead you to a conclusion or not, for example, I think is more important than ever. The ability to assimilate a bunch of pieces of information and make a decision or have an insight or foresight out of that information is very, very critical. The ability to be analytical around how you look at information and to really distinguish fact from opinion, I think, is probably quite important. Maybe moving away more and more from memorisation, from just cramming information into your brain like we used to do in college, where you had to know every single algorithm for whatever. It’s like, “Who gives a shit? I can just go and search it.” These shifts are not simple, because I think education, in particular in the last century, has maybe been too focused on knowing more and more, on learning this knowledge. Now it’s more about learning how to process the knowledge rather than learning how to apprehend it. Because the apprehension doesn’t matter as much; you can have this information at any point in time. The information is available to you at the touch of a finger or voice or whatever. But the ability to then use the information to do something with it is not. That’s maybe where you start distinguishing the different degrees of education and how things are taught. Bertrand SchmittHonestly, what you just said or described could apply to the changes we went through over the past 30 years. Just using internet search has for sure tremendously changed how you can do any knowledge worker job. Suddenly you have the internet at your fingertips. You can search about any topic. You have direct access to Wikipedia or something equivalent in any field. 
I think some of this we already went through, and I hope we learned the consequences of those changes. I would say what is new is the way AI itself works, because when you use AI, you realise that it can utter complete bullshit to you in a very self-assured way. It’s a bit more scary than it used to be, because in the past, the algorithm trying to present you the most relevant stuff was not trying to present you the truth. It’s a list of links. Maybe it was more the number one link versus number 100. But ultimately, it’s for you to form your own opinion. Now you have some chatbot that’s going to tell you that for sure this is the way you should do it. Then you check more, and you realise, no, it’s totally wrong. It’s definitely a change in how you have to apprehend this brave new world. Also, with these AI tools, the big change, especially with generative AI, is their ability to give you the impression they can do the job at hand by themselves when usually they cannot. Nuno Goncalves PedroIndeed. There’s definitely a lot happening right now that needs to fundamentally shift. Honestly, I think the problem is the education system is barely adapted to the digital world. Even today, if you studied at a top school like Stanford, et cetera, there’s stuff you can do online, there are more and more tools online. But the teaching process has been very centred on the syllabus, the teachers, later on the professors, and everything that’s around it, on in-class presence, with only minor adaptations. People are sometimes allowed to use their laptops in the classroom, et cetera, or their mobile phones. But it’s been done the other way around. The tools came later, and they got fed into the process. Now I think there need to be readjustments. If we did this ground up, from a digital-first or mobile-first perspective and an AI-first perspective, how would we do it? 
That changes how teachers and professors should interact with classrooms, the role of the classroom, the role of the class itself, the role of homework. A lot of people have been debating that. What do you want out of homework? Is it just that people cram information and whatever, or do you want people to show critical thinking in a specific, different manner? Some people even go one step further: there should be no homework. People should just show up in class, and homework should move into the class in some ways. Then what happens outside of the class? What are people doing at home? Are they learning tools? Are they learning something else? Are they learning to be productive in responding to teachers, but obviously AI-augmented in doing so? I mean, it’s still very unclear what this looks like. We’re still halfway through the revolution, as we said earlier. The revolution is still in motion. It’s not realised yet. Bertrand SchmittI would quite separate higher education, university and beyond, from lower education, teenagers, kids. Because I think, up to the point you are a teenager or so, the school system should still be there to guide you, discovering and learning and being with your peers. What is new is that, again, at some point, AI could potentially do your job, do your homework. We faced similar situations in the past with the rise of Wikipedia, online encyclopedias, and the like. But this is quite dramatically different. Now someone could write your essays, could answer your maths work. I can see some changes where what you call homework is going to be classwork instead. No work at home, because no one can trust that you did it yourself anymore going forward, but you will have to do it in the classroom, maybe spend more time at school so that we can verify that you really did the work yourself. I think there is real value in making sure that you can still think by yourself. 
The same way with the rise of calculators 40 years ago, I think it was the right thing to do to say, “You know what? You still need to learn the basics of doing calculations by hand.” Yes, I remember myself as a kid thinking, “What the hell? I have a calculator. It’s working very well.” But it was still very useful, because you can think in your head, you can solve complex problems in your head, you can check whether some output coming from a calculator is right or wrong. There was real value in still learning the basics. At the same time, it was also right to say, “You know what? Once you know the basics, yes, for sure, the calculator will take over.” I think that was the right balance that was put in place with the rise of calculators. We need something similar with AI. You need to be able to write by yourself, to do stuff by yourself. At some point, you have to say, “Yeah, you know what? Those long essays that we asked you to do for the sake of doing long essays? What’s the point?” At some point, yeah, that would be a true question. For higher education, I think, personally, it’s totally ripe for full disruption. You talk about the traditional system trying to adapt. I think we start to be at the stage where it should be the other way around: it should be restarted from the ground up, because we simply have different tools, different ways. At this stage, many companies, if you take [inaudible 00:33:01] for instance, started to recruit people after high school. They say, “You know what? Don’t waste your time in universities. Don’t spend a crazy shitload of money to pay for an education that’s more or less worthless.” Because it used to be a way to filter people. You go to a good school, you have a stamp that says, “This guy is good enough, knows how to think.” But is it so true anymore? 
I mean, now that universities have increased enrolment so many times over, your university degree doesn’t prove much in terms of your intelligence or your capacity to work hard, quite frankly. If the universities are losing the value of their stamp and keep costing more and more, I think it’s a fair question to say, “Okay, maybe this is not needed anymore.” Maybe now companies can directly find the best talents out there, train them themselves, and make sure that ultimately it’s a win-win situation. If kids don’t have to take out big loans anymore, companies don’t have to pay them as much, and everyone is winning. I think we have reached a point of no return in terms of the value of university degrees, quite frankly. Of course, there are some exceptions. Some universities have incredible programs, incredible degrees. But as a whole, I think we are reaching a point of no return: too expensive, not enough value in the degree, not a filter anymore. Ultimately, I think there is a case to be made for companies to go back directly to the source, to high school. Nuno Goncalves PedroI’m still not ready to just say higher education doesn’t have a role. I agree with the notion that there’s a continuous education role that needs to be filled in a very different way. Going back to K-12, I think the learning of things is pretty vital: that you learn, for example, how to write, that you learn cursive; all these things are important. I think the role of the teacher, and maybe actually even later on of the professors in higher education, is to teach people the critical information they need to know for the area they’re in, basic math, advanced math, the big thinkers in philosophy, whatever it is that you’re studying, and then actually teach the students how to use the tools that they need, in particular in K-12, so that they more rapidly apprehend knowledge, more rapidly can do exercises, more rapidly do things. 
I think we’ve had a static view on what you need to learn for a while. That’s, for example, in the US, where you have AP classes, advanced placement classes, where you could be doing math and you could be doing AP math. You’re like, dude. In some ways, I think the role of the teacher and the interaction with the students needs to go beyond just the apprehension of knowledge. It still has to include the apprehension of knowledge, but it needs to extend to the apprehension of tools, and then to the application of, as we discussed before, critical thinking, analytical thinking, creative thinking. We haven’t talked about creativity for all, but obviously the creativity that you need to have around certain problems and its induction into the process is critical, particularly in young kids, how they’re developing their learning skills, and then actually accelerating learning. In that sense, I’m not sure I’m willing to say higher education is dead. I do question this mass production of higher education that we have, in particular in the US, which is incredibly costly. A lot of people in Europe probably don’t see how costly higher education is because, being educated in Europe, they paid some fee. A lot of higher education in Europe is still, to a certain extent, subsidised or provided by the state. There is a high degree of subsidisation in it, so it’s not really as expensive as you’d see in the US. But someone spending 200-300K to go to a top school in the US to study for four years for an undergrad? That doesn’t make sense. We’re talking about tuition alone. How does that work? Why is it so expensive? Even if I’m a Stanford or a Harvard or a University of Pennsylvania or whatever Ivy League school, to command that premium, I don’t think, makes much sense. To your point, maybe it is about thinking through higher education in a different way. Technical schools also make sense. 
Your ability to learn and continue your education also makes sense. You can be certified; there are certifications all around that also make sense. I do think there’s still a case for higher education, but it needs to be done in a different mould, and obviously the cost needs to be reassessed, because it doesn’t make sense for you to be as dramatically in debt as you are today in the US. Bertrand SchmittI mean, for me, that’s where I’m starting when I’m saying it’s broken. You cannot justify this amount of money except for a very rare and stratified set of job opportunities. That means for a lot of people, the value of this equation will be negative. It’s like some new, indentured class of people who owe a lot of money and have no way to get rid of this loan. Sorry, there are some ways, like joining a government task force, working for the government, so that at some point your loans will be forgiven. Some people are going to just go after government jobs for that reason alone, which is quite sad, frankly. I think we need a different approach. Education can be done, has to be done, cheaper, and should be done differently. Maybe it’s just regular on-the-job training, maybe it is on the side, a learn-by-night type of approach. I think there are different ways to think about it. Also, it can be very practical. I don’t know about you, but there are a lot of classes that are not really practical or not very tailored to the path you have chosen. Don’t get me wrong, there is always value in seeing all the stuff, in getting a sense of the world around you. But this has a cost. If it were free, different story. But nothing is free. Your parents might think it’s free, but at the end of the day, it’s their taxes paying for all of this. The reality is that it’s not free. It’s costing a lot of money at the end of the day. I think we absolutely need to do a better job here. I think the internet and now AI make this a possibility. 
I don’t know about you, but personally, I’ve learned so much through online classes, YouTube videos, and the like, that it never ceases to amaze me how much you can learn thanks to the internet, and keep up to date in so many ways on some topics. Quite frankly, there are some topics where there is not a single university that can teach you what’s going on, because we’re talking about stuff that is so precise, so focused, that no one is building a degree around it. There is no way. Nuno Goncalves PedroI think that makes sense. Maybe let’s bring it back to core skills. We’ve talked about a couple of core skills, but maybe just to structure it a little bit for you, our listener. I think there’s a big belief that critical thinking will be more important than ever. We already talked a little bit about that. I think there’s a belief in analytical thinking: the ability to, again, distinguish fact from opinion, the ability to distinguish elements from different data sources and make sure that you see what those elements actually are in a relatively analytical manner, actually the ability to extract data in some ways. Active learning, proactive learning, and learning strategies: the ability to proactively learn, proactively search, be curious, and search for knowledge. Complex problem-solving, we also talked a little bit about; that goes hand in hand normally with critical thinking and analysis. Creativity, we also talked about. I think originality and initiative will be very important for a long time. I’m not saying AI at some point won’t be able to emulate genuine creativity. I wouldn’t go as far as saying that, but for the time being, it has tremendous difficulty doing so. Bertrand SchmittBut you can use AI in creative endeavours. Nuno Goncalves PedroOf course, no doubt. Bertrand SchmittYou can do stuff you would otherwise be unable to do: create music, create videos, create stuff that would be very difficult. I see that as an evolution of tools. 
It’s like how cameras are now so cheap that you can create world-class quality videos; if you’re a student who wants to learn cinema, you can do it truly on the cheap. But now that’s the next level: you don’t even need actors, you don’t even need a real camera. You can start to make movies. It’s amazing as a learning tool, as a creative tool. It’s for sure a new art form, in a way, that we have seen expanding on YouTube and other places, and the same goes for creating new images, new music. I think AI can actually be a tool for expression and for creativity, even in its current form.

Nuno Goncalves Pedro
Absolutely. A couple of other skills that people would maybe call soft skills, but that I think are incredibly powerful and very distinctive from machines. Empathy: the ability to figure out how the other person is feeling and why. Adaptability, openness, flexibility: the ability to drop something and go a different route, to be intellectually honest and recognise this is the wrong way and the wrong angle. Last but not least, on the positive side, tech literacy. A lot of people say, oh, we don’t need to be tech literate. Actually, I think this is a moment in time where you need to be more tech literate than ever. It’s almost a given, almost table stakes, that you have some tech literacy. What matters less? I think memorisation, the cramming of information and using your brain as a library just for the sake of it, will probably matter less and less. If a subject or a class is solely focused on cramming in information, I feel that’s probably the wrong way to go. I saw some analysis saying that the management of people is less and less important. I actually disagree with that.
I think in the interim, because of what we were discussing earlier, subject-matter experts at the top end can do a lot of stuff by themselves and therefore have fewer people working for them, because they become a little bit more like superpowered individual contributors. But I feel that’s a blip rather than what’s going to happen over time. I think collaboration is going to be a key element of what needs to be done in the future. I don’t see that changing, and therefore management needs to be embedded in it. What other skills should disappear, or what other skills are less important to develop, I guess?

Bertrand Schmitt
Rote learning. I’ve never, ever been a fan; that one for sure. But at the same time, I want to make sure that we still learn about history or geography. What we don’t want is that stupid rote learning. I still remember as a teenager having to learn the list of all 100 French departments. I mean, who cared? I didn’t care about knowing the biggest cities of each French department; it was useless to me. But at the same time, geography in general, history in general, there is a lot to learn from the past and from the current world. I think we need to find the right balance. The details, the long lists, might not be that necessary; at the same time, for the long arc of history, for how our world got where it is today, I think there is a lot of value. You talked about analysing data. I think this one is critical, because the world is generating more and more data and we need to benefit from it. There is no way we can benefit from it if we don’t understand how data is produced, what data means, if we don’t understand the basics of statistical analysis. Some of this is definitely critical. But as for what we should do less of, beyond rote learning, I don’t know, honestly. I don’t think the core should change so much. But the tools we use to learn the core, yes, those probably should definitely improve.
Nuno Goncalves Pedro
One final debate, maybe, to close this chapter on education and skill building. There’s been a lot of discussion around specialisation versus generalisation, specialists versus generalists. For a very long time, the world has gone down a route that basically frames specialisation as a great thing. Both of us have lived in Silicon Valley; I still do, but we both lived there for a significant period of time. It’s the centre of the universe in terms of specialisation: you get more and more specialised. I think we’re going into a world that becomes a little bit different. It becomes a little bit like what Amazon calls athletes, right? The T-shaped and Pi-shaped people get the most value: you’re broad on top, a very strong generalist, with a lot of great soft skills around management and empathy and all that stuff, and then you might have one or two areas of subject-matter expertise. It could be business development and sales, or corporate development and business development, or product management and something else. I think those are the winners of the future. The young winners of the future are going to be more and more T-shaped or Pi-shaped, if I had to make a guess. Specialisation matters, but maybe not as much as it matters today. It matters in the sense that you still have to have spikes in certain areas of focus, but I’m not sure you keep getting more and more specialised in the area you’re in. I’m not sure that’s necessarily how humans create the most value in their arena of professional deployment and development, and therefore I’m not sure education should be more and more specialised just for the sake of it. What do you think?

Bertrand Schmitt
I think that’s a great point. I would say I could see an argument for both. There is always some value in being truly an expert on a topic, so that you can keep digging, keep developing the field.
You cannot develop a field without people focused on developing it. I think that one is here to stay. At the same time, I can see how in many situations, combining knowledge of multiple fields can bring tremendous value. I think it’s a balance: we still need some experts, and at the same time there is value in being quite horizontal in terms of knowledge. What is still very valuable is the ability to drill through whenever you need to, and as we said, that’s actually much easier than before. That, for me, is a big difference. I can see how now you can drill through on topics that would have been very complex to go into; you would have had to read a lot of books, watch a lot of videos, potentially get a whole new education before you grasped much about a topic. Well, now, thanks to AI, you can drill very quickly into topics of interest to you. I think that can be very valuable. Again, if you just do that blindly, that’s asking for trouble. But if you have some knowledge in the area, if you know how to deal with AI, at least today’s AI and its constraints, there is real value you can deliver thanks to the ability to drill through when you need to. Personally, one thing I’ve seen is that some people who are generalists have lost this ability: they’ve lost the ability to drill through on a topic, to become expert on it very quickly. I think you need that. If you’re a VC, you need to analyse opportunities, you need to discover a new space very quickly, and as we said, some stuff can move much quicker than before. I’m always careful now when I see pure generalists, because one thing I notice is that they don’t know how to do much of anything any more. That’s a risk. We have examples of very, very successful people, take an Elon Musk, take a Steve Jobs, who have this ability to drill through to the very end of any topic, and that’s a real skill. Sometimes I hear people say, you should trust the people below you.
They know better on this and that, and you should not question experts and so on. Hey, guys, how is it that they managed to build such successful companies? It is their ability to drill through and challenge hardcore experts. Yes, they will bring in top people in the field, but they have an ability to learn a new space quickly, to drill through on some very technical topics and challenge people the right way. Challenge, done smartly. Not the “I don’t care, just do it in 10 days”; no, going in smartly, showing people the options, learning enough in the field to be dangerous. I think that’s a very, very important skill to have.

Nuno Goncalves Pedro
Maybe switching to the dark side and talking a little bit about the bad stuff. I think a lot of people have these questions. There’s been a lot of debate around ChatGPT; I think there are still a couple of court cases going on, including a suicide case I was recently made a bit privy to, of a young man who killed himself, and OpenAI and ChatGPT as a tool are currently really under the magnifying glass: are people getting confused about AI because AI looks so similar to us, et cetera?

The Ethics, Safety, and Privacy Landscape

Maybe let’s talk about the ethics, safety and privacy landscape a little bit and what’s happening. Sadly, AI will also bring about a world that still has a lot of biases, at scale. Let’s not forget that AI is using data, and data has biases; the models being trained on this data will have biases too. We’re also seeing, with AI, the ability to create things that are fake: deepfakes in video and pictures, et cetera. How do we, as a society, start dealing with that? How do we, as a society, start dealing with all the attacks that are going on?
On the privacy side, there’s the ability for these models and tools to have memory of the conversations we’ve had with them, to have context on what we said before and to be able to act on that. How is that information being farmed? How is that data being used, and for what purposes? As I said, this is the dark side of our conversation today; I think we’ve been pretty positive until now. But in this world, I think things are going to get worse before they get better. Obviously, there’s a lot of money being thrown at the rapid evolution of these tools. I don’t see moratoriums or bans on tools coming anytime soon. The world will need to adapt very, very quickly. As we’ve talked about in previous episodes, regulation takes a long time to adapt, except in Europe, which obviously regulates maybe way too fast on technology, and maybe not really on use cases and user flows. But how do we deal with this world that is clearly becoming more complex?

Bertrand Schmitt
On the European topic, I believe Europe should focus on building rather than trying to censor, control and regulate. But going back to your point, there are some very tough use cases. When you think about voice cloning, for instance: grandparents believing that their kids are calling them, that they have been kidnapped when there is nothing to it, and being extorted. AI generating deepfakes that enable sextortion, that sort of stuff. It’s horrible, obviously. I’m not for regulation here, to be frank; I think we should, for sure, prosecute to the full extent of the law, and the law already has a lot of tools to deal with this type of situation. But I can see some value in trying to prevent this in some tools. If you are great at building tools that generate a fake voice, maybe you should make sure you are not helping scammers.
If you make it easy to generate images, you might want to make sure your tools cannot easily be used for creating deepfakes and sextortion. I think there are things that should be done by some providers to limit such terrible use cases. At the same time, the genie is out. There is also the part where the world will need to adapt: you cannot trust everything that is out there. What looks horrible might not be true. You need to think twice about some of what you see and what you hear. We need to adjust how we live and how we work, but also how we prevent this. New tools, I believe, will appear. We will learn, maybe, to be less trusting of some stuff, but it is what it is.

Nuno Goncalves Pedro
Maybe to follow up on that: I fully agree with everything you just said. We need to have these tools that will create boundary conditions as well. I think tech will need to fight tech in some ways, or we’ll need to find flaws in tech, but a lot of money needs to be put into it as well. My shout-out here, to the entrepreneurs listening to us: I think that’s an area that needs more and more investment, an area that needs more and more tooling platforms that help with this. It’s interesting, because that’s a little bit like how OpenAI was born. OpenAI was born to be a positive AI platform for the future, and then all of a sudden we’re like, “Can we have tools to control ChatGPT and all these things that are out there now?” How things have changed, I guess. But we definitely need a much more significant investment in these tools and platforms than we have today. Otherwise, I don’t see things evolving much better. There’s going to be more and more of this: more and more deepfakes, more and more lack of contextualisation. There are countries now that allow you to get married to something that’s not human.
It’s like you can get married to an algorithm or a robot or whatever. It’s like, what the hell? What’s happening now? It’s crazy. Hopefully, we’ll have more and more boundary conditions.

Bertrand Schmitt
Yeah, I think it will be a boom for cybersecurity, no question: tools to build a better trust system, or to detect the fakes. It’s not going to be easy, but that has been the game in cybersecurity for a long time. You get some new internet tools, some new internet products; you need to find a defence against them, in the constant war between attackers and defenders.

The Parental Playbook: Actionable Strategies

Nuno Goncalves Pedro
Maybe last but not least in today’s episode, the parent playbook: I’m a parent, what should I do? I’ll actually let you start first, Bertrand. I’m parent-like, but I am, sadly, not a parent, so I’ll let you start, and then I’ll share some of my perspectives as a parent-like figure.

Bertrand Schmitt
Yeah, as a parent of an 8-year-old, I would say that so far there’s no real difference from before. She will do some homework on an iPad, but beyond that, I cannot say I’ve seen much difference at this stage. I think it will come later, when there are different types of homework, when kids start to be able to use computers on their own. What I’ve seen, however, are some interesting use cases. When my daughter is not sure about a spelling, she simply asks Siri: “Hey, Siri, how do you spell this or that?” I didn’t teach her that; all of this came on her own. She’s using Siri for a few things for schoolwork, in a very smart, useful way, and I’m quite surprised. It’s like, that’s great. She doesn’t need to ask me; she can ask by herself. She’s more autonomous. Why not? It’s a very efficient way for her to work and learn about the world. I did feel sad when she asked Siri if she was her friend; that does not feel right to me. But I would say so far, so good.
I’ve seen AI only as a useful tool, with very limited risk. At the same time, for sure, we don’t let our kid near any social media or the like. Some of this stuff is for sure dangerous. I think as a parent, you have to be very careful before authorising any social media. I guess at some point you have no choice, but you have to be very careful and very gradual, putting in a lot of controls and safety mechanisms. You talked about kids committing suicide. It’s horrible; as a parent, I don’t think you can have a bigger worry than that. Suddenly your kid goes crazy because someone bullied them online, because someone tried to extort them online, and this person could be someone at the same school or some scammer on the other side of the world. This is very scary. I think we need to have a lot of control over our kids’ digital lives, as well as being there for them on a lot of topics: keep drilling into them how a lot of this stuff online is not true, is fake, is not important; raise them to be critical of stuff and to share as much as possible with us as parents. We have to be very careful. But I would say that some of the most dangerous stuff so far is not really coming from AI; it’s a lot more social media in general. But AI is definitely adding another layer of risk.

Nuno Goncalves Pedro
From my perspective, having helped raise three kids and having a parent-like role today, what I would say is that I would go back to the skills I was talking about before, and I would work on developing those skills.
Skills that relate to curiosity, to analytical behaviour, and at the same time to being creative: allowing for both, the left brain and the right brain, allowing the discipline and structure that comes with analytical thinking to go hand in hand with doing things in a very, very different way, experimenting and failing and repeating things again. All the skills I mentioned before; focus on those skills. I was very fortunate in my parental unit; my father and my mother were together all their lives, my father sadly passing away five years ago, and they were very, very different. My mother was more of a hacker in mindset: someone who was very curious, a medical doctor, who allowed me to experiment and to be curious about things around me, who didn’t simplify interactions with me, saying it as it was in the language that was used for that particular purpose, and who allowed me to interact with her friends, who were obviously adults. And then on the other side I had my father, someone who was more disciplined, someone who was more ethical. I think that becomes more important: the ability to be ethical, the ability to have moral standing. I’m Catholic; there is a religious and moral overlay to how I do things. Having the ability to portray that and pass it to the next generation, sharing with them what’s acceptable and what’s not acceptable, I think is pretty critical, even more critical than it was before. The ability to be structured, to do what you say, not just say a bunch of stuff and not do it. So, I think those things don’t go out of use, but I would really put a lot more focus on the ability to do critical thinking, analytical thinking, having creative ideas, and obviously creating a little bit of a hacker mindset; knowing how to cut corners to get to something is actually more and more important. The second part is, with all of this, the overlay of a growth mindset.
I feel you want a more flexible mindset rather than a fixed mindset. What I mean by that is not praising your kids or your grandchildren for being very intelligent or very beautiful, which are fixed, static things, but praising them for the effort they put into something, for the learning that they put into something, for the process, raising the

Pioneering PAI: How Daniel Miessler's Personal AI Infrastructure Activates Human Agency & Creativity

Play Episode Listen Later Jan 18, 2026 148:11


Daniel Miessler shares his Personal AI Infrastructure (PAI) framework and vision for a future where single human owners are supported by armies of AI agents. He explains his TELOS system for defining purpose and goals, multi-layered memory design, and orchestration of multiple models and sub-agents. The conversation dives into cybersecurity impacts, from AI-accelerated testing to inevitable personalized spear-phishing and always-on defensive monitoring. Listeners will learn how scaffolding can turn frontier models into true digital assistants and even help reshape their own working habits.

LINKS:
- PAI principles on GitHub README
- Daniel Miessler about page
- How Miessler's projects fit together
- AI changes predictions for 2026
- Fabric open-source AI framework
- Personal AI Infrastructure GitHub repository
- Current definition of AGI article
- Why we'll have AGI by 2028
- RAID AI definitions framework article
- Unsupervised Learning newsletter signup
- Daniel Miessler LinkedIn profile

Sponsors:
MongoDB: Tired of database limitations and architectures that break when you scale? MongoDB is the database built for developers, by developers—ACID compliant, enterprise-ready, and fluent in AI—so you can start building faster at https://mongodb.com/build
Serval: Serval uses AI-powered automations to cut IT help desk tickets by more than 50%, freeing your team from repetitive tasks like password resets and onboarding. Book your free pilot and guarantee 50% help desk automation by week four at https://serval.com/cognitive
MATS: MATS is a fully funded 12-week research program pairing rising talent with top mentors in AI alignment, interpretability, security, and governance. Apply for the next cohort at https://matsprogram.org/s26-tcr
Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai

PRODUCED BY: https://aipodcast.ing

The Deep Dive Radio Show and Nick's Nerd News
The Human Brain Soon To Be Simulated...

The Deep Dive Radio Show and Nick's Nerd News

Play Episode Listen Later Jan 17, 2026 4:44


We are one step closer to AGI, here...

Making Sense with Sam Harris
#453 — AI and the New Face of Antisemitism

Making Sense with Sam Harris

Play Episode Listen Later Jan 16, 2026 21:50


Sam Harris speaks with Judea Pearl about causality, AI, and antisemitism. They discuss why LLMs won't spawn AGI, alignment concerns in the race for AGI, Pearl's public life after the murder of his son Daniel, the post-October 7th shift toward open anti-Zionism, the overlap between anti-Zionism and antisemitism, the misuse of "Islamophobia," Israel's fracture under Netanyahu, confronting anti-Zionism in universities, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.

a16z
How Foundation Models Evolved: A PhD Journey Through AI's Breakthrough Era

a16z

Play Episode Listen Later Jan 16, 2026 57:06


The Stanford PhD who built DSPy thought he was just creating better prompts—until he realized he'd accidentally invented a new paradigm that makes LLMs actually programmable. While everyone obsesses over whether LLMs will get us to AGI, Omar Khattab is solving a more urgent problem: the gap between what you want AI to do and your ability to tell it, the absence of a real programming language for intent. He argues the entire field has been approaching this backwards, treating natural language prompts as the interface when we actually need something between imperative code and pure English, and the implications could determine whether AI systems remain unpredictable black boxes or become the reliable infrastructure layer everyone's betting on.

Follow Omar Khattab on X: https://x.com/lateinteraction
Follow Martin Casado on X: https://x.com/martin_casado

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg

Hosted by Simplecast, an AdsWizz company. 
See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The Jim Rutt Show
EP 329 Worldviews: David Krakauer

The Jim Rutt Show

Play Episode Listen Later Jan 15, 2026


In the inaugural episode of a new series, Jim talks with David Krakauer about his intellectual formation and worldview. They discuss what woke up as David this morning, his commitments to chance and pattern seeking, his epiphany about the idea of the idea at age 12 or 13, his perverse attraction to the arcane and difficult, evolution as integral to intelligence, the risk-averse character of scholars and the sociology of science, the Santa Fe Institute's attempt to maintain revolutionary science, the Ouroboros concept challenging foundationalism in epistemology, the standard model of physics as foundational versus the view that you can establish foundations anywhere, string theory as a slowly dying pseudoscience, whether beauty is a useful guide in science, emergence and broken symmetries, Phil Anderson's "More is Different" paper, the Wigner reversal and the shift from law to initial conditions, rejecting both weak and strong emergence, effective theories and causally justified concepts, downward causality, micrograining versus coarse graining, the distinction between abiotic and biotic systems, games and puzzles as model systems for complexity, combinatorial solution spaces, heuristics as dimensional reducers and potentially the golden road to AGI, Isaiah Berlin's influence on David's worldview, negative versus positive liberties, value pluralism and historicity, the Fermi paradox and the possibility of alien life, the rational versus the irrational in human life, and much more.

Episode Transcript
JRS EP 192 - David Krakauer on Science, Complexity and AI
JRS EP10 - David Krakauer: Complexity Science
"A Minimum Viable Metaphysics," by Jim Rutt
"More Is Different," by P.W. Anderson
The Emergence of Everything, by Harold Morowitz

David Krakauer's research explores the evolution of intelligence and stupidity on Earth. 
This includes studying the evolution of genetic, neural, linguistic, social, and cultural mechanisms supporting memory and information processing, and exploring their shared properties. President of the Santa Fe Institute since 2015, he served previously as the founding director of the Wisconsin Institutes for Discovery, the co-director of the Center for Complexity and Collective Computation, and professor of mathematical genetics, all at the University of Wisconsin, Madison.

Personal Development Mastery
What's Keeping Your Midlife Career Transition Stuck (It's Not Fear or Lack of Motivation) | #571

Personal Development Mastery

Play Episode Listen Later Jan 15, 2026 6:05 Transcription Available


Are you stuck in a career that no longer fits, but still haven't taken a real-world step to change it? If you've been quietly contemplating a career or life transition for months (or years) without moving forward, this episode will speak directly to you. It's not about confusion. It's about why high-achieving professionals often stay silently stuck, even when clarity has already arrived. Discover why your current approach might be the very thing keeping you circling in place. If you've been carrying this quietly on your own, listen now to discover a new, more sustainable way forward.
˚
VALUABLE RESOURCES:
Book a 1:1 conversation with Agi: https://personaldevelopmentmasterypodcast.com/mentor
˚
Support the show
Career transition and career clarity podcast content for midlife professionals in career transition, navigating a career change, career pivot or second career, starting a new venture or leaving a long-term career. Discover practical tools for career clarity, confident decision-making, rebuilding self belief and confidence, finding purpose and meaning in work, designing a purposeful, fulfilling next chapter, and creating meaningful work that fits who you are now. Episodes explore personal development and mindset for midlife professionals, including how to manage uncertainty and pressure, overcome fear and self-doubt, clarify your direction, plan your next steps, and turn your experience into a new role, business or vocation that feels aligned. To support the show, click here.

Deep Thoughts Radio Show
DTR S7: AGI – Artificial General Intelligence

Deep Thoughts Radio Show

Play Episode Listen Later Jan 14, 2026 87:37


The concept of an AGI, or Artificial General Intelligence, has been around as long as man has been able to conceive of talking computers. The establishment wants you to believe that no corporation or institution has ever attempted to create one, and that it will be 20 years from now until we see one in action. This episode breaks down the logic of that claim and explains in complete detail what an AGI…

thinkfuture with kalaboukis
1124 Why the Future Is Getting Harder to Predict | Steve Brown on AI, AGI, and What Comes Next

thinkfuture with kalaboukis

Play Episode Listen Later Jan 14, 2026 39:09


See more: https://thinkfuture.substack.com
Connect with Steve: https://stevebrown.ai
---
What happens when technology moves faster than our ability to forecast it? In this episode of thinkfuture, host Chris Kalaboukis speaks with Steve Brown, veteran technologist, former Intel futurist, and former member of Google DeepMind's AI research lab. With over 35 years of experience across high-tech, digital transformation, and AI, Steve offers a rare long-view perspective on where we are, and why predicting what comes next has never been harder. Steve explains how long-term forecasting used to be feasible when technological progress followed clearer trajectories. Today, breakthroughs in AI, and soon quantum computing, are compressing decades of progress into just a few years. The result is a future that's accelerating faster than our institutions, economic models, and assumptions can keep up with.
We cover:
- Why 10-year technology forecasts are now nearly impossible
- How AI is already accelerating progress in math, physics, and science
- Why the combination of AI and quantum computing could reshape material science, chemistry, and biology
- The likelihood of Artificial General Intelligence (AGI) arriving within 5–10 years
- How AGI could disrupt jobs and force a rethink of capitalism itself
- Why labor may increasingly turn into capital
- The need for new economic models, shorter workweeks, or earlier retirement
- How humans find meaning when machines handle most productive work
Steve argues we may see more progress in the next five years than in the last fifty, and that the biggest challenge won't be technological, but human. If you're interested in AI, AGI, the future of work, economic disruption, or the limits of forecasting, this conversation offers a grounded, thoughtful look at what may be coming sooner than we expect.

Personal Development Mastery
Why Midlife Professionals Feel Disconnected from Their Work and Identity, and How to Find Career Clarity, with Shelley McIntyre | #570

Personal Development Mastery

Play Episode Listen Later Jan 12, 2026 43:25 Transcription Available


Are you a midlife professional feeling a growing disconnect between your career success and your personal fulfillment? Many Generation X professionals reach midlife with impressive achievements, yet silently struggle with a deep sense of dissatisfaction, identity confusion, or the haunting feeling that time is running out to live a truly meaningful life. This episode offers a compassionate, insightful roadmap for those questioning their current career path and seeking a purpose-driven reinvention. Discover why high performers often feel disconnected at midlife, and what to do when the “mask” you've worn at work no longer fits. Learn practical ways to explore new possibilities through tiny experiments, identity redefinition, and values discovery. Hear a deeply personal story of transition from corporate leadership to purpose-led coaching, with actionable advice on navigating financial fears, identity loss, and uncertainty. Listen now to learn how to begin designing a life and career that reflects who you truly are – not just the role you've always played.
˚
KEY POINTS AND TIMESTAMPS:
00:41 - Introducing Shelley McIntyre & the midlife disconnect
03:04 - Masks, midlife reckoning and inner identity
06:53 - Options, evidence and experimenting with possibility
09:17 - Values, meaning and redefining what work must provide
11:18 - Shelley's personal transition story
16:26 - Practical first steps: finances and portfolio careers
19:33 - Challenges of transition: uncertainty, off-gassing and relationships
24:14 - Identity, masks and rediscovering what you love
30:16 - Professional identity, caring professions and essence work
˚
MEMORABLE QUOTE:
"All knowledge is rumor until it is in the bones."
˚
VALUABLE RESOURCES:
Shelley's website: https://burnthemapcoaching.com/
˚
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor
˚

Kliq This: The Kevin Nash Podcast
Beat Bron and Make him

Kliq This: The Kevin Nash Podcast

Play Episode Listen Later Jan 12, 2026 115:52


This episode of Kliq This is one of those mornings where the conversation refuses to stay light. Kevin Nash and Sean Oliver come in reacting to the world as it is right now, not as anyone wishes it were, and the tone shifts fast. What starts as media and culture talk turns into something heavier, sharper, and far more uncomfortable.
Kev doesn't hedge his opinions here. He draws lines, explains why he draws them, and challenges the way stories are framed once they hit the news cycle. There is real frustration with how power is exercised, how accountability gets blurred, and how quickly narratives are assigned before facts are even settled.
The discussion widens into government, law enforcement, and what happens when authority is paired with fear and immunity. Kevin speaks from lived experience and training, not from a headline, and that perspective drives some of the most intense moments of the show. This is not a clean debate and it is not meant to be.
Just when you think the episode has found its ceiling, it pivots again. Economics, foreign policy, tariffs, oil, and global power plays all get pulled into the same conversation. The connective tissue is control, who benefits from it, and who pays the price when decisions are made far away from everyday people.
As always, Kliq This refuses to end on a neat bow. There are laughs, unexpected detours, and moments of gallows humor that cut through the tension.
If you come to this show for honesty, unpredictability, and conversations that sound nothing like sanitized media panels, this episode delivers exactly that.
StopBox - Get firearm security redesigned and save 10% off @StopBoxUSA with code NASH10 at https://www.stopboxusa.com/NASH10 #stopboxpod
BetterWild - Right now, Betterwild is offering our listeners up to 40% off your order at betterwild.com/KLIQ
00:00 Kliq This #184: Law & Disorder
00:57 Trying to keep up to date
01:19 Sophomore slumps in TV
05:59 Federal Investigation
08:00 "I am giving my opinion on things that happened"
09:56 Minneapolis Incident
16:52 Prior Incident (June 2025)
19:02 Domestic Terrorism?
23:56 Portland Shooting
28:25 Are there enough Republicans to stand up against this?
34:14 China Graph
38:54 Venezuelan Oil
47:26 AGI?
50:38 Venezuelan casualties
01:02:28 The Champagne of beers
01:04:20 BREAK BETTER WILD
01:08:53 "maybe the content this week is suitable"
01:09:32 Portland shooting
01:11:07 "George you can type this shit but you can't say it"
01:11:52 "Nash swallows the pill in the locker room"
01:14:33 capturing Maduro
01:22:20 BREAK STOPBOX
01:25:11 RAW
01:26:49 Rhea is champ
01:27:22 Gunther/AJ
01:29:58 Maxxine Dupree
01:32:00 Punk/Bron
01:44:38 TRT. What to expect
01:47:30 Any Decade
01:48:20 Dennis Rodman
01:50:53 William Regal
01:52:21 OUTRO

We Study Billionaires - The Investor’s Podcast Network
TECH012: Monthly Tech Roundup – Data Centers in Space, AI5 Chip, Tesla vs. Waymo w/ Seb Bunney (Tech Podcast)

We Study Billionaires - The Investor’s Podcast Network

Play Episode Listen Later Jan 7, 2026 70:30


Preston and Seb unpack AI's implications for safety, governance, and economics. They debate AGI risks, corporate centralization, Bitcoin's regulatory role, and Elon Musk's ventures in space and autonomous tech.
IN THIS EPISODE YOU'LL LEARN:
00:00:00 - Intro
00:04:37 - Why AI safety and autonomy are increasingly at odds
00:11:30 - How AGI could reshape governance and policy-making
00:07:40 - Preston's skepticism about AI self-preservation claims
00:15:18 - The unintended consequences of AI regulation
00:22:15 - How Bitcoin could hold corporations accountable
00:20:10 - The dangers of centralizing economic power via AI
00:34:45 - Why generalist thinking matters in a post-pandemic world
00:37:20 - The role of curiosity and deep reading in future-proofing
00:41:59 - How SpaceX is redefining launch economics with reusable rockets
00:57:41 - The hidden potential of Tesla's AI chips and compute power
Disclaimer: Slight discrepancies in the timestamps may occur due to podcast platform differences.
BOOKS AND RESOURCES
Clip 1: AI Expert: We Have 2 Years Before Everything Changes! We Need To Start Protesting! with Tristan Harris.
Clip 2: Marc Andreessen explains the future belongs to generalists in the AI era.
Clip 3: Elon Musk on the Future of SpaceX & Mars.
Official Website: Seb Bunney.
Seb's book: The Hidden Cost of Money.
Related books mentioned in the podcast.
Ad-free episodes on our Premium Feed.
NEW TO THE SHOW?
Join the exclusive TIP Mastermind Community to engage in meaningful stock investing discussions with Stig, Clay, Kyle, and the other community members.
Follow our official social media accounts: X (Twitter) | LinkedIn | Instagram | Facebook | TikTok.
Check out our Bitcoin Fundamentals Starter Packs.
Browse through all our episodes (complete with transcripts) here.
Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool.
Enjoy exclusive perks from our favorite Apps and Services.
Get smarter about valuing businesses in just a few minutes each week through our newsletter, The Intrinsic Value Newsletter.
Learn how to better start, manage, and grow your business with the best business podcasts.
SPONSORS
Support our free podcast by supporting our sponsors: HardBlock, Human Rights Foundation, Masterworks, LinkedIn Talent Solutions, Simple Mining, Plus500, Netsuite, Fundrise.
References to any third-party products, services, or advertisers do not constitute endorsements, and The Investor's Podcast Network is not responsible for any claims made by them.
Learn more about your ad choices. Visit megaphone.fm/adchoices
Support our show by becoming a premium member! https://theinvestorspodcastnetwork.supportingcast.fm

The Gist
Andy Mills: "Acceleration Is Salvation" — and Why AI Might Be the Last Invention.

The Gist

Play Episode Listen Later Jan 6, 2026 44:18


Andy Mills, creator of The Last Invention podcast, explores I.J. Good's 1965 concept of an "intelligence explosion"—and explains why "AGI" is a deceptively harmless term for a world-changing event. The central problem? Modern AI acts like a black box, often producing results that shock even its designers with no clear explanation of how they got there. Plus: A rebuttal to "spheres of influence" thinking, and why carving up the world is a bad strategy.
Produced by Corey Wara | Coordinated by Lya Yanne | Video and Social Media by Geoff Craig
Do you have questions or comments, or just want to say hello? Email us at thegist@mikepesca.com
For full Pesca content and updates, check out our website at https://www.mikepesca.com/
For ad-free content or to become a Pesca Plus subscriber, check out https://subscribe.mikepesca.com/
For Mike's daily takes on Substack, subscribe to The Gist List https://mikepesca.substack.com/
Follow us on Social Media:
YouTube https://www.youtube.com/channel/UC4_bh0wHgk2YfpKf4rg40_g
Instagram https://www.instagram.com/pescagist/
X https://x.com/pescami
TikTok https://www.tiktok.com/@pescagist
To advertise on the show, contact ad-sales@libsyn.com or visit https://advertising.libsyn.com/TheGist

The John Batchelor Show
S8 Ep270: THE BLIP AND THE FUTURE Colleague Keach Hagey, The Optimist. The viral success of ChatGPT shifted OpenAI's focus from safety to commercialization, despite early internal warnings about the existential risks of AGI. Tensions over safety and Altm

The John Batchelor Show

Play Episode Listen Later Jan 3, 2026 5:09


THE BLIP AND THE FUTURE Colleague Keach Hagey, The Optimist. The viral success of ChatGPT shifted OpenAI's focus from safety to commercialization, despite early internal warnings about the existential risks of AGI. Tensions over safety and Altman's management style led to a "blip" where the nonprofit board fired him, only for him to be quickly reinstated due to employee loyalty. Elon Musk, having lost a power struggle for control of the organization, severed ties, leaving Altman to lead the race toward AGI. NUMBER 16 FEBRUARY 1955

The John Batchelor Show
S8 Ep270: FOUNDING OPENAI Colleague Keach Hagey, The Optimist. In 2016, Sam Altman, Greg Brockman, and Ilya Sutskever founded OpenAI as a nonprofit research lab to develop safe artificial general intelligence (AGI). Backed by investors like Elon Musk and

The John Batchelor Show

Play Episode Listen Later Jan 3, 2026 10:30


FOUNDING OPENAI Colleague Keach Hagey, The Optimist. In 2016, Sam Altman, Greg Brockman, and Ilya Sutskever founded OpenAI as a nonprofit research lab to develop safe artificial general intelligence (AGI). Backed by investors like Elon Musk and Peter Thiel, the organization aimed to be a counterweight to Google's DeepMind, which was driven by profit. The team relied on massive computing power provided by GPUs—originally designed for video games—to train neural networks, recruiting top talent like Sutskever to lead their scientific efforts. NUMBER 13 1955

The John Batchelor Show
S8 Ep271: SHOW 12-2-2026 THE SHOW BEGINS WITH DOUBTS ABOUT AI -- a useful invention that can match the excitement of the first decades of Photography. November 1955 NADAR'S BALLOON AND THE BIRTH OF PHOTOGRAPHY Colleague Anika Burgess, Flashes of Brilli

The John Batchelor Show

Play Episode Listen Later Jan 3, 2026 6:22


SHOW 12-2-2026 THE SHOW BEGINS WITH DOUBTS ABOUT AI -- a useful invention that can match the excitement of the first decades of Photography. November 1955
NADAR'S BALLOON AND THE BIRTH OF PHOTOGRAPHY Colleague Anika Burgess, Flashes of Brilliance. In 1863, the photographer Nadar undertook a perilous ascent in a giant balloon to fund experiments for heavier-than-air flight, illustrating the adventurous spirit required of early photographers. This era began with Daguerre's 1839 introduction of the daguerreotype, a process involving highly dangerous chemicals like mercury and iodine to create unique, mirror-like images on copper plates. Pioneers risked their lives using explosive materials to capture reality with unprecedented clarity and permanence. NUMBER 1
PHOTOGRAPHING THE MOON AND SEA Colleague Anika Burgess, Flashes of Brilliance. Early photography expanded scientific understanding, allowing humanity to visualize the inaccessible. James Nasmyth produced realistic images of the moon by photographing plaster models based on telescope observations, aiming to prove its volcanic nature. Simultaneously, Louis Boutan spent a decade perfecting underwater photography, capturing divers in hard-hat helmets. These efforts demonstrated that photography could be a tool for scientific analysis and discovery, revealing details of the natural world previously hidden from the human eye. NUMBER 2
SOCIAL JUSTICE AND NATURE CONSERVATION Colleague Anika Burgess, Flashes of Brilliance. Photography became a powerful agent for social and environmental change. Jacob Riis utilized dangerous flash powder to document the squalid conditions of Manhattan tenements, exposing poverty to the public in How the Other Half Lives. While his methods raised consent issues, they illuminated grim realities. Conversely, Carleton Watkins hauled massive equipment into the wilderness to photograph Yosemite; his majestic images influenced legislation signed by Lincoln to protect the land, proving photography's political impact. NUMBER 3
X-RAYS, SURVEILLANCE, AND MOTION Colleague Anika Burgess, Flashes of Brilliance. The discovery of X-rays in 1895 sparked a "new photography" craze, though the radiation caused severe injuries to early practitioners and subjects. Photography also entered the realm of surveillance; British authorities used hidden cameras to photograph suffragettes, while doctors documented asylum patients without consent. Finally, Eadweard Muybridge's experiments captured horses in motion, settling debates about locomotion and laying the technical groundwork for the future development of motion pictures. NUMBER 4
THE AWAKENING OF CHINA'S ECONOMY Colleague Anne Stevenson-Yang, Wild Ride. Returning to China in 1994, the author witnessed a transformation from the destitute, Maoist uniformity of 1985 to a budding export economy. In the earlier era, workers slept on desks and lacked basic goods, but Deng Xiaoping's realization that the state needed hard currency prompted reforms. Deng established Special Economic Zones like Shenzhen to generate foreign capital while attempting to isolate the population from foreign influence, marking the start of China's export boom. NUMBER 5
RED CAPITALISTS AND SMUGGLERS Colleague Anne Stevenson-Yang, Wild Ride. Following the 1989 Tiananmen crackdown, China reopened to investment in 1992, giving rise to "red capitalists"—often the children of party officials who traded political access for equity. As the central government lost control over local corruption and smuggling rings, it launched "Golden Projects" to digitize and centralize authority over customs and taxes. To avert a banking collapse in 1998, the state created asset management companies to absorb bad loans, effectively rolling over massive debt. NUMBER 6
GHOST CITIES AND THE STIMULUS TRAP Colleague Anne Stevenson-Yang, Wild Ride. China's growth model shifted toward massive infrastructure spending, resulting in "ghost cities" and replica Western towns built to inflate GDP rather than house people. This "Potemkin culture" peaked during the 2008 Olympics, where facades were painted to impress foreigners. To counter the global financial crisis, Beijing flooded the economy with loans, fueling a real estate bubble that consumed more cement in three years than the US did in a century, creating unsustainable debt. NUMBER 7
STAGNATION UNDER SURVEILLANCE Colleague Anne Stevenson-Yang, Wild Ride. The severe lockdowns of the COVID-19 pandemic shattered consumer confidence, leaving citizens insecure and unwilling to spend, which stalled economic recovery. Local governments, cut off from credit and burdened by debt, struggle to provide basic services. Faced with economic stagnation, Xi Jinping has rejected market liberalization in favor of increased surveillance and control, prioritizing regime security over resolving the structural debt crisis or restoring the dynamism of previous decades. NUMBER 8
FAMINE AND FLIGHT TO FREEDOM Colleague Mark Clifford, The Troublemaker. Jimmy Lai was born into a wealthy family that lost everything to the Communist revolution, forcing his father to flee to Hong Kong while his mother endured labor camps. Left behind, Lai survived as a child laborer during a devastating famine where he was perpetually hungry. A chance encounter with a traveler who gave him a chocolate bar inspired him to escape to Hong Kong, the "land of chocolate," stowing away on a boat at age twelve. NUMBER 9
THE FACTORY GUY Colleague Mark Clifford, The Troublemaker. By 1975, Jimmy Lai had risen from a child laborer to a factory owner, purchasing a bankrupt garment facility using stock market profits. Despite being a primary school dropout who learned English from a dictionary, Lai succeeded through relentless work and charm. He capitalized on the boom in American retail sourcing, winning orders from Kmart by producing samples overnight and eventually building Comitex into a leading sweater manufacturer, embodying the Hong Kong dream. NUMBER 10
CONSCIENCE AND CONVERSION Colleague Mark Clifford, The Troublemaker. The 1989 Tiananmen Square massacre radicalized Lai, who transitioned from textiles to media, founding Next magazine and Apple Daily to champion democracy. Realizing the brutality of the Chinese Communist Party, he used his wealth to support the student movement and expose regime corruption. As the 1997 handover approached, Lai converted to Catholicism, influenced by his wife and pro-democracy peers, seeking spiritual protection and a moral anchor against the coming political storm. NUMBER 11
PRISON AND LAWFARE Colleague Mark Clifford, The Troublemaker. Following the 2020 National Security Law, authorities raided Apple Daily, froze its assets, and arrested Lai, forcing the newspaper to close. Despite having the means to flee, Lai chose to stay and face imprisonment as a testament to his principles. Now held in solitary confinement, he is subjected to "lawfare"—sham legal proceedings designed to silence him—while he spends his time sketching religious images, remaining a symbol of resistance against Beijing's tyranny. NUMBER 12
FOUNDING OPENAI Colleague Keach Hagey, The Optimist. In 2016, Sam Altman, Greg Brockman, and Ilya Sutskever founded OpenAI as a nonprofit research lab to develop safe artificial general intelligence (AGI). Backed by investors like Elon Musk and Peter Thiel, the organization aimed to be a counterweight to Google's DeepMind, which was driven by profit. The team relied on massive computing power provided by GPUs—originally designed for video games—to train neural networks, recruiting top talent like Sutskever to lead their scientific efforts. NUMBER 13
THE ROOTS OF AMBITION Colleague Keach Hagey, The Optimist. Sam Altman grew up in St. Louis, the son of an idealistic developer and a driven dermatologist mother who instilled ambition and resilience in her children. Altman attended the progressive John Burroughs School, where his intellect and charisma flourished, allowing him to connect with people on any topic. Though he was a tech enthusiast, his ability to charm others defined him early on, foreshadowing his future as a master persuader in Silicon Valley. NUMBER 14
SILICON VALLEY KINGMAKER Colleague Keach Hagey, The Optimist. At Stanford, Altman co-founded Loopt, a location-sharing app that won him a meeting with Steve Jobs and a spot in the App Store launch. While Loopt was not a commercial success, the experience taught Altman that his true talent lay in investing and spotting future trends rather than coding. He eventually succeeded Paul Graham as president of Y Combinator, becoming a powerful figure in Silicon Valley who could convince skeptics like Peter Thiel to back his visions. NUMBER 15
THE BLIP AND THE FUTURE Colleague Keach Hagey, The Optimist. The viral success of ChatGPT shifted OpenAI's focus from safety to commercialization, despite early internal warnings about the existential risks of AGI. Tensions over safety and Altman's management style led to a "blip" where the nonprofit board fired him, only for him to be quickly reinstated due to employee loyalty. Elon Musk, having lost a power struggle for control of the organization, severed ties, leaving Altman to lead the race toward AGI. NUMBER 16