Podcasts about AGI

  • 1,873 podcasts
  • 6,164 episodes
  • 41m avg duration
  • 4 daily new episodes
  • Latest: Jan 27, 2026

POPULARITY

[Popularity trend chart, 2019–2026]


Best podcasts about AGI

Latest podcast episodes about AGI

80,000 Hours Podcast with Rob Wiblin
Why 'Aligned AI' Could Still Kill Democracy | David Duvenaud, ex-Anthropic team lead

Jan 27, 2026 · 151:48


Democracy might be a brief historical blip. That's the unsettling thesis of a recent paper, which argues that AI able to do all the work a human can do inevitably leads to the “gradual disempowerment” of humanity. For most of history, ordinary people had almost no control over their governments. Liberal democracy emerged only recently, and probably not coincidentally around the Industrial Revolution.

Today's guest, David Duvenaud, used to lead the alignment evals team at Anthropic, is a professor of computer science at the University of Toronto, and recently co-authored “Gradual Disempowerment.”

Links to learn more, video, and full transcript: https://80k.info/dd

He argues democracy wasn't the result of moral enlightenment — it was competitive pressure. Nations that educated their citizens and gave them political power built better armies and more productive economies. But what happens when AI can do all the producing — and all the fighting?

“The reason that states have been treating us so well in the West, at least for the last 200 or 300 years, is because they've needed us,” David explains. “Life can only get so bad when you're needed. That's the key thing that's going to change.”

In David's telling, once AI can do everything humans can do but cheaper, citizens become a national liability rather than an asset. With no way to make an economic contribution, their only lever becomes activism — demanding a larger share of redistribution from AI production. Faced with millions of unemployed citizens turned full-time activists, democratic governments trying to retain some “legacy” human rights may find they're at a disadvantage compared to governments that strategically restrict civil liberties.

But democracy is just one front. The paper argues humans will lose control through economic obsolescence, political marginalisation, and the effects on a culture that's increasingly shaped by machine-to-machine communication — even if every AI does exactly what it's told.

This episode was recorded on August 21, 2025.

Chapters:
Cold open (00:00:00)
Who's David Duvenaud? (00:00:50)
Alignment isn't enough: we still lose control (00:01:30)
Smart AI advice can still lead to terrible outcomes (00:14:14)
How gradual disempowerment would occur (00:19:02)
Economic disempowerment: Humans become "meddlesome parasites" (00:22:05)
Humans become a "criminally decadent" waste of energy (00:29:29)
Is humans losing control actually bad, ethically? (00:40:36)
Political disempowerment: Governments stop needing people (00:57:26)
Can human culture survive in an AI-dominated world? (01:10:23)
Will the future be determined by competitive forces? (01:26:51)
Can we find a single good post-AGI equilibrium for humans? (01:34:29)
Do we know anything useful to do about this? (01:44:43)
How important is this problem compared to other AGI issues? (01:56:03)
Improving global coordination may be our best bet (02:04:56)
The 'Gradual Disempowerment Index' (02:07:26)
The government will fight to write AI constitutions (02:10:33)
"The intelligence curse" and Workshop Labs (02:16:58)
Mapping out disempowerment in a world of aligned AGIs (02:22:48)
What do David's CompSci colleagues think of all this? (02:29:19)

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Camera operator: Jake Morris
Coordination, transcriptions, and web: Katy Moore

StartUp Health NOW Podcast
Live from Apollo House: Building the Modern Health Stack in the Age of Superintelligence

Jan 27, 2026 · 11:19


What does it really take to achieve Health Moonshots in the Age of Superintelligence? Recorded live at StartUp Health’s Apollo House during JPM Healthcare Week, this panel brings together leaders operating at the intersection of healthcare delivery, diagnostics, cloud infrastructure, and AI. Moderated by Angela Shippy, MD, of Amazon Web Services, the conversation explores how AI is moving from point solutions to foundational infrastructure across the modern health stack. Together, the panel examines why clean, connected data is essential, how agentic workflows can reduce burnout and improve clinician and patient experience, and what it will take to move healthcare from transactional to truly person-centered care. The discussion also tackles trust, governance, and why collaboration across startups, health systems, and big tech is critical to delivering real-world impact. This is a grounded, forward-looking conversation about how purpose-driven leadership can turn exponential technology into practical outcomes that matter.

Featured guests:
  • Angela Shippy, MD: Senior Physician Executive and Clinical Innovation Lead, Global Healthcare and Nonprofit, Amazon Web Services (AWS)
  • Brian Caveney, MD, MPH: Chief Medical and Scientific Officer, Labcorp
  • Rasu Shrestha, MD: EVP, Chief Innovation and Commercialization Officer, Advocate Health
  • Chelsea Sumner, PharmD: Translational Health and AI Strategy Leader, NVIDIA
  • Mark Andrews: Senior Principal, AGI, Product Leader, Amazon

Do you want to participate in live conversations with industry luminaries? When you join the StartUp Health Network – a new private community for investors, buyers, and industry leaders to connect year-round with top health entrepreneurs – you are invited to a full calendar of interactive Fireside Chats with the most influential leaders shaping health innovation. Come with questions, learn what is working right now, and connect with industry icons. » Learn more and join today.

Want more content like this? Sign up for StartUp Health Insider™ to get funding insights, news, and special updates delivered to your inbox.

Personal Development Mastery
Why So Many Career Transitions Fail, and How to Avoid the #1 Mistake Professionals Make, with Rachel Spekman | #574

Jan 26, 2026 · 35:26 · Transcription available


Stuck in a “successful” career that looks great on paper but feels soul-sucking? It might be time to reverse-engineer work you actually love. If you're a high achiever who's quietly miserable, this episode shows how to move from fear and obligation to clarity and momentum, without blowing up your finances or identity.

Learn practical exercises to pinpoint fit fast: design your perfect workday, run a time vs. enjoyment audit, and write your “internal résumé” to surface real strengths. Replace anxiety with a plan: shift from 20% happy to 80% through calculated, incremental moves (not impulsive leaps), guided by the Three C's: community, contribution, and challenge. Navigate the messy middle: handle reputational noise, manage ego with a beginner's mindset, and translate transferable skills so opportunities find you.

Hit play to learn the exact steps and scripts to start feeling more fulfilled at work, starting today.

MEMORABLE QUOTE: "It's gonna be okay. You got this!"

VALUABLE RESOURCES:
Rachel's website: https://madeformorecoach.com/
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor

The AI Breakdown: Daily Artificial Intelligence News and Discussions

As coding agents and vibe-coding tools push software creation into a fundamentally new phase, the real question shifts from what AI can do to what skills actually matter. This episode unpacks the emerging divide between two critical roles in the code AGI era: the Agent Manager, who knows how to direct and scale AI agents effectively, and the Enterprise Operator, who knows what problems are worth solving and why. With execution becoming cheap and abundant, skills like systems thinking, async orchestration, domain expertise, problem recognition, and workflow redesign are becoming the true sources of leverage.

Link: https://x.com/natolambert/status/2014023020302704698

Brought to you by:
KPMG: Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. https://www.kpmg.us/AIpodcasts
Zencoder: From vibe coding to AI-first engineering. http://zencoder.ai/zenflow
Optimizely Opal: The agent orchestration platform built for marketers. https://www.optimizely.com/theaidailybrief
AssemblyAI: The best way to build Voice AI apps. https://www.assemblyai.com/brief
Section: Build an AI workforce at scale. https://www.sectionai.com/
LandfallIP: AI to navigate the patent process. https://landfallip.com/
Robots & Pencils: Cloud-native AI solutions that power results. https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent: Go to https://besuper.ai/ to request your company's agent readiness score.

The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614

Interested in sponsoring the show? sponsors@aidailybrief.ai

Squawk Pod
Davos 2026: Google DeepMind CEO Demis Hassabis 1/24/26

Jan 24, 2026 · 15:08


AI is front and center in Davos this year, as world leaders and tech executives debate how quickly the technology is reshaping the economy and workforce. Demis Hassabis, co-founder and CEO of Google DeepMind, sits down with CNBC's Andrew Ross Sorkin at the World Economic Forum. The two discuss Gemini's position in the AI race, the evolution of artificial general intelligence (AGI), and what it all means for jobs.

In this episode:
Demis Hassabis, @demishassabis
Andrew Ross Sorkin, @andrewrsorkin
Cameron Costa, @CameronCostaNY

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Razib Khan's Unsupervised Learning
Aneil Mallavarapu: why machine intelligence will never be conscious

Jan 23, 2026 · 74:10


Today Razib talks to Aneil Mallavarapu, a scientist and technology leader based in Austin, Texas, whose career bridges the fields of biochemistry, systems biology, and software engineering. He earned his doctorate in Biochemistry and Cell Biology from the University of California, and has held academic positions at Harvard Medical School, where he contributed to the Department of Systems Biology and developed the "Little b" programming language. Mallavarapu has transitioned from academic research into the tech and venture capital sectors, co-founding ventures such as Precise.ly and DeepDialog, and currently serving as a Managing Partner at Humain Ventures. He remains active in the scientific community through local initiatives like the Austin Science Network. Most of the conversation centers around Mallavarapu's arguments outlined in his Substack, The Case Against Conscious AI - Why AI consciousness is inconsistent with physics. The core of his argument rests on the "Simultaneity Problem" and the "Hard Problem of Physics," which involve non-locality and the memorylessness of artificial intelligence phenomena. Though Mallavarapu believes that artificial intelligence holds great promise, and perhaps even "artificial general intelligence" (AGI) is feasible, he argues that this is a distinct issue from consciousness, which is a property of human minds. Razib also brings up the inverse case: could it be that many organisms that are not particularly intelligent also have consciousness? What does that imply for the ethics of practices like eating meat?

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Captaining IMO Gold, Deep Think, On-Policy RL, Feeling the AGI in Singapore — Yi Tay 2

Jan 23, 2026 · 92:04


From shipping Gemini Deep Think and IMO Gold to launching the Reasoning and AGI team in Singapore, Yi Tay has spent the last 18 months living through the full arc of Google DeepMind's pivot from architecture research to RL-driven reasoning — watching his team go from a dozen researchers to 300+, training models that solve International Math Olympiad problems in a live competition, building the infrastructure to scale deep thinking across every domain, and driving Gemini to the top of the leaderboards across every category. Yi returns to dig into the inside story of the IMO effort and more!

We discuss:
  • Yi's path: Brain → Reka → Google DeepMind → Reasoning and AGI team Singapore, leading model training for Gemini Deep Think and IMO Gold
  • The IMO Gold story: four co-captains (Yi in Singapore, Jonathan in London, Jordan in Mountain View, and Tong leading the overall effort), training the checkpoint in ~1 week, a live competition in Australia with professors punching in problems as they came out, and the tension of not knowing if they'd hit Gold until the human scores came in (because the Gold threshold is a percentile, not a fixed number)
  • Why they threw away AlphaProof: "If one model can't do it, can we get to AGI?" The decision to abandon symbolic systems and bet on end-to-end Gemini with RL was bold and non-consensus
  • On-policy vs. off-policy RL: off-policy is imitation learning (copying someone else's trajectory); on-policy is the model generating its own outputs, getting rewarded, and training on its own experience — "humans learn by making mistakes, not by copying"
  • Why self-consistency and parallel thinking are fundamental: sampling multiple times, majority voting, LM judges, and internal verification are all forms of self-consistency that unlock reasoning beyond single-shot inference
  • The data efficiency frontier: humans learn from 8 orders of magnitude less data than models, so where's the bug? Is it the architecture, the learning algorithm, backprop, off-policyness, or something else?
  • Three schools of thought on world models: (1) Genie/spatial intelligence (video-based world models), (2) Yann LeCun's JEPA plus FAIR's code world models (modeling internal execution state), (3) the amorphous "resolution of possible worlds" paradigm (curve-fitting to find the world model that best explains the data)
  • Why AI coding crossed the threshold: Yi now runs a job, gets a bug, pastes it into Gemini, and relaunches without even reading the fix — "the model is better than me at this"
  • The Pokémon benchmark: can models complete the Pokédex by searching the web, synthesizing guides, and applying knowledge in a visual game state? "Efficient search of novel idea space is interesting, but we're not even at the point where models can consistently apply knowledge they look up"
  • DSI and generative retrieval: re-imagining search as predicting document identifiers with semantic tokens, now deployed at YouTube (semantic IDs for RecSys) and Spotify
  • Why RecSys and IR feel like a different universe: "modeling dynamics are strange, like gravity is different — you hit the shuttlecock and hear glass shatter; cause and effect are too far apart"
  • The closed-lab advantage is increasing: the gap between frontier labs and open source is growing because ideas compound over time, and researchers keep finding new tricks that play well with everything built before
  • Why ideas still matter: "the last five years weren't just blind scaling — transformers, pre-training, RL, self-consistency all had to play well together to get us here"
  • Gemini Singapore: hiring RL and reasoning researchers, looking for a track record in RL or exceptional achievement in coding competitions, and building a small, talent-dense team close to the frontier

Yi Tay
Google DeepMind: https://deepmind.google
X: https://x.com/YiTayML

Chapters:
00:00:00 Introduction: Returning to Google DeepMind and the Singapore AGI Team
00:04:52 The Philosophy of On-Policy RL: Learning from Your Own Mistakes
00:12:00 IMO Gold Medal: The Journey from AlphaProof to End-to-End Gemini
00:21:33 Training IMO Cat: Four Captains Across Three Time Zones
00:26:19 Pokemon and Long-Horizon Reasoning: Beyond Academic Benchmarks
00:32:59 Reasoning, Chain of Thought, and Latent Thinking
00:36:29 AI Coding Assistants: From Lazy to Actually Useful
00:44:46 Is Attention All You Need? Architecture, Learning, and the Local Minima
00:55:04 Data Efficiency and World Models: The Next Frontier
01:08:12 DSI and Generative Retrieval: Reimagining Search with Semantic IDs
01:17:59 Building GDM Singapore: Geography, Talent, and the Symposium
01:24:18 Hiring Philosophy: High Stats, Research Taste, and Student Budgets
01:28:49 Health, HRV, and Research Performance: The 23kg Journey
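Of the techniques discussed in this episode, self-consistency is concrete enough to sketch in a few lines: sample the model several times and majority-vote the final answer. A minimal illustration under stated assumptions — `sample_fn` and the toy `noisy_model` below are hypothetical stand-ins, not anything from the episode; in practice `sample_fn` would wrap a temperature-sampled LLM call:

```python
from collections import Counter
import random

def self_consistency(sample_fn, n_samples=16):
    """Sample n_samples answers and return the majority-vote answer.

    sample_fn is a stand-in for one stochastic model call that returns a
    final-answer string; the majority vote is one simple form of the
    self-consistency idea (others use LM judges or internal verifiers).
    """
    answers = [sample_fn() for _ in range(n_samples)]
    winner, count = Counter(answers).most_common(1)[0]
    return winner, count / n_samples  # answer plus empirical agreement rate

# Toy usage: a noisy "model" that gives the right answer ~70% of the time.
random.seed(0)
noisy_model = lambda: "42" if random.random() < 0.7 else str(random.randint(0, 9))
answer, agreement = self_consistency(noisy_model)
```

The point of the sketch is that aggregation turns a 70%-reliable sampler into a far more reliable system, which is why majority voting and its verifier-based cousins unlock reasoning beyond single-shot inference.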

American Institute of CPAs - Personal Financial Planning (PFP)
Bob Keebler on The Renaissance of Income Tax Planning

Jan 23, 2026 · 18:08


Non-grantor trusts are stepping into the spotlight, not for estate tax, but for income tax planning. In this episode, Cary Sinnett sits down with tax expert Bob Keebler to explore how the One Big Beautiful Bill Act (H.R. 1) reshapes the planning landscape. You'll hear how you can use trusts to reclaim lost SALT deductions, stack §199A benefits, shift income across generations, and even layer in QSBS exemptions. If your clients are hitting phaseouts or facing high state taxes, this episode delivers advanced strategies to optimize their tax position now and into the future.

Non-grantor trusts: Keebler explains how trust structures can sidestep phaseouts and help clients reclaim deductions previously lost due to high AGI.

The "Tax Trifecta Trust" explained: Learn how to stack SALT deductions, layer multiple §199A deductions, and shift income strategically using non-grantor trust planning.

Five strategies you can use today:
  • Income shifting to lower-bracket heirs
  • Stacking SALT deductions across multiple trusts
  • Boosting §199A deductions with trust-level taxpayers
  • Expanding QSBS exemptions via strategic trust ownership
  • Reducing or deferring state income tax through out-of-state trust situs

Real-world implementation advice: Bob outlines guardrails around IRC §643(f) to avoid having multiple trusts collapsed into one. Hear how to structure trusts legally and practically for high-impact planning, and how to identify ideal client profiles for this approach.

What CPA financial planners need to watch for: Bob discusses state-specific issues, kiddie tax complications, trust drafting must-haves, and how CPAs can lead the planning process with confidence.

AICPA resources:
  • Video: Decoding Trusts and Wills: Provisions for PFP Practitioners
  • Video: Year-End Planning Through the Lens of H.R. 1
  • Resource: Charitable planning post-OBBBA rules

This episode is brought to you by the AICPA's Personal Financial Planning Section, the premier provider of information, tools, advocacy, and guidance for professionals who specialize in providing tax, estate, retirement, risk management, and investment planning advice. Also, by the CPA/PFS credential program, which allows CPAs to demonstrate competence and confidence in providing these services to their clients. Visit us online to join our community, gain access to valuable member-only benefits, or learn about our PFP certificate program. Subscribe to the PFP Podcast channel at Libsyn to find all the latest episodes, or search "AICPA Personal Financial Planning" on your favorite podcast app.

The Jim Rutt Show
EP 330 Worldviews: Ben Goertzel

Jan 22, 2026


Jim talks with Ben Goertzel about his worldview. They discuss Ben's morning experience of consciousness crystallizing from ambient awareness, his identification as a panpsychist, the concept of pattern being more fundamental than stuff, Charles Peirce's ontology of first/second/third, the idea of euryphysics as a broader notion of physics beyond metaphysics, parapsychology and psi phenomena including remote viewing and Project Stargate, reincarnation-like phenomena and cases from India, experimental design in parapsychology research, the legitimation of both AGI and psi research, the consciousness explosion occurring alongside AI/ASI development, Jeffrey Martin's work on fundamental well-being and persistent nonsymbolic experience, the immense design space of possible minds, human cognitive limitations like seven-plus-or-minus-two short-term memory, the single-threaded nature of human consciousness versus potentially multi-threaded ASI, scenarios for beneficial superintelligence and options for humans to remain in human form or upload, the question of how long human existence would remain interesting post-singularity, psychedelics as tools for accessing different states of consciousness and insights into mind construction, the absence of shamanic institutions in modern culture, experiences with DMT and heroic doses, holding multiple contradictory perspectives simultaneously, Walt Whitman's notion of containing multitudes, Ben's intuitive sense that consciousness and the basic ground of being are fundamentally joyful and compassionate, arguments for why superintelligence will likely be good based on the efficiency of mutually trusting agents, and much more.

Episode Transcript
The Consciousness Explosion, by Ben Goertzel
JRS EP 217 Ben Goertzel on a New Framework for AGI
JRS EP 211 Ben Goertzel on Generative AI vs. AGI
JRS Currents 072: Ben Goertzel on Viable Paths to True AGI
Evidence for Psi: Thirteen Empirical Research Reports, ed. Damien Broderick & Ben Goertzel

Dr. Ben Goertzel is a cross-disciplinary scientist, entrepreneur, and author. Born in Brazil to American parents, in 2020, after a long stretch living in Hong Kong, he relocated his primary base of operations to a rural island near Seattle. He leads the SingularityNET Foundation, the OpenCog Foundation, and the AGI Society, which runs the annual Artificial General Intelligence conference. Dr. Goertzel's research work encompasses multiple areas including artificial general intelligence, natural language processing, cognitive science, machine learning, computational finance, bioinformatics, virtual worlds, gaming, parapsychology, theoretical physics, and more.

AMA Part 2: Is Fine-Tuning Dead? How Am I Preparing for AGI? Are We Headed for UBI? & More!

Jan 22, 2026 · 143:38


In this AMA-style episode, Nathan takes on listener questions about whether fine-tuning is really on the way out, what emergent misalignment and weird generalization results tell us, and how to think about continual learning. He talks candidly about how he's personally preparing for AGI — from career choices and investing to what resilience steps he has and hasn't taken. The discussion also covers timelines for job disruption, whether UBI becomes inevitable, how to talk to kids and “normal people” about AI, and which safety approaches are most neglected.

Sponsors:
Blitzy: Blitzy is the autonomous code generation platform that ingests millions of lines of code to accelerate enterprise software development by up to 5x with premium, spec-driven output. Schedule a strategy session with their AI solutions consultants at https://blitzy.com
MongoDB: Tired of database limitations and architectures that break when you scale? MongoDB is the database built for developers, by developers — ACID compliant, enterprise-ready, and fluent in AI — so you can start building faster at https://mongodb.com/build
Serval: Serval uses AI-powered automations to cut IT help desk tickets by more than 50%, freeing your team from repetitive tasks like password resets and onboarding. Book your free pilot and guarantee 50% help desk automation by week four at https://serval.com/cognitive
Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai

Chapters:
(00:00) Ernie cancer update
(04:57) Is fine-tuning dead (Part 1)
(12:31) Sponsors: Blitzy | MongoDB
(14:57) Is fine-tuning dead (Part 2) (Part 1)
(26:56) Sponsors: Serval | Tasklet
(29:15) Is fine-tuning dead (Part 2) (Part 2)
(29:16) Continual learning cautions
(34:59) Talking to normal people
(39:30) Personal risk preparation
(49:59) Investing around AI safety
(01:00:39) Early childhood AI literacy
(01:08:55) Work disruption timelines
(01:27:58) Nonprofits, need, and UBI
(01:34:53) Benchmarks, AGI, and embodiment
(01:47:30) AI tooling and platforms
(01:57:01) Discourse norms and shaming
(02:05:50) Location and safety funding
(02:15:17) Turpentine deal and independence
(02:24:19) Outro

Produced by: https://aipodcast.ing

Waking Up With AI
Confessions of a Large Language Model

Jan 22, 2026 · 22:41


In this episode, Katherine Forrest and Scott Caravello unpack OpenAI researchers' proposed “confessions” framework designed to monitor for and detect dishonest outputs. They break down the researchers' proof-of-concept results and the framework's resilience to reward hacking, along with its limits in connection with hallucinations. Then they turn to Google DeepMind's “Distributional AGI Safety,” exploring a hypothetical path to AGI via a patchwork of agents and routing infrastructure, as well as the authors' proposed four-layer safety stack.

Learn more about Paul, Weiss's Artificial Intelligence practice: https://www.paulweiss.com/industries/artificial-intelligence

Big Technology Podcast
Google DeepMind CEO Demis Hassabis: AI's Next Breakthroughs, AGI Timeline, Google's AI Glasses Bet

Jan 21, 2026 · 34:07


Demis Hassabis is the CEO of Google DeepMind. Hassabis joins Big Technology Podcast to discuss where AI progress really stands today, where the next breakthroughs might come from, and whether we've hit AGI already. Tune in for a deep discussion covering the latest in AI research, from continual learning to world models. We also dig into product, discussing Google's big bet on AI glasses, its advertising plans, and AI coding. We also cover what AI means for knowledge work and scientific discovery. Hit play for a wide-ranging, high-signal conversation about where AI is headed next from one of the leaders driving it forward.

Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b

Learn more about your ad choices. Visit megaphone.fm/adchoices

TrueLife
Don Quixote - Windmills & Algorithms

Jan 21, 2026 · 9:36


One on One Video Call w/ George: https://tidycal.com/georgepmonty/60-minute-meeting
Support the show: https://www.paypal.me/Truelifepodcast?locale.x=en_US

In a world besieged by the relentless march of AI, where algorithms whisper promises of utopia or apocalypse, one timeless tale rises from the dust of centuries to mirror our chaotic present: Don Quixote. Join host [Your Name] in the premiere episode of [Podcast Name], “The Knight of the Sorrowful Algorithm,” as we embark on a quixotic quest through Cervantes' masterpiece—a story of a man whose brain “dried up” from devouring too many fantastical romances, only to armor up and charge into a reality that mocked his dreams.

But this isn't just dusty literature. It's us. Right now. Scrolling through endless feeds of AI doomsayers and saviors: “Your job is obsolete!” “Embrace the disruption!” “AGI will save—or end—humanity!” We're all Don Quixote, lost in a whirlwind of narratives that blur truth and fiction, leaving us paralyzed by questions: Is adaptation surrender? Is optimism naivety? And who are the true mad knights of our age—the artists defying generative machines, the workers reclaiming their humanity, or those daring to pursue passion in a profit-obsessed empire?

Delve into the heart of the madness: why Don Quixote chose delusion over despair, and why “sanity”—accepting a world ruled by efficiency, oligarchs, and obsolescence—might be the deadliest illusion of all. In a finale that shatters illusions, discover how renouncing the quest led to his demise… and what that means for us tilting at digital windmills.

Epic, introspective, and urgently relevant, this episode challenges you to ask: in the AI era, is going a little mad the only way to stay truly alive? Tune in, saddle up your Rocinante, and ride into the fray. Next up: “Sancho Panza and the Gig Economy”—the everyman's gamble on a madman's promise.

a16z
From Code Search to AI Agents: Inside Sourcegraph's Transformation with CTO Beyang Liu

Jan 20, 2026 · 46:35


Sourcegraph's CTO just revealed why 90% of his code now comes from agents—and why the Chinese models powering America's AI future should terrify Washington. While Silicon Valley obsesses over AGI apocalypse scenarios, Beyang Liu's team discovered something darker: every competitive open-source coding model they tested traces back to Chinese labs, and US companies have gone silent after releasing Llama 3. The regulatory fear that killed American open-source development isn't hypothetical anymore—it's already handed the infrastructure layer of the AI revolution to Beijing, one fine-tuned model at a time.

Resources:
Follow Beyang Liu on X: https://x.com/beyang
Follow Martin Casado on X: https://x.com/martin_casado
Follow Guido Appenzeller on X: https://x.com/appenz

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

a16z
The AI Opportunity That Goes Beyond Models

a16z

Play Episode Listen Later Jan 19, 2026 70:20


The a16z AI Apps team outlines how they are thinking about the AI application cycle and why they believe it represents the largest and fastest product shift in software to date. The conversation places AI in the context of prior platform waves, from PCs to cloud to mobile, and examines where adoption is already translating into real enterprise usage and revenue. They walk through three core investment themes: existing software categories becoming AI-native, new categories where software directly replaces labor, and applications built around proprietary data and closed-loop workflows. Using portfolio examples, the discussion shows how these models play out in practice and why defensibility, workflow ownership, and data moats matter more than novelty as AI applications scale.

Resources:
Follow Alex Rampell on X: https://twitter.com/arampell
Follow Jen Kha on X: https://twitter.com/jkhamehl
Follow David Haber on X: https://twitter.com/dhaber
Follow Anish Acharya on X: https://twitter.com/illscience

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711

Not an offer or solicitation. None of the information herein should be taken as investment advice; some of the companies mentioned are portfolio companies of a16z. Please see https://a16z.com/disclosures/ for more information. A list of investments made by a16z is available at https://a16z.com/portfolio.

Personal Development Mastery
The Hidden Emotional Traps of Career Change No One Talks About, with Michelle Schafer | #572

Personal Development Mastery

Play Episode Listen Later Jan 19, 2026 36:09 Transcription Available


Ever feel like your career transition is "going nowhere"… even though you're doing all the right things?

If you're in that messy in-between of job loss, burnout, values mismatch, or a pivot that's taking longer than you expected, this episode will feel like a deep exhale. Michelle Schafer (two-time restructuring survivor turned career coach) breaks down why this phase feels so uncertain, why confidence takes a hit, and what to do when the "sprout" hasn't shown up yet… even though growth is happening under the surface.

Learn practical ways to manage the emotional rollercoaster of uncertainty (without pretending you're fine).
Discover simple, repeatable actions to rebuild confidence, especially if job searching feels harder than it used to.
Get a clear career-transition strategy: how to define your target, create a plan, and stop wasting energy on unfocused applications.

Press play now to get a grounded, step-by-step approach that helps you regain clarity and confidence, so you can move toward work that energises you and aligns with what you believe.

KEY POINTS AND TIMESTAMPS:
01:40 - Michelle's background and restructuring experiences
03:30 - The planted seed: discovering coaching
07:00 - The seed and gardening metaphor for career transition
09:48 - Navigating uncertainty during transitions
13:54 - Emotional regulation and building supportive practices
16:18 - Confidence dips and practical reflection strategies
21:10 - Strategy, clarity and planning your career direction
27:45 - Resources, final questions and concluding insights

MEMORABLE QUOTE:
"Use your network more. Look through your network, not just at it, because every connection opens the door to many more."

VALUABLE RESOURCES:
Michelle's website: https://mschafercoaching.ca/
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor

The AI Breakdown: Daily Artificial Intelligence News and Discussions
Code AGI is Functional AGI (And It's Here)

The AI Breakdown: Daily Artificial Intelligence News and Discussions

Play Episode Listen Later Jan 18, 2026 24:10


This episode argues that the most important AGI threshold has already been crossed. As coding agents learn to reason, iterate, and operate autonomously over long horizons, they unlock a form of functional general intelligence that matters for real work. Coding isn't just another domain; it's a universal lever that collapses the distance between idea and execution, reshaping how companies build, decide, and compete. The result isn't a gradual improvement, but a structural shift in how work gets done.

Readings from:
https://x.com/gradypb/status/2011491957730918510
https://x.com/danshipper/status/2011617055636705718

Brought to you by:
KPMG - Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Zencoder - From vibe coding to AI-first engineering - http://zencoder.ai/zenflow
Optimizely Opal - The agent orchestration platform built for marketers - https://www.optimizely.com/theaidailybrief
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/
Robots & Pencils - Cloud-native AI solutions that power results - https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.

The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? sponsors@aidailybrief.ai

Tech Deciphered
72 – Our Children's Future

Tech Deciphered

Play Episode Listen Later Jan 18, 2026 64:12


What is our children's future? What skills should they be developing? How should schools be adapting? What will the fully functioning citizens and workers of the future look like? A look into the landscape of the next 15 years, the future of work with human and AI interactions, the transformation of education, the safety and privacy landscapes, and a parental playbook.

Navigation:
Intro
The Landscape: 2026–2040
The Future of Work: Human + AI
The Transformation of Education
The Ethics, Safety, and Privacy Landscape
The Parental Playbook: Actionable Strategies
Conclusion

Our co-hosts:
Bertrand Schmitt, Entrepreneur in Residence at Red River West, co-founder of App Annie / Data.ai, business angel, advisor to startups and VC funds, @bschmitt
Nuno Goncalves Pedro, Investor, Managing Partner, Founder at Chamaeleon, @ngpedro

Our show: Tech DECIPHERED brings you the Entrepreneur and Investor views on Big Tech, VC and Start-up news, opinion pieces and research. We decipher their meaning, and add inside knowledge and context. Being nerds, we also discuss the latest gadgets and pop culture news.

Subscribe To Our Podcast

Introduction

Bertrand Schmitt: Welcome to Episode 72 of Tech Deciphered, about our children's future. What is our children's future? What skills should they be developing? How should school be adapting to AI? What will the functioning citizens and workers of the future look like, especially in the context of the AI revolution? Nuno, what's your take? Maybe we start with the landscape.

The Landscape: 2026–2040

Nuno Goncalves Pedro: Let's first frame it. What do people think is going to happen?
Firstly, that there’s going to be a dramatic increase in productivity, and because of that dramatic increase in productivity, there are a lot of numbers that show that there’s going to be… AI will enable some labour productivity growth of 0.1 to 0.6% through 2040, which would be a figure that would be potentially rising even more depending on use of other technologies beyond generative AI, as much as 0.5 to 3.4% points annually, which would be ridiculous in terms of productivity enhancement. To be clear, we haven’t seen it yet. But if there are those dramatic increases in productivity expected by the market, then there will be job displacement. There will be people losing their jobs. There will be people that will need to be reskilled, and there will be a big shift that is similar to what happens when there’s a significant industrial revolution, like the Industrial Revolution of the late 19th century into the 20th century. Other numbers quoted would say that 30% of US jobs could be automated by 2030, which is a silly number, 30%, and that another 60% would see tremendously being altered. A lot of their tasks would be altered for those jobs. There’s also views that this is obviously fundamentally a global phenomenon, that as much as 9% of jobs could be lost to AI by 2030. I think question mark if this is a net number or a gross number, so it might be 9% our loss, but then maybe there’re other jobs that will emerge. It’s very clear that the landscape we have ahead of us is if there are any significant increases in productivity, there will be job displacement. There will be job shifting. There will be the need for reskilling. Therefore, I think on the downside, you would say there’s going to be job losses. We’ll have to reevaluate whether people should still work in general 5 days a week or not. Will we actually work in 10, 20, 30 years? I think that’s the doomsday scenario and what happens on that side of the fence. 
I think on the positive side, there's also a discussion around there'll be new jobs that emerge. There'll be new jobs that maybe we don't understand today, new job descriptions that actually don't even exist yet that will emerge out of this brave new world of AI.

Bertrand Schmitt: Yeah. I mean, let's not forget how we get to a growing economy. The measurement of a growing economy is GDP growth, and typically you can simplify it into two elements. One is the growth of the labour force; two, the rise of the productivity of that labour force, and that's about it. Either you grow the economy by increasing the number of people, which in most of the Western world is not really happening, or you increase productivity. I think that we should not forget that growth of productivity is a backbone of growth for our economies, and that has been what has enabled the rise in prosperity across countries. I always take that as a win, personally. That growth in productivity has happened over the past decades through all the technological revolutions, from more efficient factories to oil and gas to computers, to networked computers, to the internet, to mobile, and all the improvement in science, usually on the back of technological improvement. Personally, I welcome any improvement we can get in productivity because there is at this stage simply no other choice for a growing world in terms of growing prosperity. In terms of change, we can already have a look at the past. There are so many jobs today you could not imagine would exist 30 years ago. Take the rise of the influencer, for instance: who could have imagined that 30 years ago? Take the rise of the small mom-and-pop e-commerce owner, who could have imagined that? Of course, all the rise of IT as a profession. I mean, how few of us were there 30 years ago compared to today. I think there is a lot of change that already happened. I think as a society, we need to welcome that.
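Bertrand's simplification of GDP growth as labour-force growth plus labour-productivity growth can be written as a one-line identity. A minimal sketch, with purely illustrative numbers that are not figures from the episode:

```python
# Minimal sketch of the decomposition Bertrand describes:
# real GDP growth ~ labour-force growth + labour-productivity growth.
# All inputs below are illustrative assumptions, not data from the episode.

def gdp_growth(labour_force_growth: float, productivity_growth: float) -> float:
    """Approximate annual real GDP growth as the sum of its two components."""
    return labour_force_growth + productivity_growth

# A Western economy with a flat labour force relies entirely on productivity:
baseline = gdp_growth(0.000, 0.015)   # all growth comes from productivity
# If AI added, hypothetically, one percentage point of productivity growth:
with_ai = gdp_growth(0.000, 0.025)

print(f"baseline: {baseline:.1%}, with AI boost: {with_ai:.1%}")
```

This is why the hosts treat productivity as the only remaining lever for Western economies: with labour-force growth near zero, the first term drops out and GDP growth tracks productivity alone.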
If we go back even longer, 100 years ago, 150 years ago, let's not forget, if I take a city like Paris, we used to have tens of thousands of people transporting water manually, before we had running water in every home. We used to have boats going to the North Pole or to the northern regions to bring back ice, basically shipping ice all the way to the Western world, because we didn't have fridges at the time. I think that when we look back in time at all the jobs that got displaced, I would say, Thank you. Thank you, because these were not such easy jobs. Change is coming, but change is part of the human equation, at least. The industrial revolution of the past 250 years: it's thanks to that that we have some improvement in living conditions everywhere. AI is changing stuff, but change is a constant, and we need to adapt and adjust. At least on my side, I'm glad that AI will be able to displace some jobs that were not so interesting to do in the first place in many situations. Maybe not dangerous like in the past, because we are talking about replacing white-collar jobs, but at least repetitive jobs are definitely going to be on the chopping block.

Nuno Goncalves Pedro: What happens in terms of shift? We were talking about some numbers earlier. The World Economic Forum also has some numbers that predict a gross job creation rate of 14% from 2025 to 2030 and a displacement rate of 8%, so I guess they're being optimistic: a net growth in employment. I think that optimism relates to this thesis that, for example, efficiency, in particular in production and industrial environments, et cetera, might reduce labour there while increasing the demand for labour elsewhere, because there is a naturally lower cost base. If there's more automation in production, therefore there's more disposable income for people to do other things and to focus more on their side activities.
Maybe, as I said before, not work 5 days a week, but maybe work four or three or whatever it is. What are the jobs of the future? What are the jobs that we see increasing in the future? Obviously, there're a lot of jobs that relate to the technology side, that relate obviously to AI, that's a little bit self-serving, and everything that relates to information technology, computer science, computer technology, computer engineering, et cetera. More broadly, electrical engineering and mechanical engineering might actually be more needed, because there is a broadening of all of these elements of contact with digital, with AI, over time also with robots and robotics, so those jobs will increase. There's a thesis that other jobs, a little bit more related to agriculture, education, et cetera, might not see a dramatic impact, that there will still be a need for, I guess, teachers and for people working in farms, et cetera. I think this assumes that probably the AI revolution will come much before the fundamental evolution that will come from robotics afterwards. Then there's obviously this discussion around declining roles. Anything that's fundamentally routine, like data entry, clerical roles, paralegals, for example, routine manufacturing, anything that's very repetitive in nature will be taken away. I have the personal thesis that there are jobs that are actually very blue-collar jobs, like HVAC installation, maintenance, et cetera, plumbing, that will still be done by humans for a very long time, because while they appear to be repetitive, they're actually complex, and they require manual labour that cannot easily, I think, right now be done by robots as replacements of humans. Actually, I think there're blue-collar roles that will be on the increase rather than the decrease, because obviously they are apprenticeship roles, certification roles, and that will command a premium.
Maybe we’re at the two ends. There’s an end that is very technologically driven of jobs that will need to necessarily increase, and there’s at the other end, jobs that are very menial but necessarily need to be done by humans, and therefore will also command a premium on the other end. Bertrand SchmittI think what you say make a lot of sense. If you think about AI as a stack, my guess is that for the foreseeable future, on the whole stack, and when I say stack, I mean from basic energy production because we need a lot of energy for AI, maybe to going up to all the computing infrastructure, to AI models, to AI training, to robotics. All this stack, we see an increase in expertise in workers and everything. Even if a lot of this work will benefit from AI improvement, the boom is so large that it will bring a lot of demand for anyone working on any part of the stack. Some of it is definitely blue-collar. When you have to build a data centre or energy power station, this requires a lot of blue-collar work. I would say, personally, I’m absolutely not a believer of the 3 or 4 days a week work week. I don’t believe a single second in that socialist paradise. If you want to call it that way. I think that’s not going to change. I would say today we can already see that breaking. I mean, if you take Europe, most European countries have a big issue with pension. The question is more to increase how long you are going to work because financially speaking, the equation is not there. Personally, I don’t think AI would change any of that. I agree with you in terms of some jobs from electricians to gas piping and stuff. There will still be demand and robots are not going to help soon on this job. There will be a big divergence between and all those that can be automated, done by AI and robots and becoming cheaper and cheaper and stuff that requires a lot of human work, manual work. 
I don’t know if it will become more expensive, but definitely, proportionally, in comparison, we look so expensive that you will have second thoughts about doing that investment to add this, to add that. I can see that when you have your own home, so many costs, some cost our product. You buy this new product, you add it to your home. It can be a water heater or something, built in a factory, relatively cheap. You see the installation cost, the maintenance cost. It’s many times the cost of the product itself. Nuno Goncalves PedroMaybe it’s a good time to put a caveat into our conversation. I mean, there’s a… Roy Amara was a futurist who came up with the Amara’s Law. We tend to overestimate the effect of a technology in the short run and overestimate the effect in the long run. I prefer my own law, which is, we tend to overestimate the speed at which we get to a technological revolution and underestimate its impact. I think it’s a little bit like that. I think everyone now is like, “Oh, my God, we’re going to be having the AI overlords taking over us, and AGI is going to happen pretty quickly,” and all of that. I mean, AGI will probably happen at some point. We’re not really sure when. I don’t think anyone can tell you. I mean, there’re obviously a lot of ranges going on. Back to your point, for example, on the shift of the work week and how we work. I mean, just to be very clear, we didn’t use to have 5 days a week and 2 days a weekend. If we go back to religions, there was definitely Sabbath back in the day, and there was one day off, the day of the Lord and the day of God. Then we went to 2 days of weekend. I remember going to Korea back in 2005, and I think Korea shifted officially to 5 days a week, working week and 2 days weekend for some of the larger business, et cetera, in 2004. Actually, it took another whatever years for it to be pervasive in society. This is South Korea, so this is a developed market. We might be at some point moving to 4 days a week. 
Maybe France was ahead of the game. I know Bertrand doesn't like this, the 35-hour week. Maybe we will have another shift in what defines the working week versus not, what defines what people need to do in terms of efficiency and how they work and all of that. I think it's probably just going to take longer than we think. I think there're some countries already doing it. I was reading that maybe Finland was already thinking about moving to 4 days a week. There're a couple of countries already working on it. Certainly, there're companies already doing it as well.

Bertrand Schmitt: Yeah, I don't know. I'm just looking at the financial equation of most countries. The disaster is so big in Western Europe, in the US. So much debt is out there that needs to get paid that I don't think any country today, unless there is a complete reversal of the finances, will be able to make a big change. You could argue maybe, if we are in such a situation, it might be because we went too far in benefits, in vacation, in work days versus weekends. I'm not saying we should roll back, but I feel that at this stage, the proof is in the pudding. The finances of most developed countries are broken, so I don't see a change coming up. Potentially the other way around: people having to work more, unfortunately. We will see. My point is that AI will have to be so transformational for the productivity of countries, and countries will have to go back to finding their way in terms of financial discipline, to reach a level where we can truly profit from that. I think, from my perspective, we have time to think about it in 10, 20 years. Right now, it's BS at this stage of this discussion.

Nuno Goncalves Pedro: Yeah, there's a dependency, Bertrand, which is there need to be dramatic increases in productivity that create an expansion of the economy.
Once that expansion is captured by, let's say, government, or let's say by the state, it needs to be willingly fed back into society, which is not a given. There're some governments who are going to be like, "No, you need to work for a living." Tough luck. There're no handouts, there's nothing. There are going to be other governments that will be pressured as well. I mean, even in a more socialist Europe, so to speak, there're now a lot of pressures from very far-right, even extreme positions on what people need to do for a living and how much the state should actually intervene in terms of minimum salaries, et cetera, and social security. To your point, the economies are not doing well in and of themselves. Anyway, there would need to be tremendous expansion of the economy and willingness by the state to give back to its citizens, which is also not a given.

Bertrand Schmitt: And good financial discipline as well. We need to reach all three: reaping the benefits in a tremendous way, way above trend line; good financial discipline; and then some willingness to give back. I mean, we can talk about a dream. To have a discussion so early about this, it's like starting to talk about the benefits of the aeroplane industry in 1910 or 1915, a few years after the Wright brothers' flight, and making a decision based on what the world will be 30 years from now when we reap this benefit. This is just not reasonable. This is not reasonable thinking. I remember seeing companies from OpenAI and others trying to push this narrative. It was just political agenda. It was nothing else. It was, "Let's try to make AI look so nice and great in the future, so you don't complain in the short term about what's happening." I don't think this is a good discussion to have for now. Let's be realistic.
Nuno Goncalves Pedro: Just for the sake of sharing it with our listeners, apparently there're a couple of countries that have moved towards something a bit lower than 5 days a week. Belgium, I think, has legislated the ability for you to compress your work week into 4 days, where you could do 10 hours for 4 days, so 40 hours. UAE has some policy for government workers, 4.5 days. Iceland has some stuff around 35 to 36 hours; France has had that 35-hour thing. Lithuania for parents. Then just trials, it's all over the shop: the United Kingdom, my own Portugal, of course, Germany, Brazil, and South Africa, and a bunch of other countries, so interesting. There's stuff going on.

Bertrand Schmitt: For sure. I mean, France managed to bankrupt itself playing the 35-hour work week since, what, 2000 or something. I mean, yeah, it's a choice of financial suicide, I would say.

Nuno Goncalves Pedro: Wonderful.

The Future of Work: Human + AI

Maybe moving a little bit towards the future of work and the coexistence of human and AI work: I think the more positive thesis that exists in the market, the one that leads to net employment growth and net employment creation, as we were saying, with shifting of professions, reskilling, and new professions that will emerge, is the notion that humans will need to continue working alongside machines. I'm talking about robots; I'm also talking about software. Basically, software can't just always run on its own, and therefore software serves as a layer of augmentation: humans become augmented by AI, and therefore they can be a lot more productive, and we can be a lot more productive. All of that would actually lead to a world where the efficiencies and the economic creation are incredible. We'll have an unparalleled industrial evolution in our hands through AI. That's one way of looking at it.
Certainly at Chamaeleon, that's how we think about AI and the AI layers that we're creating with Mantis, which is our in-house platform at Chamaeleon: it's augmenting us. Obviously, the human is still running the show at the end, making the toughest decisions, having the more significant impact with the entrepreneurs that we back, et cetera. AI augments us, but we run the show.

Bertrand Schmitt: I totally agree with that perspective, that first AI will bring a new approach, a human-plus-AI one. Here you really have two situations. Are you a knowledgeable user? Do you know your field well? Are you an expert? Are you an IT expert? Are you a medical doctor? Do you find your best way to optimise your work with AI? Are you knowledgeable enough to understand and challenge AI when you see weird output? You have to be knowledgeable in your field, but also knowledgeable in how to handle AI, because even experts might say, "Whatever AI says." My guess is those will be the users that benefit most from AI. Novices, I think, are in a bit of a tougher situation, because if you use AI without truly understanding it, it's like laying foundations on sand. Your stuff might crumble along the way, and you will have no clue what's happening. Hopefully, you don't put anyone in physical danger, but that's more worrisome to me. I think some people will talk about the rise of vibe coding, for instance. I've seen AI be so useful in improving coding in so many ways, but personally, I don't think vibe coding is helpful beyond doing a quick prototype or some stuff; to put down some serious foundation, I think it's near useless if you have a pure vibe-coding approach, but obviously to each their own. I think the other piece of the puzzle, it's not just to look at human plus AI. I think there will definitely be the other side as well, which is pure AI. Pure AI replacement. I think we start to see that with autonomous cars. We are close to being there.
Here we’ll be in situation of maybe there is some remote control by some humans, maybe there is local control. We are talking about a huge scale replacement of some human activities. I think in some situation, let’s talk about work farms, for instance. That’s quite a special term, but basically is to describe work that is very repetitive in nature, requires a lot of humans. Today, if you do a loan approval, if you do an insurance claim analysis, you have hundreds, thousands, millions of people who are doing this job in Europe, in the US, or remotely outsourced to other countries like India. I think some of these jobs are fully at risk to be replaced. Would it be 100% replacement? Probably not. But a 9:1, 10:1 replacement? I think it’s definitely possible because these jobs have been designed, by the way, to be repetitive, to follow some very clear set of rules, to improve the rules, to remove any doubt if you are not sure. I think some of these jobs will be transformed significantly. I think we see two sides. People will become more efficient controlling an AI, being able to do the job of two people at once. On the other side, we see people who have much less control about their life, basically, and whose job will simply disappear. Nuno Goncalves PedroTwo points I would like to make. The first point is we’re talking about a state of AI that we got here, and we mentioned this in previous episodes of Tech Deciphered, through brute force, dramatically increased data availability, a lot of compute, lower network latencies, and all of that that has led us to where we are today. But it’s brute force. The key thing here is brute force. Therefore, when AI acts really well, it acts well through brute force, through seeing a bunch of things that have happened before. For example, in the case of coding, it might still outperform many humans in coding in many different scenarios, but it might miss hedge cases. 
It might actually not be as perfect and as great as one of these developers that has been doing it for decades, who has this intuition and is a 10X developer. In some ways, I think what got us here is maybe not what's going to get us to the next level of productivity, which is the unsupervised learning piece, the actually-no-learning piece, where you go into the world and figure stuff out. That world is emerging now, but it's still not there in terms of AI algorithms and what's happening. Again, a lot of what we're seeing today is the outcome of the brute-force movement that we've had over the last decade, decade and a half. The second point I'd like to make is to your point, Bertrand: you were going really well through, okay, if you're a super experienced subject-matter expert, the way you can use AI is like, wow! Right? I mean, you are much more efficient, right? I was asked to do a presentation recently. When I do things in public, I don't like to do it as a keynote, because for keynotes I like to use my packaged stuff; there are six, seven presentations that I have prepackaged, and I can adapt around that. But if it's a totally new thing, I don't like to do it as a keynote because it requires a lot of preparation. Therefore, I'm like, I prefer to do a fireside chat or a panel or whatever. I got asked to do something, a little bit what is taking us to this topic today around what's happening to our children and all of that, and it's like, "God! I need to develop this from scratch." The honest truth is, if you have domain expertise around many areas, you can do it very quickly with the aid of different tools in AI, anything from Gemini, even with Nano Banana, to ChatGPT and other tools that are out there for you, and framing how you would do that.
But the problem then exists for people who are just at the beginning of their careers, people who have very little expertise and experience, and people who are maybe coming out of college, where their knowledge is mostly theoretical. What happens to those people? Even in computer engineering, even in computer science, even in software development, how do those people get to the next level? I think that's one of the interesting conversations to be had. What happens to the recent graduate or the recent undergrad? How do those people get the expertise they need to go to the next level? Can they just be replaced by AI agents today? What's their role in terms of the workforce, and how do they fit into that workforce?

Bertrand Schmitt
No, I mean, that's definitely the biggest question. I think that for a lot of positions, if you are really knowledgeable, good at your job, if you are that 10X developer, I don't think your job is at risk. Overall, you always have some exceptions, some companies going through tough times, but I don't think it's an issue. On the other end, for sure, recent new graduates will face more trouble learning on their own, starting their career, and getting to that 10X productivity level. But at the same time, let's also not kid ourselves. If we take software development, this is a profession that increased its number of graduates tremendously over the past 30 years. I don't think everyone basically has the talent to really make it. Now that you have AI, for sure, the bar to justify why you should be there, why you should join a company, is getting higher and higher. Being just okay won't be enough to get you a career in IT. You will need to show that you are great, or have the potential to be great. That might make things tough for some jobs. At the same time, I certainly believe there will be new opportunities that were not there before.
People will definitely have to adjust to that new reality, learn and understand what's going on and what the options are, and also try very early on to be very confident at using AI as much as they can, because for sure, companies are going to only hire workers who have shown their capacity to work well with AI.

Nuno Goncalves Pedro
My belief is that it generates new opportunities for recent undergrads, et cetera, to build their own micro-businesses or nano-businesses. To your point, maybe getting jobs because they'll be forced to move faster within their jobs, do fewer menial and repetitive activities, and be more focused on actual demanding intellectual activities immediately from the get-go, which is not a bad thing. Their acceleration into knowledge will be even faster. I don't know. It feels to me maybe there's a positivity to it. Obviously, if you've studied at a big school, et cetera, there will be some positivity coming out of that.

The Transformation of Education

Maybe this is a good segue to education. How does education change to adapt to a new world where AI is a given? It's not like I can check whether you're faking it on your homework, or whether you're using tools on a remote examination; you're going to use these tools. What happens in that case, and how does education need to shift in this brave new world of AI augmentation and AI enhancement of students?

Bertrand Schmitt
Yes, I agree with you. There will be new opportunities. I think people need to be adaptable. What used to be an absolutely perfect career choice might not be anymore. You need to learn what changes are happening in the industry, and you need to adjust to that, especially if you're a new graduate.

Nuno Goncalves Pedro
Maybe we'll talk a little bit about education, Bertrand, and how education would fundamentally shift. I think one of the things that's been really discussed is: what are the core skills that need to be developed?
What are the core skills that will be important in the future? I think critical thinking is probably more important than ever. The ability to assimilate information and discern which information is correct or incorrect, and which information can lead you to a conclusion or not, for example, I think is more important than ever. The ability to assimilate a bunch of pieces of information and make a decision, or have an insight or foresight out of that information, is very, very critical. The ability to be analytical in how you look at information, and to really distinguish fact from opinion, is probably quite important. Maybe moving away more and more from memorisation, from just cramming information into your brain like we used to do in college, where you had to know every single algorithm for whatever. It's like, "Who gives a shit? I can just go and search for it." These shifts are not simple, because I think education, in particular in the last century, has maybe been too focused on acquiring more and more knowledge, on learning this knowledge. Now it's more about learning how to process the knowledge rather than learning how to apprehend it. Because the apprehension doesn't matter as much, because you can have this information at any point in time. The information is available to you at the touch of a finger, or by voice, or whatever. But the ability to then use the information to do something with it is not. That's maybe where you start distinguishing the different degrees of education and how things are taught.

Bertrand Schmitt
Honestly, what you just said or described could apply to the changes we went through over the past 30 years. Just using internet search has, for sure, tremendously changed how you can do any knowledge-worker job. Suddenly you have the internet at your fingertips. You can search about any topic. You have direct access to Wikipedia or something equivalent in any field.
I think some of this we already went through, and I hope we learned the consequences of these changes. I would say what is new is the way AI itself works, because when you use AI, you realise that it can utter complete bullshit to you in a very self-assured way. It's a bit more scary than it used to be, because in the past, that algorithm trying to present you the most relevant stuff was not trying to present you the truth. It's a list of links. Maybe it was more the number one link versus the number 100. But ultimately, it's for you to form your own opinion. Now you have a chatbot that's going to tell you that, for sure, this is the way you should do it. Then you check further, and you realise, no, it's totally wrong. It's definitely a real change in how you have to apprehend this brave new world. Also, with these AI tools, the big change, especially with generative AI, is their ability to give you the impression they can do the job at hand by themselves, when usually they cannot.

Nuno Goncalves Pedro
Indeed. There's definitely a lot happening right now that needs to fundamentally shift. Honestly, I think the problem in the education system is that it has barely adapted to the digital world. Even today, if you study at a top school like Stanford, et cetera, there's stuff you can do online, there are more and more tools online. But the teaching process has been very centred on the syllabus, the teachers, later on the professors, and everything around it: in-class presence. There have been minor adaptations. People are sometimes allowed to use their laptops in the classroom, et cetera, or their mobile phones. But it's been done the other way around. The tools came later, and they got fed into the process. Now I think there need to be readjustments. If we did this from the ground up, from a digital-first or mobile-first perspective and an AI-first perspective, how would we do it?
That changes how teachers and professors should interact with classrooms, the role of the classroom, the role of the class itself, the role of homework. A lot of people have been debating that. What do you want out of homework? Is it just that people cram information and whatever, or do you want people to show critical thinking in a specific, different manner? Some people even go one step further: there should be no homework. People should just show up in class, and homework should move into the class in some ways. Then what happens outside of the class? What are people doing at home? Are they learning tools? Are they learning something else? Are they learning to be productive in responding to teachers, but obviously AI-augmented in doing so? I mean, it's still very unclear what this looks like. We're still halfway through the revolution, as we said earlier. The revolution is still in motion. It's not realised yet.

Bertrand Schmitt
I would separate higher education, university and beyond, from lower education, teenagers, kids. Because I think up to the point you are a teenager or so, the school system should still be there to guide you, discovering and learning and being with your peers. What is new is that, again, at some point, AI could potentially do your job, do your homework. We faced similar situations in the past with the rise of Wikipedia, online encyclopedias and such. But this is quite dramatically different. Now someone could write your essays, could answer your maths work. I can see some changes where what you call homework is going to be classwork instead. No work at home, because no one can trust that you did it yourself anymore going forward; you will have to do it in the classroom, maybe spend more time at school so that we can verify that you really did the work yourself. I think there is real value in making sure that you can still think by yourself.
In the same way, with the rise of calculators 40 years ago, I think it was the right thing to say, "You know what? You still need to learn the basics of doing calculations by hand." Yes, I remember myself as a kid thinking, "What the hell? I have a calculator. It's working very well." But it was still very useful, because you can think in your head, you can solve complex problems in your head, you can check whether some output coming from a calculator is right or wrong. There was real value in still learning the basics. At the same time, it was also right to say, "You know what? Once you know the basics, yes, for sure, the calculator will take over." I think that was the right balance that was put in place with the rise of calculators. We need something similar with AI. You need to be able to write by yourself, to do stuff by yourself. At some point, you have to say, "Yeah, you know what? Those long essays that we asked you to write for the sake of writing long essays? What's the point?" At some point, that would be a real question. For higher education, I think, personally, it's totally ripe for full disruption. You talk about the traditional system trying to adapt. I think we are starting to be at the stage where it should be the other way around: we should restart from the ground up, because we simply have different tools, different ways. At this stage, many companies, if you take [inaudible 00:33:01] for instance, have started to recruit people straight after high school. They say, "You know what? Don't waste your time in universities. Don't spend a crazy shitload of money to pay for an education that's more or less worthless." Because it used to be a way to filter people. You go to a good school, you have a stamp that says, "This guy is good enough, knows how to think." But is that so true anymore?
I mean, now that universities have increased enrolment so many times over, your university degree doesn't prove much in terms of your intelligence or your capacity to work hard, quite frankly. If universities are losing the value of their stamp and keep costing more and more, I think it's a fair question to say, "Okay, maybe this is not needed anymore." Maybe now companies can directly find the best talent out there and train them themselves, making sure that ultimately it's a win-win situation. If kids don't have to take big loans anymore, companies don't have to pay them as much, and everyone is winning. I think we have reached a point of no return in terms of the value of university degrees, quite frankly. Of course, there are some exceptions. Some universities have incredible programs, incredible degrees. But as a whole, I think we are reaching a point of no return. Too expensive, not enough value in the degree, not a filter anymore. Ultimately, I think there is a case to be made for companies to go back directly to the source, to high school.

Nuno Goncalves Pedro
I'm still not ready to just say higher education doesn't have a role. I agree with the notion that there is a continuous-education role that needs to be filled in a very different way. Going back to K-12, I think the learning of fundamentals is pretty vital: that you learn, for example, how to write, that you learn cursive and all these things, is important. I think the role of the teacher, and maybe later on of the professors in higher education, is to teach people the critical information they need to know for the area they're in. Basic math, advanced math, the big thinkers in philosophy, whatever it is that you're studying. And then actually teach the students how to use the tools they need, in particular in K-12, so that they more rapidly apprehend knowledge, more rapidly can do exercises, more rapidly do things.
I think we've had a static view of what you need to learn for a while. That's, for example, in the US, where you have AP classes, advanced placement classes, where you could be doing math and you could be doing AP math. You're like, dude. In some ways, I think the role of the teacher and the interaction with the students needs to go beyond just the apprehension of knowledge. It still has to include the apprehension of knowledge, but it needs to extend to the apprehension of tools. Then the application of, as we discussed before, critical thinking, analytical thinking, creative thinking. We haven't talked about creativity for all, but obviously the creativity that you need to have around certain problems, and the induction of that into the process, is critical, particularly in young kids and how they're developing their learning skills, and then actually accelerating learning. In that sense, I'm not sure I'm willing to say higher education is dead. I do think this mass production of higher education that we have, in particular in the US, is incredibly costly. A lot of people in Europe probably don't see how costly higher education is, because those of us educated in Europe paid some fee. A lot of higher education in Europe is still, to a certain extent, subsidised or run by the state. There is a high degree of subsidisation in it, so it's not as expensive as you'd see in the US. But someone spending 200-300K to go to a top school in the US to study for four years for an undergrad degree? That doesn't make sense. And we're talking about tuition alone. How does that work? Why is it so expensive? Even for a Stanford or a Harvard or a University of Pennsylvania, whatever Ivy League school, to command that premium, I don't think it makes much sense. To your point, maybe it is about thinking through higher education in a different way. Technical schools also make sense.
Your ability to learn and continue your education also makes sense. You can be certified; there are certifications all around that also make sense. I do think there's still a case for higher education, but it needs to be done in a different mould, and obviously the cost needs to be reassessed. Because it doesn't make sense to be as dramatically in debt as you are today in the US.

Bertrand Schmitt
I mean, for me, that's where I'm starting when I say it's broken. You cannot justify this amount of money except for a very rare and stratified set of job opportunities. That means for a lot of people, the value of this equation will be negative. It's like some new, indentured class of people who owe a lot of money and have no way to get rid of this loan. Sorry, there are some ways, like joining a government task force, working for the government, so that at some point your loans will be forgiven. Some people are going to go after government jobs just for that reason, which is quite sad, frankly. I think we need a different approach. Education can be done, has to be done, cheaper, and should be done differently. Maybe it's just regular on-the-job training; maybe it's on the side, a night-school type of approach. I think there are different ways to think about it. Also, it can be very practical. I don't know about you, but there are a lot of classes that are not really practical or not very tailored to the path you have chosen. Don't get me wrong, there is always value in seeing all the stuff, in getting a sense of the world around you. But this has a cost. If it were free, different story. But nothing is free. I mean, your parents might think it's free, but at the end of the day, it's their taxes paying for all of this. The reality is that it's not free. It's costing a lot of money at the end of the day. I think we absolutely need to do a better job here. I think the internet, and now AI, makes this a possibility.
I don't know about you, but personally, I've learned so much through online classes, YouTube videos, and the like, that it never ceases to amaze me how much you can learn thanks to the internet, and keep up to date in so many ways on some topics. Quite frankly, there are some topics where not a single university can teach you what's going on, because we're talking about stuff that is so precise, so focused, that no one is building a degree around it. There is no way.

Nuno Goncalves Pedro
I think that makes sense. Maybe let's bring it back to core skills. We've talked about a couple of core skills, but maybe just to structure it a little bit for you, our listener. I think there's a big belief that critical thinking will be more important than ever. We already talked a little bit about that. I think there's a belief in analytical thinking: the ability to, again, distinguish fact from opinion, the ability to distinguish elements from different data sources and make sure that you see what those elements actually are in a relatively analytical manner; actually, the ability to extract data in some ways. Active learning, proactive learning and learning strategies. I mean, the ability to proactively learn, proactively search, be curious and search for knowledge. Complex problem-solving, which we also talked a little bit about; that normally goes hand in hand with critical thinking and analysis. Creativity, we also talked about. I think originality and initiative will be very important for a long time. I'm not saying AI at some point won't be able to emulate genuine creativity. I wouldn't go as far as saying that, but for the time being, it has tremendous difficulty doing so.

Bertrand Schmitt
But you can use AI in creative endeavours.

Nuno Goncalves Pedro
Of course, no doubt.

Bertrand Schmitt
You can do stuff you would otherwise be unable to do: create music, create videos, create stuff that would be very difficult. I see that as an evolution of tools.
It's like now cameras are so cheap that you can create world-class quality videos, for instance. If you're a student and you want to learn cinema, you can do it truly on the cheap. But now there's the next level. You don't even need actors, you don't even need a real camera. You can start to make movies. It's amazing as a learning tool, as a creative tool. It's for sure a new art form, in a way, one we have seen expanding on YouTube and other places, and the same goes for creating new images, new music. I think AI can actually be a tool for expression and for creativity, even in its current form.

Nuno Goncalves Pedro
Absolutely. A couple of other skills that people would say are maybe soft skills, but I think are incredibly powerful and very distinctive from machines. Empathy: the ability to figure out how the other person is feeling and why they're feeling like that. Adaptability, openness, flexibility: the ability to drop something and go a different route, to maybe be intellectually honest and recognise this is the wrong way and the wrong angle. Last but not least, I think on the positive side, tech literacy. I mean, a lot of people say, oh, we don't need to be tech literate. Actually, I think this is a moment in time where you need to be more tech literate than ever. It's almost a given. It's almost table stakes that you have some tech literacy. What matters less? I think memorisation, the cramming of information and using your brain as a library just for the sake of it, will probably matter less and less. If there is a subject or a class that's solely focused on cramming in information, I feel that's probably the wrong way to go. I saw some analysis saying that the management of people is less and less important. I actually disagree with that.
I think in the interim, because of what we were discussing earlier, subject-matter experts at the top end can do a lot of stuff by themselves, and therefore have fewer people working for them, because they become a little bit more like superpowered individual contributors. But I feel that's a blip rather than what's going to happen over time. I think collaboration is going to be a key element of what needs to be done in the future. Still, I don't see that changing, and therefore management needs to be embedded in it. What other skills should disappear, or what other skills are less important to be developed, I guess?

Bertrand Schmitt
Rote learning. I've never, ever been a fan. That one, for sure. But at the same time, I want to make sure that we still learn about history or geography. What we don't want is that stupid rote learning. I still remember, as a teenager, having to learn the list of all 100 French departments. I mean, who cared? I didn't care about knowing the biggest cities of each French department. It was useless to me. But at the same time, geography in general, history in general: there is a lot to learn from the past and from the current world. I think we need to find the right balance. The details, the long lists, might not be that necessary. At the same time, the long arc of history, our world where it is today, there is a lot of value in that. You talk about analysing data. I think this one is critical, because the world is generating more and more data. We need to benefit from it. There is no way we can benefit from it if we don't understand how data is produced, what data means, if we don't understand the basics of statistical analysis. I think some of this is definitely critical. But as for what we should do less of, beyond rote learning, I don't know, honestly. I don't think the core should change so much. But the tools we use to learn the core, yes, probably should definitely improve.
Nuno Goncalves Pedro
One final debate, maybe, just to close this chapter on education and skill building and all of that. There's been a lot of discussion around specialisation versus generalisation, specialists versus generalists. I think for a very long time, the world has gone down a route that basically frames specialisation as a great thing. I think both of us have lived in Silicon Valley; I still do, but we both lived in Silicon Valley for a significant period of time. It's the centre of the universe in terms of specialisation: you get more and more specialised. I think we're going into a world that becomes a little bit different. It becomes a little bit like what Amazon calls athletes, right? The T- or Pi-shaped people get the most value, where you're a very strong generalist on top, and you have a lot of great soft skills around management and empathy and all that stuff. Then you might have one or two subject-matter expertise areas. It could be business development and sales, or corporate development and business development, or product management and something else. I think those are the winners of the future. The young winners of the future are going to be more and more T- or Pi-shaped, if I had to make a guess. Specialisation matters, but maybe not as much as it matters today. It matters from the perspective that you still have to have spikes in certain areas of focus. But I'm not sure that you get more and more specialised in the area you're in. I'm not sure that's necessarily how humans create the most value in their arena of deployment and development, professionally, and therefore I'm not sure education should be more and more specialised just for the sake of it. What do you think?

Bertrand Schmitt
I think that's a great point. I could see an argument for both. I think there is always some value in being truly an expert on a topic, so that you can keep digging around, keep developing the field.
You cannot develop a field without people focused on developing it. I think that one is there to stay. At the same time, I can see how, in many situations, combining knowledge of multiple fields can bring tremendous value. I think that's very clear as well. It's a balance. We still need some experts. At the same time, there is value in being quite horizontal in terms of knowledge. I think what is still very valuable is the ability to drill through whenever you need to. And that, we should say, is actually much easier than before. That, for me, is a big difference. I can see how now you can drill through on topics that would have been very complex to go into. You would have had to read a lot of books, watch a lot of videos, potentially do a whole new education before you grasped much about a topic. Well, now, thanks to AI, you can drill very quickly into a topic of interest to you. I think that can be very valuable. Again, if you just do that blindly, that's calling for trouble. But if you have some knowledge in the area, if you know how to deal with AI, at least today's AI and its constraints, there is real value you can deliver thanks to an ability to drill through where you don't have expertise. For me, personally, one thing I've seen is that some people who are generalists have lost this ability. They have lost this ability to drill through on a topic, to become expert on a topic very quickly. I think you need that. If you're a VC, you need to analyse opportunities, you need to discover a new space very quickly. As we said, some stuff can move much quicker than before. I'm always careful now when I see pure generalists, because one thing I notice is that they don't know how to do much of anything anymore. That's a risk. We have examples of very, very successful people. Take an Elon Musk, take a Steve Jobs. They have this ability to drill through to the very end of any topic, and that's a real skill. Sometimes I see people say: you should trust the people below.
They know better on this and that, and you should not question experts and such. Hey, guys, how is it that they managed to build such successful companies? It's their ability to drill through and challenge hardcore experts. Yes, they will bring in top people in the field, but they have an ability to learn a new space quickly, and to drill through on some very technical topics and challenge people the right way. Challenge smartly, not the "I don't care, just do it in 10 days" way. No: going about it smartly, showing people the options, learning enough in the field to be dangerous. I think that's a very, very important skill to have.

Nuno Goncalves Pedro
Maybe switching to the dark side and talking a little bit about the bad stuff. I think a lot of people have these questions. There's been a lot of debate around ChatGPT. I think there are still a couple of court cases going on, including a suicide case that I was recently made a bit privy to, of a young man who killed himself, and OpenAI and ChatGPT as a tool are currently really under the magnifying glass: are people getting confused about AI, since AI looks so similar to us, et cetera?

The Ethics, Safety, and Privacy Landscape

Maybe let's talk about the ethics and safety and privacy landscape a little bit and what's happening. Sadly, AI will also usher in a world that still has a lot of biases, at scale. I mean, let's not forget that AI is using data, and data has biases. The models being trained on this data will also have biases. We're seeing, with AI, the ability to do things that are fake: deepfakes in video and pictures, et cetera. How do we, as a society, start dealing with that? How do we, as a society, start dealing with all the attacks that are going on?
On the privacy side, there is the ability of these models and tools to keep memory of the conversations we've had with them, to have context on what we said before and act on it. How is that information being farmed, and that data being farmed? How is it being used? For what purposes is it being used? As I said, this is the dark side of our conversation today. I think we've been pretty positive until now. But in this world, I think things are going to get worse before they get better. Obviously, there's a lot of money being thrown at the rapid evolution of these tools. I don't see moratoriums coming anytime soon, or bans on tools coming anytime soon. The world will need to adapt very, very quickly. As we've discussed in previous episodes, regulation takes a long time to adapt, except in Europe, which obviously regulates maybe way too fast on technology, and maybe not really on use cases and user flows. But how do we deal with this world that is clearly becoming more complex?

Bertrand Schmitt
I mean, on the European topic, I believe Europe should focus on building rather than trying to censor and to control and to regulate. But going back to your point, there are some, I mean, very tough use cases, like voice cloning, for instance. Grandparents believing that their kids are calling them, that they have been kidnapped when there is nothing to it, and being extorted. AI generating deepfakes that enable sextortion, that kind of stuff. I mean, it's horrible stuff, obviously. I'm not for regulation here, to be frank. I think we should, for sure, prosecute to the full extent of the law. The law already has a lot of tools to deal with this type of situation. But I can see some value in trying to prevent this in some tools. If you are great at building tools to generate a fake voice, maybe you should make sure that you are not helping scammers.
If you can easily generate images, you might want to make sure that your tools cannot easily be used for creating deepfakes and sextortion. I think there are things that should be done by some providers to limit such terrible use cases. At the same time, the genie is out. There is also the part where, okay, the world will need to adapt. You cannot trust everything that is out there. What looks horrible might not be true. You need to think twice about some of this, what you see, what you hear. We need to adjust how we live, how we work, but also how we prevent this. New tools, I believe, will appear. We will maybe learn to be less trustful of some stuff, but it is what it is.

Nuno Goncalves Pedro
Maybe to follow up on that, I fully agree with everything you just said. We need to have tools that will create boundary conditions around this as well. I think tech will need to fight tech in some ways, or we'll need to find flaws in tech, but a lot of money needs to be put into it as well. My shout-out here, if the people listening to us are entrepreneurs, et cetera: I think that's an area that needs more and more investment, an area that needs more and more tooling platforms that are helpful for this. It's interesting, because that's a little bit like how OpenAI was born. OpenAI was born to be a positive AI platform for the future. Then all of a sudden we're like, "Can we have tools to control ChatGPT and all these things that are out there now?" How things have changed, I guess. But we definitely need a much more significant investment in this tooling and these platforms than we have today. Otherwise, I don't see things evolving much better. There's going to be more and more of this. There's going to be more and more deepfakes, more and more lack of contextualisation. There are countries now that allow you to marry something that is not a human.
You can get married to an algorithm or a robot or whatever. What the hell? What's happening now? It's crazy. Hopefully, we'll have more and more boundary conditions.

Bertrand Schmitt
Yeah, I think it will be a boom for cybersecurity, no question. Tools for building a better trust system or for detecting fakes. It's not going to be easy, but that has been the game in cybersecurity for a long time: new internet tools and products appear, you need to build defenses against them, and there is a constant war between the attackers and the defenders.

The Parental Playbook: Actionable Strategies

Nuno Goncalves Pedro
Maybe last but not least in today's episode, the parent playbook: I'm a parent, what should I do? I'll actually let you start first, Bertrand. I'm parent-like but, sadly, not a parent, so I'll let you start, and then I'll share some of my perspectives as a parent-like figure.

Bertrand Schmitt
As a parent of an 8-year-old, I would say that so far there is no real difference from before. She will do some homework on an iPad, but beyond that, I cannot say I've seen much difference at this stage. I think it will come later, when there are different types of homework and kids start to use computers on their own. What I have seen, however, are some interesting use cases. When my daughter is not sure about a spelling, she simply asks Siri: "Hey Siri, how do you spell this?" I didn't teach her that; all of it came on her own. She's using Siri for a few things in a very smart, useful way, and I'm quite pleasantly surprised. That's great: she doesn't need to ask me, she can ask by herself, she's more autonomous. Why not? It's a very efficient way for her to work and learn about the world. I did feel sad when she asked Siri if she was her friend; that does not feel right to me. But I would say so far, so good.
I've seen AI only as a useful tool, with very limited risk. At the same time, for sure, we don't let our kid anywhere near social media or the like. Some of that stuff is definitely dangerous. As a parent, you have to be very careful before authorising any social media. I guess at some point you have no choice, but you have to be very careful and very gradual, putting in a lot of controls and safety mechanisms. You hear about kids committing suicide; it's horrible. As a parent, I don't think you can have a bigger worry than that: your kid suddenly going to pieces because someone bullied them online or tried to extort them online. That person could be someone at the same school or a scammer on the other side of the world. It is very scary. I think we need a lot of oversight of our kids' digital lives, as well as being there for them on a lot of topics: keep drilling into them that a lot of this stuff online is not true, is fake, is not important; raise them to be critical of what they see; and encourage them to share as much as possible with us as parents. We have to be very careful. But I would say the most dangerous stuff so far is not really coming from AI; it's much more social media in general. That said, AI is definitely adding another layer of risk.

Nuno Goncalves Pedro
From my perspective, having helped raise three kids and having a parent-like role today, I would go back to the skills I was talking about before and work on developing those skills.
Skills that relate to curiosity and to analytical behaviour while also being creative: allowing for both the left brain and the right brain, letting the discipline and structure that come with analytical thinking go hand in hand with doing things in a very different way, experimenting, failing, and trying again. All the skills I mentioned before; focus on those. I was very fortunate in my parental unit. My father and my mother, who were together all their lives (my father sadly passed away five years ago), were very, very different. My mother was more of a hacker in mindset: very curious, a medical doctor, someone who allowed me to experiment and be curious about the things around me, who didn't simplify her interactions with me but said things as they were, in the language suited to the subject, and who let me interact with her friends, who were obviously adults. On the other side was my father, someone more disciplined, someone more ethical, and I think that becomes more important: the ability to be ethical, the ability to have moral standing. I'm Catholic, so there is a religious and moral overlay to how I do things. Being able to pass that on to the next generation, sharing with them what's acceptable and what's not, is pretty critical, even more critical than it was before. So is the ability to be structured, to do what you say rather than just saying a bunch of stuff and not doing it. Those things don't go out of use, but I would spend a lot more focus on critical thinking, analytical thinking, having creative ideas, and cultivating a bit of a hacker mindset: knowing how to cut corners to get to something is genuinely more and more important. The second part, layered over all of this, is a growth mindset.
By that I mean having a flexible mindset rather than a fixed mindset: not praising your kids or your grandchildren for being very intelligent or very beautiful, which are fixed, static things, but praising them for the effort they put into something, for the learning they took from it, for the process, raising the

Pioneering PAI: How Daniel Miessler's Personal AI Infrastructure Activates Human Agency & Creativity

Play Episode Listen Later Jan 18, 2026 148:11


Daniel Miessler shares his Personal AI Infrastructure (PAI) framework and vision for a future where single human owners are supported by armies of AI agents. He explains his TELOS system for defining purpose and goals, multi-layered memory design, and orchestration of multiple models and sub-agents. The conversation dives into cybersecurity impacts, from AI-accelerated testing to inevitable personalized spear-phishing and always-on defensive monitoring. Listeners will learn how scaffolding can turn frontier models into true digital assistants and even help reshape their own working habits. LINKS: PAI principles on GitHub README Daniel Miessler about page How Miessler's projects fit together AI changes predictions for 2026 Fabric open-source AI framework Personal AI Infrastructure GitHub repository Current definition of AGI article Why we'll have AGI by 2028 RAID AI definitions framework article Unsupervised Learning newsletter signup Daniel Miessler LinkedIn profile Sponsors: MongoDB: Tired of database limitations and architectures that break when you scale? MongoDB is the database built for developers, by developers—ACID compliant, enterprise-ready, and fluent in AI—so you can start building faster at https://mongodb.com/build Serval: Serval uses AI-powered automations to cut IT help desk tickets by more than 50%, freeing your team from repetitive tasks like password resets and onboarding. Book your free pilot and guarantee 50% help desk automation by week four at https://serval.com/cognitive MATS: MATS is a fully funded 12-week research program pairing rising talent with top mentors in AI alignment, interpretability, security, and governance. Apply for the next cohort at https://matsprogram.org/s26-tcr Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai PRODUCED BY: https://aipodcast.ing

The Deep Dive Radio Show and Nick's Nerd News
The Human Brain Soon To Be Simulated...

The Deep Dive Radio Show and Nick's Nerd News

Play Episode Listen Later Jan 17, 2026 4:44


we are one step closer to AGI, here...

Making Sense with Sam Harris
#453 — AI and the New Face of Antisemitism

Making Sense with Sam Harris

Play Episode Listen Later Jan 16, 2026 21:50


Sam Harris speaks with Judea Pearl about causality, AI, and antisemitism. They discuss why LLMs won't spawn AGI, alignment concerns in the race for AGI, Pearl's public life after the murder of his son Daniel, the post-October 7th shift toward open anti-Zionism, the overlap between anti-Zionism and antisemitism, the misuse of "Islamophobia," Israel's fracture under Netanyahu, confronting anti-Zionism in universities, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.

a16z
How Foundation Models Evolved: A PhD Journey Through AI's Breakthrough Era

a16z

Play Episode Listen Later Jan 16, 2026 57:06


The Stanford PhD who built DSPy thought he was just creating better prompts—until he realized he'd accidentally invented a new paradigm that makes LLMs actually programmable. While everyone obsesses over whether LLMs will get us to AGI, Omar Khattab is solving a more urgent problem: the gap between what you want AI to do and your ability to tell it, the absence of a real programming language for intent. He argues the entire field has been approaching this backwards, treating natural language prompts as the interface when we actually need something between imperative code and pure English, and the implications could determine whether AI systems remain unpredictable black boxes or become the reliable infrastructure layer everyone's betting on. Follow Omar Khattab on X: https://x.com/lateinteraction Follow Martin Casado on X: https://x.com/martin_casado Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts. Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Stay Updated: Find a16z on X, find a16z on LinkedIn, listen to the a16z Show on Spotify or Apple Podcasts, and follow our host: https://twitter.com/eriktorenberg Hosted by Simplecast, an AdsWizz company. 
See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The Jim Rutt Show
EP 329 Worldviews: David Krakauer

The Jim Rutt Show

Play Episode Listen Later Jan 15, 2026


In the inaugural episode of a new series, Jim talks with David Krakauer about his intellectual formation and worldview. They discuss what woke up as David this morning, his commitments to chance and pattern seeking, his epiphany about the idea of the idea at age 12 or 13, his perverse attraction to the arcane and difficult, evolution as integral to intelligence, the risk-averse character of scholars and the sociology of science, the Santa Fe Institute's attempt to maintain revolutionary science, the Ouroboros concept challenging foundationalism in epistemology, the standard model of physics as foundational versus the view that you can establish foundations anywhere, string theory as a slowly dying pseudoscience, whether beauty is a useful guide in science, emergence and broken symmetries, Phil Anderson's "More is Different" paper, the Wigner reversal and the shift from law to initial conditions, rejecting both weak and strong emergence, effective theories and causally justified concepts, downward causality, micrograining versus coarse graining, the distinction between abiotic and biotic systems, games and puzzles as model systems for complexity, combinatorial solution spaces, heuristics as dimensional reducers and potentially the golden road to AGI, Isaiah Berlin's influence on David's worldview, negative versus positive liberties, value pluralism and historicity, the Fermi paradox and the possibility of alien life, the rational versus the irrational in human life, and much more. Episode Transcript JRS EP 192 - David Krakauer on Science, Complexity and AI JRS EP10 - David Krakauer: Complexity Science "A Minimum Viable Metaphysics," by Jim Rutt "More Is Different," by P.W. Anderson The Emergence of Everything, by Harold Morowitz David Krakauer's research explores the evolution of intelligence and stupidity on Earth. 
This includes studying the evolution of genetic, neural, linguistic, social, and cultural mechanisms supporting memory and information processing, and exploring their shared properties. President of the Santa Fe Institute since 2015, he served previously as the founding director of the Wisconsin Institutes for Discovery, the co-director of the Center for Complexity and Collective Computation, and professor of mathematical genetics, all at the University of Wisconsin, Madison.

Personal Development Mastery
What's Keeping Your Midlife Career Transition Stuck (It's Not Fear or Lack of Motivation) | #571

Personal Development Mastery

Play Episode Listen Later Jan 15, 2026 6:05 Transcription Available


Are you stuck in a career that no longer fits, but still haven't taken a real-world step to change it? If you've been quietly contemplating a career or life transition for months (or years) without moving forward, this episode will speak directly to you. It's not about confusion. It's about why high-achieving professionals often stay silently stuck, even when clarity has already arrived. Discover why your current approach might be the very thing keeping you circling in place. If you've been carrying this quietly on your own, listen now to discover a new, more sustainable way forward.˚VALUABLE RESOURCES: Book a 1:1 conversation with Agi: https://personaldevelopmentmasterypodcast.com/mentor ˚Support the show. Career transition and career clarity podcast content for midlife professionals in career transition, navigating a career change, career pivot or second career, starting a new venture or leaving a long-term career. Discover practical tools for career clarity, confident decision-making, rebuilding self belief and confidence, finding purpose and meaning in work, designing a purposeful, fulfilling next chapter, and creating meaningful work that fits who you are now. Episodes explore personal development and mindset for midlife professionals, including how to manage uncertainty and pressure, overcome fear and self-doubt, clarify your direction, plan your next steps, and turn your experience into a new role, business or vocation that feels aligned. To support the show, click here.

Revealing Men
Masculinity in the Age of AGI- Preserving our Humanity

Revealing Men

Play Episode Listen Later Jan 15, 2026 18:51


In the first segment of a two-part Revealing Men series on how men will adjust to the advance of artificial general intelligence (AGI)*, Randy Flood, director and co-founder of the Men's Resource Center, describes what happens when “technology starts outperforming men physically, mentally, and cognitively.” In a world where masculinity has been “equated with utility,” [...]

Deep Thoughts Radio Show
DTR S7: AGI – Artificial General Intelligence

Deep Thoughts Radio Show

Play Episode Listen Later Jan 14, 2026 87:37


The concept of an AGI, or Artificial General Intelligence, has been around as long as man has been able to conceive of talking computers. The establishment wants you to believe that no corporation or institution has ever attempted to create one, and that it will be 20 years from now until we see one in action. This episode breaks down the logic of that claim and explains in complete detail what an AGI…

thinkfuture with kalaboukis
1124 Why the Future Is Getting Harder to Predict | Steve Brown on AI, AGI, and What Comes Next

thinkfuture with kalaboukis

Play Episode Listen Later Jan 14, 2026 39:09


See more: https://thinkfuture.substack.com
Connect with Steve: https://stevebrown.ai
---
What happens when technology moves faster than our ability to forecast it? In this episode of thinkfuture, host Chris Kalaboukis speaks with Steve Brown, veteran technologist, former Intel futurist, and former member of Google DeepMind's AI research lab. With over 35 years of experience across high-tech, digital transformation, and AI, Steve offers a rare long-view perspective on where we are—and why predicting what comes next has never been harder. Steve explains how long-term forecasting used to be feasible when technological progress followed clearer trajectories. Today, breakthroughs in AI—and soon quantum computing—are compressing decades of progress into just a few years. The result is a future that's accelerating faster than our institutions, economic models, and assumptions can keep up with.
We cover:
- Why 10-year technology forecasts are now nearly impossible
- How AI is already accelerating progress in math, physics, and science
- Why the combination of AI and quantum computing could reshape material science, chemistry, and biology
- The likelihood of Artificial General Intelligence (AGI) arriving within 5–10 years
- How AGI could disrupt jobs and force a rethink of capitalism itself
- Why labor may increasingly turn into capital
- The need for new economic models, shorter workweeks, or earlier retirement
- How humans find meaning when machines handle most productive work
Steve argues we may see more progress in the next five years than in the last fifty—and that the biggest challenge won't be technological, but human. If you're interested in AI, AGI, the future of work, economic disruption, or the limits of forecasting, this conversation offers a grounded, thoughtful look at what may be coming sooner than we expect.

Personal Development Mastery
Why Midlife Professionals Feel Disconnected from Their Work and Identity, and How to Find Career Clarity, with Shelley McIntyre | #570

Personal Development Mastery

Play Episode Listen Later Jan 12, 2026 43:25 Transcription Available


Are you a midlife professional feeling a growing disconnect between your career success and your personal fulfillment? Many Generation X professionals reach midlife with impressive achievements, yet silently struggle with a deep sense of dissatisfaction, identity confusion, or the haunting feeling that time is running out to live a truly meaningful life. This episode offers a compassionate, insightful roadmap for those questioning their current career path and seeking a purpose-driven reinvention. Discover why high performers often feel disconnected at midlife, and what to do when the “mask” you've worn at work no longer fits. Learn practical ways to explore new possibilities through tiny experiments, identity redefinition, and values discovery. Hear a deeply personal story of transition from corporate leadership to purpose-led coaching, with actionable advice on navigating financial fears, identity loss, and uncertainty. Listen now to learn how to begin designing a life and career that reflects who you truly are – not just the role you've always played.
˚KEY POINTS AND TIMESTAMPS:
00:41 - Introducing Shelley McIntyre & the midlife disconnect
03:04 - Masks, midlife reckoning and inner identity
06:53 - Options, evidence and experimenting with possibility
09:17 - Values, meaning and redefining what work must provide
11:18 - Shelley's personal transition story
16:26 - Practical first steps: finances and portfolio careers
19:33 - Challenges of transition: uncertainty, off-gassing and relationships
24:14 - Identity, masks and rediscovering what you love
30:16 - Professional identity, caring professions and essence work
˚MEMORABLE QUOTE:
"All knowledge is rumor until it is in the bones."
˚VALUABLE RESOURCES:
Shelley's website: https://burnthemapcoaching.com/
˚Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor˚

Kliq This: The Kevin Nash Podcast
Beat Bron and Make him

Kliq This: The Kevin Nash Podcast

Play Episode Listen Later Jan 12, 2026 115:52


This episode of Kliq This is one of those mornings where the conversation refuses to stay light. Kevin Nash and Sean Oliver come in reacting to the world as it is right now, not as anyone wishes it were, and the tone shifts fast. What starts as media and culture talk turns into something heavier, sharper, and far more uncomfortable. Kev doesn't hedge his opinions here. He draws lines, explains why he draws them, and challenges the way stories are framed once they hit the news cycle. There is real frustration with how power is exercised, how accountability gets blurred, and how quickly narratives are assigned before facts are even settled. The discussion widens into government, law enforcement, and what happens when authority is paired with fear and immunity. Kevin speaks from lived experience and training, not from a headline, and that perspective drives some of the most intense moments of the show. This is not a clean debate, and it is not meant to be. Just when you think the episode has found its ceiling, it pivots again. Economics, foreign policy, tariffs, oil, and global power plays all get pulled into the same conversation. The connective tissue is control, who benefits from it, and who pays the price when decisions are made far away from everyday people. As always, Kliq This refuses to end on a neat bow. There are laughs, unexpected detours, and moments of gallows humor that cut through the tension. 
If you come to this show for honesty, unpredictability, and conversations that sound nothing like sanitized media panels, this episode delivers exactly that.
StopBox - Get firearm security redesigned and save 10% off @StopBoxUSA with code NASH10 at https://www.stopboxusa.com/NASH10 #stopboxpod
BetterWild - Right now, Betterwild is offering our listeners up to 40% off your order at betterwild.com/KLIQ
00:00 Kliq This #184: Law & Disorder 00:57 Trying to keep up to date 01:19 Sophomore slumps in TV 05:59 Federal Investigation 08:00 "I am giving my opinion on things that happened" 09:56 Minneapolis Incident 16:52 Prior Incident (June 2025) 19:02 Domestic Terrorism? 23:56 Portland Shooting 28:25 Are there enough Republicans to stand up against this? 34:14 China Graph 38:54 Venezuelan Oil 47:26 AGI? 50:38 Venezuelan casualties 01:02:28 The Champagne of beers 01:04:20 BREAK BETTER WILD 01:08:53 "maybe the content this week is suitable" 01:09:32 Portland shooting 01:11:07 "George you can type this shit but you can't say it" 01:11:52 "Nash swallows the pill in the locker room" 01:14:33 capturing Maduro 01:22:20 BREAK STOPBOX 01:25:11 RAW 01:26:49 Rhea is champ 01:27:22 Gunther/AJ 01:29:58 Maxxine Dupree 01:32:00 Punk/Bron 01:44:38 TRT. What to expect 01:47:30 Any Decade 01:48:20 Dennis Rodman 01:50:53 William Regal 01:52:21 OUTRO

Light Talk with The Lumen Brothers
LIGHT TALK Episode 458 - "Lighting, Art, and Pizza - Our Conversation with Elena Hewett and Darius Knight Evans"

Light Talk with The Lumen Brothers

Play Episode Listen Later Jan 10, 2026 43:06


In this episode of LIGHT TALK, as part of our Young Designers Series, The Lumen Brothers and Sister interview USITT PAT MacKay Diversity in Design Scholarship Winners Darius Knight Evans and Elena Hewett.   In this episode, Elena, Darius, Ellen, Steve, Dennis, and David discuss: How young designers find the right education; Different paths to discovering lighting design careers; Connecting with the right people in the industry; This year's interesting LDI presentations, panels, and products; Exciting innovations; New ideas for post-graduate education; Suggestions for future LDI conferences; The next steps; How young designers are using and dealing with AI and AGI; Coping with stress; and Words of wisdom for other young designers.  Nothing is Taboo, Nothing is Sacred, and Very Little Makes Sense.

Data Science at Home
AGI: The Dream We Should Never Reach (Ep. 296)

Data Science at Home

Play Episode Listen Later Jan 10, 2026 45:41


Also on YouTube   Two AI experts who actually love the technology explain why chasing AGI might be the worst thing for AI's future—and why the current hype cycle could kill the field we're trying to save. Want to dive deeper? Head to datascienceathome.com for detailed show notes, code examples, and exclusive deep-dives into the papers we discuss.   Subscribe to our newsletter for weekly breakdowns of cutting-edge research delivered straight to your inbox—no fluff, just science!

Software Defined Talk
Episode 554: The Alpha and The Omega

Software Defined Talk

Play Episode Listen Later Jan 9, 2026 72:05


This week, we discuss AI's impact on Stack Overflow, Docker's Hardened Images, and Nvidia buying Groq. Plus, thoughts on playing your own game and having fun. Watch the YouTube Live Recording of Episode (https://www.youtube.com/live/LQSxLbjvz3c?si=ao8f3hwxlCrmH1vX) 554 (https://www.youtube.com/live/LQSxLbjvz3c?si=ao8f3hwxlCrmH1vX) Please complete the Software Defined Talk Listener Survey! (https://docs.google.com/forms/d/e/1FAIpQLSfl7eHWQJwu2tBLa-FjZqHG2nr6p_Z3zQI3Pp1EyNWQ8Fu-SA/viewform?usp=header) Runner-up Titles It's all brisket after that. Exploring Fun Should I go build a snow man? Pets Innersourcing Two books Michael Lewis should write. Article IV is foundational. Freedom is options. Rundown Stack Overflow is dead. (https://x.com/rohanpaul_ai/status/2008007012920209674?s=20) Hardened Images for Everyone (https://www.docker.com/blog/docker-hardened-images-for-every-developer/) Tanzu's Bitnami stuff does this too (https://blogs.vmware.com/tanzu/what-good-software-supply-chain-security-looks-like-for-highly-regulated-industries/). 
OpenAI OpenAI's New Fundraising Round Could Value Startup at as Much as $830 Billion (https://www.wsj.com/tech/ai/openais-new-fundraising-round-could-value-startup-at-a[…]4238&segment_id=212500&user_id=c5a514ba8b7d9a954711959a6031a3fa) OpenAI Reportedly Planning to Make ChatGPT "Prioritize" Advertisers in Conversation (https://futurism.com/artificial-intelligence/openai-chatgpt-sponsored-ads) OpenAI bets big on audio as Silicon Valley declares war on screens (https://techcrunch.com/2026/01/01/openai-bets-big-on-audio-as-silicon-valley-declares-war-on-screens/) Sam Altman says: He has zero percent interest in remaining OpenAI CEO, once (https://timesofindia.indiatimes.com/technology/tech-news/sam-altman-says-he-has-zero-percent-interest-remaining-openai-ceo-once-/articleshow/126350602.cms) Nvidia buying AI chip startup Groq's assets for about $20 billion in its largest deal on record (https://www.cnbc.com/2025/12/24/nvidia-buying-ai-chip-startup-groq-for-about-20-billion-biggest-deal.html) Relevant to your Interests Broadcom IT uses Tanzu Platform to host MCP Servers (https://news.broadcom.com/app-dev/broadcom-tanzu-platform-agentic-business-transformation). 
A Brief History Of The Spreadsheet (https://hackaday.com/2025/12/15/a-brief-history-of-the-spreadsheet/) Databricks is raising over $4 billion in Series L funding at a $134 billion (https://x.com/exec_sum/status/2000971604449485132?s=20) Amazon's big AGI reorg decoded by Corey Quinn (https://www.theregister.com/2025/12/17/jassy_taps_peter_desantis_to_run_agi/) “They burned millions but got nothing.” (https://automaton-media.com/en/news/japanese-game-font-services-aggressive-price-hike-could-be-result-of-parent-companys-alleged-ai-failu/) X sues to protect Twitter brand Musk has been trying to kill (https://www.theregister.com/2025/12/17/x_twitter_brand_lawsuit/) Mozilla's new CEO says AI is coming to Firefox, but will remain a choice | TechCrunch (https://techcrunch.com/2025/12/17/mozillas-new-ceo-says-ai-is-coming-to-firefox-but-will-remain-a-choice/) Why Oracle keeps sparking AI-bubble fears (https://www.axios.com/2025/12/18/ai-oracle-stock-blue-owl) What's next for Threads (https://sources.news/p/whats-next-for-threads) Salesforce Executives Say Trust in Large Language Models Has Declined (https://www.theinformation.com/articles/salesforce-executives-say-trust-generative-ai-declined?rc=giqjaz) Akamai Technologies Announces Acquisition of Function-as-a-Service Company Fermyon (https://www.akamai.com/newsroom/press-release/akamai-announces-acquisition-of-function-as-a-service-company-fermyon) Google Rolling Out Gmail Address Change Feature: Here Is How It Works (https://finance.yahoo.com/news/google-rolling-gmail-address-change-033112607.html) The Enshittifinancial Crisis (https://www.wheresyoured.at/the-enshittifinancial-crisis/) MongoBleed: Critical MongoDB Vulnerability CVE-2025-14847 | Wiz Blog (https://www.wiz.io/blog/mongobleed-cve-2025-14847-exploited-in-the-wild-mongodb) Softbank to buy data center firm DigitalBridge for $4 billion in AI push (https://www.cnbc.com/amp/2025/12/29/digitalbridge-shares-jump-on-report-softbank-in-talks-to-acquire-firm.html) 
The best tech announced at CES 2026 so far (https://www.theverge.com/tech/854159/ces-2026-best-tech-gadgets-smartphones-appliances-robots-tvs-ai-smart-home)
Who's who at X, the deepfake porn site formerly known as Twitter (https://www.ft.com/content/ad94db4c-95a0-4c65-bd8d-3b43e1251091?accessToken=zwAGR7kzep9gkdOtlNtMlaBMZdO9jTtD4SUQkQ.MEYCIQCdZajuC9uga-d9b5Z1t0HI2BIcnkVoq98loextLRpCTgIhAPL3rW72aTHBNL_lS7s1ONpM2vBgNlBNHDBeGbHkPkZj&sharetype=gift&token=a7473827-0799-4064-9008-bf22b3c99711)
Manus Joins Meta for Next Era of Innovation (https://manus.im/blog/manus-joins-meta-for-next-era-of-innovation)
The WELL: State of the World 2026 with Bruce Sterling and Jon Lebkowsky (https://people.well.com/conf/inkwell.vue/topics/561/State-of-the-World-2026-with-Bru-page01.html)
Virtual machines still run the world (https://cote.io/2026/01/07/virtual-machines-still-run-the.html)
Databases in 2025: A Year in Review (https://www.cs.cmu.edu/~pavlo/blog/2026/01/2025-databases-retrospective.html)
Chat Platform Discord Files Confidentially for IPO (https://www.bloomberg.com/news/articles/2026-01-06/chat-platform-discord-is-said-to-file-confidentially-for-ipo?embedded-checkout=true)
The DRAM shortage explained: AI, rising prices, and what's next (https://www.techradar.com/pro/why-is-ram-so-expensive-right-now-its-more-complicated-than-you-think)
Nonsense: Palantir CEO buys monastery in Old Snowmass for $120 million (https://www.denverpost.com/2025/12/17/palantir-alex-karp-snowmass-monastery/amp/) H-E-B gives free groceries to all customers after registers glitch today in Burleson, Texas. (https://www.reddit.com/r/interestingasfuck/s/ZEcblg7atP)
Conferences: cfgmgmtcamp 2026 (https://cfgmgmtcamp.org/ghent2026/), February 2nd to 4th, Ghent, BE. Coté speaking - anyone interested in being a SDI guest? DevOpsDayLA at SCALE23x (https://www.socallinuxexpo.org/scale/23x), March 6th, Pasadena, CA. Use code: DEVOP for 50% off. Devnexus 2026 (https://devnexus.com), March 4th to 6th, Atlanta, GA. Coté has a discount code, but he's not sure if he can give it out. He's asking! Send him a DM in the meantime. KubeCon EU, March 23rd to 26th, 2026 - Coté will be there on a media pass. Whole bunch of VMUGs, mostly in the US. The CFPs are open (https://app.sessionboard.com/submit/vmug-call-for-content-2026/ae1c7013-8b85-427c-9c21-7d35f8701bbe?utm_campaign=5766542-VMUG%20Voice&utm_medium=email&_hsenc=p2ANqtz-_YREN7dr6p3KSQPYkFSN5K85A-pIVYZ03ZhKZOV0O3t3h0XHdDHethhx5O8gBFguyT5mZ3n3q-ZnPKvjllFXYfWV3thg&_hsmi=393690000&utm_content=393685389&utm_source=hs_email), go speak at them! Coté speaking in Amsterdam. Amsterdam (March 17-19, 2026), Minneapolis (April 7-9, 2026), Toronto (May 12-14, 2026), Dallas (June 9-11, 2026), Orlando (October 20-22, 2026)
SDT News & Community: Join our Slack community (https://softwaredefinedtalk.slack.com/join/shared_invite/zt-1hn55iv5d-UTfN7mVX1D9D5ExRt3ZJYQ#/shared-invite/email) Email the show: questions@softwaredefinedtalk.com (mailto:questions@softwaredefinedtalk.com) Free stickers: Email your address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) Follow us on social media: Twitter (https://twitter.com/softwaredeftalk), Threads (https://www.threads.net/@softwaredefinedtalk), Mastodon (https://hachyderm.io/@softwaredefinedtalk), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com) Watch us on: Twitch (https://www.twitch.tv/sdtpodcast), YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured), Instagram (https://www.instagram.com/softwaredefinedtalk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk) Book offer: Use code SDT for $20 off "Digital WTF" by Coté (https://leanpub.com/digitalwtf/c/sdt) Sponsor the show (https://www.softwaredefinedtalk.com/ads): ads@softwaredefinedtalk.com (mailto:ads@softwaredefinedtalk.com)
Recommendations: Brandon: Why Data Doesn't Always Win, with a Philosopher of Art (https://podcasts.apple.com/us/podcast/the-points-you-shouldnt-score-a-new-years-resolution/id1685093486?i=1000743950053) (Apple Podcasts) / (https://www.youtube.com/watch?v=7AdbePyGS2M&list=RD7AdbePyGS2M&start_radio=1) (YouTube) Coté: "Databases in 2025: A Year in Review." (https://www.cs.cmu.edu/~pavlo/blog/2026/01/2025-databases-retrospective.html)
Photo Credits: Header (https://unsplash.com/photos/red-and-black-love-neon-light-signage-igJrA98cf4A)

AMA Part 1: Is Claude Code AGI? Are we in a bubble? Plus Live Player Analysis

Play Episode Listen Later Jan 9, 2026 114:39


In this AMA episode, Nathan gives an update on his son Ernie's cancer treatment and how frontier AI models are helping him navigate complex medical decisions. PSA for AI builders: Interested in alignment, governance, or AI safety? Learn more about the MATS Summer 2026 Fellowship and submit your name to be notified when applications open: https://matsprogram.org/s26-tcr. He reflects on whether Claude Opus 4.5 and Claude Code amount to AGI-level coding, sharing stories of hospital vibe coding apps for his family. You'll hear his framework for getting real value from Gemini 3, Claude, and GPT 5.2 Pro, plus his take on AI bubbles, Chinese models, chip controls, and who the true live players are in today's AI race. Sponsors: MongoDB: Tired of database limitations and architectures that break when you scale? MongoDB is the database built for developers, by developers—ACID compliant, enterprise-ready, and fluent in AI—so you can start building faster at https://mongodb.com/build Framer: Framer is an enterprise-grade website builder that lets business teams design, launch, and optimize their .com with AI-powered wireframing, real-time collaboration, and built-in analytics. Start building for free and get 30% off a Framer Pro annual plan at https://framer.com/cognitive Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. 
Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai CHAPTERS: (00:00) AMA intro format (00:22) Ernie health update (09:24) Claude 4.5 question (Part 1) (09:29) Sponsors: MongoDB | Framer (11:21) Claude 4.5 question (Part 2) (13:21) Using AI for cancer (19:48) AI value and skill (22:10) Holiday coding projects (Part 1) (22:15) Sponsor: Tasklet (23:27) Holiday coding projects (Part 2) (28:01) Claude code workflow (32:03) Is Claude 4.5 AGI (36:04) AI bubble or not (41:22) VC froth examples (46:09) Chinese models comparison (55:29) H200 exports to China (01:03:40) Google DeepMind strengths (01:11:55) OpenAI strategy outlook (01:22:51) Anthropic culture and strategy (01:36:17) XAI promise and risks (01:48:20) Meta and Microsoft (01:52:29) Part two preview (01:53:39) Outro PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://www.cognitiverevolution.ai Twitter (Podcast): https://x.com/cogrev_podcast Twitter (Nathan): https://x.com/labenz LinkedIn: https://linkedin.com/in/nathanlabenz/ Youtube: https://youtube.com/@CognitiveRevolutionPodcast Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431 Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk

Mind Matters
Artificial Intelligence: Navigating the Hype, Limitations, and Ethics

Mind Matters

Play Episode Listen Later Jan 8, 2026 0:01


On this episode of Mind Matters News, our conversation continues with Dr. Donald Wunsch on his experiences with AI and his recent article in the IEEE Computational Intelligence Magazine about artificial general intelligence. He first reminds us why Artificial General Intelligence is a problematic term and why AGI is very unlikely to arrive by 2030, as some people estimate.

The ChatGPT Report
166 - Are white collared jobs cooked? What happened over break? AI news and more

The ChatGPT Report

Play Episode Listen Later Jan 8, 2026 14:00


In this episode, we separate the AI hype from the reality of the 2025 job market and look at why the "AGI" promises of tech founders haven't yet materialized. From "AI washing" in corporate layoffs to critical privacy alerts for Gmail users, here is what you need to know:
The "AI Washing" Trend: Ryan explores why companies are using AI as an excuse for layoffs, arguing that replacing human customer service and coders with AI is often a move for headlines rather than actual efficiency.
The Innovation Plateau: We discuss whether AI development has hit a wall; while early progress was lightning-fast, current updates feel like minor adjustments rather than revolutionary leaps.
Coding vs. Vibe Coding: While tools like Claude Code are making development easier for non-coders, the CEO of Cursor warns that "vibe coding" can lead to shaky foundations and crumbling infrastructure without human oversight.
Privacy Red Alert: A crucial breakdown of why Google has automatically opted Gmail users into AI training and the specific steps you must take to opt out and protect your private data.
AI Failures & Market Shifts: From a lawyer losing his career over fake AI citations to ChatGPT's recent 22% traffic drop following the Gemini 3 launch, we look at the growing skepticism surrounding LLM reliability.
Claude coding - https://x.com/emollick/status/2008253907701821650 (@emollick)

We Study Billionaires - The Investor’s Podcast Network
TECH012: Monthly Tech Roundup – Data Centers in Space, AI5 Chip, Tesla vs. Waymo w/ Seb Bunney (Tech Podcast)

We Study Billionaires - The Investor’s Podcast Network

Play Episode Listen Later Jan 7, 2026 70:30


Preston and Seb unpack AI's implications for safety, governance, and economics. They debate AGI risks, corporate centralization, Bitcoin's regulatory role, and Elon Musk's ventures in space and autonomous tech. IN THIS EPISODE YOU'LL LEARN:
00:00:00 - Intro
00:04:37 – Why AI safety and autonomy are increasingly at odds
00:11:30 – How AGI could reshape governance and policy-making
00:07:40 – Preston's skepticism about AI self-preservation claims
00:15:18 – The unintended consequences of AI regulation
00:22:15 – How Bitcoin could hold corporations accountable
00:20:10 – The dangers of centralizing economic power via AI
00:34:45 – Why generalist thinking matters in a post-pandemic world
00:37:20 – The role of curiosity and deep reading in future-proofing
00:41:59 – How SpaceX is redefining launch economics with reusable rockets
00:57:41 – The hidden potential of Tesla's AI chips and compute power
Disclaimer: Slight discrepancies in the timestamps may occur due to podcast platform differences. BOOKS AND RESOURCES Clip 1: AI Expert: We Have 2 Years Before Everything Changes! We Need To Start Protesting! with Tristan Harris. Clip 2: Marc Andreessen explains the future belongs to generalists in the AI era. Clip 3: Elon Musk on the Future of SpaceX & Mars. Official Website: Seb Bunney. Seb's book: The Hidden Cost of Money. Related books mentioned in the podcast. Ad-free episodes on our Premium Feed. NEW TO THE SHOW? Join the exclusive TIP Mastermind Community to engage in meaningful stock investing discussions with Stig, Clay, Kyle, and the other community members. Follow our official social media accounts: X (Twitter) | LinkedIn | Instagram | Facebook | TikTok. 
Check out our Bitcoin Fundamentals Starter Packs. Browse through all our episodes (complete with transcripts) here. Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool. Enjoy exclusive perks from our favorite Apps and Services. Get smarter about valuing businesses in just a few minutes each week through our newsletter, The Intrinsic Value Newsletter. Learn how to better start, manage, and grow your business with the best business podcasts. SPONSORS Support our free podcast by supporting our sponsors: HardBlock Human Rights Foundation Masterworks Linkedin Talent Solutions Simple Mining Plus500 Netsuite Fundrise References to any third-party products, services, or advertisers do not constitute endorsements, and The Investor's Podcast Network is not responsible for any claims made by them. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://theinvestorspodcastnetwork.supportingcast.fm

The Gist
Andy Mills: "Acceleration Is Salvation" — and Why AI Might Be the Last Invention.

The Gist

Play Episode Listen Later Jan 6, 2026 44:18


Andy Mills, creator of The Last Invention podcast, explores I.J. Good's 1965 concept of an "intelligence explosion"—and explains why "AGI" is a deceptively harmless term for a world-changing event. The central problem? Modern AI acts like a black box, often producing results that shock even its designers with no clear explanation of how they got there. Plus: A rebuttal to "spheres of influence" thinking, and why carving up the world is a bad strategy. Produced by Corey Wara | Coordinated by Lya Yanne | Video and Social Media by Geoff Craig Do you have questions or comments, or just want to say hello? Email us at thegist@mikepesca.com For full Pesca content and updates, check out our website at https://www.mikepesca.com/ For ad-free content or to become a Pesca Plus subscriber, check out https://subscribe.mikepesca.com/ For Mike's daily takes on Substack, subscribe to The Gist List https://mikepesca.substack.com/ Follow us on Social Media: YouTube https://www.youtube.com/channel/UC4_bh0wHgk2YfpKf4rg40_g Instagram https://www.instagram.com/pescagist/ X https://x.com/pescami TikTok https://www.tiktok.com/@pescagist To advertise on the show, contact ad-sales@libsyn.com or visit https://advertising.libsyn.com/TheGist

The Neuron: AI Explained
Inside Pathway's Brain-Like AI: Zuzanna Stamirowska on Continual Learning, Memory & Real-Time Reasoning

The Neuron: AI Explained

Play Episode Listen Later Jan 6, 2026 48:50


Imagine an AI that doesn't just output answers — it remembers, adapts, and reasons over time like a living system. In this episode of The Neuron, Corey Noles and Grant Harvey sit down with Zuzanna Stamirowska, CEO & Cofounder of Pathway, to break down the world's first post-Transformer frontier model: BDH — the Dragon Hatchling architecture. Zuzanna explains why current language models are stuck in a "Groundhog Day" loop — waking up with no memory — and how Pathway's architecture introduces true temporal reasoning and continual learning. We explore:
• Why Transformers lack real memory and time awareness
• How BDH uses brain-like neurons, synapses, and emergent structure
• How models can "get bored," adapt, and strengthen connections
• Why Pathway sees reasoning — not language — as the core of intelligence
• How BDH enables infinite context, live learning, and interpretability
• Why gluing two trained models together actually works in BDH
• The path to AGI through generalization, not scaling
• Real-world early adopters (Formula 1, NATO, French Postal Service)
• Safety, reversibility, checkpointing, and building predictable behavior
• Why this architecture could power the next era of scientific innovation
From brain-inspired message passing to emergent neural structures that literally appear during training, this is one of the most ambitious rethinks of AI architecture since Transformers themselves. If you want a window into what comes after LLMs, this interview is essential. Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai

The Cloud Pod
336: We Were Right (Mostly), 2026: The New Prophecies

The Cloud Pod

Play Episode Listen Later Jan 6, 2026 68:15


Welcome to episode 335 of The Cloud Pod, where the forecast is always cloudy! Welcome to the first show of 2026, and it's a full house, too! Justin, Jonathan, Ryan, and Matt are all here to reflect on 2025, plus bring you their predictions for 2026. Let's get started!
Titles we almost went with this week:
SQL Me Maybe: AlloyDB Gets Chatty With Your Database
OpenAI SELECT * FROM natural_language WHERE accuracy LIKE '100%'
Anthropic etcd You Were Worried About Database Limits: CloudWatch Has Your Back
CSV You Later: Looker Adds Drag-and-Drop Data Uploads
AWS Spots an Opportunity to Manage Your Container Costs
EKS Network Policies: No More IP Address Whack-a-Mole
AWS Security Hub Splits: It's Not You, It's CSPM
Spot On: ECS Finally Manages Your Cheapest Compute
TOON Squad: DigitalOcean's New Format Makes JSON Look Bloated
The Price is Wrong: AWS Breaks Two Decades of Downward Pricing Tradition
Show Your Work: Why AI-Generated Code Without Tests is Just Expensive Spam
No More Agent Orange: Google Simplifies VM Extension Deployment
AWS Discovers Prices Can Go Both Ways, Raises GPU Costs 15 Percent
Sovereignty Washing: When Your European Cloud Still Answers to Uncle Sam
Agent Builder Gets a Memory Upgrade: Google's AI Finally Remembers Where It Put Its Keys
Ctrl+F for the Future: A Year-End Scorecard & Next-Gen Bets
AI Agents, GPU Prices, and the Best of The Cloud Pod 2025
Beyond the Hype: The Cloud Pod's Definitive 2025 Year in Review
Apocalypse Now… What? Our 2026 Forecast
Follow Up
01:27 RYAN'S PREDICTIONS (Prediction – Status – Notes)
Quick LLM models for individuals – ACCURATE – Meta-Llama-3.1-8B-Instruct, GLM-4-9B-0414, and Qwen2.5-VL-7B-Instruct—each chosen for an outstanding balance of performance and computational efficiency, making them ideal for edge AI deployment. A new AI inference application called Inferencer allows even modest Apple Mac computers to run the largest open-source LLMs. 
AI at the edge natively (Lambda-esque) – ACCURATE – Akamai launched a new Inference Cloud product for edge AI using Nvidia's Blackwell 6000 GPUs in 17 cities. AWS IoT Greengrass with Lambda functions for edge logic. "Edge AI allows for instant decision-making where it matters most—close to the data source."
Cloud native security mesh multi-cloud – UNCLEAR – Service mesh technologies continue to evolve (Istio, Linkerd), but I didn't find a breakthrough "app-to-app at the edge" security mesh product announcement in 2025. This one needs more specific evidence.
Ryan Score: 2/3
02:25 MATTHEW'S PREDICTIONS (Prediction – Status – Notes)
FOCUS adopted by Snowflake or Databricks – ACCURATE – FOCUS version 1.2 was ratified on May 29, 2025. Three new providers announced support: Alibaba Cloud, Databricks, and Grafana. Databricks officially adopted FOCUS!
AI security/ethical standard (SOC or ISO) – ACCURATE – ISO 42001 is the first international standard outlining requirements for AI governance. Major companies achieving certification in 2025: Automation Anywhere is among the first 100 companies worldwide to earn ISO/IEC 42001:2023 certification. Anthropic also achieved ISO 42001 certification.
Amazon deprecates 5+ services (WorkMail bonus) – ACCURATE (no bonus) – 19 services are mothballed, four are being sunset, and one is at the end of its supported life. Deprecated services include CodeCommit, Cloud9, S3 Select, CloudSearch, SimpleDB, Forecast, Data Pipeline, QLDB, Snowball Edge, and more. WorkMail NOT deprecated – WorkDocs was (April 2025), but WorkMail remains active.
Matthew Score: 3/3
03:22 JONATHAN'S PREDICTIONS (Prediction – Status – Notes)
Company claims AGI achieved ACC

Retire With Style
Episode 210: Navigating Tax Changes for 2026

Retire With Style

Play Episode Listen Later Jan 6, 2026 38:51


In this episode of Retire with Style, Alex and Wade break down the key tax changes coming in 2026, including the return of standard ACA subsidies, the extension of current tax rates, and updates to standard deductions. They cover inflation-related adjustments, itemized deductions, charitable giving rules, estate tax exemptions, and what the alternative minimum tax means for retirees. The conversation also explores the new Trump accounts launching in July 2026 and wraps up with a lighthearted discussion on New Year's resolutions. Listen now to learn more!
Takeaways:
The ACA subsidies are reverting to pre-2021 levels in 2026.
Tax rates remain unchanged from previous years, providing stability.
Standard deductions continue to be higher, affecting itemization rates.
Inflation adjustments will impact the thresholds for tax brackets.
Charitable contributions now have a deduction floor based on AGI.
Estate tax exemptions are increasing significantly for 2026.
The alternative minimum tax may affect more high-income earners.
529 plans have expanded eligible expenses for education.
Trump accounts will allow contributions for minors starting in July 2026.
New Year's resolutions can be a time for reflection and planning.
Chapters:
00:00 Welcome to 2026: New Beginnings
02:13 Affordable Care Act Changes
07:25 Tax Rates and Standard Deductions
09:16 Inflation Adjustments and Tax Planning
11:34 Itemized Deductions and Charitable Contributions
20:32 Estate Tax Exemptions and Business Income
24:44 Alternative Minimum Tax and 529 Plans
28:19 Trump Accounts and Future Planning
29:42 New Year Reflections and Resolutions
Links: Explore the New RetireWithStyle.com! We've launched a brand-new home for the podcast! Visit RetireWithStyle.com to catch up on all our latest episodes, explore topics by category, and send us your questions or ideas for future episodes. If there's something you've been wondering about retirement, we want to hear it! 
This episode is sponsored by Retirement Researcher https://retirementresearcher.com/. Download their free eBook, 8 Tips to Becoming A Retirement Income Investor at retirementresearcher.com/8tips

Personal Development Mastery
How To Embrace a Midlife Career Change With Joy Even When the Path Ahead Feels Unclear, with Hans Wilhelm | #568

Personal Development Mastery

Play Episode Listen Later Jan 5, 2026 46:20 Transcription Available


Are you navigating a major life transition, especially in midlife, and struggling with uncertainty, identity shifts, or unmet expectations? In this heartfelt and deeply insightful episode, Hans Wilhelm returns to explore the spiritual perspective on transitions, especially during midlife. If you're facing change and feel lost without a clear direction, this conversation offers both comfort and guidance. Whether you're leaving a long-held career or questioning your next move, Hans shares timeless wisdom that helps you reconnect with your inner joy and higher self. Discover how following your inner joy can become your most reliable compass through uncertainty. Learn why unmet expectations create suffering—and how to reframe them to regain emotional freedom. Embrace practical techniques to shift from self-pity to empowerment and align with your soul's purpose. Tune in now to uncover how life transitions can become profound spiritual awakenings when you listen to your inner guidance.
KEY POINTS AND TIMESTAMPS:
00:00 - Introduction, podcast mission and welcoming Hans Wilhelm
02:11 - Setting the theme: midlife transitions and lack of clarity
04:11 - Following inner joy as guidance from the higher self
11:57 - Joy as a beacon and introducing expectations
13:50 - Turning painful expectations into preferences and trusting life's plan
20:29 - Escaping victimhood: silver linings and the three-positives practice
25:41 - Midlife transitions and loosening the grip of old identities
28:16 - Gratitude for past careers and letting go of professional labels
35:06 - Service as the highest form of love and purpose in later life
39:07 - Karma, everyday service, and closing reflections on gratitude
MEMORABLE QUOTE: "Unfulfilled expectations are the basis of all our physical pain, our mental pain, anything that doesn't work the way we expect it to."
VALUABLE RESOURCES: First conversation with Hans Wilhelm: https://personaldevelopmentmasterypodcast.com/520
Coaching with Agi: 
https://personaldevelopmentmasterypodcast.com/mentor
Conversations and insights on career transition, career clarity, midlife career change and career pivots for midlife professionals, including second careers, new ventures, leaving a long-term career with confidence, better decision-making, and creating purposeful, meaningful work.
Support the show
Career transition and career clarity podcast content for midlife professionals in career transition, navigating a midlife career change, career pivot or second career, starting a new venture or leaving a long-term career. Discover practical tools for career clarity, confident decision-making, rebuilding self-belief and confidence, finding purpose and meaning in work, designing a purposeful, fulfilling next chapter, and creating meaningful work that fits who you are now. Episodes explore personal development and mindset for midlife professionals, including how to manage uncertainty and pressure, overcome fear and self-doubt, clarify your direction, plan your next steps, and turn your experience into a new role, business or vocation that feels aligned. To support the show, click here.

CG Garage
Episode 530 - The 2026 Forecast: CG Garage Predicts the Future of Tech and Hollywood

CG Garage

Play Episode Listen Later Jan 5, 2026 81:30


Will 2026 be the year of the ultimate industry reckoning or a digital renaissance? Hosts Chris and Daniel are joined by guests James Blevins and Erick Geisler for a deep dive into the "mild, medium, and spicy" predictions that will define the next year. As the dust settles on early AI experiments, the group moves past the "Will Smith eating spaghetti" era of novelty to discuss the professionalization of tools, the massive consolidation of legacy studios, and the survival of the traditional theatrical experience. The conversation pushes boundaries, exploring everything from the rise of personal AI creative agents to the outlandish possibility of data centers orbiting in space. By examining the potential collapse of current tech giants alongside the emergence of AGI, the panel maps out a world where the lines between science, religion, and storytelling are permanently blurred. This episode isn't just a look at what's coming, it's a high-stakes debate on who will lead the charge in the collision of Hollywood and high-tech.
Netflix's Acquisition of Warner Bros. Discovery >
Flawless AI: DeepEditor & Ethical Reshoots >
Starcloud: The First NVIDIA-Powered Space Data Center >
NantWorks: Converging Biotech and AI >
This episode is sponsored by: Center Grid Virtual Studio Kitbash 3D (Use promo code "cggarage" for 10% off)

Building & Scaling the AI Safety Research Community, with Ryan Kidd of MATS

Play Episode Listen Later Jan 4, 2026 114:14


Ryan Kidd, Co-Executive Director of MATS, shares an inside view of the AI safety field and the world's largest AI safety research talent pipeline. PSA for AI builders: Interested in alignment, governance, or AI safety? Learn more about the MATS Summer 2026 Fellowship and submit your name to be notified when applications open: https://matsprogram.org/s26-tcr. He discusses AGI timelines, the blurred line between safety and capabilities work, and why expert disagreement remains so high. In the second half, Ryan breaks down MATS' research archetypes, what top AI safety organizations are looking for, and how applicants can stand out with the right projects, skills, and career strategy. Sponsors: Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai Agents of Scale: Agents of Scale is a podcast from Zapier CEO Wade Foster, featuring conversations with C-suite leaders who are leading AI transformation. Subscribe to the show wherever you get your podcasts Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. 
Start your $1/month trial today at https://shopify.com/cognitive CHAPTERS: (00:00) About the Episode (03:50) MATS mission, AGI timelines (13:43) Evaluating current AI safety (Part 1) (13:48) Sponsor: Tasklet (14:59) Evaluating current AI safety (Part 2) (Part 1) (28:11) Sponsors: Agents of Scale | Shopify (30:58) Evaluating current AI safety (Part 2) (Part 2) (30:59) Safety research versus capabilities (40:01) Frontier labs, deployment, governance (51:51) MATS tracks and governance (01:04:11) Research archetypes and tooling (01:12:25) Labor market and careers (01:20:09) Applicant selection and preparation (01:29:33) Admissions, salaries, and compute (01:40:34) Future programs and paradigms (01:54:11) Outro PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://www.cognitiverevolution.ai Twitter (Podcast): https://x.com/cogrev_podcast Twitter (Nathan): https://x.com/labenz LinkedIn: https://linkedin.com/in/nathanlabenz/ Youtube: https://youtube.com/@CognitiveRevolutionPodcast Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431 Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk

The John Batchelor Show
S8 Ep270: FOUNDING OPENAI Colleague Keach Hagey, The Optimist. In 2016, Sam Altman, Greg Brockman, and Ilya Sutskever founded OpenAI as a nonprofit research lab to develop safe artificial general intelligence (AGI). Backed by investors like Elon Musk and

The John Batchelor Show

Play Episode Listen Later Jan 3, 2026 10:30


FOUNDING OPENAI Colleague Keach Hagey, The Optimist. In 2016, Sam Altman, Greg Brockman, and Ilya Sutskever founded OpenAI as a nonprofit research lab to develop safe artificial general intelligence (AGI). Backed by investors like Elon Musk and Peter Thiel, the organization aimed to be a counterweight to Google's DeepMind, which was driven by profit. The team relied on massive computing power provided by GPUs—originally designed for video games—to train neural networks, recruiting top talent like Sutskever to lead their scientific efforts. NUMBER 13 1955

The John Batchelor Show
S8 Ep270: THE BLIP AND THE FUTURE Colleague Keach Hagey, The Optimist. The viral success of ChatGPT shifted OpenAI's focus from safety to commercialization, despite early internal warnings about the existential risks of AGI. Tensions over safety and Altm

The John Batchelor Show

Play Episode Listen Later Jan 3, 2026 5:09


THE BLIP AND THE FUTURE Colleague Keach Hagey, The Optimist. The viral success of ChatGPT shifted OpenAI's focus from safety to commercialization, despite early internal warnings about the existential risks of AGI. Tensions over safety and Altman's management style led to a "blip" where the nonprofit board fired him, only for him to be quickly reinstated due to employee loyalty. Elon Musk, having lost a power struggle for control of the organization, severed ties, leaving Altman to lead the race toward AGI. NUMBER 16 FEBRUARY 1955

The John Batchelor Show
S8 Ep271: SHOW 12-2-2026 THE SHOW BEGINS WITH DOUBTS ABOUT AI -- a useful invention that can match the excitement of the first decades of Photography. November 1955 NADAR'S BALLOON AND THE BIRTH OF PHOTOGRAPHY Colleague Anika Burgess, Flashes of Brilli

The John Batchelor Show

Play Episode Listen Later Jan 3, 2026 6:22


SHOW 12-2-2026 THE SHOW BEGINS WITH DOUBTS ABOUT AI -- a useful invention that can match the excitement of the first decades of Photography. November 1955 NADAR'S BALLOON AND THE BIRTH OF PHOTOGRAPHY Colleague Anika Burgess, Flashes of Brilliance. In 1863, the photographer Nadar undertook a perilous ascent in a giant balloon to fund experiments for heavier-than-air flight, illustrating the adventurous spirit required of early photographers. This era began with Daguerre's 1839 introduction of the daguerreotype, a process involving highly dangerous chemicals like mercury and iodine to create unique, mirror-like images on copper plates. Pioneers risked their lives using explosive materials to capture reality with unprecedented clarity and permanence. NUMBER 1 PHOTOGRAPHING THE MOON AND SEA Colleague Anika Burgess, Flashes of Brilliance. Early photography expanded scientific understanding, allowing humanity to visualize the inaccessible. James Nasmyth produced realistic images of the moon by photographing plaster models based on telescope observations, aiming to prove its volcanic nature. Simultaneously, Louis Boutan spent a decade perfecting underwater photography, capturing divers in hard-hat helmets. These efforts demonstrated that photography could be a tool for scientific analysis and discovery, revealing details of the natural world previously hidden from the human eye. NUMBER 2 SOCIAL JUSTICE AND NATURE CONSERVATION Colleague Anika Burgess, Flashes of Brilliance. Photography became a powerful agent for social and environmental change. Jacob Riis utilized dangerous flash powder to document the squalid conditions of Manhattan tenements, exposing poverty to the public in How the Other Half Lives. While his methods raised consent issues, they illuminated grim realities. 
Conversely, Carleton Watkins hauled massive equipment into the wilderness to photograph Yosemite; his majestic images influenced legislation signed by Lincoln to protect the land, proving photography's political impact.

NUMBER 3 X-RAYS, SURVEILLANCE, AND MOTION Colleague Anika Burgess, Flashes of Brilliance. The discovery of X-rays in 1895 sparked a "new photography" craze, though the radiation caused severe injuries to early practitioners and subjects. Photography also entered the realm of surveillance; British authorities used hidden cameras to photograph suffragettes, while doctors documented asylum patients without consent. Finally, Eadweard Muybridge's experiments captured horses in motion, settling debates about locomotion and laying the technical groundwork for the future development of motion pictures.

NUMBER 4 THE AWAKENING OF CHINA'S ECONOMY Colleague Anne Stevenson-Yang, Wild Ride. Returning to China in 1994, the author witnessed a transformation from the destitute, Maoist uniformity of 1985 to a budding export economy. In the earlier era, workers slept on desks and lacked basic goods, but Deng Xiaoping's realization that the state needed hard currency prompted reforms. Deng established Special Economic Zones like Shenzhen to generate foreign capital while attempting to isolate the population from foreign influence, marking the start of China's export boom.

NUMBER 5 RED CAPITALISTS AND SMUGGLERS Colleague Anne Stevenson-Yang, Wild Ride. Following the 1989 Tiananmen crackdown, China reopened to investment in 1992, giving rise to "red capitalists"—often the children of party officials who traded political access for equity. As the central government lost control over local corruption and smuggling rings, it launched "Golden Projects" to digitize and centralize authority over customs and taxes. To avert a banking collapse in 1998, the state created asset management companies to absorb bad loans, effectively rolling over massive debt.

NUMBER 6 GHOST CITIES AND THE STIMULUS TRAP Colleague Anne Stevenson-Yang, Wild Ride. China's growth model shifted toward massive infrastructure spending, resulting in "ghost cities" and replica Western towns built to inflate GDP rather than house people. This "Potemkin culture" peaked during the 2008 Olympics, where facades were painted to impress foreigners. To counter the global financial crisis, Beijing flooded the economy with loans, fueling a real estate bubble that consumed more cement in three years than the US did in a century, creating unsustainable debt.

NUMBER 7 STAGNATION UNDER SURVEILLANCE Colleague Anne Stevenson-Yang, Wild Ride. The severe lockdowns of the COVID-19 pandemic shattered consumer confidence, leaving citizens insecure and unwilling to spend, which stalled economic recovery. Local governments, cut off from credit and burdened by debt, struggle to provide basic services. Faced with economic stagnation, Xi Jinping has rejected market liberalization in favor of increased surveillance and control, prioritizing regime security over resolving the structural debt crisis or restoring the dynamism of previous decades.

NUMBER 8 FAMINE AND FLIGHT TO FREEDOM Colleague Mark Clifford, The Troublemaker. Jimmy Lai was born into a wealthy family that lost everything to the Communist revolution, forcing his father to flee to Hong Kong while his mother endured labor camps. Left behind, Lai survived as a child laborer during a devastating famine where he was perpetually hungry. A chance encounter with a traveler who gave him a chocolate bar inspired him to escape to Hong Kong, the "land of chocolate," stowing away on a boat at age twelve.

NUMBER 9 THE FACTORY GUY Colleague Mark Clifford, The Troublemaker. By 1975, Jimmy Lai had risen from a child laborer to a factory owner, purchasing a bankrupt garment facility using stock market profits. Despite being a primary school dropout who learned English from a dictionary, Lai succeeded through relentless work and charm. He capitalized on the boom in American retail sourcing, winning orders from Kmart by producing samples overnight and eventually building Comitex into a leading sweater manufacturer, embodying the Hong Kong dream.

NUMBER 10 CONSCIENCE AND CONVERSION Colleague Mark Clifford, The Troublemaker. The 1989 Tiananmen Square massacre radicalized Lai, who transitioned from textiles to media, founding Next magazine and Apple Daily to champion democracy. Realizing the brutality of the Chinese Communist Party, he used his wealth to support the student movement and expose regime corruption. As the 1997 handover approached, Lai converted to Catholicism, influenced by his wife and pro-democracy peers, seeking spiritual protection and a moral anchor against the coming political storm.

NUMBER 11 PRISON AND LAWFARE Colleague Mark Clifford, The Troublemaker. Following the 2020 National Security Law, authorities raided Apple Daily, froze its assets, and arrested Lai, forcing the newspaper to close. Despite having the means to flee, Lai chose to stay and face imprisonment as a testament to his principles. Now held in solitary confinement, he is subjected to "lawfare"—sham legal proceedings designed to silence him—while he spends his time sketching religious images, remaining a symbol of resistance against Beijing's tyranny.

NUMBER 12 FOUNDING OPENAI Colleague Keach Hagey, The Optimist. In 2016, Sam Altman, Greg Brockman, and Ilya Sutskever founded OpenAI as a nonprofit research lab to develop safe artificial general intelligence (AGI). Backed by investors like Elon Musk and Peter Thiel, the organization aimed to be a counterweight to Google's DeepMind, which was driven by profit. The team relied on massive computing power provided by GPUs—originally designed for video games—to train neural networks, recruiting top talent like Sutskever to lead their scientific efforts.

NUMBER 13 THE ROOTS OF AMBITION Colleague Keach Hagey, The Optimist. Sam Altman grew up in St. Louis, the son of an idealistic developer and a driven dermatologist mother who instilled ambition and resilience in her children. Altman attended the progressive John Burroughs School, where his intellect and charisma flourished, allowing him to connect with people on any topic. Though he was a tech enthusiast, his ability to charm others defined him early on, foreshadowing his future as a master persuader in Silicon Valley.

NUMBER 14 SILICON VALLEY KINGMAKER Colleague Keach Hagey, The Optimist. At Stanford, Altman co-founded Loopt, a location-sharing app that won him a meeting with Steve Jobs and a spot in the App Store launch. While Loopt was not a commercial success, the experience taught Altman that his true talent lay in investing and spotting future trends rather than coding. He eventually succeeded Paul Graham as president of Y Combinator, becoming a powerful figure in Silicon Valley who could convince skeptics like Peter Thiel to back his visions.

NUMBER 15 THE BLIP AND THE FUTURE Colleague Keach Hagey, The Optimist. The viral success of ChatGPT shifted OpenAI's focus from safety to commercialization, despite early internal warnings about the existential risks of AGI. Tensions over safety and Altman's management style led to a "blip" where the nonprofit board fired him, only for him to be quickly reinstated due to employee loyalty. Elon Musk, having lost a power struggle for control of the organization, severed ties, leaving Altman to lead the race toward AGI.

NUMBER 16

Crazy Wisdom
Episode #519: Inside the Stack: What Really Makes Robots “Intelligent”

Crazy Wisdom

Play Episode Listen Later Jan 2, 2026 62:24


In this episode of the Crazy Wisdom podcast, host Stewart Alsop interviews Marcin Dymczyk, CPO and co-founder of SevenSense Robotics, exploring the fascinating world of advanced robotics and AI. Their conversation covers the evolution from traditional "standard" robotics with predetermined pathways to advanced robotics that incorporates perception, reasoning, and adaptability - essentially the AGI of physical robotics. Dymczyk explains how his company builds "the eyes and brains of mobile robots" using camera-based autonomy algorithms, drawing parallels between robot sensing systems and human vision, inner ear balance, and proprioception. The discussion ranges from the technical challenges of sensor fusion and world models to broader topics including robotics regulation across different countries, the role of federalism in innovation, and how recent geopolitical changes are driving localized high-tech development, particularly in defense applications. They also touch on the democratization of robotics for small businesses and the philosophical implications of increasingly sophisticated AI systems operating in physical environments. 
To learn more about SevenSense, visit www.sevensense.ai. Check out this GPT we trained on the conversation.

Timestamps
00:00 Introduction to Robotics and Personal Journey
05:27 The Evolution of Robotics: From Standard to Advanced
09:56 The Future of Robotics: AI and Automation
12:09 The Role of Edge Computing in Robotics
17:40 FPGA and AI: The Future of Robotics Processing
21:54 Sensing the World: How Robots Perceive Their Environment
29:01 Learning from the Physical World: Insights from Robotics
33:21 The Intersection of Robotics and Manufacturing
35:01 Journey into Robotics: Education and Passion
36:41 Practical Robotics Projects for Beginners
39:06 Understanding Particle Filters in Robotics
40:37 World Models: The Future of AI and Robotics
41:51 The Black Box Dilemma in AI and Robotics
44:27 Safety and Interpretability in Autonomous Systems
49:16 Regulatory Challenges in Robotics and AI
51:19 Global Perspectives on Robotics Regulation
54:43 The Future of Robotics in Emerging Markets
57:38 The Role of Engineers in Modern Warfare

Key Insights
1. Advanced robotics transcends traditional programming through perception and intelligence. Dymczyk distinguishes between standard robotics that follows rigid, predefined pathways and advanced robotics that incorporates perception and reasoning. This evolution enables robots to make autonomous decisions about navigation and task execution, similar to how humans adapt to unexpected situations rather than following predetermined scripts.
2. Camera-based sensing systems mirror human biological navigation. SevenSense Robotics builds "eyes and brains" for mobile robots using multiple cameras (up to eight), IMUs (accelerometers/gyroscopes), and wheel encoders that parallel human vision, inner ear balance, and proprioception. This redundant sensing approach allows robots to navigate even when one system fails, such as operating in dark environments where visual sensors are compromised.
3. Edge computing dominates industrial robotics due to connectivity and security constraints. Many industrial applications operate in environments with poor connectivity (like underground grocery stores) or require on-premise solutions for confidentiality. This necessitates powerful local processing capabilities rather than cloud-dependent AI, particularly in automotive factories where data security about new models is paramount.
4. Safety regulations create mandatory "kill switches" that bypass AI decision-making. European and US regulatory bodies require deterministic safety systems that can instantly stop robots regardless of AI reasoning. These systems operate like human reflexes, providing immediate responses to obstacles while the main AI brain handles complex navigation and planning tasks.
5. Modern robotics development benefits from increasingly affordable optical sensors. The democratization of 3D cameras, laser range finders, and miniature range measurement chips (costing just a few dollars from distributors like DigiKey) enables rapid prototyping and innovation that was previously limited to well-funded research institutions.
6. Geopolitical shifts are driving localized high-tech development, particularly in defense applications. The changing role of US global leadership and lessons from Ukraine's drone warfare are motivating countries like Poland to develop indigenous robotics capabilities. Small engineering teams can now create battlefield-effective technology using consumer drones equipped with advanced sensors.
7. The future of robotics lies in natural language programming for non-experts. Dymczyk envisions a transformation where small business owners can instruct robots using conversational language rather than complex programming, similar to how AI coding assistants now enable non-programmers to build applications through natural language prompts.
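The timestamps above mention particle filters, the classic technique for fusing noisy motion and sensor data into a position estimate. The sketch below is a minimal, purely illustrative 1-D particle filter — not SevenSense's actual algorithm — in which a robot's wheel encoder supplies a noisy motion estimate and a range sensor measures the distance to a wall; all noise values and the corridor setup are invented for illustration.

```python
import math
import random

WALL = 10.0          # wall position at the end of a 10 m corridor (illustrative)
MOTION_NOISE = 0.2   # std dev of wheel-encoder error per step
SENSE_NOISE = 0.5    # std dev of range-sensor error
N = 1000             # number of particles

def gaussian(mu, sigma, x):
    """Likelihood of observing x under a Gaussian centered at mu."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def step(particles, encoder_delta, range_reading):
    # 1. Predict: move every particle by the encoder estimate plus motion noise.
    moved = [p + encoder_delta + random.gauss(0, MOTION_NOISE) for p in particles]
    # 2. Weight: particles whose predicted wall distance matches the reading score higher.
    weights = [gaussian(WALL - p, SENSE_NOISE, range_reading) for p in moved]
    # 3. Resample: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [random.uniform(0.0, WALL) for _ in range(N)]  # start position unknown
true_pos = 2.0
for _ in range(5):
    true_pos += 1.0                                        # robot actually moves 1 m
    reading = WALL - true_pos + random.gauss(0, SENSE_NOISE)
    particles = step(particles, 1.0, reading)

estimate = sum(particles) / len(particles)
print(f"true position: {true_pos:.2f} m, estimate: {estimate:.2f} m")
```

Even with both sensors noisy, the particle cloud collapses around the true position after a few updates — the same predict/weight/resample loop scales to the multi-camera, IMU, and wheel-encoder fusion described in the episode, just with a higher-dimensional state.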

Unchained
Bits + Bips: 2026 Crypto Predictions: BTC & ETH Hit Record Highs, Stablecoins Go Big

Unchained

Play Episode Listen Later Dec 31, 2025 69:44


Thank you to our sponsor, Mantle! Mantle is launching the Global Hackathon 2025 to accelerate the future of Real-World Assets. With a $150k prize pool, backing from a $4B treasury, and direct access to Bybit's 7M+ users, this is the ultimate ecosystem for builders. Sign up here!

In this year-end Bits + Bips roundtable, hosts Austin Campbell and Chris Perkins are joined by John D'Agostino, Head of Strategy at Coinbase Institutional, for a wide-ranging and often contentious look at what 2026 may hold for crypto. They debate whether a major global brand will launch its own stablecoin, whether altcoins are structurally doomed—or secretly set up for a Wall Street–driven resurgence—and whether a major crypto hack is coming. The conversation also explores how tokens accrue value and whether there will be a new M&A trend that'll reshape the industry as we know it. Plus: don't miss what they have to say about NFTs, financial nihilism, and whether we'll see all-time highs for bitcoin in 2026.

Hosts:
Ram Ahluwalia, CFA, CEO and Founder of Lumida
Austin Campbell, NYU Stern professor and Founder of Zero Knowledge Consulting
Christopher Perkins, Managing Partner and President of CoinFund

Guest: John D'Agostino, Head of Strategy for Coinbase Institutional

Timestamps

Top Traders Unplugged
IL44: From AI Hype to Transformative AGI ft. Aubrie Pagano

Top Traders Unplugged

Play Episode Listen Later Dec 31, 2025 62:41 Transcription Available


In this Ideas Lab episode, Kevin Coldiron speaks with venture capitalist and former founder Aubrie Pagano about what stands between today's AI hype and a truly transformative AGI economy. Rather than treating AI as destiny, Aubrie maps the frictions that hold it back: hard power limits, fragile industrial data, and agents that still cannot coordinate with humans or each other. She explains why we may be a full capital cycle or two away from real AGI and why that delay is precisely where the best opportunities lie. The conversation then widens into the "Aquarius Economy," a possible future in which human agency, not algorithms, becomes the scarcest and most valuable asset.

-----
50 YEARS OF TREND FOLLOWING BOOK AND BEHIND-THE-SCENES VIDEO FOR ACCREDITED INVESTORS - CLICK HERE
-----

Follow Niels on Twitter, LinkedIn, YouTube or via the TTU website.

IT's TRUE! – most CIOs read 50+ books each year – get your FREE copy of the Ultimate Guide to the Best Investment Books ever written here.

And you can get a free copy of my latest book "Ten Reasons to Add Trend Following to Your Portfolio" here.

Learn more about the Trend Barometer here.

Send your questions to info@toptradersunplugged.com

And please share this episode with a like-minded friend and leave an honest Rating & Review on iTunes or Spotify so more people can discover the podcast.

Follow Kevin on SubStack & read his Book.

Follow Aubrie on LinkedIn.

Episode TimeStamps:
00:00 - Why agent coordination is still clunky and unreliable
00:38 - Intro to Top Traders Unplugged and performance risk framing
01:34 - Kevin sets up the Ideas Lab and today's focus on AGI
02:59 - Aubrie's background as founder, researcher and VC shaping her lens
06:04 - Why she wrote about AGI and the Aquarius Economy
07:21 - AGI as the last cycle built on labor scarcity
10:41 - The blockers framework and why AGI may be a cycle or two away
12:58 - Blocker...

a16z
Why a16z's Martin Casado Believes the AI Boom Still Has Years to Run

a16z

Play Episode Listen Later Dec 30, 2025 82:20


This episode is a special replay from The Generalist Podcast, featuring a conversation with a16z General Partner Martin Casado. Martin has lived through multiple tech waves as a founder, researcher, and investor, and in this discussion he shares how he thinks about the AI boom, why he believes we're still early in the cycle, and how a market-first lens shapes his approach to investing.

They also dig into the mechanics behind the scenes: why AI coding could become a multi-trillion-dollar market, how a16z evolved from a small generalist firm into a specialized organization, the growing role of open-source models, and why Martin believes AGI debates often obscure more meaningful questions about how technology actually creates value.

Resources:
Follow Mario Gabriele
X: https://x.com/mariogabriele
https://www.generalist.com/

Follow Martin Casado:
LinkedIn: https://www.linkedin.com/in/martincasado/
X: https://x.com/martin_casado

The Generalist Substack: https://www.generalist.com/
The Generalist on YouTube: https://www.youtube.com/@TheGeneralistPodcast
Spotify: https://open.spotify.com/show/6mHuHe0Tj6XVxpgaw4WsJV
Apple: https://podcasts.apple.com/us/podcast/the-generalist/id1805868710

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed.
For more details please see a16z.com/disclosures.

Follow our host: https://twitter.com/eriktorenberg

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.