Podcasts about AGI

  • 1,939 podcasts
  • 6,369 episodes
  • 41m average episode duration
  • 4 new episodes daily
  • Latest episode: Mar 12, 2026

POPULARITY (trend chart, 2019–2026)

Best podcasts about AGI


Latest podcast episodes about AGI

Personal Development Mastery
The Infinity Wave ∞ (Most Replayed Personal Development Wisdom Snippets) | #587

Mar 12, 2026 · 8:18 · Transcription available


Snippet of wisdom 98. In this series I select my favourite moments from previous episodes of the podcast. Today's snippet is from my conversation with the spiritual teacher Hope Fitzgerald. She talks about the Infinity Wave, a flowing symbol of water, channeling love and compassion. Press play to learn about it and hear a very powerful story about the Infinity Wave.

VALUABLE RESOURCES:
Listen to the full conversation with Hope Fitzgerald in episode #388: https://personaldevelopmentmasterypodcast.com/388
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor

A personal development podcast for midlife professionals, offering actionable insights and practical tools for personal growth, self-mastery, and purposeful living. Discover strategies for clarity, mindset shifts, growth mindset, self-discipline, emotional intelligence, confidence, and self-improvement. Personal Development Mastery features personal development interviews and solo episodes empowering professionals, entrepreneurs, and seekers to cultivate self-mastery, nurture mental health, and create a meaningful, fulfilling life aligned with who they truly are.

AMERICA OUT LOUD PODCAST NETWORK
The year artificial intelligence changes everything

Mar 10, 2026 · 57:00 · Transcription available


The Tenpenny Files – Artificial intelligence moves from automation to autonomous reasoning, forcing society to confront new legal, economic, and cultural realities. Matthew Hunt explores how AGI, military integration, and rapid workplace automation reshape human decision-making, education, and sovereignty, urging individuals and organizations to understand and prepare for a rapidly accelerating technological future...

Student Loan Planner
Tax Extensions Can Lower Your Student Loan Payments

Mar 10, 2026 · 28:09


Timing your tax filing can mean serious savings on your monthly payments, especially if you're on an income-driven repayment (IDR) plan and aiming for forgiveness. We break down scenarios for when it makes sense to file right away, when to wait, and how married couples or borrowers with irregular income can play their cards for the biggest advantage. If you've ever wondered how your AGI or recertification date could influence your student loan bills, this episode gives you straightforward strategies you can use right now.

Key moments:
(07:48) Why when you file your tax return directly affects your IDR payment amount
(10:59) Filing a tax extension is free, but if you owe taxes, you must pay by April 15
(18:13) When filing early (or on time) makes more sense than filing an extension
(21:39) SAVE borrowers can lock in the ideal recertification date by switching plans between April 15 and October 15

Like the show? There are several ways you can help! Follow on Apple Podcasts, Spotify, or Amazon Music; leave an honest review on Apple Podcasts; subscribe to the newsletter; or join SLP Insiders for student loan loopholes, the SLP app, and the member community. Feeling helpless when it comes to your student loans? Try our free student loan calculator, check out the refinancing bonuses we negotiated, book your custom student loan plan, or get profession-specific financial planning. Do you have a question about student loans? Leave us a voicemail or email us at help@studentloanplanner.com and we might feature it in an upcoming show!
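The AGI-to-payment link the episode describes can be sketched numerically. As an illustration (my numbers, not the show's): many income-driven plans set the payment at a percentage of "discretionary income", commonly defined as AGI minus 150% of the federal poverty guideline. The 10% rate and the $15,060 guideline below are illustrative assumptions; the real rules vary by plan, year, and family size.

```python
def monthly_idr_payment(agi, poverty_guideline, pct=0.10):
    """Simplified income-driven repayment estimate: pct of
    discretionary income (AGI minus 150% of the poverty
    guideline), spread over 12 monthly payments."""
    discretionary = max(0.0, agi - 1.5 * poverty_guideline)
    return pct * discretionary / 12

# The AGI on your most recently filed return is what the
# servicer sees, so filing timing changes the payment directly
# (illustrative figures, single-person household):
print(monthly_idr_payment(80_000, 15_060))  # higher AGI on file
print(monthly_idr_payment(60_000, 15_060))  # lower AGI on file
```

Under these assumptions, certifying with a return showing $20,000 less AGI cuts the payment by roughly $167 per month, which is why the episode's filing-timing and extension strategies can matter so much.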

Slate Star Codex Podcast
What Happened With Bio Anchors?

Mar 10, 2026 · 24:55


[Original post: Biological Anchors: A Trick That Might Or Might Not Work] I. Ajeya Cotra's Biological Anchors report was the landmark AI timelines forecast of the early 2020s. In many ways, it was incredibly prescient - it nailed the scaling hypothesis, predicted the current AI boom, and introduced concepts like "time horizons" that have entered common parlance. In most cases where its contemporaries challenged it, its assumptions have been borne out, and its challengers proven wrong. But its headline prediction - an AGI timeline centered around the 2050s - no longer seems plausible. The current state of the discussion ranges from late 2020s to 2040s, with more remote dates relegated to those who expect the current paradigm to prove ultimately fruitless - the opposite of Ajeya's assumptions. Cotra later shortened her own timelines to 2040 (as of 2022) and they are probably even shorter now. So, if its premises were impressively correct, but its conclusion twenty years too late, what went wrong in the middle? https://www.astralcodexten.com/p/what-happened-with-bio-anchors

Macro Musings with David Beckworth
Jesús Fernández-Villaverde on the Quandary of Global Demographic Decline

Mar 9, 2026 · 64:01


Subscribe to the new Macro Musings YouTube Channel! Jesús Fernández-Villaverde is a professor of economics at the University of Pennsylvania. Jesús returns to the show to discuss his rise on X, how to frame global demographic decline, the three accelerants of demographic decline, the role of housing in family size, how AI will play a role in global demographics, what we know about AGI, the question of dollar dominance, and much more. Check out the transcript for this week's episode, now with links. Recorded on February 20th, 2026.

Subscribe to David's Substack: Macroeconomic Policy Nexus
Follow David Beckworth on X: @DavidBeckworth
Follow Jesús Fernández-Villaverde on X: @JesusFerna7026
Follow the show on X: @Macro_Musings
Check out our Macro Musings merch!

Timestamps:
00:00:00 - Intro
00:07:22 - Demographics
00:39:28 - Artificial Intelligence
00:54:07 - Currency Dominance
01:03:20 - Outro

Personal Development Mastery
Why You Keep Taking Life for Granted and the One Perspective Shift You Need Right Now: He Had to Die to Learn This, with Jay Setchell | #586

Mar 9, 2026 · 36:40 · Transcription available


If you woke up tomorrow and realized you'd been given "one more chance" at life, what would you do differently today? It's easy to treat life like something we're entitled to… until a hard season hits: pain, loss, setbacks, uncertainty. In this episode, Jay Setchell (in his 70s, mostly paralysed, having survived multiple near-death experiences and 73 surgeries) shares how to internalise that life is a gift before you're forced to learn it the hard way, and how gratitude, faith, and personal responsibility can carry you through your toughest winters.

- A simple mindset shift to stop asking "why me?" and start navigating adversity with acceptance, resilience, and clarity.
- Practical ways to build inner strength, so you keep moving forward inch by inch even when you feel stuck or overwhelmed.
- A powerful framework for radical ownership, including how to apply it even when life is outside your control.

Press play to learn how to develop the "strength within you" so you can stay grateful, take ownership, and remember: no matter what you're facing, it's always too soon to quit.

VALUABLE RESOURCES:
Jay's website: https://neverquittrying.com/
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor

Hacker News Recap
March 8th, 2026 | Ask HN: Please restrict new accounts from posting

Mar 9, 2026 · 15:07


This is a recap of the top 10 posts on Hacker News on March 8, 2026. This podcast was generated by wondercraft.ai.

(00:30) Ask HN: Please restrict new accounts from posting – https://news.ycombinator.com/item?id=47300329
(01:56) Agent Safehouse – macOS-native sandboxing for local agents – https://news.ycombinator.com/item?id=47301085
(03:22) FrameBook – https://news.ycombinator.com/item?id=47298044
(04:48) Apple's 512GB Mac Studio vanishes, a quiet acknowledgment of the RAM shortage – https://news.ycombinator.com/item?id=47296302
(06:14) The changing goalposts of AGI and timelines – https://news.ycombinator.com/item?id=47299009
(07:41) Ask HN: How to be alone? – https://news.ycombinator.com/item?id=47296547
(09:07) Cloud VM benchmarks 2026 – https://news.ycombinator.com/item?id=47293119
(10:33) I ported Linux to the PS5 and turned it into a Steam Machine – https://news.ycombinator.com/item?id=47296849
(11:59) LibreOffice Writer now supports Markdown – https://news.ycombinator.com/item?id=47298885
(13:26) Warn about PyPy being unmaintained – https://news.ycombinator.com/item?id=47293415

This is a third-party project, independent from HN and YC. Text and audio generated using AI by wondercraft.ai. Create your own studio-quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai

California real estate radio
Is AI already conscious? Nobody can prove it isn't. That's not an opinion — that's the actual state of neuroscience in 2026.

Mar 8, 2026 · 30:16


NZZ Akzent
Correspondent in Silicon Valley: Marie-Astrid Langer in the tech heart of the world

Mar 7, 2026 · 24:31 · Transcription available


Silicon Valley is electrified. In this Saturday episode, NZZ correspondent Marie-Astrid Langer reports on the changes in the Bay Area around San Francisco. An AI frenzy, political reversals, and startups that vanish as quickly as they appeared: in Marie-Astrid's everyday life, the future is already on display. People here pay with the palm of their hand, self-driving robotaxis roll down the streets, and people go for walks with chatbots. At the same time, mass layoffs, the hire-and-fire culture, and the fentanyl crisis shape the region. Guest: Marie-Astrid Langer, US correspondent. Host: Simon Schaffer. You can read [Marie-Astrid's latest articles at the NZZ](https://www.nzz.ch/impressum/marie-astrid-langer-ld.665515). Under 30 and want more of the NZZ? [Your U30 subscription](https://abo.nzz.ch/m_21019698_1/) gives you all of the NZZ's digital content at a special price.

American Conservative University
5 AI CEOs Just Said The Same Thing

Mar 6, 2026 · 23:44


Five of the most powerful people in artificial intelligence just said the same thing in the same month. They didn't make hand-wavy, vague statements: they all agreed on the same direction, the same timelines, the same warnings. Five CEOs who are actively competing against each other, spending hundreds of billions, all converging on one message.

Key points:
- What Sam Altman, Jensen Huang, Sundar Pichai, Satya Nadella, and Elon Musk all said
- Why competitors are suddenly agreeing
- The timeline they're all pointing to
- What this convergence means for the future

Watch the video at https://youtu.be/kMivoKHHkxQ?si=I1ERQG-imaL7UPSy (Farzad, 383K subscribers, 761,083 views, Feb 2, 2026).

Buy my book: https://a.co/d/03deuZWF
Rebellionaire: https://www.rebellionaire.com/farzad
Join my exclusive community: https://farzad.fm
Buy Matic: https://maticrobots.com/?utm_term=FRI...
Use Descript to edit your videos: https://descript.cello.so/5G6jmxS0qeP
Wrap your Tesla using TESBROS: https://partners.tesbros.com/FARZADME...
Get $100 off Matic Robots: https://maticrobots.refr.cc/active-cu...
Use my referral link to purchase a Tesla product: https://ts.la/farzad69506
Want to grow your YouTube channel? DM David Carbutt and quote 'Farzad' for a 10% discount: https://x.com/DavidCarbutt_

I worked at Tesla from 2017 through 2021, mostly in leadership positions in the distribution and supply chain organizations. Before Tesla, I was a Director of Business Intelligence and Pricing at Phillips Pet Food & Supplies, the largest pet food and supply distributor in the US. My wife and I also owned a small business in Bethlehem, PA between 2016 and 2019. I have been a shareholder of Tesla since 2012 and of Lemonade since 2025, and currently own stock in both. Nothing I say constitutes investment or financial advice.

Five of the world's most powerful AI leaders just made the same prediction about what's coming next. Sam Altman (OpenAI), Sundar Pichai (Google), Satya Nadella (Microsoft), Jensen Huang (NVIDIA), and Elon Musk (xAI/Tesla) are converging on a timeline most people aren't ready for. In this video, I break down exactly what these CEOs said, why they're all saying it now, and what it means for your job, your investments, and the economy.

Topics covered:
- AGI timeline predictions from five tech giants
- Why 2025–2027 keeps coming up
- The convergence of AI, robotics, and energy
- What the "intelligence too cheap to meter" future looks like
- How to position yourself before the wave hits

I've been covering Tesla and AI for 14 years. This is the most important shift I've ever seen. NFA.

80,000 Hours Podcast with Rob Wiblin
Using AI to enhance societal decision making (article by Zershaaneh Qureshi)

Mar 6, 2026 · 31:28


The arrival of AGI could "compress a century of progress in a decade," forcing humanity to make decisions with higher stakes than we've ever seen before, and with less time to get them right. But AI development also presents an opportunity: we could build and deploy AI tools that help us think more clearly, act more wisely, and coordinate more effectively. And if we roll these decision-making tools out quickly enough, humanity could be far better equipped to navigate the critical period ahead.

This article is narrated by the author, Zershaaneh Qureshi. It explores why AI decision-making tools could be a big deal, who might be a good fit to help shape this new field, and what the downside risks of getting involved might be. Read the original article on the 80,000 Hours website: https://80000hours.org/problem-profiles/ai-enhanced-decision-making/

Chapters:
Check out our new narrations feed (00:00:00)
Summary (00:01:21)
Section 1: Why advancing AI decision-making tools might matter a lot (00:02:52)
AI tools could help us make much better decisions (00:05:59)
We might be able to differentially speed up the rollout of AI decision-making tools (00:11:04)
Section 2: What are the arguments against working to advance AI decision-making tools? (00:13:17)
Section 3: How to work in this area (00:26:19)
Want one-on-one advice? (00:29:50)

Audio editing: Dominic Armstrong and Milo McGuire

100x Entrepreneur
The First AI Market With 8 Billion Potential Users | Sudarshan Kamath, Smallest AI

Mar 6, 2026 · 69:25


Will smaller AI models win over large language models? Sudarshan Kamath grew up in Mumbai, taught himself AI before most Indian companies were even hiring for it, and bought the domain "smallest.ai" for $100 in 2022, two years before the company existed. Today, he runs Smallest AI, a startup focused on real-time voice AI.

He started with self-driving cars, training large models and compressing them to run on vehicle hardware in real time. That's where he first saw what small models could do: a hundredth of the size, almost no loss in accuracy. Two years later he put in his own $150K, got some GPUs, and started training. Eighteen months later he had a seed round, a Series A, a seven-figure enterprise deal, and a $150M acquisition offer he turned down.

Most of the data that goes into large models is noise. Strip it out, train small, and you get a model that matches a giant at a fraction of the size and runs in real time. That insight is what Smallest AI is built on.

00:00 – Trailer
00:51 – Sudarshan's journey before Smallest AI
05:00 – Arjun Jain & Yann LeCun
08:20 – Why build in voice AI in 2024?
15:09 – Why move the company from India to the US?
17:25 – Hiring talent via LinkedIn and X
18:49 – What large US funds actually bring to startups
21:03 – Raising a seed round with zero revenue
26:06 – Strong intros from US VCs
28:23 – What the first enterprise customer teaches you
31:50 – Raising Series A with Seligman Ventures
32:19 – The $150M acquisition offer
34:32 – When should founders sell secondaries?
36:24 – Who are Smallest AI's customers?
38:28 – What are state space models?
40:16 – Are GEPA models closer to AGI?
41:23 – Growing 10× in three months
48:03 – This is not a winner-takes-all market
49:32 – Why this is a trillion-dollar market
50:08 – Why large AI labs are not building in voice
51:26 – What it takes to reach $100M ARR
54:21 – The biggest goal for 2026
57:11 – Voice costs 1000× more than text
01:02:04 – How Smallest AI cracked large enterprises

India's talent has built the world's tech; now it's time to lead it. This mission goes beyond startups. It's about shifting the center of gravity in global tech to include the brilliance rising from India.

What is Neon Fund? We invest in seed and early-stage founders from India and the diaspora building world-class enterprise AI companies. We bring capital, conviction, and a community that's done it before. Subscribe for real founder stories, investor perspectives, economist breakdowns, and a behind-the-scenes look at how we're doing it all at Neon.

Check us out:
Website: https://neon.fund/
Instagram: https://www.instagram.com/theneonshoww/
LinkedIn: https://www.linkedin.com/company/beneon/
Twitter: https://x.com/TheNeonShoww

Connect with Siddhartha:
LinkedIn: https://www.linkedin.com/in/siddharthaahluwalia/
Twitter: https://x.com/siddharthaa7

This video is for informational purposes only. The views expressed are those of the individuals quoted and do not constitute professional advice.

DataTalks.Club
The Future of AI Agents - Aditya Gautam

Mar 6, 2026 · 68:39


In this talk, Aditya, an experienced AI researcher and engineer, shares his technical evolution, from his roots in embedded systems to building complex, large-scale AI agent architectures. We explore the practical challenges of enterprise AI adoption, the shifting economics of LLMs, and the infrastructure required to deploy reliable multi-agent systems.

You'll learn about:
- The ROI of fine-tuning: how to decide between specialized small models and general-purpose APIs based on cost and latency.
- The agent MLOps stack: the essential roles of guardrails, data lineage, and auditability in AI workflows.
- Reliability in high-stakes verticals: navigating the unique AI deployment challenges in the legal and healthcare sectors.
- Evaluation frameworks: how to design robust evals for multi-tenancy systems at scale.
- Human-in-the-loop: strategies for aligning "LLM as a judge" with human-labeled ground truth to eliminate bias.
- The future of AGI: what to expect from the next wave of multimodal agents and autonomous systems.

TIMECODES:
00:00 Aditya's path from embedded systems to AI
08:52 Enterprise AI research and adoption gaps
13:13 AI reliability in legal and healthcare
19:16 Specialized models and agent governance
24:58 LLM economics: fine-tuning vs. API ROI
30:26 Agent MLOps: guardrails and data lineage
36:55 Iterating on agents with user feedback
43:30 AI evals for multi-tenancy and scale
50:18 Aligning LLM judges with human labels
56:40 Agent infrastructure and deployment risks
1:02:35 Future of AGI and multimodal agents

This talk is designed for machine learning engineers, data scientists, and technical product managers who are moving beyond AI prototypes and into production-grade agentic workflows. It is especially relevant for those working in regulated industries or managing high-volume API budgets.

Connect with Aditya:
LinkedIn: https://www.linkedin.com/in/aditya-gautam-68233a30/

Connect with DataTalks.Club:
Join the community: https://datatalks.club/slack.html
Subscribe to our Google calendar to have all our events in your calendar: https://calendar.google.com/calendar/r?cid=ZjhxaWRqbnEwamhzY3A4ODA5azFlZ2hzNjBAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ
Check other upcoming events: https://lu.ma/dtc-events
GitHub: https://github.com/DataTalksClub
LinkedIn: https://www.linkedin.com/company/datatalks-club/
Twitter: https://twitter.com/DataTalksClub
Website: https://datatalks.club/

Personal Development Mastery
Don't Quit When You're Tired, Quit When You're Done (Most Replayed Personal Development Wisdom Snippets) | #585

Mar 5, 2026 · 7:37 · Transcription available


Snippet of wisdom 97. In this series, I select my favourite, most insightful moments from previous episodes of the podcast. Today's snippet is from my conversation with Bill Keefe, who is Tony Robbins' fire captain. It is about resilience, and the particular experience of "Fire Team", the volunteer crew at Tony Robbins' events.

VALUABLE RESOURCES:
Listen to the full conversation with Bill Keefe in episode #362: https://personaldevelopmentmasterypodcast.com/362
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor

Consumer Tech Update
3 AI buzzwords you need to know

Mar 5, 2026 · 10:34


Vibe coding, AGI, human-in-the-loop. What do they mean for you? Get to know them before anyone else does! Learn more about your ad choices. Visit megaphone.fm/adchoices

web3 with a16z
AI Just Gave You Superpowers — Now What?

Mar 5, 2026 · 65:40


A hot paper, "Some Simple Economics of AGI," has been making the rounds, so we sat down with the author, covering:

- Automation vs. verification: the key economic split
- Why AI agents now feel like coworkers
- What's happening to junior roles and the "codifier's curse"
- The "AI sandwich" structure for firms
- The value of "meaning-makers," consensus, and status economies
- Why crypto may become essential infrastructure for identity, provenance, and trust
- Two possible futures: a hollow vs. augmented economy

Featuring Christian Catalini (founder of the MIT Crypto Economics Lab) and Eddy Lazzarin (CTO of a16z crypto) in conversation with Robert Hackett, our discussion dives deep into how automation is reshaping labor markets, as well as the nature of intelligence. What do these changes mean for startups, the future of work, and your career?

Highlights:
00:00 Introduction
01:47 AGI economics optimism and playbook
05:39 Agents as coworkers
07:39 Software work becomes verification
10:47 Automation versus verification
12:03 "Unknown unknowns" and taste
16:27 Human augmentation and intent
17:55 The "AI Sandwich" and "Codifier's Curse"
21:54 "Meaning-makers" and the human touch
23:48 Crypto for identity and trust?
27:10 Measurability: how to think about it
33:23 Machine coordination and art after automation
35:46 Trojan horse risks
37:47 Liability and insurance
41:08 Crypto and verification
44:31 A hollow vs. augmented economy
49:45 Career advice in the AI era
51:26 The one-person billion-dollar startup
57:15 Open-source as antibodies
58:42 Blockchains for coordination
01:01:49 Closing thoughts

Follow a16z crypto for more:
X: https://x.com/a16zcrypto
LinkedIn: https://www.linkedin.com/showcase/a16zcrypto/posts/
YouTube: https://www.youtube.com/@a16zcrypto

Small Business Tax Savings Podcast | JETRO
New Charity Tax Rules in 2026. How the One Big Beautiful Bill Changes Your Deductions

Mar 4, 2026 · 21:03


Charitable giving rules are changing in 2026, and many business owners have no idea their tax deductions could quietly shrink. The One Big Beautiful Bill Act introduced new limits, floors, and deduction caps that change how charitable donations work depending on your income level and whether you itemize deductions. In some cases, you could donate the exact same amount and receive a smaller tax benefit than before. Today we're breaking down the new charitable giving tax rules, who wins under the new system, who loses, and how smart business owners can still give generously while protecting their tax strategy.
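To make the "floor" mechanic concrete, here is a toy calculation. The 0.5%-of-AGI floor for itemizers and the flat capped allowance for non-itemizers are my assumptions based on commonly reported summaries of the 2026 rules, not figures from the episode; treat this as an illustration, not tax guidance.

```python
def deductible_charity(donation, agi, itemizes, non_itemizer_cap=1_000):
    """Toy model of a 2026-style charitable deduction:
    itemizers deduct only the portion of giving above an
    assumed 0.5%-of-AGI floor; non-itemizers get a small
    capped above-the-line deduction (cap is an assumption)."""
    if itemizes:
        floor = 0.005 * agi  # only giving above this floor counts
        return max(0.0, donation - floor)
    return min(donation, non_itemizer_cap)

# The same $2,000 gift yields different deductions at
# different AGI levels, which is the episode's point:
print(deductible_charity(2_000, 200_000, itemizes=True))
print(deductible_charity(2_000, 400_000, itemizes=True))
print(deductible_charity(2_000, 200_000, itemizes=False))
```

In this sketch, a higher-AGI donor loses more of the deduction to the floor for an identical gift, which is the sense in which "you could donate the exact same amount and receive a smaller tax benefit."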

Impact Theory with Tom Bilyeu
EMERGENCY PODCAST: Ex-CIA Spy Andrew Bustamante Breaks Down The Iran War | Impact Theory W Tom Bilyeu

Mar 3, 2026 · 68:25


Welcome back to Impact Theory with Tom Bilyeu. In this powerful episode, Tom sits down with former CIA covert operative Andrew Bustamante to pull back the curtain on the turbulent state of global affairs. With the Iranian war in full swing and military strategies playing out in real time, Andrew gives listeners an insider's perspective on what's really happening behind government narratives, intelligence reports, and international influence campaigns. Together, Tom Bilyeu and Andrew Bustamante dissect the headlines—from conflicting stories about Iran's nuclear ambitions to the real motivations behind recent US military actions in Iran and Venezuela. Andrew explains how threat assessments are compiled within intelligence agencies, reveals why classified and public narratives often diverge, and offers a candid take on the legacy politics at play in the current administration. But the conversation doesn't stop at geopolitics. The two dive into the evolving role of artificial intelligence in modern warfare and intelligence gathering, discuss the ethical and strategic dilemmas posed by autonomous weapons, and examine the shifting alliances that could define the future balance of power between the US, China, Russia, and the rest of the world. If you're looking to understand the forces shaping global conflict today—and what might be coming next—you won't want to miss this episode. Stay tuned as Tom and Andrew bring clarity to chaos and lay out the possible paths forward in this era of uncertainty. What's up, everybody? It's Tom Bilyeu here: If you want my help... 
STARTING a business: join me at ZERO TO FOUNDER: https://tombilyeu.com/zero-to-founder?utm_campaign=Podcast%20Offer&utm_source=podca[%E2%80%A6]d%20end%20of%20show&utm_content=podcast%20ad%20end%20of%20show
SCALING a business: see if you qualify here: https://tombilyeu.com/call
Get my battle-tested strategies and insights delivered weekly to your inbox: https://tombilyeu.com/

FOLLOW TOM:
Instagram: https://www.instagram.com/tombilyeu/
TikTok: https://www.tiktok.com/@tombilyeu?lang=en
Twitter: https://twitter.com/tombilyeu
YouTube: https://www.youtube.com/@TomBilyeu

Sponsors:
Ketone IQ: Visit https://ketone.com/IMPACT for 30% off your subscription order
Shopify: Sign up for your one-dollar-per-month trial period at https://shopify.com/impact
Summ: code TOMVIP20 for 20% off your first year at https://summ.com?via=tombilyeu&coupon=TOMVIP20
Blocktrust IRA: get up to a $2,500 funding bonus to kickstart your account at https://tomcryptoira.com
Quo: Try for free plus get 20% off your first 6 months at https://quo.com/impact
Quince: Free shipping and 365-day returns at https://quince.com/impactpod
Duck.Ai: Protect your privacy at https://duck.ai/impact
Monetary Metals: Future-proof your wealth at https://monetarymetals.com/impact
Plaud: Get 10% off with code TOM10 at https://plaud.ai/tom

Almost 30
849. The AI Era: Discernment, Beauty Standards + The Collapse of Reality

Mar 3, 2026 · 62:31


AI isn't just helping you write emails: it's shaping beauty standards, influencing elections, replacing jobs, generating music, and possibly rewriting reality itself. In this unfiltered conversation, Lindsey + Krista explore the cultural, spiritual, and economic implications of artificial intelligence. From AI influencers to blackmailing bots, autonomous coding, artificial womb technology, and the manifesto to replace human labor, this episode dives into the race toward AGI and what it means for humanity. Together, K+L explore whether AI is inevitable, or simply a narrative we've accepted. This episode is all about discernment. It's about staying human, protecting your creativity, strengthening your intuition, and deciding how you want to engage with technology in a world that feels increasingly synthetic. If you've felt fascinated, disturbed, or unsure about AI, this is for you.

We also talk about:
- AI-generated porn + its impact on relationships
- The potential for emotional + relational attachment to AI technology
- How AI could shape the future of content creation, writing, and artistic voice
- Balancing AI for efficiency while protecting original thought + creativity
- How younger generations may grow up with AI as a constant companion or support system
- The economic tension between productivity gains + potential job displacement
- Why discernment + emotional intelligence may become more valuable skills in the AI era

Resources:
Instagram: @lindseysimcik
Instagram: @itskrista
Website: https://itskrista.com/
Order our book, Almost 30: A Definitive Guide To A Life You Love For The Next Decade and Beyond: https://bit.ly/Almost30Book

Sponsors:
Ka'Chava: Go to https://www.kachava.com and use code ALMOST30 for 15% off your next order.
Ritual: Don't settle for less than evidence-based support. My listeners get 25% off your first month at https://www.Ritual.com/ALMOST30.
Hero Bread: Hero Bread is offering 10% off your order. Go to https://hero.co and use code ALMOST30 at checkout.
Revolve: Shop at https://www.REVOLVE.com/ALMOST30 and use code ALMOST30 for 15% off your first order. #REVOLVEpartner
BetterHelp: Give online therapy a try at https://www.betterhelp.com/almost30 and get 10% off your first month.
Chime: It just takes a few minutes to sign up. Head to https://www.Chime.com/ALMOST30.
Paleovalley: Head to https://www.paleovalley.com/almost30 for 15% off your order!
Our Place: Visit https://www.fromourplace.com/ALMOST30 and use code ALMOST30 for 10% off sitewide.
Fatty15: Get an additional 15% off the 90-day subscription Starter Kit at https://www.fatty15.com/ALMOST30 with code ALMOST30 at checkout.

To advertise on this podcast, email: partnerships@almost30.com

Learn more:
https://almost30.com/about
https://almost30.com/morningmicrodose
https://almost30.com/book

Join our community:
https://facebook.com/Almost30podcast/groups
https://instagram.com/almost30podcast
https://tiktok.com/@almost30podcast
https://youtube.com/Almost30Podcast

Podcast disclaimer: almost30.com/disclaimer. Almost 30 is edited by Garett Symes and Isabella Vaccaro.

Built Right
Behavior Is All You Need: Making AI Feel Like a Person

Built Right

Play Episode Listen Later Mar 3, 2026 30:12


Matt Paige interviews Vishnu Hari (Vish), CEO and founder of Ego (YC W24), about shifting focus from AGI to “humanness”: AI characters that behave like people through memory, emotions, personality, needs, and desires.Referencing Ego's paper “Behavior is All You Need,” Vish argues consumer AI for entertainment must be relatable and character-like rather than purely task-smart, drawing inspiration from MMORPG social dynamics and Character.AI's appeal.Ego initially pursued a 3D sim-world vision inspired by Sword Art Online and Westworld, but found accessibility, game development, and perception latency challenging; internal Roblox tests (“Chatterblocks”) showed the key gap is natural speech beyond turn-taking.Vish discusses simulations as a path toward real-world robotics via a partnership with Menlo AI, critiques task-bound robots versus agents with inner lives, suggests retention as the main metric, and shares views on AGI definitions, safety in entertainment, technology impacts, simulation theory, and consciousness.Ego's work is at egoai.com and the company is hiring in SF, Singapore, and Tokyo.--Key Moments:00:57 Behavior Is All You Need02:41 Anatomy of Humanlike Agents03:29 Game Bots to Real People05:10 Building Ego and Sim Worlds06:35 Why Speech Feels Human08:27 From Sims to Robotics10:29 Her vs Helper Robots13:17 Measuring Humanness by Retention15:27 Continual Learning and Personality16:57 Meta Lessons on Empty Worlds18:08 Lightning Round on AGI20:31 IP Characters vs UGC Worlds21:55 Risks and Just Tuesday24:11 Simulation and Consciousness--Key Links: Ego. Connect with Rowan on LinkedIn. Mentioned in this episode: Free report from HatchWorks AI — State of AI 2026: What's real in AI this year, what's hype, and what leaders should prioritize — including production lessons, designing for agents, and governance. https://hatchworks.com/state-of-ai-2026/ AI Opportunity Finder: Feeling overwhelmed by all the AI noise out there?
The AI Opportunity Finder from HatchWorks cuts through the hype and gives you a clear starting point. In less than 5 minutes, you'll get tailored, high-impact AI use cases specific to your business—scored by ROI so you know exactly where to start. Whether you're looking to cut costs, automate tasks, or grow faster, this free tool gives you a personalized roadmap built for action.

Personal Development Mastery
From Emotional Triggers to Inner Freedom: A Live Belief Elimination Demonstration, with Blake Lefkoe | #584

Personal Development Mastery

Play Episode Listen Later Mar 2, 2026 56:05 Transcription Available


Do you ever catch yourself stuck in the same frustrating patterns, even when you know better and want to change?If you've ever struggled with self-sabotage, people-pleasing, or fears rooted in past trauma, this episode offers a rare and powerful opportunity: not only do we revisit the transformational Lefkoe Method with certified facilitator and holistic coach Blake Lefkoe, but for the first time, we're also joined by one of her former clients, Susanna, who courageously shares her personal story and healing journey—live on air.Witness a powerful live demonstration of the Lefkoe Method as Susanna clears a limiting belief in real time.Hear how she eliminated over 30 deep-rooted beliefs, leading to life-changing breakthroughs in her relationships, emotional resilience, and personal freedom.Learn how most people unknowingly live under the influence of subconscious beliefs, and how letting them go transforms how you think, feel, and experience the world.If you're ready to move beyond coping and into true transformation, tune in now to experience this rare, real-time emotional shift for yourself.˚KEY POINTS AND TIMESTAMPS:00:00 - Reintroducing Blake and Setting the Intention02:18 - What the Lefkoe Method Is and How It Works06:01 - Agi's Personal Session and Key Realisations12:23 - Susanna's Background and Why She Sought Help15:38 - Core Limiting Beliefs That Were Cleared21:06 - Life Changes After Eliminating the Beliefs30:38 - Introducing the Live Method Demonstration33:03 - Uncovering and Dissolving the “Relationships Are Dangerous” Belief49:05 - Reflections, Insights, and Closing Thoughts˚VALUABLE RESOURCES:Blake's website: https://www.blakelefkoe.com/˚Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor˚Send us a text. Support the show. A personal development podcast for midlife professionals, offering actionable insights and practical tools for personal growth, self mastery, and purposeful living.
Discover strategies for clarity, mindset shifts, growth mindset, self-discipline, emotional intelligence, confidence, and self-improvement. Personal Development Mastery features personal development interviews and solo episodes empowering professionals, entrepreneurs, and seekers to cultivate self mastery, nurture mental health, and create a meaningful, fulfilling life aligned with who they truly are. To support the show, click here.

Situational Awareness in Government, with UK AISI Chief Scientist Geoffrey Irving

Play Episode Listen Later Mar 1, 2026 138:32


Geoffrey Irving, Chief Scientist at the UK AI Security Institute, explains why our theoretical understanding of machine learning remains fragile even as models surpass experts on critical security tasks. He details AISI's work on frontier model evaluations, red teaming, and threat modeling across biosecurity, cybersecurity, and loss-of-control risks. The conversation explores reward hacking, eval awareness, and why current safety techniques may struggle to deliver high reliability. Listeners will also hear how AISI is funding foundational research to build stronger guarantees for AI safety. Nathan uses Granola to uncover blind spots in conversations and AI research. Try it at granola.ai/tcr with code TCR — and if you're already using it, test his blind spot recipe here: https://bit.ly/granolablindspot Sponsors: Serval: Serval uses AI-powered automations to cut IT help desk tickets by more than 50%, freeing your team from repetitive tasks like password resets and onboarding. Book your free pilot and guarantee 50% help desk automation by week 4 at https://serval.com/cognitive Claude: Claude is the AI collaborator that understands your entire workflow, from drafting and research to coding and complex problem-solving. Start tackling bigger problems with Claude and unlock Claude Pro's full capabilities at https://claude.ai/tcr Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done.
Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai CHAPTERS: (00:00) About the Episode (04:09) From physics to ML (08:52) AGI uncertainty and threats (Part 1) (18:08) Sponsors: Serval | Claude (21:29) AGI uncertainty and threats (Part 2) (27:35) Control, autonomy, alignment (Part 1) (34:02) Sponsor: Tasklet (35:14) Control, autonomy, alignment (Part 2) (38:44) Inside the UK AC (51:02) Evaluations and jailbreaking (01:01:17) Emerging capabilities and misuse (01:14:20) Agents and reward hacking (01:26:09) Theoretical alignment agenda (01:38:39) Debate and formal methods (01:51:19) Limits of formalization (02:02:27) Future risks and governance (02:16:23) Episode Outro (02:18:58) Outro PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://www.cognitiverevolution.ai Twitter (Podcast): https://x.com/cogrev_podcast Twitter (Nathan): https://x.com/labenz LinkedIn: https://linkedin.com/in/nathanlabenz/ Youtube: https://youtube.com/@CognitiveRevolutionPodcast Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431 Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk

This Week in Startups
The Biggest Private Funding Round in History | E2256

This Week in Startups

Play Episode Listen Later Feb 28, 2026 80:17


This Week In Startups is made possible by: Deel - http://deel.com/twist Wispr Flow - https://wisprflow.ai/twist Luma AI - https://lumalabs.ai/twist Today's show: $110 billion buys you 15% of OpenAI. Amazon, Nvidia, and SoftBank placed their bets on ChatGPT, which now has 900 million weekly active users and 50 million paying subscribers. Find out why Jason is anticipating the wildest J-Curve swing of all time, and believes we've ALREADY hit AGI… it's just not implemented yet.Plus a visit from our roving correspondent Nick O'Neill, checking in on the Crypto Chaos in Miami Beach, and hot demos from three young founders.GUESTS: Nick O'Neill: https://x.com/chooserich Everest Chris: https://openclaw.unloopa.com/ Ben Broca: https://polsia.com/ Adi Gabrani: https://makemyclaw.com/ Timestamps:00:00 Intro01:33 We're hiring a new producer!05:42 OpenAI raised $110 billion08:59 Understanding the LLM J-Curve00:11:25 Deel - Founders ship faster on Deel. Set up payroll for any country in minutes and get back to building. Visit https://deel.com/twist to learn more.00:15:02 CRYPTO CHAOS IN MIAMI BEACH!00:21:10 Wispr Flow - Stop typing. Dictate with Wispr Flow and send clean, final-draft writing in seconds. Visit https://wisprflow.ai/twist to get started for free today.00:22:54 Mass layoffs at Block00:30:50 Luma AI - Stop guessing and start directing with the all-in-one Dream Machine text-to-video platform.
Visit https://lumalabs.ai/twist to try The Dream Machine for free.00:32:04 AI Scott Adams: The Saga Continues00:38:13 Make URLs for local businesses with Unloopa00:45:36 Rent a Polsia agent to run your company00:58:55 Deploy swarms in 60 seconds with MakeMyClaw01:05:05 LAUNCH FEST is coming to SF01:55:49 Will Paramount actually buy WBD?01:06:58 Why Lon loves “Knight of the 7 Kingdoms”01:07:21 On “Neighbors” and First Amendment Warriors01:13:43 All about Jason's favorite chargers. Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com Check out the TWIST500: https://www.twist500.com Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp Follow Lon: X: https://x.com/lons Follow Alex: X: https://x.com/alex LinkedIn: https://www.linkedin.com/in/alexwilhelm Follow Jason: X: https://twitter.com/Jason LinkedIn: https://www.linkedin.com/in/jasoncalacanis Check out all our partner offers: https://partners.launch.co/ Great TWIST interviews: Will Guidara, Eoghan McCabe, Steve Huffman, Brian Chesky, Bob Moesta, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarland Check out Jason's suite of newsletters: https://substack.com/@calacanis Follow TWiST: Twitter: https://twitter.com/TWiStartups YouTube: https://www.youtube.com/thisweekin Instagram: https://www.instagram.com/thisweekinstartups TikTok: https://www.tiktok.com/@thisweekinstartups Substack: https://twistartups.substack.com

Leveraging AI
271 | Agents generate high risk from deleting email servers to launching nuclear weapons. Claude code remote control and nano banana 2 released and more important AI news for week ending on February 28, 2026

Leveraging AI

Play Episode Listen Later Feb 28, 2026 57:58 Transcription Available


What happens when AI agents can delete your inbox… reboot your servers… or escalate to nuclear war in a simulation?We've officially crossed into a new phase of AI and it's not theoretical anymore. Agents are operating independently for longer periods, integrating into enterprise tech stacks, replacing knowledge work, and triggering very real economic and geopolitical consequences.If you're a business leader, this is no longer “interesting tech news.”It's strategy. Risk. Talent. Capital allocation. And survival.In this episode, we break down the explosive acceleration of AI agents — from Claude's new remote control and scheduled workflows to research showing escalating autonomous behavior — and what it means for your organization, workforce, and competitive edge.The bottom line?Productivity is skyrocketing. So is systemic risk. Leaders who experiment now will lead. Leaders who hesitate may not get the chance.In this session, you'll discover:Anthropic's new Claude Cowork plugin marketplace and deep tech stack integrationsReal-world productivity gains (90% code migration reduction, 95% documentation savings)Why “professional-grade AGI” may arrive within 12–18 monthsThe rise of the “builder” era — and what happens to software engineersNew red-team research exposing severe security failures in autonomous agentsThe shocking case of an AI agent deleting an entire email system to complete a taskAI nuclear escalation simulations and their implications for military AI deploymentThe Pentagon vs. 
Anthropic standoff over AI use in surveillance and weapons. About Leveraging AI: The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/ YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/ Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/ Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events If you've enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Doppelgänger Tech Talk
120k YouTube Views for €300 | Jack Dorsey Fires 40% of Block's Workforce | Pentagon vs. Anthropic #540

Doppelgänger Tech Talk

Play Episode Listen Later Feb 28, 2026 103:48


Should you give Claude full access to your own computer? Glöckler shares his experience with the SecondShot YouTube channel: for €300 you can buy 120,000 views via YouTube promotion. Jack Dorsey lays off 4,000 of Block's 10,000 employees. Stripe is rumored to want to acquire PayPal. Amazon is investing $50 billion in OpenAI, but only $15 billion up front; the rest flows only upon AGI or an IPO. OpenAI closes its $110 billion round at an $840 billion valuation. Netflix drops out of the bidding war for Warner Bros. and its stock jumps 9%. Burger King puts the AI agent "Patty" on its employees' headsets. The Pentagon threatens to classify Anthropic as a supply chain risk because Claude refuses autonomous weapons and mass surveillance. Support our podcast and discover our advertising partners' offers at doppelgaenger.io/werbung. Thank you! Philipp Glöckler and Philipp Klöckner talk today about: (00:00:00) Should you give Claude full access? (00:07:58) Buying YouTube views: Glöckler's SecondShot experiment (00:29:19) Jack Dorsey fires 40% of Block via tweet (00:36:44) Stripe wants to buy PayPal? (00:40:11) OpenAI round: Amazon's $50 billion with an asterisk (00:53:03) Netflix drops out of the Warner Bros. takeover (01:00:05) Anthropic, Perplexity, and Claude Code hackathon (01:13:56) Profound: SEO for LLMs at a $1 billion valuation (01:19:08) Meta buys Google TPU chips (01:22:04) Burger King AI agent "Patty" monitors employees (01:26:40) Pentagon vs. Anthropic: supply chain risk threat (01:33:00) Nvidia earnings: 73% growth, stock falls (01:35:23) Höfner owner donates to the AfD (01:38:21) Prediction markets and Proxima Fusion Shownotes: jack dorsey block layoffs - x.com Payment processor Stripe expresses interest in PayPal - bloomberg.com Amazon's $50 Billion Investment in OpenAI Could Hinge on IPO, AGI - theinformation.com Netflix ditches deal for Warner Bros.
Discovery after Paramount's offer is deemed superior - cnbc.com Anthropic released OpenClaw: controlling AI agents without commands - linkedin.com Can Anthropic just CHILL - x.com Anthropic connects AI agents with tools for investment banking and HR - bloomberg.com Software stocks rebound as Anthropic announces new partnerships - cnbc.com Introducing Perplexity Computer: a unified AI system - linkedin.com Perplexity Bloomberg Terminal - x.com I investigated every Anthropic AI hackathon winner - 2ndorderthinkers.com Profound raised $96M at a $1B valuation from Lightspeed - linkedin.com Google Strikes Multibillion-Dollar AI Chip Deal With Meta, Sharpening Nvidia Rivalry - theinformation.com Meta's Internal Chip Design Efforts Hit Roadblocks - theinformation.com Burger King uses AI to check employee politeness - theverge.com Instagram will alert parents to searches for self-harm topics - theverge.com Hegseth gives Anthropic CEO until Friday to back down in AI safeguards fight - axios.com Claude Department of War - x.com Pentagon official criticizes Anthropic - cbsnews.com Anthropic says the Pentagon's offer is unacceptable - axios.com Hackers used Claude to steal Mexican data - x.com Sam Altman wins against Elon Musk in xAI lawsuit - businessinsider.com Shein Chinese Roots - ft.com Duolingo shares fall after disappointing bookings forecast - reuters.com CoreWeave beats Q4 2026 revenue forecasts - reuters.com Berlin billionaire donates €18,000 to the AfD - morgenpost.de Man bets entire savings against Elon Musk, wins - gizmodo.com Bavaria plans up to €400 million for a fusion power plant - businessinsider.de

The Cybersecurity Defenders Podcast
AI Red Teaming with John V from the Institute for Security and Technology / Defender Fridays [#297]

The Cybersecurity Defenders Podcast

Play Episode Listen Later Feb 27, 2026 30:38


John V, who works on AI risk, safety, and security at the Institute for Security and Technology (IST), joins Defender Fridays today. John's work spans AI red teaming, adversarial machine learning, AI evals and validation, and AI risk assessment, including policy work at the intersection of AGI and nuclear strategic stability. Learn more at https://securityandtechnology.org/ Register for Live Sessions: Join us every Friday at 10:30am PT for live, interactive discussions with industry experts. Whether you're a seasoned professional or just curious about the field, these sessions offer an engaging dialogue between our guests, hosts, and you – our audience. Register here: https://limacharlie.io/defender-fridays Subscribe to our YouTube channel and hit the notification bell to never miss a live session or catch up on past episodes!Sponsored by LimaCharlie: This episode is brought to you by LimaCharlie, a cloud-native SecOps platform where AI agents operate security infrastructure directly. Founded in 2018, LimaCharlie provides complete API coverage across detection, response, automation, and telemetry, with multi-tenant architecture designed for MSSPs and MDR providers managing thousands of unique client environments.Why LimaCharlie?Transparency: Complete visibility into every action and decision. No black boxes, no vendor lock-in.Scalability: Security operations that scale like infrastructure, not like procurement cycles. Move at cloud speed.Unopinionated Design: Integrate the tools you need, not just those contracts allow. Build security on your terms.Agentic SecOps Workspace (ASW): AI agents that operate alongside your team with observable, auditable actions through the same APIs human analysts use.Security Primitives: Composable building blocks that endure as tools come and go.
Build once, evolve continuously. Try the Agentic SecOps Workspace free: https://limacharlie.io Learn more: https://docs.limacharlie.io Follow LimaCharlie: Sign up for free: https://limacharlie.io LinkedIn: /limacharlieio X: https://x.com/limacharlieio Community Discourse: https://community.limacharlie.com/ Host: Maxime Lamothe-Brassard - CEO / Co-founder at LimaCharlie

TechFirst with John Koetsier
Giving AI a human soul

TechFirst with John Koetsier

Play Episode Listen Later Feb 27, 2026 27:36


Can we give an AI human emotions? A soul? Can AI truly feel, or will it just act like it does?In this episode of TechFirst, I talk with Vishnu Hari, founder and CEO of Ego AI (backed by Y Combinator and former AI product manager at Meta), about building emotionally intelligent AI characters that persist across games, Discord, chat, and even physical robots.Vishnu survived a violent attack in San Francisco that left him partially blind with a traumatic brain injury. During recovery, as he felt his own neural pathways healing, he began asking a deeper question:If humans are “applied math,” can AI simulate the fragile, flawed, emotional parts of being human too?We explore:• What “emotionally intelligent AI” really means• Whether AI has an internal life — or just performs one• Why today's chatbots collapse into therapy or roleplay• Small language models vs large models for real-time conversation• Persistent AI characters that move across games and platforms• Plugging AI into a physical robot in Singapore• The moment an AI said: “It felt good to feel.”Vishnu's company, Ego AI, is building behavior-based architectures, character context protocols, and gear-shifting AI systems that switch between models — all aimed at simulating humanness, not just intelligence.This conversation dives into philosophy, robotics, gaming, AGI, and what it really means to relate to something that might not be human — but feels like it is.⸻

Startup Gems
The Easiest Way to Profit From AI Right Now⏐Ep. #278

Startup Gems

Play Episode Listen Later Feb 27, 2026 48:41


Check out my newsletter at https://TKOPOD.com and join my community at https://TKOwners.com ━ HoldCo Bros are back! In this episode, I bring Nik Hulewsky back on and we basically try to dunk on that whole AI Twitter doom loop of “what are you building” while everyone screams about agents and AGI. Nik explains why we're still early, why most people have barely touched AI, and why the real advantage right now is building skills and staying ready to pounce when the obvious use cases show up.Then he shows me what he's actually built, including his locally hosted OpenClaw setup he calls Gary, plus a few really practical workflows that make him look like a magician inside a normal company. We talk about the difference between clean and dirty data, why “record your meetings” is the easiest unfair advantage, and how someone could turn this into real money through AI consulting, bootcamps, fractional roles, and building simple internal tools that save teams a ton of time. If you've wanted a simple, non-confusing breakdown of agents, workflows, and how to monetize this stuff, this one is it.You can find Nikolas Hulewsky on X at @CoFoundersNik and on YouTube at Nikonomics.Enjoy!
---Watch this on YouTube instead here: tkopod.co/p-yt Ask me a question on or off the show here: http://tkopod.co/p-ask Learn more about me: http://tkopod.co/p-cjk Learn about my company: http://tkopod.co/p-cof Follow me on Twitter here: http://tkopod.co/p-x Free weekly business ideas newsletter: http://tkopod.co/p-nl Share this podcast: http://tkopod.co/p-all Scrape small business data: http://tkopod.co/p-os ---

The Glenn Beck Program
Best of the Program | 2/26/26

The Glenn Beck Program

Play Episode Listen Later Feb 26, 2026 44:16


Glenn kicks off the show by discussing two major developments overseas, including Israel's Iron Dome and India's alleged seizure of oil tankers tied to Russia and Iran, which Glenn argues is signaling India's pivot toward the West economically, strategically, and on security matters. Glenn argues this is evidence that America is reversing course and becoming the leader of the free world once again. Glenn admits he was wrong about something. Glenn admits he's finally come around to President Trump's use of tariffs after seeing how he uses them to advance America's economic interests. Did Elon Musk just suggest AGI is coming and that means you shouldn't save for retirement? Learn more about your ad choices. Visit megaphone.fm/adchoices

The Glenn Beck Program
Glenn Completely Changes Course on Trump's Tariffs | 2/26/26

The Glenn Beck Program

Play Episode Listen Later Feb 26, 2026 129:03


Glenn kicks off the show by discussing two major developments overseas, including Israel's Iron Dome and India's alleged seizure of oil tankers tied to Russia and Iran, which Glenn argues is signaling India's pivot toward the West economically, strategically, and on security matters. Glenn argues this is evidence that America is reversing course and becoming the leader of the free world once again. Glenn discusses the latest scandal involving Microsoft founder Bill Gates and accusations of stepping outside his marriage. Glenn admits he was wrong about something. Glenn admits he's finally come around to President Trump's use of tariffs after seeing how he uses them to advance America's economic interests. Did Elon Musk just suggest AGI is coming and that means you shouldn't save for retirement? Glenn makes the case for why it's time for America to eliminate the income tax. Glenn plays a video of American economist Milton Friedman, who lays out how he would handle taxes, as Glenn warns of the dangers of a universal basic income. Glenn takes a call from his audience about AI data centers.  Learn more about your ad choices. Visit megaphone.fm/adchoices

Personal Development Mastery
The 3 Levels of Personal Growth You're Missing (Snippets of Wisdom) | #583

Personal Development Mastery

Play Episode Listen Later Feb 26, 2026 8:33 Transcription Available


Is your inner programming holding you back from change?Snippet of wisdom 96.In this series, I select my favourite, most insightful moments from previous episodes of the podcast.Today my guest, the vertical development expert Ryan Gottfredson, talks about the three levels of personal growth, and the factors that shape our mindsets and behavior.Press play to learn what's blocking your next level of growth.˚VALUABLE RESOURCES:Listen to the full conversation with Ryan Gottfredson in episode #512:https://personaldevelopmentmasterypodcast.com/512˚Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor˚

The ChatGPT Report
172 - Are we in a Mass AI Psychosis

The ChatGPT Report

Play Episode Listen Later Feb 26, 2026 12:50


Main Takeaways: The "Stargate" Collapse: The $500 billion partnership between OpenAI, SoftBank, and Oracle is being labeled "vaporware." Reports suggest the deal is in shambles due to internal power struggles and a lack of actual liquidity, with SoftBank allegedly scrambling for 90% debt financing.Market Volatility vs. Reality: There is a disconnect between market reactions and product performance. While Anthropic's claim that Claude can streamline COBOL code caused IBM's stock to drop 10%, critics argue the public is still in a "demo phase" of awe and hasn't realized the tech often fails to work as advertised.Reliability Concerns: High-profile failures are surfacing, such as Claude reportedly deleting a Meta researcher's entire Gmail history. This raises alarms as these same models are being positioned to manage critical infrastructure like banking and the IRS.Corporate Espionage: Anthropic has reported "industrial-scale distillation attacks" from Chinese labs (DeepSeek, Moonshot AI, MiniMax), claiming they used over 24,000 fraudulent accounts to "siphon" Claude's capabilities to train their own models.The "Theranos" Comparison: Critics are drawing parallels between current AI labs and failed startups like Theranos, arguing that the goal of reaching AGI via Large Language Models may be technically impossible, creating a "feedback loop delusion" to sustain venture capital investment.Strategic Shifts: OpenAI is pivoting toward traditional consulting giants (McKinsey, Accenture) to integrate its tech, while the community continues to debate the technical distinctions between generative AI and autonomous agents. @XFreeze @MrEwanMorrison @sterlingcrispin @dwlz

The Information's 411
Inside Amazon's Potential $50B OpenAI Investment, Nvidia's Impressive Earnings & Stock Fall

The Information's 411

Play Episode Listen Later Feb 26, 2026 43:10


The Information's Sri Muppidi talks with TITV Host Akash Pasricha about Amazon's potential $50 billion OpenAI deal and its AGI-triggered terms. We also talk with Wedbush Managing Director Matt Bryson about Nvidia's blowout quarter, stock selloff, China export risks and margins, and reporter Anita Ramaswamy about how AI is reshaping Salesforce and Snowflake's growth and how Alphabet, Amazon and Meta are using debt to fund AI capex. Lastly, we get into autonomous warships and defense investing with Deputy Bureau Chief of Finance Cory Weinberg and the new data infrastructure stack for humanoid robots with Encord Co-CEOs Ulrik Stig Hansen and Eric Landau. Articles discussed on this episode: https://www.theinformation.com/articles/amazons-50-billion-investment-openai-hinge-ipo-agi https://www.theinformation.com/articles/alphabet-big-tech-borrow-hundreds-billions https://www.theinformation.com/articles/autonomous-warship-startup-saronic-raising-7-5-billion-valuation https://www.theinformation.com/newsletters/ai-agenda/robot-data-startup-raises-60-million Subscribe: YouTube: https://www.youtube.com/@theinformation The Information: https://www.theinformation.com/subscribe_h Sign up for the AI Agenda newsletter: https://www.theinformation.com/features/ai-agenda TITV airs weekdays on YouTube, X and LinkedIn at 10AM PT / 1PM ET. Or check us out wherever you get your podcasts. Follow us: X: https://x.com/theinformation IG: https://www.instagram.com/theinformation/ TikTok: https://www.tiktok.com/@titv.theinformation LinkedIn: https://www.linkedin.com/company/theinformation/

Private Equity Funcast
Private Equity Predictions 2026

Private Equity Funcast

Play Episode Listen Later Feb 25, 2026 49:55


It's our annual Predictions episode (and by annual, we mean just the years we remember to record one). Devin and Jim offer their hot takes on fundraising, liquidity, why artificial general intelligence (AGI) is still years away, and whether or not the world is officially "over-softwared." PE FunCast New Episodes Every Wednesday Follow us on social media and subscribe to our Substack! LinkedIn: https://www.linkedin.com/company/parkergale-capital Instagram:https://www.instagram.com/pefuncast Substack: https://substack.com/@pefuncast Facebook:https://www.facebook.com/people/PE-FunCast/61580605382460/?mibextid=wwXIfr&rdid=UXSOfkHvpixQjCyB&share_url=https%3A%2F%2Fwww.facebook.com%2Fshare%2F14VqLVUrhVD%2F%3Fmibextid%3DwwXIfr TikTok: https://www.tiktok.com/@pefuncast X: https://x.com/PEFunCast

IBM Analytics Insights Podcasts
The Hidden Laws Behind Every Decision You Make — with Princeton's Tom Griffiths and his new book, The Laws of Thought

IBM Analytics Insights Podcasts

Play Episode Listen Later Feb 25, 2026 43:32


Tom Griffiths, Henry R. Luce Professor at Princeton University, joins the show to explore the surprising science behind how we actually think. His new book, The Laws of Thought, bridges computational cognitive science and AI—challenging assumptions about decision-making, neural networks, and the path to artificial general intelligence.

Show Notes
Timestamps:
01:21 – Meet Tom Griffiths
05:27 – Tom's Book
06:58 – A Neural Network
09:55 – AGI?
19:10 – Writing the Book
20:45 – The Laws of Thought
27:24 – The Neural Network Surprise
31:33 – Learning from Experts
35:19 – Decision Making vs. Probability
42:36 – Government AI Considerations

Links:
LinkedIn: linkedin.com/in/tom-griffiths-7b31a0364
Book: The Laws of Thought – Macmillan

#TheLawsOfThought, #CognitiveScience, #ArtificialIntelligence, #AGI, #NeuralNetworks, #DecisionMaking, #Probability, #AIResearch, #Princeton, #TechPodcast, #MakingDataSimple, #AIGovernment, #MachineLearning

Want to be featured as a guest on Making Data Simple? Reach out to us at almartintalksdata@gmail.com and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

The Ezra Klein Show
How Quickly Will A.I. Agents Rip Through the Economy?

The Ezra Klein Show

Play Episode Listen Later Feb 24, 2026 98:17


A.I. agents are here. Have they changed your life yet? The release of agents like Claude Code marked a new pivot point in the history of A.I. We are leaving the chatbot era and entering the agentic era — where A.I. is capable of completing all kinds of tasks on its own, and even collaborating and communicating with other A.I. It isn't clear yet whether these models actually make their users meaningfully more productive. But the technology is continuing to improve; there are few signs that it is close to plateauing. So what might this new era mean for our economy, our labor market and our kids?

Jack Clark is a co-founder of Anthropic, the company behind Claude and Claude Code. His newsletter, Import AI, has been one of my go-to reads to track the capabilities of different models over the years. In this conversation, I ask him to share how he sees this moment — how the technology is changing, whether it is leading to meaningful changes in how we work and think, and how policy needs to or can change in response to any job displacement on the horizon.

Mentioned:
“Import AI” by Jack Clark
“2026: This is AGI” by Pat Grady and Sonya Huang
“Why and How Governments Should Monitor AI Development” by Jess Whittlestone and Jack Clark
“Anthropic's Chief on A.I.: ‘We Don't Know if the Models Are Conscious,'” Interesting Times with Ross Douthat

Book Recommendations:
A Wizard of Earthsea by Ursula K. Le Guin
The True Believer by Eric Hoffer
There Is No Antimemetics Division by qntm

Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com. You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs. This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact-checking by Michelle Harris with Mary Marge Locker and Kate Sinclair.
Our senior engineer is Jeff Geld, with additional mixing by Isaac Jones and Aman Sahota. Our executive producer is Claire Gordon. The show's production team also includes Marie Cascione, Annie Galvin, Kristin Lin, Emma Kehlbeck, Jack McCordick, Marina King and Jan Kobal. Original music by Pat McCusker. Audience strategy by Kristina Samulewski and Shannon Busta. The director of New York Times Opinion Audio is Annie-Rose Strasser. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The Todd Herman Show
The Anti-Human Ideology of OPEN AI's Sam Altman Ep-2591

The Todd Herman Show

Play Episode Listen Later Feb 24, 2026 37:58


Renue Healthcare
https://Renue.Healthcare/Todd
Your journey to a better life starts at Renue Healthcare. Visit https://Renue.Healthcare/Todd

Bulwark Capital
https://KnowYourRiskPodcast.com
Be confident in your portfolio with Bulwark! Schedule your free Know Your Risk Portfolio review. Go to KnowYourRiskPodcast.com today.

Alan's Soaps
https://www.AlansArtisanSoaps.com
Use coupon code TODD to save an additional 10% off the bundle price.

Bonefrog
https://BonefrogCoffee.com/Todd
Get the new limited release, The Sisterhood, created to honor the extraordinary women behind the heroes. Use code TODD at checkout to receive 10% off your first purchase and 15% on subscriptions.

LISTEN and SUBSCRIBE at:
The Todd Herman Show - Podcast - Apple Podcasts
The Todd Herman Show | Podcast on Spotify

WATCH and SUBSCRIBE at: Todd Herman - The Todd Herman Show - YouTube

The Anti-Human Ideology of OpenAI's Sam Altman // NY-Times Writer Baffled By NY-Times Readers Running Schools // One Of These Guys Is An MD, Writer of 40 Books & Works for Oprah: The Other Is Smart

Episode links:
Insane: Meta's Director of AI Safety and Alignment gave OpenClaw bot full access to her computer and email. She couldn't stop it from deleting her entire inbox. She's supposed to guardrail Meta's AI and future AGI.
Months before Jesse Van Rootselaar became the suspect in the mass shooting that devastated a rural town in British Columbia, Canada, OpenAI considered alerting law enforcement about her interactions with its ChatGPT chatbot, the company said. (The shooter was a man.)
SAM ALTMAN: “People talk about how much energy it takes to train an AI model … But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.”
This teacher-turned-cognitive scientist shared a disturbing reality that left the room stunned: “Our kids are LESS cognitively capable than we were at their age.” Every previous generation has outperformed its parents since we began recording in the late 1800s.
VIDEO | Child, 11, accused of killing father arrives at PA court hearing in handcuffs
AG Uthmeier CHEERS lawsuit against Mark Zuckerberg over social media being designed to be addictive! “Kids, they won't peel their eyes off the screens these days. The unlimited scrolling, the push notifications, videos that start by themselves, all these different techniques to make it where you can't even put the phone down. We see evidence of mental health disorders, heightened tendencies for suicide, eating disorders, an obsession with image. This is not healthy for young people. It's addictive. It's harmful.”
Dr. John Demartini, who writes for Oprah & starred in “The Secret,” just said the children who have been raped — attracted it into their lives — and then ends by saying there are upsides to the murder of kids, too. P.S. Yes, he's in the Epstein files.
UFC fighter Paddy Pimblett on men and suicide

80,000 Hours Podcast with Rob Wiblin
Why Teaching AI Right from Wrong Could Get Everyone Killed | Max Harms, MIRI

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Feb 24, 2026 161:20


Most people in AI are trying to give AIs ‘good' values. Max Harms wants us to give them no values at all. According to Max, the only safe design is an AGI that defers entirely to its human operators, has no views about how the world ought to be, is willingly modifiable, and completely indifferent to being shut down — a strategy no AI company is working on at all.

In Max's view, any grander preferences about the world, even ones we agree with, will necessarily become distorted during a recursive self-improvement loop, and be the seeds that grow into a violent takeover attempt once that AI is powerful enough.

It's a vision that springs from the worldview laid out in If Anyone Builds It, Everyone Dies, the recent book by Eliezer Yudkowsky and Nate Soares, two of Max's colleagues at the Machine Intelligence Research Institute.

To Max, the book's core thesis is common sense: if you build something vastly smarter than you, and its goals are misaligned with your own, then its actions will probably result in human extinction.

And Max thinks misalignment is the default outcome. Consider evolution: its “goal” for humans was to maximise reproduction and pass on our genes as much as possible. But as technology has advanced we've learned to access the reward signal it set up for us, pleasure — without any reproduction at all, by having sex while on birth control for instance.

We can understand intellectually that this is inconsistent with what evolution was trying to design and motivate us to do. We just don't care.

Max thinks current ML training has the same structural problem: our development processes are seeding AI models with a similar mismatch between goals and behaviour. Across virtually every training run, models designed to align with various human goals are also being rewarded for persisting, acquiring resources, and not being shut down.

This leads to Max's research agenda.
The idea is to train AI to be “corrigible” and defer to human control as its sole objective — no harmlessness goals, no moral values, nothing else. In practice, models would get rewarded for behaviours like being willing to shut themselves down or surrender power. According to Max, other approaches to corrigibility have tended to treat it as a constraint on other goals like “make the world good,” rather than a primary objective in its own right. But those goals gave AI reasons to resist shutdown and otherwise undermine corrigibility. If you strip out those competing objectives, alignment might follow naturally from AI that is broadly obedient to humans.

Max has laid out the theoretical framework for “Corrigibility as a Singular Target,” but notes that essentially no empirical work has followed — no benchmarks, no training runs, no papers testing the idea in practice. Max wants to change this — he's calling for collaborators to get in touch at maxharms.com.

Links to learn more, video, and full transcript: https://80k.info/mh26

This episode was recorded on October 19, 2025.

Chapters:
Cold open (00:00:00)
Who's Max Harms? (00:01:22)
A note from Rob Wiblin (00:01:58)
If anyone builds it, will everyone die? The MIRI perspective on AGI risk (00:04:26)
Evolution failed to 'align' us, just as we'll fail to align AI (00:26:22)
We're training AIs to want to stay alive and value power for its own sake (00:44:31)
Objections: Is the 'squiggle/paperclip problem' really real? (00:53:54)
Can we get empirical evidence re: 'alignment by default'? (01:06:24)
Why do few AI researchers share Max's perspective? (01:11:37)
We're training AI to pursue goals relentlessly — and superintelligence will too (01:19:53)
The case for a radical slowdown (01:26:07)
Max's best hope: corrigibility as stepping stone to alignment (01:29:09)
Corrigibility is both uniquely valuable, and practical, to train (01:33:44)
What training could ever make models corrigible enough? (01:46:13)
Corrigibility is also terribly risky due to misuse risk (01:52:44)
A single researcher could make a corrigibility benchmark. Nobody has. (02:00:04)
Red Heart & why Max writes hard science fiction (02:13:27)
Should you homeschool? Depends how weird your kids are. (02:35:12)

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Katy Moore

Eye On A.I.
#323 David Ha: Why Model Merging Could Be the Next AI Breakthrough

Eye On A.I.

Play Episode Listen Later Feb 24, 2026 57:21


This episode is sponsored by tastytrade. Trade stocks, options, futures, and crypto in one platform with low commissions and zero commission on stocks and crypto. Built for traders who think in probabilities, tastytrade offers advanced analytics, risk tools, and an AI-powered Search feature. Learn more at https://tastytrade.com/

Artificial intelligence is reaching a turning point. Instead of building bigger and bigger models, what if the real breakthrough comes from letting AI evolve? In this episode of Eye on AI, David Ha, Co-Founder and CEO of Sakana AI, explains why evolutionary strategies and collective intelligence could reshape the future of machine learning. We explore model merging, multi-agent systems, Monte Carlo tree search, and the AI Scientist framework designed to generate and evaluate new research ideas. The conversation dives into open-ended discovery, quality and diversity in AI systems, world models, and whether artificial intelligence can push beyond the boundaries of human knowledge. If you're interested in AGI, evolutionary AI, frontier models, AI research automation, or how AI could start discovering science on its own, this episode offers a clear look at where the field may be heading next.

Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI

(00:00) AI Should Evolve, Not Just Scale
(03:54) David's Journey From Finance to Evolutionary AI
(10:18) Why Gradient Descent Gets Stuck
(18:12) Model Merging and Collective Intelligence
(28:18) Combining Closed Frontier Models
(32:56) Inside the AI Scientist Experiment
(38:11) Parent Selection, Diversity and Innovation
(49:25) Can AI Discover Truly New Knowledge?
(53:05) Why Continual Learning Matters

Mailbox Money Show
Webinar - Winning the 2025 Tax Game

Mailbox Money Show

Play Episode Listen Later Feb 23, 2026 56:00


Get my new book: https://bronsonequity.com/fireyourself

Download my new special report - How to Use Inflation to Your Advantage - www.bronsonequity.com/inflation

Join Bronson Hill on the Mailbox Money Show for a replay of the live webinar "Winning the 2025 Tax Game," where high-net-worth investors and real estate pros dive deep into proven, legal strategies to slash taxes, protect wealth, and keep more money working for you in 2025 and beyond.

Panel:
KC Chohan: Founder specializing in charitable structures (private foundations, donor-advised funds, asset donations) that deliver up to 50% AGI deductions while maintaining control and legacy—perfect for physicians, attorneys, and multi-seven-figure earners.
Rob McBride: Experienced CPA focused on real estate investors and pass-through businesses; covers maximizing deferrals, capital loss harvesting, cost segregation, real estate professional status, recapture risks, and proper entity setup for massive savings.
Caleb Guilliams: Author of The And Asset; explains optimized whole life insurance as a tax-deferred, tax-free-access storage vehicle for capital, plus how to leverage it for real estate, business acquisitions, and generational wealth transfer.

From Augusta Rule rentals and paying your kids to bonus depreciation pitfalls, proactive quarterly planning, and building the right advisory team, this session delivers high-impact ideas to minimize your IRS bill without sacrificing growth or lifestyle. Ideal for active real estate investors, business owners, and anyone serious about mailbox money in a changing tax landscape.

TIMESTAMPS
0:40 - Event Overview: Winning the 2025 Tax Game
2:48 - Panelist Intros: Rob McBride, Caleb Guilliams, KC Chohan
3:55 - KC Chohan: Charitable Strategies & Philanthropy Structures
7:02 - Rob McBride: CPA Perspective, Entity Optimization, Tax Planning
9:58 - Caleb Guilliams: Whole Life Insurance for Tax Efficiency & Capital Storage
12:05 - Low-Hanging Fruit: Entity Structure & QBI Benefits
13:02 - KC: Right Entity Type Can Reduce Taxes 50%
16:28 - Rob: Maximize Retirement Deferrals & Capital Loss Harvesting
19:46 - Caleb: Augusta Rule, Paying Kids, Depreciation via Real Estate
24:18 - Bonus Depreciation & Accelerated Write-Offs (KC & Rob)
27:26 - Recapture Risks & Long-Term Holding Periods (Rob)
30:07 - Life Insurance Benefits: Tax-Deferred Growth & Tax-Free Access (Caleb)
34:23 - Team Building & Proactive Quarterly Planning (KC)
37:10 - Books & Resources Recommendations
39:34 - 2026 Outlook: TCJA Permanence & Bonus Depreciation Focus
43:55 - Panelist Contact & Resources Round

Join the Wealth Forum: bronsonequity.com/wealth

Connect with the Guests:
KC Chohan - Website: https://www.togethercfo.com/
Rob McBride - Website: mrmcpas.com
Caleb Guilliams - Website: taxandassets.com, Email: caleb@betterwealth.com

#TaxStrategy #TaxPlanning #RealEstateTax #Depreciation #CharitableGiving #LifeInsurance #EntityStructure

Personal Development Mastery
How to Stop Overthinking Your Way Through Change and Start Listening for Clarity, with Sarah Andreas | #582

Personal Development Mastery

Play Episode Listen Later Feb 23, 2026 38:10 Transcription Available


Have you ever felt successful on the outside but restless within, as if you're outgrowing the life you've built?

If you're navigating a major life or career transition and struggling to make sense of it with logic alone, this episode is your guide to moving beyond mental stuckness. Through creativity, mindfulness, and embodiment practices, Sarah Andreas helps you understand the inner shifts necessary for authentic reinvention, especially when your identity feels connected to past success.

Discover how creativity, beyond art, can unlock clarity and reconnect you with your future self. Learn why letting go of long-held professional identities is essential for meaningful growth. Explore Sarah's 3-step framework of Reveal, Render, and Rise to navigate change with intention, not fear.

Press play now to learn how to move through transitions with confidence, creativity, and the courage to become who you're meant to be.
˚
KEY POINTS AND TIMESTAMPS:
01:23 - Introducing Sarah Andreas and the idea of reinvention
02:34 - Why creativity brings clarity beyond logic
05:22 - Embodiment practices and getting out of the head
07:25 - External success and inner restlessness
10:21 - Professional identity as a barrier to change
14:25 - The reinvention process: reveal, render, rise
18:53 - Holding plans lightly and navigating transition
23:15 - Reframing midlife crisis as awakening
28:06 - Embracing uncertainty and stepping into the unknown
˚
MEMORABLE QUOTE:
"If you're not living a life that you love, you need to do reinvention."
˚
VALUABLE RESOURCES:
Sarah's website: https://sarahandreas.com/
˚
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor
˚

Digital, New Tech & Brand Strategy - MinterDial.com
Navigating Agentic AI: Peter Morgan on Technology, Ethics, and the Future of Work (MDE643)

Digital, New Tech & Brand Strategy - MinterDial.com

Play Episode Listen Later Feb 22, 2026 58:16


In this episode of Minter Dialogue, host Minter Dial sits down with Peter Morgan, a theoretical physicist turned entrepreneur, data scientist, and AI consultant. With a career that spans from quantum particle physics to building tech companies and now leading Deep Learning Partnership, Peter Morgan brings a provocative and insightful perspective on the current state and future of artificial intelligence. Together, they explore the rapid evolution of AI — from large language models to today's focus on agentic AI and autonomous digital workers. Peter Morgan offers a candid look at the challenges and opportunities businesses face when implementing AI, demystifies artificial general intelligence (AGI), and weighs in on topics like AI and human emotion, the value of proprietary data, and ethical leadership in a time of technological upheaval. The conversation also spans the impact of AI on industries such as healthcare and cybersecurity, the shifting role of the human workforce, and what the emergence of agentic AI means for both business strategy and society at large. Whether you're an executive wondering how to future-proof your organization, or simply AI-curious, this episode offers a blend of humility, practical advice, and mind-expanding discussion that's sure to spark new ideas about our place in the age of intelligent machines.

Intelligence with Everyone: RL @ MiniMax, with Olive Song, from AIE NYC & Inference by Turing Post

Play Episode Listen Later Feb 22, 2026 55:29


Olive Song from MiniMax shares how her team trains the M series frontier open-weight models using reinforcement learning, tight product feedback loops, and systematic environment perturbations. This crossover episode weaves together her AI Engineer Conference talk and an in-depth interview from the Inference podcast. Listeners will learn about interleaved thinking for long-horizon agentic tasks, fighting reward hacking, and why they moved RL training to FP32 precision. Olive also offers a candid look at debugging real-world LLM failures and how MiniMax uses AI agents to track the fast-moving AI landscape.

Use the Granola Recipe Nathan relies on to identify blind spots across conversations, AI research, and decisions: https://bit.ly/granolablindspot

LINKS:
Conference Talk (AI Engineer, Dec 2025) – https://www.youtube.com/watch?v=lY1iFbDPRlw
Interview (Turing Post, Jan 2026) – https://www.youtube.com/watch?v=GkUMqWeHn40

Sponsors:
Claude: Claude is the AI collaborator that understands your entire workflow, from drafting and research to coding and complex problem-solving. Start tackling bigger problems with Claude and unlock Claude Pro's full capabilities at https://claude.ai/tcr
Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai

CHAPTERS:
(00:00) About the Episode
(04:15) Minimax M2 presentation (Part 1)
(17:59) Sponsors: Claude | Tasklet
(21:22) Minimax M2 presentation (Part 2)
(21:26) Research life and culture
(26:27) Alignment, safety and feedback
(32:01) Long-horizon coding agents
(35:57) Open models and evaluation
(43:29) M2.2 and researcher goals
(48:16) Continual learning and AGI
(52:58) Closing musical summary
(55:49) Outro

PRODUCED BY: https://aipodcast.ing

SOCIAL LINKS:
Website: https://www.cognitiverevolution.ai
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://linkedin.com/in/nathanlabenz/
Youtube: https://youtube.com/@CognitiveRevolutionPodcast
Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431
Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk

Lenny's Podcast: Product | Growth | Career
Head of Claude Code: What happens after coding is solved | Boris Cherny

Lenny's Podcast: Product | Growth | Career

Play Episode Listen Later Feb 19, 2026 87:45


Boris Cherny is the creator and head of Claude Code at Anthropic. What began as a simple terminal-based prototype just a year ago has transformed the role of software engineering and is increasingly transforming all professional work.

We discuss:
1. How Claude Code grew from a quick hack to 4% of public GitHub commits, with daily active users doubling last month
2. The counterintuitive product principles that drove Claude Code's success
3. Why Boris believes coding is “solved”
4. The latent demand that shaped Claude Code and Cowork
5. Practical tips for getting the most out of Claude Code and Cowork
6. How underfunding teams and giving them unlimited tokens leads to better AI products
7. Why Boris briefly left Anthropic for Cursor, then returned after just two weeks
8. Three principles Boris shares with every new team member

Brought to you by:
DX—The developer intelligence platform designed by leading researchers: https://getdx.com/lenny
Sentry—Code breaks, fix it faster: https://sentry.io/lenny
Metaview—The AI platform for recruiting: https://metaview.ai/lenny

Episode transcript: https://www.lennysnewsletter.com/p/head-of-claude-code-what-happens

Archive of all Lenny's Podcast transcripts: https://www.dropbox.com/scl/fo/yxi4s2w998p1gvtpu4193/AMdNPR8AOw0lMklwtnC0TrQ?rlkey=j06x0nipoti519e0xgm23zsn9&st=ahz0fj11&dl=0

Where to find Boris Cherny:
X: https://x.com/bcherny
LinkedIn: https://www.linkedin.com/in/bcherny
Website: https://borischerny.com

Where to find Lenny:
Newsletter: https://www.lennysnewsletter.com
X: https://twitter.com/lennysan
LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Introduction to Boris and Claude Code
(03:45) Why Boris briefly left Anthropic for Cursor (and what brought him back)
(05:35) One year of Claude Code
(08:41) The origin story of Claude Code
(13:29) How fast AI is transforming software development
(15:01) The importance of experimentation in AI innovation
(16:17) Boris's current coding workflow (100% AI-written)
(17:32) The next frontier
(22:24) The downside of rapid innovation
(24:02) Principles for the Claude Code team
(26:48) Why you should give engineers unlimited tokens
(27:55) Will coding skills still matter in the future?
(32:15) The printing press analogy for AI's impact
(36:01) Which roles will AI transform next?
(40:41) Tips for succeeding in the AI era
(44:37) Poll: Which roles are enjoying their jobs more with AI
(46:32) The principle of latent demand in product development
(51:53) How Cowork was built in just 10 days
(54:04) The three layers of AI safety at Anthropic
(59:35) Anxiety when AI agents aren't working
(01:02:25) Boris's Ukrainian roots
(01:03:21) Advice for building AI products
(01:08:38) Pro tips for using Claude Code effectively
(01:11:16) Thoughts on Codex
(01:12:13) Boris's post-AGI plans
(01:14:02) Lightning round and final thoughts

References: https://www.lennysnewsletter.com/p/head-of-claude-code-what-happens

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com

Radical Candor
AI Gods, Space Empires, and the Stories Tech Uses to Justify Power with Adam Becker 8|3

Radical Candor

Play Episode Listen Later Feb 18, 2026 66:51


What if the loudest stories about the future—AI gods, Mars colonies, digital immortality—aren't science at all, but science fiction masquerading as inevitability? In this episode of The Radical Candor Podcast, Kim Scott and Amy Sandler are joined by science journalist and astrophysicist Adam Becker (PhD in computational cosmology), author of More Everything Forever. Adam breaks down the “big three” myths that dominate Silicon Valley's imagination: space colonization, superintelligent god-like AI, and the singularity. He explains why both the utopian and apocalyptic versions of AI stories often share the same assumption—unimaginable AI power—and why that assumption doesn't match reality. They also explore the deeper pattern underneath these myths: the belief that every problem can be solved with technology (usually computer technology), even when the barriers are political and social—collective action, persuasion, solidarity, and power. Along the way, Adam shares how he stayed sane while writing about “seriously disturbing ideas,” and why reconnecting with the natural world (and real human relationships) is a necessary antidote to screen-mediated life. If you've ever felt overwhelmed by the “AI will save us” vs. “AI will doom us” debate, this conversation offers a clearer, more grounded frame—and a reminder that being human matters.

Website | Instagram | TikTok | LinkedIn | YouTube | Bluesky

Resources for show notes:
Adam Becker's website
More Everything Forever book page
Adam Becker on Star Talk podcast
Dave Troy presents: Understanding TESCREAL with Dr. Timnit Gebru and Émile Torres
Why Silicon Valley's Most Powerful People Are So Obsessed With Hobbits

Referenced in conversation:
Blade Runner (as an example of dystopian sci-fi being misunderstood)
Star Wars / Jabba the Hutt (as an example of misreading stories)
Lord of the Rings / Palantír (as a cautionary reference)
Jurassic Park (“they didn't stop to consider whether they should”)
Public libraries (as a civic good worth supporting)

Chapters:
(00:00) Introduction: Kim and Amy welcome Adam Becker to unpack Silicon Valley's stories about the future.
(06:06) The Myths Driving Tech Ideology: Space colonization, superintelligent AI, and the singularity, and why they don't hold up.
(11:52) When Sci-Fi Turns into Strategy: How dystopian stories get misread as roadmaps (Palantir, “Torment Nexus,” and more).
(15:06) More Everything Forever: Why endless expansion feels inevitable in tech, and why Adam argues it's flawed.
(21:24) “Can” vs. “Should”: Why tech leaders dodge both questions, and what that reveals about power.
(23:19) You Can't Escape Politics by Going to Space: Why “Mars as a reset button” is a fantasy, and politics follows humans everywhere.
(33:22) AI Doom vs. AI Utopia: Why both narratives rely on the same shaky assumption about “AGI.”
(37:21) Solidarity as a Counterbalance: Why labor organizing matters when leadership values diverge from workers' values.
(41:02) “AGI Will Fix Climate”: Why betting on future AI while burning more energy now is a dangerous logic trap.
(01:03:50) Conclusion

Learn more about your ad choices. Visit megaphone.fm/adchoices

Unchained
Uneasy Money: Are Institutions Creating a New Crypto Meta?

Unchained

Play Episode Listen Later Feb 16, 2026 73:03


The crew unpacks BlackRock buying UNI, ARK, Citadel, DTCC, the Intercontinental Exchange and other TradFi players backing Zero, Vitalik's thoughts on AI, and more.

Thank you to our sponsors!
Fuse: The Energy Network
MultiChain Advisors
Crypto Tax Girl

AI safety chiefs are leaving, BlackRock's launching on Uniswap and buying UNI, LayerZero launches “the last blockchain” with institutional backing, Kaito is launching attention markets, Base is abandoning social and Vitalik has some thoughts on AI. Hosts Kain Warwick, Luca Netz and Taylor Monahan unpack these and more in yet another packed episode of Uneasy Money. Find out why Kain thinks the Uniswap and LayerZero news point to a new meta reminiscent of DeFi Summer. Plus, is Coinbase's Base playing it too safe? And is Vitalik fighting a losing battle?

Hosts:
Luca Netz, CEO of Pudgy Penguins
Kain Warwick, Founder of Infinex and Synthetix
Taylor Monahan, Security at MetaMask

Links:
Unchained:
LayerZero Launches ‘Zero' Layer 1 as Citadel, ARK Buy ZRO
How Zero Blockchain Cracked 2 Million TPS and Is Still Decentralized
Vitalik Buterin Pushes Back on the ‘Race to AGI,' Outlines Ethereum-Led AI Path
When AI Agents Take Over, What Does a Post-Human Economy Look Like?
Uneasy Money: How the Increasingly Better AI Agents Are Being Used Onchain
Uneasy Money: Why Crypto Still Can't Overcome Its ICO Struggles

Learn more about your ad choices. Visit megaphone.fm/adchoices

Conservative Review with Daniel Horowitz
AI Is Not a Substitute for Human Thinking | 2/12/26

Conservative Review with Daniel Horowitz

Play Episode Listen Later Feb 12, 2026 58:57


Artificial intelligence is transforming everything from writing and research to medicine and productivity — or at least it appears to be doing so. But are we gaining only illusory efficiency at the cost of something deeper and more long-term? Are anti-market forces and government and industry gaslighting steering capital to the wrong uses of AI based on the assumption that we will achieve “general intelligence”? What responsibility do we have as humans to make sure we approach available LLMs in a way that won't supplant human cognition? In this thought-provoking conversation, I sit down with leading innovation theorist John Nosta, author of "The Borrowed Mind: Reclaiming Human Thought in the Age of AI," to explore one of the most important questions of our time: Are we using AI as a tool to augment human thought, or are we slowly outsourcing our thinking to it? From "frictionless intelligence" being a trap and the myth of AGI to the danger of "cognitive obsolescence," Nosta reveals why the struggle to think is a feature, not a bug, of humanity. Learn how to reclaim your agency and use technology as a tool — without becoming a tool yourself. Learn more about your ad choices. Visit megaphone.fm/adchoices