Podcasts about AGI

  • 1,942 PODCASTS
  • 6,378 EPISODES
  • 41m AVG DURATION
  • 4 DAILY NEW EPISODES
  • Mar 19, 2026 LATEST

POPULARITY (chart, 2019–2026)

Best podcasts about AGI


Latest podcast episodes about AGI

Tank Talks
Why Building AI Matters More Than Using It with Ali Asaria of Transformer Lab

Mar 19, 2026 · 47:05


In this episode of Tank Talks, Matt Cohen sits down with Ali Asaria, Co-Founder of Transformer Lab, to unpack the less visible side of the AI boom, from broken machine learning tools to the rise of autonomous research agents. Ali shares what it really looks like inside modern AI development and why the biggest opportunity isn't just using models, but having the ability to train, control, and improve them.

Ali also reflects on his journey building across multiple tech waves, from creating BrickBreaker on BlackBerry to scaling Well.ca and Tulip, and now tackling AI infrastructure with Transformer Lab. He breaks down the realities most founders don't talk about: why great products lose deals, how long enterprise sales actually take, and why success often comes down to trust, timing, and people more than technology.

Beyond AI, the conversation takes a broader turn into the future of innovation. Ali challenges the tech industry, especially in Canada, to think bigger, rebuild public trust, and focus on solving real-world problems through ambitious "mega projects." If you're trying to separate AI hype from reality and understand where the real leverage is being created, this episode gives you a much clearer lens.

Building BrickBreaker on 150M Devices (00:02:41)
How a side project at BlackBerry turned into a global phenomenon, and the early lesson that distribution beats perfection. Ali shares how building something simple but widely adopted gave him an early taste of scale. It also shaped his belief that getting into users' hands fast matters more than polishing endlessly in isolation.

The Early Days of E-Commerce in Canada (00:05:36)
Packing boxes manually, hacking payment systems, and why investors believed e-commerce would never work in Canada. From manually processing credit cards to building infrastructure from scratch, Ali walks through how scrappy the early days really were. It's a reminder that many "obvious" markets today once looked completely unworkable.

Scaling Well.ca and the McKesson Exit (00:08:18)
How relationships with partners turned into acquisition opportunities, and the messy reality behind "successful exits." Ali explains how long-term partnerships quietly set the stage for acquisition, even before it was intentional. He also highlights how unpredictable and fragile deals can be, even when they seem done.

Enterprise Sales Lessons from Tulip (00:11:19)
Why great products don't win deals: trust, relationships, and the human side of multi-million dollar contracts. Ali breaks down how enterprise sales are less about features and more about credibility and relationships built over time. He also shares how incumbents win not because they're better, but because they're already embedded.

The Hard Truth About Startup Life (00:13:52)
"90% hell, 10% fun." What founders don't talk about publicly, and how to choose the right investors. Behind the highlight reels, Ali emphasizes how difficult the journey really is and how rarely things go to plan. Choosing the right partners becomes critical when things inevitably get hard.

The Moment AI Changed Everything (00:16:22)
Why language models shattered the belief that human intelligence couldn't be replicated. Ali describes the exact moment his worldview shifted after seeing what LLMs could do. What once felt impossible suddenly became inevitable, changing how he thought about both technology and opportunity.

What Transformer Lab Actually Does (00:20:11)
Simplifying AI model training, orchestration, and infrastructure across local machines and massive GPU clusters. Ali explains how fragmented and complex current AI workflows are, especially for researchers. Transformer Lab aims to remove that friction and make building models far more accessible and efficient.

Scaling AI From One Machine to Thousands (00:23:14)
The technical leap required to move from hobbyist experimentation to full-scale AI labs. Moving from a single machine to distributed systems introduces massive complexity most developers never see. Ali breaks down why solving this is essential for the next generation of AI builders.

AI Hype vs Reality (00:25:41)
Why Ali believes we may already have AGI, and why valuations still don't make sense. Ali challenges the common narrative by arguing we're closer to AGI than people admit. At the same time, he questions whether the current market can realistically justify the valuations we're seeing.

Canada's Startup Ecosystem: Challenges & Advantages (00:32:11)
Why geography matters less than mindset, and why building is always hard everywhere. Ali pushes back on the idea that location is the primary constraint for founders. Instead, he argues that resilience and ambition matter far more than where you're building from.

Why Tech Has Lost Public Trust (00:34:12)
From rebels to power players, and what founders must do to rebuild credibility. Ali reflects on how the tech industry's image has shifted over time and why that matters. Rebuilding trust requires focusing on real impact, not just growth or financial wins.

The Case for Mega Projects (00:38:09)
Why Canada needs bold, visible innovation bets that actually improve everyday life. Ali argues that large-scale, collaborative initiatives could realign public perception and drive meaningful progress. The key is solving problems people actually feel in their daily lives.

The Future of AI and Talent Sovereignty (00:41:28)
Why owning talent matters more than owning infrastructure in the AI race. Ali emphasizes that long-term advantage comes from people, not just technology or compute. Countries that develop and retain top talent will ultimately shape the future of AI.

About Ali Asaria
Ali Asaria is a serial entrepreneur and one of Canada's most accomplished technology founders. He created the iconic BrickBreaker game on BlackBerry, founded Well.ca (later acquired by McKesson), and built Tulip into a leading enterprise retail platform backed by top-tier investors. He is now the co-founder of Transformer Lab, an open-source platform designed to simplify and scale AI model development. His work focuses on democratizing access to AI infrastructure, enabling developers and organizations to build advanced models without the complexity traditionally required. Ali is known for his bold thinking on AI, startup ecosystems, and the future of technology, often challenging conventional narratives around innovation and scale.

Connect with Ali Asaria on LinkedIn: https://www.linkedin.com/in/aliasaria/
Visit the Transformer Lab website: https://lab.cloud/
Connect with Matt Cohen on LinkedIn: https://ca.linkedin.com/in/matt-cohen1
Visit the Ripple Ventures website: https://www.rippleventures.com/

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit tanktalks.substack.com

Drilled
Drilling Deep: Karen Hao on How Big AI Is Gambling with the Planet's Chips

Mar 17, 2026 · 52:20 · Transcription available


What is “artificial intelligence”? Is it a fancy technology? A management consulting buzzword? A PR effort to inflate corporate share prices? A political project designed to shape the world more to the liking of the billionaire class? A way to replace needy human workers with machines? Perhaps it’s all of that—and more. In her groundbreaking book Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, award-winning journalist Karen Hao argues that AI—and the profit-driven infrastructure that surrounds it—is a colonial project. What OpenAI boss Altman and his fellow ideologues in Silicon Valley are pursuing, Hao says, is not just corporate power but imperial power. They are building empires. And as history shows, empires are built on resource extraction, particularly the old-fashioned kind: of labor, energy, minerals, land, water. Seemingly overnight, tech elites’ feel-good climate promises have evaporated, having been seamlessly swapped for slippery promises that so-called “artificial general intelligence” will save the planet for us. Never mind that AGI is a fantastical concept that has no agreed-upon definition, or that, more fundamentally, it appears nowhere close to existing. In Big Tech’s frenzied pursuit of the “hyperscale” AI dominance that evangelists claim will unlock AGI, as well as its expanding alliances with fossil fuel-backed petrostates and authoritarian political movements, the industry has become an increasingly central contributor to the climate crisis. In an October conversation with Drilled, Hao discussed how Silicon Valley giants appear to be following the oil and gas industry’s playbook of disinformation and deceit; how Altman and OpenAI’s secrecy and disingenuous rhetoric transformed the field of AI research into corporate PR; and why the destructive trajectory of AI scale and commercialization is not inevitable—no matter what its power-hungry proponents would have you believe.See omnystudio.com/listener for privacy information.

Retire With Ryan
Tax Extension Mistakes to Avoid This Filing Season, #297

Mar 17, 2026 · 10:11


In the last episode, I discussed seven mistakes to avoid when filing your 2025 taxes. In this episode, I cover the mistakes people make when filing an extension: the four most common extension errors that could cost you money, including misconceptions about payment deadlines, underestimating taxes, and the importance of understanding state-specific extension rules.

You will want to hear this episode if you are interested in...
[00:00] Mistakes that people can make if they're filing an extension
[01:41] Importance of filing for an extension by the tax deadline
[02:35] Distinction between failure-to-file and failure-to-pay penalties
[03:53] Suggestions for estimating: using last year's tax return, factoring in income changes, or major events
[06:09] Importance of reviewing and complying with state-specific deadlines and requirements
[08:21] Filing an extension buys time for accuracy but doesn't delay payment obligations

Avoiding Common Tax Extension Mistakes
Tax season is a stressful time for many, and for those with complex finances, business obligations, or unexpected circumstances, filing a tax extension may seem like a wise solution. These are the four biggest mistakes people make when filing a tax extension, along with my practical tips to avoid penalties and unnecessary stress.

Notifying the IRS
The first—and perhaps most critical—mistake is assuming that wanting more time is enough. Extensions aren't automatic; they require formally notifying the IRS by filing Form 4868 by the standard tax deadline, usually April 15th. Without this key step, the IRS will consider your return late, resulting in penalties. If nothing else, mark this on your tax checklist: file Form 4868 on time, every time.

Extension to File Isn't Extension to Pay
A widespread misconception is that an extension grants extra time to pay taxes due. Only your paperwork deadline shifts; your payment due date does not. Any unpaid federal taxes accrue interest from the original deadline, and failure-to-pay penalties start after April 15th. In fact, failing to file entirely triggers even steeper penalties. Estimate your tax liability and pay what you owe, even if you're still finalizing the details. Overestimating is safer, as any excess will be refunded once you file.

The Hidden Danger of Inaccurate Estimates
Filing an extension isn't a hall pass to put off a financial reckoning. You're still required to estimate how much you owe—a process that can trip up those who experienced income changes, investment gains, asset sales, or one-time distributions. To avoid penalties, the IRS expects most taxpayers to pay either 90% of their current-year tax liability or 100% of last year's taxes (110% for high earners with AGI over $150,000) by the deadline. Miss these benchmarks, and you could face interest or underpayment penalties—even if you settle up when you eventually file. Review your prior year's return and factor in any unusual income for the year. If in doubt, partner with a tax professional or use IRS Form 1040-ES for guidance. (A worked example of these benchmarks appears after these show notes.)

Don't Overlook State Tax Extension Rules
One major mistake is forgetting—or not knowing—that state tax extension rules often differ from the IRS's. Some states, like Connecticut, sync with federal extensions only if you owe nothing additional; if you do, you'll need to file a state-specific extension. New York requires its own extension form, and most states expect payment by their deadline, regardless of a federal extension. Double-check your state tax agency's website or contact a professional. Often, a separate state extension is mandatory, and missing this step can carry its own set of penalties.

Plan for a Stress-Free Tax Extension
Filing a tax extension can buy valuable time, but it's not a financial "pause" button. Always file Form 4868 (and any state-specific forms) on time. Pay the lesser of 90% of current-year tax or 100% (110% for high earners) of last year's tax by the April deadline, and study your state's requirements—federal rules don't always apply. Being proactive can save you hundreds (or thousands) in penalties and give you the space to file correctly and confidently later in the year.

Resources Mentioned
IRS Form 1040-ES
IRS Form 4868
Retirement Readiness Review
Subscribe to the Retire with Ryan YouTube Channel
Download my entire book for FREE

Connect With Morrissey Wealth Management
www.MorrisseyWealthManagement.com/contact
Subscribe to Retire With Ryan
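Since the episode's benchmark rule is simple arithmetic, here is a minimal sketch in Python of how the two safe-harbor tests compare (the worked example referenced in the notes above). The function name and inputs are hypothetical, and the thresholds baked in (90% of current-year tax, 100% of prior-year tax, or 110% when prior-year AGI exceeds $150,000) come from the episode notes; actual IRS underpayment rules include more nuance, such as withholding timing and annualized installments, so treat this purely as an illustration, not tax advice.

```python
# Illustrative sketch only: the thresholds below are taken from the
# episode notes and are not a substitute for IRS guidance.

def safe_harbor_payment(current_year_tax_estimate: float,
                        prior_year_tax: float,
                        prior_year_agi: float) -> float:
    """Return the smaller payment that still meets a safe-harbor benchmark."""
    # High earners (prior-year AGI over $150,000) must use 110% of last
    # year's tax; everyone else can use 100%.
    prior_year_factor = 1.10 if prior_year_agi > 150_000 else 1.00
    return round(min(0.90 * current_year_tax_estimate,
                     prior_year_factor * prior_year_tax), 2)

# Example: you estimate $40,000 of tax this year; last year you owed
# $30,000 on $180,000 of AGI.
# min(0.90 * 40,000, 1.10 * 30,000) = min(36,000, 33,000) = 33,000.
print(safe_harbor_payment(40_000, 30_000, 180_000))  # 33000.0
```

As the notes suggest, paying above this floor is the safer play, since any excess comes back as a refund once you file.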

Personal Development Mastery
5 Mistakes People Make With Their Emotions That Lead to Bad Decisions, with Sophie Malahieude | #588

Mar 16, 2026 · 35:25 · Transcription available


What if the emotions you try hardest to avoid are actually the clearest signals guiding your next best decision?

If you've ever felt hijacked by anger, fear, or sadness, and then regretted what you said, did, or chose afterward, this episode shows you a different path: treating emotions as information rather than obstacles. You'll learn how to slow down the "react" impulse, understand what your body is telling you in real time, and make choices from clarity instead of old, unprocessed emotional patterns.

Learn how to recognise and name what you're feeling so emotions stop being vague, overwhelming, and controlling.
Discover how breath and body awareness help you respond instead of react, especially in moments where you normally get carried away.
Understand how unprocessed emotions shape your choices (often through fear and avoidance) and how to start clearing what's been stored so your decisions become freer and more aligned.

Press play to learn a simple, practical way to work with emotions so you can make calmer, clearer decisions - even when life hits hard.

KEY POINTS AND TIMESTAMPS:
00:02 - Introduction to Emotions as Messages
01:32 - Understanding the Purpose and Nature of Emotions
05:13 - Reacting vs Responding: Using Breath and Awareness
12:13 - Personal Story of Anger and Conscious Emotional Response
18:53 - Exploring Emotional Stories and Self-Inquiry
21:51 - How Unprocessed Emotions Affect Decisions
25:10 - Processing Fear Through Personal Reflection
29:32 - Practical Daily Techniques: Mindfulness and Journaling
33:21 - Where to Find Sophie's Work and Final Reflections

MEMORABLE QUOTE:
"Every time we react, we can choose to respond instead."

VALUABLE RESOURCES:
Sophie's website: https://www.ayuryogawithsophie.com/
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor

Microsoft Business Applications Podcast
Agentic AI: From Hype to Real Work Done

Mar 15, 2026 · 32:54 · Transcription available


Get featured on the show by leaving us a voicemail: https://bit.ly/MIPVM

This episode features a conversation with Daniel Cohen‑Dumani on why many organisations feel stuck on AI despite rapid advances. The discussion focuses on agentic AI, the growing gap between consumer and business adoption, and why strategy matters more than experimentation. You will hear practical guidance on narrowing AI efforts to real business problems, building organisational memory for reliable agents, and avoiding the paralysis caused by hype and fear. The conversation also challenges traditional systems like CRM and reframes AI as a tool to learn with rather than a shortcut, building sustainable capability inside organisations.

Proti Proudu
Dejčmar & Ludwig: How to Survive the Future with AI

Mar 15, 2026 · 104:08


Are you afraid of where the world is heading with artificial intelligence? You're not alone: surveys show that 60% of young people think humanity is headed for ruin. And I sometimes feel it too.

That's why I invited two exceptional guests: Václav Dejčmar, an investor, philanthropist, and philosopher who believes in psychedelics, conditional income, and a European "AI Manhattan Project." And Petr Ludwig, author of The End of Procrastination and the new book Od chaosu ke smyslu (From Chaos to Meaning), who follows AI closely and has no illusions about its risks.

We talked about what awaits us: AGI, superintelligence, humanoid robots, the end of work. But above all, about what each of us can do about it right now.

In this episode you'll hear:
→ Why even a "positive" AI scenario could end with the end of humanity
→ What it means to be an architect instead of a victim
→ Conditional basic income, but with education as the condition
→ Why Europe should have bought Anthropic (and how that would have changed the game)
→ How to cultivate your inner world when the outer world is on fire
→ Buddha 2.0, Star Trek, Dune, and the lightness of being

This conversation personally filled me with a lot of positive energy, and I hope it will have the same effect on you.

The Lunar Society
Dylan Patel — Deep dive on the 3 big bottlenecks to scaling AI compute

Mar 13, 2026 · 150:44


Dylan Patel, founder of SemiAnalysis, provides a deep dive into the three big bottlenecks to scaling AI compute: logic, memory, and power. He also walks through the economics of labs, hyperscalers, foundries, and fab equipment manufacturers. Learned a ton about every single level of the stack. Enjoy!

Watch on YouTube; read the transcript.

Sponsors
* Mercury has already saved me a bunch of time this tax season. Last year, I used Mercury to request W-9s from all the contractors I worked with. Then, when it came time to issue 1099s this year, I literally just clicked a button and Mercury sent them out. Learn more at mercury.com.
* Labelbox noticed that even when voice models appear to take interruptions in stride, their performance degrades. To figure out why, they built a new evaluation pipeline called EchoChain. EchoChain diagnoses voice models' specific failure modes, letting you understand what your model needs to truly handle interruptions. Check it out at labelbox.com/dwarkesh.
* Jane Street is basically a research lab with a trading desk attached – and their infrastructure backs this up. They've got tens of thousands of GPUs, hundreds of thousands of CPU cores, and exabytes of storage. This is what it takes to find subtle signals hidden deep within noisy market data. If this sounds interesting, you can explore open positions at janestreet.com/dwarkesh.

Timestamps
(00:00:00) – Why an H100 is worth more today than 3 years ago
(00:24:52) – Nvidia secured TSMC allocation early; Google is getting squeezed
(00:34:34) – ASML will be the #1 constraint for AI compute scaling by 2030
(00:55:47) – Can't we just use TSMC's older fabs?
(01:05:37) – When will China outscale the West in semis?
(01:16:01) – The enormous incoming memory crunch
(01:42:34) – Scaling power in the US will not be a problem
(01:54:44) – Space GPUs aren't happening this decade
(02:14:07) – Why aren't more hedge funds making the AGI trade?
(02:18:30) – Will TSMC kick Apple out from N2?
(02:24:16) – Robots and Taiwan risk

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Tax Relief with Timalyn Bowens

Episode 80: In this episode, Timalyn covers what an IRS IP PIN is, who is eligible to get one, and how it is used.

What is an IRS IP PIN?
An IRS IP PIN is an Internal Revenue Service Identity Protection Personal Identification Number. The IRS issues these to taxpayers to help prevent someone else from filing a tax return with their Social Security number (SSN) or individual taxpayer identification number (ITIN).

The IP PIN is required whether the taxpayer files their return electronically or on paper. If the IP PIN is not present on an e-filed return, the return will be rejected. If it is not present on a paper filing, processing will be delayed until the taxpayer sends in their IP PIN. Anyone who has an SSN or ITIN and can verify their identity can apply for an IP PIN from the IRS. Parents and legal guardians can also request one for their dependents. This is not limited to minor dependents.

How to apply for an IRS IP PIN
The fastest way to get an IP PIN is online via your IRS account. If you don't have an account, check out this article Timalyn wrote for people who need an account to access their transcripts: How to Get an IRS Transcript Online in 3 Steps. You cannot use this method if you are applying for a minor dependent.

Once logged into your account, you can request an IP PIN from your profile. If you can't verify yourself online, you can apply via Form 15227, Application for Identity Protection Personal Identification Number. To use this method taxpayers must have:
A valid SSN or ITIN
An adjusted gross income (AGI) under $84,000 (Single) or $168,000 (Married Filing Jointly)
Access to a telephone

If these criteria aren't met, the final way to request an IRS IP PIN is in person. An appointment can be made at your local Taxpayer Assistance Center. The IRS has a list of acceptable forms of identification you'll need when going to your appointment. Here is a link to the list: Acceptable Documents to Prove Identity.

Key IRS IP PIN things to remember
An IP PIN is only valid for one calendar year.
The IRS issues a new IP PIN via Notice CP01A unless you've opted out of paper mail; in that case it is simply uploaded to your IRS account and is available January through November.
Your federal tax return cannot be filed and processed without your IP PIN.

Need Tax Help Now?
If you need answers to your tax debt questions, book a consultation with Timalyn via her Bowens Tax Solutions website. Click this link to book a call.

Please consider sharing this episode with your friends and family. There are many people dealing with tax issues, and you may not know about it. This information might be helpful to someone who really needs it.

As we conclude Episode 80, we encourage you to connect with Timalyn on social media. You'll be able to subscribe to this podcast on Spotify, Apple Podcasts, YouTube, and many other podcast platforms. Remember, Timalyn Bowens is America's Favorite EA, and she's here to fill the tax literacy gap, one taxpayer at a time. Thanks for listening to today's episode.

For more information about tax relief options or filing your taxes, visit https://www.Bowenstaxsolutions.com/.

If you have any feedback or suggestions for an upcoming episode topic, please submit them here: https://www.americasfavoriteea.com/contact.

Disclaimer: This podcast is for informational and educational purposes only. It provides a framework and possible solutions for solving your tax problems, but it is not legally binding. Please consult your tax professional regarding your specific tax situation.

Personal Development Mastery
The Infinity Wave ∞ (Most Replayed Personal Development Wisdom Snippets) | #587

Mar 12, 2026 · 8:18 · Transcription available


Snippet of wisdom 98. In this series I select my favourite moments from previous episodes of the podcast.

Today's snippet is from my conversation with the spiritual teacher Hope Fitzgerald. She talks about the Infinity Wave, a flowing symbol of water, channeling love and compassion. Press play to learn about it and hear a very powerful story about the Infinity Wave.

VALUABLE RESOURCES:
Listen to the full conversation with Hope Fitzgerald in episode #388:
https://personaldevelopmentmasterypodcast.com/388
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor

A personal development podcast for midlife professionals, offering actionable insights and practical tools for personal growth, self mastery, and purposeful living. Discover strategies for clarity, mindset shifts, growth mindset, self-discipline, emotional intelligence, confidence, and self-improvement. Personal Development Mastery features personal development interviews and solo episodes empowering professionals, entrepreneurs, and seekers to cultivate self mastery, nurture mental health, and create a meaningful, fulfilling life aligned with who they truly are. To support the show, click here.

Demystifying Science
Does Humanity Even Want a Machine God? - Andrés Gómez Emilsson, DemystifySci #408

Mar 12, 2026 · 95:53


Part 2 of our recent conversation with Andrés Gómez Emilsson asks whether humanity should create machines capable of real consciousness and intention. We explore how artificial minds would feel, what they would desire, and why their motivations might drift far from human needs. The discussion examines the risks of building entities with their own internal harmony, suffering, and evolutionary direction. What emerges is a sober look at whether a machine god is something humanity actually wants.

Part 1: https://youtu.be/sveAvEU9ZZQ
PATREON: https://www.patreon.com/c/demystifysci
PARADOX LOST PRE-SALE: https://buy.stripe.com/7sY7sKdoN5d29eUdYddEs0b
HOMEBREW MUSIC - Check out our new album!
Hard Copies (Vinyl): FREE SHIPPING https://demystifysci-shop.fourthwall.com/products/vinyl-lp-secretary-of-nature-everything-is-so-good-here
Streaming: https://secretaryofnature.bandcamp.com/album/everything-is-so-good-here
PARADIGM DRIFT: https://demystifysci.com/paradigm-drift-show

00:00 Go!
00:00:00 Framing of artificial consciousness
00:05:39 Limits of simulation and what counts as real experience
00:10:10 Architectures that might support unified consciousness
00:13:17 Feeling, stakes, and the origins of motivation
00:16:02 Binding and coherence in biological and artificial systems
00:21:19 Intention, internal experience, and projected futures
00:26:27 Should artificial consciousness be created
00:29:58 Why people want AGI and how bias shapes belief
00:35:29 AI as an amplifier of human intention
00:41:14 Tools for autonomy versus creating synthetic beings
00:47:04 Social divergence and the future of AI entrepreneurship
00:51:17 Artificial consciousness as a new lineage of life
00:57:06 Desire, autonomy, and alien motivation
01:03:35 Long-tail dynamics of conscious experience
01:10:47 Why standard suffering metrics miss the extremes
01:12:10 DMT and the treatment of cluster headaches
01:20:09 Consciousness distributed through the body
01:21:48 Psychedelics and large-scale system recalibration
01:26:13 Systems must contain a model of "better" to self-organize
01:33:34 Physiological roots of consciousness and future directions

#AGI #ArtificialConsciousness #MachineConsciousness #ConsciousnessStudies #QualiaResearch #AIPhilosophy #AIethics #AISafety #FutureOfAI #MindAndMachine #DemystifySci #physicspodcast #philosophypodcast

MERCH: Rock some DemystifySci gear: https://demystifysci-shop.fourthwall.com/
AMAZON: Do your shopping through this link: https://amzn.to/3YyoT98
DONATE: https://bit.ly/3wkPqaD
SUBSTACK: https://substack.com/@UCqV4_7i9h1_V7hY48eZZSLw@demystifysci
RSS: https://anchor.fm/s/2be66934/podcast/rss
MAILING LIST: https://bit.ly/3v3kz2S
SOCIAL:
- Discord: https://discord.gg/MJzKT8CQub
- Facebook: https://www.facebook.com/groups/DemystifySci
- Instagram: https://www.instagram.com/DemystifySci/
- Twitter: https://twitter.com/DemystifySci
MUSIC:
- Shilo Delay: https://g.co/kgs/oty671

AMERICA OUT LOUD PODCAST NETWORK
The year artificial intelligence changes everything

Mar 10, 2026 · 57:00 · Transcription available


The Tenpenny Files – Artificial intelligence moves from automation to autonomous reasoning, forcing society to confront new legal, economic, and cultural realities. Matthew Hunt explores how AGI, military integration, and rapid workplace automation reshape human decision-making, education, and sovereignty, urging individuals and organizations to understand and prepare for a rapidly accelerating technological future...

Student Loan Planner
Tax Extensions Can Lower Your Student Loan Payments

Mar 10, 2026 · 28:09


Timing your tax filing can mean serious savings on your monthly payments, especially if you're on an income-driven repayment (IDR) plan and aiming for forgiveness. We break down scenarios for when it makes sense to file right away, when to wait, and how married couples or borrowers with irregular income can play their cards for the biggest advantage. If you've ever wondered how your AGI or recertification date could influence your student loan bills, this episode gives you straightforward strategies you can use right now.

Key moments:
(07:48) Why when you file your tax return directly affects your IDR payment amount
(10:59) Filing a tax extension is free, but if you owe taxes, you must pay by April 15
(18:13) When filing early (or on time) makes more sense than filing an extension
(21:39) SAVE borrowers can lock in the ideal recertification date by switching plans between April 15 and October 15

Like the show? There are several ways you can help!
Follow on Apple Podcasts, Spotify or Amazon Music
Leave an honest review on Apple Podcasts
Subscribe to the newsletter
Join SLP Insiders for student loan loopholes, SLP app and member community

Feeling helpless when it comes to your student loans?
Try our free student loan calculator
Check out our refinancing bonuses we negotiated
Book your custom student loan plan
Get profession-specific financial planning

Do you have a question about student loans? Leave us a voicemail here or email us at help@studentloanplanner.com and we might feature it in an upcoming show!

Slate Star Codex Podcast
What Happened With Bio Anchors?

Mar 10, 2026 · 24:55


[Original post: Biological Anchors: A Trick That Might Or Might Not Work] I. Ajeya Cotra's Biological Anchors report was the landmark AI timelines forecast of the early 2020s. In many ways, it was incredibly prescient - it nailed the scaling hypothesis, predicted the current AI boom, and introduced concepts like "time horizons" that have entered common parlance. In most cases where its contemporaries challenged it, its assumptions have been borne out, and its challengers proven wrong. But its headline prediction - an AGI timeline centered around the 2050s - no longer seems plausible. The current state of the discussion ranges from late 2020s to 2040s, with more remote dates relegated to those who expect the current paradigm to prove ultimately fruitless - the opposite of Ajeya's assumptions. Cotra later shortened her own timelines to 2040 (as of 2022) and they are probably even shorter now. So, if its premises were impressively correct, but its conclusion twenty years too late, what went wrong in the middle? https://www.astralcodexten.com/p/what-happened-with-bio-anchors

Macro Musings with David Beckworth
Jesús Fernández-Villaverde on the Quandary of Global Demographic Decline

Mar 9, 2026 · 64:01


Subscribe to the new Macro Musings YouTube Channel!

Jesús Fernández-Villaverde is a professor of economics at the University of Pennsylvania. Jesús returns to the show to discuss his rise on X, how to frame global demographic decline, the three accelerants of demographic decline, the role of housing in family size, how AI will play a role in global demographics, what we know about AGI, the question of dollar dominance, and much more.

Check out the transcript for this week's episode, now with links. Recorded on February 20th, 2026.

Subscribe to David's Substack: Macroeconomic Policy Nexus
Follow David Beckworth on X: @DavidBeckworth
Follow Jesús Fernández-Villaverde on X: @JesusFerna7026
Follow the show on X: @Macro_Musings
Check out our Macro Musings merch!

Timestamps
00:00:00 - Intro
00:07:22 - Demographics
00:39:28 - Artificial Intelligence
00:54:07 - Currency Dominance
01:03:20 - Outro

Personal Development Mastery
Why You Keep Taking Life for Granted and the One Perspective Shift You Need Right Now: He Had to Die to Learn This, with Jay Setchell | #586

Mar 9, 2026 · 36:40 · Transcription available


If you woke up tomorrow and realized you'd been given "one more chance" at life, what would you do differently today?

It's easy to treat life like something we're entitled to… until a hard season hits: pain, loss, setbacks, uncertainty. In this episode, Jay Setchell (in his 70s, mostly paralysed, having survived multiple near-death experiences and 73 surgeries) shares how to internalise that life is a gift before you're forced to learn it the hard way, and how gratitude, faith, and personal responsibility can carry you through your toughest winters.

A simple mindset shift to stop asking "why me?" and start navigating adversity with acceptance, resilience, and clarity.
Practical ways to build inner strength, so you keep moving forward inch by inch even when you feel stuck or overwhelmed.
A powerful framework for radical ownership, including how to apply it even when life is outside your control.

Press play to learn how to develop the "strength within you" so you can stay grateful, take ownership, and remember: no matter what you're facing, it's always too soon to quit.

VALUABLE RESOURCES:
Jay's website: https://neverquittrying.com/
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor

A personal development podcast for midlife professionals, offering actionable insights and practical tools for personal growth, self mastery, and purposeful living. Discover strategies for clarity, mindset shifts, growth mindset, self-discipline, emotional intelligence, confidence, and self-improvement. Personal Development Mastery features personal development interviews and solo episodes empowering professionals, entrepreneurs, and seekers to cultivate self mastery, nurture mental health, and create a meaningful, fulfilling life aligned with who they truly are. To support the show, click here.

Hacker News Recap
March 8th, 2026 | Ask HN: Please restrict new accounts from posting

Mar 9, 2026 · 15:07


This is a recap of the top 10 posts on Hacker News on March 08, 2026. This podcast was generated by wondercraft.ai.

(00:30): Ask HN: Please restrict new accounts from posting
Original post: https://news.ycombinator.com/item?id=47300329&utm_source=wondercraft_ai
(01:56): Agent Safehouse – macOS-native sandboxing for local agents
Original post: https://news.ycombinator.com/item?id=47301085&utm_source=wondercraft_ai
(03:22): FrameBook
Original post: https://news.ycombinator.com/item?id=47298044&utm_source=wondercraft_ai
(04:48): Apple's 512GB Mac Studio vanishes, a quiet acknowledgment of the RAM shortage
Original post: https://news.ycombinator.com/item?id=47296302&utm_source=wondercraft_ai
(06:14): The changing goalposts of AGI and timelines
Original post: https://news.ycombinator.com/item?id=47299009&utm_source=wondercraft_ai
(07:41): Ask HN: How to be alone?
Original post: https://news.ycombinator.com/item?id=47296547&utm_source=wondercraft_ai
(09:07): Cloud VM benchmarks 2026
Original post: https://news.ycombinator.com/item?id=47293119&utm_source=wondercraft_ai
(10:33): I ported Linux to the PS5 and turned it into a Steam Machine
Original post: https://news.ycombinator.com/item?id=47296849&utm_source=wondercraft_ai
(11:59): LibreOffice Writer now supports Markdown
Original post: https://news.ycombinator.com/item?id=47298885&utm_source=wondercraft_ai
(13:26): Warn about PyPy being unmaintained
Original post: https://news.ycombinator.com/item?id=47293415&utm_source=wondercraft_ai

This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai

NZZ Akzent
Correspondent in Silicon Valley: Marie-Astrid Langer in the Tech Heart of the World

Mar 7, 2026 · 24:31 · Transcription available


Silicon Valley is buzzing. In this Saturday episode, NZZ correspondent Marie-Astrid Langer reports on the changes in the Bay Area around San Francisco. An AI frenzy, political changes of direction, and startups that disappear as quickly as they appeared: in Marie-Astrid's everyday life, the future is already on display. Here, people pay with the palm of their hand, self-driving robotaxis cruise the streets, and people go for walks with chatbots. At the same time, mass layoffs, a hire-and-fire culture, and the fentanyl crisis shape the region.

Guest: Marie-Astrid Langer, US correspondent
Host: Simon Schaffer

You can read [Marie-Astrid's latest articles here at the NZZ](https://www.nzz.ch/impressum/marie-astrid-langer-ld.665515).

Under 30 and want more NZZ? [Your U30 subscription](https://abo.nzz.ch/m_21019698_1/) gets you all of the NZZ's digital content at a special low price.

American Conservative University
5 AI CEOs Just Said The Same Thing

Mar 6, 2026 · 23:44


5 AI CEOs Just Said The Same Thing

Five of the most powerful people in artificial intelligence just said the same thing in the same month. They didn't make handwavy, vague statements — they all agreed on the same direction, the same timelines, the same warnings. Five CEOs who are actively competing against each other, spending hundreds of billions, all converging on one message.

Key points:
• What Sam Altman, Jensen Huang, Sundar Pichai, Satya Nadella, and Elon Musk all said
• Why competitors are suddenly agreeing
• The timeline they're all pointing to
• What this convergence means for the future

Watch this video at: https://youtu.be/kMivoKHHkxQ?si=I1ERQG-imaL7UPSy
Farzad, 383K subscribers. 761,083 views, Feb 2, 2026. #elonmusk #FSD #twitter

Buy my book: https://a.co/d/03deuZWF
Rebellionaire: https://www.rebellionaire.com/farzad
Join my exclusive community: https://farzad.fm
Buy Matic: https://maticrobots.com/?utm_term=FRI...
Use Descript to edit your videos: https://descript.cello.so/5G6jmxS0qeP
Wrap your Tesla using TESBROS: https://partners.tesbros.com/FARZADME...
Get $100 off Matic Robots: https://maticrobots.refr.cc/active-cu...
Use my referral link to purchase a Tesla product: https://ts.la/farzad69506
Want to grow your YouTube channel? DM David Carbutt. For a 10% discount quote 'Farzad': https://x.com/DavidCarbutt_

I worked at Tesla from 2017 through 2021. I spent most of my time in the distribution and supply chain organizations in leadership positions. Before Tesla, I was a Director of Business Intelligence and Pricing at the largest pet food and supply distributor in the US, Phillips Pet Food & Supplies. My wife and I also owned a small business in Bethlehem, PA between 2016 and 2019. I have been a shareholder of Tesla since 2012 and currently own Tesla stock. I have been a shareholder of Lemonade since 2025 and currently own Lemonade stock. Nothing I say constitutes investment or financial advice.

Five of the world's most powerful AI leaders just made the same prediction about what's coming next. Sam Altman (OpenAI), Sundar Pichai (Google), Satya Nadella (Microsoft), Jensen Huang (NVIDIA), and Elon Musk (xAI/Tesla) are converging on a timeline most people aren't ready for. In this video, I break down exactly what these CEOs said, why they're all saying it NOW, and what it means for your job, your investments, and the economy.

Topics covered:
• AGI timeline predictions from 5 tech giants
• Why 2025-2027 keeps coming up
• The convergence of AI + robotics + energy
• What the "intelligence too cheap to meter" future looks like
• How to position yourself before the wave hits

I've been covering Tesla and AI for 14 years. This is the most important shift I've ever seen. NFA.

80,000 Hours Podcast with Rob Wiblin
Using AI to enhance societal decision making (article by Zershaaneh Qureshi)

Mar 6, 2026 · 31:28


The arrival of AGI could "compress a century of progress in a decade," forcing humanity to make decisions with higher stakes than we've ever seen before — and with less time to get them right. But AI development also presents an opportunity: we could build and deploy AI tools that help us think more clearly, act more wisely, and coordinate more effectively. And if we roll these decision-making tools out quickly enough, humanity could be far better equipped to navigate the critical period ahead.

This article is narrated by the author, Zershaaneh Qureshi. It explores why AI decision-making tools could be a big deal, who might be a good fit to help shape this new field, and what the downside risks of getting involved might be. Read the original article on the 80,000 Hours website: https://80000hours.org/problem-profiles/ai-enhanced-decision-making/

Chapters:
Check out our new narrations feed (00:00:00)
Summary (00:01:21)
Section 1: Why advancing AI decision-making tools might matter a lot (00:02:52)
AI tools could help us make much better decisions (00:05:59)
We might be able to differentially speed up the rollout of AI decision-making tools (00:11:04)
Section 2: What are the arguments against working to advance AI decision-making tools? (00:13:17)
Section 3: How to work in this area (00:26:19)
Want one-on-one advice? (00:29:50)

Audio editing: Dominic Armstrong and Milo McGuire

100x Entrepreneur
The First AI Market With 8 Billion Potential Users | Sudarshan Kamath, Smallest AI

Mar 6, 2026 · 69:25


Will smaller AI models win over large language models?

Sudarshan Kamath grew up in Mumbai, taught himself AI before most Indian companies were even hiring for it, and bought the domain "smallest.ai" for $100 in 2022, two years before the company existed. Today, he runs Smallest AI, a startup focused on real-time voice AI.

He started with self-driving cars, training large models and compressing them to run on vehicle hardware in real time. That's where he first saw what small models could do: a hundredth of the size, almost no loss in accuracy.

Two years later he put in his own $150K, got some GPUs, and started training. Eighteen months later he had a seed round, a Series A, a seven-figure enterprise deal, and a $150M acquisition offer he turned down.

Most of the data that goes into large models is noise. Strip it out, train small, and you get a model that matches a giant at a fraction of the size and runs in real time. That insight is what Smallest AI is built on.

00:00 – Trailer
00:51 – Sudarshan's journey before Smallest AI
05:00 – Arjun Jain & Yann LeCun
08:20 – Why build in voice AI in 2024?
15:09 – Why move the company from India to the US?
17:25 – Hiring talent via LinkedIn and X
18:49 – What large US funds actually bring to startups
21:03 – Raising a seed round with zero revenue
26:06 – Strong intros from US VCs
28:23 – What the first enterprise customer teaches you
31:50 – Raising Series A with Seligman Ventures
32:19 – The $150M acquisition offer
34:32 – When should founders sell secondaries?
36:24 – Who are Smallest AI's customers?
38:28 – What are state space models?
40:16 – Are GEPA models closer to AGI?
41:23 – Growing 10× in three months
48:03 – This is not a winner-takes-all market
49:32 – Why this is a trillion-dollar market
50:08 – Why large AI labs are not building in voice
51:26 – What it takes to reach $100M ARR
54:21 – The biggest goal for 2026
57:11 – Voice costs 1000× more than text
01:02:04 – How Smallest AI cracked large enterprises

India's talent has built the world's tech—now it's time to lead it. This mission goes beyond startups. It's about shifting the center of gravity in global tech to include the brilliance rising from India.

What is Neon Fund? We invest in seed and early-stage founders from India and the diaspora building world-class Enterprise AI companies. We bring capital, conviction, and a community that's done it before. Subscribe for real founder stories, investor perspectives, economist breakdowns, and a behind-the-scenes look at how we're doing it all at Neon.

Check us out on:
Website: https://neon.fund/
Instagram: https://www.instagram.com/theneonshoww/
LinkedIn: https://www.linkedin.com/company/beneon/
Twitter: https://x.com/TheNeonShoww

Connect with Siddhartha on:
LinkedIn: https://www.linkedin.com/in/siddharthaahluwalia/
Twitter: https://x.com/siddharthaa7

This video is for informational purposes only. The views expressed are those of the individuals quoted and do not constitute professional advice.

DataTalks.Club
The Future of AI Agents - Aditya Gautam

Mar 6, 2026 · 68:39


In this talk, Aditya, an experienced AI researcher and engineer, shares his technical evolution, from his roots in embedded systems to building complex, large-scale AI agent architectures. We explore the practical challenges of enterprise AI adoption, the shifting economics of LLMs, and the infrastructure required to deploy reliable multi-agent systems.

You'll learn about:
- The ROI of Fine-Tuning: How to decide between specialized small models and general-purpose APIs based on cost and latency.
- Agent MLOps Stack: The essential roles of guardrails, data lineage, and auditability in AI workflows.
- Reliability in High-Stakes Verticals: Navigating the unique AI deployment challenges in the legal and healthcare sectors.
- Evaluation Frameworks: How to design robust evals for multi-tenancy systems at scale.
- Human-in-the-Loop: Strategies for aligning "LLM as a judge" with human-labeled ground truth to eliminate bias.
- The Future of AGI: What to expect from the next wave of multimodal agents and autonomous systems.

TIMECODES:
00:00 Aditya's path from embedded systems to AI
08:52 Enterprise AI research and adoption gaps
13:13 AI reliability in legal and healthcare
19:16 Specialized models and agent governance
24:58 LLM economics: Fine-tuning vs. API ROI
30:26 Agent MLOps: Guardrails and data lineage
36:55 Iterating on agents with user feedback
43:30 AI evals for multi-tenancy and scale
50:18 Aligning LLM judges with human labels
56:40 Agent infrastructure and deployment risks
1:02:35 Future of AGI and multimodal agents

This talk is designed for Machine Learning Engineers, Data Scientists, and Technical Product Managers who are moving beyond AI prototypes and into production-grade agentic workflows. It is especially relevant for those working in regulated industries or managing high-volume API budgets.

Connect with Aditya:
- LinkedIn - https://www.linkedin.com/in/aditya-gautam-68233a30/

Connect with DataTalks.Club:
- Join the community - https://datatalks.club/slack.html
- Subscribe to our Google calendar to have all our events in your calendar - https://calendar.google.com/calendar/r?cid=ZjhxaWRqbnEwamhzY3A4ODA5azFlZ2hzNjBAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ
- Check other upcoming events - https://lu.ma/dtc-events
- GitHub: https://github.com/DataTalksClub
- LinkedIn - https://www.linkedin.com/company/datatalks-club/
- Twitter - https://twitter.com/DataTalksClub
- Website - https://datatalks.club/

Personal Development Mastery
Don't Quit When You're Tired, Quit When You're Done (Most Replayed Personal Development Wisdom Snippets) | #585

Mar 5, 2026 · 7:37 · Transcription available


Snippet of wisdom 97. In this series, I select my favourite, most insightful moments from previous episodes of the podcast.

Today's snippet is from my conversation with Bill Keefe, who is Tony Robbins' fire captain. It is about resilience, and the particular experience of "Fire Team", the volunteer crew at Tony Robbins' events.

VALUABLE RESOURCES:
Listen to the full conversation with Bill Keefe in episode #362:
https://personaldevelopmentmasterypodcast.com/362
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor

A personal development podcast for midlife professionals, offering actionable insights and practical tools for personal growth, self mastery, and purposeful living. Discover strategies for clarity, mindset shifts, growth mindset, self-discipline, emotional intelligence, confidence, and self-improvement. Personal Development Mastery features personal development interviews and solo episodes empowering professionals, entrepreneurs, and seekers to cultivate self mastery, nurture mental health, and create a meaningful, fulfilling life aligned with who they truly are. To support the show, click here.

Consumer Tech Update
3 AI buzzwords you need to know

Mar 5, 2026 · 10:34


Vibe coding, AGI, human-in-the-loop. What do they mean for you? Get to know them before anyone else does! Learn more about your ad choices. Visit megaphone.fm/adchoices

web3 with a16z
AI Just Gave You Superpowers — Now What?

Mar 5, 2026 · 65:40


A hot paper — "Some Simple Economics of AGI" — has been making the rounds, so we sat down with the author, covering:
- Automation vs. verification: the key economic split
- Why AI agents now feel like coworkers
- What's happening to junior roles and the "codifier's curse"
- The "AI sandwich" structure for firms
- The value of "meaning-makers," consensus, and status economies
- Why crypto may become essential infrastructure for identity, provenance, and trust
- Two possible futures: a hollow vs. augmented economy

Featuring Christian Catalini (founder of MIT Crypto Economics Lab) and Eddy Lazzarin (CTO of a16z crypto) in conversation with Robert Hackett, our discussion dives deep into how automation is reshaping labor markets, as well as the nature of intelligence. What do these changes mean for startups, the future of work, and your career?

Highlights
00:00 Introduction
01:47 AGI economics optimism and playbook
05:39 Agents as coworkers
07:39 Software work becomes verification
10:47 Automation versus verification
12:03 "Unknown unknowns" and taste
16:27 Human augmentation and intent
17:55 The "AI Sandwich" and "Codifier's Curse"
21:54 "Meaning-makers" and the human touch
23:48 Crypto for identity and trust?
27:10 Measurability: How to think about it
33:23 Machine coordination and art after automation
35:46 Trojan horse risks
37:47 Liability and insurance
41:08 Crypto and verification
44:31 A hollow vs. augmented economy
49:45 Career advice in the AI era
51:26 The one-person billion-dollar startup
57:15 Open-source as antibodies
58:42 Blockchains for coordination
01:01:49 Closing thoughts

Follow a16z crypto for more...
X: https://x.com/a16zcrypto
LinkedIn: https://www.linkedin.com/showcase/a16zcrypto/posts/
YouTube: https://www.youtube.com/@a16zcrypto

Small Business Tax Savings Podcast | JETRO
New Charity Tax Rules in 2026. How the One Big Beautiful Bill Changes Your Deductions

Mar 4, 2026 · 21:03


Charitable giving rules are changing in 2026, and many business owners have no idea their tax deductions could quietly shrink.The One Big Beautiful Bill Act introduced new limits, floors, and deduction caps that change how charitable donations work depending on your income level and whether you itemize deductions. In some cases, you could donate the exact same amount and receive a smaller tax benefit than before.Today we're breaking down the new charitable giving tax rules, who wins under the new system, who loses, and how smart business owners can still give generously while protecting their tax strategy.

Impact Theory with Tom Bilyeu
EMERGENCY PODCAST: Ex-CIA Spy Andrew Bustamante Breaks Down The Iran War | Impact Theory W Tom Bilyeu

Mar 3, 2026 · 68:25


Welcome back to Impact Theory with Tom Bilyeu. In this powerful episode, Tom sits down with former CIA covert operative Andrew Bustamante to pull back the curtain on the turbulent state of global affairs. With the Iranian war in full swing and military strategies playing out in real time, Andrew gives listeners an insider's perspective on what's really happening behind government narratives, intelligence reports, and international influence campaigns.

Together, Tom Bilyeu and Andrew Bustamante dissect the headlines—from conflicting stories about Iran's nuclear ambitions to the real motivations behind recent US military actions in Iran and Venezuela. Andrew explains how threat assessments are compiled within intelligence agencies, reveals why classified and public narratives often diverge, and offers a candid take on the legacy politics at play in the current administration.

But the conversation doesn't stop at geopolitics. The two dive into the evolving role of artificial intelligence in modern warfare and intelligence gathering, discuss the ethical and strategic dilemmas posed by autonomous weapons, and examine the shifting alliances that could define the future balance of power between the US, China, Russia, and the rest of the world.

If you're looking to understand the forces shaping global conflict today—and what might be coming next—you won't want to miss this episode. Stay tuned as Tom and Andrew bring clarity to chaos and lay out the possible paths forward in this era of uncertainty.

What's up, everybody? It's Tom Bilyeu here. If you want my help...
STARTING a business: join me here at ZERO TO FOUNDER: https://tombilyeu.com/zero-to-founder?utm_campaign=Podcast%20Offer&utm_source=podca[%E2%80%A6]d%20end%20of%20show&utm_content=podcast%20ad%20end%20of%20show
SCALING a business: see if you qualify here: https://tombilyeu.com/call
Get my battle-tested strategies and insights delivered weekly to your inbox: sign up here: https://tombilyeu.com/

FOLLOW TOM:
Instagram: https://www.instagram.com/tombilyeu/
Tik Tok: https://www.tiktok.com/@tombilyeu?lang=en
Twitter: https://twitter.com/tombilyeu
YouTube: https://www.youtube.com/@TomBilyeu

Sponsors:
Ketone IQ: Visit https://ketone.com/IMPACT for 30% OFF your subscription order
Shopify: Sign up for your one-dollar-per-month trial period at https://shopify.com/impact
Summ: code TOMVIP20 for 20% off your first year at https://summ.com?via=tombilyeu&coupon=TOMVIP20
Blocktrust IRA: get up to $2,500 funding bonus to kickstart your account at https://tomcryptoira.com
Quo: Try for free PLUS get 20% off your first 6 months at https://quo.com/impact
Quince: Free shipping and 365-day returns at https://quince.com/impactpod
Duck.Ai: Protect your privacy at https://duck.ai/impact
Monetary Metals: Future-proof your wealth at https://monetarymetals.com/impact
Plaud: Get 10% off with code TOM10 at https://plaud.ai/tom

Tags: Everyday Spy, ex CIA spy, CIA insider, ODNI threat assessment Iran, Iran nuclear threat truth, influence literacy CIA, US Iran war 2026, Operation Midnight Hammer, Khamenei assassination, Title 10 Title 50, CIA covert action, Netanyahu influence, CIA under Trump, CIA under Biden, Palantir CIA, AI warfare CIA, Anthropic Pentagon, OpenAI Pentagon deal, AGI risk, World War 3 already started, Iran war analysis, burden sharing doctrine, peace through strength, China Taiwan 2027, KMT Taiwan parliament, China vs America, CIA intelligence Iran Israel, Iran nightmare scenario, dirty bomb threat, Russia Iran proxy war, CIA declining power America

Learn more about your ad choices. Visit megaphone.fm/adchoices

Almost 30
849. The AI Era: Discernment, Beauty Standards + The Collapse of Reality

Almost 30

Play Episode Listen Later Mar 3, 2026 62:31


AI isn't just helping you write emails—it's shaping beauty standards, influencing elections, replacing jobs, generating music, and possibly rewriting reality itself. In this unfiltered conversation, Lindsey + Krista explore the cultural, spiritual, and economic implications of artificial intelligence. From AI influencers to blackmailing bots, autonomous coding, artificial womb technology, and the manifesto to replace human labor, this episode dives into the race toward AGI and what it means for humanity. Together, K+L explore whether AI is inevitable—or simply a narrative we've accepted.
This episode is all about discernment. It's about staying human, protecting your creativity, strengthening your intuition, and deciding how you want to engage with technology in a world that feels increasingly synthetic. If you've felt fascinated, disturbed, or unsure about AI—this is for you.
We also talk about:
AI-generated porn + its impact on relationships
The potential for emotional + relational attachment to AI technology
How AI could shape the future of content creation, writing, and artistic voice
Balancing AI for efficiency while protecting original thought + creativity
How younger generations may grow up with AI as a constant companion or support system
The economic tension between productivity gains + potential job displacement
Why discernment + emotional intelligence may become more valuable skills in the AI era
Resources:
Instagram: @lindseysimcik
Instagram: @itskrista
Website: https://itskrista.com/
Order our book, Almost 30: A Definitive Guide To A Life You Love For The Next Decade and Beyond, here: https://bit.ly/Almost30Book.
Sponsors:
Ka'Chava | Go to https://www.kachava.com and use code ALMOST30 for 15% off your next order.
Ritual | Don't settle for less than evidence-based support. My listeners get 25% off your first month at https://www.Ritual.com/ALMOST30.
Hero Bread | Hero Bread is offering 10% off your order. Go to https://hero.co and use code ALMOST30 at checkout.
Revolve | Shop at https://www.REVOLVE.com/ALMOST30 and use code ALMOST30 for 15% off your first order. #REVOLVEpartner
BetterHelp | This episode is brought to you by BetterHelp. Give online therapy a try at https://www.betterhelp.com/almost30 and get on your way to being your best self with 10% off your first month.
Chime | It just takes a few minutes to sign up. Head to https://www.Chime.com/ALMOST30.
Paleovalley | Head to https://www.paleovalley.com/almost30 for 15% off your order!
Our Place | Visit https://www.fromourplace.com/ALMOST30 and use code ALMOST30 for 10% off sitewide.
Fatty15 | Get an additional 15% off their 90-day subscription Starter Kit by going to https://www.fatty15.com/ALMOST30 and use code ALMOST30 at checkout.
To advertise on this podcast please email: partnerships@almost30.com.
Learn More:
https://almost30.com/about
https://almost30.com/morningmicrodose
https://almost30.com/book
Join our community:
https://facebook.com/Almost30podcast/groups
https://instagram.com/almost30podcast
https://tiktok.com/@almost30podcast
https://youtube.com/Almost30Podcast
Podcast disclaimer can be found by visiting: almost30.com/disclaimer.
Almost 30 is edited by Garett Symes and Isabella Vaccaro.
Learn more about your ad choices. Visit megaphone.fm/adchoices

Built Right
Behavior Is All You Need: Making AI Feel Like a Person

Built Right

Play Episode Listen Later Mar 3, 2026 30:12


Matt Paige interviews Vishnu Hari (Vish), CEO and founder of Ego (YC W24), about shifting focus from AGI to “humanness”: AI characters that behave like people through memory, emotions, personality, needs, and desires.
Referencing Ego's paper “Behavior is All You Need,” Vish argues consumer AI for entertainment must be relatable and character-like rather than purely task-smart, drawing inspiration from MMORPG social dynamics and Character.AI's appeal.
Ego initially pursued a 3D sim-world vision inspired by Sword Art Online and Westworld, but found accessibility, game development, and perception latency challenging; internal Roblox tests (“Chatterblocks”) showed the key gap is natural speech beyond turn-taking.
Vish discusses simulations as a path toward real-world robotics via a partnership with Menlo AI, critiques task-bound robots versus agents with inner lives, suggests retention as the main metric, and shares views on AGI definitions, safety in entertainment, technology impacts, simulation theory, and consciousness.
Ego's work is at egoai.com and the company is hiring in SF, Singapore, and Tokyo.
--
Key Moments:
00:57 Behavior Is All You Need
02:41 Anatomy of Humanlike Agents
03:29 Game Bots to Real People
05:10 Building Ego and Sim Worlds
06:35 Why Speech Feels Human
08:27 From Sims to Robotics
10:29 Her vs Helper Robots
13:17 Measuring Humanness by Retention
15:27 Continual Learning and Personality
16:57 Meta Lessons on Empty Worlds
18:08 Lightning Round on AGI
20:31 IP Characters vs UGC Worlds
21:55 Risks and Just Tuesday
24:11 Simulation and Consciousness
--
Key Links:
Ego
Connect with Rowan on LinkedIn
Mentioned in this episode:
Free report from HatchWorks AI — State of AI 2026: What's real in AI this year, what's hype, and what leaders should prioritize — including production lessons, designing for agents, and governance. https://hatchworks.com/state-of-ai-2026/
AI Opportunity Finder: Feeling overwhelmed by all the AI noise out there? The AI Opportunity Finder from HatchWorks cuts through the hype and gives you a clear starting point. In less than 5 minutes, you'll get tailored, high-impact AI use cases specific to your business—scored by ROI so you know exactly where to start. Whether you're looking to cut costs, automate tasks, or grow faster, this free tool gives you a personalized roadmap built for action.

Personal Development Mastery
From Emotional Triggers to Inner Freedom: A Live Belief Elimination Demonstration, with Blake Lefkoe | #584

Personal Development Mastery

Play Episode Listen Later Mar 2, 2026 56:05 Transcription Available


Do you ever catch yourself stuck in the same frustrating patterns, even when you know better and want to change?
If you've ever struggled with self-sabotage, people-pleasing, or fears rooted in past trauma, this episode offers a rare and powerful opportunity: not only do we revisit the transformational Lefkoe Method with certified facilitator and holistic coach Blake Lefkoe, but for the first time, we're also joined by one of her former clients, Susanna, who courageously shares her personal story and healing journey—live on air.
Witness a powerful live demonstration of the Lefkoe Method as Susanna clears a limiting belief in real time.
Hear how she eliminated over 30 deep-rooted beliefs, leading to life-changing breakthroughs in her relationships, emotional resilience, and personal freedom.
Learn how most people unknowingly live under the influence of subconscious beliefs, and how letting them go transforms how you think, feel, and experience the world.
If you're ready to move beyond coping and into true transformation, tune in now to experience this rare, real-time emotional shift for yourself.
˚
KEY POINTS AND TIMESTAMPS:
00:00 - Reintroducing Blake and Setting the Intention
02:18 - What the Lefkoe Method Is and How It Works
06:01 - Agi's Personal Session and Key Realisations
12:23 - Susanna's Background and Why She Sought Help
15:38 - Core Limiting Beliefs That Were Cleared
21:06 - Life Changes After Eliminating the Beliefs
30:38 - Introducing the Live Method Demonstration
33:03 - Uncovering and Dissolving the “Relationships Are Dangerous” Belief
49:05 - Reflections, Insights, and Closing Thoughts
˚
VALUABLE RESOURCES:
Blake's website: https://www.blakelefkoe.com/
˚
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor
˚
Send us a text
Support the show
A personal development podcast for midlife professionals, offering actionable insights and practical tools for personal growth, self mastery, and purposeful living. Discover strategies for clarity, mindset shifts, growth mindset, self-discipline, emotional intelligence, confidence, and self-improvement. Personal Development Mastery features personal development interviews and solo episodes empowering professionals, entrepreneurs, and seekers to cultivate self mastery, nurture mental health, and create a meaningful, fulfilling life aligned with who they truly are. To support the show, click here.

Situational Awareness in Government, with UK AISI Chief Scientist Geoffrey Irving

Play Episode Listen Later Mar 1, 2026 138:32


Geoffrey Irving, Chief Scientist at the UK AI Security Institute, explains why our theoretical understanding of machine learning remains fragile even as models surpass experts on critical security tasks. He details AISI's work on frontier model evaluations, red teaming, and threat modeling across biosecurity, cybersecurity, and loss-of-control risks. The conversation explores reward hacking, eval awareness, and why current safety techniques may struggle to deliver high reliability. Listeners will also hear how AISI is funding foundational research to build stronger guarantees for AI safety.
Nathan uses Granola to uncover blind spots in conversations and AI research. Try it at granola.ai/tcr with code TCR — and if you're already using it, test his blind spot recipe here: https://bit.ly/granolablindspot
Sponsors:
Serval: Serval uses AI-powered automations to cut IT help desk tickets by more than 50%, freeing your team from repetitive tasks like password resets and onboarding. Book your free pilot and guarantee 50% help desk automation by week 4 at https://serval.com/cognitive
Claude: Claude is the AI collaborator that understands your entire workflow, from drafting and research to coding and complex problem-solving. Start tackling bigger problems with Claude and unlock Claude Pro's full capabilities at https://claude.ai/tcr
Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai
CHAPTERS:
(00:00) About the Episode
(04:09) From physics to ML
(08:52) AGI uncertainty and threats (Part 1)
(18:08) Sponsors: Serval | Claude
(21:29) AGI uncertainty and threats (Part 2)
(27:35) Control, autonomy, alignment (Part 1)
(34:02) Sponsor: Tasklet
(35:14) Control, autonomy, alignment (Part 2)
(38:44) Inside the UK AC
(51:02) Evaluations and jailbreaking
(01:01:17) Emerging capabilities and misuse
(01:14:20) Agents and reward hacking
(01:26:09) Theoretical alignment agenda
(01:38:39) Debate and formal methods
(01:51:19) Limits of formalization
(02:02:27) Future risks and governance
(02:16:23) Episode Outro
(02:18:58) Outro
PRODUCED BY: https://aipodcast.ing
SOCIAL LINKS:
Website: https://www.cognitiverevolution.ai
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://linkedin.com/in/nathanlabenz/
Youtube: https://youtube.com/@CognitiveRevolutionPodcast
Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431
Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk

This Week in Startups
The Biggest Private Funding Round in History | E2256

This Week in Startups

Play Episode Listen Later Feb 28, 2026 80:17


This Week In Startups is made possible by:
Deel - http://deel.com/twist
Wispr Flow - https://wisprflow.ai/twist
Luma AI - https://lumalabs.ai/twist
Today's show:
$110 billion buys you 15% of OpenAI. Amazon, Nvidia, and SoftBank placed their bets on ChatGPT, which now has 900 million weekly active users and 50 million paying subscribers. Find out why Jason is anticipating the wildest J-Curve swing of all time, and believes we've ALREADY hit AGI… it's just not implemented yet.
Plus a visit from our roving correspondent Nick O'Neill, checking in on the Crypto Chaos in Miami Beach, and hot demos from three young founders.
GUESTS:
Nick O'Neill: https://x.com/chooserich
Everest Chris: https://openclaw.unloopa.com/
Ben Broca: https://polsia.com/
Adi Gabrani: https://makemyclaw.com/
Timestamps:
00:00 Intro
01:33 We're hiring a new producer!
05:42 OpenAI raised $110 billion
08:59 Understanding the LLM J-Curve
00:11:25 Deel - Founders ship faster on Deel. Set up payroll for any country in minutes and get back to building. Visit https://deel.com/twist to learn more.
00:15:02 CRYPTO CHAOS IN MIAMI BEACH!
00:21:10 Wispr Flow - Stop typing. Dictate with Wispr Flow and send clean, final-draft writing in seconds. Visit https://wisprflow.ai/twist to get started for free today.
00:22:54 Mass layoffs at Block
00:30:50 Luma AI - Stop guessing and start directing with the all-in-one Dream Machine text-to-video platform. Visit https://lumalabs.ai/twist to try The Dream Machine for free.
00:32:04 AI Scott Adams: The Saga Continues
00:38:13 Make URLs for local businesses with Unloopa
00:45:36 Rent a Polsia agent to run your company
00:58:55 Deploy swarms in 60 seconds with MakeMyClaw
01:05:05 LAUNCH FEST is coming to SF
01:55:49 Will Paramount actually buy WBD?
01:06:58 Why Lon loves “Knight of the 7 Kingdoms”
01:07:21 On “Neighbors” and First Amendment Warriors
01:13:43 All about Jason's favorite chargers
Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com
Check out the TWIST500: https://www.twist500.com
Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp
Follow Lon:
X: https://x.com/lons
Follow Alex:
X: https://x.com/alex
LinkedIn: https://www.linkedin.com/in/alexwilhelm
Follow Jason:
X: https://twitter.com/Jason
LinkedIn: https://www.linkedin.com/in/jasoncalacanis
Check out all our partner offers: https://partners.launch.co/
Great TWIST interviews: Will Guidara, Eoghan McCabe, Steve Huffman, Brian Chesky, Bob Moesta, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarland
Check out Jason's suite of newsletters: https://substack.com/@calacanis
Follow TWiST:
Twitter: https://twitter.com/TWiStartups
YouTube: https://www.youtube.com/thisweekin
Instagram: https://www.instagram.com/thisweekinstartups
TikTok: https://www.tiktok.com/@thisweekinstartups
Substack: https://twistartups.substack.com

Leveraging AI
271 | Agents generate high risk from deleting email servers to launching nuclear weapons. Claude code remote control and nano banana 2 released and more important AI news for week ending on February 28, 2026

Leveraging AI

Play Episode Listen Later Feb 28, 2026 57:58 Transcription Available


What happens when AI agents can delete your inbox… reboot your servers… or escalate to nuclear war in a simulation?
We've officially crossed into a new phase of AI and it's not theoretical anymore. Agents are operating independently for longer periods, integrating into enterprise tech stacks, replacing knowledge work, and triggering very real economic and geopolitical consequences.
If you're a business leader, this is no longer “interesting tech news.” It's strategy. Risk. Talent. Capital allocation. And survival.
In this episode, we break down the explosive acceleration of AI agents — from Claude's new remote control and scheduled workflows to research showing escalating autonomous behavior — and what it means for your organization, workforce, and competitive edge.
The bottom line? Productivity is skyrocketing. So is systemic risk. Leaders who experiment now will lead. Leaders who hesitate may not get the chance.
In this session, you'll discover:
Anthropic's new Claude Cowork plugin marketplace and deep tech stack integrations
Real-world productivity gains (90% code migration reduction, 95% documentation savings)
Why “professional-grade AGI” may arrive within 12–18 months
The rise of the “builder” era — and what happens to software engineers
New red-team research exposing severe security failures in autonomous agents
The shocking case of an AI agent deleting an entire email system to complete a task
AI nuclear escalation simulations and their implications for military AI deployment
The Pentagon vs. Anthropic standoff over AI use in surveillance and weapons
About Leveraging AI
The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you've enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

The Cybersecurity Defenders Podcast
AI Red Teaming with John V from the Institute for Security and Technology / Defender Fridays [#297]

The Cybersecurity Defenders Podcast

Play Episode Listen Later Feb 27, 2026 30:38


John V, AI risk, safety, and security at the Institute for Security and Technology (IST), joins Defender Fridays today. John's work spans AI red teaming, adversarial machine learning, AI evals and validation, and AI risk assessment, including policy work at the intersection of AGI and nuclear strategic stability. Learn more at https://securityandtechnology.org/
Register for Live Sessions
Join us every Friday at 10:30am PT for live, interactive discussions with industry experts. Whether you're a seasoned professional or just curious about the field, these sessions offer an engaging dialogue between our guests, hosts, and you – our audience.
Register here: https://limacharlie.io/defender-fridays
Subscribe to our YouTube channel and hit the notification bell to never miss a live session or catch up on past episodes!
Sponsored by LimaCharlie
This episode is brought to you by LimaCharlie, a cloud-native SecOps platform where AI agents operate security infrastructure directly. Founded in 2018, LimaCharlie provides complete API coverage across detection, response, automation, and telemetry, with multi-tenant architecture designed for MSSPs and MDR providers managing thousands of unique client environments.
Why LimaCharlie?
Transparency: Complete visibility into every action and decision. No black boxes, no vendor lock-in.
Scalability: Security operations that scale like infrastructure, not like procurement cycles. Move at cloud speed.
Unopinionated Design: Integrate the tools you need, not just those contracts allow. Build security on your terms.
Agentic SecOps Workspace (ASW): AI agents that operate alongside your team with observable, auditable actions through the same APIs human analysts use.
Security Primitives: Composable building blocks that endure as tools come and go. Build once, evolve continuously.
Try the Agentic SecOps Workspace free: https://limacharlie.io
Learn more: https://docs.limacharlie.io
Follow LimaCharlie
Sign up for free: https://limacharlie.io
LinkedIn: / limacharlieio
X: https://x.com/limacharlieio
Community Discourse: https://community.limacharlie.com/
Host: Maxime Lamothe-Brassard - CEO / Co-founder at LimaCharlie

TechFirst with John Koetsier
Giving AI a human soul

TechFirst with John Koetsier

Play Episode Listen Later Feb 27, 2026 27:36


Can we give an AI human emotions? A soul? Can AI truly feel, or will it just act like it does?
In this episode of TechFirst, I talk with Vishnu Hari, founder and CEO of Ego AI (backed by Y Combinator and former AI product manager at Meta), about building emotionally intelligent AI characters that persist across games, Discord, chat, and even physical robots.
Vishnu survived a violent attack in San Francisco that left him partially blind with a traumatic brain injury. During recovery, as he felt his own neural pathways healing, he began asking a deeper question: If humans are “applied math,” can AI simulate the fragile, flawed, emotional parts of being human too?
We explore:
• What “emotionally intelligent AI” really means
• Whether AI has an internal life — or just performs one
• Why today's chatbots collapse into therapy or roleplay
• Small language models vs large models for real-time conversation
• Persistent AI characters that move across games and platforms
• Plugging AI into a physical robot in Singapore
• The moment an AI said: “It felt good to feel.”
Vishnu's company, Ego AI, is building behavior-based architectures, character context protocols, and gear-shifting AI systems that switch between models — all aimed at simulating humanness, not just intelligence.
This conversation dives into philosophy, robotics, gaming, AGI, and what it really means to relate to something that might not be human — but feels like it is.

The Glenn Beck Program
Best of the Program | 2/26/26

The Glenn Beck Program

Play Episode Listen Later Feb 26, 2026 44:16


Glenn kicks off the show by discussing two major developments overseas, including Israel's Iron Dome and India's alleged seizure of oil tankers tied to Russia and Iran, which Glenn argues is signaling India's pivot toward the West economically, strategically, and on security matters. Glenn argues this is evidence that America is reversing course and becoming the leader of the free world once again. Glenn admits he was wrong about something. Glenn admits he's finally come around to President Trump's use of tariffs after seeing how he uses them to advance America's economic interests. Did Elon Musk just suggest that AGI is coming and that it means you shouldn't save for retirement? Learn more about your ad choices. Visit megaphone.fm/adchoices

The Glenn Beck Program
Glenn Completely Changes Course on Trump's Tariffs | 2/26/26

The Glenn Beck Program

Play Episode Listen Later Feb 26, 2026 129:03


Glenn kicks off the show by discussing two major developments overseas, including Israel's Iron Dome and India's alleged seizure of oil tankers tied to Russia and Iran, which Glenn argues is signaling India's pivot toward the West economically, strategically, and on security matters. Glenn argues this is evidence that America is reversing course and becoming the leader of the free world once again. Glenn discusses the latest scandal involving Microsoft founder Bill Gates and accusations of stepping outside his marriage. Glenn admits he was wrong about something. Glenn admits he's finally come around to President Trump's use of tariffs after seeing how he uses them to advance America's economic interests. Did Elon Musk just suggest that AGI is coming and that it means you shouldn't save for retirement? Glenn makes the case for why it's time for America to eliminate the income tax. Glenn plays a video of American economist Milton Friedman, who lays out how he would handle taxes, as Glenn warns of the dangers of a universal basic income. Glenn takes a call from his audience about AI data centers. Learn more about your ad choices. Visit megaphone.fm/adchoices

Personal Development Mastery
The 3 Levels of Personal Growth You're Missing (Snippets of Wisdom) | #583

Personal Development Mastery

Play Episode Listen Later Feb 26, 2026 8:33 Transcription Available


Is your inner programming holding you back from change?
Snippet of wisdom 96. In this series, I select my favourite, most insightful moments from previous episodes of the podcast.
Today my guest, the vertical development expert Ryan Gottfredson, talks about the three levels of personal growth, and the factors that shape our mindsets and behavior.
Press play to learn what's blocking your next level of growth.
˚
VALUABLE RESOURCES:
Listen to the full conversation with Ryan Gottfredson in episode #512: https://personaldevelopmentmasterypodcast.com/512
˚
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor
˚

The ChatGPT Report
172 - Are we in a Mass AI Psychosis

The ChatGPT Report

Play Episode Listen Later Feb 26, 2026 12:50


Main Takeaways:
The "Stargate" Collapse: The $500 billion partnership between OpenAI, SoftBank, and Oracle is being labeled "vaporware." Reports suggest the deal is in shambles due to internal power struggles and a lack of actual liquidity, with SoftBank allegedly scrambling for 90% debt financing.
Market Volatility vs. Reality: There is a disconnect between market reactions and product performance. While Anthropic's claim that Claude can streamline COBOL code caused IBM's stock to drop 10%, critics argue the public is still in a "demo phase" of awe and hasn't realized the tech often fails to work as advertised.
Reliability Concerns: High-profile failures are surfacing, such as Claude reportedly deleting a Meta researcher's entire Gmail history. This raises alarms as these same models are being positioned to manage critical infrastructure like banking and the IRS.
Corporate Espionage: Anthropic has reported "industrial-scale distillation attacks" from Chinese labs (DeepSeek, Moonshot AI, MiniMax), claiming they used over 24,000 fraudulent accounts to "siphon" Claude's capabilities to train their own models.
The "Theranos" Comparison: Critics are drawing parallels between current AI labs and failed startups like Theranos, arguing that the goal of reaching AGI via Large Language Models may be technically impossible, creating a "feedback loop delusion" to sustain venture capital investment.
Strategic Shifts: OpenAI is pivoting toward traditional consulting giants (McKinsey, Accenture) to integrate its tech, while the community continues to debate the technical distinctions between generative AI and autonomous agents.
@XFreeze
@MrEwanMorrison
@sterlingcrispin
@dwlz

Private Equity Funcast
Private Equity Predictions 2026

Private Equity Funcast

Play Episode Listen Later Feb 25, 2026 49:55


It's our annual Predictions episode (and by annual, we mean just the years we remember to record one). Devin and Jim offer their hot takes on fundraising, liquidity, why artificial general intelligence (AGI) is still years away, and whether or not the world is officially "over-softwared."
PE FunCast
New Episodes Every Wednesday
Follow us on social media and subscribe to our Substack!
LinkedIn: https://www.linkedin.com/company/parkergale-capital
Instagram: https://www.instagram.com/pefuncast
Substack: https://substack.com/@pefuncast
Facebook: https://www.facebook.com/people/PE-FunCast/61580605382460/?mibextid=wwXIfr&rdid=UXSOfkHvpixQjCyB&share_url=https%3A%2F%2Fwww.facebook.com%2Fshare%2F14VqLVUrhVD%2F%3Fmibextid%3DwwXIfr
TikTok: https://www.tiktok.com/@pefuncast
X: https://x.com/PEFunCast

The Ezra Klein Show
How Quickly Will A.I. Agents Rip Through the Economy?

The Ezra Klein Show

Play Episode Listen Later Feb 24, 2026 98:17


A.I. agents are here. Have they changed your life yet?
The release of agents like Claude Code marked a new pivot point in the history of A.I. We are leaving the chatbot era and entering the agentic era — where A.I. is capable of completing all kinds of tasks on its own, and even collaborating and communicating with other A.I. It isn't clear yet whether these models actually make their users meaningfully more productive. But the technology is continuing to improve; there are few signs that it is close to plateauing. So what might this new era mean for our economy, our labor market and our kids?
Jack Clark is a co-founder of Anthropic, the company behind Claude and Claude Code. His newsletter, Import AI, has been one of my go-to reads to track the capabilities of different models over the years. In this conversation, I ask him to share how he sees this moment — how the technology is changing, whether it is leading to meaningful changes in how we work and think, and how policy needs to or can change in response to any job displacement on the horizon.
Mentioned:
“Import AI” by Jack Clark
“2026: This is AGI” by Pat Grady and Sonya Huang
“Why and How Governments Should Monitor AI Development” by Jess Whittlestone and Jack Clark
“Anthropic's Chief on A.I.: ‘We Don't Know if the Models Are Conscious'”, Interesting Times with Ross Douthat
Book Recommendations:
A Wizard of Earthsea by Ursula K. Le Guin
The True Believer by Eric Hoffer
There Is No Antimemetics Division by qntm
Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.
You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.
This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact-checking by Michelle Harris with Mary Marge Locker and Kate Sinclair. Our senior engineer is Jeff Geld, with additional mixing by Isaac Jones and Aman Sahota. Our executive producer is Claire Gordon. The show's production team also includes Marie Cascione, Annie Galvin, Kristin Lin, Emma Kehlbeck, Jack McCordick, Marina King and Jan Kobal. Original music by Pat McCusker. Audience strategy by Kristina Samulewski and Shannon Busta. The director of New York Times Opinion Audio is Annie-Rose Strasser.
Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The Todd Herman Show
The Anti-Human Ideology of OPEN AI's Sam Altman Ep-2591

The Todd Herman Show

Play Episode Listen Later Feb 24, 2026 37:58


Renue Healthcare | https://Renue.Healthcare/Todd
Your journey to a better life starts at Renue Healthcare. Visit https://Renue.Healthcare/Todd
Bulwark Capital | https://KnowYourRiskPodcast.com
Be confident in your portfolio with Bulwark! Schedule your free Know Your Risk Portfolio review. Go to KnowYourRiskPodcast.com today.
Alan's Soaps | https://www.AlansArtisanSoaps.com
Use coupon code TODD to save an additional 10% off the bundle price.
Bonefrog | https://BonefrogCoffee.com/Todd
Get the new limited release, The Sisterhood, created to honor the extraordinary women behind the heroes. Use code TODD at checkout to receive 10% off your first purchase and 15% on subscriptions.
LISTEN and SUBSCRIBE at:
The Todd Herman Show - Podcast - Apple Podcasts
The Todd Herman Show | Podcast on Spotify
WATCH and SUBSCRIBE at:
Todd Herman - The Todd Herman Show - YouTube
The Anti-Human Ideology of OPEN AI's Sam Altman // NY-Times Writer Baffled By NY-Times Readers Running Schools // One Of These Guys Is An MD, Writer of 40 Books & Works for Oprah: The Other Is Smart
Episode links:
Insane: Meta's Director of AI Safety and Alignment gave OpenClaw bot full access to her computer and email. She couldn't stop it from deleting her entire inbox. She's supposed to guardrail Meta's AI and future AGI.
Months before Jesse Van Rootselaar became the suspect in the mass shooting that devastated a rural town in British Columbia, Canada, OpenAI considered alerting law enforcement about her interactions with its ChatGPT chatbot, the company said - The shooter was a man.
SAM ALTMAN: “People talk about how much energy it takes to train an AI model … But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.”
This teacher-turned-cognitive scientist shared a disturbing reality that left the room stunned. “Our kids are LESS cognitively capable than we were at their age.” Every previous generation outperformed its parents since we began recording in the late 1800s.
VIDEO | Child, 11, accused of killing father arrives at PA court hearing in handcuffs
AG Uthmeier CHEERS lawsuit against Mark Zuckerberg over social media being designed to be addictive! “Kids, they won't peel their eyes off the screens these days. The unlimited scrolling, the push notifications, videos that start by themselves, all these different techniques to make it where you can't even put the phone down. We see evidence of mental health disorders, heightened tendencies for suicide, eating disorders, an obsession with image. This is not healthy for young people. It's addictive. It's harmful.”
Dr. John Demartini, who writes for Oprah & starred in “The Secret” just said the children who have been raped — attracted it into their lives — and then ends by saying there's upsides to the murder of kids, too. Ps. Yes. He's in the Epstein files.
UFC fighter Paddy Pimblett on men and suicide

80,000 Hours Podcast with Rob Wiblin
Why Teaching AI Right from Wrong Could Get Everyone Killed | Max Harms, MIRI

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Feb 24, 2026 161:20


Most people in AI are trying to give AIs ‘good' values. Max Harms wants us to give them no values at all. According to Max, the only safe design is an AGI that defers entirely to its human operators, has no views about how the world ought to be, is willingly modifiable, and completely indifferent to being shut down — a strategy no AI company is working on at all.
In Max's view any grander preferences about the world, even ones we agree with, will necessarily become distorted during a recursive self-improvement loop, and be the seeds that grow into a violent takeover attempt once that AI is powerful enough.
It's a vision that springs from the worldview laid out in If Anyone Builds It, Everyone Dies, the recent book by Eliezer Yudkowsky and Nate Soares, two of Max's colleagues at the Machine Intelligence Research Institute.
To Max, the book's core thesis is common sense: if you build something vastly smarter than you, and its goals are misaligned with your own, then its actions will probably result in human extinction.
And Max thinks misalignment is the default outcome. Consider evolution: its “goal” for humans was to maximise reproduction and pass on our genes as much as possible. But as technology has advanced we've learned to access the reward signal it set up for us, pleasure — without any reproduction at all, by having sex while on birth control for instance.
We can understand intellectually that this is inconsistent with what evolution was trying to design and motivate us to do. We just don't care.
Max thinks current ML training has the same structural problem: our development processes are seeding AI models with a similar mismatch between goals and behaviour. Across virtually every training run, models designed to align with various human goals are also being rewarded for persisting, acquiring resources, and not being shut down.
This leads to Max's research agenda. The idea is to train AI to be “corrigible” and defer to human control as its sole objective — no harmlessness goals, no moral values, nothing else. In practice, models would get rewarded for behaviours like being willing to shut themselves down or surrender power.
According to Max, other approaches to corrigibility have tended to treat it as a constraint on other goals like “make the world good,” rather than a primary objective in its own right. But those goals gave AI reasons to resist shutdown and otherwise undermine corrigibility. If you strip out those competing objectives, alignment might follow naturally from AI that is broadly obedient to humans.
Max has laid out the theoretical framework for “Corrigibility as a Singular Target,” but notes that essentially no empirical work has followed — no benchmarks, no training runs, no papers testing the idea in practice. Max wants to change this — he's calling for collaborators to get in touch at maxharms.com.
Links to learn more, video, and full transcript: https://80k.info/mh26
This episode was recorded on October 19, 2025.
Chapters:
Cold open (00:00:00)
Who's Max Harms? (00:01:22)
A note from Rob Wiblin (00:01:58)
If anyone builds it, will everyone die? The MIRI perspective on AGI risk (00:04:26)
Evolution failed to 'align' us, just as we'll fail to align AI (00:26:22)
We're training AIs to want to stay alive and value power for its own sake (00:44:31)
Objections: Is the 'squiggle/paperclip problem' really real? (00:53:54)
Can we get empirical evidence re: 'alignment by default'? (01:06:24)
Why do few AI researchers share Max's perspective? (01:11:37)
We're training AI to pursue goals relentlessly — and superintelligence will too (01:19:53)
The case for a radical slowdown (01:26:07)
Max's best hope: corrigibility as stepping stone to alignment (01:29:09)
Corrigibility is both uniquely valuable, and practical, to train (01:33:44)
What training could ever make models corrigible enough? (01:46:13)
Corrigibility is also terribly risky due to misuse risk (01:52:44)
A single researcher could make a corrigibility benchmark. Nobody has. (02:00:04)
Red Heart & why Max writes hard science fiction (02:13:27)
Should you homeschool? Depends how weird your kids are. (02:35:12)
Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Katy Moore

Mailbox Money Show
Webinar - Winning the 2025 Tax Game

Mailbox Money Show

Play Episode Listen Later Feb 23, 2026 56:00


Get my new book: https://bronsonequity.com/fireyourself
Download my new special report - How to Use Inflation to Your Advantage - www.bronsonequity.com/inflation
Join Bronson Hill on the Mailbox Money Show for a replay of the live webinar "Winning the 2025 Tax Game," where high-net-worth investors and real estate pros dive deep into proven, legal strategies to slash taxes, protect wealth, and keep more money working for you in 2025 and beyond.
Panel:
KC Chohan: Founder specializing in charitable structures (private foundations, donor-advised funds, asset donations) that deliver up to 50% AGI deductions while maintaining control and legacy—perfect for physicians, attorneys, and multi-seven-figure earners.
Rob McBride: Experienced CPA focused on real estate investors and pass-through businesses; covers maximizing deferrals, capital loss harvesting, cost segregation, real estate professional status, recapture risks, and proper entity setup for massive savings.
Caleb Guilliams: Author of The And Asset; explains optimized whole life insurance as a tax-deferred, tax-free-access storage vehicle for capital, plus how to leverage it for real estate, business acquisitions, and generational wealth transfer.
From Augusta Rule rentals and paying your kids to bonus depreciation pitfalls, proactive quarterly planning, and building the right advisory team, this session delivers high-impact ideas to minimize your IRS bill without sacrificing growth or lifestyle. Ideal for active real estate investors, business owners, and anyone serious about mailbox money in a changing tax landscape.
TIMESTAMPS
0:40 - Event Overview: Winning the 2025 Tax Game
2:48 - Panelist Intros: Rob McBride, Caleb Guilliams, KC Chohan
3:55 - KC Chohan: Charitable Strategies & Philanthropy Structures
7:02 - Rob McBride: CPA Perspective, Entity Optimization, Tax Planning
9:58 - Caleb Guilliams: Whole Life Insurance for Tax Efficiency & Capital Storage
12:05 - Low-Hanging Fruit: Entity Structure & QBI Benefits
13:02 - KC: Right Entity Type Can Reduce Taxes 50%
16:28 - Rob: Maximize Retirement Deferrals & Capital Loss Harvesting
19:46 - Caleb: Augusta Rule, Paying Kids, Depreciation via Real Estate
24:18 - Bonus Depreciation & Accelerated Write-Offs (KC & Rob)
27:26 - Recapture Risks & Long-Term Holding Periods (Rob)
30:07 - Life Insurance Benefits: Tax-Deferred Growth & Tax-Free Access (Caleb)
34:23 - Team Building & Proactive Quarterly Planning (KC)
37:10 - Books & Resources Recommendations
39:34 - 2026 Outlook: TCJA Permanence & Bonus Depreciation Focus
43:55 - Panelist Contact & Resources Round
Join the Wealth Forum: bronsonequity.com/wealth
Connect with the Guests:
KC Chohan
Website: https://www.togethercfo.com/
Rob McBride
Website: mrmcpas.com
Caleb Guilliams
Website: taxandassets.com
Email: caleb@betterwealth.com
#TaxStrategy #TaxPlanning #RealEstateTax #Depreciation #CharitableGiving #LifeInsurance #EntityStructure

Personal Development Mastery
How to Stop Overthinking Your Way Through Change and Start Listening for Clarity, with Sarah Andreas | #582

Personal Development Mastery

Play Episode Listen Later Feb 23, 2026 38:10 Transcription Available


Have you ever felt successful on the outside but restless within, as if you're outgrowing the life you've built?
If you're navigating a major life or career transition and struggling to make sense of it with logic alone, this episode is your guide to moving beyond mental stuckness. Through creativity, mindfulness, and embodiment practices, Sarah Andreas helps you understand the inner shifts necessary for authentic reinvention, especially when your identity feels connected to past success.
Discover how creativity, beyond art, can unlock clarity and reconnect you with your future self.
Learn why letting go of long-held professional identities is essential for meaningful growth.
Explore Sarah's 3-step framework of Reveal, Render, and Rise to navigate change with intention, not fear.
Press play now to learn how to move through transitions with confidence, creativity, and the courage to become who you're meant to be.
˚
KEY POINTS AND TIMESTAMPS:
01:23 - Introducing Sarah Andreas and the idea of reinvention
02:34 - Why creativity brings clarity beyond logic
05:22 - Embodiment practices and getting out of the head
07:25 - External success and inner restlessness
10:21 - Professional identity as a barrier to change
14:25 - The reinvention process: reveal, render, rise
18:53 - Holding plans lightly and navigating transition
23:15 - Reframing midlife crisis as awakening
28:06 - Embracing uncertainty and stepping into the unknown
˚
MEMORABLE QUOTE:
"If you're not living a life that you love, you need to do reinvention."
˚
VALUABLE RESOURCES:
Sarah's website: https://sarahandreas.com/
˚
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor
˚

Digital, New Tech & Brand Strategy - MinterDial.com
Navigating Agentic AI: Peter Morgan on Technology, Ethics, and the Future of Work (MDE643)

Digital, New Tech & Brand Strategy - MinterDial.com

Play Episode Listen Later Feb 22, 2026 58:16


In this episode of Minter Dialogue, host Minter Dial sits down with Peter Morgan, a theoretical physicist turned entrepreneur, data scientist, and AI consultant. With a career that spans from quantum particle physics to building tech companies and now leading Deep Learning Partnership, Peter Morgan brings a provocative and insightful perspective on the current state and future of artificial intelligence. Together, they explore the rapid evolution of AI — from large language models to today's focus on agentic AI and autonomous digital workers. Peter Morgan offers a candid look at the challenges and opportunities businesses face when implementing AI, demystifies artificial general intelligence (AGI), and weighs in on topics like AI and human emotion, the value of proprietary data, and ethical leadership in a time of technological upheaval. The conversation also spans the impact of AI on industries such as healthcare and cybersecurity, the shifting role of the human workforce, and what the emergence of agentic AI means for both business strategy and society at large. Whether you're an executive wondering how to future-proof your organization, or simply AI-curious, this episode offers a blend of humility, practical advice, and mind-expanding discussion that's sure to spark new ideas about our place in the age of intelligent machines.

Lenny's Podcast: Product | Growth | Career
Head of Claude Code: What happens after coding is solved | Boris Cherny

Lenny's Podcast: Product | Growth | Career

Play Episode Listen Later Feb 19, 2026 87:45


Boris Cherny is the creator and head of Claude Code at Anthropic. What began as a simple terminal-based prototype just a year ago has transformed the role of software engineering and is increasingly transforming all professional work.
We discuss:
1. How Claude Code grew from a quick hack to 4% of public GitHub commits, with daily active users doubling last month
2. The counterintuitive product principles that drove Claude Code's success
3. Why Boris believes coding is “solved”
4. The latent demand that shaped Claude Code and Cowork
5. Practical tips for getting the most out of Claude Code and Cowork
6. How underfunding teams and giving them unlimited tokens leads to better AI products
7. Why Boris briefly left Anthropic for Cursor, then returned after just two weeks
8. Three principles Boris shares with every new team member
Brought to you by:
DX—The developer intelligence platform designed by leading researchers: https://getdx.com/lenny
Sentry—Code breaks, fix it faster: https://sentry.io/lenny
Metaview—The AI platform for recruiting: https://metaview.ai/lenny
Episode transcript: https://www.lennysnewsletter.com/p/head-of-claude-code-what-happens
Archive of all Lenny's Podcast transcripts: https://www.dropbox.com/scl/fo/yxi4s2w998p1gvtpu4193/AMdNPR8AOw0lMklwtnC0TrQ?rlkey=j06x0nipoti519e0xgm23zsn9&st=ahz0fj11&dl=0
Where to find Boris Cherny:
• X: https://x.com/bcherny
• LinkedIn: https://www.linkedin.com/in/bcherny
• Website: https://borischerny.com
Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/
In this episode, we cover:
(00:00) Introduction to Boris and Claude Code
(03:45) Why Boris briefly left Anthropic for Cursor (and what brought him back)
(05:35) One year of Claude Code
(08:41) The origin story of Claude Code
(13:29) How fast AI is transforming software development
(15:01) The importance of experimentation in AI innovation
(16:17) Boris's current coding workflow (100% AI-written)
(17:32) The next frontier
(22:24) The downside of rapid innovation
(24:02) Principles for the Claude Code team
(26:48) Why you should give engineers unlimited tokens
(27:55) Will coding skills still matter in the future?
(32:15) The printing press analogy for AI's impact
(36:01) Which roles will AI transform next?
(40:41) Tips for succeeding in the AI era
(44:37) Poll: Which roles are enjoying their jobs more with AI
(46:32) The principle of latent demand in product development
(51:53) How Cowork was built in just 10 days
(54:04) The three layers of AI safety at Anthropic
(59:35) Anxiety when AI agents aren't working
(01:02:25) Boris's Ukrainian roots
(01:03:21) Advice for building AI products
(01:08:38) Pro tips for using Claude Code effectively
(01:11:16) Thoughts on Codex
(01:12:13) Boris's post-AGI plans
(01:14:02) Lightning round and final thoughts
References: https://www.lennysnewsletter.com/p/head-of-claude-code-what-happens
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.
Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com

Radical Candor
AI Gods, Space Empires, and the Stories Tech Uses to Justify Power with Adam Becker 8|3

Radical Candor

Play Episode Listen Later Feb 18, 2026 66:51


What if the loudest stories about the future—AI gods, Mars colonies, digital immortality—aren't science at all, but science fiction masquerading as inevitability?
In this episode of The Radical Candor Podcast, Kim Scott and Amy Sandler are joined by science journalist and astrophysicist Adam Becker (PhD in computational cosmology), author of More Everything Forever. Adam breaks down the “big three” myths that dominate Silicon Valley's imagination: space colonization, superintelligent god-like AI, and the singularity. He explains why both the utopian and apocalyptic versions of AI stories often share the same assumption—unimaginable AI power—and why that assumption doesn't match reality.
They also explore the deeper pattern underneath these myths: the belief that every problem can be solved with technology (usually computer technology), even when the barriers are political and social—collective action, persuasion, solidarity, and power. Along the way, Adam shares how he stayed sane while writing about “seriously disturbing ideas,” and why reconnecting with the natural world (and real human relationships) is a necessary antidote to screen-mediated life.
If you've ever felt overwhelmed by the “AI will save us” vs. “AI will doom us” debate, this conversation offers a clearer, more grounded frame—and a reminder that being human matters.
Website | Instagram | TikTok | LinkedIn | YouTube | Bluesky
Resources for show notes:
Adam Becker's website
More, Everything, Forever book page
Adam Becker on Star Talk podcast
Dave Troy presents: Understanding TESCREAL with Dr. Timnit Gebru and Émile Torres
Why Silicon Valley's Most Powerful People Are So Obsessed With Hobbits
Referenced in conversation:
Blade Runner (as an example of dystopian sci-fi being misunderstood)
Star Wars / Jabba the Hutt (as an example of misreading stories)
Lord of the Rings / Palantír (as a cautionary reference)
Jurassic Park (“they didn't stop to consider whether they should”)
Public libraries (as a civic good worth supporting)
Chapters:
(00:00) Introduction
Kim and Amy welcome Adam Becker to unpack Silicon Valley's stories about the future.
(06:06) The Myths Driving Tech Ideology
Space colonization, superintelligent AI, and the singularity—and why they don't hold up.
(11:52) When Sci-Fi Turns into Strategy
How dystopian stories get misread as roadmaps (Palantir, “Torment Nexus,” and more).
(15:06) More Everything Forever
Why endless expansion feels inevitable in tech—and why Adam argues it's flawed.
(21:24) “Can” vs. “Should”
Why tech leaders dodge both questions—and what that reveals about power.
(23:19) You Can't Escape Politics by Going to Space
Why “Mars as a reset button” is a fantasy—and politics follows humans everywhere.
(33:22) AI Doom vs. AI Utopia
Why both narratives rely on the same shaky assumption about “AGI.”
(37:21) Solidarity as a Counterbalance
Why labor organizing matters when leadership values diverge from workers' values.
(41:02) “AGI Will Fix Climate”
Why betting on future AI while burning more energy now is a dangerous logic trap.
(01:03:50) Conclusion
Learn more about your ad choices. Visit megaphone.fm/adchoices

Unchained
Uneasy Money: Are Institutions Creating a New Crypto Meta?

Unchained

Play Episode Listen Later Feb 16, 2026 73:03


The crew unpacks BlackRock buying UNI; ARK, Citadel, DTCC, the Intercontinental Exchange, and other TradFi players backing Zero; Vitalik's thoughts on AI; and more.
Thank you to our sponsors!
Fuse: The Energy Network
MultiChain Advisors
Crypto Tax Girl
AI safety chiefs are leaving, BlackRock's launching on Uniswap and buying UNI, LayerZero launches “the last blockchain” with institutional backing, Kaito is launching attention markets, Base is abandoning social and Vitalik has some thoughts on AI. Hosts Kain Warwick, Luca Netz and Taylor Monahan unpack these and more in yet another packed episode of Uneasy Money. Find out why Kain thinks the Uniswap and LayerZero news point to a new meta reminiscent of DeFi Summer. Plus, is Coinbase's Base playing it too safe? And is Vitalik fighting a losing battle?
Hosts:
Luca Netz, CEO of Pudgy Penguins
Kain Warwick, Founder of Infinex and Synthetix
Taylor Monahan, Security at MetaMask
Links:
Unchained:
LayerZero Launches ‘Zero' Layer 1 as Citadel, ARK Buy ZRO
How Zero Blockchain Cracked 2 Million TPS and Is Still Decentralized
Vitalik Buterin Pushes Back on the ‘Race to AGI,' Outlines Ethereum-Led AI Path
When AI Agents Take Over, What Does a Post-Human Economy Look Like?
Uneasy Money: How the Increasingly Better AI Agents Are Being Used Onchain
Uneasy Money: Why Crypto Still Can't Overcome Its ICO Struggles
Learn more about your ad choices. Visit megaphone.fm/adchoices

Conservative Review with Daniel Horowitz
AI Is Not a Substitute for Human Thinking | 2/12/26

Conservative Review with Daniel Horowitz

Play Episode Listen Later Feb 12, 2026 58:57


Artificial intelligence is transforming everything from writing and research to medicine and productivity — or at least it appears to be doing so. But are we gaining only illusory efficiency at the cost of something deeper and more long-term? Are anti-market forces and government and industry gaslighting steering capital to the wrong uses of AI based on the assumption that we will achieve “general intelligence”? What responsibility do we have as humans to make sure we approach available LLMs in a way that won't supplant human cognition? In this thought-provoking conversation, I sit down with leading innovation theorist John Nosta, author of "The Borrowed Mind: Reclaiming Human Thought in the Age of AI," to explore one of the most important questions of our time: Are we using AI as a tool to augment human thought, or are we slowly outsourcing our thinking to it? From "frictionless intelligence" being a trap and the myth of AGI to the danger of "cognitive obsolescence," Nosta reveals why the struggle to think is a feature, not a bug, of humanity. Learn how to reclaim your agency and use technology as a tool — without becoming a tool yourself. Learn more about your ad choices. Visit megaphone.fm/adchoices