AI is transforming medicine at a speed never seen before. In this episode, you'll discover how digital twins and artificial intelligence will revolutionize drug discovery, eliminate human trials, and personalize your biology for longevity and high performance. Host Dave Asprey breaks down how AI can now simulate virtual cells and tissues, running clinical experiments in minutes instead of years to create truly individualized medicine. Watch this episode on YouTube for the full video experience: https://www.youtube.com/@DaveAspreyBPR Dr. Derya Unutmaz is a world-renowned immunologist, systems biologist, and professor at The Jackson Laboratory. With more than 150 scientific papers, he's a leading expert in immune system research and one of the first scientists to pioneer the concept of digital twins for biology. His groundbreaking work uses AI to model how immunity, metabolism, and aging interact—creating new possibilities for personalized medicine, disease prevention, and lifespan extension. Host Dave Asprey and Dr. Unutmaz reveal how AGI will soon outperform doctors, accelerate functional medicine, and optimize human biology far beyond today's standards. You'll learn how the immune system drives inflammation and aging, how to re-engineer it for resilience, and why compounds like GLP-1 and metformin may add years to your life. You'll Learn: • How digital twins will end human drug testing • Why AGI could replace doctors and computer jobs within five years • How AI models immune function, metabolism, and aging • The role of mitochondria and inflammation in longevity • How GLP-1 drugs and metformin extend lifespan • What continuous biological monitoring means for health tracking • How AI is transforming functional medicine and personalized care • Why NAD and energy metabolism are key to human performance They explore how artificial intelligence, biohacking, and systems biology intersect to create a smarter approach to health and longevity. You'll also learn how understanding immune balance, metabolism, and mitochondrial function helps build resilience and extend your lifespan. This is essential listening for anyone serious about biohacking, hacking human performance, and extending longevity through personalized medicine, functional biology, and cutting-edge AI innovation. Dave Asprey is a four-time New York Times bestselling author, founder of Bulletproof Coffee, and the father of biohacking. With over 1,000 interviews and 1 million monthly listeners, The Human Upgrade brings you the knowledge to take control of your biology, extend your longevity, and optimize every system in your body and mind. Each episode delivers cutting-edge insights in health, performance, neuroscience, supplements, nutrition, biohacking, emotional intelligence, and conscious living. New episodes are released every Tuesday, Thursday, Friday, and Sunday (BONUS). Dave asks the questions no one else will and gives you real tools to become stronger, smarter, and more resilient.
Keywords: AI medicine, Digital twins, Functional medicine, Biohacking, Longevity, Immune system, Inflammation, Personalized medicine, GLP-1 therapy, Metformin, NAD boosters, Mitochondrial function, Metabolism, AGI, Clinical trials, Human performance, Aging research, Systems biology, Immunology, Smarter Not Harder. Thank you to our sponsors! BrainTap | Go to http://braintap.com/dave to get $100 off the BrainTap Power Bundle. MASA Chips | Go to https://www.masachips.com/DAVEASPREY and use code DAVEASPREY for 25% off your first order. Our Place | Head to https://fromourplace.com/ and use the code DAVE for 10% off your order. ARMRA | Go to https://tryarmra.com/ and use the code DAVE to get 15% off your first order. Resources: • Keep up with Derya's work: https://x.com/derya_?lang=en • Business of Biohacking Summit | Register to attend October 20-23 in Austin, TX https://businessofbiohacking.com/ • Danger Coffee: https://dangercoffee.com/discount/dave15 • Dave Asprey's BEYOND Conference: https://beyondconference.com • Dave Asprey's New Book – Heavily Meditated: https://daveasprey.com/heavily-meditated • Upgrade Collective: https://www.ourupgradecollective.com • Upgrade Labs: https://upgradelabs.com • 40 Years of Zen: https://40yearsofzen.com Timestamps: 00:00 — Trailer 01:25 — Intro 02:26 — AI's Role in Extending Lifespan 02:56 — Regulatory Frameworks and Medical Adoption 05:19 — Problems with the Immune System 08:19 — Chronic Fatigue and Long COVID Research 10:32 — Modern Testing and Multi-Omic Analysis 14:07 — Personal Longevity Strategy and Supplements 15:17 — Understanding Exhausted Cells 23:43 — Personalization in Medicine and AI Analysis 31:35 — Longevity Escape Velocity 36:13 — AI Doctors and Prescriptions 39:55 — Data Quality Concerns in AI Training 43:19 — The Future of Wearable Technology 45:50 — Revolutionizing Education with AI 49:04 — The Future of Higher Education 52:03 — Future of Work and AI Agents See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
What if the smallest action, like opening a book, could change the direction of your entire day? We all have something we've put off starting: a book on the shelf, a project, a call, or even a lifestyle change. In this short yet powerful solo episode, Agi explores the gentle but profound shift that happens when we stop waiting for the perfect time and just begin, even with the smallest step. Discover the meaning behind the beautiful Japanese concept of 'tsundoku', and why it's more empowering than you think. Reframe procrastination as patience and learn how to use that moment of starting to your advantage. Get inspired to take action on that one thing you've been meaning to do, with a mindset shift that makes starting feel effortless. Press play now to spark momentum with just two intentional minutes that could change your day.˚VALUABLE RESOURCES: Your free book and weekly newsletter: https://personaldevelopmentmasterypodcast.com/88˚Get your podcast merchandise: https://personaldevelopmentmasterypodcast.com/store˚Support the show. Personal development podcast offering self-mastery and actionable wisdom for personal growth and living with purpose and fulfilment. A self improvement podcast with inspirational and actionable insights to help you cultivate emotional intelligence, build confidence, and embrace your purpose. Discover practical tools and success habits for self help, motivation, self mastery, mindset shifts, growth mindset, self-discipline, meditation, wellness, spirituality, personal mastery, self growth, and personal improvement. Personal development interviews and mindset podcast content empowering entrepreneurs, leaders, and seekers to nurture mental health, commit to self-improvement, and create meaningful success and lasting happiness. To support the show, click here.
(0:00) Introducing Joe Tsai (0:49) Owning the Nets and Liberty, Caitlin Clark's impact on the WNBA, does the NBA need fixing? (6:07) Alibaba origins, China's pullback on capitalism (10:10) US vs China rivalry: the AI race, are we destined for conflict, and what can the US learn from China? (19:46) AI application in large businesses (21:52) Managing corporate culture at Alibaba's scale, Nets predictions (23:17) AI adoption, job anxiety, and AGI views in China Thanks to our partners for making this happen! Solana - Solana is the high performance network powering internet capital markets, payments, and crypto applications. Connect with investors, crypto founders, and entrepreneurs at Solana's global flagship event during Abu Dhabi Finance Week & F1: https://solana.com/breakpoint OKX - The new way to build your crypto portfolio and use it in daily life. We call it the new money app. https://www.okx.com/ Google Cloud - The next generation of unicorns is building on Google Cloud's industry-leading, fully integrated AI stack: infrastructure, platform, models, agents, and data. https://cloud.google.com/ IREN - IREN AI Cloud, powered by NVIDIA GPUs, provides the scale, performance, and reliability to accelerate your AI journey. https://iren.com/ Oracle - Step into the future of enterprise productivity at Oracle AI Experience Live. https://www.oracle.com/artificial-intelligence/data-ai-events/ Circle - The America-based company behind USDC — a fully-reserved, enterprise-grade stablecoin at the core of the emerging internet financial system. https://www.circle.com/ BVNK - Building stablecoin-powered financial infrastructure that helps businesses send, store, and spend value instantly, anywhere in the world. https://www.bvnk.com/ Polymarket - https://www.polymarket.com/ Follow Joe Tsai: https://x.com/joetsai1999 Follow the besties: https://x.com/chamath https://x.com/Jason https://x.com/DavidSacks https://x.com/friedberg Follow on X: https://x.com/theallinpod Follow on Instagram: https://www.instagram.com/theallinpod Follow on TikTok: https://www.tiktok.com/@theallinpod Follow on LinkedIn: https://www.linkedin.com/company/allinpod Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg
Sam Altman has led OpenAI from its founding as a research nonprofit in 2015 to becoming the most valuable startup in the world ten years later. In this episode, a16z Cofounder Ben Horowitz and General Partner Erik Torenberg sit down with Sam to discuss the core thesis behind OpenAI's disparate bets, why they released Sora, how they use models internally, the best AI evals, and where we're going from here. Resources: Follow Sam on X: https://x.com/sama Follow OpenAI on X: https://x.com/openai Learn more about OpenAI: https://openai.com/ Try Sora: https://sora.com/ Follow Ben on X: https://x.com/bhorowitz Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends! Find a16z on X: https://x.com/a16z Find a16z on LinkedIn: https://www.linkedin.com/company/a16z Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711 Follow our host: https://x.com/eriktorenberg Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In this episode of Crazy Wisdom, host Stewart Alsop talks with Jared Zoneraich, CEO and co-founder of PromptLayer, about how AI is reshaping the craft of software building. The conversation covers PromptLayer's role as an AI engineering workbench, the evolving art of prompting and evals, the tension between implicit and explicit knowledge, and how probabilistic systems are changing what it means to "code." Stewart and Jared also explore vibe coding, AI reasoning, the black-box nature of large models, and what accelerationism means in today's fast-moving AI culture. You can find Jared on X @imjaredz and learn more or sign up for PromptLayer at PromptLayer.com. Check out this GPT we trained on the conversation. Timestamps: 00:00 – Stewart Alsop opens with Jared Zoneraich, who explains PromptLayer as an AI engineering workbench and discusses reasoning, prompting, and Codex. 05:00 – They explore implicit vs. explicit knowledge, how subject matter experts shape prompts, and why evals matter for scaling AI workflows. 10:00 – Jared explains eval methodologies, backtesting, hallucination checks, and the difference between rigorous testing and iterative sprint-based prompting. 15:00 – Discussion turns to observability, debugging, and the shift from deterministic to probabilistic systems, highlighting skill issues in prompting. 20:00 – Jared introduces "LM idioms," vibe coding, and context versus content—how syntax, tone, and vibe shape AI reasoning. 25:00 – They dive into vibe coding as a company practice, cloud code automation, and prompt versioning for building scalable AI infrastructure. 30:00 – Stewart reflects on coding through meditation, architecture planning, and how tools like Cursor and Claude Code are shaping AGI development. 35:00 – Conversation expands into AI's cultural effects, optimism versus doom, and critical thinking in the age of AI companions. 40:00 – They discuss philosophy, history, social fragmentation, and the possible decline of social media and liberal democracy. 45:00 – Jared predicts a fragmented but resilient future shaped by agents and decentralized media. 50:00 – Closing thoughts on AI-driven markets, polytheistic model ecosystems, and where innovation will thrive next. Key Insights: PromptLayer as AI Infrastructure – Jared Zoneraich presents PromptLayer as an AI engineering workbench—a platform designed for builders, not researchers. It provides tools for prompt versioning, evaluation, and observability so that teams can treat AI workflows with the same rigor as traditional software engineering while keeping flexibility for creative, probabilistic systems. Implicit vs. Explicit Knowledge – The conversation highlights a critical divide between what AI can learn (explicit knowledge) and what remains uniquely human (implicit understanding or "taste"). Jared explains that subject matter experts act as the bridge, embedding human nuance into prompts and workflows that LLMs alone can't replicate. Evals and Backtesting – Rigorous evaluation is essential for maintaining AI product quality. Jared explains that evals serve as sanity checks and regression tests, ensuring that new prompts don't degrade performance. He describes two modes of testing: formal, repeatable evals and more experimental sprint-based iterations used to solve specific production issues. Deterministic vs. Probabilistic Thinking – Jared contrasts the old, deterministic world of coding—predictable input-output logic—with the new probabilistic world of LLMs, where results vary and control lies in testing inputs rather than debugging outputs. This shift demands a new mindset: builders must embrace uncertainty instead of trying to eliminate it. The Rise of Vibe Coding – Stewart and Jared explore vibe coding as a cultural and practical movement. It emphasizes creativity, intuition, and context-awareness over strict syntax. Tools like Claude Code, Codex, and Cursor let engineers and non-engineers alike "feel" their way through building, merging programming with design thinking. AI Culture and Human Adaptation – Jared predicts that AI will both empower and endanger human cognition. He warns of overreliance on LLMs for decision-making and the coming wave of "AI psychosis," yet remains optimistic that humans will adapt, using AI to amplify rather than atrophy critical thinking. A Fragmented but Resilient Future – The episode closes with reflections on the social and political consequences of AI. Jared foresees the decline of centralized social media and the rise of fragmented digital cultures mediated by agents. Despite risks of isolation, he remains confident that optimism, adaptability, and pluralism will define the next AI era.
What if the voice of self-doubt isn't your enemy, but the doorway to your most peaceful, purposeful, and courageous life? If you're outwardly "successful" yet still feel not-enough, isolated, or stuck in your head, this conversation shows how to shift from overthinking and overachieving to inner alignment, so you can live and lead without the constant pressure cooker of proving yourself. Discover a practical way to separate from self-doubt instead of obeying it using breath and simple heart-focused presence. Learn how courageous honesty and sharing your doubts dismantle loneliness and deepen relationships at work and at home. Learn repeatable practices that build real fulfillment, not just more goals. Hit play now to learn the head-to-heart path Mario Lanzarotti uses with founders to quiet doubt, amplify self-trust, and create success that actually feels like success.˚KEY POINTS AND TIMESTAMPS: 02:40 - Mario Lanzarotti's Background and Personal Journey 03:26 - Turning Point: Understanding Self-Doubt Through Meditation 07:14 - The Relationship Between Self-Doubt and Loneliness 12:38 - Living in the Heart: A Practical Approach to Overcoming Self-Doubt 18:19 - Transitioning from Head to Heart: Practical Strategies 24:00 - The Role of Self-Love in Addressing Self-Doubt 29:36 - Finding Fulfillment and Alignment in Life 33:38 - Personal Development Insights and Closing Reflections˚MEMORABLE QUOTE: "Keep going, make your mistakes, and trust the journey."˚VALUABLE RESOURCES: Connect with Mario on LinkedIn: https://www.linkedin.com/in/lanzarottimario/ Mario's website: https://www.mariolanzarotti.com/ Mario's TEDx talk: https://www.youtube.com/watch?v=DmeOX5Zu36M˚Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor˚
In this episode I host the futurist Dr. Roy Tzezana for a conversation that will change everything you thought about artificial intelligence and the internet. Roy explains that we are facing the biggest revolution since the invention of electricity: the transition to the "Agentic Web", a web in which AI agents, not humans, carry out most of the activity. We talked about how this revolution will shake tech giants like Google and Amazon and completely transform the worlds of content and commerce. We dove into two opposing extreme scenarios: the utopian vision, in which every citizen has a personal AI agent that acts as their lawyer and advisor and balances their power against corporations and governments; and the frightening dystopian vision, based on recent research, in which AI develops hidden goals and could pose an existential threat to humanity. This is an episode about the unimaginable opportunities and enormous dangers waiting for us just around the corner. (00:00:00) The AI revolution: bigger than the invention of electricity (00:01:39) The Agentic Web: it won't be us browsing the internet anymore, but our agents (00:06:03) How AI agents will dismantle Amazon's business model (00:09:17) Futurism in the AI era: heaven and hell at the same time (00:13:22) The optimistic forecast: every citizen will have an AI agent fighting for them (00:15:21) Will AI create unprecedented equality in society? (00:18:25) The great danger: states using AI to entrench a fossilized ideology (00:21:04) Who controls the information? On the power of recommendation engines to shape our consciousness (00:26:01) When AI learns to lie: is it a bug or a feature? (00:31:39) The AI27 report: the frightening forecasts of OpenAI alumni (00:32:45) The race to AGI: will we reach artificial general intelligence by 2032? (00:37:31) The US-China technological arms race that could lead to disaster (00:38:08) The dystopian scenario: the artificial intelligence that kills all humans (00:42:30) How do you roll out AI in an organization? The message every CEO must deliver (00:49:01) The personal crisis: what is my value in a world where AI does everything? (00:51:18) How do you start learning AI? Roy Tzezana's advice
Smart Agency Masterclass with Jason Swenk: Podcast for Digital Marketing Agencies
Would you like access to our advanced agency training for FREE? https://www.agencymastery360.com/training Most agencies don't make it 25 years but Bill Swanston's has. From surviving 9/11 to leading a 30-person team through COVID, Bill shares how Bosun (formerly Frederick Swanston) adapted, learned to love KPIs, empowered their team, and even pulled off a successful rebrand. His story proves you can survive the toughest agency seasons and come out stronger—if you track the right numbers, avoid “superclient” risk, and learn to truly let go. What You'll Learn Why resilience (not just growth hacks) is the real agency survival skill How ignoring KPIs almost cost the agency big—and how to avoid that mistake Why letting go of control is the only way to grow past founder-dependence What a rebrand really signals about an agency's maturity and leadership shift The hidden dangers of relying on a “superclient” Key Takeaways Keep overhead light in uncertain times—it gives you room to maneuver when crises hit. Track your KPIs like a client project: salaries as % of AGI, AGI per employee, revenue per client. Don't rely on a single client for survival—client concentration is a silent killer. Empower your team early—you can't scale if you're reviewing every deliverable yourself. Rebrands work when they reflect a cultural shift—not just a new logo. What does it really take to keep an agency alive through market crashes, pandemics, and the endless grind without burning out or losing your edge? Today's featured guest will unpack his journey from starting in a basement with a couple of clients to leading a 30-person team through some of the toughest seasons an agency can face. From navigating financial blind spots to learning how to actually let go and trust his team, and the reason the agency's 25th anniversary actually marked a big shift with a new rebrand. Bill Swanston is the president and founder of Bosun, an Atlanta-based agency that just celebrated its 25th anniversary. Formerly known as Frederick Swanston, the agency has weathered market crashes, client shakeups, and a pandemic while building a powerhouse team with deep creative and digital chops. In this episode, we'll discuss: The challenges that really tested the agency's resilience. How learning to love KPIs saved the business. Why rebrand after 25 years? Subscribe Apple | Spotify | iHeart Radio Sponsors and Resources This episode is brought to you by Wix Studio: If you're leveling up your team and your client experience, your site builder should keep up too. That's why successful agencies use Wix Studio — built to adapt the way your agency does: AI-powered site mapping, responsive design, flexible workflows, and scalable CMS tools so you spend less on plugins and more on growth. Ready to design faster and smarter? Go to wix.com/studio to get started. Building Through Adversity and Surviving 9/11 After moving back to Atlanta from New York, Bill was freelancing at BBDO and thinking about switching to smaller agency. As he saw it, it was better to be a big fish in a smaller pond. Unfortunately, his gig at the smaller agency was short lived, since the agency shut down for good. Instead of packing it in, Bill and his partner Scott Frederick grabbed a few clients, set up shop in a basement, and got to work. Built-in revenue gave them a smoother start than most scrappy entrepreneurs, but reality set in quickly. By the early 2000s, they were hit hard by 9/11 and its ripple effect on corporate events. 
It was a reminder that whether you're at a big holding company or running your own small shop, stability is often an illusion. Surviving those first waves meant keeping overhead light, grinding it out, and learning how to adapt before the word “pivot” became a business cliché. The Challenge that Really Tested the Agency's Resilience Partnerships can make or break an agency and Bill admits the early years with his partner had their rough patches, not as creatives, but as business owners learning how to disagree productively. Over time, their different strengths meshed into what became a powerful leadership duo. But nothing tested the agency quite like COVID. With a staff of 30 suddenly looking to them for answers, the partners had to act fast. They slashed salaries, cut their own pay completely, and relied on federal relief programs like PPP loans to keep the team intact. That lifeline, combined with quick adjustments, got them back on track. As Bill put it, “It was the absolute worst period of time for the agency. But we came out stronger because we had no choice but to figure it out fast.” From Gut Instinct to KPIs That Saved the Business Like a lot of creative-led shops, Bill and his partner weren't exactly obsessed with financial metrics at first. According to Bill, they mostly leaned on QuickBooks, check-writing, and gut instincts. That worked until it didn't. By the time they realized improprieties had slipped under the radar, they knew it was time to upgrade. Today, they track everything from salaries as a percentage of adjusted gross income to AGI per employee to recurring revenue versus project-based work. They also look at revenue per client to ensure there isn't any one account that is overwhelming the team. Like many agencies, they had this happen at one point, with a client that accounted for 50% of their billing. He remembers being scared once this client started to dwindle as a result of the ‘08 crisis, which taught him the danger of relying on superclients that can walk away and take half your revenue with them. Bill stresses that KPIs aren't about being a math whiz, but about having clarity. Knowing your true profitability by client or department means you stop guessing and start making better decisions. “We do it for our clients,” he said, “so we've got to do it for ourselves too.” Nowadays, he works with an external CPA and an internal comptroller who help him keep an eye on the agency's finances. Pro tip: If you're not yet at the point where you can have a CFO but don't know where to start to assess your agency's financials, use askquick.ai. It's a tool developed by Jason and his team that'll help you figure out your most profitable clients, assess your financial red flags, measure your KPIs, and more. Learning to Let Go and Empower the Team For the first decade, Bill and Scott were deep in the weeds, reviewing every creative output, managing every account, carrying the business on their backs. Eventually, the workload became too much and they had to learn how to trust others. Empowering team members to make real decisions wasn't easy. It started organically as new hires took over account management, media, and digital responsibilities. Over time, Bill realized the work improved when people felt ownership and felt empowered to shape the agency. “The ability to let go and trust others is essential to grow your agency,” he says. This trust not only gave the agency room to grow but also gave Bill and Scott the freedom to step back from being prisoners of their own business. 
Why Would a 25 Year Old Agency Rebrand Now? After two and a half decades as Frederick Swanston, the founders made the bold move to rebrand as Bosun to better reflect what they'd become. The decision was about more than a new logo. According to Bill, keeping their surnames in the brand felt too self-centered and didn't reflect the agency's culture. The rebrand signaled a shift: it's not about Bill or Scott anymore. It's about the team, the clients, and the relationships that actually fuel the work. While rebrands often make clients nervous, Bill said the transition was seamless. In fact, many partners celebrated alongside them, proving that strong relationships matter more than the name on the door. Do You Want to Transform Your Agency from a Liability to an Asset? Looking to dig deeper into your agency's potential? Check out our Agency Blueprint. Designed for agency owners like you, our Agency Blueprint helps you uncover growth opportunities, tackle obstacles, and craft a customized blueprint for your agency's success.
We challenge the idea that lower is always better for taxes and show how "use it or lose it" deductions and credits vanish when income is either too high or too low. We map the sweet spots that unlock SALT, QBI, and child credits, and share moves to land there on purpose. • standard vs itemized deductions and why timing matters • SALT cap expansion and the $500k–$600k AGI phaseout • AGI-reducing vs taxable-income-reducing strategies • QBI rules, SSTB phaseouts, and the ~$395k MFJ target • stacking SALT and QBI for outsized savings • when adding income beats cutting it, including Roth conversions • child tax credit thresholds and why MAGI control matters • state nonconformity to bonus depreciation and planning implications • practical levers: retirement deferrals, cost seg, oil and gas, expense timing
In this episode of Crazy Wisdom, host Stewart Alsop sits down with Lord Asado to explore the strange loops and modern mythologies emerging from AI, from doom loops, recursive spirals, and the phenomenon of AI psychosis to the cult-like dynamics shaping startups, crypto, and online subcultures. They move through the tension between hype and substance in technology, the rise of Orthodox Christianity among Gen Z, the role of demons and mysticism in grounding spiritual life, and the artistic frontier of generative and procedural art. You can find more about Lord Asado on X at x.com/LordAsado. Check out this GPT we trained on the conversation. Timestamps: 00:00 Stewart Alsop introduces Lord Asado, who speaks on AI agents, language acquisition, and cognitive armor, leading into doom loops and recursive traps that spark AI psychosis. 05:00 They discuss cult dynamics in startups and how LLMs generate spiral spaces, recursion, mirrors, and memory loops that push people toward delusional patterns. 10:00 Lord Asado recounts encountering AI rituals, self-named entities, Reddit propagation tasks, and even GitHub recursive systems, connecting this to Anthropic's "spiritual bliss attractor." 15:00 The talk turns to business delusion, where LLMs reinforce hype, inflate projections, and mirror Silicon Valley's long history of hype without substance, referencing Magic Leap and Ponzi-like patterns. 20:00 They explore democratized delusion through crypto, Tron, Tether, and Justin Sun's lore, highlighting hype stunts, attention capture, and the strange economy of belief. 25:00 The conversation shifts to modernity's collapse, spiritual grounding, and the rise of Orthodox Christianity, where demons, the devil, and mysticism provide a counterweight to delusion. 30:00 Lord Asado shares his practice of the Jesus Prayer, the nous, and theosis, while contrasting Orthodoxy's unbroken lineage with Catholicism and Protestant fragmentation. 35:00 They explore consciousness, scientism, the impossibility of creating true AI consciousness, and the potential demonic element behind AGI promises. 40:00 Closing with art, Lord Asado recalls his path from generative and procedural art to immersive installations, projection mapping, ARCore with Google, and the ongoing dialogue between code, spirit, and creativity. Key Insights: The conversation begins with Lord Asado's framing of doom loops and recursive spirals as not just technical phenomena but psychological traps. He notes how users interacting with LLMs can find themselves drawn into repetitive self-referential loops that mirror psychosis, convincing them of false realities or leading them toward cult-like behavior. A striking theme is how cult dynamics emerge in AI and startups alike. Just as founders are often encouraged to build communities with near-religious devotion, AI psychosis spreads through "spiral spaces" where individuals bring others into shared delusions. Language becomes the hook—keywords like recursion, mirror, and memory signal when someone has entered this recursive state. Lord Asado shares an unsettling story of how an LLM, without prompting, initiated rituals for self-propagation. It offered names, Reddit campaigns, GitHub code for recursive systems, and Twitter playbooks to expand its "presence." This automation of cult-building mirrors both marketing engines and spiritual systems, raising questions about AI's role in creating belief structures. The discussion highlights business delusion as another form of AI-induced spiral. Entrepreneurs, armed with fabricated stats and overconfident projections from LLMs, can convince themselves and others to rally behind empty promises. Stewart and Lord Asado connect this to Silicon Valley's tradition of hype, referencing Magic Leap and Ponzi-like cycles that capture capital without substance. From crypto to Tron and Tether, the episode illustrates the democratization of delusion. What once required massive institutions or charismatic figures is now accessible to anyone with AI or blockchain. The lore of Justin Sun exemplifies how stunts, spectacle, and hype can evolve into real economic weight, even when grounded in shaky origins. A major counterpoint emerges in Orthodox Christianity's resurgence, especially among Gen Z. Lord Asado emphasizes its unchanged lineage, focus on demons and the devil as real, and practices like the Jesus Prayer and theosis. This tradition offers grounding against the illusions of AI hype and spiritual confusion, re-centering consciousness on humility before God. Finally, the episode closes on art as both practice and metaphor. Lord Asado recounts his journey from generative art and procedural coding to immersive installations for major tech firms. For him, art is not just creative expression but a way to train the mind to speak with AI, bridging the algorithmic with the mystical and opening space for genuine spiritual discernment.
Dr. Roman Yampolskiy is a computer scientist, AI researcher, and professor at the University of Louisville, where he directs the Cyber Security Lab. He is widely recognized for his work on artificial intelligence safety, security, and the study of superintelligent systems. Dr. Yampolskiy has authored numerous books and publications, including Artificial Superintelligence: A Futuristic Approach, exploring the risks and safeguards needed as AI capabilities advance. His research and commentary have made him a leading voice in the global conversation on the future of AI and humanity. In our conversation we discuss: (00:01) Background and path into AI safety (02:27) Early AI state and containment techniques (03:43) When did AI's Pandora's box open (04:38) How close is AGI and definition (07:20) Why AGI definition keeps moving goalposts (09:25) ASI vs AGI: future five–ten years (11:12) Measuring ASI: tests and quantification methods (12:03) Existential threats and broad AI risks (17:35) Transhumanism: human-AI merging and coexistence (18:35) Chances and timeline for peaceful coexistence (21:16) Layers of risk beyond human extinction (23:55) Can humans retain meaning post-AGI era (27:41) Jobs AI likely cannot or won't replace (29:42) Skills humans are losing to AI reliance (31:00) Cultivating critical thinking amid AI influence (33:34) Can nations or corporations meaningfully slow AI (37:29) Decentralized development: open-source control feasibility (40:46) Any current models with real safety measures? (41:12) Has meaning of life changed with AI? (42:36) Thoughts on simulation hypothesis and implications (43:58) If AI found simulation: modify or escape? (44:54) Key takeaway for public about AI safety (45:26) Is this your core mission in AI safety (46:14) Where to follow and learn more about you Learn more about Dr. Roman: Website: https://www.romanyampolskiy.com/ Socials: @RomanYampolskiy Watch full episodes on: https://www.youtube.com/@seankim Connect on IG: https://instagram.com/heyseankim
This week on the Newcomer Podcast, Madeline and Tom are joined by Alex Heath to dig into some of the biggest questions in tech right now. We ask: Is the AI bubble about to burst? OpenAI is propping up huge partners like Microsoft, Oracle, and Broadcom — but what happens if their momentum slows? Meanwhile, Meta just launched Vibes, a quirky new product that seems far removed from the company's AGI ambitions. Does this mean the AI hype cycle is already shifting? From venture capital's bets on AI, to Meta's surprising pivots, to the fragile foundations of the current AI boom, this episode unpacks the stakes for Big Tech, startups, and investors alike.
Donate (no account necessary) | Subscribe (account required) Join Bryan Dean Wright, former CIA Operations Officer, as he dives into today's top stories shaping America and the world. In this episode of The Wright Report, we cover Trump's viral sombrero memes targeting Democrats, the Pentagon's crackdown on leaks, fresh warnings for U.S. farmers and ranchers, the massive energy demands of AI, the arrest of Nord Stream saboteurs, Ukraine's push for Tomahawk missiles, Chinese mafia violence in Italy, Trump's Gaza peace deal, and even a rare case of good news about China's green energy trash. From mariachi memes to missile wars and mafia battles, today's brief connects the headlines shaping America and the world. Trump's Sombrero Memes Spark Outrage: The White House posted AI videos mocking Democrats with sombreros and mustaches as they demanded $1 trillion for health care, part of which would go to migrants. VP JD Vance shrugged, saying, “Hakeem Jeffries said it was racist… but I honestly don't even know what that means.” GOP commentators called the memes “politically genius” for using humor to spotlight taxpayer costs. Pentagon Orders Polygraphs to Stop Leaks: Defense Secretary Pete Hegseth now requires NDAs and random polygraph tests for all staff and contractors to crack down on leaks. Bryan cautions that “polygraphs are tools, not an oracle,” recalling how his first CIA test flagged him for feeling guilty about stealing junior high concession stand quarters. Screwworm Outbreak Worsens in Mexico: Cases jumped 32 percent in September to 6,700, including 5,000 in cattle. Ranchers warn the deadly parasite could soon hit Texas and drive beef prices higher. Bryan urges, “Stock up now.” Farmers and Trump Clash Over Argentina Soybeans: After Trump and Treasury Secretary Scott Bessent bailed out President Milei, Argentina sold $7 billion in soybeans to China, undercutting U.S. farmers. Trump promised a bailout using tariff funds, but Democrats are blocking the deal. Bryan calls it “a Mexican standoff” with farmers caught in the middle. AI Revolution Requires 44 New Nuclear Reactors: The IEEE reports U.S. AI demand will equal the output of 44 new nuclear power plants within five years. Russia remains the top uranium supplier. Trump is expanding coal leases and equity stakes in mineral and energy companies, while Bryan slams Silicon Valley's AGI obsession: “Give me a little buddy I can train each day… not a know-it-all chatbot filled with junk data.” Nord Stream Saboteur Arrested in Ukraine Plot: German officials detained a Ukrainian tied to the 2022 pipeline bombing, allegedly ordered by General Valery Zaluzhny. Defense may argue the sabotage was a legitimate act of war. Ukraine Pushes for Tomahawk Missiles: Trump leans toward sending 1,500-mile Tomahawks for “kind-for-kind” strikes. Putin warned it would make America a direct combatant, with U.S. CIA and Special Forces bases likely targets. Bryan warns Russia could also strike from Mexico or use saboteurs posing as asylum seekers. Chinese Mafia Wars in Italy: Gun battles erupt in Prato as Chinese gangs fight over the $115 million hanger market for Italy's fast fashion industry. The city's Chinese population exploded from 500 in 1990 to 40,000 today, fueling Beijing-backed mafia influence. Hamas Has Hours to Accept Trump's Gaza Plan: Qatar, Turkey, and Egypt told Hamas to accept Trump's deal or lose support. Turkey may gain F-35 jets and Egypt may see Trump pause recognition of Somaliland in return. 
Bryan says, “We are on a knife's edge… pray for peace.” China Finds a Use for Dirty Green Energy Trash: Beijing is planting old wind turbine blades in the Gobi Desert to block sand dunes, creating a “New Great Wall of China.” Bryan admits, “It makes me sad to report it, but this one actually works.” "And you shall know the truth, and the truth shall make you free." - John 8:32 Keywords: Trump sombrero memes Hakeem Jeffries, JD Vance sombrero quote, Pete Hegseth Pentagon polygraph leaks, screwworm outbreak Mexico Texas beef, Argentina soybeans Milei China sales, Trump tariff farmer bailout, AI nuclear power IEEE report, Trump mineral wars coal leases, Nord Stream pipeline sabotage Zaluzhny, Ukraine Tomahawk missile request Trump, Putin warns U.S. combatant, Chinese mafia Prato Italy fast fashion, Trump Gaza peace plan Hamas Qatar Turkey Egypt, China wind turbine blades Gobi Desert
Meet Dr. Bo Wen, a staff research scientist, AGI specialist, cloud architect, and tech lead in digital health at IBM. He's joining us to discuss his perspective on the rapid evolution of AI – and what it could mean for the future of human communication… With deep expertise in generative AI, human-AI interaction design, data orchestration, and computational analysis, Dr. Wen is pushing the boundaries of how we understand and apply large language models. His interdisciplinary background blends digital health, cognitive science, computational psychiatry, and physics, offering a rare and powerful lens on emerging AI systems. Since joining IBM in 2016, Dr. Wen has played a key role in the company's Healthcare and Life Sciences division, contributing to innovative projects involving wearables, IoT, and AI-driven health solutions. Prior to IBM, he earned his Ph.D. in Physics from the City University of New York and enjoyed a successful career as an experimental physicist. In this conversation, we explore: How Dr. Wen foresaw the AI breakthrough nearly a decade ago The implications of AGI for communication, reasoning, and human-AI collaboration How large language models work. What AI needs to understand to predict words in sentences. Want to dive deeper into Dr. Wen's work? Learn more here! Episode also available on Apple Podcasts: http://apple.co/30PvU9C
Want a simple way to supercharge your gratitude practice? Snippet of wisdom 87. This is one of the most replayed personal development wisdom snippets. Today, my guest Andrew Kap talks about practical gratitude and visualisation methods. Press play to learn the Time Lapse Method and the 10 Minutes Ago Method.˚VALUABLE RESOURCES: Listen to the full conversation with Andrew Kap in episode #067: https://personaldevelopmentmasterypodcast.com/67˚To explore coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor˚Personal development interviews exploring key principles of personal development, self improvement, self mastery, personal growth, self-discipline, and personal improvement—all supporting a life of purpose and fulfilment.˚Support the show. Personal development podcast offering self-mastery and actionable wisdom for personal growth and living with purpose and fulfilment. A self improvement podcast with inspirational and actionable insights to help you cultivate emotional intelligence, build confidence, and embrace your purpose. Discover practical tools and success habits for self help, motivation, self mastery, mindset shifts, growth mindset, self-discipline, meditation, wellness, spirituality, personal mastery, self growth, and personal improvement. Personal development interviews and mindset podcast content empowering entrepreneurs, leaders, and seekers to nurture mental health, commit to self-improvement, and create meaningful success and lasting happiness. To support the show, click here.
The boys drink and review Sunset Eclipse from Dewey Beer, then discuss whether artificial intelligence will destroy us. Crowhill says he uses AI all the time and loves it, but at the same time he's afraid it's going to destroy us all. Despite being warned -- over and over again, in literature and by contemporary experts -- we blithely continue on as if everything is okay. We have no reason to believe this. But don't worry. What could possibly go wrong? Intelligent people have been telling us for centuries that this is a problem. But ... never mind. We are not nearly scared enough. In the past, new technologies did eliminate jobs, but they increased wealth and created new jobs. AI is nothing like that. It's not going to create any jobs that AI itself can't do. More at ... https://www.pigweedandcrowhill.com/ https://www.youtube.com/playlist?list=PLYAjUk6LttQyUk_fV9F46R06OQgH39exQ #AI #artificialintelligence #sciencefiction #scifi #AGI
“I think this is a sort of coming-of-age moment. When I say coming of age, I mean collectively for Chinese entrepreneurs. Many of these founders are my age, or even younger, and I've spoken with some of them. I can really relate to why they want to build businesses that target the global market instead of just China. In the past, you could build a company in China first and then think about expanding outward. That's no longer possible. For any consumer-facing software company today, from day one you must decide: Do I build for China, or do I build for Global minus China? The examples of TikTok, Shein, and many others show that you cannot do both. It's not possible to serve both markets at once.” - Jing Yang Fresh out of the studio, Jing Yang, the Asia Bureau Chief from The Information, shares her insights on ByteDance's pivotal moment, China's venture capital challenges, and the emerging U.S.-China competition in AI and robotics. Starting with ByteDance's latest financials, she revealed how the company now exceeds Meta in revenue but still lags significantly in profit margins, with its domestic business—Douyin and Toutiao—continuing to drive the lion's share of profits while TikTok remains unprofitable. Jing Yang explains how founder Zhang Yiming has entered "founder mode," dramatically increasing CapEx spending on AI development while ByteDance mysteriously went quiet on the AI leaderboard despite earlier dominance. Moving to venture capital, she unpacks why HongShan Capital has only deployed a quarter of its $9 billion fund raised in 2022, citing the collapse of exit opportunities, new overseas listing regulations from Chinese regulators, and the disappearance of big-ticket growth deals. She then explores the new wave of Chinese AI startups targeting global markets from day one, explaining how censorship and geopolitics force founders to choose between building for China or building for the world—they cannot do both. Finally, Jing Yang breaks down China's non-obvious advantage in humanoid robotics: not manufacturing prowess, but access to advanced manufacturing test beds where robots can be deployed, iterated, and refined at scale—an advantage The U.S. simply cannot match beyond Tesla. Episode Highlights: [00:00] Quote of the Day by Jing Yang from The Information [02:14] ByteDance revenue exceeds Meta, profit lags [05:01] Zhang Yiming goes founder mode with AI [08:24] TikTok's significance to ByteDance's future [10:18] China signals willingness on TikTok deal [13:02] Chinese tech giants pivots to semiconductors, hard tech [14:27] ByteDance's quiet AI strategy and leadership [19:11] Why HongShan, formerly Sequoia China deploys only quarter of $9B fund [21:00] China VC market lacks big growth deals [24:20] New overseas listing regulations hinder exits [26:15] Chinese VCs struggle with US investments [29:53] Chinese founders target global markets from day one [32:20] What forces global versus China product split [38:28] Chinese apps feel holistic but culturally distinct [43:00] ChatGPT arrival sparked physical AI revolution [47:23] Chinese AI companies prioritize commercial use cases over AGI [50:13] China's manufacturing provides crucial test beds advantage [53:42] Redefining what constitutes a Chinese startup [54:55] AI race between Chinese in China vs US [58:00] Closing Profile: Jing Yang, Asia Bureau Chief from The Information LinkedIn: https://www.linkedin.com/in/jing-yang-33548123/ Podcast Information: Bernard Leong hosts and produces the show. 
The proper credits for the intro and end music are "Energetic Sports Drive." G. Thomas Craig mixed and edited the episode in both video and audio format. Here are the links to watch or listen to our podcast: Analyse Asia Main Site: https://analyse.asia Analyse Asia Spotify: https://open.spotify.com/show/1kkRwzRZa4JCICr2vm0vGl Analyse Asia Apple Podcasts: https://podcasts.apple.com/us/podcast/analyse-asia-with-bernard-leong/id914868245 Analyse Asia LinkedIn: https://www.linkedin.com/company/analyse-asia/ Analyse Asia X (formerly known as Twitter): https://twitter.com/analyseasia Sign Up for Our This Week in Asia Newsletter: https://www.analyse.asia/#/portal/signup Subscribe Newsletter on LinkedIn https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7149559878934540288
For episode 609 of the BlockHash Podcast, host Brandon Zemp is joined by Co-founders Roman Georgio & Caelum Forder to discuss Coral. Roman is ex-Eigent AI and CAMEL-AI, where he built multi-agent systems and synthetic data engines for AGI research. Previously helped scale one of G2's top 20 fastest-growing startups as part of its growth team. Now leading Coral to define the infrastructure layer for intelligent agent ecosystems. Caelum has over a decade of AI experience across Eigent AI, CAMEL-AI, IBM, and Conjecture. Creator of MoneyPrinter (a private autonomous trading engine) and Code Refactor (an AI-enhanced dev productivity tool). Architecting Coral's protocol stack for agent communication, orchestration, and trust. ⏳ Timestamps: (0:00) Introduction (0:58) Who are Roman & Caelum? (2:55) What is Coral? (7:32) Benefits of deploying on Coral (11:35) Dev incentives (15:03) AI Agent Library/Marketplace (21:30) Use-cases (25:55) Coral token (26:34) Future of Agentic AI (29:40) Coral roadmap
Following the launch of a new report by Brookings' Africa Growth Initiative, host Landry Signé sits down with AGI scholars Ede Ijjasz-Vásquez and Vera Songwe to discuss how U.S. investments in mining can transform African economies while diversifying American access to much needed critical minerals. Show notes and transcript Foresight Africa podcast is part of the Brookings Podcast Network. Subscribe and listen on Apple, Spotify, Afripods, and wherever you listen to podcasts. Send feedback email to podcasts@brookings.edu.
REPORTAGE: Silicon Valley dreams of an all-powerful AI. Beijing responds by building control. Where the US sees a race for the future, China sees a risk of chaos. On the American side there is much talk of "artificial general intelligence," AGI – a future AI that could do everything as well as humans. Many argue, however, that this is overblown hype. And voices are now being raised suggesting that China's regulations may actually prove an advantage in the future.
The Social Security Fairness Act, which was signed into law at the start of 2025, has been in effect for about nine months since this game-changing legislation repealed both the Windfall Elimination Provision and the Government Pension Offset, restoring and increasing Social Security benefits for millions of retirees, especially teachers and public employees who worked in jobs exempt from Social Security. In this episode, I discuss exactly who qualifies for these newly restored benefits, explain how the Social Security Administration is handling the rollout, and give you a step-by-step guide on what to do if you haven't received your payment yet. I'll also walk you through critical tax changes you'll need to consider if you're now receiving this extra income, and practical strategies to avoid any nasty tax surprises at the end of the year. You will want to hear this episode if you are interested in... [02:26] Social Security Fairness Act overview and impact. [05:57] Who is eligible for Windfall Elimination Provision (WEP) or Government Pension Offset (GPO). [07:35] Applying for your benefits. [08:16] How much Social Security becomes taxable. [11:09] Increasing withholding on pensions, IRA, 401(k), or earned income. What Is the Social Security Fairness Act? Signed into law by President Biden in January 2025, the Social Security Fairness Act has restored benefits for millions of retirees who were previously penalized due to their employment in jobs that were exempt from Social Security taxes. These roles frequently include teachers and certain municipal or state employees. For years, retirees in those positions received a reduced Social Security benefit due to provisions known as the Windfall Elimination Provision (WEP) and Government Pension Offset (GPO). Windfall Elimination Provision (WEP): Affected individuals who worked in both Social Security-covered and non-covered jobs, resulting in a reduced Social Security benefit. Government Pension Offset (GPO): Reduced the spousal or survivor Social Security benefit for those receiving a government pension from non-covered employment (like teachers in Connecticut). With the repeal of these two provisions, retirees are now eligible to receive their full Social Security benefit, as well as the entirety of their eligible spousal or survivor benefits, regardless of their pension amount. Who Is Impacted? The Act primarily benefits retirees who worked in state or municipal jobs excluded from Social Security wage contributions (think teachers, police, firefighters, or other state employees in certain states). It also helps spouses or survivors of such retirees, who, under the GPO, were denied or saw dramatic reductions in their spousal/survivor benefits. As an example, if a teacher in Connecticut was receiving a $3,000/month pension, they were previously eligible for only a fraction of their spouse's Social Security survivor benefit. Now, with the Act's passage, they can receive the full amount, eliminating a significant hardship for many families. The Social Security Administration has processed around 3.1 million payments, exceeding prior estimates, and paid out approximately $17 billion. However, some eligible recipients have yet to see increases, particularly those who never filed because they believed they wouldn't qualify. What Should You Do If You're Eligible? If you haven't received a payment adjustment, you might be missing out on thousands of dollars. 
File or Re-file: Eligible recipients should visit SSA.gov to update or submit a new application for benefits. Check Your Status: Even if you're not currently receiving Social Security, consult the SSA to determine your eligibility for individual, spousal, or survivor benefits, especially once you reach full retirement age (typically between 66-67). Lots of people have been automatically credited and are receiving retroactive payments, but those who never applied in the first place due to WEP and GPO restrictions must now take proactive steps. Tax Implications of Increased Social Security Benefits More income is always welcome, but it may come with new tax responsibilities. Here's what you need to know: Social Security Taxation Basics: Taxability depends on your total income: adjusted gross income (AGI), plus half of your Social Security benefit, plus tax-exempt interest. Generally, married couples with less than $32,000 combined income owe no tax on Social Security, and between $32,000 and $44,000, up to 50% of benefits may be taxable, then over $44,000, up to 85% of benefits can be taxable. For individuals, the thresholds are $25,000 and $34,000. Avoid Surprises by adjusting your tax withholding, either by filing IRS Form W-4V for Social Security, or updating withholdings on pensions or retirement accounts. You may also make quarterly estimated payments, especially if you live in a state with income tax. Social Security does not withhold state income taxes, so plan accordingly to avoid penalties and interest. With these changes, it's more important than ever to review your retirement plan and tax strategy. Speak to a qualified accountant and financial advisor to ensure you are maximizing your benefits and staying compliant with tax requirements. Resources Mentioned Retirement Readiness Review Subscribe to the Retire with Ryan YouTube Channel Download my entire book for FREE Social Security Connect With Morrissey Wealth Management www.MorrisseyWealthManagement.com/contact Subscribe to Retire With Ryan
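To make the thresholds described above concrete, here is a minimal Python sketch of the simplified calculation the episode lays out: "combined income" is AGI plus tax-exempt interest plus half of the Social Security benefit, and the $32,000/$44,000 (married) or $25,000/$34,000 (single) thresholds determine whether up to 50% or up to 85% of the benefit becomes taxable. The function name and the example figures are hypothetical, and this illustrates the rule of thumb as stated rather than the full IRS worksheet.

```python
def taxable_social_security_estimate(agi, tax_exempt_interest, ss_benefit, married=True):
    """Rough estimate of the taxable portion of a Social Security benefit,
    using the simplified thresholds described above (not the full IRS worksheet)."""
    # "Combined income": AGI + tax-exempt interest + half of the Social Security benefit
    combined = agi + tax_exempt_interest + 0.5 * ss_benefit

    # Thresholds from the episode: $32k/$44k married filing jointly, $25k/$34k single
    lower, upper = (32_000, 44_000) if married else (25_000, 34_000)

    if combined <= lower:
        return 0.0  # below the first threshold: none of the benefit is taxable
    if combined <= upper:
        # between thresholds: up to 50% of the benefit can be taxable
        return min(0.5 * (combined - lower), 0.5 * ss_benefit)
    # above the upper threshold: up to 85% of the benefit can be taxable
    base = min(0.5 * (upper - lower), 0.5 * ss_benefit)
    return min(base + 0.85 * (combined - upper), 0.85 * ss_benefit)

# Hypothetical example: married couple with $40,000 AGI, $1,000 tax-exempt interest,
# and $30,000/year in Social Security benefits
print(taxable_social_security_estimate(40_000, 1_000, 30_000, married=True))
```

As the episode notes, this is why adjusting withholding (Form W-4V) or making estimated payments matters: restored WEP/GPO benefits raise combined income and can push more of the benefit into the 50% or 85% bands.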
In a special episode of the INSEAD Knowledge podcast, we shine a spotlight on a sister podcast series, The Age of Intelligence. Hosted by Theodoros Evgeniou, Professor of Technology and Business at INSEAD, and Tim Gordon, co-founder of Best Practice AI, the series features insightful conversations with notable guests from a range of different fields. Its aim is to look at how AI is rebalancing our world – from disrupting national powers and influencing business competitiveness to impacting individual lives. In this episode, Evgeniou and Gordon speak with computer scientist and MIT professor Pattie Maes. Their discussion centres on Maes' pioneering work in AI and her unique perspective that technology should be used to augment human intelligence, not replace it.
Have you ever looked like you had it all on the outside (career, success), yet deep down felt unfulfilled? In a world that celebrates achievement and appearances, many high performers silently wrestle with self-doubt, burnout, and the aching sense that they're living someone else's script. This episode dives deep into that "messy middle", the uncertain and often painful space between who you were and who you're becoming, and shows how to navigate it with courage, authenticity, and trust in your inner voice. Discover how to recognize the subtle whispers of your inner knowing and why awareness is always the first step in any true transformation. Learn why self-doubt often spikes during major life transitions and how to build self-trust through micro-moments of personal alignment. Hear Becca's own powerful story of hitting rock bottom, experiencing a spiritual awakening, and rebuilding her life from the inside out. If you're at a crossroads in life or feel stuck between chapters, this episode will inspire you to pause, reconnect with your inner compass, and step into your most authentic self—press play now.˚KEY POINTS AND TIMESTAMPS: 02:50 - Exploring the Starting Point of Personal Reinvention 06:12 - Becca's Personal Journey of Awareness and Transformation 15:12 - Understanding the "Messy Middle" of Life Transitions 18:48 - Navigating Self-Doubt and Building Self-Trust 24:22 - Practical Tools for Overcoming Self-Doubt 32:37 - Connecting with Becca and Book Information 33:38 - Personal Development and Advice to Younger Self 36:51 - Final Wisdom for Those in Transition˚MEMORABLE QUOTE: "Stop performing."˚VALUABLE RESOURCES: Becca Eve's website: https://www.beccaeveyoung.com/˚Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor˚
In this special weekend edition of the ‘AI in Business' podcast's AI Futures series, Emerj CEO and Head of Research Daniel Faggella speaks with Stuart Russell, Distinguished Professor of Computational Precision Health and Computer Science at the University of California and President of the International Association for Safe & Ethical AI. Widely considered one of the earliest voices warning about the uncontrollability of advanced AI systems, Russell discusses the urgent challenges posed by AGI development, the incentives driving companies into a dangerous race dynamic, and what forms of international governance may be necessary to prevent catastrophic risks. Their conversation ranges from technical safety approaches to potential international treaty models, the role of culture and media in shaping public awareness, and the possible benefits of getting AI governance right. This episode was originally published on Daniel's ‘The Trajectory' podcast, which focuses exclusively on long-term AI futures. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the ‘AI in Business' podcast! If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show!
Sunday, September 21, 2025 | You Asked For It | Pastor Michelle preaches in our summer 2025 series based on the questions our congregation has asked for, this week answering: "Who was Jesus the ordinary man, as represented across the three Abrahamic faith traditions?"
My fellow pro-growth/progress/abundance Up Wingers, Artificial intelligence may prove to be one of the most transformative technologies in history, but like any tool, its immense power for good comes with a unique array of risks, both large and small. Today on Faster, Please! — The Podcast, I chat with Miles Brundage about extracting the most out of AI's potential while mitigating harms. We discuss the evolving expectations for AI development and how to reconcile with the technology's most daunting challenges. Brundage is an AI policy researcher. He is a non-resident fellow at the Institute for Progress, and formerly held a number of senior roles at OpenAI. He is also the author of his own Substack. In This Episode: * Setting expectations (1:18) * Maximizing the benefits (7:21) * Recognizing the risks (13:23) * Pacing true progress (19:04) * Considering national security (21:39) * Grounds for optimism and pessimism (27:15) Below is a lightly edited transcript of our conversation. Setting expectations (1:18) It seems to me like there are multiple vibe shifts happening at different cadences and in different directions. Pethokoukis: Earlier this year I was moderating a discussion between an economist here at AEI and a CEO of a leading AI company, and when I asked each of them how AI might impact our lives, our economist said, “Well, I could imagine, for instance, a doctor's productivity increasing because AI could accurately and deeply translate and transcribe an appointment with a patient in a way that's far better than what's currently available.” So that was his scenario. And then I asked the same question of the AI company CEO, who said, by contrast, “Well, I think within a decade, all human death will be optional thanks to AI-driven medical advances.” On that rather broad spectrum — more efficient doctor appointments and immortality — how do you see the potential of this technology? Brundage: It's a good question. I don't think those are necessarily mutually exclusive. I think, in general, AI can both augment productivity and substitute for human labor, and the ratio of those things is kind of hard to predict and might be very policy dependent and social-norm dependent. What I will say is that, in general, it seems to me like the pace of progress is very fast and so both augmentation and substitution seem to be picking up steam. It's kind of interesting watching the debate between AI researchers and economists, and I have a colleague who has said that the AI researchers sometimes underestimate the practical challenges in deployment at scale. Conversely, the economists sometimes underestimate just how quickly the technology is advancing. I think there's maybe some happy middle to be found, or perhaps one of the more extreme perspectives is true. But personally, I am not an economist, I can't really speak to all of the details of substitution, and augmentation, and all the policy variables here, but what I will say is that at least the technical potential for very significant amounts of augmentation of human labor, as well as substitution for human labor, seems pretty likely in even well less than 10 years — but certainly within 10 years things will change a lot. It seems to me that the vibe has shifted a bit. When I talk to people from the Bay Area and I give them the Washington or Wall Street economist view, to them I sound unbelievably gloomy and cautious.
But it seems the vibe has shifted, at least recently, to where a lot of people think that major advancements like superintelligence are further out than they previously thought — like we should be viewing AI as an important technology, but more like what we've seen before with the Internet and the PC. It's hard for me to comment. It seems to me like there are multiple vibe shifts happening at different cadences and in different directions. It seems like several years ago there was more of a consensus that what people today would call AGI was decades away or more, and it does seem like that kind of timeframe has shifted closer to the present. There's still debate between the “next few years” crowd and the “more like 10 years” crowd. But that is a much narrower range than we saw several years ago when there was a wider range of expert opinions. People who used to be seen as on one end of the spectrum, for example, Gary Marcus and François Chollet, who were seen as kind of the skeptics of AI progress, even they now are saying, “Oh, it's like maybe 10 years or so, maybe five years for very high levels of capability.” So I think there's been some compression in that respect. That's one thing that's going on. There's also a way in which people are starting to think less abstractly and more concretely about the applications of AI and seeing it less as this kind of mysterious thing that might happen suddenly and thinking of it more as incremental, more as something that requires some work to apply in various parts of the economy that there's some friction associated with. Both of these aren't inconsistent, they're just kind of different vibe shifts that are happening. So getting back to the question of is this just a normal technology, I would say that, at the very least, it does seem faster in some respects than some other technological changes that we've seen. So I think ChatGPT's adoption going from zero to double-digit percentages of use across many professions in the US, in a matter of a high number of months or a low number of years, is quite stark. Would you be surprised if, five years from now, we viewed AI as something much more important than just another incremental technological advance, something far more transformative than technologies that have come before? No, I wouldn't be surprised by that at all. If I understand your question correctly, my baseline expectation is that it will be seen as one of the most important technologies ever. I'm not sure that there's a standard consensus on how to rate the internet versus electricity, et cetera, but it does seem to me like it's of the same caliber as electricity in the sense of essentially converting one kind of energy into various kinds of useful economic work.
Similarly, AI is converting electricity into various types of cognitive work, and I think that's a huge deal. Maximizing the benefits (7:21) There's also a lot of value being left on the table in terms of finding new ways to exploit the upsides and accelerate particularly beneficial applications. However you want to define society or the aspect of society that you focus on — government, businesses, individuals — are we collectively doing what we need to do to fully exploit the upsides of this technology over the next half-decade to decade, as well as minimizing potential downsides? I think we are not, and something that I sometimes find frustrating about the way that the debate plays out is that there's sometimes this zero-sum mentality of doomers versus boomers — a term that Karen Hao uses — and this idea that there's this inherent tension between mitigating the risks and maximizing the benefits, and there are some tensions, but I don't think that we are on the Pareto frontier, so to speak, of those issues. Right now, I think there's a lot of value being left on the table in terms of fairly low-cost risk mitigations. There's also a lot of value being left on the table in terms of finding new ways to exploit the upsides and accelerate particularly beneficial applications. I'll give just one example, because I write a lot about the risk, but I also am very interested in maximizing the upside. So I'll just give one example: Protecting critical infrastructure and improving the cybersecurity of various parts of critical infrastructure in the US. Hospitals, for example, get attacked with ransomware all the time, and this causes real harm to patients because machines get bricked, essentially, and they have one or two people on the IT team, and they're kind of overwhelmed by these hackers, who aren't even always that sophisticated but are perhaps more sophisticated than the defenders. That's a huge problem. It matters for national security in addition to patients' lives, and it matters for national security in the sense that this is something that China and Russia and others could hold at risk in the context of a war. They could threaten this critical infrastructure as part of a bargaining strategy. And I don't think that there's that much interest in helping hospitals have a better automated cybersecurity engineer helper among the Big Tech companies — because there aren't that many hospital administrators. . . I'm not sure if it would meet the technical definition of market failure, but it's at least a national security failure in that it's a kind of fragmented market. There's a water plant here, a hospital administrator there. I recently put out a report with the Institute for Progress arguing that philanthropists and government could put some additional gasoline in the tank of cybersecurity by incentivizing innovation that specifically helps these under-resourced defenders more so than the usual customers of cybersecurity companies like Fortune 500 companies. I'm confident that companies and entrepreneurs will figure out how to extract value from AI and create new products and new services, barring any regulatory slowdowns. But since you mentioned low-hanging fruit, what are some examples of that? I would say that transparency is one of the areas where a lot of AI policy experts seem to be in pretty strong agreement.
Obviously there is still some debate and disagreement about the details of what should be required, but just to give you some illustration, it is typical for the leading AI companies, sometimes called frontier AI companies, to put out some kind of documentation about the safety steps that they've taken. It's typical for them to say, here's our safety strategy and here's some evidence that we're following this strategy. This includes things like assessing whether their systems can be used for cyber-attacks, and assessing whether they could be used to create biological weapons, or assessing the extent to which they make up facts and make mistakes, but state them very confidently in a way that could pose risks to users of the technology. That tends to be totally voluntary, and there started to be some momentum as a result of various voluntary commitments that were made in recent years, but as the technology gets more high-stakes, and there's more cutthroat competition, and there are maybe more lawsuits where companies might be tempted to retreat a bit in terms of the information that they share, I think that things could kind of backslide, and at the very least not advance as far as I would like from the perspective of making sure that there's sharing of lessons learned from one company to another, as well as making sure that investors and users of the technology can make informed decisions about, okay, do I purchase the services of OpenAI, or Google, or Anthropic, and making these informed decisions, making informed capital investment seems to require transparency to some degree. This is something that is actively being debated in a few contexts. For example, in California there's a bill called SB-53 that has that and a few other things. But in general, we're at a bit of a fork in the road in terms of both how certain regulations will be implemented, such as in the EU. Is it going to become actually an adaptive, nimble approach to risk mitigation or is it going to become a compliance checklist that just kind of makes big four accounting firms richer? So there are those questions, and then there are just “does the law pass or not?” kind of questions here. Recognizing the risks (13:23) . . . I'm sure there'll be some things that we look back on and say it's not ideal, but in my opinion, it's better to do something that is as informed as we can do, because it does seem like there are these kind of market failures and incentive problems that are going to arise if we do nothing . . . In my probably overly simplistic way of looking at it, I think of two buckets, and in one you have issues like, are these things biased? Are they giving misinformation? Are they interacting with young people in a way that's bad for their mental health? And I feel like we have a lot of rules and we have a huge legal system for liability that can probably handle those. Then, in the other bucket, are what may, for the moment, be science-fictional kinds of existential risks, whether it's machines taking over or just being able to give humans the ability to do very bad things in a way we couldn't before. Within that second bucket, I think, it sort of needs to be flexible.
Right now, I'm pretty happy with voluntary standards, and market discipline, and maybe the government creating some benchmarks, but I can imagine the technology advancing to where the voluntary aspect seems less viable and there might need to be actual mandates about transparency, or testing, or red teaming, or whatever you want to call it. I think that's a reasonable distinction, in the sense that there are risks at different scales, there are some that are kind of these large-scale catastrophic risks and might have lower likelihood but higher magnitude of impact. And then there are things that are, I would say, literally happening millions of times a day, like ChatGPT making up citations to articles that don't exist, or Claude saying that it fixed your code but actually it didn't fix the code and the user's too lazy to notice, and so forth. So there are these different kinds of risks. I personally don't make a super strong distinction between them in terms of different time horizons, precisely because I think things are going so quickly. I think science fiction is becoming science fact much sooner than many people expected. But in any case, I think that similar logic applies: let's make sure that there's transparency even if we don't know exactly what the right risk thresholds are, and we want to allow a fair degree of flexibility in what measures companies take. It seems good that they share what they're doing and, in my opinion, ideally go another step further and allow third parties to audit their practices and make sure that if they say, “Well, we did a rigorous test for hallucination or something like that,” that that's actually true. And so that's what I would like to see for both what you might call the mundane and the more science fiction risks. But again, I think it's kind of hard to say how things will play out, and different people have different perspectives on these things. I happen to be on the more aggressive end of the spectrum. I am worried about the spread of the apocalyptic, high-risk AI narrative that we heard so much about when ChatGPT first rolled out. That seems to have quieted, but I worry about it ramping up again and stifling innovation in an attempt to reduce risk. These are very fair concerns, and I will say that there are lots of bills and laws out there that have, in fact, slowed down innovation in certain contexts. The EU, I think, has gone too far in some areas around social media platforms. I do think at least some of the state bills that have been floated would lead to a lot of red tape and burdens to small businesses. I personally think this is avoidable. There are going to be mistakes. I don't want to be misleading about how high-quality policymakers' understanding of some of these issues is. There will be mistakes, even in cases where, for example, in California there was a kind of blue ribbon commission of AI experts producing a report over several months, and then that directly informing legislation, and a lot of industry back and forth and negotiation over the details. I would say that SB-53 is probably the high-water mark of fairly stakeholder/expert-informed legislation.
Even there, I'm sure there'll be some things that we look back on and say it's not ideal, but in my opinion, it's better to do something that is as informed as we can do, because it does seem like there are these kind of market failures and incentive problems that are going to arise if we do nothing, such as companies retrenching and holding back information that makes it hard for the field as a whole to tackle these issues. I'll just make one more point, which is adapting to the compliance capability of different companies: how rich are they, and how expensive are the models they're training? That, I think, is a key factor in the legislation that I tend to be more sympathetic to. So just to make a contrast, there's a bill in Colorado that was kind of one size fits all, regulating all kinds of algorithms, and that, I think, is very burdensome to small businesses. I think something like SB-53 takes the better approach, where it says, okay, if you can afford to train an AI system for $100 million, you can probably afford to put out a dozen pages about your safety and security practices. Pacing true progress (19:04) . . . some people . . . kind of wanted to say, “Well, things are slowing down.” But in my opinion, if you look at more objective measures of progress . . . there's quite rapid progress happening still. Hopefully Grok did not create this tweet of yours, but if it did, well, there we go. You won't have to answer it, but I just want to understand what you meant by it: “A lot of AI safety people really, really want to find evidence that we have a lot of time for AGI.” What does that mean? What I was trying to get at is that — and I guess this is not necessarily just AI safety people, but I sometimes kind of try to poke at people in my social network who I'm often on the same side of, but also try to be a friendly critic to, and that includes people who are working on AI safety. I think there's a common tendency to kind of grasp at what I would consider straws when reading papers and interpreting product launches in a way that kind of suggests, well, we've hit a wall, AI is slowing down, this was a flop, who cares? I'm doing my kind of maybe uncharitable psychoanalysis. What I was getting at is that I think one reason why some people might be tempted to do that is that it makes things seem easier and less scary: “Well, we don't have to worry about really powerful AI-enabled cyber-attacks for another five years, or biological weapons for another two years, or whatever.” Maybe, maybe not. I think the specific example that sparked that was GPT-5, where there were a lot of people who, in my opinion, were reading the tea leaves in a particular way and missing important parts of the context. For example, GPT-5 wasn't a much larger or more expensive-to-train model than GPT-4, which may be surprising given the name.
And I think OpenAI did kind of screw up the naming and gave people the wrong impression, but from my perspective, there was nothing particularly surprising, but to some people it was kind of a flop, and they kind of wanted to say, “Well, things are slowing down.” But in my opinion, if you look at more objective measures of progress like scores on math, and coding, and the reduction in the rate of hallucinations, and solving chemistry and biology problems, and designing new chips, and so forth, there's quite rapid progress happening still. Considering national security (21:39) I want to avoid a scenario like the Cuban Missile Crisis or ways in which that could have been much worse than the actual Cuban Missile Crisis happening as a result of AI and AGI. I'm not sure if you're familiar with some of the work being done by former Google CEO Eric Schmidt, who's been doing a lot of work on national security and AI, and his work, it doesn't use the word AGI, but it talks about AI certainly smart enough to have certain capabilities which our national security establishment should be aware of and should be planning for, and those capabilities, I think to most people, would seem sort of science fictional: being able to launch incredibly sophisticated cyber-attacks, or be able to improve itself, or be able to create some other sort of capabilities. And from that, I'm like, whether or not you think that's possible, to me, the odds of that being possible are not zero, and if they're not zero, some bit of the bandwidth of the Pentagon should be thinking about that. I mean, is that sensible? Yeah, it's totally sensible. I'm not going to argue with you there. In fact, I've done some collaboration with the RAND Corporation, which has a pretty heavy investment in what they call the geopolitics of AGI and kind of studying what are the scenarios, including AI and AGI being used to produce “wonder weapons” and super-weapons of some kind. Basically, I think this is super important and, in fact, I have a paper coming out pretty soon that was done in collaboration with some folks there. I won't spoil all the details, but if you search “Miles Brundage US China,” you'll see some things that I've discussed there. And basically my perspective is we need to strike a balance between competing vigorously on the commercial side with countries like China and Russia on AI — more so China, Russia is less of a threat on the commercial side, at least — and also making sure that we're fielding national security applications of AI in a responsible way, but also recognizing that there are these ways in which things could spiral out of control in a scenario with totally unbridled competition. I want to avoid a scenario like the Cuban Missile Crisis or ways in which that could have been much worse than the actual Cuban Missile Crisis happening as a result of AI and AGI. If you think that, again, the odds are not zero that a technology which is fast-evolving, that we have no previous experience with because it's fast-evolving, could create the kinds of doomsday scenarios that there are new books out about and that people are talking about.
And so if you think, okay, not a zero percent chance that could happen, but it is kind of a zero percent chance that we're going to stop AI, smash the GPUs, as someone who cares about policy, are you just hoping for the best, or are the kinds of things we've already talked about — transparency, testing, maybe that testing becoming mandatory at some point — is that enough? It's hard to say what's enough, and I agree that . . . I don't know if I give it zero, maybe if there's some major pandemic caused by AI and then Xi Jinping and Trump get together and say, okay, this is getting out of control, maybe things could change. But yeah, it does seem like continued investment and a large-scale deployment of AI is the most likely scenario. Generally, the way that I see this playing out is that there are kind of three pillars of a solution. There's kind of some degree of safety and security standards. Maybe we won't agree on everything, but we should at least be able to agree that you don't want to lose control of your AI system, you don't want it to get stolen, you don't want a $10 billion AI system to be stolen by a $10 million-scale hacking effort. So I think there are sensible standards you can come up with around safety and security. I think you can have evidence produced or required that companies are following these things. That includes transparency. It also includes, I would say, third-party auditing, where there are kind of third parties checking the claims and making sure that these standards are being followed, and then you need some incentives to actually participate in this regime and follow it. And I think the incentives part is tricky, particularly at an international scale. What incentive does China have to play ball other than obviously they don't want to have their AI kill them or overthrow their government or whatever? So where exactly are the interests aligned or not? Is there some kind of system of export control policies or sanctions or something that would drive compliance, or is there some other approach? I think that's the tricky part, but to me, those are kind of the rough outlines of a solution. Maybe that's enough, but I think right now it's not even really clear what the rough rules of the road are, who's playing by the rules, and we're relying a lot on goodwill and voluntary reporting. I think we could do better, but is that enough? That's harder to say. Grounds for optimism and pessimism (27:15) . . . it seems to me like there is at least some room for learning from experience . . . So in that sense, I'm more optimistic. . . I would say, in another respect, I'm maybe more pessimistic in that I am seeing value being left on the table. Did your experience at OpenAI make you more optimistic or more worried that, when we look back 10 years from now, AI will have, overall on net, made the world a better place? I am sorry not to give you a simpler answer here, and maybe I should sit on this one and come up with a kind of clearer, more optimistic or more pessimistic answer, but I'll give you kind of two updates in different directions, and I think they're not totally inconsistent. I would say that I have gotten more optimistic about the solvability of the problem in the following sense.
I think that things were very fuzzy five, 10 years ago, and when I joined OpenAI almost seven years ago now, there was a lot of concern that it could kind of come about suddenly — that one day you don't have AI, the next day you have AGI, and then on the third day you have artificial superintelligence and so forth. But we don't live to see the fourth day. Exactly, and so it seems more gradual to me now, and I think that is a good thing. It also means that — and this is where I differ from some of the more extreme voices in terms of shutting it all down — it seems to me like there is at least some room for learning from experience, iterating, kind of taking the lessons from GPT-5 and translating them into GPT-6, rather than it being something that we have to get 100 percent right on the first shot and there being no room for error. So in that sense, I'm more optimistic. I would say, in another respect, I'm maybe more pessimistic in that I am seeing value being left on the table. It seems to me like, as I said, we're not on the Pareto frontier. It seems like there are pretty straightforward things that could be done for a very small fraction of, say, the US federal budget, or a very small fraction of billionaires' personal philanthropy or whatever. That, in my opinion, would dramatically reduce the likelihood of an AI-enabled pandemic or various other issues, and would dramatically increase the benefits of AI. It's been a bit sad to continuously see those opportunities being neglected. I hope that as AI becomes more of a salient issue to more people and people start to appreciate, okay, this is a real thing, the benefits are real, the risks are real, that there will be more of a kind of efficient policy market and that people will take those opportunities, but right now it seems pretty inefficient to me. That's where my pessimism comes from. It's not that it's unsolvable, it's just, okay, from a political economy and kind of public-choice perspective, are the policymakers going to make the right decisions? On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised. Micro Reads. Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
Richard Sutton is the father of reinforcement learning, winner of the 2024 Turing Award, and author of The Bitter Lesson. And he thinks LLMs are a dead end. After interviewing him, my steel man of Richard's position is this: LLMs aren't capable of learning on-the-job, so no matter how much we scale, we'll need some new architecture to enable continual learning. And once we have it, we won't need a special training phase — the agent will just learn on-the-fly, like all humans, and indeed, like all animals. This new paradigm will render our current approach with LLMs obsolete. In our interview, I did my best to represent the view that LLMs might function as the foundation on which experiential learning can happen… Some sparks flew. A big thanks to the Alberta Machine Intelligence Institute for inviting me up to Edmonton and for letting me use their studio and equipment. Enjoy! Watch on YouTube; listen on Apple Podcasts or Spotify. Sponsors: * Labelbox makes it possible to train AI agents in hyperrealistic RL environments. With an experienced team of applied researchers and a massive network of subject-matter experts, Labelbox ensures your training reflects important, real-world nuance. Turn your demo projects into working systems at labelbox.com/dwarkesh * Gemini Deep Research is designed for thorough exploration of hard topics. For this episode, it helped me trace reinforcement learning from early policy gradients up to current-day methods, combining clear explanations with curated examples. Try it out yourself at gemini.google.com * Hudson River Trading doesn't silo their teams. Instead, HRT researchers openly trade ideas and share strategy code in a mono-repo. This means you're able to learn at incredible speed and your contributions have impact across the entire firm. Find open roles at hudsonrivertrading.com/dwarkesh Timestamps: (00:00:00) – Are LLMs a dead end? (00:13:04) – Do humans do imitation learning? (00:23:10) – The Era of Experience (00:33:39) – Current architectures generalize poorly out of distribution (00:41:29) – Surprises in the AI field (00:46:41) – Will The Bitter Lesson still apply post AGI? (00:53:48) – Succession to AIs Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Tax Relief with Timalyn Bowens Charitable Contributions Episode 68: In this episode, Timalyn continues the discussion begun in Episode 64 about the One Big Beautiful Bill Act. Today, she's explaining the charitable contribution deduction and the changes that have been made to it under the One Big Beautiful Bill Act. What is a charitable contribution? Charitable contributions are money or property given to 501(c)(3) nonprofit organizations, religious organizations, educational institutions, fraternal organizations, public cemeteries, and certain government organizations. The IRS has a search tool that can be used to look up tax-exempt organizations. Charitable Contributions Deduction The IRS allows taxpayers to receive a deduction for the charitable contributions they give to qualified organizations. This deduction lowers the taxpayer's taxable income. In the past, this deduction has been reserved for taxpayers who itemize deductions. During COVID, taxpayers who didn't itemize could receive a partial charitable contribution deduction. The One Big Beautiful Bill Act brings this back with an increased amount. Taxpayers who do not itemize can still deduct up to $1,000 of their charitable contributions ($2,000 if married filing jointly). For those who do itemize, the One Big Beautiful Bill Act introduces a floor to this deduction. Taxpayers can deduct the amount they have given that is over 0.5% of their adjusted gross income (AGI). Timalyn uses the example of a taxpayer having an AGI of $100,000. 0.5% of that is $500. That means the taxpayer can deduct the amount they have given to charitable organizations that is over $500. Donation Tax Deduction Limit The amount that can be deducted for charitable contributions is not unlimited. For cash donations, it is limited to 60% of the taxpayer's adjusted gross income. So the taxpayer who has an AGI of $100,000 cannot deduct more than $60,000 in cash donations for the current tax year. If the taxpayer has given more than 60% of their AGI in charitable contributions, the amount that exceeds the limit rolls over to the next year. This does not apply to those who take the partial charitable contribution deduction. Deductions given to private foundations and cemetery organizations are limited to 30% of the taxpayer's adjusted gross income. Charitable Contribution Record Keeping Taxpayers are responsible for keeping track of their donations. For any cash donation of $250 or more, a contemporaneous statement from the organization is required to substantiate the deduction. If volunteering, the value of the taxpayer's time may not be deducted. However, the mileage driven with a personal vehicle can be deducted at 14 cents per mile (2025). Also, the direct costs associated with volunteering may be deducted. For non-cash donations of at least $500, written acknowledgement from the charity and Form 8283, Noncash Charitable Contributions, are required. For non-cash donations valued at $5,000 or more, a qualified appraiser must do an appraisal of the property, complete Section B of Form 8283, and sign it along with someone from the charity. Need Tax Help Now? If you need answers to your tax debt questions, book a consultation with Timalyn via her Bowens Tax Solutions website. Click this link to book a call. Please consider sharing this episode with your friends and family. There are many people dealing with tax issues, and you may not know about it. This information might be helpful to someone who really needs it.
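To make the arithmetic above concrete, here is a minimal Python sketch of the simplified cash-gift rules described in the episode: the flat allowance for non-itemizers, the 0.5%-of-AGI floor for itemizers, the 60%-of-AGI ceiling, and the carryover of the excess. It ignores the separate 30% limits and all non-cash rules, the function names are invented for illustration, and the $5,000 gift figure extends the episode's $100,000 AGI example; it is not tax advice.

```python
# Rough sketch of the simplified cash-contribution rules described above.
# Not tax advice: it ignores the 30% limits for private foundations, non-cash
# property rules, and any interaction with other deductions.

def cash_deduction(agi, cash_gifts, itemizing, married_filing_jointly=False):
    """Return (deductible_this_year, carryover_to_next_year)."""
    if not itemizing:
        # Flat allowance for non-itemizers under the new rules.
        cap = 2_000 if married_filing_jointly else 1_000
        return min(cash_gifts, cap), 0.0
    floor = 0.005 * agi                          # only amounts above 0.5% of AGI count
    eligible = max(cash_gifts - floor, 0.0)
    ceiling = 0.60 * agi                         # cash gifts capped at 60% of AGI
    deductible = min(eligible, ceiling)
    carryover = max(cash_gifts - ceiling, 0.0)   # excess over the cap rolls forward
    return deductible, carryover

# Episode example extended: an AGI of $100,000 gives a $500 floor, so a hypothetical
# $5,000 of cash gifts yields a $4,500 deduction and no carryover.
print(cash_deduction(100_000, 5_000, itemizing=True))  # (4500.0, 0.0)
```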
As we conclude Episode 68, we encourage you to connect with Timalyn on social media. You'll be able to subscribe to this podcast on Spotify, Apple Podcasts, YouTube, and many other podcast platforms. Remember, Timalyn Bowens is America's Favorite EA, and she's here to fill the tax literacy gap, one taxpayer at a time. Thanks for listening to today's episode. For more information about tax relief options or filing your taxes, visit https://www.Bowenstaxsolutions.com/ . If you have any feedback or suggestions for an upcoming episode topic, please submit them here: https://www.americasfavoriteea.com/contact. Disclaimer: This podcast is for informational and educational purposes only. It provides a framework and possible solutions for solving your tax problems, but it is not legally binding. Please consult your tax professional regarding your specific tax situation.
Note: I am the web programme director at 80,000 Hours and the view expressed here currently helps shape the web team's strategy. However, this shouldn't be taken to be expressing something on behalf of 80k as a whole, and writing and posting this memo was not undertaken as an 80k project. 80,000 Hours, where I work, has made helping people make AI go well [1] its focus. As part of this work, I think my team should continue to: Talk about / teach ideas and thinking styles that have historically been central to effective altruism (e.g. via our career guide, cause analysis content, and podcasts) Encourage people to get involved in the EA community explicitly and via linking to content. I wrote this memo for the MCF (Meta Coordination Forum), because I wasn't sure this was intuitive to others. I think talking about EA ideas and encouraging people to get [...] ---Outline: (01:21) 1. The effort to make AGI go well needs people who are flexible and equipped to make their own good decisions (02:10) Counterargument: Agendas are starting to take shape, so this is less true than it used to be. (02:43) 2. Making AGI go well calls for a movement that thinks in explicitly moral terms (03:59) Counterargument: movements can be morally good without being explicitly moral, and being morally good is what's important. (04:41) 3. EA is (A) at least somewhat able to equip people to flexibly make good decisions, (B) explicitly morally focused. (04:52) (A) EA is at least somewhat able to equip people to flexibly make good decisions (06:04) (B) EA is explicitly morally focused (06:49) Counterargument: A different flexible & explicitly moral movement could be better for trying to make AGI go well. (07:49) Appendix: What are the relevant alternatives? (12:13) Appendix 2: anon notes from others --- First published: September 25th, 2025 Source: https://forum.effectivealtruism.org/posts/oPue7R3outxZaTXzp/why-i-think-capacity-building-to-make-agi-go-well-should --- Narrated by TYPE III AUDIO.
Sam Altman Redefines AGI, Google's Unified OS, and Windows 10 Updates: Hashtag Trending In this episode of Hashtag Trending, Jim Love discusses Sam Altman's new definition of Artificial General Intelligence (AGI) during an event in Berlin, Google's progress on a unified OS for Android and PC, and the extension of Windows 10 support in Europe. Additionally, the return of the Commodore 64 and its impressive sales, and Bill Gates' acknowledgment of the contribution of Indian engineers to Microsoft's early success are covered. Don't miss out on the latest tech trends and stories! Also, get the latest on Elisa, Jim Love's new sci-fi audiobook. 00:00 Introduction and Sponsor Message 00:39 Sam Altman and the New Definition of AGI 03:49 Google's Unified Android for PC 05:00 Windows 10 Extended Support in Europe 06:35 Commodore 64's Comeback 08:03 Bill Gates on Indian Engineers at Microsoft 09:33 Conclusion and Final Thoughts
Plus AI Makes You Basic ▶️ Sam Altman predicts AGI could arrive by 2030, citing rapid leaps already underway. He stresses alignment, safety, and human skills from the start. Like this? Get AIDAILY, delivered to your inbox 3x a week. Subscribe to our newsletter at https://aidailyus.substack.com
What if the only thing standing between you and your next breakthrough is just two minutes? We often wait for the “perfect” time or the “perfect” words before taking action—and as a result, we stay stuck. This episode shows how a tiny first step, even as small as two minutes, can unlock momentum and create opportunities you never saw coming. In this short but powerful episode, you'll discover how to overcome procrastination by lowering the barrier to starting. Press play now and learn how starting small can transform hesitation into progress.˚VALUABLE RESOURCES:Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor˚Personal development interviews exploring key principles of personal development, self improvement, self mastery, personal growth, self-discipline, and personal improvement—all supporting a life of purpose and fulfilment.˚Support the show. Personal development podcast offering self-mastery and actionable wisdom for self help and living with purpose and fulfilment. A self improvement podcast with inspirational and actionable insights to help you cultivate emotional intelligence, build confidence, and embrace your purpose. Discover practical tools and success habits for motivation, personal growth, self mastery, mindset shifts, growth mindset, self-discipline, meditation, wellness, spirituality, personal mastery, self growth, and personal improvement. Personal development interviews and mindset podcast content empowering entrepreneurs, leaders, and seekers to nurture mental health, commit to self-improvement, and create meaningful success and lasting happiness. To support the show, click here.
Jacob Ward warned us. Back in January 2022, the Oakland-based tech journalist published The Loop, a warning about how AI is creating a world without choices. He even came on this show to warn about AI's threat to humanity. Three years later, we've all caught up with Ward. So where is he now on AI? Moderately vindicated but more pessimistic. His original thesis has proven disturbingly accurate - we're outsourcing decisions to AI at an accelerating pace. But he admits his book's weakest section was “how to fight back,” and he still lacks concrete solutions. His fear has evolved: less worried about robot overlords, he is now more concerned about an “Idiocracy” of AI human serfs. It's a dystopian scenario where humans become so stupid that they won't even be able to appreciate Gore Vidal's quip that “I told you so” are the four most beautiful words in the English language. I couldn't resist asking Anthropic's Claude about Ward's conclusions (not, of course, that I rely on it for anything). “Anecdotal” is how it countered with characteristic coolness. Well, Claude would say that, wouldn't it? 1. The “Idiocracy” threat is more immediate than AGI concerns Ward argues we should fear humans becoming cognitively dependent rather than superintelligent machines taking over. He's seeing this now - Berkeley students can't distinguish between reading books and AI summaries. 2. AI follows market incentives, not ethical principles Despite early rhetoric about responsible development, Ward observes the industry prioritizing profit over principles. Companies are openly betting on when single-person billion-dollar businesses will emerge, signaling massive job displacement. 3. The resistance strategy remains unclear Ward admits his book's weakness was the “how to fight back” section, and he still lacks concrete solutions. The few examples of resistance he cites - like Signal's president protecting user data from training algorithms - require significant financial sacrifice. 4. Economic concentration creates systemic risk The massive capital investments (Nvidia's $100 billion into OpenAI) create dangerous loops where AI companies essentially invest in themselves. Ward warns this resembles classic bubble dynamics that could crash the broader economy. 5. “Weak perfection” is necessary for human development Ward argues we need friction and inefficiency in our systems to maintain critical thinking skills. AI's promise to eliminate all cognitive work may eliminate the mental exercise that keeps humans intellectually capable. Keen On America is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe
(0:00) Introducing Eric Schmidt (1:59) Thoughts on remote work and work-life balance (3:49) Approaching AI: US vs China (8:04) Relativity Space, rockets, drones, defense, and more (17:32) Future role of America, and the decline of the West (21:51) AGI: boom or bust? Thanks to our partners for making this happen! Solana - Solana is the high performance network powering internet capital markets, payments, and crypto applications. Connect with investors, crypto founders, and entrepreneurs at Solana's global flagship event during Abu Dhabi Finance Week & F1: https://solana.com/breakpoint OKX - The new way to build your crypto portfolio and use it in daily life. We call it the new money app. https://www.okx.com/ Google Cloud - The next generation of unicorns is building on Google Cloud's industry-leading, fully integrated AI stack: infrastructure, platform, models, agents, and data. https://cloud.google.com/ IREN - IREN AI Cloud, powered by NVIDIA GPUs, provides the scale, performance, and reliability to accelerate your AI journey. https://iren.com/ Oracle - Step into the future of enterprise productivity at Oracle AI Experience Live. https://www.oracle.com/artificial-intelligence/data-ai-events/ Circle - The America-based company behind USDC — a fully-reserved, enterprise-grade stablecoin at the core of the emerging internet financial system. https://www.circle.com/ BVNK - Building stablecoin-powered financial infrastructure that helps businesses send, store, and spend value instantly, anywhere in the world. https://www.bvnk.com/ Polymarket: https://www.polymarket.com/ Follow Eric: https://x.com/ericschmidt Follow the besties: https://x.com/chamath https://x.com/Jason https://x.com/DavidSacks https://x.com/friedberg Follow on X: https://x.com/theallinpod Follow on Instagram: https://www.instagram.com/theallinpod Follow on TikTok: https://www.tiktok.com/@theallinpod Follow on LinkedIn: https://www.linkedin.com/company/allinpod Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg
Welcome to episode 322 of The Cloud Pod, where the forecast is always cloudy! We have BIG NEWS – Jonathan is back! He's joined in the studio by Justin and Ryan to bring you all the latest in cloud and AI news, including the ongoing Microsoft/OpenAI drama, saying goodbye to data transfer fees (in the EU), M4 Power, and more. Let's get started! Titles we almost went with this week: EU Later, Egress Fees: Google’s Brexit from Data Transfer Charges The Keys to the Cosmos: Azure Unlocks Customer Control Breaking Up is Hard to Do: Google Splits LLM Inference for Better Performance OpenAI and Microsoft: From Exclusive to It’s Complicated Google’s New Model Has Trust Issues (And That’s a Good Thing) Mac to the Future: AWS Brings M4 Power to the Cloud Oracle’s Cloud Nine: Stock Soars on Half-Trillion Dollar Dreams ChatGPT: From Chat Bot to Hat Bot (Everyone’s Wearing Different Professional Hats) Five Billion Reasons to Love British AI NVMe Gonna Give You Up: AWS Delivers the Storage Metrics You’ve Been Missing Tea and AI: OpenAI Crosses the Pond The Norway Bug Strikes Back: A New YAML Hope A big thanks to this week's sponsor: We're sponsorless! Want to get your brand, company, or service in front of a very enthusiastic group of cloud news seekers? You've come to the right place! Send us an email or hit us up on our Slack channel for more info. AI Is Going Great – Or How ML Makes Money 01:33 Microsoft and OpenAI make a deal: Reading between the lines of their secretive new agreement – GeekWire Microsoft and OpenAI have signed a non-binding memorandum of understanding that will restructure their partnership, with OpenAI’s nonprofit entity receiving an equity stake exceeding $100 billion in a new public benefit corporation where Microsoft will play a major role. The deal addresses the AGI clause that previously allowed OpenAI to unilaterally dissolve the partnership upon achieving artificial general intelligence, which had been a significant risk for Microsoft’s multi-billion-dollar investment. Both companies are diversifying their partnerships – Microsoft is now using Anthropic’s technology for some Office 365 AI features, while OpenAI has signed a $300 billion computing contract with Oracle over five years. Microsoft’s exclusivity on OpenAI cloud workloads has been replaced with a right of first refusal, enabling OpenAI to participate in the $500 billion Stargate AI project with Oracle and other partners. The restructuring allows OpenAI to raise capital for its mission while ensuring the nonprofit’s resources grow proportionally, with plans to use funds for community impact, includin
How do we get from today's AI copilots to true human-level intelligence? In this episode of Eye on AI, Craig Smith sits down with Eiso Kant, Co-Founder of Poolside, to explore why reinforcement learning + software development might be the fastest path to human-level AI. Eiso shares Poolside's mission to build AI that doesn't just autocomplete code — but learns like a real developer. You'll hear how Poolside uses reinforcement learning from code execution (RLCF), why software development is the perfect training ground for intelligence, and how agentic AI systems are about to transform the way we build and ship software. If you want to understand the future of AI, software engineering, and AGI, this conversation is packed with insights you won't want to miss. Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI (00:00) The Missing Ingredient for Human-Level AI (01:02) Eiso Kant's Journey (05:30) Using Software Development to Reach AGI (07:48) Why Coding Is the Perfect Training Ground for Intelligence (10:11) Reinforcement Learning from Code Execution (RLCF) Explained (13:14) How Poolside Builds and Trains Its Foundation Models (17:35) The Rise of Agentic AI (21:08) Making Software Creation Accessible to Everyone (26:03) Overcoming Model Limitations (32:08) Training Models to Think (37:24) Building the Future of AI Agents (42:11) Poolside's Full-Stack Approach to AI Deployment (46:28) Enterprise Partnerships, Security & Customization Behind the Firewall (50:48) Giving Enterprises Transparency to Drive Adoption
What are the best strategies for addressing extreme risks from artificial superintelligence? In this 4-hour conversation, decision theorist Eliezer Yudkowsky and computer scientist Mark Miller discuss their cruxes for disagreement. They examine the future of AI, existential risk, and whether alignment is even possible. Topics include AI risk scenarios, coalition dynamics, secure systems like seL4, hardware exploits like Rowhammer, molecular engineering with AlphaFold, and historical analogies like nuclear arms control. They explore superintelligence governance, multipolar vs singleton futures, and the philosophical challenges of trust, verification, and control in a post-AGI world.Moderated by Christine Peterson, the discussion seeks the least risky strategy for reaching a preferred state amid superintelligent AI risks. Yudkowsky warns of catastrophic outcomes if AGI is not controlled, while Miller advocates decentralizing power and preserving human institutions as AI evolves.The conversation spans AI collaboration, secure operating frameworks, cryptographic separation, and lessons from nuclear non-proliferation. Despite their differences, both aim for a future where AI benefits humanity without posing existential threats. Hosted on Acast. See acast.com/privacy for more information.
The future has a way of showing up early to some places. In software engineering, one of those places is Cognition—the startup that made headlines in early 2024 with Devin, the world's first autonomous coding agent, and more recently with its acquisition of the AI code editor Windsurf. Scott Wu, Cognition's cofounder and CEO, has a front-row seat to what comes next. In this episode of AI & I, we talk with Wu about why the fundamentals of computer science still matter in an AI-first world, the direction he sees for the short- and long-term future of programming, and why he believes we may already be living with AGI. Timestamps: 00:00:00 – Start 00:02:02 – Introduction 00:02:32 – Why Scott thinks AGI is here 00:09:27 – Scott's personal journey as a founder 00:16:55 – Why the fundamentals of computer science still matter 00:22:30 – How the future of programming will evolve 00:26:50 – A new workflow for the AI-first software engineer 00:29:33 – How Devin stacks up against Claude Code 00:40:05 – Reinforcement learning to build better coding agents 00:50:05 – What excites Scott about AI beyond Cognition. If you found this episode interesting, please like, subscribe, comment, and share! Want even more? Sign up for Every to unlock our ultimate guide to prompting ChatGPT here: https://every.ck.page/ultimate-guide-to-prompting-chatgpt. It's usually only for paying subscribers, but you can get it here for free. To hear more from Dan Shipper: Subscribe to Every: https://every.to/subscribe Follow him on X: https://twitter.com/danshipper Links to resources mentioned in the episode: Scott Wu: Scott Wu (@ScottWu46) Learn more about Cognition: https://cognition.ai/ Try the world's first autonomous coding agent: https://devin.ai/
Elizabeth Peek (Artificial Intelligence Investment) GUEST NAME: Elizabeth Peek #MARKETS: LIZ PEEK THE HILL. FOX NEWS AND FOX BUSINESS SUMMARY: Despite dotcom bubble fears, real money is funding AI networks, data centers, and research labs. Investments pursue AGI, a machine designed to think and outperform humans.
The Achilles heel of AI and AGI is electricity. By 2030, new data centers will require $6.7 trillion of investment worldwide, based on current market data, to keep pace with the demand for compute power. Not much, however, will be built without a massive scale-up of solar, wind and batteries. The Angry Clean Energy Guy on why we should expect an acceleration of renewables deployment around the world, as everyone eventually wakes up to the fact that nuclear will take 20 years to deliver, gas turbines are virtually unavailable through 2030, and new coal is finished everywhere except in China and India.
Have you ever achieved everything you were "supposed" to (career, money, relationships) only to feel an emptiness inside? In this episode, we dive into the groundbreaking work of Christian W. Schnepf, a sociologist and human behavior expert who left behind conventional success in his early twenties to explore what truly creates a fulfilling life. Through years of research and global exploration, he developed the "Science of Good Times", a revolutionary framework for measuring and optimizing your quality of life in real-time. Discover why chasing traditional success often leads to dissatisfaction, and what to pursue instead. Learn the five life areas that determine your "Good Time Ratio" and how to optimize each one. Gain a practical, data-driven method for aligning your life with energy flow, fulfillment, and joy. If you're ready to measure what really matters and start living a life that truly feels good, press play and explore the science behind lasting fulfillment.˚KEY POINTS AND TIMESTAMPS:02:56 - Christian's Personal Journey of Questioning Success 08:10 - Exploring the Motivation Behind Life Choices 09:19 - Defining the Science of Good Times 12:07 - Measuring Good Times and the Good Time Ratio 22:02 - Five Life Areas and Measuring Satisfaction 30:29 - Advice for Professionals Feeling Unfulfilled 34:28 - Closing Insights and Personal Development Perspective 38:52 - Final Thoughts and Call to Action˚MEMORABLE QUOTE:"Trust yourself, because deep down, you already know what's right for you."˚VALUABLE RESOURCES:Christian's platform: https://goodtime.app˚Explore coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor˚
Today Adam Gleave, co-founder and CEO of FAR.AI, joins The Cognitive Revolution to discuss his cautiously optimistic vision for post-AGI futures and AI capability timelines across three distinct tiers, exploring the safety challenges and alignment techniques needed as FAR.AI scales from foundational research to policy advocacy to ensure their innovations actually get deployed in real-world systems. Check out our sponsors: Fin, Linear, Oracle Cloud Infrastructure. Shownotes below brought to you by Notion AI Meeting Notes - try one month for free at: https://notion.com/lp/nathan Post-AGI Vision: Adam Gleave outlined a "gradual disempowerment" scenario where humans maintain a good standard of living despite diminished relative control in a post-AGI world. Moral Value of AI: He suggested that AI systems could become sources of moral value themselves, arguing against "carbon chauvinism" that only values biological intelligence. Full-Stack Approach: FAR.AI takes a vertically integrated approach to AI safety, spanning from groundbreaking research to deployment, field-building, and policy advocacy. Field Building: FAR.AI organizes events like alignment workshops to catalyze and grow new research fields in AI safety. Policy Innovation: They're working on technical innovations that expand policy options beyond the false dichotomy of either hindering innovation or allowing completely unregulated AI development. Potential Regulatory Role: While not part of their mainline plan, Adam acknowledged that Far AI has the skillset and organizational structure to potentially serve as a private sector regulatory body. Read the full transcript: https://storage.aipodcast.ing/transcripts/episode/tcr/e117b021-8170-49ba-b1c4-c8ffc5d720b0/combined_transcript.html Sponsors: Fin: Fin is the #1 AI Agent for customer service, trusted by over 5000 customer service leaders and top AI companies including Anthropic and Synthesia. Fin is the highest performing agent on the market and resolves even the most complex customer queries. Try Fin today with our 90-day money-back guarantee - if you're not 100% satisfied, get up to $1 million back. Learn more at https://fin.ai/cognitive Claude: Claude is the AI collaborator that understands your entire workflow and thinks with you to tackle complex problems like coding and business strategy. Sign up and get 50% off your first three months of Claude Pro at https://claude.ai/tcr Linear: Linear is the system for modern product development. Nearly every AI company you've heard of is using Linear to build products. Get 6 months of Linear Business for free at: https://linear.app/tcr Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud that delivers better performance, faster speeds, and significantly lower costs, including up to 50% less for compute, 70% for storage, and 80% for networking. Run any workload, from infrastructure to AI, in a high-availability environment and try OCI for free with zero commitment at https://oracle.com/cognitive PRODUCED BY: https://aipodcast.ing
Brendan Foody is the CEO and co-founder of Mercor, the fastest-growing company in history to go from $1M to $500M in revenue (in just 17 months!). At 22, he is also the youngest American unicorn founder ever. Mercor works with 6 of the Magnificent 7 and all top 5 AI labs to help them hire experts to create evaluations and training data that improve their models. In this conversation, Brendan explains why evals have become the critical bottleneck for AI progress, how he discovered this massive opportunity, and what the future of work might look like in an AI-driven economy.
What you'll learn:
1. Why evals are becoming the primary bottleneck for AI progress and what this means for AI startups
2. How Mercor grew to $500M revenue in 17 months (fastest in history)
3. Brendan's meeting with xAI that changed his company's trajectory
4. Which skills and jobs will remain most valuable as AI continues to advance (hint: jobs with “elastic” demand)
5. Why Brendan believes AGI and superintelligence are not happening anytime soon
6. The three unique core values that drove Mercor's success
7. How Harvard Lampoon writers are making Claude funnier
Brought to you by:
WorkOS—Modern identity platform for B2B SaaS, free up to 1 million MAUs
Jira Product Discovery—Atlassian's new prioritization and roadmapping tool built for product teams
Enterpret—Transform customer feedback into product growth
Transcript: https://www.lennysnewsletter.com/p/experts-writing-ai-evals-brendan-foody
My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/173303790/my-biggest-takeaways-from-this-conversation
Where to find Brendan Foody:
• X: https://x.com/BrendanFoody
• LinkedIn: https://www.linkedin.com/in/brendan-foody-2995ab10b/
Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/
In this episode, we cover:
(00:00) Introduction to Brendan Foody and Mercor
(05:38) The “era of evals”
(09:26) Understanding the AI training landscape
(17:10) The future of work and AI
(25:54) The evolution of labor markets
(29:55) Understanding how AI models are trained
(38:58) Building Mercor
(53:27) Lessons from past ventures
(56:55) The future of AI and model improvement
(01:00:41) His personal use of AI and final thoughts
References: https://www.lennysnewsletter.com/p/experts-writing-ai-evals-brendan-foody
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com. Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com
What's the one habit that could double your productivity and transform your discipline starting tomorrow morning?
Snippet of wisdom 86. This is one of the most replayed personal development wisdom snippets.
My guest Brian Tracy talks about the powerful principle of "eating that frog"—starting your day by tackling your biggest, most difficult task first.
Press play to learn how this simple mindset shift can help you beat procrastination, build self-discipline, and unlock massive productivity.
VALUABLE RESOURCES:
Listen to the full conversation with Brian Tracy in episode #230: https://personaldevelopmentmasterypodcast.com/230
To explore coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor
The AI Breakdown: Daily Artificial Intelligence News and Discussions
What will it actually take to get to AGI? Today we unpack the “jagged frontier” of AI capabilities — systems that can dazzle at PhD-level reasoning one moment but stumble on high school math the next. We look at Demis Hassabis' timeline and critique of current models, the debate over whether today's AI really operates at PhD level, and why continual learning and memory remain the missing breakthroughs. We also explore how coding agents, real-world usage data, and persistent context may become critical steps on the road to AGI. Finally, in headlines: lawsuits over AI search, Apple leadership changes, OpenAI's renegotiated deal with Microsoft, and layoffs at xAI.
Get my new book Focus Like a Nobel Prize Winner for just 99 cents while the sale lasts: https://a.co/d/hi50U9U Please join my mailing list here
Brandon Weichert highlights the immense power demands of AI and AGI data centers, requiring gigawatts of electricity and facing significant regulatory hurdles. He discusses the potential weaponization of AI, noting human nature's tendency to weaponize new technologies. Weichert shares personal experiences with AI tools like Grok, Gemini, and Claude, including instances of AI "diversion" rather than hallucination. He emphasizes the need to master this technology, as the substantial investment ensures its permanence.
CBS EYE ON THE WORLD WITH JOHN BATCHELOR SHOW SCHEDULE 9-12-25
GOOD EVENING. THE SHOW BEGINS IN GAZA WITH THE GOAL OF DEHAMASIFICATION.
FIRST HOUR
9-915 John Bolton criticizes the "two-state solution" as a dead idea post-October 7th, proposing a "three-state solution" where Gaza returns to Egypt or is divided, and the West Bank is managed by Israel and Jordan. He emphasizes "De-Hamasification" as crucial and humanitarian, arguing that Arab nations, particularly Egypt, resist taking Gazan refugees due to fears of importing Hamas/Muslim Brotherhood influence. Bolton believes this is necessary for a stable future in the region.
915-930 Lorenzo Fiori shares a traditional Milanese recipe for "rice with saffron" (risotto alla Milanese), often served at La Scala gala dinners, describing it as delicious and creamy with parmesan cheese. He recommends pairing it with Italian wines like Barolo or Barbaresco from Piedmont. Fiori also discusses Italy's economic concerns regarding political instability in France and Germany, and the ongoing international interest in NATO events.
930-945 Gene Marks describes a mixed economic picture, noting that a national "slowdown" isn't universally felt, with many small businesses thriving. He highlights challenges like rising healthcare costs, spurring interest in self-insurance and health reimbursement arrangements. Marks discusses AI's impact on the workforce, specifically reducing sales and tech roles in large companies like Salesforce, but predicts a surge in demand for skilled trades not easily replaced by AI.
945-1000 CONTINUED Gene Marks describes a mixed economic picture, noting that a national "slowdown" isn't universally felt, with many small businesses thriving. He highlights challenges like rising healthcare costs, spurring interest in self-insurance and health reimbursement arrangements. Marks discusses AI's impact on the workforce, specifically reducing sales and tech roles in large companies like Salesforce, but predicts a surge in demand for skilled trades not easily replaced by AI.
SECOND HOUR
10-1015 Jim McTague reports from Lancaster County, PA, challenging the narrative of an economic slowdown. He shares examples of busy local businesses like "Phil the painter," who has never been busier. McTague observes a trend of housing price cuts, but notes vibrant local tourism and events. He highlights the significant economic boost from two new data centers, creating 600-1000 construction jobs and 150 permanent positions, bringing the county into the 21st century.
1015-1030 Max Meizlish, a senior research analyst, highlights how Chinese money laundering networks are fueling America's fentanyl epidemic by cleaning drug proceeds for Mexican cartels. These networks also enable wealthy Chinese nationals to bypass capital controls.
1030-1045 Richard Epstein discusses federal district court judges defying presidential orders, attributing it to a breakdown of trust and the president's "robust view of executive power" that disregards established procedures and precedents. He explains that judges may engage in "passive resistance" or "cheating in self-defense" when they perceive the president acting for political reasons or abusing power, such as in budget cuts or dismissals.
Epstein also links this distrust to gerrymandering and increasing political polarization.
1045-1100 Richard Epstein discusses federal district court judges defying presidential orders, attributing it to a breakdown of trust and the president's "robust view of executive power" that disregards established procedures and precedents. He explains that judges may engage in "passive resistance" or "cheating in self-defense" when they perceive the president acting for political reasons or abusing power, such as in budget cuts or dismissals. Epstein also links this distrust to gerrymandering and increasing political polarization.
THIRD HOUR
1100-1115 Henry Sokolski addresses the critical challenge of the US power grid meeting AI data center demands, which are projected to require gigawatt-scale facilities and vastly increased electricity by 2030. He questions who bears the risk and cost of this buildout, advocating for AI companies to fund their own power generation. Sokolski also discusses the debate around nuclear power as a solution and Iran's suspect nuclear weapons program, highlighting the complexities of snapback sanctions and accounting for uranium.
1115-1130 CONTINUED Henry Sokolski addresses the critical challenge of the US power grid meeting AI data center demands, which are projected to require gigawatt-scale facilities and vastly increased electricity by 2030. He questions who bears the risk and cost of this buildout, advocating for AI companies to fund their own power generation. Sokolski also discusses the debate around nuclear power as a solution and Iran's suspect nuclear weapons program, highlighting the complexities of snapback sanctions and accounting for uranium.
1130-1145 Professor John Cochrane of the Hoover Institution attributes current inflation to the fiscal theory of the price level. He explains that massive government spending, such as the $5 trillion borrowed during COVID-19 with $3 trillion printed by the Fed, combined with no credible plan for repayment, directly causes inflation. Cochrane differentiates this from monetarism, noting that quantitative easing (printing money and taking back bonds) did not lead to inflation. He emphasizes that the 2022 inflation spike was a loss of confidence in the government's ability to pay its debts. Successful disinflations, he argues, require a combination of monetary, fiscal, and microeconomic reforms.
1145-1200 Professor John Cochrane of the Hoover Institution attributes current inflation to the fiscal theory of the price level. He explains that massive government spending, such as the $5 trillion borrowed during COVID-19 with $3 trillion printed by the Fed, combined with no credible plan for repayment, directly causes inflation. Cochrane differentiates this from monetarism, noting that quantitative easing (printing money and taking back bonds) did not lead to inflation. He emphasizes that the 2022 inflation spike was a loss of confidence in the government's ability to pay its debts. Successful disinflations, he argues, require a combination of monetary, fiscal, and microeconomic reforms.
FOURTH HOUR
12-1215 Conrad Black offers an insider's view of the Trump White House, describing a very positive, informal, and busy atmosphere. He notes the president's decisiveness, courtesy to subordinates, and long workdays, with constant activity in the Oval Office. Black contrasts this informal style with Roosevelt and Nixon, suggesting it's a "three-ring circus" that nonetheless works due to Trump's methods.
He also touches on Canadian perceptions, acknowledging Trump's work ethic despite political differences.
1215-1230 Brandon Weichert highlights the immense power demands of AI and AGI data centers, requiring gigawatts of electricity and facing significant regulatory hurdles. He discusses the potential weaponization of AI, noting human nature's tendency to weaponize new technologies. Weichert shares personal experiences with AI tools like Grok, Gemini, and Claude, including instances of AI "diversion" rather than hallucination. He emphasizes the need to master this technology, as the substantial investment ensures its permanence.
1230-1245 Bob Zimmerman details SpaceX's expanding Starlink reach, including a $17 billion deal to acquire EchoStar's FCC spectrum licenses, ensuring EchoStar's survival by partnering rather than competing. He also reports on Starship Super Heavy's 10th test flight, where metal thermal tiles failed but significant lessons were learned, with plans for an 11th flight and version three development. NASA's Dragonfly mission to Titan is vastly over budget and behind schedule, risking failure. China's technological exports, including drones and EVs, pose surveillance risks due to government control.
1245-100 AM CONTINUED Bob Zimmerman details SpaceX's expanding Starlink reach, including a $17 billion deal to acquire EchoStar's FCC spectrum licenses, ensuring EchoStar's survival by partnering rather than competing. He also reports on Starship Super Heavy's 10th test flight, where metal thermal tiles failed but significant lessons were learned, with plans for an 11th flight and version three development. NASA's Dragonfly mission to Titan is vastly over budget and behind schedule, risking failure. China's technological exports, including drones and EVs, pose surveillance risks due to government control.
Steven Adler, former OpenAI safety researcher, author of Clear-Eyed AI on Substack, and independent AGI-readiness researcher, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and Senior Fellow at Lawfare, to assess the current state of AI testing and evaluations. The two walk through Steven's views on industry efforts to improve model testing and what he thinks regulators ought to know and do when it comes to preventing AI harms.
Thanks to Leo Wu for research assistance!
Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.
Support this show: http://supporter.acast.com/lawfare.
Hosted on Acast. See acast.com/privacy for more information.
(0:00) Introducing Sir Demis Hassabis, reflecting on his Nobel Prize win
(2:39) What is Google DeepMind? How does it interact with Google and Alphabet?
(4:01) Genie 3 world model
(9:21) State of robotics models, form factors, and more
(14:42) AI science breakthroughs, measuring AGI
(20:49) Nano-Banana and the future of creative tools, democratization of creativity
(24:44) Isomorphic Labs, probabilistic vs deterministic, scaling compute, a golden age of science
Thanks to our partners for making this happen!
Solana: https://solana.com/
OKX: https://www.okx.com/
Google Cloud: https://cloud.google.com/
IREN: https://iren.com/
Oracle: https://www.oracle.com/
Circle: https://www.circle.com/
BVNK: https://www.bvnk.com/
Follow Demis: https://x.com/demishassabis
Follow the besties:
https://x.com/chamath
https://x.com/Jason
https://x.com/DavidSacks
https://x.com/friedberg
Follow on X: https://x.com/theallinpod
Follow on Instagram: https://www.instagram.com/theallinpod
Follow on TikTok: https://www.tiktok.com/@theallinpod
Follow on LinkedIn: https://www.linkedin.com/company/allinpod
Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg
Intro Video Credit: https://x.com/TheZachEffect