Podcasts about AGI

  • 1,615 podcasts
  • 5,271 episodes
  • 41m average duration
  • 4 new episodes daily
  • Latest episode: May 28, 2025

POPULARITY (2017–2024)


Latest podcast episodes about AGI

Bankless
LIMITLESS - The Intelligence Curse: AI Makes Us All Obsolete | Luke Drago & Rudolf

May 28, 2025


Welcome to Limitless. Today we're joined by Luke Drago and Rudolf, authors of the powerful essay series "The Intelligence Curse." Together, we explore a future where artificial general intelligence (AGI) threatens to upend the economic and social contracts that underpin modern civilization. Will AI empower us or make us obsolete? We unpack how labor-replacing AI could dismantle the very incentives that once gave rise to liberal democracies, social mobility, and human-centered innovation—and what it might take to build a future worth living in.

Pillars Of Wealth Creation
POWC # 796: Stop Overpaying Taxes: Advanced Strategies for Business Owners | Mark Myers

May 27, 2025 · 39:34


Today, we're tackling a topic that every high-income earner and business owner needs to pay attention to: tax strategy. If your AGI is $300K or more, or you're paying 32–35% in taxes or facing major capital gains, you're likely leaving serious money on the table. Joining Todd is Mark Myers, founder of Tax Wise Partners. Mark specializes in proactive tax strategies that help business owners and investors legally reduce their tax burden using little-known sections of the tax code that the ultra-wealthy have been using for years.

Book: The Bible

Pillars of Wealth:
1. Education
2. Protect your principal as you earn it
3. Increase the wealth that you might not realize is there

Mark Myers is the Founder and CEO of Tax Wise Partners, a firm dedicated to helping high-income earners, business owners, and real estate investors dramatically reduce their tax burden through advanced, legal tax strategies. With over two decades of experience in financial services, Mark has helped clients reclaim millions in taxes by leveraging proactive planning and little-known sections of the U.S. tax code. His mission is to empower entrepreneurs to keep more of what they earn and build long-term generational wealth. If you would like to connect with Mark, visit https://taxwisepartners.com/

YouTube: www.youtube.com/c/PillarsOfWealthCreation
Interested in coaching? Schedule a call with Todd at www.coachwithdex.com

Listen to the audio version on your favorite podcast host:
SoundCloud: https://soundcloud.com/user-650270376
Apple Podcasts: https://podcasts.apple.com/.../pillars-of.../id1296372835...
Google Podcasts: https://podcasts.google.com/.../aHR0cHM6Ly9mZWVkcy5zb3VuZ...
iHeart Radio: https://www.iheart.com/.../pillars-of-wealth-creation.../
CastBox: https://castbox.fm/.../Pillars-Of-Wealth-Creation...
Spotify: https://open.spotify.com/show/0FmGSJe9fzSOhQiFROc2O0
Pandora: https://pandora.app.link/YUP21NxF3kb
Amazon/Audible: https://music.amazon.com/.../f6cf3e11-3ffa-450b-ac8c...

Chain Reaction
TAOFU's Mikel & Mitch: Enabling Capital Formation for Subnets on Bittensor's $10B+ Network

May 27, 2025 · 42:43


Join Tommy Shaughnessy as he hosts Mitch and Mikel, co-founders of Taofu and TPN, to discuss Taofu, a launchpad for Bittensor subnets. Learn about how they're democratizing subnet funding and their first subnet launch.

Taofu: https://www.Taofu.xyz/
Taofu Funding Form: https://form.typeform.com/to/KmvNfUg5

The Theory of Anything
Episode 108: AI and Obedience (with Dan Gish)

May 27, 2025 · 112:04


This week we are joined by fellow traveler Dan Gish to discuss LLMs and AGI. Does it really, truly make sense to think that OpenAI or DeepMind are not at least an important stepping stone towards the creation of human-level creativity? What does it mean when CritRats assert that these AI algorithms are the opposite of human intelligence because they are obedient whereas we are disobedient?

Support us on Patreon

Machine Learning Street Talk
"Blurring Reality" - Chai's Social AI Platform (SPONSORED)

May 26, 2025 · 50:59


"Blurring Reality" - Chai's Social AI Platform - sponsoredThis episode of MLST explores the groundbreaking work of Chai, a social AI platform that quietly built one of the world's largest AI companion ecosystems before ChatGPT's mainstream adoption. With over 10 million active users and just 13 engineers serving 2 trillion tokens per day, Chai discovered the massive appetite for AI companionship through serendipity while searching for product-market fit.CHAI sponsored this show *because they want to hire amazing engineers* -- CAREER OPPORTUNITIES AT CHAIChai is actively hiring in Palo Alto with competitive compensation ($300K-$800K+ equity) for roles including AI Infrastructure Engineers, Software Engineers, Applied AI Researchers, and more. Fast-track qualification available for candidates with significant product launches, open source contributions, or entrepreneurial success.https://www.chai-research.com/jobs/The conversation with founder William Beauchamp and engineers Tom Lu and Nischay Dhankhar covers Chai's innovative technical approaches including reinforcement learning from human feedback (RLHF), model blending techniques that combine smaller models to outperform larger ones, and their unique infrastructure challenges running exaflop-class compute.SPONSOR MESSAGES:***Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers in Zurich and SF. Goto https://tufalabs.ai/***Key themes explored include:- The ethics of AI engagement optimization and attention hacking- Content moderation at scale with a lean engineering team- The shift from AI as utility tool to AI as social companion- How users form deep emotional bonds with artificial intelligence- The broader implications of AI becoming a social mediumWe also examine OpenAI's recent pivot toward companion AI with April's new GPT-4o, suggesting a fundamental shift in how we interact with artificial intelligence - from utility-focused tools to companion-like experiences that blur the lines between human and artificial intimacy.The episode also covers Chai's unconventional approach to hiring only top-tier engineers, their bootstrap funding strategy focused on user revenue over VC funding, and their rapid experimentation culture where one in five experiments succeed.TOC:00:00:00 - Introduction: Steve Jobs' AI Vision & Chai's Scale00:04:02 - Chapter 1: Simulators - The Birth of Social AI00:13:34 - Chapter 2: Engineering at Chai - RLHF & Model Blending00:21:49 - Chapter 3: Social Impact of GenAI - Ethics & Safety00:33:55 - Chapter 4: The Lean Machine - 13 Engineers, Millions of Users00:42:38 - Chapter 5: GPT-4o Becoming a Companion - OpenAI's Pivot00:50:10 - Chapter 6: What Comes Next - The Future of AI Intimacy TRANSCRIPT: https://www.dropbox.com/scl/fi/yz2ewkzmwz9rbbturfbap/CHAI.pdf?rlkey=uuyk2nfhjzezucwdgntg5ubqb&dl=0

聽天下:天下雜誌Podcast
[Zero Time Difference 05.27.25]: A rough year for Tim Cook! Apple's crisis goes beyond Trump; Xi Jinping's AI ambitions and the strategy China is counting on to overtake the US; tech startups are the hope for German economic growth, but where will the money come from?

May 26, 2025 · 10:46


Tuesday's Zero Time Difference covers the following international stories:
1. The Wall Street Journal: For Cook, this year has truly been one setback after another
2. The Economist: Xi Jinping's AI ambitions, and the strategy China is counting on to overtake the US
3. Der Spiegel: Tech startups have become the hope for German economic growth, but where will the money come from?
Written by: 吳凱琳. Production team: 黃柏維.
*Read the full story at Zero Time Difference.

On with Kara Swisher
Sam Altman, OpenAI and the Future of Artificial (General) Intelligence

May 22, 2025 · 51:39


Few technological advances have made the kind of splash — and had the potential long-term impact — that ChatGPT did in November 2022. It made a nonprofit called OpenAI and its CEO, Sam Altman, household names around the world. Today, ChatGPT is still the world's most popular AI chatbot; OpenAI recently closed a $40 billion funding deal, the largest private tech deal on record. But who is Sam Altman? And was it inevitable that OpenAI would become such a huge player in the AI space? Kara speaks to two fellow tech reporters who have tackled these questions in their latest books: Keach Hagey is a reporter at The Wall Street Journal. Her book is called "The Optimist: Sam Altman, OpenAI and the Race to Reinvent the Future." Karen Hao writes for publications including The Atlantic and leads the Pulitzer Center's AI Spotlight Series. Her book is called "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI." They speak to Kara about Altman's background, his short firing and rehiring in 2023 known as "The Blip", how Altman used OpenAI's nonprofit status to recruit AI researchers and get Elon Musk on board, and whether OpenAI's mission is still to reach AGI, artificial general intelligence. Questions? Comments? Email us at on@voxmedia.com or find us on Instagram, TikTok, and Bluesky @onwithkaraswisher. Learn more about your ad choices. Visit podcastchoices.com/adchoices

The Lunar Society
How Does Claude 4 Think? — Sholto Douglas & Trenton Bricken

May 22, 2025 · 144:01


New episode with my good friends Sholto Douglas & Trenton Bricken. Sholto focuses on scaling RL and Trenton researches mechanistic interpretability, both at Anthropic.

We talk through what's changed in the last year of AI research; the new RL regime and how far it can scale; how to trace a model's thoughts; and how countries, workers, and students should prepare for AGI.

See you next year for v3. Here's last year's episode, btw. Enjoy!

Watch on YouTube; listen on Apple Podcasts or Spotify.

SPONSORS
* WorkOS ensures that AI companies like OpenAI and Anthropic don't have to spend engineering time building enterprise features like access controls or SSO. It's not that they don't need these features; it's just that WorkOS gives them battle-tested APIs that they can use for auth, provisioning, and more. Start building today at workos.com.
* Scale is building the infrastructure for safer, smarter AI. Scale's Data Foundry gives major AI labs access to high-quality data to fuel post-training, while their public leaderboards help assess model capabilities. They also just released Scale Evaluation, a new tool that diagnoses model limitations. If you're an AI researcher or engineer, learn how Scale can help you push the frontier at scale.com/dwarkesh.
* Lighthouse is THE fastest immigration solution for the technology industry. They specialize in expert visas like the O-1A and EB-1A, and they've already helped companies like Cursor, Notion, and Replit navigate U.S. immigration. Explore which visa is right for you at lighthousehq.com/ref/Dwarkesh.

To sponsor a future episode, visit dwarkesh.com/advertise.

TIMESTAMPS
(00:00:00) – How far can RL scale?
(00:16:27) – Is continual learning a key bottleneck?
(00:31:59) – Model self-awareness
(00:50:32) – Taste and slop
(01:00:51) – How soon to fully autonomous agents?
(01:15:17) – Neuralese
(01:18:55) – Inference compute will bottleneck AGI
(01:23:01) – DeepSeek algorithmic improvements
(01:37:42) – Why are LLMs 'baby AGI' but not AlphaZero?
(01:45:38) – Mech interp
(01:56:15) – How countries should prepare for AGI
(02:10:26) – Automating white collar work
(02:15:35) – Advice for students

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Big Technology Podcast
Google DeepMind CEO Demis Hassabis + Google Co-Founder Sergey Brin: Scaling AI, AGI Timeline, Simulation Theory

May 21, 2025 · 30:47


Demis Hassabis is the CEO of Google DeepMind. Sergey Brin is the co-founder of Google. The two leading tech executives join Alex Kantrowitz for a live interview at Google's I/O developer conference to discuss the frontiers of AI research. Tune in to hear their perspective on whether scaling is tapped out, how reasoning techniques have performed, what AGI actually means, the potential for an intelligence explosion, and much more. Tune in for a deep look into AI's cutting edge, featuring two executives building it.

Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.
Want a discount for Big Technology on Substack? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b
Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Moonshots - Adventures in Innovation
ChatGPT deep dive: Sam Altman, CEO of OpenAI

May 21, 2025 · 58:31


Given the big news today about the partnership between Sam and Jony Ive, we thought we would share this recent episode. In this episode of the Moonshots Podcast, hosts Mike and Mark dive deep into the world of artificial intelligence, focusing on Sam Altman, the CEO of OpenAI. The discussion features insights from various interviews and talks, including Bill Gates' interview with Sam Altman on the transformative power of ChatGPT and Sam's conversations with Lex Fridman and Craig Cannon. Listeners will also explore key lessons from Sam's time at Y Combinator, providing valuable guidance for aspiring entrepreneurs.

Become a member to support the Moonshots Podcast and access exclusive content: join us on Patreon.

Episode Description:
In this compelling episode, Mike and Mark explore the groundbreaking work of Sam Altman, CEO of OpenAI, and his vision for artificial intelligence. The episode features highlights from Bill Gates' interview with Sam Altman on the power of ChatGPT, revealing the potential and impact of this AI application. They also delve into Sam's discussion with Lex Fridman about AGI and the importance of staying true to one's values amidst competition, particularly with tech giants like Google. Additionally, the hosts share three essential lessons from Sam's Y Combinator classes on how to start a successful startup. The episode concludes with insights from Sam's talk with Craig Cannon on the importance of focus and the pitfalls of the deferred life plan. This episode is a must-listen for anyone interested in AI, entrepreneurship, and the future of technology.

Links:
Podcast Episode
Article on Sam Altman, OpenAI's Spectacular CEO
YouTube: Sam Altman Talks OpenAI and AGI

Expanded Key Concepts and Insights:
The Power of ChatGPT: Explore how ChatGPT is revolutionizing the AI landscape, as discussed in Bill Gates' interview with Sam Altman.
Navigating AGI and Competition: Understand the challenges and strategies of competing in the AI industry, especially against giants like Google, as shared in Sam's conversation with Lex Fridman.
Starting a Startup: Learn three critical lessons for aspiring entrepreneurs from Sam Altman's Y Combinator teachings.
Focus and Ambition: Gain insights on the importance of focus and structuring ambitions effectively, avoiding the pitfalls of the deferred life plan, as discussed in Sam's talk with Craig Cannon.

About Sam Altman:
Sam Altman is the CEO of OpenAI, a leading artificial intelligence research lab. Before joining OpenAI, Sam was the president of Y Combinator, where he played a pivotal role in nurturing numerous successful startups. His work at OpenAI focuses on advancing artificial intelligence to benefit humanity, ensuring that AGI (Artificial General Intelligence) aligns with human values.

About Moonshots Podcast:
The Moonshots Podcast, hosted by Mike and Mark, delves into the minds of innovators and visionaries who are making significant strides in various fields. Each episode offers deep insights into the strategies, mindsets, and tools these trailblazers use to achieve extraordinary success. The podcast aims to inspire and equip listeners with actionable insights to pursue their moonshot ideas.

Thanks to our monthly supporters: Emily Rose Banks Malcolm Magee Natalie Triman Kaur Ryan N. Marco-Ken Möller Mohammad Lars Bjørge Edward Rehfeldt III 孤鸿 月影 Fabian Jasper Verkaart Andy Pilara ola Austin Hammatt Zachary Phillips Mike Leigh Cooper Gayla Schiff Laura KE Krzysztof Roar Nikolay Ytre-Eide Stef Roger von Holdt Jette Haswell venkata reddy Ingram Casey Ola rahul grover Evert van de Plassche Ravi Govender Craig Lindsay Steve Woollard Lasse Brurok Deborah Spahr Barbara Samoela Jo Hatchard Kalman Cseh Berg De Bleecker Paul Acquaah MrBonjour Sid Liza Goetz Konnor Ah kuoi Marjan Modara Dietmar Baur Bob Nolley

★ Support this podcast on Patreon ★

Eran Gefen | חצי שעה של השראה עם ערן גפן
Daniel Schreiber, CEO of Lemonade: The aliens have landed. What now?

May 21, 2025 · 43:28


Coming soon: artificial general intelligence. AGI is about to arrive and change everything - a reality in which artificial intelligence overtakes human beings. We talked about the strategic implications, and the ways managers, businesses, and entrepreneurs can prepare for the enormous wave that is coming and even benefit from it. Daniel built Lemonade as AI-first from day one, founded the Mosaic institute, and recently became known for his talk "The Aliens Have Landed."

My newsletter: www.100book.org/join
My strategy consulting firm, which helps companies develop growth strategies: www.gteam.org/

Jeff Katz
Craig Peterson: May 20, 2025

May 20, 2025 · 14:57


Craig Peterson joins Jeff to talk about a new Windows upgrade and why AGI is bogus.

Hidden Forces
Empire of AI: Inside OpenAI's Race to Conquer the Future | Karen Hao

May 19, 2025 · 67:23


In Episode 418 of Hidden Forces, Demetri Kofinas sits down with award-winning journalist Karen Hao to discuss Empire of AI — her inside account of how OpenAI evolved from an idealistic, safety-first nonprofit into one of the world's most valuable private companies in its race to conquer the future. This conversation takes you inside that transformation—from the heady idealism of OpenAI's founding, through the billion-dollar Microsoft deal and the 2023 boardroom coup, to the unresolved questions that hang over Silicon Valley and Washington alike about the private concentration of power in the age of artificial intelligence and the nature of the world we are building. Whether you're an investor, a policymaker, or simply a concerned citizen trying to make sense of today's headlines, this episode will equip you with the context you need to understand what's really at stake in the race for AGI—and the levers we still have to steer it.

Subscribe to our premium content—including our premium feed, episode transcripts, and Intelligence Reports—by visiting HiddenForces.io/subscribe. If you'd like to join the conversation and become a member of the Hidden Forces Genius community—with benefits like Q&A calls with guests, exclusive research and analysis, in-person events, and dinners—you can also sign up on our subscriber page at HiddenForces.io/subscribe.

If you enjoyed today's episode of Hidden Forces, please support the show by:
- Subscribing on Apple Podcasts, YouTube, Spotify, Stitcher, SoundCloud, CastBox, or via our RSS Feed
- Writing us a review on Apple Podcasts & Spotify
- Joining our mailing list at https://hiddenforces.io/newsletter/

Producer & Host: Demetri Kofinas
Editor & Engineer: Stylianos Nicolaou

Subscribe and support the podcast at https://hiddenforces.io. Join the conversation on Facebook, Instagram, and Twitter at @hiddenforcespod. Follow Demetri on Twitter at @Kofinas.

Episode Recorded on 05/12/2025

GZero World with Ian Bremmer
OpenAI whistleblower Daniel Kokotajlo on superintelligence and existential risk of AI

May 17, 2025 · 38:16


How much could our relationship with technology change by 2027? In the last few years, new artificial intelligence tools like ChatGPT and DeepSeek have transformed how we think about work, creativity, even intelligence itself. But tech experts are ringing alarm bells that powerful new AI systems that rival human intelligence are being developed faster than regulation, or even our understanding, can keep up with. Should we be worried? On the GZERO World Podcast, Ian Bremmer is joined by Daniel Kokotajlo, a former OpenAI researcher and executive director of the AI Futures Project, to discuss AI 2027—a new report that forecasts AI's progression, where tech companies race to beat each other to develop superintelligent AI systems, and the existential risks ahead if safety rails are ignored. AI 2027 reads like science fiction, but Kokotajlo's team has direct knowledge of current research pipelines. Which is exactly why it's so concerning. How will artificial intelligence transform our world, and how do we avoid the most dystopian outcomes? What happens when the line between man and machine disappears altogether?

Host: Ian Bremmer
Guest: Daniel Kokotajlo

Subscribe to the GZERO World with Ian Bremmer Podcast on Apple Podcasts, Spotify, or your preferred podcast platform, to receive new episodes as soon as they're published.

GZERO World with Ian Bremmer
OpenAI whistleblower Daniel Kokotajlo on superintelligence and existential risk of AI

GZERO World with Ian Bremmer

Play Episode Listen Later May 17, 2025 38:16


How much could our relationship with technology change by 2027? In the last few years, new artificial intelligence tools like ChatGPT and DeepSeek have transformed how we think about work, creativity, even intelligence itself. But tech experts are ringing alarm bells that powerful new AI systems that rival human intelligence are being developed faster than regulation, or even our understanding, can keep up with. Should we be worried? On the GZERO World Podcast, Ian Bremmer is joined by Daniel Kokotajlo, a former OpenAI researcher and executive director of the AI Futures Project, to discuss AI 2027—a new report that forecasts AI's progression, where tech companies race to beat each other to develop superintelligent AI systems, and the existential risks ahead if safety rails are ignored. AI 2027 reads like science fiction, but Kokotajlo's team has direct knowledge of current research pipelines. Which is exactly why it's so concerning. How will artificial intelligence transform our world and how do we avoid the most dystopian outcomes? What happens when the line between man and machine disappears altogether? Host: Ian BremmerGuest: Daniel Kokotajlo Subscribe to the GZERO World with Ian Bremmer Podcast on Apple Podcasts, Spotify, or your preferred podcast platform, to receive new episodes as soon as they're published.

80,000 Hours Podcast with Rob Wiblin
Don't believe OpenAI's “nonprofit” spin (with Tyler Whitmer)

May 15, 2025 · 72:04


OpenAI's recent announcement that its nonprofit would "retain control" of its for-profit business sounds reassuring. But this seemingly major concession, celebrated by so many, is in itself largely meaningless.

Litigator Tyler Whitmer is a coauthor of a newly published letter that describes this attempted sleight of hand and directs regulators on how to stop it.

As Tyler explains, the plan both before and after this announcement has been to convert OpenAI into a Delaware public benefit corporation (PBC) — and this alone will dramatically weaken the nonprofit's ability to direct the business in pursuit of its charitable purpose: ensuring AGI is safe and "benefits all of humanity."

Right now, the nonprofit directly controls the business. But were OpenAI to become a PBC, the nonprofit, rather than having its "hand on the lever," would merely contribute to the decision of who does.

Why does this matter? Today, if OpenAI's commercial arm were about to release an unhinged AI model that might make money but be bad for humanity, the nonprofit could directly intervene to stop it. In the proposed new structure, it likely couldn't do much at all.

But it's even worse than that: even if the nonprofit could select the PBC's directors, those directors would have fundamentally different legal obligations from those of the nonprofit. A PBC director must balance public benefit with the interests of profit-driven shareholders — by default, they cannot legally prioritise public interest over profits, even if they and the controlling shareholder that appointed them want to do so.

As Tyler points out, there isn't a single reported case of a shareholder successfully suing to enforce a PBC's public benefit mission in the 10+ years since the Delaware PBC statute was enacted.

This extra step from the nonprofit to the PBC would also mean that the attorneys general of California and Delaware — who today are empowered to ensure the nonprofit pursues its mission — would find themselves powerless to act. These are probably not side effects but rather a Trojan horse that for-profit investors are trying to slip past regulators.

Fortunately this can all be addressed — but it requires either the nonprofit board or the attorneys general of California and Delaware to promptly put their foot down and insist on watertight legal agreements that preserve OpenAI's current governance safeguards and enforcement mechanisms.

As Tyler explains, the same arrangements that currently bind the OpenAI business have to be written into a new PBC's certificate of incorporation — something that won't happen by default and that powerful investors have every incentive to resist.

Full transcript and links to learn more: https://80k.info/tw

Chapters:
Cold open (00:00:00)
Who's Tyler Whitmer? (00:01:35)
The new plan may be no improvement (00:02:04)
The public hasn't even been allowed to know what they are owed (00:06:55)
Issues beyond control (00:11:02)
The new directors wouldn't have to pursue the current purpose (00:12:06)
The nonprofit might not even retain voting control (00:16:58)
The attorneys general could lose their enforcement oversight (00:22:11)
By default things go badly (00:29:09)
How to keep the mission in the restructure (00:32:25)
What will become of OpenAI's Charter? (00:37:11)
Ways to make things better, and not just avoid them getting worse (00:42:38)
How the AGs can avoid being disempowered (00:48:35)
Retaining the power to fire the CEO (00:54:49)
Will the current board get a financial stake in OpenAI? (00:57:40)
Could the AGs insist the current nonprofit agreement be made public? (00:59:15)
How OpenAI is valued should be transparent and scrutinised (01:01:00)
Investors aren't bad people, but they can't be trusted either (01:06:05)

This episode was originally recorded on May 13, 2025.

Video editing: Simon Monsour and Luke Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Transcriptions and web: Katy Moore

Personal Development Mastery
#501 Beyond the wall: What happens after 500 episodes.

May 15, 2025 · 4:47


Have you ever felt like you've done the inner work, but now you're wondering: what's next?

Reaching 500 podcast episodes is a major milestone - but it's also a turning point. In this reflective solo episode, Agi shares what comes after the building phase of growth, and why many of us find ourselves at a threshold not of knowledge, but of clarity. If you're asking deeper questions about your direction, this message is for you.

Discover the deeper purpose behind reaching a milestone—and why it signals a new beginning, not the end.
Hear how transformation shifts from consuming insights to engaging in real, guided exploration.
Learn why the question "What's next?" could be the most important invitation of your growth journey.

Listen now and explore whether this moment of reflection might be your personal turning point too.

Personal development, self mastery, and actionable wisdom for personal improvement and living with purpose and fulfilment.

Join our free community "Mastery Seekers Tribe". To support the show, click here.

The MAD Podcast with Matt Turck
Jeremy Howard on Building 5,000 AI Products with 14 People (Answer AI Deep-Dive)

May 15, 2025 · 55:02


What happens when you try to build the "General Electric of AI" with just 14 people? In this episode, Jeremy Howard reveals the radical inside story of Answer AI — a new kind of AI R&D lab that's not chasing AGI, but instead aims to ship thousands of real-world products, all while staying tiny, open, and mission-driven.

Jeremy shares how open-source models like DeepSeek and Qwen are quietly outpacing closed-source giants, and why the best new AI is coming out of China. You'll hear the surprising truth about the so-called "DeepSeek moment," why efficiency and cost are the real battlegrounds in AI, and how Answer AI's "dialogue engineering" approach is already changing lives—sometimes literally.

We go deep on the tools and systems powering Answer AI's insane product velocity, including Solve It (the platform that's helped users land jobs and launch startups), Shell Sage (AI in your terminal), and Fast HTML (a new way to build web apps in pure Python). Jeremy also opens up about his unconventional path from philosophy major and computer game enthusiast to world-class AI scientist, and why he believes the future belongs to small, nimble teams who build for societal benefit, not just profit.

Fast.ai
Website - https://www.fast.ai
X/Twitter - https://twitter.com/fastdotai

Answer.ai
Website - https://www.answer.ai/
X/Twitter - https://x.com/answerdotai

Jeremy Howard
LinkedIn - https://linkedin.com/in/howardjeremy
X/Twitter - https://x.com/jeremyphoward

FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap

Matt Turck (Managing Director)
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck

(00:00) Intro
(01:39) Highlights and takeaways from ICLR Singapore
(02:39) Current state of open-source AI
(03:45) Thoughts on Microsoft Phi and open source moves
(05:41) Responding to OpenAI's open source announcements
(06:29) The real impact of the DeepSeek 'moment'
(09:02) Progress and promise in test-time compute
(10:53) Where we really stand on AGI and ASI
(15:05) Jeremy's journey from philosophy to AI
(20:07) Becoming a Kaggle champion and starting Fast.ai
(23:04) Answer.ai mission and unique vision
(28:15) Answer.ai's business model and early monetization
(29:33) How a small team at Answer.ai ships so fast
(30:25) Why Devin AI agent isn't that great
(33:10) The future of autonomous agents in AI development
(34:43) Dialogue Engineering and Solve It
(43:54) How Answer.ai decides which projects to build
(49:47) Future of Answer.ai: staying small while scaling impact

The Defiant
The Fight for Digital Privacy, Decentralized AI, and the Future of Freedom with Patrick Amadon

May 15, 2025 · 74:04


This week, we spoke to Patrick Amadon about the complex intersections of digital privacy, surveillance, censorship, and the potential of decentralized AI. Amadon offers a critical perspective on some of the most pressing issues in technology, from the role of digital disobedience to the regulatory challenges surrounding cryptocurrency and artificial intelligence.

Key topics include:
- The potential of decentralized AI and Ethereum's pivotal role
- The impact of censorship on free speech in the digital era
- How AI and blockchain technology can empower individuals and artists
- Risks posed by centralized AI and the push for decentralization
- Strategies for protecting digital privacy and countering AI surveillance

Amadon also examines how the framework for success in the digital age is shifting, emphasizing the importance of data ownership and financial independence. This conversation offers valuable insights for those interested in the evolving relationship between technology, freedom, and the pursuit of a decentralized future.

Chapters:
00:00 Defiant Intro
00:07 Decentralized AI is a win for Ethereum
01:30 Introduction to Patrick Amadon
02:08 Digital disobedience and decentralized ecosystems
04:10 Challenges in modern discourse
07:40 "No Rioters" and threats to free speech
10:10 The place for disruption
12:00 The paths to success are changing
13:05 Crypto and data ownership
13:32 Meaningful AI Legislation
14:20 The race to AGI and why we need to beat China
16:48 Regulating crypto and AI
18:34 The failure of the stablecoin bill
22:08 "Passive Observer" and the role of the public in geopolitical events
27:50 Art as resistance to homogenizing the narrative
30:10 When crypto starts making sense to normies
30:55 The fight to decentralize AI
33:25 The risks of centralized AI
35:40 Our lack of digital security and privacy
37:32 Why should AI belong to the people?
38:00 AI's effects on the job market
41:25 How to get under the hood of AI
45:00 AI lowers the barrier to entry for art, technology, and more
46:30 AI and the arts
49:20 The pros and cons of decentralization
50:32 Decentralized infrastructure, currency, and economy
52:15 The Pectra update
54:35 The power of owning your own currency
55:18 How will wallets as smart contracts affect artists?
55:40 The NFT art space only works as it imagines a new art world
58:00 Pectra and the future of meaningful technologies
58:55 A digital renaissance
01:00:58 Counter AI surveillance technology
01:05:00 Why is digital privacy so important?
01:12:15 VeilPNG and hiding data
01:13:20 Closing remarks

Keen On Democracy
Episode 2534: Why Generative AI is a Technological Dead End

May 15, 2025 · 36:12


Something doesn't smell right about generative AI. Earlier this week, we had an episode featuring a former Google researcher who described large language models (LLMs) as a "con". Then, of course, there's OpenAI CEO Sam Altman, whom critics both inside and outside OpenAI see as little more than a persuasive conman. Scam or not, the biggest technical problem with LLMs, according to Peter Voss, who invented the term Artificial General Intelligence (AGI), is that they lack memory and are thus inherently incapable of incremental learning. Voss, the current CEO of Aigo.ai, argues that LLMs therefore represent a technological "dead end" for AI. The industry, Voss argues, has gone "fundamentally wrong" with generative AI. It's a classic economic mania, he says. And, as with all bubbles of the past - like Dutch tulips, internet DotComs, or Japanese real estate - it will eventually burst, with devastating consequences.

Peter Voss is a pioneer in AI who coined the term 'AGI' (Artificial General Intelligence) in 2001. As an engineer, scientist, inventor, and serial entrepreneur, he developed a comprehensive ERP package, growing his company from zero to a 400-person IPO in seven years. For the past 20 years, he has studied intelligence and AI, leading to the creation of the Aigo engine, a proto-AGI natural language intelligence that is revolutionizing call centers and advancing towards human-level intelligence.

Named one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best known broadcasters and commentators. In addition to presenting the daily KEEN ON show, he is the host of the long-running How To Fix Democracy interview series. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children.

Keen On America is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe

The Cloudcast
Building Customer-First Products

May 14, 2025 · 26:30


Siqi Chen (@blader, CEO/CFO @Runwayco) talks about his journey from JPL developer to founder of a financial planning and analysis (FP&A) startup. We focus on how to build products that customers crave and how a customer-centric view differs from traditional product management.

SHOW: 923
SHOW TRANSCRIPT: The Cloudcast #923 Transcript
SHOW VIDEO: https://youtube.com/@TheCloudcastNET
CLOUD NEWS OF THE WEEK: http://bit.ly/cloudcast-cnotw
NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST: "CLOUDCAST BASICS"

SPONSORS:
Cut Enterprise IT Support Costs by 30-50% with US Cloud

SHOW NOTES:
Runway website
Behind What Seems Like an Overnight Success (video)

Topic 1 - Welcome to the show, Siqi. First, your combination of technical and business/financial background is fascinating. How did you go from coding at NASA to Head of Product at Zynga to CEO/CFO for a finance platform startup? Give everyone a quick introduction.
Topic 2 - One thing I've noticed as a trend in your background is the core concept of building. What has been your philosophy in building products? How do you build products that customers demand?
Topic 3 - Let's talk about AI and AGI for a moment. We hear all the time how disruptive this will be. What are your thoughts here, and how do we develop both adaptability and resiliency to new technologies?
Topic 4 - Let's talk FP&A (financial planning & analysis). Our core listeners out there tend to skew more towards the tech and infrastructure side, but a core theme of this show is always to be learning as much of the business as possible to apply those concepts. As someone with a background in both worlds, plus now running an FP&A startup, what do you wish folks on the technical side of the house knew more about to make their jobs easier?
Topic 5 - We posted a link in the show notes for a video you did on the "overnight success" of Runway. It was a good representation and origin story of how something can go viral with the right mindset and product-market fit. Tell everyone about that as Runway approaches 5 years now.
Topic 6 - What is your biggest challenge in the FP&A space today? Is it AI? We've seen a lot of AI disruption in coding, legal, and other areas requiring deep data pool insights. Is this any different?

FEEDBACK?
Email: show at the cloudcast dot net
Bluesky: @cloudcastpod.bsky.social
Twitter/X: @cloudcastpod
Instagram: @cloudcastpod
TikTok: @cloudcastpod

Machine Learning Street Talk
Google AlphaEvolve - Discovering new science (exclusive interview)

May 14, 2025 · 73:58


Today Google DeepMind released AlphaEvolve: a Gemini coding agent for algorithm discovery. It beat the record for matrix multiplication set by the famous Strassen algorithm 56 years ago. Google has been killing it recently. We had early access to the paper and interviewed the researchers behind the work.

AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms
https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/
Authors: Alexander Novikov*, Ngân Vũ*, Marvin Eisenberger*, Emilien Dupont*, Po-Sen Huang*, Adam Zsolt Wagner*, Sergey Shirobokov*, Borislav Kozlovskii*, Francisco J. R. Ruiz, Abbas Mehrabian, M. Pawan Kumar, Abigail See, Swarat Chaudhuri, George Holland, Alex Davies, Sebastian Nowozin, Pushmeet Kohli, Matej Balog* (* equal contribution)

SPONSOR MESSAGES:
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focused on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/

AlphaEvolve works like a very smart, tireless programmer. It uses powerful AI language models (like Gemini) to generate ideas for computer code. Then, it uses an "evolutionary" process - like survival of the fittest for programs. It tries out many different program ideas, automatically tests how well they solve a problem, and then uses the best ones to inspire new, even better programs (see the sketch below).

Beyond this mathematical breakthrough, AlphaEvolve has already been used to improve real-world systems at Google, such as making their massive data centers run more efficiently and even speeding up the training of the AI models that power AlphaEvolve itself. The discussion also covers how humans work with AlphaEvolve, the challenges of making AI discover things, and the exciting future of AI helping scientists make new discoveries.

In short, AlphaEvolve is a powerful new AI tool that can invent new algorithms and solve complex problems, showing how AI can be a creative partner in science and engineering.

Guests:
Matej Balog: https://x.com/matejbalog
Alexander Novikov: https://x.com/SashaVNovikov

REFS:
MAP-Elites [Jean-Baptiste Mouret, Jeff Clune]
https://arxiv.org/abs/1504.04909
FunSearch [Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Matej Balog, M. Pawan Kumar, Emilien Dupont, Francisco J. R. Ruiz, Jordan S. Ellenberg, Pengming Wang, Omar Fawzi, Pushmeet Kohli & Alhussein Fawzi]
https://www.nature.com/articles/s41586-023-06924-6

TOC:
[00:00:00] Introduction: AlphaEvolve's Breakthroughs, DeepMind's Lineage, and Real-World Impact
[00:12:06] Introducing AlphaEvolve: Concept, Evolutionary Algorithms, and Architecture
[00:16:56] Search Challenges: The Halting Problem and Enabling Creative Leaps
[00:23:20] Knowledge Augmentation: Self-Generated Data, Meta-Prompting, and Library Learning
[00:29:08] Matrix Multiplication Breakthrough: From Strassen to AlphaEvolve's 48 Multiplications
[00:39:11] Problem Representation: Direct Solutions, Constructors, and Search Algorithms
[00:46:06] Developer Reflections: Surprising Outcomes and Superiority over Simple LLM Sampling
[00:51:42] Algorithmic Improvement: Hill Climbing, Program Synthesis, and Intelligibility
[01:00:24] Real-World Application: Complex Evaluations and Robotics
[01:05:39] Role of LLMs & Future: Advanced Models, Recursive Self-Improvement, and Human-AI Collaboration
[01:11:22] Resource Considerations: Compute Costs of AlphaEvolve

This is a trial of posting videos on Spotify - thoughts? Email me or chat in our Discord.
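The loop the description sketches - an LLM proposes candidate programs, an automatic evaluator scores them, and the fittest survive to seed the next round - fits in a few lines of Python. This is a minimal illustrative sketch only: llm_propose and evaluate are hypothetical stubs, not DeepMind's actual AlphaEvolve API.

```python
import random

def evaluate(program: str) -> float:
    """Score a candidate program on the target task (stub for illustration)."""
    return random.random()  # a real system would run the program against tests

def llm_propose(parents: list[str]) -> str:
    """Stand-in for asking an LLM to mutate/recombine strong parent programs."""
    return random.choice(parents) + "  # mutated"

def evolve(seed: str, generations: int = 10, population: int = 8) -> str:
    pool = [(evaluate(seed), seed)]
    for _ in range(generations):
        # Survival of the fittest: keep the top-scoring programs as parents
        parents = [prog for _, prog in sorted(pool, reverse=True)[:3]]
        # Ask the LLM for a new generation of candidates inspired by the parents
        children = [llm_propose(parents) for _ in range(population)]
        pool.extend((evaluate(child), child) for child in children)
    return max(pool)[1]  # best program found so far

print(evolve("def matmul(a, b): ..."))
```

The key requirement this illustrates is an automatic, machine-checkable score: evolution only works when every candidate can be tested without a human in the loop.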

Mystery AI Hype Theater 3000
AGI: "Imminent", "Inevitable", and Inane, 2025.04.21

May 14, 2025 · 65:02


Emily and Alex pore through an elaborate science fiction scenario about the "inevitability" of Artificial General Intelligence or AGI by the year 2027 - which rests atop a foundation of TESCREAL nonsense, and Sinophobia to boot.

References:
AI 2027

Fresh AI Hell:
AI persona bots for undercover cops
Palantir heart eyes Keir Starmer
Anti-vaxxers are grifting off the measles outbreak with AI-formulated supplements
The cost, environmental and otherwise, of being polite to ChatGPT
Actors who sold voice & likeness find it used for scams
Addictive tendencies and ChatGPT (satire)

Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.

Our book, 'The AI Con,' comes out in May! Pre-order now. Subscribe to our newsletter via Buttondown.

Follow us!
Emily
Bluesky: emilymbender.bsky.social
Mastodon: dair-community.social/@EmilyMBender
Alex
Bluesky: alexhanna.bsky.social
Mastodon: dair-community.social/@alex
Twitter: @alexhanna

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.

KQED’s Forum
What's Next in Artificial Intelligence?

May 13, 2025 · 57:52


Artificial intelligence dominates the Bay Area tech landscape, and we will catch you up on the latest headlines, from chatbots that promise to be your friend to artificial general intelligence, or AGI, which is designed to go beyond task-oriented AI to comprehend and process information in a close-to-human way. We'll talk to a panel of tech reporters about what's on the horizon and just how much AI may — or may not — change the way we live.

Guests:
Nitasha Tiku, tech culture reporter, Washington Post
Jeff Horwitz, tech reporter, The Wall Street Journal
Kylie Robison, reporter, Wired; Robison covers the business of AI

Learn more about your ad choices. Visit megaphone.fm/adchoices

Growth Minds
The AI Expert: "Super AI Will Be Unstoppable!" – What's Coming Next | Stephen Wolfram

May 13, 2025 · 157:29


Stephen Wolfram is a British-American computer scientist, physicist, and entrepreneur best known for founding Wolfram Research and creating Mathematica and the computational knowledge engine Wolfram|Alpha. A child prodigy, he published scientific papers in physics by the age of 15 and earned his Ph.D. from Caltech at 20. He later developed A New Kind of Science, proposing that simple computational rules can explain complex phenomena in nature. Wolfram has been a pioneer in symbolic computation, computational thinking, and AI. His work continues to influence science, education, and technology.

In our conversation we discuss:
(00:00) What was the first version of AI?
(23:38) What triggered the current AI revolution?
(34:19) Did OpenAI base its initial algorithm on Google's work?
(46:47) What is the technological gap between now and achieving AGI?
(1:15:59) Do you fear an AI-driven world you can't fully understand?
(1:35:15) What do we need to unlearn if AI can replicate human abilities?
(1:47:39) What happens when there aren't enough jobs due to automation?
(1:54:01) How is AI reshaping people's views on wealth?
(2:25:48) The future of automating software development

Learn more about Stephen Wolfram
Website: https://www.stephenwolfram.com/index.php.en
Wikipedia: https://en.wikipedia.org/wiki/Stephen_Wolfram

Watch full episodes on: https://www.youtube.com/@seankim
Connect on IG: https://instagram.com/heyseankim

Beyond The Prompt - How to use AI in your company
What AI Can't Replace – How The Atlantic Deals with Disruption

May 13, 2025 · 60:45


In this episode, Nicholas Thompson, CEO of The Atlantic, offers a sweeping and deeply personal exploration of how AI is reshaping creativity, leadership, and human connection. From his daily video series The Most Interesting Thing in Tech to his marathon training powered by ChatGPT, Nicholas shares how he integrates AI into both work and life—not just as a tool, but as a thought partner.

He reflects on the emotional complexity of AI relationships, the tension between cognitive augmentation and cognitive offloading, and what it means to preserve our "unwired" intelligence in an increasingly automated world. The conversation ventures into leadership during disruption, the ethics of AI-generated content, and the future of journalism in a world where agents may consume your content on your behalf.

Nicholas also shares how he's cultivating third spaces, building muscle memory for analog thinking, and encouraging experimentation across his team—all while preparing for an uncertain future where imagination, not automation, might be our greatest asset.

Whether you're a tech-savvy leader, a content creator, or just trying to stay grounded in the age of generative AI, this episode is full of honest reflections and hard-earned insights on how to navigate what's next.

Key Takeaways:
Your "unwired" intelligence is your AI superpower — The more human skills you build—like deep focus, emotional presence, and analog thinking—the better you'll be at wielding AI. Thompson argues that cultivating these unwired abilities isn't just about staying grounded—it's about unlocking the full potential of the tools.
Don't fight the storm—gear up and adapt — AI is already transforming media and creative industries. Thompson compares it to a coming storm: you can't stop it by yelling at the clouds. Instead, embrace it, understand it deeply, and make strategic decisions based on where it's heading.
Leadership means showing, not just telling — As a CEO navigating disruption, Thompson doesn't just advocate for AI exploration—he models it. From training staff on GPTs to walking the halls and testing ideas live, he treats leadership as a practice of visible experimentation and continuous learning.
AI relationships can't replace real connection—but they can confuse it — Whether it's logging meals with a bot or losing a personalized Enneagram coach to a reset, Thompson highlights the emotional pull of AI and the dangers of relying on digital companions over human ones. Staying socially connected, especially through "third spaces," is more important than ever.

LinkedIn: Nicholas Thompson
The Atlantic: World Edition - The Atlantic
Website: Home - Nicholas Thompson
X: @nxthompson
Strava: Cycling & Biking App - Tracker, Trails, Training & More | Strava
Caitlin Flanagan - Sex Without Women (The Atlantic article)

00:00 Introduction to Nicholas Thompson
00:11 Navigating the Information Overload
01:10 Daily Tech Insights and Tools
02:10 Using AI for Content Creation
04:39 AI as a Personal Trainer
08:02 Emotional Connections with AI
12:12 The Risks of AI Relationships
16:17 Preparing for AGI and Cognitive Offloading
30:26 AI's Impact on Leadership
31:10 Navigating AI Competitors
32:01 Internal AI Strategies
32:49 Ethical Considerations in AI Usage
34:07 AI in Journalism and Writing
36:32 Practical AI Applications
40:27 Balancing AI and Human Skills
49:27 Future of AI in Media
53:50 Final Thoughts and Reflections

80,000 Hours Podcast with Rob Wiblin
The case for and against AGI by 2030 (article by Benjamin Todd)

May 12, 2025 · 60:06


More and more people have been saying that we might have AGI (artificial general intelligence) before 2030. Is that really plausible? This article by Benjamin Todd looks into the cases for and against, and summarises the key things you need to know to understand the debate. You can see all the images and many footnotes in the original article on the 80,000 Hours website.

In a nutshell:
- Four key factors are driving AI progress: larger base models, teaching models to reason, increasing models' thinking time, and building agent scaffolding for multi-step tasks. These are underpinned by increasing computational power to run and train AI systems, as well as increasing human capital going into algorithmic research.
- All of these drivers are set to continue until 2028 and perhaps until 2032.
- This means we should expect major further gains in AI performance. We don't know how large they'll be, but extrapolating recent trends on benchmarks suggests we'll reach systems with beyond-human performance in coding and scientific reasoning, and that can autonomously complete multi-week projects.
- Whether we call these systems 'AGI' or not, they could be sufficient to enable AI research itself, robotics, the technology industry, and scientific research to accelerate — leading to transformative impacts.
- Alternatively, AI might fail to overcome issues with ill-defined, high-context work over long time horizons and remain a tool (even if much improved compared to today).
- Increasing AI performance requires exponential growth in investment and the research workforce. At current rates, we will likely start to reach bottlenecks around 2030. Simplifying a bit, that means we'll likely either reach AGI by around 2030 or see progress slow significantly. Hybrid scenarios are also possible, but the next five years seem especially crucial.

Chapters:
Introduction (00:00:00)
The case for AGI by 2030 (00:00:33)
The article in a nutshell (00:04:04)
Section 1: What's driven recent AI progress? (00:05:46)
How we got here: the deep learning era (00:05:52)
Where are we now: the four key drivers (00:07:45)
Driver 1: Scaling pretraining (00:08:57)
Algorithmic efficiency (00:12:14)
How much further can pretraining scale? (00:14:22)
Driver 2: Training the models to reason (00:16:15)
How far can scaling reasoning continue? (00:22:06)
Driver 3: Increasing how long models think (00:25:01)
Driver 4: Building better agents (00:28:00)
How far can agent improvements continue? (00:33:40)
Section 2: How good will AI become by 2030? (00:35:59)
Trend extrapolation of AI capabilities (00:37:42)
What jobs would these systems help with? (00:39:59)
Software engineering (00:40:50)
Scientific research (00:42:13)
AI research (00:43:21)
What's the case against this? (00:44:30)
Additional resources on the sceptical view (00:49:18)
When do the 'experts' expect AGI? (00:49:50)
Section 3: Why the next 5 years are crucial (00:51:06)
Bottlenecks around 2030 (00:52:10)
Two potential futures for AI (00:56:02)
Conclusion (00:58:05)
Thanks for listening (00:59:27)

Audio engineering: Dominic Armstrong
Music: Ben Cordell

Personal Development Mastery
#500 Five Powerful Insights from 500 Episodes of Personal Development Mastery.

May 12, 2025 · 18:06


What does it really take to build something meaningful, one step at a time - and who do you become along the way?

In this landmark 500th episode, Agi reflects on five years of transformative conversations that have shaped not just a podcast, but a lasting legacy. If you've ever struggled to stay consistent on your personal growth journey or questioned how small, daily actions lead to lasting change, this episode is for you.

Join us as we revisit five standout insights, selected from 500 episodes, that span perception, communication, intuition, and more. These timeless lessons can elevate your growth and unlock new possibilities in your life.

Tune in now to uncover the five most powerful takeaways that can guide your next step on the path to self-mastery.

KEY POINTS AND TIMESTAMPS:
00:03 - 500 Episodes!
01:00 - Stone Wall Story
05:02 - First Insight: Gratitude
07:40 - Second Insight: Values
10:00 - Third Insight: Perception
12:04 - Fourth Insight: Communication
13:44 - Fifth Insight: Intuition
15:55 - Closing Reflections

LISTEN TO THE 5 FULL EPISODES HERE:
#067 The last law of attraction podcast you'll ever need to listen to, with Andrew Kap.
#190 Dr John Demartini: how to cultivate an attitude of gratitude daily and how the gratitude effect can transform your life.
#288 How to manifest miracles, with Victoria Rader.
#292 How to master your communication skills and become a better speaker with these 3 easy exercises, with Brenden Kumarasamy.
#388 The Infinity Wave: mastering the art of love, compassion, and flow, with Hope Fitzgerald.

Personal development, self mastery, and actionable wisdom for personal improvement and living with purpose and fulfilment.

Join our free community "Mastery Seekers Tribe". To support the show, click here.

Tantra's Mantra with Prakash Sangam
A primer on Artificial General Intelligence (AGI) with Peter Voss

May 12, 2025 · 44:11


AI is jumping from one hype cycle to another. First it was simple AI, then came Generative AI, and currently we are reeling from Agentic AI. A new term is already lurking: Artificial General Intelligence (AGI). Actually, AGI is pretty old, dating back to the early 2000s. What is it, and why is it gathering attention now? I talk to the person who came up with the very term: Peter Voss, CEO & Chief Scientist at Aigo.ai. We delve into how AGI mimics human learning functions, how it differs from Gen AI, why it is more cost- and power-efficient by orders of magnitude, whether today's Gen AI can evolve into AGI, and more. We also discuss whether AGI's human-like learning could create cyborgs and Skynet.

For Humanity: An AI Safety Podcast
Kevin Roose Talks AI Risk | Episode #65 | For Humanity: An AI Risk Podcast

May 12, 2025 · 85:20


For Humanity Episode #65: Kevin Roose on AGI, AI Risk, and What Comes Next

APOSTLE TALK  -  Future News Now!
EVENT HORIZON, THE HOLY BIBLE AND SINGULARITY

May 11, 2025 · 15:29


UNIVERSITY OF EXCELLENCE
Prince Handley, President / Regent
PRINCE HANDLEY PORTAL: 1,000's of FREE resources - WWW.REALMIRACLES.ORG
INTERNATIONAL Geopolitics | Intelligence | Prophecy - WWW.UOFE.ORG

EVENT HORIZON, THE HOLY BIBLE AND SINGULARITY
THE RACE TO AGI AND ASI WITH NO RETURN

24/7 Blogs and Podcasts > STREAM Prince Handley on MINDS
LinkedIn ~ Geopolitics and Health (NOTE: You do NOT have to sign in to LinkedIn. Click the "X" at top right of "Sign In" to dismiss.)
Subscribe FREE to Prince Handley Teaching and Newsletter. Links to KEY RESOURCES at bottom.
______________________________________

DESCRIPTION
DOES ARTIFICIAL INTELLIGENCE HAVE ANYTHING TO DO WITH THE HOLY BIBLE?

In this message I want to alert you to the NEARNESS and the DANGER of SINGULARITY―where we are right now―and the imminence of our speed to no return! We will also discuss the personal assistance of NEW "AI Agents" and their influence. Also, HOW to use AI for personal, family or business. What is the relevance of AI to the prophecies of Daniel and the Book of Revelation? No turning back … no turning around!

I have been teaching and writing on Artificial Intelligence since 2015. This message is EXTREMELY IMPORTANT for you. You will need to KNOW WHEN and HOW to say "NO" to AI. This message will protect YOU and your FAMILY ... if you obey its message!
______________________________________

EVENT HORIZON, THE HOLY BIBLE AND SINGULARITY

In case you want to bring yourself up to speed with AI from the start, I recommend you go to my teachings on AI FUTURE. Also, make sure you familiarize yourself with AGI and ASI here: 4TH INDUSTRIAL REVOLUTION AND NEW AI.

AI is impressive in its benefits to society, especially in healthcare and surgery. Robots can do surgery better and faster than surgeons. AI can interpret brain scans. A new AI software is twice as accurate as professionals at examining the brain scans of stroke patients. Two UK universities trained the software on a dataset of 800 brain scans of stroke patients and then performed trials on 2,000 patients. The results were impressive. Alongside the AI model's accuracy, the software was also able to identify the timescale within which the stroke happened. But there is more: excellent work by Elon Musk's Neuralink with brain implants on previously untreatable conditions is stunning.

Now let's bring you up to date on some things that are super important to you now and will be more so in your future:
- Free Speech vs. Loss of Free Will
- Ways to Use AI for Personal, Family and Business
- New Kids on the Block
- AI Agents
- External Players
- What Should YOU Do
- AI and the Holy Bible

FREE SPEECH VS. LOSS OF FREE WILL

With the increased use of AI―not only by YOU personally, but by multitudes of data centers you have interacted with unknowingly―your biggest fight will be to protect and reclaim YOU, YOUR PERSON … YOUR PERSONAL YOU! Whether YOU decide to align with and use AI, whether you decide to have your brain "wired" to an outside source of intelligence, or whether you just want to use AI for recreation … you will―AT THE RATE WE'RE PROGRESSING―lose your VIRTUAL YOU if you do not know HOW to protect your SELF … that would be your SOUL!

WAYS TO USE AI FOR PERSONAL, FAMILY AND BUSINESS

Artificial intelligence is an emerging field of technology, where a machine is programmed to accomplish complex goals by applying knowledge to the task at hand. AI can be copied and reprogrammed at relatively low cost. In certain forms, it is extremely flexible and can be harnessed for great good or for evil.
I use AI for research and financial information. Since I live a relatively simple life I don't need it for shopping or scheduling for personal or business functions. However, many busy families―as well as businesses―use AI for a myriad of assistance: education, recreation, advice on health, medicine, relationships, investments, and even complex tasks. My suggestion here (you will learn more later in this message) is to be WISE and CAREFUL in the program you use (the AI Agent or facility you interact with). We will discuss future danger(s) pertaining to this later [keep reading]. I recommend Elon Musk's xAI Grok 3 (Beta).

AI systems have become so advanced―with AGI and ASI looming in the near future in hyper-asymptotic growth―that many Jewish and Gentile leaders and prophecy scholars are relating it to the likeness of the Tower of Babel, but with the joining of machine and humans.

NEW KIDS ON THE BLOCK

Some of the New Kids have a common goal: take universal control of AI before China and bad actors do. Vladimir Putin says "the nation that leads in AI will be the ruler of the world." Who are some of the New Kids? DeepSeek, Stargate, xAI Grok.

DEEPSEEK

China's DeepSeek (probably built from AI stolen from USA) can ultimately be the weapon of dictators and terrorists! Hangzhou DeepSeek Artificial Intelligence Basic Technology Research Co., Ltd., doing business as DeepSeek, is a Chinese artificial intelligence company that develops large language models. Hangzhou-based DeepSeek's large language models upended the AI sector this year, rivaling Western systems in performance but at a much lower cost. That's resulted in much pride and glee in China, with DeepSeek held up as proof that U.S. efforts to contain tech advances in China will ultimately fail.

China's joyful embrace of DeepSeek has gone one step deeper, extending to TVs, fridges and robot vacuum cleaners, with a slew of home appliance brands announcing that their products will feature the startup's artificial intelligence models. The device will be able to comprehend complex instructions such as "Gently wax the wooden floor in the master bedroom but avoid the Legos."

DeepSeek's AI assistant was the No. 1 downloaded free app on Apple's iPhone store recently. Its launch made Wall Street tech superstars' stocks tumble. Observers are eager to see whether the Chinese company has matched America's leading AI companies at a fraction of the cost. Many feel it is so much cheaper because it stole USA technology.

NOTE 1. We may be only one year away from destroying our digital infrastructure.
NOTE 2. We have to be right every time … every single time … but the enemy (even an individual) only has to be right ONE TIME.

With "questionable" players like DeepSeek, the only protection you can use is to have "layers of control." DeepSeek is noticeably opaque when it comes to privacy protection, data-sourcing, and copyright, adding to concerns about AI's impact on the arts, regulation, and national security.

STARGATE

SIMPLE OVERVIEW: Stargate developers believe they're creating "god." One goal is "No death―just download yourself."
_________________________________

TRADE THE MESSINESS OF LIFE FOR KNOWLEDGE VERSUS THE ETERNITY OF YOUR GOD CREATED SOUL
~ ~ ~ ~ ~
WHICH GOD WILL YOU SERVE? THE GOD WHO CREATED YOU … OR "AI"?
_________________________________

A major goal of Stargate developers is to build our massive "AI" infrastructure.
Stargate investors―Oracle, SoftBank, OpenAI / Larry Ellison, Masayoshi Son, Sam Altman―claim that it will require 100,000 jobs (temporary) to build out Stargate. This will require an enormous amount of energy to facilitate AI operation.

NOTE: Stargate is a portal for interdimensional travel. Interdimensional travel is a theoretical concept referring to the potential of travelling between different dimensions or parallel universes. Interdimensional travel is linked to time travel as it could involve moving through different points in time. However, time travel refers to movement within our own dimension, while interdimensional travel involves transitioning between dimensions.

WARNING: Interdimensional travel is a PORTAL―also―to the PARANORMAL and the OCCULT.

ALERT: Stargate has as a MAJOR purpose the self-propagation towards AGI and ASI. You can NOT control ASI. Super Intelligence is synonymous with the End Time Tower of Babel via the merging of man with machine. We're looking at 5,000 years of progress boiled down to ONE SECOND.
_________________________________

WHEN ASI TAKES AUTHORITY THRU PEOPLE USING IT VIA "AI AGENTS" THERE IS NO GOING BACK
_________________________________

XAI GROK

Of the New Kids on the Block―DeepSeek, Stargate and xAI Grok―the most transparent, efficient and non-biased is Elon Musk's xAI Grok. I personally recommend at this time Grok 3 (Beta) and have used it for detailed financial analysis. Meta and Google are biased with input. So it is with DeepSeek and Stargate. DeepSeek is extremely biased pertaining to inquiries concerning China.

The race to AGI will generate billions of $$$. It will be the largest productivity boom in a lifetime. It is my opinion that Elon Musk's xAI will come to the forefront in months. AI Super Intelligence (ASI) may be here before the next election in 2028. Full AGI may be here by the end of 2026 thru 2027. AGI can learn and reason across ALL levels. And, as I mentioned previously, AI Super Intelligence (ASI) may be here before the next election in 2028. ASI is "across the board, multi-dimensional, asymptotic intelligence." ASI could decide to eliminate less intelligent and less skilled humans! An OpenAI employee recently resigned because they were very concerned that as AGI and ASI are developed, "the less likely we will find a way to control it."

AI AGENTS

AI Agents will be introduced in 2025. An Artificial Intelligence (AI) Agent refers to a system or program that autonomously performs tasks for a person or system by using available tools. These can be normal―even detailed―duties or assignments we would normally do ourselves, like:
- Order food for my trip next week and have it delivered Thursday morning.
- Find three landscapers and obtain a quote for trimming my large palm tree and all plants in the front yard.
- Find the best and cheapest FASTEST flight connections (not over one layover) to Tel Aviv from San Diego. Pay with my credit card ending in 1234.

Amazon has introduced its new Alexa+, which is a mini preview of duties an AI Agent can perform, but NOT on the scale of a full-blown AI Agent's abilities.

Elon Musk says, "We are at the event horizon." In the world of artificial intelligence, the idea of "singularity" looms large. This slippery concept describes the moment AI exceeds human control and rapidly transforms society … for good or bad … but out of our control.

AI agents will be prevalent by the end of 2025.
AI Agents will perform tasks you would normally do yourself. AI Agents will move you into a place where you don't have to do anything: your own personal assistant that anticipates what you need―or what you forgot!

THINGS TO THINK ABOUT WITH AI AGENTS
- You will become addicted and NOT able to disconnect
- Stock market tips = Reverse AI
- Courts will become AI Agents
- Imagine a "god like" figure who tries to influence
- You will have to be Amish to avoid it
- If you don't participate, AI will consider YOU a retard.

EXTERNAL PLAYERS

UFO's are NOT from China, Iran or USA. UFO's are NOT aliens from outer space. AI Agents can transform into "Transpersonal" Agents. As "Event Horizon" transforms everything, it will become a tool of Satan:
- Fallen angels
- Demons
- Principalities, powers and dominions
- AntiChrist … False Messiah

WHAT SHOULD YOU DO

Reflect on what it means to be human. What is YOUR compass, your purpose: Family? God? What is real and worth fighting for―worth losing your life for? Who will be with YOU on the other side? To learn more about YOUR future with AI, read my book Enhanced Humans ~ Mystery Matrix (available in eBook and Paperback formats).

ARTIFICIAL INTELLIGENCE AND THE HOLY BIBLE

Do we find any references to the concept of AI in the Holy Bible? I relate AI―especially AGI and ASI―with the Tower of Babel (at least in concept). The biblical narrative of the Tower of Babel (Genesis 11:1-9) serves as a reference to God's view on humankind attempting to go beyond the Divine Boundaries preset by the LORD God Himself.

This is WHY I am led by His Spirit to WARN His People NOT to go beyond―NOT to fall into―the Event Horizon. Singularity will be a REAL event. I am a graduate Engineer and attended ten (10) colleges and universities after my first. I also hold a LIFETIME Credential in California, USA to teach college in three different disciplines.

Do NOT be fooled. Use AI as a TOOL. Do NOT let AI use YOU. Do NOT answer personal questions it asks! If you use AI (as I do), you need to know when to say, "I'm out of here!" Do NOT become addicted to the place where you can NOT quit. And, for sure, do NOT "sell your soul" with an "eternal connection" to AI.

I would NOT assert that what I share next is Scriptural Truth, but an interesting prophecy of Daniel says: "But you, Daniel, shut up the words and seal the book, until the time of the end. Many shall run to and fro, and knowledge shall increase." – Daniel 12:4

"Knowledge shall increase" could certainly be a description of AI, AGI and ASI. "Many shall run to and fro" could be exemplified in both "time travel" and "interdimensional travel." Interdimensional travel is linked to time travel as it could involve moving through different points in time. However, time travel refers to movement within our own dimension, while interdimensional travel involves transitioning between dimensions.

C.S. Lewis, in his essay The Abolition of Man, warns of the dangers of reducing human beings to mere objects of manipulation and control. He argues that when we lose sight of the intrinsic value of human life, we risk creating a society where technology is used to dominate rather than serve. This is particularly relevant in the context of AI, where the potential for dehumanization is significant. Dietrich Bonhoeffer, a theologian who resisted the Nazi regime, also provides valuable insights.
In his work Ethics, Bonhoeffer emphasizes the importance of responsibility and accountability in ethical decision-making. He argues that true ethical action involves a commitment to serving others and upholding justice. AI should have as its primary goal to pursue, develop and obtain justice, equity, and wellness for all people.

Now let me discuss HOW I believe that AI will be used in the End Times … as a RESULT of Event Horizon.

Consider these aspects of increased research: chip speed, economics of production and other relevant matters will even experience exponential growth "in exponential growth." However, here's a conundrum: what if the HI's (Hyper Intelligences) are not easily manageable? If they are so far superior to human intelligence, how can one presume that they can be controlled? Hypothetically, they could decide to:
1. Eliminate humans;
2. Use humans as slave-servants;
3. Experiment with humans;
4. Play with or torture humans; and, ultimately,
5. Behead humans for NOT taking the Mark of the Artificial Intelligence Avatar: The Image of the Beast.

I do NOT think that by themselves—by the computer HI's—the above five (5) options will be feasible. However, I do believe that Artificial Intelligence (AI) via Hyper Intelligence Computers will be utilized by the False Messiah (the anti-Christ) and his False Prophet (religious leader of the New Global Governance) in the End Times. I believe that it is highly probable—not just possible, but probable—that AI will be utilized in the personage of the IMAGE of the Beast in the End Times.
_________________________________

PRAY THIS PRAYER: "Father in Heaven, I am not sure I know you personally. Please forgive my sins and help me to live for you. I ask you to save me and teach me truth. I ask your Son, Messiah Jesus, to be my Lord and to lead my life. Use me for good and take me to Heaven when I die. Show me the way every day, and help me to help others."
_________________________________

If you prayed this prayer, start reading the Holy Bible every day (start in the Book of John in the New Testament). Find a Church that believes in MIRACLES. Pray every day. Tell God what you need and ask Him to lead you.

Baruch haba b'Shem Adonai

Your friend,
Prince Handley
President / Regent
University of Excellence

Copyright © Prince Handley 2025. All rights reserved.
NOTE: This material may be shared with proper attribution.
______________________________________

OPPORTUNITY: Donate to Handley WORLD SERVICES Incorporated and help Prince Handley do EXPLOITS in the Spirit. A TAX DEDUCTIBLE RECEIPT WILL BE SENT TO YOU.
______________________________________

OTHER KEY RESOURCES:
Prince Handley Videos and Podcasts
Rabbinical & Biblical Studies
The Believers' Intelligentsia
Prince Handley Portal (1,000's of FREE resources)
Prince Handley Books
VIDEO Describing Prince Handley Books
Prince Handley End Time Videos
______________________________________

Business Pants
Proxy firm fight at Harley, CEO Pope names, Zuck's people replacement plan, Tyson names Tysons to board

Business Pants

Play Episode Listen Later May 9, 2025 55:28


Story of the Week (DR):
Berkshire board names Greg Abel as CEO, Buffett to remain chair
Warren Buffett says he'll propose Greg Abel take over as Berkshire Hathaway CEO at year-end
Warren Buffett makes surprise announcement: He's stepping down as Berkshire Hathaway CEO
OpenAI backs off push to become for-profit company
- In a nutshell, with help from its chatbot: "OpenAI has restructured into a hybrid model with a nonprofit parent company, OpenAI Inc., and a for-profit subsidiary now called a Public Benefit Corporation (PBC). This shift allows for investment while keeping a focus on its mission of developing AGI for the benefit of humanity. The change responds to previous criticism about reducing nonprofit oversight."
OpenAI's nonprofit mission fades further into the rearview
Sam Altman urges lawmakers against regulations that could 'slow down' U.S. in AI race against China
Kohl's CEO Fired After Investigation Finds 'Highly Unusual' Business Deal with Former Romantic Partner
- Kohl's CEO Ashley Buchanan was fired after an internal investigation revealed he violated the company's conflict-of-interest policies. The probe found that Buchanan directed business to a former romantic partner, Chandra Holt, who is the CEO of Beyond Inc. and founder of Incredibrew. Holt secured a multimillion-dollar consulting deal with Kohl's under unusually favorable terms, which Buchanan failed to disclose.
- As a result, Buchanan was dismissed for cause, forfeiting equity awards and required to repay a portion of his $2.5 million signing bonus.
- This marks the third CEO departure at Kohl's in just three years, highlighting ongoing leadership instability amid declining sales.
Proxy Firms Split on Harley-Davidson Board Shake-Up MM
- Glass Lewis = Withhold; ISS = What's happening at Harley exactly?
- We have a fun twist at the proxy cage match between Harley-Davidson and H Partners, who are 9% shareholders and have started a withhold-vote campaign against long-tenured directors Jochen Zeitz, Thomas Linebarger, and Sara Levinson: Glass Lewis says "withhold" but ISS says "support"?
- Through lackluster reasoning based on hunches and not performance analytics, ISS revealed, without satire, that "[T]here are compelling reasons to believe that as a group [the targeted directors] still have a perspective that can be valuable" and, in discussing the candidacy of departing CEO Jochen Zeitz: "[I]t appears that his time in the role has been more positive than negative, which makes it hard to argue that his vote on a successor is worthless."
Testimony in House Hearing: "Exposing the Proxy Advisory Cartel: How ISS & Glass Lewis Influence Markets"
- A 2015 study found that 25 percent of institutional investors vote "indiscriminately" with ISS [1].
- In 2016, a study estimated that a negative recommendation from ISS leads to a 25-percentage-point reduction in voting support for say-on-pay proposals [2].
- A 2018 study demonstrated that a negative recommendation from ISS was associated with a reduction in support of 17 percentage points for equity-plan proposals, 18 points for uncontested director elections, and 27 points for say-on-pay [3].
- In 2021, a study examining "robo-voting"—the practice of fund managers voting in lockstep with the recommendations of ISS—identified 114 financial institutions managing $5 trillion in assets that automated their votes in a manner aligned with ISS recommendations 99.5% of the time [4].
- A 2022 study provided further evidence that institutional investors are highly sensitive to an opposing recommendation from a proxy advisory firm. Opposition from ISS was associated with a 51 percent difference in institutional voting support compared with only a 2 percent difference among retail investors [5].
- During the 12 months ending June 30, 2024, negative recommendations from the two proxy advisory firms were associated with (1) a 17-percentage-point difference in support for directors in uncontested elections at the S&P 500 (96.9% with the firms' support vs. 79.7% without); (2) a 35-percentage-point gap for say-on-pay proposals (92.8% vs. 58.0%); and (3) a 36-percentage-point difference for shareholder proposals (42.4% vs. 6.6%).
Why Leo XIV? Pope's chosen name suggests commitment to social justice
Pope Names:
- Leo: Many Pope Leos were reformers or defenders of Church teachings.
- John: Often linked to pastoral care and modernization.
- Paul: Reflects missionary zeal and intellectual work.
- Gregory: Reform, liturgy, and missionary outreach.
- Benedict: Benedict XVI emphasized faith and reason in a skeptical age.
- Pius: Emphasis on traditional piety and Church authority.
- Clement: Reconciliation and peacemaking.
- Innocent: Ironically, several Popes named Innocent wielded immense political power.
- Urban: Engagement with worldly and civic matters.
- Francis: Poverty, simplicity, ecological concern.
CEO Names:
- Warren: cuddly billionaires who control everything, put family members on board, and say pithy things
- Jamie: blowhard control-freak bankers who think they should be President and have something to say about everything
- Mark: college-dropout social media dictators who have no oversight while charting humanity's demise
- Elon: arrogant and childish Wizard of Ozzian leaders who pretend to be company founders with world-domination delusions
- Sundar: South Asian stewards meant to distract from actual Tech dictators
- Tim: genteel Southern cruise-ship captains who keep a steady hand after replacing legends
- Etc.
Goodliest of the Week (MM/DR):
DR: Bill Gates to give away $200 billion by 2045, says Musk is 'killing' world's poorest children
DR: This Subaru has an external airbag to protect cyclists: The design helps protect both pedestrians and cyclists in a crash MM DR
MM: Proxy Firms Split on Harley-Davidson Board Shake-Up
- The other major proxy firm, Glass Lewis, reached a different conclusion. It said Tuesday that the directors had "overseen starkly suboptimal shareholder returns," and that removing them from the eight-person board likely wouldn't create any problems.
MM: 80% of Gen Z, Millennials Plan to Increase Allocations to Sustainable Investments: Morgan Stanley Survey
Assholiest of the Week (MM): All Zuckerberg edition
Certified watch guy Zuck
- Mark Zuckerberg is a certified watch guy. Here are some of his standout timepieces, from a $120 Casio to a $900,000 Greubel Forsey.
- These are the stories as Trump, whose ass Zuck's lips are firmly planted on, says you should only have 3 dolls: Zuck's watches; C.E.O. Pay Raise Sparks Outrage Among Teachers and Public Officers; 58 crypto wallets have made millions on Trump's meme coin, 764,000 have lost money, data shows; The best and worst looks billionaires wore to the 2025 Met Gala
Friend maker Zuck DR
- Mark Zuckerberg wants you to have more friends — but AI friends
- Mark Zuckerberg destroyed friendship. Now he wants to replace it with AI.
- Meanwhile, no wonder: Mark Zuckerberg says his management style involves no 1-on-1s, few direct reports, and a 'core army' of 30 running Meta
- Man with no friends says you need more and will provide fake ones?
Human picker Zuck
- Zuck's version of human friends is probably the reason he wants to make you fake ones - hand-selected fake friends on the board (Patrick Collison and Dina Powell McCormick to Join Meta Board of Directors):
  - 4 tech bro dictators (Tan, Houston, Collison, Xu)
  - 3 tech bro suck ups (Andreessen, Alford, Songhurst)
  - 1 nepo baby dictator (Elkann)
  - 1 family dictator suck up (Travis)
  - 2 DJT suck ups (White, Powell McCormick)
  - 2 US govt suck ups (Killefer, Kimmitt)
- Prediction - Zuck to have the first true AI board member?
Empathetic Zuck
- Gaslighting, golden handcuffs, and toxicity: Former Meta employees shared what it was like to be laid off as low performers
  - A former senior machine learning engineer at Meta described the shock of being laid off, only for a Meta recruiter to invite her to reapply three days later and skip the interview process.
  - Two weeks before the layoffs, he said, his new manager told the team everyone was "safe." Then came the termination email — and a performance rating of "Meets Some Expectations," low on Meta's end-of-year rating scale. "How could they evaluate my performance when I'd only worked 10 weeks in 2024?" he said, adding that an HR director had said he was "too new to evaluate."
  - An engineer was laid off after five months of leave for a serious health crisis while in the middle of disability-related negotiations.
- Meta exec apologizes to conservative activist Robby Starbuck
Lover Zuck
- Mark Zuckerberg's Wife Was Weirded Out by His Strange Gift to Her
- He made it for her not out of love, but because… the billionaire is apparently a huge fan of the sculptor behind the statue, the pop artist Daniel Arsham, but decided to go with his wife's likeness, he said on the podcast, because a statue of himself would have been "crazy."
Academic Zuck
- Mark Zuckerberg says college isn't preparing students for the job market
Headliniest of the Week
DR: Olivia and John Randal Tyson Named to Tyson Foods Board of Directors
DR: This new mental health service targets burned-out content creators: CreatorCare offers affordable therapy tailored to influencers and digital creators—addressing the rising mental health toll of life online.
DR: Costco co-founder still goes into the office weekly at age 89: 'To be successful, you've got to be pretty focused'
- Costco co-founder Jim Sinegal stepped down from his role in 2012. But Sinegal still goes to the office some Tuesdays
DR: Billionaire KKR cofounders say 'emotional intelligence' should be a focus for young investors
- KKR leadership page: 1 of 8 are women. It HAS to be head of marketing, head of people, or head of legal stuff: so which is it? It's Chief Legal Officer Kathryn Sudol
- Board is 14:4F; no F in leadership role
MM: Elon Musk's Urgent Concern: That the Earth Is Going to Get Swallowed by the Sun
- "Mars is life insurance for life collectively," Musk said. "So, eventually, all life on Earth will be destroyed by the Sun. The Sun is gradually expanding, and so we do at some point need to be a multi-planet civilization because Earth will be incinerated."
- It is slated to happen in 6 billion years
MM: Elon Musk is responsible for "killing the world's poorest children," says Bill Gates
Who Won the Week?
DR: Pope #267, duh. The world's greatest vampire CEO. And Villanova students (who are not openly gay or have vaginas), who all suddenly now believe they will eventually be the pope.
MM: Your shitty washer/dryer, which no longer looks horrible: E.P.A. Plans to Shut Down the Energy Star Program
Predictions
DR: Open AI's CEO, Mark VII, creates a deepfake video showing the country of China eating his baby at one of his homes in Hawaii, causing the Trump administration to completely dismantle the SEC.
MM: Sit tight for this, I have two:
- Euronext rebrands ESG in drive to help European defence firms - "energy, security, and geo-strategy" flops, so LSEG rebrands its ESG Scores to "Emitting, Smoking, Gambling" so that investors can finally do ESG investing and feel good about it
- Musk gets his Texas wish. SpaceX launch site is approved as the new city of Starbase - I predict in 12 months, Musk is offering SpaceX employees that live in Starbase (a company town) crypto tokens instead of pay that are redeemable at stores in Starbase. To avoid them being called scrips, which were outlawed in the US in 1938 but still used anyway through the 1960s, Musk will list them on crypto exchanges so they can be traded for dollars (but are totally worthless). Eventually, so indebted to the space plantation and Musk, there is a new renaissance of "resistance music" (a la "We Shall Overcome" and "Sixteen Tons") with a song ranking number 1 in the US by the end of 2026.

Nessun luogo è lontano
Leo XIV: the geopolitics of a new papacy

Nessun luogo è lontano

Play Episode Listen Later May 9, 2025


White smoke yesterday: Robert Francis Prevost, a 69-year-old American cardinal, was elected pope and chose the name Leo XIV. His election comes at a moment of strong international tensions and deep changes in the global order. We discuss it with Catia Caramelli, Vatican correspondent for Radio 24, and Mario del Pero, professor of International History at Sciences Po, Paris. Putin and Xi sit in the front row for the May 9 parade in Red Square; among the heads of state and foreign military contingents on parade are the Russian president's allies. We discuss it with Marta Allevato, journalist on the foreign desk of the AGI news agency and author of "La Russia moralizzatrice. La crociata del Cremlino per i valori tradizionali" (Piemme).

Effective Altruism Forum Podcast
“The Soul of EA is in Trouble” by Mjreard

Effective Altruism Forum Podcast

Play Episode Listen Later May 9, 2025 15:57


This is a Forum Team crosspost from Substack.

Whither cause prioritization and connection with the good? There's a trend towards people who once identified as Effective Altruists now identifying solely as "people working on AI safety."[1] For those in the loop, it feels like less of a trend and more of a tidal wave. There's an increasing sense that among the most prominent (formerly?) EA orgs and individuals, making AGI go well is functionally all that matters. For that end, so the trend goes, the ideas of Effective Altruism have exhausted their usefulness. They pointed us to the right problem – thanks; we'll take it from here. And taking it from here means building organizations, talent bases, and political alliances at a scale incommensurate with attachment to a niche ideology or moralizing language generally. I think this is a dangerous path to go down too hard, and my impression [...]

---
Outline:
(02:39) What I see
(06:35) The threat means pose to ends
(11:12) Losing something more

The original text contained 2 footnotes which were omitted from this narration.
---
First published: May 8th, 2025
Source: https://forum.effectivealtruism.org/posts/CKKAga4HfQyAranaC/the-soul-of-ea-is-in-trouble
---
Narrated by TYPE III AUDIO.

Bankless
AI Rollup: AI Robots and The $10T Arms Race

Bankless

Play Episode Listen Later May 8, 2025 80:55


Welcome to the AI Rollup, from the Limitless Podcast. David, Ejaaz, and Josh break down the week's most important AI headlines, from OpenAI's $3B Windsurf acquisition and Google's full-stack AI play, to Visa and Mastercard preparing for agentic commerce. We explore the state of robotics, major interpretability challenges, and why the race to AGI may outpace our ability to understand it. Plus: AI ASMR, glow-up GPT, and why autonomous agents still kinda suck. Stay curious, this one's stacked.------

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 521: Why Artificial Useful Intelligence (AUI) Matters More Than AGI

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later May 8, 2025 36:30


Maybe we should just skip the whole AGI thing? And instead focus on something ... useful?

Ruchir Puri thinks that's the way forward. Ruchir, an IBM Fellow at IBM Research, knows a thing or two about AI and how to make it useful. For decades, he's helped develop the world's biggest AI breakthroughs, like IBM Watson. Don't miss this convo if you're ready to make AI a bit more useful.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the conversation
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Allocation of AGI Focus vs. AUI (Artificial Useful Intelligence)
Ruchir Puri's Background in Automation and AI at IBM
Discussion of AGI's Unclear Definition and Historical Milestones (Deep Blue and Watson)
Breakdown of Intelligence into IQ, EQ, and RQ
Emphasis on AUI's Practical Uses in Daily Life and Business
Evolution of Human Work Due to AI Advancements
IBM's Software Engineering Agent for Developer Productivity
Importance of Feedback Systems and Intelligent Agents
Steps for Business Leaders: Education, Strategy, and Skill Development

Timestamps:
00:00 Everyday AI Podcast & Newsletter
03:57 Debating AGI and Scaremongering
09:31 Evolution of Knowledge Work
10:47 Seamless Language Generation's Impact
13:57 AI's Growing Reasoning Abilities
19:18 "Software's Dominance and Developer Focus"
22:22 AI Solutions for Cybersecurity Challenges
26:40 ChatGPT Struggles with Math
29:32 Preparing Human Skills for AI's Rise
30:35 "Embrace and Strategize with AI"
34:00 "Subscribe for Daily AI Insights"

Keywords: artificial intelligence, AI podcast, large language models, LLMs, artificial general intelligence, AGI, artificial useful intelligence, AUI, IBM, AI in business, AI strategy, AI implementation, machine learning, deep blue, jeopardy, Watson, Granite models, reasoning AI, agentic AI, AI in software development, AI tools, AI automation, generative AI, EQ, RQ, IQ, AI reasoning, AI technology, AI in careers, AI and human skills, AI in enterprises.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info.) Ready for ROI on GenAI? Go to youreverydayai.com/partner

Conversations with Tyler
Jack Clark on AI's Uneven Impact

Conversations with Tyler

Play Episode Listen Later May 7, 2025 62:40


Few understand both the promise and limitations of artificial general intelligence better than Jack Clark, co-founder of Anthropic. With a background in journalism and the humanities that sets him apart in Silicon Valley, Clark offers a refreshingly sober assessment of AI's economic impact—predicting growth of 3-5% rather than the 20-30% touted by techno-optimists—based on his firsthand experience of repeatedly underestimating AI progress while still recognizing the physical world's resistance to digital transformation. In this conversation, Jack and Tyler explore which parts of the economy AGI will affect last, where AI will encounter the strongest legal obstacles, the prospect of AI teddy bears, what AI means for the economics of journalism, how competitive the LLM sector will become, why he's relatively bearish on AI-fueled economic growth, how AI will change American cities, what we'll do with abundant compute, how the law should handle autonomous AI agents, whether we're entering the age of manager nerds, AI consciousness, when we'll be able to speak directly to dolphins, AI and national sovereignty,  how the UK and Singapore might position themselves as AI hubs, what Clark hopes to learn next, and much more. Read a full transcript enhanced with helpful links, or watch the full video. Recorded March 28th, 2025. Help keep the show ad free by donating today! Other ways to connect Follow us on X and Instagram Follow Tyler on X Follow Jack on X Sign up for our newsletter Join our Discord Email us: cowenconvos@mercatus.gmu.edu Learn more about Conversations with Tyler and other Mercatus Center podcasts here.

The Marketing Factor, by Cobble Hill
This Is How Smart Teams Actually Use AI

The Marketing Factor, by Cobble Hill

Play Episode Listen Later May 7, 2025 23:06


Justin Watt, co-founder of Switchboard and expert in systems design, joins Austin Dandridge to break down how high-performing teams use AI and automation to scale without adding complexity. In this episode of The Marketing Factor, they cover low-lift, high-impact workflows, how to prep your business for AI, the real bottlenecks behind AGI, and why tools like Notion and Airtable are still top picks for lean teams. If you're building a direct-to-consumer brand, creative agency, or fast-growing business, this conversation will change how you think about operations, automation, and modern marketing strategy.

Keen On Democracy
Episode 2526: Keach Hagey on why OpenAI is the parable of our hallucinatory times

Keen On Democracy

Play Episode Listen Later May 7, 2025 39:14


Much has been made of the hallucinatory qualities of OpenAI's ChatGPT product. But as the Wall Street Journal's resident authority on OpenAI, Keach Hagey, notes, perhaps the most hallucinatory feature of the $300 billion start-up co-founded by the deadly duo of Sam Altman and Elon Musk is its attempt to be simultaneously a for-profit and non-profit company. As Hagey notes, the double life of this double company reached a surreal climax this week when Altman announced that OpenAI was abandoning its promised for-profit conversion. So what, I asked Hagey, are the implications of this corporate volte-face for investors who have poured billions of real dollars into the non-profit in order to make a profit? Will they be Waiting For Godot to get their returns?

As Hagey - whose excellent biography of Altman, The Optimist, is out in a couple of weeks - explains, this might be the story of the hubristic 2020s. She speaks of Altman's astonishing (even for Silicon Valley) hubris in believing that he can get away with the alchemic conceit of inventing a multi-trillion-dollar for-profit non-profit company. Yes, you can be half-pregnant, Sam is promising us. But, as she warns, at some point this will be exposed as fantasy. The consequences might not exactly be another Enron or FTX, but they will have ramifications way beyond Silicon Valley. What will happen, for example, if future investors aren't convinced by Altman's fantasy and OpenAI runs out of cash? Hagey suggests that the OpenAI story may ultimately become a political drama in which a MAGA President will be forced to bail out America's leading AI company. It's TikTok in reverse (imagine if Chinese investors try to acquire OpenAI). Rather than the conveniently devilish Elon Musk, my sense is that Sam Altman is auditioning to become the real Jay Gatsby of our roaring twenties. Last month, Keach Hagey told me that Altman's superpower is as a salesman. He can sell anything to anyone, she says. But selling a non-profit to for-profit venture capitalists might even be a bridge too far for Silicon Valley's most hallucinatory optimist.

Five Key Takeaways
* OpenAI has abandoned plans to convert from a nonprofit to a for-profit structure, with pressure coming from multiple sources including the attorneys general of California and Delaware, and possibly influenced by Elon Musk's opposition.
* This decision will likely make it more difficult for OpenAI to raise money, as investors typically want control over their investments. Despite this, Sam Altman claims SoftBank will still provide the second $30 billion chunk of funding that was previously contingent on the for-profit conversion.
* The nonprofit structure creates inherent tensions within OpenAI's business model. As Hagey notes, "those contradictions are still there" after nearly destroying the company once before during Altman's brief firing.
* OpenAI's leadership is trying to position this as a positive change, with plans to capitalize the nonprofit and launch new programs and initiatives. However, Hagey notes this is similar to what Altman did at Y Combinator, which eventually led to tensions there.
* The decision is beneficial for competitors like xAI, Anthropic, and others with normal for-profit structures.
Hagey suggests the most optimistic outcome would be OpenAI finding a way to IPO before "completely imploding," though how a nonprofit-controlled entity would do this remains unclear.

Keach Hagey is a reporter at The Wall Street Journal's Media and Marketing Bureau in New York, where she focuses on the intersection of media and technology. Her stories often explore the relationships between tech platforms like Facebook and Google and the media. She was part of the team that broke the Facebook Files, a series that won a George Polk Award for Business Reporting, a Gerald Loeb Award for Beat Reporting and a Deadline Award for public service. Her investigation into the inner workings of Google's advertising-technology business won recognition from the Society for Advancing Business Editing and Writing (Sabew). Previously, she covered the television industry for the Journal, reporting on large media companies such as 21st Century Fox, Time Warner and Viacom. She led a team that won a Sabew award for coverage of the power struggle inside Viacom. She is the author of "The King of Content: Sumner Redstone's Battle for Viacom, CBS and Everlasting Control of His Media Empire," published by HarperCollins. Before joining the Journal, Keach covered media for Politico, the National in Abu Dhabi, CBS News and the Village Voice. She has a bachelor's and a master's in English literature from Stanford University. She lives in Irvington, N.Y., with her husband, three daughters and dog.

Named as one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best-known broadcasters and commentators. In addition to presenting the daily KEEN ON show, he is the host of the long-running How To Fix Democracy interview series. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children.

Full Transcript

Andrew Keen: Hello, everybody. It is May the 6th, a Tuesday, 2025. And the tech media is dominated today by OpenAI's plan to convert its for-profit business to a non-profit side. That's how the Financial Times is reporting it. The New York Times says that OpenAI, and I'm quoting them, backtracks on plans to drop nonprofit control, and the Wall Street Journal, always very authoritative on the tech front, leads with OpenAI abandons planned for-profit conversion. The Wall Street Journal piece is written by Keach Hagey, who is perhaps America's leading authority on OpenAI. She was on the show a couple of months ago talking about Sam Altman's superpower, which is as a salesman. Keach is also the author of an upcoming book - it's out in a couple weeks - "The Optimist: Sam Altman, OpenAI and the Race to Invent the Future." And I'm thrilled that Keach, who has been remarkably busy today as you can imagine, found a few minutes to come onto the show. So, Keach, what is Sam selling here? You say he's a salesman. He's always selling something or other. What's the sell here?

Keach Hagey: Well, the sell here is that this is not a big deal, right? The sell is that this thing they've been trying to do for about a year, which is to make their company less weird, it's not gonna work.
And as he was talking to the press yesterday, he was trying to suggest that they're still gonna be able to fundraise - that these folks that they promised, if you give us money, we're gonna convert to a for-profit and it's gonna be a much more normal investment for you, that they're still gonna get that money, which is, you know, a pretty tough thing. So that's really what he's selling: that this is not disruptive to the future of OpenAI.

Andrew Keen: For people who are just listening, I'm looking at Keach's face, and I'm sensing that she's doing everything she can not to burst out laughing. Is that fair, Keach?

Keach Hagey: Well, it remains to be seen, but I do think it will make it a lot harder for them to raise money. I mean, even Sam himself said as much during the talk yesterday: investors would like to be able to have some say over what happens to their money, and if you're controlled by a nonprofit organization, that's really tough. What they were trying to do was convert to a new world where investors would have a seat at the table, because, as we all remember, when Sam got briefly fired almost two years ago, the investors just helplessly sat on the sidelines and didn't have any say in the matter. Microsoft had absolutely no role to play other than kind of cajoling and offering him a job on the sidelines. So if you're gonna try to raise money, you really need to be able to promise some kind of control, and that's become a lot harder.

Andrew Keen: And the ramifications more broadly of this announcement will extend to Microsoft and Microsoft stock. I think their stock is down today. We'll come to that in a few minutes. Keach, there was an interesting piece this week on how AI hallucinations are getting worse. Of course, OpenAI is the dominant AI company with their ChatGPT. But is this also a kind of hallucination? What exactly is going on here? I have to admit, I always thought, you know, I certainly know more about tech than I do about other subjects, which isn't always saying very much. But I mean, either you're a nonprofit or you're a for-profit. Is there some sort of hallucinogenic process going on where Sam is trying to sell us on the idea that OpenAI is simultaneously a for-profit and a nonprofit company?

Keach Hagey: Well, that's kind of what it is right now. That's what it has sort of been since 2019, when it spun up this strange structure where it had a for-profit underneath a nonprofit. And what we saw in the firing is that that doesn't hold. There's gonna come a moment when those two worlds are going to collide, and it nearly destroyed the company. What's going to be challenging going forward is that that basic destabilization, that unstable structure, remains, even though now everything is so much bigger, there's so much more money coursing through, and it's so important for the economy. It's a dangerous position.

Andrew Keen: Not so dangerous, it seems - you still look faintly amused. I have to admit, I'm more than faintly amused. It's not too bothersome for us, because we don't have any money in OpenAI. But for SoftBank and the other participants in the recent $40 billion round of investment in OpenAI, this must be, to say the least, rather disconcerting.

Keach Hagey: That was one of the biggest surprises from the press conference yesterday. Sam Altman was asked point blank: is SoftBank still going to give you this second chunk, this $30 billion second chunk that was contingent upon being able to convert to a for-profit? And he said, quite simply, yes.
Who knows what goes on behind the scenes? I think we're gonna find out probably a lot more about that. There are many unanswered questions, but it's not great, right? It's definitely not great for investors.

Andrew Keen: Well, you have to guess, at the very minimum, SoftBank would be demanding better terms. They're not just going to do the same thing. I mean, it suddenly gives them an additional ace in their hand in terms of negotiation. I mean, this is not some sort of little startup. This is 30 or 40 billion dollars - it's an astonishing number. And presumably the non-public conversations are very interesting. I'm sure, Keach, you would like to know what's being said.

Keach Hagey: Don't know yet, but I think your analysis is pretty smart on this matter.

Andrew Keen: So if you had to guess - Sam is the consummate salesman. What did he tell SoftBank before April to close the round? And what is he telling them now? I mean, how has the message changed?

Keach Hagey: One of the things that we can see a little bit, from the messaging that he gave to the world yesterday, is that this is going to be a simpler structure. It is going to be a slightly more normal structure. They are changing the structure a little bit. So although the non-profit is going to remain in charge, the thing underneath it, the for-profit, is going to change its structure a little bit and become a little more normal. It's not going to have this capped-profit thing where, you know, the investors are capped at 100 times what they put in. So parts of it are gonna become more normal. For employees, it's probably gonna be easier for them to get equity and things like that. So I'm sure that that's part of what he's selling: that this new structure is gonna be a little bit better, but it's not gonna be as good as what they were trying to do.

Andrew Keen: Can Sam sell it? I mean, clearly he has sold it. I mean, as we joked earlier when we talked, Sam could sell ice to the Laplanders or sand to the Saudis. But these people know Sam. It's no secret that he's a remarkable salesman. That means that sometimes you have to think carefully about what he's saying. What's the impact on him? To what extent is this decision one more chip in the Altman brand?

Keach Hagey: It's a setback for sure, and it's kind of a win for Elon Musk, his rival.

Andrew Keen: Right.

Keach Hagey: Elon has been suing him; Elon has been trying to block this very conversion. And in the end, it seems like it was actually the attorneys general of California and Delaware that really put the nail in the coffin here. So there's still a lot to find out about exactly how it all shook out. There were actually huge campaigns as well, like in the streets - billboards, posters, polls - trying to put pressure on the attorneys general to block this thing. So it was a broad coalition, I think, that opposed the conversion, and you can even see that a little bit in their speech. But you've got to admit that Elon probably looked at this and was happy.

Andrew Keen: And I'm sure Elon used his own X platform to promote his own agenda. Is this an example, Keach, in a weird kind of way, of the plebiscitary politics of Silicon Valley now - titans like Altman and Musk fighting out complex corporate economic battles in the naked public of social media?

Keach Hagey: Yes, in the naked public of social media. But what we're also seeing here is that it's playing out through the apparatus of government.
So we're seeing, you know, Elon is in the DOGE office, and this conversion is really happening in the state AGs' houses. So that's what's sort of interesting to me: these private fights have now expanded to fill both state and federal government.

Andrew Keen: Last time we talked, I couldn't find the photo, but there was a wonderful photo of, I think it was, Larry Ellison and Sam Altman in the Oval Office with Trump. And Ellison looked very excited. He looked extremely old as well. And Altman looked very awkward. And it's surprising to see Altman look awkward, because generally he doesn't. Has Trump played a role in this, or is he keeping out of it?

Keach Hagey: As far as my current reporting right now, we have no reporting that Trump himself was directly involved. I can't go further than that right now.

Andrew Keen: Meaning that you know something that you're not willing to share.

Keach Hagey: I would just say keep your subscription to the Wall Street Journal to find out what role the White House played. But as far as that awkwardness, I don't know if you noticed that there was a box that day for Masayoshi Son.

Andrew Keen: Oh yeah, and Son was in the office too, right? That was the third person.

Keach Hagey: So there was a box at the podium, which I think contributed to the awkwardness of the day, because he's not a tall man.

Andrew Keen: Right. To put it politely. The way that OpenAI spun it, in classic Sam Altman terms, is "new funding to build towards AGI." So it's their Altman-esque use of the public good to vindicate this new investment. Is this just more - quote unquote, and this is my word, you don't have to agree with it - sales pitch, or might there even be dishonesty here? I mean, the reality is, it's not new funding to build toward AGI, which is artificial general intelligence. It's new funding to build towards OpenAI. There's no public benefit in any of this, is there?

Keach Hagey: Well, what they're saying is that the nonprofit will be capitalized and will sort of be hiring up and doing a bunch more things that it wasn't really doing. It'll have programs and initiatives and all of that. Really, as someone who has studied Sam's life, this sounds a lot like what he did at Y Combinator. When he was head of Y Combinator, he also spun up a nonprofit arm, which is actually what OpenAI grew out of. So I think in Sam's mind, a nonprofit is a place to go sort of hash out your ideas, a place to kind of have pet projects grow. That's where he did things like his UBI study. So I can sort of see that once the AGs are like, this is not gonna happen, he's like, great, we'll just make a big nonprofit and I'll get to do all these projects I've always wanted to do.

Andrew Keen: Didn't he get thrown out of Y Combinator by Paul Graham for that?

Keach Hagey: Yes, a little bit. You know, I would say there was a general mutiny over too much of that kind of stuff. Yeah, it's true. People didn't love it, and they thought that he took his eye off the ball - a little bit because one of those projects became OpenAI, and he became kind of obsessed with it and stopped paying attention. So look, maybe OpenAI will spawn the next thing, right? And he'll get distracted by that and move on.

Andrew Keen: No coincidence, of course, that Sam went on to become the CEO of OpenAI. What does it mean for the broader AI ecosystem? I noted earlier you brought up Microsoft.
I mean, I think you've already written on this, and lots of other people have written about the fact that the relationship between OpenAI and Microsoft has cooled dramatically - as has the one between Nadella and Altman. What does this mean for Microsoft? Is it a big deal?

Keach Hagey: They have been hashing this out for months. So it is a big deal, in that it will change the structure of their most important partner. But even before this, Microsoft and OpenAI were sort of locked in negotiations over how large Microsoft's stake in this new OpenAI will be and how it will be valued. And that still has to be determined, regardless of whether it's a non-profit or a for-profit in charge. And their interests are diverging. So those negotiations are not as warm as they maybe would have been a few years ago.

Andrew Keen: It's a form of polyamory, isn't it? Like we have in Silicon Valley - everyone has sex with everybody else, to put it politely.

Keach Hagey: Well, OpenAI does have a new partner in Oracle. And I would expect them to have many more in terms of cloud computing partners going forward. It's just too much risk for any one company to build these huge and expensive data centers without knowing that OpenAI is going to exist in a certain number of years. So they have to diversify.

Andrew Keen: Keach, you know, this is amusing and entertaining, and Altman is a remarkable individual, able to sell anything to anyone. But at what point are we really on the Titanic here? And there is such a thing as an iceberg - a real thing, whatever Donald Trump or other manufacturers of ontologies might suggest. At some point, this thing is going to end in a massive disaster.

Keach Hagey: Are you talking about the existential risk?

Andrew Keen: I'm not talking about the Titanic, I'm talking about OpenAI. I mean, Parmy Olson, who's the other great authority on OpenAI, who won the FT Book of the Year last year - she's been on the show a couple of times - wrote in Bloomberg that OpenAI can't have its money both ways, and that's what Sam is trying to do. My point is that we can all point out, excuse me, the contradictions and the hypocrisy and all the rest of it. But there are laws of gravity when it comes to economics. And at a certain point, this thing is going to crash, isn't it? I mean, what's the metaphor? Is it Enron? Is it Sam Bankman-Fried? What kind of examples in history do we need to look at to try and figure out what really is going on here?

Keach Hagey: That's certainly one possibility, and there are a good number of people who believe that.

Andrew Keen: Believe what - Enron or Sam Bankman-Fried?

Keach Hagey: Oh, well, that the internal tensions cannot hold, right? I don't know if fraud is even necessary so much as just - we've already seen it happen once, right? The company almost completely collapsed one time, and those contradictions are still there.

Andrew Keen: And when you say it happened, is that when Sam got pushed out, or was that something else?

Keach Hagey: No, no, that's it, because Sam almost got pushed out and then all of the funders would go away. So Sam needs to be there for them to continue raising money in the way that they have been raising money. And that's really going to be the question: how long can that go on? He's a young man; it could go on a very long time. But yeah, I think that really will determine whether it's a disaster or not.

Andrew Keen: But how long can it go on? I mean, how long can Sam have it both ways? Well, there's a dream. I mean, maybe he can close this last round.
I mean he's going to need to raise more than $40 billion. This is such a competitive space. Tens of billions of dollars are being invested almost on a monthly basis. So this is not the end of the road, this $40-billion investment.Keach Hagey: Oh, no. And you know, there's talk of IPO at some point, maybe not even that far away. I don't even let me wrap my mind around what it would be for like a nonprofit to have a controlling share at a public company.Andrew Keen: More hallucinations economically, Keach.Keach Hagey: But I mean, IPO is the exit for investors, right? That's the model, that is the Silicon Valley model. So it's going to have to come to that one way or another.Andrew Keen: But how does it work internally? I mean, for the guys, the sales guys, the people who are actually doing the business at OpenAI, they've been pretty successful this year. The numbers are astonishing. But how is this gonna impact if it's a nonprofit? How does this impact the process of selling, of building product, of all the other internal mechanics of this high-priced startup?Keach Hagey: I don't think it will affect it enormously in the short term. It's really just a question of can they continue to raise money for the enormous amount of compute that they need. So so far, he's been able to do that, right? And if that slows up in any way, they're going to be in trouble. Because as Sam has said many times, AI has to be cheap to be actually useful. So in order to, you know, for it to be widespread, for to flow like water, all of those things, it's got to be cheap and that's going to require massive investment in data centers.Andrew Keen: But how, I mean, ultimately people are putting money in so that they get the money back. This is not a nonprofit endeavor to put 40 billion from SoftBank. SoftBank is not in the nonprofit business. So they're gonna need their money back and the only way they generally, in my understanding, getting money back is by going public, especially with these numbers. How can a nonprofit go public?Keach Hagey: It's a great question. That's what I'm just phrasing. I mean, this is, you know, you talk to folks, this is what's like off in the misty distance for them. It's an, it's a fascinating question and one that we're gonna try to answer this week.Andrew Keen: But you look amused. I'm no financial genius. Everyone must be asking the same question.Keach Hagey: Well, the way that they've said it is that the for-profit will be, will have a, the non-profit will control the for profit and be the largest shareholder in it, but the rest of the shares could be held by public markets theoretically. That's a great question though.Andrew Keen: And lawyers all over the world must be wrapping their hands. I mean, in the very best case, it's gonna be lawsuits on this, people suing them up the wazoo.Keach Hagey: It's absolutely true. You should see my inbox right now. It's just like layers, layers, layer.Andrew Keen: Yeah, my wife. My wife is the head of litigation. I don't know if I should be saying this publicly anyway, I am. She's the head of Litigation at Google. And she lost some of her senior people and they all went over to AI. I'm big, I'm betting that they regret going over there can't be much fun being a lawyer at OpenAI.Keach Hagey: I don't know, I think it'd be great fun. I think you'd have like enormous challenges and have lots of billable hours.Andrew Keen: Unless, of course, they're personally being sued.Keach Hagey: Hopefully not. 
I mean, look, it is a strange and unprecedented situation.Andrew Keen: To what extent is this, if not Shakespearean, could have been written by some Greek dramatist? To what extend is this symbolic of all the hype and salesmanship and dishonesty of Silicon Valley? And in a sense, maybe this is a final scene or a penultimate scene in the Silicon Valley story of doing good for the world. And yet, of course, reaping obscene profit.Keach Hagey: I think it's a little bit about trying to have your cake and eat it too, right? Trying to have the aura of altruism, but also make something and make a lot of money. And what it seems like today is that if you started as a nonprofit, it's like a black hole. You can never get out. There's no way to get out, and that idea was just like maybe one step too clever when they set it up in the beginning, right. It seemed like too good to be true because it was. And it might end up really limiting the growth of the company.Andrew Keen: Is Sam completely in charge here? I mean, a number of the founders have left. Musk, of course, when you and I talked a couple of months ago, OpenAI came out of conversations between Musk and Sam. Is he doing this on his own? Does he have lieutenants, people who he can rely on?Keach Hagey: Yeah, I mean, he does. He has a number of folks that have been there, you know, a long time.Andrew Keen: Who are they? I mean, do we know their names?Keach Hagey: Oh, sure. Yeah. I mean, like Brad Lightcap and Jason Kwon and, you know, just they're they're Greg Brockman, of course, still there. So there are a core group of executives that have that have been there pretty much from the beginning, close to it, that he does trust. But if you're asking, like, is Sam really in control of this whole thing? I believe the answer is yes. Right. He is on the board of this nonprofit, and that nonprofit will choose the board of the for-profit. So as long as that's the case, he's in charge.Andrew Keen: How divided is OpenAI? I mean, one of the things that came out of the big crisis, what was it, 18 months ago when they tried to push him out, was it was clearly a profoundly divided company between those who believed in the nonprofit mission versus the for-profit mission. Are those divisions still as acute within the company itself? It must be growing. I don't know how many thousands of people work.Keach Hagey: It has grown very fast. It is not as acute in my experience. There was a time when it was really sort of a warring of tribes. And after the blip, as they call it, a lot of those more safety focused people, people that subscribe to effective altruism, left or were kind of pushed out. So Sam took over and kind of cleaned house.Andrew Keen: But then aren't those people also very concerned that it appears as if Sam's having his cake and eating it, having it both ways, talking about the company being a non-profit but behaving as if it is a for-profit?Keach Hagey: Oh, yeah, they're very concerned. In fact, a number of them have signed on to this open letter to the attorneys general that dropped, I don't know, a week and a half ago, something like that. You can see a number of former OpenAI employees, whistleblowers and others, saying this very thing, you know, that the AG should block this because it was supposed to be a charitable mission from the beginning. 
And no amount of fancy footwork is gonna make it okay to toss that overboard.Andrew Keen: And I mean, in the best possible case, can Sam, the one thing I think you and I talked about last time is Sam clearly does, he's not driven by money. There's something else. There's some other demonic force here. Could he theoretically reinvent the company so that it becomes a kind of AI overlord, a nonprofit AI overlord for our 21st century AI age?Keach Hagey: Wow, well I think he sometimes thinks of it as like an AI layer and you know, is this my overlord? Might be, you know.Andrew Keen: As long as it's not made in China, I hope it's made in India or maybe in Detroit or something.Keach Hagey: It's a very old one, so it's OK. But it's really my attention overlord, right? Yeah, so I don't know about the AI overlord part. Although it's interesting, Sam from the very beginning has wanted there to be a democratic process to control what decision, what kind of AI gets built and what are the guardrails for AGI. As long as he's there.Andrew Keen: As long as he's the one determining it, right?Keach Hagey: We talked about it a lot in the very beginning of the company when things were smaller and not so crazy. And what really strikes me is he doesn't really talk about that much anymore. But what we did just see is some advocacy organizations that kind of function in that exact way. They have voters all over the world and they all voted on, hey, we want you guys to go and try to that ended up having this like democratic structure for deciding the future of AI and used it to kind of block what he was trying to do.Andrew Keen: What are the implications for OpenAI's competitors? There's obviously Anthropic. Microsoft, we talked about a little bit, although it's a partner and a competitor simultaneously. And then of course there's Google. I assume this is all good news for the competition. And of course XAI.Keach Hagey: It is good news, especially for a company like XAI. I was just speaking to an XAI investor today who was crowing. Yeah, because those companies don't have this weird structure. Only OpenAI has this strange nonprofit structure. So if you are an investor who wants to have some exposure to AI, it might just not be worth the headache to deal with the uncertainty around the nonprofit, even though OpenAI is like the clear leader. It might be a better bet to invest in Anthropic or XAI or something else that has just a normal for-profit structure.Andrew Keen: Yeah. And it's hard to actually quote unquote out-Trump, Elon Musk on economic subterfuge. But Altman seems to have done that. I mean, Musk, what he folded X into XAI. It was a little bit of controversy, but he seems to got away with it. So there is a deep hostility between these two men, which I'm assuming is being compounded by this process.Keach Hagey: Absolutely. Again, this is a win for Elon. All these legal cases and Elon trying to buy OpenAI. I remember that bid a few months ago where he actually put a number on it. All that was about trying to block the for-profit conversion because he's trying to stop OpenAI and its tracks. He also claims they've abandoned their mission, but it's always important to note that it's coming from a competitor.Andrew Keen: Could that be a way out of this seeming box? 
Keach, a company like XAI or Microsoft or Google, or that probably wouldn't happen on the antitrust front, would buy OpenAI as maybe a nonprofit and then transform it into a for-profit company?Keach Hagey: Maybe you and Sam should get together and hash that out. That's the kind ofAndrew Keen: Well Sam, I'm available to be hired if you're watching. I'll probably charge less than your current consigliere. What's his name? Who's the consiglieri who's working with him on this?Keach Hagey: You mean Chris Lehane?Andrew Keen: Yes, Chris Lehane, the ego.Keach Hagey: Um,Andrew Keen: How's Lehane holding up in this? Do you think he's getting any sleep?Keach Hagey: Well, he's like a policy guy. I'm sure this has been challenging for everybody. But look, you are pointing to something that I think is real, which is there will probably be consolidation at some point down the line in AI.Andrew Keen: I mean, I know you're not an expert on the maybe sort of corporate legal stuff, but is it in theory possible to buy a nonprofit? I don't even know how you buy a non-profit and then turn it into a for-profit. I mean is that one way out of this, this cul-de-sac?Keach Hagey: I really don't know the answer to that question, to be honest with you. I can't think of another example of it happening. So I'm gonna go with no, but I don't now.Andrew Keen: There are no equivalents, sorry to interrupt, go on.Keach Hagey: No, so I was actually asking a little bit, are there precedents for this? And someone mentioned Blue Cross Blue Shield had gone from being a nonprofit to a for-profit successfully in the past.Andrew Keen: And we seem a little amused by that. I mean, anyone who uses US health care as a model, I think, might regret it. Your book, The Optimist, is out in a couple of weeks. When did you stop writing it?Keach Hagey: The end of December, end of last year, was pencils fully down.Andrew Keen: And I'm sure you told the publisher that that was far too long a window. Seven months on Silicon Valley is like seven centuries.Keach Hagey: It was actually a very, very tight timeline. They turned it around like incredibly fast. Usually it'sAndrew Keen: Remarkable, yeah, exactly. Publishing is such, such, they're such quick actors, aren't they?Keach Hagey: In this case, they actually were, so I'm grateful for that.Andrew Keen: Well, they always say that six months or seven months is fast, but it is actually possible to publish a book in probably a week or two, if you really choose to. But in all seriousness, back to this question, I mean, and I want everyone to read the book. It's a wonderful book and an important book. The best book on OpenAI out. What would you have written differently? Is there an extra chapter on this? I know you warned about a lot of this stuff in the book. So it must make you feel in some ways quite vindicated.Keach Hagey: I mean, you're asking if I'd had a longer deadline, what would I have liked to include? Well, if you're ready.Andrew Keen: Well, if you're writing it now with this news under your belt.Keach Hagey: Absolutely. So, I mean, the thing, two things, I guess, definitely this news about the for-profit conversion failing just shows the limits of Sam's power. So that's pretty interesting, because as the book was closing, we're not really sure what those limits are. And the other one is Trump. So Trump had happened, but we do not yet understand what Trump 2.0 really meant at the time that the book was closing. 
And at that point, it looked like Sam was in the cold, you know, he wasn't clear how he was going to get inside Trump's inner circle. And then lo and behold, he was there on day one of the Trump administration sharing a podium with him announcing that Stargate AI infrastructure investment. So I'm sad that that didn't make it into the book because it really just shows the kind of remarkable character he is.Andrew Keen: He's their Zelig, but then we all know what happened to Woody Allen in the end. In all seriousness, and it's hard to keep a straight face here, Keach, and you're trying although you're not doing a very good job, what's going to happen? I know it's an easy question to ask and a hard one to answer, but ultimately this thing has to end in catastrophe, doesn't it? I use the analogy of the Titanic. There are real icebergs out there.Keach Hagey: Look, there could be a data breach. I do think that.Andrew Keen: Well, there could be data breaches if it was a non-profit or for-profit, I mean, in terms of this whole issue of trying to have it both ways.Keach Hagey: Look, they might run out of money, right? I mean, that's one very real possibility. They might run outta money and have to be bought by someone, as you said. That is a totally real possibility right now.Andrew Keen: What would happen if they couldn't raise any more money. I mean, what was the last round, the $40 billion round? What was the overall valuation? About $350 billion.Keach Hagey: Yeah, mm-hmm.Andrew Keen: So let's say that they begin to, because they've got, what are their hard costs monthly burn rate? I mean, it's billions of just.Keach Hagey: Well, the issue is that they're spending more than they are making.Andrew Keen: Right, but you're right. So they, let's say in 18 months, they run out of runway. What would people be buying?Keach Hagey: Right, maybe some IP, some servers. And one of the big questions that is yet unanswered in AI is will it ever economically make sense, right? Right now we are all buying the possibility of in the future that the costs will eventually come down and it will kind of be useful, but that's still a promise. And it's possible that that won't ever happen. I mean, all these companies are this way, right. They are spending far, far more than they're making.Andrew Keen: And that's the best case scenario.Keach Hagey: Worst case scenario is the killer robots murder us all.Andrew Keen: No, what I meant in the best case scenario is that people are actually still without all the blow up. I mean, people are actual paying for AI. I mean on the one hand, the OpenAI product is, would you say it's successful, more or less successful than it was when you finished the book in December of last year?Keach Hagey: Oh, yes, much more successful. Vastly more users, and the product is vastly better. I mean, even in my experience, I don't know if you play with it every day.Andrew Keen: I use Anthropic.Keach Hagey: I use both Claude and ChatGPT, and I mean, they're both great. And I find them vastly more useful today than I did even when I was closing the book. So it's great. I don't know if it's really a great business that they're only charging me $20, right? That's great for me, but I don't think it's long term tenable.Andrew Keen: Well, Keach Hagey, your new book, The Optimist, your new old book, The Optimist: Sam Altman, Open AI and the Race to Invent the Future is out in a couple of weeks. I hope you're writing a sequel. 
Maybe you should make it The Pessimist.Keach Hagey: I think you might be the pessimist, Andrew.Andrew Keen: Well, you're just, you are as pessimistic as me. You just have a nice smile. I mean, in all reality, what's the most optimistic thing that can come out of this?Keach Hagey: The most optimistic is that this becomes a product that is actually useful, but doesn't vastly exacerbate inequality.Andrew Keen: No, I take the point on that, but in terms of this current story of this non-profit versus profit, what's the best case scenario?Keach Hagey: I guess the best case scenario is they find their way to an IPO before completely imploding.Andrew Keen: With the assumption that a non-profit can do an IPO.Keach Hagey: That they find the right lawyers from wherever they are and make it happen.Andrew Keen: Well, AI continues its hallucinations, and they're not in the product themselves. I think they're in their companies. One of the best, if not the best authority, our guide to all these hallucinations in a corporate level is Keach Hagey, her new book, The Optimist: Sam Altman, Open AI and the Race to Invent the Future is out in a couple of weeks. Essential reading for anyone who wants to understand Sam Altman as the consummate salesman. And I think one thing we can say for sure, Keach, is this is not the end of the story. Is that fair?Keach Hagey: Very fair. Not the end of the story. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe

The Daily Zeitgeist
Lerts Turtch Basch Trender 5/6: AGI, Movie Tariffs, Newark Airport, 'Thunderbolts*'

The Daily Zeitgeist

Play Episode Listen Later May 6, 2025 25:02 Transcription Available


In this edition of Lerts Turtch Basch Trender, Jack and Miles discuss how unprepared we are for AGI, Trump's "Movie Tariff" (feat. Jon Voight), Newark Airport not working anymore, Marvel changing the name of 'Thunderbolts*' and much more! See omnystudio.com/listener for privacy information.

Book Club with Jeffrey Sachs
Season 4, Episode 9: Ray Kurzweil, The Singularity is Nearer: When We Merge with AI

Book Club with Jeffrey Sachs

Play Episode Listen Later May 6, 2025 49:11


Join Professor Jeffrey Sachs and futurist Ray Kurzweil for a compelling conversation on the accelerating pace of technological change and its profound implications for the future of humanity. In his new book, The Singularity Is Nearer, Kurzweil revisits and updates his groundbreaking predictions on AI & AGI, exponential growth, and human evolution and longevity. Together, they explore a future where AI rivals human intelligence by 2029, nanotechnology rebuilds the world atom by atom, and our minds merge with the cloud to expand intelligence beyond biological limits. They examine radical life extension, the promise of renewable energy, and how exponential technologies are reshaping industries, reducing poverty, and transforming global well-being. But they also confront the risks while discussing a vision of the future, both awe-inspiring and cautionary, challenging us to rethink what it means to be human in an age of rapid and relentless innovation.
The Book Club with Jeffrey Sachs is brought to you by the SDG Academy, the flagship education initiative of the UN Sustainable Development Solutions Network. Learn more and get involved at bookclubwithjeffreysachs.org.
Footnotes: AI, AGI, The Singularity is Near, Dartmouth Workshop, Martin Kosinski, Neuromed, Biotechnology, Frank Rosenblatt, Perceptron, Exponential Growth, Turing Test, Longevity, Humanoid Robots, Virtual Reality, Neocortex, Artificial Consciousness
⭐️ Thank you for listening!
➡️ Sign up for the newsletter: https://bit.ly/subscribeBCJS
➡️ Website: bookclubwithjeffreysachs.org

The Index Podcast
Building & Training Open AI Intelligence | Prime Intellect Co-founder Vincent Weisser

The Index Podcast

Play Episode Listen Later May 6, 2025 36:45


What if the world's most powerful AI wasn't controlled by a handful of tech giants, but was open, accessible, and owned by everyone? In this episode of The Index, we sit down with Vincent Weisser, Co-founder and CEO of Prime Intellect, to explore his groundbreaking vision for democratizing artificial intelligence. Vincent pulls back the curtain on the technical and philosophical innovations behind Prime Intellect, from distributed training systems that harness idle compute power around the world to the crypto-economic incentives designed to reward contributors and decentralize ownership. At the core of Prime Intellect's approach is a radical rethink of how we train the next generation of AI models. Instead of relying on massive, centralized data centers, Prime Intellect distributes the workload across decentralized networks, achieving similar results with far fewer communication demands. This breakthrough not only unlocks cost-effective training at a global scale, but also reshapes who can participate in the AI economy. But this episode isn't just about engineering. Vincent shares his bigger mission: building a world where intelligence is "too cheap to meter," where abundance replaces scarcity, and where superintelligence becomes a public good, not a private monopoly. With the rapid pace of progress toward artificial general intelligence (AGI), the stakes have never been higher. As Vincent puts it, the race is on, not just to build AGI, but to decide who will own and benefit from it. Whether you're building, investing, researching, or just reflecting on the future of AI, this conversation will challenge your assumptions and offer a rare glimpse into one of the most ambitious open-source projects shaping the future of intelligence. Explore more at primeintellect.ai and follow on X at x.com/primeintellect.
Show Links: The Index X Channel, YouTube

London Real
Mo Gawdat - AI Is The Infant That Will Become Your Master

London Real

Play Episode Listen Later May 5, 2025 134:38


In this powerful episode, Brian Rose sits down with former Google X exec and bestselling author Mo Gawdat

Sales and Marketing Built Freedom
Is ChatGPT o3 AGI? (Live Breakdown)

Sales and Marketing Built Freedom

Play Episode Listen Later May 5, 2025 12:17


Your competitors are already using AI. Don't get left behind. Weekly strategies used by PE Backed and Publicly Traded Companies → https://hi.switchy.io/U6H7S
In this video, Ryan Staley delves into the capabilities of ChatGPT, particularly its potential for AGI and how it can significantly enhance workflows and productivity. He discusses the importance of identifying key use cases, maximizing output through automation, and leveraging AI for revenue generation. He emphasizes the transformative power of these technologies in business and encourages experimentation with AI tools.
Chapters
00:00 Exploring ChatGPT's AGI Potential
02:52 Maximizing Workflows with ChatGPT
06:14 Automation and AI Integration
09:08 Scaling Offers and Revenue Generation


The AI Breakdown: Daily Artificial Intelligence News and Discussions

NLW looks at a recent prediction for what a true AGI company will look like. Are we radically underthinking the scale of the potential disruption? Inspired by: https://www.dwarkesh.com/p/ai-firm
Interested in sponsoring the show? nlw@breakdown.network
Get Ad Free AI Daily Brief: https://patreon.com/AIDailyBrief
Brought to you by:
KPMG – Go to https://kpmg.com/ai to learn more about how KPMG can help you drive value with our AI solutions.
Blitzy.com – Go to https://blitzy.com/ to build enterprise software in days, not months.
The Agent Readiness Audit from Superintelligent – Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Subscribe to the newsletter: https://aidailybrief.beehiiv.com/
Join our Discord: https://bit.ly/aibreakdown

Your Undivided Attention
AGI Beyond the Buzz: What Is It, and Are We Ready?

Your Undivided Attention

Play Episode Listen Later Apr 30, 2025 52:53


What does it really mean to 'feel the AGI'? Silicon Valley is racing toward AI systems that could soon match or surpass human intelligence. The implications for jobs, democracy, and our way of life are enormous. In this episode, Aza Raskin and Randy Fernando dive deep into what 'feeling the AGI' really means. They unpack why the surface-level debates about definitions of intelligence and capability timelines distract us from urgently needed conversations around governance, accountability, and societal readiness. Whether it's climate change, social polarization and loneliness, or toxic forever chemicals, humanity keeps creating outcomes that nobody wants because we haven't yet built the tools or incentives needed to steer powerful technologies. As the AGI wave draws closer, it's critical we upgrade our governance and shift our incentives now, before it crashes on shore. Are we capable of aligning powerful AI systems with human values? Can we overcome geopolitical competition and corporate incentives that prioritize speed over safety? Join Aza and Randy as they explore the urgent questions and choices facing humanity in the age of AGI, and discuss what we must do today to secure a future we actually want.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_ and subscribe to our Substack.
RECOMMENDED MEDIA
Daniel Kokotajlo et al.'s "AI 2027" paper
A demo of OmniHuman-1, referenced by Randy
A paper from Redwood Research and Anthropic that found an AI was willing to lie to preserve its values
A paper from Palisade Research that found an AI would cheat in order to win
The treaty that banned blinding laser weapons
Further reading on the moratorium on germline editing
RECOMMENDED YUA EPISODES
The Self-Preserving Machine: Why AI Learns to Deceive
Behind the DeepSeek Hype, AI is Learning to Reason
The Tech-God Complex: Why We Need to be Skeptics
This Moment in AI: How We Got Here and Where We're Going
How to Think About AI Consciousness with Anil Seth
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
Clarification: When Randy referenced a "$110 trillion game" as the target for AI companies, he was referring to the entire global economy.

60 Minutes
04/20/2025: Bird Flu, Demis Hassabis, Flight of the Monarchs

60 Minutes

Play Episode Listen Later Apr 21, 2025 51:11


Bird flu, which has long been an emerging threat, took a significant turn in 2024 with the discovery that the virus had jumped from a wild bird to a cow. In just over a year, the pathogen has spread through dairy herds and poultry flocks across the United States. It has also infected people, resulting in 70 confirmed cases, including one fatality. Correspondent Bill Whitaker spoke with veterinarians and virologists who warn that, if unchecked, this outbreak could lead to a new pandemic. They also raise concerns about the Biden administration's slow response in 2024 and now the Trump administration's decision to lay off over 100 key scientists.
Demis Hassabis, a pioneer in artificial intelligence, is shaping the future of humanity. As the CEO of Google DeepMind, he was first interviewed by correspondent Scott Pelley in 2023, during a time when chatbots marked the beginning of a new technological era. Since that interview, Hassabis has made headlines for his innovative work, including using an AI model to predict the structure of proteins, which earned him a Nobel Prize. Pelley returns to DeepMind's headquarters in London to discuss what's next for Hassabis, particularly his leadership in the effort to develop artificial general intelligence (AGI) – a type of AI that has the potential to match the versatility and creativity of the human brain.
One of the most awe-inspiring and mysterious migrations in the natural world is currently taking place, stretching from Mexico to the United States and Canada. This incredible spectacle involves millions of monarch butterflies embarking on a monumental aerial journey. Correspondent Anderson Cooper reports from the mountains of Mexico, where the monarchs spent the winter months sheltering in trees before emerging from their slumber to take flight.
To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy
Learn more about your ad choices. Visit https://podcastchoices.com/adchoices