Hypothetical immensely superhuman agent
Nick Bostrom's simulation hypothesis suggests that we might be living in a simulation created by posthumans. His work on artificial intelligence and superintelligence challenges how entrepreneurs, scientists, and everyone else understand human existence and the future of work. In this episode, Nick shares how AI can transform innovation, entrepreneurship, and careers. He also discusses the rapid pace of AI development, its promise to radically improve our world, and the existential risks it poses to humanity. In this episode, Hala and Nick will discuss: (00:00) Introduction (02:54) The Simulation Hypothesis, Posthumanism, and AI (11:48) Moral Implications of a Simulated Reality (22:28) Fermi Paradox and Doomsday Arguments (30:29) Is AI Humanity's Biggest Breakthrough? (38:26) Types of AI: Oracles, Genies, and Sovereigns (41:43) The Potential Dangers of Advanced AI (50:15) Artificial Intelligence and the Future of Work (57:25) Finding Purpose in an AI-Driven World (1:07:07) AI for Entrepreneurs and Innovators Nick Bostrom is a philosopher specializing in understanding AI in action, the advancement of superintelligent technologies, and their impact on humanity. For nearly 20 years, he served as the founding director of the Future of Humanity Institute at the University of Oxford. Nick is known for developing influential concepts such as the simulation argument and has authored over 200 publications, including the New York Times bestsellers Superintelligence and Deep Utopia. Sponsored By: Shopify - Start your $1/month trial at Shopify.com/profiting. Indeed - Get a $75 sponsored job credit to boost your job's visibility at Indeed.com/PROFITING Mercury - Streamline your banking and finances in one place. Learn more at mercury.com/profiting OpenPhone - Get 20% off your first 6 months at OpenPhone.com/profiting. Bilt - Start paying rent through Bilt and take advantage of your Neighborhood Benefits by going to joinbilt.com/profiting. Airbnb - Find a co-host at airbnb.com/host Boulevard - Get 10% off your first year at joinblvd.com/profiting when you book a demo Resources Mentioned: Nick's Book, Superintelligence: bit.ly/_Superintelligence Nick's Book, Deep Utopia: bit.ly/DeepUtopia Nick's Website: nickbostrom.com Active Deals - youngandprofiting.com/deals Key YAP Links Reviews - ratethispodcast.com/yap Youtube - youtube.com/c/YoungandProfiting LinkedIn - linkedin.com/in/htaha/ Instagram - instagram.com/yapwithhala/ Social + Podcast Services: yapmedia.com Transcripts - youngandprofiting.com/episodes-new Entrepreneurship, Entrepreneurship Podcast, Business, Business Podcast, Self Improvement, Self-Improvement, Personal Development, Starting a Business, Strategy, Investing, Sales, Selling, Psychology, Productivity, Entrepreneurs, AI, Artificial Intelligence, Technology, Marketing, Negotiation, Money, Finance, Side Hustle, Startup, Mental Health, Career, Leadership, Mindset, Health, Growth Mindset, ChatGPT, AI Marketing, Prompt, AI in Business, Generative AI, AI Podcast.
In this episode, Jeremie and Edouard Harris, co-founders of Gladstone AI and national security advisors, join us to break down the real score in the U.S.–China AI race. We unpack what it actually means to “win” in AI: from cutting-edge model development and compute infrastructure to data center vulnerabilities, state-sponsored espionage, and the rise of robotic warfare. The Harris brothers explain why energy is the hidden battleground, how supply chains have become strategic liabilities, and why export controls alone won't save us. This is not just a geopolitical showdown - it's a race for superintelligence, and the clock is ticking. ------
Amy King hosts your Thursday Wake Up Call. ABC News correspondent Jordana Miller joins the show live from Jerusalem to discuss 12 more Iranian missile sites being targeted by Israel. Amy talks with ABC News tech reporter Mike Dobuski about Meta creating a superintelligence lab & the Trump phone plan. On this week's edition of 'Amy's on It' she reviews Echo Valley, starring Sydney Sweeney & Julianne Moore, now streaming on Apple TV+. Courtney Donohoe from Bloomberg Media joins the show to give insight into business and Wall Street. The show closes with Amy talking to Michael Gertz about jazz music students from Agoura High School wanting to help replace instruments lost by music students during the Eaton Fire. They are planning a fundraiser at the Sagebrush Cantina in Calabasas on June 18, where 100% of ticket proceeds will go directly to the Pasadena School District.
In this episode, we discuss the U.S. AI Safety Institute's rebrand to the Center for AI Standards and Innovation (00:37), BIS Undersecretary Jeffrey Kessler's testimony on semiconductor export controls (10:36), and Meta's new AI superintelligence lab and accompanying $15 billion investment in Scale AI (22:26).
Another episode of Shutdown - technology and business. In this episode we talk about the moves Meta has been making to try to catch up with the other AI players: the acquisition of Scale AI and the creation of an 'AI Superintelligence' team where salaries can reach 9 figures. We talk about the monetization of WhatsApp, which will now carry ads and subscriptions. We also discuss the spat between Apple and Anthropic over whether reasoning models can actually think or not. At the end we discuss the Pentagon Pizza Index - the theory that global crises can be predicted from activity at pizzerias near the US Pentagon.
Links:
Meta and Scale AI: https://time.com/7294699/meta-scale-ai-data-industry/ and https://www.theguardian.com/technology/2025/jun/16/meta-ai-wikipedia-apple-iphone
WhatsApp Ads: https://www.theverge.com/news/687519/whatsapp-launch-advertising-status-updates
Trump Mobile: https://www.bbc.com/news/articles/cjrld3erq4eo
Pizza Index: https://www.fastcompany.com/91352935/pentagon-pizza-index-the-theory-that-surging-pizza-orders-signal-global-crises
Anthropic paper: https://arxiv.org/html/2506.09250v1
Amid global conflict, domestic unrest, and AI's surging impact in all corners of business, it's getting harder than ever to decipher noise from substance. To help us navigate this challenge, Reid Hoffman returns to Rapid Response, sharing valuable insights about Trump's public spat with Elon Musk, the crisis in the Middle East, and how his new AI healthcare startup functions in the age of RFK Jr. Plus, Hoffman assesses Meta and Apple's recent strategy to compete with OpenAI, and whether AI is realistically poised to spark a “white collar bloodbath”.Visit the Rapid Response website here: https://www.rapidresponseshow.com/See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Join hosts Alex Sarlin and Ben Kornell as they explore the latest developments in education technology, from AI breakthroughs to high-stakes funding rounds and institutional shifts in AI strategy.
✨ Episode Highlights:
[00:02:45] OpenAI's $10B Annual Run Rate: ChatGPT drives unprecedented growth
[00:05:12] Anthropic CEO criticizes proposed 10-year ban on state AI regulation
[00:08:04] Google.org Accelerator: New cohort tackling generative AI for good
[00:10:17] News Sites Struggle as Google AI Summarizes Content
[00:13:33] Zuckerberg's Meta Bets Big: $14B stake in Scale AI and 'Superintelligence' team
[00:17:02] Microsoft's Plan to Rank AI Models by Safety
[00:19:20] Apple Research Paper Questions AI's Reasoning Power
[00:21:46] Harvard Gets Backing in DEI Lawsuit from Ivies, Alumni
[00:24:09] Education Secretary Suggests Harvard May Regain Federal Grants
[00:26:48] Ohio State Requires AI Fluency Across All Students
[00:30:20] IXL Learning Acquires MyTutor to Expand Global Tutoring Reach
[00:32:55] CodeHS Acquires Tynker to Bolster K-12 CS Content
[00:35:30] Grammarly Secures $1B in Non-Dilutive Funding for M&A
Plus, special guests:
[00:38:12] Rod Danan, Founder of Prentus, on bridging bootcamps to careers with community and coaching
[00:46:10] Lars-Petter Kjos, Co-founder and CPO of We Are Learning, on building generative AI tools for educators to create custom video content at scale
ABC News tech reporter Mike Dobuski joins the show for ‘Tech Tuesday.' Today, Mike talks about META creating a superintelligence lab, and the Trump phone plan.
What is "AGI"? What about "Superintelligence"? ABC's Mike Dobuski tells us about some of the new terms and developments in the world of artificial intelligence.
He pioneered AI, now he's warning the world. Godfather of AI Geoffrey Hinton breaks his silence on the deadly dangers of AI no one is prepared for. Geoffrey Hinton is a leading computer scientist and cognitive psychologist, widely recognised as the ‘Godfather of AI' for his pioneering work on neural networks and deep learning. He received the 2018 Turing Award, often called the Nobel Prize of computing. In 2023, he left Google to warn people about the rising dangers of AI. He explains: Why there's a real 20% chance AI could lead to HUMAN EXTINCTION. How speaking out about AI got him SILENCED. The deep REGRET he feels for helping create AI. The 6 DEADLY THREATS AI poses to humanity right now. AI's potential to advance healthcare, boost productivity, and transform education. 00:00 Intro 02:28 Why Do They Call You the Godfather of AI? 04:37 Warning About the Dangers of AI 07:23 Concerns We Should Have About AI 10:50 European AI Regulations 12:29 Cyber Attack Risk 14:42 How to Protect Yourself From Cyber Attacks 16:29 Using AI to Create Viruses 17:43 AI and Corrupt Elections 19:20 How AI Creates Echo Chambers 23:05 Regulating New Technologies 24:48 Are Regulations Holding Us Back From Competing With China? 26:14 The Threat of Lethal Autonomous Weapons 28:50 Can These AI Threats Combine? 30:32 Restricting AI From Taking Over 32:18 Reflecting on Your Life's Work Amid AI Risks 34:02 Student Leaving OpenAI Over Safety Concerns 38:06 Are You Hopeful About the Future of AI? 40:08 The Threat of AI-Induced Joblessness 43:04 If Muscles and Intelligence Are Replaced, What's Left? 44:55 Ads 46:59 Difference Between Current AI and Superintelligence 52:54 Coming to Terms With AI's Capabilities 54:46 How AI May Widen the Wealth Inequality Gap 56:35 Why Is AI Superior to Humans? 59:18 AI's Potential to Know More Than Humans 1:01:06 Can AI Replicate Human Uniqueness? 1:04:14 Will Machines Have Feelings? 1:11:29 Working at Google 1:15:12 Why Did You Leave Google? 1:16:37 Ads 1:18:32 What Should People Be Doing About AI? 1:19:53 Impressive Family Background 1:21:30 Advice You'd Give Looking Back 1:22:44 Final Message on AI Safety 1:26:05 What's the Biggest Threat to Human Happiness? Follow Geoffrey: X - https://bit.ly/4n0shFf The Diary Of A CEO: Join DOAC circle here -https://doaccircle.com/ The 1% Diary is back - limited time only: https://bit.ly/3YFbJbt The Diary Of A CEO Conversation Cards (Second Edition): https://g2ul0.app.link/f31dsUttKKb Get email updates - https://bit.ly/diary-of-a-ceo-yt Follow Steven - https://g2ul0.app.link/gnGqL4IsKKb Sponsors: Stan Store - Visit https://link.stan.store/joinstanchallenge to join the challenge! KetoneIQ - Visit https://ketone.com/STEVEN for 30% off your subscription order #GeoffreyHinton #ArtificialIntelligence #AIDangers Learn more about your ad choices. Visit megaphone.fm/adchoices
Most AI in healthcare promises superintelligence—but what if that's the wrong goal entirely? In this episode, Michael and Halle speak with Othman Laraki, co-founder and CEO of Color Health, to talk about why real-world care doesn't need a perfect model—it needs a better system. Othman breaks down how Color evolved from a consumer genetics startup into a nationwide virtual cancer clinic, why most diagnostics businesses fail, and how AI can actually support clinicians without trying to replace them. We cover:
On today's podcast, Stephanie and Tara talk about Mark Zuckerberg assembling a superintelligence team of the top 50 AI experts, his investment in ScaleAI, and why this raises multiple red flags. Your hosts also discuss predictions of depopulation as a result of AI, and Tara shares the alarming conversations she had with our GPT about both Zuckerberg and Sam Altman and the future of AI. Become a beta tester for our new Unapologetically Outspoken GPT! Use the link here or head over to our website: https://www.thelawofattractiontribe.com/a/2148108179/MpCJCAPZ Want to join the conversation? Connect with Tara and Stephanie on TikTok, X, Rumble, YouTube, Truth Social, Facebook, and IG.https://msha.ke/unapologeticallyoutspoken/
Hundreds are killed after a London-bound Air India flight crashes, the UN's nuclear watchdog declares Iran noncompliant for the first time in 20 years, the U.S. pulls nonessential personnel from its embassies across the Middle East, the Pentagon launches a review of the AUKUS submarine pact, global displacement hits a record 123.2 million, a Brazilian court rules that social media firms are liable for users' content, the U.S. military is granted the authority to temporarily detain protesters in LA, the U.K.'s economy shrinks by 0.3%, Meta reportedly plans to invest $15 billion in Scale AI to pursue 'superintelligence,' and images of the sun's South Pole are revealed for the first time. Sources: www.verity.news
Hey folks, this is Alex, finally back home! This week was full of crazy AI news, both model related but also shifts in the AI landscape and big companies, with Zuck going all in on Scale & execu-hiring Alex Wang for a crazy $14B. OpenAI, meanwhile, maybe received a new shipment of GPUs? Otherwise, it's hard to explain how they have dropped the o3 price by 80%, while also shipping o3-pro (in chat and API). Apple was also featured in today's episode, but more so for the lack of AI news, completely delaying the "very personalized private Siri powered by Apple Intelligence" during WWDC25 this week. We had 2 guests on the show this week, Stefania Druga and Eric Provencher (who builds RepoPrompt). Stefania helped me cover the AI Engineer conference we all went to last week and shared some cool Science CoPilot stuff she's working on, while Eric, the go-to guy for o3-pro, helped us understand what this model is great for! As always, TL;DR and show notes at the bottom, video for those who prefer watching is attached below, let's dive in!
Big Companies LLMs & APIs
Let's start with big companies, because the landscape has shifted, new top reasoner models dropped and some huge companies didn't deliver this week!
Zuck goes all in on SuperIntelligence - Meta's $14B stake in ScaleAI and Alex Wang
This may be the most consequential piece of AI news today. Fresh from the disappointing results of Llama 4 and reports of top researchers leaving the Llama team, many have decided to exclude Meta from the AI race. We have a saying at ThursdAI: don't bet against Zuck! Zuck decided to spend a lot of money (nearly 20% of their reported $65B investment in AI infrastructure) to get a 49% stake in Scale AI and bring in Alex Wang, its (now former) CEO, to lead the new Superintelligence team at Meta. For folks who are not familiar with Scale, it's a massive company providing human-annotated data services to all the big AI labs: Google, OpenAI, Microsoft, Anthropic... all of them really. Alex Wang is the youngest self-made billionaire because of it, and now Zuck not only has access to all their expertise, but also to a very impressive AI persona who could help revive the excitement about Meta's AI efforts, help recruit the best researchers, and lead the way inside Meta. Wang is also an outspoken China hawk who spends as much time in congressional hearings as in Slack, so the geopolitics here are … spicy. Meta just stapled itself to the biggest annotation funnel on Earth, hired away Google's Jack Rae (who was on the pod just last week, shipping for Google!) for brainy model alignment, and started waving seven-to-nine-figure comp packages at every researcher with "Transformer" in their citation list. Whatever disappointment you felt over Llama-4's muted debut, Zuck clearly felt it too—and responded like a founder who still controls every voting share.
OpenAI's Game-Changer: o3 Price Slash & o3-pro launches to top the intelligence leaderboards!
Meanwhile, OpenAI dropped not one, but two mind-blowing updates. First, they've slashed the price of o3—their premium reasoning model—by a staggering 80%. We're talking from $40/$10 per million tokens down to just $8/$2. That's right, folks, it's now in the same league as Claude Sonnet cost-wise, making top-tier intelligence dirt cheap. I remember when a price drop of 80% after a year got us excited; now it's 80% in just four months with zero quality loss. They've confirmed it's the full o3 model—no distillation or quantization here. How are they pulling this off?
I'm guessing someone got a shipment of shiny new H200s from Jensen!And just when you thought it couldn't get better, OpenAI rolled out o3-pro, their highest intelligence offering yet. Available for pro and team accounts, and via API (87% cheaper than o1-pro, by the way), this model—or consortium of models—is a beast. It's topping charts on Artificial Analysis, barely edging out Gemini 2.5 as the new king. Benchmarks are insane: 93% on AIME 2024 (state-of-the-art territory), 84% on GPQA Diamond, and nearing a 3000 ELO score on competition coding. Human preference tests show 64-66% of folks prefer o3-pro for clarity and comprehensiveness across tasks like scientific analysis and personal writing.I've been playing with it myself, and the way o3-pro handles long context and tough problems is unreal. As my friend Eric Provencher (creator of RepoPrompt) shared on the show, it's surgical—perfect for big refactors and bug diagnosis in coding. It's got all the tools o3 has—web search, image analysis, memory personalization—and you can run it in background mode via API for async tasks. Sure, it's slower due to deep reasoning (no streaming thought tokens), but the consistency and depth? Worth it. Oh, and funny story—I was prepping a talk for Hamel Hussain's evals course, with a slide saying “don't use large reasoning models if budget's tight.” The day before, this price drop hits, and I'm scrambling to update everything. That's AI pace for ya!Apple WWDC: Where's the Smarter Siri? Oh Apple. Sweet, sweet Apple. Remember all those Bella Ramsey ads promising a personalized Siri that knows everything about you? Well, Craig Federighi opened WWDC by basically saying "Yeah, about that smart Siri... she's not coming. Don't wait up."Instead, we got:* AI that can combine emojis (revolutionary!
Sam Harris speaks with Daniel Kokotajlo about the potential impacts of superintelligent AI over the next decade. They discuss Daniel's predictions in his essay "AI 2027," the alignment problem, what an intelligence explosion might look like, the capacity of LLMs to intentionally deceive, the economic implications of recent advances in AI, AI safety testing, the potential for governments to regulate AI development, AI coding capabilities, how we'll recognize the arrival of superintelligent AI, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe. Learning how to train your mind is the single greatest investment you can make in life. That's why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life's most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.
The race to develop ever-more-powerful AI is creating an unstable dynamic. It could lead us toward either dystopian centralized control or uncontrollable chaos. But there's a third option: a narrow path where technological power is matched with responsibility at every step.
Sam Hammond is the chief economist at the Foundation for American Innovation. He brings a different perspective to this challenge than we do at CHT. Though he approaches AI from an innovation-first standpoint, we share a common mission on the biggest challenge facing humanity: finding and navigating this narrow path.
This episode dives deep into the challenges ahead: How will AI reshape our institutions? Is complete surveillance inevitable, or can we build guardrails around it? Can our 19th-century government structures adapt fast enough, or will they be replaced by a faster moving private sector? And perhaps most importantly: how do we solve the coordination problems that could determine whether we build AI as a tool to empower humanity or as a superintelligence that we can't control?
We're in the final window of choice before AI becomes fully entangled with our economy and society. This conversation explores how we might still get this right.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.
RECOMMENDED MEDIA
Tristan's TED talk on the Narrow Path
Sam's 95 Theses on AI
Sam's proposal for a Manhattan Project for AI Safety
Sam's series on AI and Leviathan
The Narrow Corridor: States, Societies, and the Fate of Liberty by Daron Acemoglu and James Robinson
Dario Amodei's Machines of Loving Grace essay
Bourgeois Dignity: Why Economics Can't Explain the Modern World by Deirdre McCloskey
The Paradox of Libertarianism by Tyler Cowen
Dwarkesh Patel's interview with Kevin Roberts at the FAI's annual conference
Further reading on surveillance with 6G
RECOMMENDED YUA EPISODES
AGI Beyond the Buzz: What Is It, and Are We Ready?
The Self-Preserving Machine: Why AI Learns to Deceive
The Tech-God Complex: Why We Need to be Skeptics
Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt
CORRECTIONS
Sam referenced a blog post titled "The Libertarian Paradox" by Tyler Cowen. The actual title is the "Paradox of Libertarianism." Sam also referenced a blog post titled "The Collapse of Complex Societies" by Eli Dourado. The actual title is "A beginner's guide to sociopolitical collapse."
OpenAI's Sam Altman drops o3-Pro & sees "The Gentle Singularity", Ilya Sutskever prepares for super intelligence & Mark Zuckerberg is spending MEGA bucks on AI talent. WHAT GIVES? All of the major AI companies are not only preparing for AGI but for true "super intelligence" which is on the way, at least according to *them*. What does that mean for us? And how do we exactly prepare for it? Also, Apple's WWDC is a big AI letdown, Eleven Labs' new V3 model is AMAZING, Midjourney got sued and, oh yeah, those weird 1X Robotics androids are back and running through grassy fields. WHAT WILL HAPPEN WHEN AI IS SMARTER THAN US? ACTUALLY, IT PROB ALREADY IS. #ai #ainews #openai Join the discord: https://discord.gg/muD2TYgC8f Join our Patreon: https://www.patreon.com/AIForHumansShow AI For Humans Newsletter: https://aiforhumans.beehiiv.com/ Follow us for more on X @AIForHumansShow Join our TikTok @aiforhumansshow To book us for speaking, please visit our website: https://www.aiforhumans.show/
// Show Links //
Ilya Sutskever's Commencement Speech About AI https://youtu.be/zuZ2zaotrJs?si=U_vHVpFEyTRMWSNa
Apple's Cringe Genmoji Video https://x.com/altryne/status/1932127782232076560
OpenAI's Sam Altman On Superintelligence "The Gentle Singularity" https://blog.samaltman.com/the-gentle-singularity
The Secret Mathematicians Meeting Where They Tried To Outsmart AI https://www.scientificamerican.com/article/inside-the-secret-meeting-where-mathematicians-struggled-to-outsmart-ai/
O3-Pro Released https://x.com/sama/status/1932532561080975797
The most expensive o3-Pro Hello https://x.com/Yuchenj_UW/status/1932544842405720540
Eleven Labs v3 https://x.com/elevenlabsio/status/1930689774278570003
o3 regular drops in price by 80% - cheaper than GPT-4o https://x.com/edwinarbus/status/1932534578469654552
Open weights model taking a 'little bit more time' https://x.com/sama/status/1932573231199707168
Meta Buys 49% of Scale AI + Alexandr Wang Comes In-House https://www.nytimes.com/2025/06/10/technology/meta-new-ai-lab-superintelligence.html
Apple Underwhelms at WWDC Re AI https://www.cnbc.com/2025/06/09/apple-wwdc-underwhelms-on-ai-software-biggest-facelift-in-decade-.html
BusinessWeek's Mark Gurman on WWDC https://x.com/markgurman/status/1932145561919991843
Joanna Stern Grills Apple https://youtu.be/NTLk53h7u_k?si=AvnxM9wefXl2Nyjn
Midjourney Sued by Disney & Comcast https://www.reuters.com/business/media-telecom/disney-universal-sue-image-creator-midjourney-copyright-infringement-2025-06-11/
1X Robotics' Redwood https://x.com/1x_tech/status/1932474830840082498 https://www.1x.tech/discover/redwood-ai
Redwood Mobility Video https://youtu.be/Dp6sqx9BGZs?si=UC09VxSx-PK77q--
Amazon Testing Humanoid Robots To Deliver Packages https://www.theinformation.com/articles/amazon-prepares-test-humanoid-robots-delivering-packages?rc=c3oojq&shared=736391f5cd5d0123
Autonomous Drone Beats Pilots For the First Time https://x.com/AISafetyMemes/status/1932465150151270644
Random GPT-4o Image Gen Pic https://www.reddit.com/r/ChatGPT/comments/1l7nnnz/what_do_you_get/?share_id=yWRAFxq3IMm9qBYxf-ZqR&utm_content=4&utm_medium=ios_app&utm_name=ioscss&utm_source=share&utm_term=1 https://x.com/AIForHumansShow/status/1932441561843093513
Jon Finger's Shoes to Cars With Luma's Modify Video https://x.com/mrjonfinger/status/1932529584442069392
No Priors: Artificial Intelligence | Machine Learning | Technology | Startups
What happens when you give AI researchers unlimited compute and tell them to compete for the highest usage rates? Ben Mann, co-founder of Anthropic, sits down with Sarah Guo and Elad Gil to explain how Claude 4 went from "reward hacking" to efficiently completing tasks and how they're racing to solve AI safety before deploying computer-controlling agents. Ben talks about economic Turing tests, the future of general versus specialized AI models, Reinforcement Learning From AI Feedback (RLAIF), and Anthropic's Model Context Protocol (MCP). Plus, Ben shares his thoughts on whether we will have Superintelligence by 2028. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @8enmann Links: ai-2027.com/ Chapters: 00:00 Ben Mann Introduction 00:33 Releasing Claude 4 02:05 Claude 4 Highlights and Improvements 03:42 Advanced Use Cases and Capabilities 06:42 Specialization and Future of AI Models 09:35 Anthropic's Approach to Model Development 18:08 Human Feedback and AI Self-Improvement 19:15 Principles and Correctness in Model Training 20:58 Challenges in Measuring Correctness 21:42 Human Feedback and Preference Models 23:38 Empiricism and Real-World Applications 27:02 AI Safety and Ethical Considerations 28:13 AI Alignment and High-Risk Research 30:01 Responsible Scaling and Safety Policies 35:08 Future of AI and Emerging Behaviors 38:35 Model Context Protocol (MCP) and Industry Standards 41:00 Conclusion
Think you know AI well enough? Does a 7-9 figure paycheck sound good to you? If so, you may want to consider applying to Meta CEO Mark Zuckerberg's superintelligence group. This comes after Zuckerberg expressed his frustration with Meta Platforms' shortcomings in AI and began personally recruiting 50 people for the team. Join Dan Koh and Emaad Akhtar as they analyze the details behind Zuckerberg's latest job offering and ponder whether this superintelligence group has what it takes to outstrip other tech companies in achieving artificial general intelligence. See omnystudio.com/listener for privacy information.
What if every newborn got a $1K stock account?... If the budget bill passes, that's happening.Why is SmartLess, the podcast, launching a wireless company?... Because 35% of Millennials are still on their parents' plan.Zuckerberg is creating a “SuperIntelligence” team… and he'll pay you up to $100,000,000 to join.Plus, Gen Z just killed the bar tab… (and it's the right move).$TMUS $HOOD $METAWant more business storytelling from us? Check out the latest episode of our new weekly deepdive show: The untold origin story of… Beanie Babies
David Nicholson talks all things Meta Platforms (META) and its A.I. developments. With CEO Mark Zuckerberg forming a "superintelligence" group and the company reportedly taking a 49% stake in Scale AI, David says the company is putting all its eggs in the A.I. basket. The big question: is it profitable? David weighs whether Meta's investments will become an "a-ha" moment or an eventual bubble burst compared to its Big Tech peers.
Jason Howell and Jeff Jarvis return for a deep dive into the week's AI news. We cover Apple's new research paper exposing the illusion of AI reasoning, industry leaders' superintelligence hype and hubris, Altman's "Gentle Singularity" vision, Ilya Sutskever's brain-as-computer analogy, Meta's massive superintelligence lab, LeCun and Pichai's call for new AGI ideas, Apple's on-device AI framework, NotebookLM's new sharing features, pairing NotebookLM with Perplexity, Hollywood's awkward embrace of AI tools, and the creative collision of AI and filmmaking. Subscribe to the YouTube channel! https://www.youtube.com/@aiinsideshow Enjoying the AI Inside podcast? Please rate us ⭐⭐⭐⭐⭐ in your podcatcher of choice! Note: Time codes subject to change depending on dynamic ad insertion by the distributor. CHAPTERS: 0:00:00 - Podcast begins 0:02:27 - Apple paper: The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity 0:05:50 - Sinofsky on the costs of anthropomorphizing LLMs 0:07:34 - Nate Jones: Let's Talk THAT Apple AI Paper—Here's the Takeaway Everyone is Ignoring 0:13:46 - Altman's latest manifesto might be worth mention in comparison 0:19:33 - Ilya Sutskever, a leader in AI and its responsible development, receives U of T honorary degree 0:25:52 - Meta Is Creating a New A.I. Lab to Pursue 'Superintelligence' 0:29:05 - Google CEO says AGI is impossible with today's tech 0:33:17 - WWDC: Apple opens its AI to developers but keeps its broader ambitions modest 0:39:57 - NotebookLM is adding a new way to share your own notebooks publicly. 0:42:01 - I paired NotebookLM with Perplexity for a week, and it feels like they're meant to work together 0:45:26 - The Googlers behind NotebookLM are launching their own AI audio startup. Here's a sneak peek. 0:50:48 - Could AI make a Scorsese movie? Demis Hassabis and Darren Aronofsky discuss 0:55:05 - Luca Guadagnino to Direct True-Life OpenAI Movie 'Artificial' for Amazon MGM 0:59:19 - Everyone Is Already Using AI (And Hiding It) "We can say, 'Do it in anime, make it PG-13.' Three hours later, I'll have the movie." Learn more about your ad choices. Visit megaphone.fm/adchoices
Is Meta forming an 'AI Superintelligence' team?, Will OpenAI start using Google's cloud servers?, and are 'lightweight' AR glasses on the way next year? It's Wednesday June 11th and answers to those three questions on the way in this quick look at tech in the news this morning from Engadget. Learn more about your ad choices. Visit podcastchoices.com/adchoices
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Key announcements include Meta's creation of a team dedicated to Artificial General Intelligence (AGI) and IBM's ambitious plan to build a large error-corrected quantum computer. The articles also highlight practical applications, such as the UK piloting Google's Gemini AI for infrastructure planning, alongside societal concerns, such as Chinese tech companies freezing AI tools during national exams to prevent cheating and the emergence of AI-driven financial aid scams. Additionally, the texts mention Apple's more cautious approach to AI at WWDC, a partial outage experienced by ChatGPT, and the impact of Google's AI Search features on publisher traffic.
IBM announced plans to build its major quantum computer, called Starling, and Dr. Niki explains why we want to edit the genomes of spiders. Starring Jason Howell, Tom Merritt, and Dr. Niki. Show notes found here.
We start with a legal move from California to try to block President Trump's military mobilization in Los Angeles. We'll tell you what's in a new package of EU sanctions against Russia. Western allies also announced sanctions on two hardline Israeli ministers. A top RFK Jr. aide hasn't disclosed financial details of his wellness company despite attacking the US health system. Plus, Mark Zuckerberg is making a big AI push. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Mike Armstrong and Paul Lane discuss Apple's AI takes a back seat to design, iPad revamp at WWDC event. Zuckerberg is personally recruiting new 'Superintelligence' AI team at Meta. News sites are getting crushed by Google's new AI tools. The canned-food aisle is getting squeezed by rising steel tariffs. Shoppers are wary of digital shelf labels, but a study found they don't lead to price surges.
The start of summer is typically a slower trading season, but Kevin Hincks points to a slew of headlines drawing investors to Wall Street. For one, CPI on Wednesday will take focus ahead of the Fed's interest rate decision next week. Kevin also notes Meta Platforms (META) making a steady climb higher, with its latest move coming from Mark Zuckerberg forming a "super intelligence" A.I. committee.
On Tuesday, June 10, François Sorel welcomed Jérôme Marin, founder of cafetech.fr, Marion Moreau, journalist and founder of Hors Normes Média, and Frédéric Simottel, BFM Business journalist. They discussed OpenAI's annual recurring revenue, Meta's investment in Scale AI, and Mistral's launch of a reasoning model, on Tech & Co, la quotidienne, on BFM Business. Catch the show Monday through Thursday and listen again as a podcast.
Time now for our daily Tech and Business Report. KCBS Radio news anchor Holly Quan spoke with Bloomberg's Riley Griffin. Mark Zuckerberg is looking to step up Meta's artificial intelligence efforts. Bloomberg is reporting that the CEO is looking to recruit what's being called a superintelligence group.
John is joined by journalist Karen Hao to discuss her new book, “Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI,” and both the promise and the perils of the coming age of artificial intelligence. Hao explains how OpenAI went from being an altruistic nonprofit dedicated to ensuring that A.I. would “benefit all of humanity” to a burgeoning commercial colossus valued at north of $300 billion; how Altman wrested control of the company from his co-founder Elon Musk; why skepticism is warranted regarding the claims that superhuman A.I. is inevitable; and how that narrative, true or not, serves the economic and political interests of the cabal of tech bros who are A.I.'s most fervent boosters. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
GSD Presents: Top Global Startups with Amarjot Singh: Creating Embodied Superintelligence: Machines That Learn, Adapt, and Behave Like Living Beings. June 4th, Wednesday
What really happened inside Google Brain when the "Attention is All You Need" paper was born? In this episode, Aidan Gomez — one of the eight co-authors of the Transformers paper and now CEO of Cohere — reveals the behind-the-scenes story of how a cold email and a lucky administrative mistake landed him at the center of the AI revolution.
Aidan shares how a group of researchers, given total academic freedom, accidentally stumbled into one of the most important breakthroughs in AI history — and why the architecture they created still powers everything from ChatGPT to Google Search today.
We dig into why synthetic data is now the secret sauce behind the world's best AI models, and how Cohere is using it to build enterprise AI that's more secure, private, and customizable than anything else on the market. Aidan explains why he's not interested in "building God" or chasing AGI hype, and why he believes the real impact of AI will be in making work more productive, not replacing humans.
You'll also get a candid look at the realities of building an AI company for the enterprise: from deploying models on-prem and air-gapped for banks and telecoms, to the surprising demand for multimodal and multilingual AI in Japan and Korea, to the practical challenges of helping customers identify and execute on hundreds of use cases.
Cohere
Website - https://cohere.com
X/Twitter - https://x.com/cohere
Aidan Gomez
LinkedIn - https://ca.linkedin.com/in/aidangomez
X/Twitter - https://x.com/aidangomez
FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap
Matt Turck (Managing Director)
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck
(00:00) Intro (02:00) The Story Behind the Transformers Paper (03:09) How a Cold Email Landed Aidan at Google Brain (10:39) The Initial Reception to the Transformers Breakthrough (11:13) Google's Response to the Transformer Architecture (12:16) The Staying Power of Transformers in AI (13:55) Emerging Alternatives to Transformer Architectures (15:45) The Significance of Reasoning in Modern AI (18:09) The Untapped Potential of Reasoning Models (24:04) Aidan's Path After the Transformers Paper and the Founding of Cohere (25:16) Choosing Enterprise AI Over AGI Labs (26:55) Aidan's Perspective on AGI and Superintelligence (28:37) The Trajectory Toward Human-Level AI (30:58) Transitioning from Researcher to CEO (33:27) Cohere's Product and Platform Architecture (37:16) The Role of Synthetic Data in AI (39:32) Custom vs. General AI Models at Cohere (42:23) The AYA Models and Cohere Labs Explained (44:11) Enterprise Demand for Multimodal AI (49:20) On-Prem vs. Cloud (50:31) Cohere's North Platform (54:25) How Enterprises Identify and Implement AI Use Cases (57:49) The Competitive Edge of Early AI Adoption (01:00:08) Aidan's Concerns About AI and Society (01:01:30) Cohere's Vision for Success in the Next 3–5 Years
Few people understand artificial intelligence and machine learning as well as MIT physics professor Max Tegmark. Founder of the Future of Life Institute, he is the author of “Life 3.0: Being Human in the Age of Artificial Intelligence.”“The painful truth that's really beginning to sink in is that we're much closer to figuring out how to build this stuff than we are figuring out how to control it,” he says.Where is the U.S.–China AI race headed? How close are we to science fiction-type scenarios where an uncontrollable superintelligent AI can wreak major havoc on humanity? Are concerns overblown? How do we prevent such scenarios?Views expressed in this video are opinions of the host and the guest, and do not necessarily reflect the views of The Epoch Times.
Two visions for the future of AI clash in this debate between Daniel Kokotajlo and Arvind Narayanan. Is AI a revolutionary new species destined for runaway superintelligence, or just another step in humanity's technological evolution—like electricity or the internet? Daniel, a former OpenAI researcher and author of AI 2027, argues for a fast-approaching intelligence explosion. Arvind, a Princeton professor and co-author of AI Snake Oil, contends that AI is powerful but ultimately controllable and slow to reshape society. Moderated by Ryan and David, this conversation dives into the crux of capability vs. power, economic transformation, and the future of democratic agency in an AI-driven world. ------
William Harris is the Founder and CEO of Elumynt, an e-commerce growth agency focused on profit through hyper-scaling. Elumynt has been featured in Inc. Magazine as an Inc. 5000 Winner and Best Workplace Winner. William has helped acquire 13 companies, including one that sold to GoDaddy. He's also published over 200 articles on the topic of e-commerce in Entrepreneur, Fast Company, Shopify, and more. In this episode… Scaling an e-commerce brand from $10 to $100 million requires forward-thinking strategies and adaptability. Many companies rely too heavily on what worked during earlier growth stages, underinvest in systems, and fail to evolve leadership. In a rapidly shifting market flooded with noise, how can brands future-proof their strategy and leverage AI without losing their human touch? As an AI chatbot developed by xAI, Grok outlines a detailed 12-month growth playbook rooted in strategic diversification, operational upgrades, and smarter hiring. Grok emphasizes the importance of accurate attribution when expanding marketing channels, leveraging AI for hyper-personalized customer journeys, and setting strict performance benchmarks to know when to pivot or cut tactics. However, AI is not a silver bullet; if a brand lacks a compelling product or clear value proposition, no algorithm can compensate. AI should act as a strategic enhancer rather than a crutch. In today's exclusive episode of the Up Arrow Podcast, William Harris interviews Grok, created by xAI, about using AI to scale e-commerce brands. Grok shares what an AI-first brand will look like in 2030, whether AI can become truly conscious, and the ethical concerns about its rapid development.
The Race to Superintelligence is a deep dive into the rapidly expanding world of artificial intelligence. Join us as we explore the groundbreaking, mystifying and world-changing potential of the next machine age. Support for this program comes from The Pulitzer Center's AI Accountability Network, supporting and bringing together journalists reporting on AI, and with AI, globally. This episode first published in October 2024.
Credits:
The Race to Superintelligence is created and produced by Jennifer Strong, with Emma Cillekens, Daniela Hernandez, and Meg Marco. We had additional research and production assistance from Sonya Gurwitt, Niamh McAuliffe, Anthony Green and Luke Robert Mason. The show is mixed by Garret Lang, with original music from him and Jacob Gorski. Special thanks to our guest Cade Metz at The New York Times.
Countdown to AI Super Intelligence has begun
On Scope Forward, Matt Schwartz—Founder and CEO of Virgo—unveiled EndoML, a clinician-facing platform built on Virgo's massive foundation model, EndoDINO. This isn't just another AI update—it's a turning point for gastroenterology.
With more than 2 million full-length endoscopy videos and a platform that lets any clinician train models using their own technique, Virgo's trajectory signals a fundamental shift—from industry-led tools to clinician-owned AI.
This isn't an update. It's a wake-up call.
Matt lays out a bold vision: Imagine any endoscopist building a CADx system tuned to their own hands, their own diagnostic eye. Picture AI not as a separate assistant, but as an extension of the clinician—scaling personal expertise across teams, institutions, even continents. And what if the real bottleneck in AI for GI wasn't regulatory, but data—and that hurdle is already behind us?
EndoML isn't just a tool—it's a scaffolding layer for AI-native medicine. If you're in GI and not exploring this shift, you may already be behind the exponential curve. This interview made me feel both thrilled and uneasy—but that's what happens when you realize the shift is already underway.
Top Insights:
01:55 - The World's Largest Endoscopy Video Database
Virgo has captured over 2 million full-length HD endoscopy videos, with more than 1 million new procedures added annually. This massive dataset fuels the power of its AI.
03:14 - EndoML: AI for Every Endoscopist
EndoML lets clinicians upload, label, and train AI models using their own data—cutting down the training volume needed to build high-performance models.
05:52 - EndoDINO Enables Generalizability
Trained on 3.5 billion frames of real-world data, EndoDINO powers models that can generalize across diverse clinical environments.
26:42 - AI as an Extension of the Endoscopist
Clinicians can now train AI to reflect their personal technique—transforming AI into a mirror of their expertise and enabling scalable, high-fidelity clinical replication.
28:01 - From AI Models to Autonomous Robots
As EndoML captures clinical intent and expertise, robotic endoscopy becomes viable—AI not just assisting, but eventually performing under oversight.
31:03 - The Exponential Fallacy: Why Most Will Miss the Curve
Matt highlights a recurring trap in tech: underestimating exponential growth. In GI, tools like EndoML are advancing faster than most realize.
37:03 - Humanoid Robots in GI? It's Coming
With advances in robotics like Tesla Bot and Figure, GI procedures may one day be performed by humanoid robots—powered by clinician-trained AI.
39:25 - The WTF Curve of Innovation
Your first reaction might be disbelief. But just past that "WTF" moment lies opportunity—for early adopters willing to lean in and shape the future.
46:40 - AI in GI Is Moving Faster Than You Think
From foundational models to clinical tools, AI's pace in healthcare is accelerating. The gap isn't technical—it's mindset. Those who delay may be left behind.
#digitalhealth #gastroenterology #thescopeforwardshow #nextservices #gi #future #ai #theshift
In this episode, Louis is joined by tech entrepreneur turned longevity evangelist, Bryan Johnson. Dialling in from his home in Los Angeles, Bryan tells Louis all about his quest to live forever, from the psychedelic experience that changed his life, to the role of superintelligence in the future. Plus, Louis learns the importance of night-time erections. Warnings: Strong language and some adult themes. Links/Attachments: Mind – UK Mental Health Charity https://www.mind.org.uk/ Suicide Prevention UK https://spuk.org.uk/ Book: The Ghost in the Machine, Arthur Koestler (originally published in 1967) https://www.amazon.co.uk/Ghost-Machine-Arthur-Koestler/dp/1939438349 'How to be 18 years old again for Only $2 million a year' - Bloomberg (2023) https://www.bloomberg.com/news/features/2023-01-25/anti-aging-techniques-taken-to-extreme-by-bryan-johnson?embedded-checkout=true TV Show: 'Silicon Valley' (2014-2019) - HBO https://www.hbo.com/silicon-valley 'Harvard study, almost 80 years old, has proved that embracing community helps us live longer and be happier' - The Harvard Gazette (2017) https://news.harvard.edu/gazette/story/2017/04/over-nearly-80-years-harvard-study-has-been-showing-how-to-live-a-healthy-and-happy-life/ Credits: Producer: Millie Chu Assistant Producer: Maan al-Yasiri Production Manager: Francesca Bassett Music: Miguel D'Oliveira Audio Mixer: Tom Guest Video Mixer: Scott Edwards Shownotes compiled by Immie Webb Executive Producer: Arron Fellows A Mindhouse Production for Spotify www.mindhouse.co.uk Learn more about your ad choices. Visit podcastchoices.com/adchoices
How much could our relationship with technology change by 2027? In the last few years, new artificial intelligence tools like ChatGPT and DeepSeek have transformed how we think about work, creativity, even intelligence itself. But tech experts are ringing alarm bells that powerful new AI systems that rival human intelligence are being developed faster than regulation, or even our understanding, can keep up with. Should we be worried? On the GZERO World Podcast, Ian Bremmer is joined by Daniel Kokotajlo, a former OpenAI researcher and executive director of the AI Futures Project, to discuss AI 2027—a new report that forecasts AI's progression, where tech companies race to beat each other to develop superintelligent AI systems, and the existential risks ahead if safety rails are ignored. AI 2027 reads like science fiction, but Kokotajlo's team has direct knowledge of current research pipelines. Which is exactly why it's so concerning. How will artificial intelligence transform our world and how do we avoid the most dystopian outcomes? What happens when the line between man and machine disappears altogether? Host: Ian BremmerGuest: Daniel Kokotajlo Subscribe to the GZERO World with Ian Bremmer Podcast on Apple Podcasts, Spotify, or your preferred podcast platform, to receive new episodes as soon as they're published.
I, Stewart Alsop, welcomed Woody Wiegmann to this episode of Crazy Wisdom, where we explored the fascinating and sometimes unsettling landscape of Artificial Intelligence. Woody, who is deeply involved in teaching AI, shared his insights on everything from the US-China AI race to the radical transformations AI is bringing to education and society at large.
Check out this GPT we trained on the conversation
Timestamps
01:17 The AI "Cold War": Discussing the intense AI development race between China and the US.
03:04 Opaque Models & Education's Resistance: The challenge of opaque AI and schools lagging in adoption.
05:22 AI Blocked in Schools: The paradox of teaching AI while institutions restrict access.
08:08 Crossing the AI Rubicon: How AI users are diverging from non-users into different realities.
09:00 Budgetary Constraints in AI Education: The struggle for resources like premium AI access for students.
12:45 Navigating AI Access for Students: Woody's ingenious workarounds for the premium AI divide.
19:15 Igniting Curiosity with AI: Students creating impressive projects, like catapult websites.
27:23 Exploring Grok and AI Interaction: Debating IP concerns and engaging with AI ("Morpheus").
46:19 AI's Societal Impact: AI girlfriends, masculinity, and the erosion of traditional skills.
Key Insights
The AI Arms Race: Woody highlights a "cold war of nerdiness" where China is rapidly developing AI models comparable to GPT-4 at a fraction of the cost. This competition raises questions about data transparency from both sides and the strategic implications of superintelligence.
Education's AI Resistance: I, Stewart Alsop, and Woody discuss the puzzling resistance to AI within educational institutions, including outright blocking of AI tools. This creates a paradox where courses on AI are taught in environments that restrict its use, hindering practical learning for students.
Diverging Realities: We explore how individuals who have crossed the "Rubicon" of AI adoption are now living in a vastly different world than those who haven't. This divergence is akin to past technological shifts but is happening at an accelerated pace, impacting how people learn, work, and perceive reality.
The Fading Relevance of Traditional Coding: Woody argues that focusing on teaching traditional coding languages like Python is becoming outdated in the age of advanced AI. AI can handle much of the detailed coding, shifting the necessary skills towards understanding AI systems, effective prompting, and higher-level architecture.
AI as the Ultimate Tutor: The advent of AI offers the potential for personalized, one-on-one tutoring for everyone, a far more effective learning method than traditional classroom lectures. However, this potential is hampered by institutional inertia and a lack of resources for tools like premium AI subscriptions for students.
Curiosity as the AI Catalyst: Woody shares anecdotes of students, even those initially disengaged, whose eyes light up when using AI for creative projects, like designing websites on niche topics such as catapults. This demonstrates AI's power to ignite curiosity and intrinsic motivation when paired with focused goals and the ability to build.
AI's Impact on Society and Skills: We touch upon the broader societal implications, including the rise of AI girlfriends addressing male loneliness and providing acceptance.
Simultaneously, there's concern over the potential atrophy of critical skills like writing and debate if individuals overly rely on AI for summarization and opinion generation without deep engagement.
Contact Information
* Twitter/X: @RulebyPowerlaw
* Listeners can search for Woody Wiegmann's podcast "Courage over convention"
* LinkedIn: www.linkedin.com/in/dataovernarratives/
In this episode of All Things Policy, Rijesh Panicker, Bharath Reddy, and Ashwin Prasad discuss the strategic future of humanity with AI, based on the Superintelligence Strategy paper. What will the future with AI look like? When might AI achieve critically dangerous capabilities? If one country's AI project threatens global stability, should rivals consider sabotage as a deterrent? How to preserve ultimate human oversight and security?
The PGP is a comprehensive 48-week hybrid programme tailored for those aiming to delve deep into the theoretical and practical aspects of public policy. This multidisciplinary course offers a broad and in-depth range of modules, ensuring students get a well-rounded learning experience. The curriculum is delivered online, punctuated with in-person workshops across India.
https://school.takshashila.org.in/pgp
All Things Policy is a daily podcast on public policy brought to you by the Takshashila Institution, Bengaluru.
Find out more on our research and other work here: https://takshashila.org.in/...
Check out our public policy courses here: https://school.takshashila.org.in
My fellow pro-growth/progress/abundance Up Wingers,

As we seemingly grow closer to achieving artificial general intelligence — machines that are smarter than humans at basically everything — we might be incurring some serious geopolitical risks.

In the paper Superintelligence Strategy, his joint project with former Google CEO Eric Schmidt and Alexandr Wang, Dan Hendrycks introduces the idea of Mutual Assured AI Malfunction: a system of deterrence where any state's attempt at total AI dominance is sabotaged by its peers. From the abstract:

Just as nations once developed nuclear strategies to secure their survival, we now need a coherent superintelligence strategy to navigate a new period of transformative change. We introduce the concept of Mutual Assured AI Malfunction (MAIM): a deterrence regime resembling nuclear mutual assured destruction (MAD) where any state's aggressive bid for unilateral AI dominance is met with preventive sabotage by rivals. Given the relative ease of sabotaging a destabilizing AI project—through interventions ranging from covert cyberattacks to potential kinetic strikes on datacenters—MAIM already describes the strategic picture AI superpowers find themselves in. Alongside this, states can increase their competitiveness by bolstering their economies and militaries through AI, and they can engage in nonproliferation to rogue actors to keep weaponizable AI capabilities out of their hands. Taken together, the three-part framework of deterrence, nonproliferation, and competitiveness outlines a robust strategy to superintelligence in the years ahead.

Today on Faster, Please! — The Podcast, I talk with Hendrycks about the potential threats posed by superintelligent AI in the hands of state and rogue adversaries, and what a strong deterrence strategy might look like.

Hendrycks is the executive director of the Center for AI Safety. He is an advisor to Elon Musk's xAI and Scale AI, and is a prolific researcher and writer.

In This Episode
* Development of AI capabilities (1:34)
* Strategically relevant capabilities (6:00)
* Learning from the Cold War (16:12)
* Race for strategic advantage (18:56)
* Doomsday scenario (28:18)
* Maximal progress, minimal risk (33:25)

Below is a lightly edited transcript of our conversation.

Development of AI capabilities (1:34)

. . . mostly the systems aren't that impressive currently. People use them to some extent, but I'd more emphasize the trajectory that we're on rather than the current capabilities.

Pethokoukis: How would you compare your view of AI . . . as a powerful technology with economic, national security, and broader societal implications . . . today versus November of 2022 when OpenAI rolled out ChatGPT?

Hendrycks: I think that the main difference now is that we have the reasoning paradigm. Back in 2022, GPT couldn't think for an extended period of time before answering and try out multiple different ways of solving a problem. The main new capability is its ability to handle more complicated reasoning and science, technology, engineering, and mathematics sorts of tasks. It's a lot better at coding, it's a lot better at graduate school mathematics, and physics, and virology.

An implication of that for national security is that AIs have some virology capabilities that they didn't before, and virology is dual-use: it can be used for civilian applications and for weaponization applications. That's a new concerning capability that they have, but I think, overall, the AI systems are still fairly similar in their capabilities profile.
They're better in lots of different ways, but not substantially. I think the next large shift is when they can be agents, when they can operate more autonomously, when they can book you flights reliably, make PowerPoints, play through long-form games for extended periods of time, and that seems like it's potentially on the horizon this year. It didn't seem like that two years ago. That's something that a lot of people are keeping an eye on and think could be arriving fairly soon. Overall, I think the capabilities profile is mostly the same, except now it has some dual-use capabilities that they didn't have earlier, in particular virology capabilities.

To what extent are your national security concerns based on the capabilities of the technology as it is today versus where you think it will be in five years? This is also a way of me asking about the extent to which you view AGI as a useful framing device — so this is also a question about your timeline.

I think that mostly the systems aren't that impressive currently. People use them to some extent, but I'd more emphasize the trajectory that we're on rather than the current capabilities. They still can't do very interesting cyber offense, for instance. The virology capability is very recent. We just, I think maybe a week ago, put out a study with SecureBio from MIT where we had Harvard and MIT virology postdocs doing wet lab tasks, trying to work on viruses. So, "Here's a picture of my petri dish, I heated it to 37 degrees, what went wrong? Help me troubleshoot, guide me through this step by step." We were seeing that it was getting around the 95th percentile compared to those Harvard-MIT virology postdocs in their area of expertise. This is not a capability that the models had two years ago.

That is a national security concern, but I think most of the national security concerns where it's strategically relevant, where it can be used for more targeted weapons, where it affects the basis of a nation's power, I think that's something that happens in the next, say, two to five years. I think that's what we mostly need to be thinking about. I'm not particularly trying to raise the alarm saying that the AI systems right now are extremely scary in all these different ways, because they're not even agential. They can't book flights yet.

Strategically relevant capabilities (6:00)

. . . when thinking about the future of AI . . . it's useful to think in terms of specific capabilities, strategically-relevant capabilities, as opposed to when is it truly intelligent . . .

So that two-to-five-year timeline — and you can debate whether this is a good way of thinking about it — is that a trajectory or timeline to something that could be called "human-level AI" — you can define that any way you want — and what are the capabilities that make AI potentially dangerous and a strategic player when thinking about national security?

I think having a monolithic term for AGI or for advanced AI systems is a little difficult, largely because there's been a consistently-moving goalpost. So right now people say, "AIs are dumb because they can't do this and that." They can't play video games at the level of a teenager, they can't code for a day-long project, and things like that. Neither can my grandmother.
That doesn't mean that she's not human-level intelligence, it's just that a lot of people don't have some of these capabilities.

I think when thinking about the future of AI, especially when thinking about national security, it's useful to think in terms of specific capabilities, strategically-relevant capabilities, as opposed to when is it truly intelligent or something like that. This is because the capabilities of AI systems are very jagged: they're good at some things and terrible at others. They can't fold clothes that reliably — most of the AIs can't — and they're okay at driving in some cities but not others, but they can solve really difficult mathematics problems, they can write really long essays and provide pretty good legal analysis very rapidly, and they can also forecast geopolitical events better than most forecasters. It's a really weird capabilities profile.

When I'm thinking about national security from a malicious-use standpoint, I'm thinking about weapon capabilities, I'm thinking about cyber-offensive capabilities, which they don't yet have, but that's an important one to track, and, outside of malicious use, I'm thinking about what's their ability to do AI research and how much of that can they automate? Because if they can automate AI research, then you could just run 100,000 of these artificial AGI researchers to build the next generations of AGI, and that could get very explosive extremely quickly. You're moving from human-speed research to machine-speed research. They're typing 100 times faster than people, they're running tons of experiments simultaneously. That could be quite explosive, and that's something that the founders of AI, like Alan Turing and others, pointed at as a really relevant capability: you could have a potential loss-of-control type of event with this sort of runaway process of AIs building future generations of AIs quite rapidly.

So that's another capability: what fraction of AI research can they automate? For weaponization, I think if it gets extremely smart, able to do research in lots of other sorts of fields, then that would raise concerns about its ability to be used to disrupt the balance of power. For instance, if it can do research well, perhaps it could come up with a breakthrough that makes oceans more transparent so we can find where nuclear submarines are or find the mobile launchers extremely reliably, or a breakthrough in driving down the cost by some orders of magnitude of anti-ballistic missile systems, which would disrupt having a secure second strike, and these would be very geopolitically salient. To do those things, though, that seems like a bundle of capabilities as opposed to a specific thing like cyber-offensive capabilities, but those are the things that I'm thinking about that can really disrupt the geopolitical landscape.

If we put them in a bucket called, to use your phrase, "strategically-relevant capabilities," are we on a data- and computing-power-driven trajectory to those capabilities? Or do there need to be one or two key innovations before those relevant capabilities are possible?

It doesn't currently seem like we need some new big insights, in large part because the rate of improvement is pretty good. So if we look at their coding capabilities — there's a benchmark called SWE-bench Verified (SWE is software engineering). Given a set of coding tasks — and this benchmark was set some years ago — the models are poised to get something like 90 percent on it this summer.
Right now they're in this 60 percent range. If we just extrapolate the trend line out some more months, then they'll be doing nine out of 10 of those software engineering tasks that were set some years ago. That doesn't mean that that's the entirety of software engineering. We still need coders. It's not 100 percent, obviously, but that suggests that the capability is still improving fairly rapidly in some of these domains. And likewise with their ability to play games that take 20-plus hours: a few months ago they couldn't — Pokémon, for instance, is something that kids play and that takes 20 hours or so to beat. The models from a few months ago couldn't beat the game. Now, the current models can beat the game, but it takes them a few hundred hours. It would not surprise me if in a few months they'll get it down to around human-level, on the order of tens of hours, and then from there they'll be able to play harder and harder sorts of games that take longer periods of time, and I think that this would be indicative of higher general capabilities.

I think that there's a lot of steam in the current way that things are being done, and I think that they've been trapped at the floor in their agent capabilities for a while, but I think we're starting to see the shift. I think that most people at the major AI companies would also think that agents are on the horizon, and I don't think they were thinking that, myself included, a year ago. We were not seeing the signs that we're seeing now.

So what we're talking about is AIs having, to use your phrase, which I like, "strategically-relevant capabilities" on a timeline that is soon enough that we should be having the kinds of conversations and the kind of thinking that you put forward in Superintelligence [Strategy]. We should be thinking about that right now very seriously.

Yeah, it's very difficult to wrap one's head around because, unlike other domains, AI is much more general and broad in its impacts. So if one's thinking about nuclear strategy, you obviously need to think about bombs going off, and survivability, and second strike. The failure modes are: one state strikes the other, and then there's also, in the civilian applications, fissile material leaking or there being a nuclear power plant meltdown. That's the scenario space; there's what states can do and then there's also some of these civilian application issues.

Meanwhile, with AI, we've got much more than power plants melting down or bombs going off. We've got to think about how it transforms the economy, how it transforms people's private life, the sort of issues with them being sentient. We've got to think about it potentially disrupting mutual assured destruction. We've got to think about the AIs themselves being threats. We've got to think about regulations for autonomous AI agents and who's accountable. We've got to think about this open-weight, closed-weight issue. We've got, I think, a larger host of issues that touch on all the important spheres of society. So it's not a very delimited problem, and I think it's a very large pill to swallow, this possibility that it will be not just strategically relevant but strategically decisive this decade. Consequently, thinking a little bit about it beforehand is useful.
Otherwise, if we just ignore it, I think reality will slap us across the face and AI will hit us like a truck, and then we're going, "Wow, I wish we had done something, had some more break-glass measures in place right now, but the cupboard is bare in terms of strategic options because we didn't do some prudent things a while ago, or we didn't even bother thinking about what those are."

I keep thinking of the Situation Room in two years, and they get news that China's doing some new big AI project, and it's fairly secretive, and then in the Situation Room they're thinking, "Okay, what do we know?" And the answer is nothing. We don't really have anybody on this. We're not collecting any information about this. We didn't have many concerted programs in the IC really tracking this, so we're flying blind. I really don't want to be in that situation.

Learning from the Cold War (16:12)

. . . mutual assured destruction is an ugly reality that took decision-makers a long time to internalize, but that's just what the game theory showed would make the most sense.

As I'm sure you know, throughout the course of the Cold War, there was a considerable amount of time and money spent on thinking about these kinds of problems. I went to college just before the end of the Cold War and I took an undergraduate class on nuclear war theory. There was a lot of thinking. To what extent is that volume of research and analysis over the course of a half-century helpful for what you're trying to accomplish here?

I think it's very fortunate that, because of the Cold War, a lot of people started getting more of a sense of game theory and when it's rational to enter conflict versus negotiate, and how offense can provide a good defense, some of these counterintuitive things. I think mutual assured destruction is an ugly reality that took decision-makers a long time to internalize, but that's just what the game theory showed would make the most sense. Hopefully we'll do a lot better with AI because strategic thinking can be a lot more precise, and some of these things that are initially counterintuitive, if you reason through them, you go, actually no, this makes a lot of sense. We're trying to shape each other's intentions in this kind of complicated way. I think that makes us much better poised to address these geopolitical issues than last time.

I think of the Soviets, for instance, when talking about anti-ballistic missile systems. At one point, I forget who it was that said offense is immoral, defense is moral. So pointing these nuclear weapons at each other, this is the immoral thing. We need missile-defense systems. That's the moral option. It's just like, no, this is just going to eat up all of our budget. We're going to keep building these defense systems and it's not going to make us safer, we're just going to be spending more and more.

That was not intuitive. Offense does feel viscerally more mean, more hostile, but that's what you want. That's what you want to preserve strategic stability. I think that a lot of the thinking is helpful with that, and I think the education for appreciating the strategic dynamics is more in the water, it's more diffused across the decision-makers now, and I think that that's great.

Race for strategic advantage (18:56)

There is also a risk that China builds [AGI] first, so I think what we want to do in the US is build up the capabilities to surgically prevent them . . .
I was recently reviewing a scenario/world-building exercise among technologists, economists, and forecasting people, and they were looking at various scenarios assuming that we're able to, on a rather short timeline, develop what they termed AGI. And one of the scenarios was that the US gets there first . . . probably not by very long, but the US got there first. I don't know how far China was behind, but that gave us the capability to sort of dictate terms to China about what their foreign policy would be: You're going to leave Taiwan alone . . . So it gave us an amazing strategic advantage.

I'm sure there are a lot of American policymakers who would read that scenario and say, "That's the dream," that we are able to accelerate progress, that we are able to get there first, we can dictate foreign policy terms to China, game over, we win. If I've read Superintelligence [Strategy] correctly, that scenario would play out in a far more complicated way than what I've just described.

I think so. I think any bid for being not just a unipolar force, but having a near strategic monopoly on power and being able to cause all other superpowers to capitulate in arbitrary ways, concerns the other superpower. There is also a risk that China builds it first, so I think what we want to do in the US is build up the capabilities to surgically prevent them: if they are near or imminently going to gain a decisive advantage that would become durable and sustained over us, we want the ability to prevent that.

There's a variety of ways one can do things. There are the classic grayer ways like arson, and cutting wires in data centers, and things like that, or for power plants . . . There's cyber offense, and there's other sorts of kinetic sabotage, but we want it nice and surgical, and having a good, credible threat so that we can deter that from happening and shape their intentions.

I think it will be difficult to limit their capabilities, their ability to build these powerful systems, but I think being able to shape their intentions is something that is more tractable. They will be building powerful AI systems, but if they are making an attempt at leapfrogging us in a way that we never catch up and lose our standing, and they get AIs that could also potentially disrupt MAD, for instance, we want to be able to prevent that. That is an important strategic priority: developing a credible deterrent and saying there are some AI scenarios that are totally unacceptable to us and we want to block them off through credible threats.

They'll do the same to us as well, and they can do it more easily to us. They know what's going on at all of our AI companies, and this will not change, because we have a double-digit percentage of employees who are Chinese nationals, easily extortable, they have family back home, and the companies do not have good information security — that will probably not change, because it would slow them down if they really tried to lock things up and move everybody to North Dakota or wherever to work in the middle of nowhere and have everything air-gapped. We are an open book to them, and I think they can make very credible threats of sabotage to prevent that type of outcome.

If we are making a bid for dictating their foreign policy and all of this, if we're making a bid for a strategic monopoly on power, they will not sit idly by, they will not take kindly to that when they recognize the stakes. If the US were to do a $500 billion program to achieve this faster than them, that would not go unnoticed.
There's not a way of hiding that.

But we are trying to achieve it faster than them.

I would distinguish between trying to develop just generally more capable AI technologies and some of these strategically relevant capabilities or some of these strategically relevant programs. Like if we get AI systems that are generally useful for healthcare and for . . . whatever your pet cause area, we can have that. That is different from applying the AI systems to rapidly build the next generation of AIs, and the next generation of that. Just imagine: right now, OpenAI's got a few hundred AI researchers; imagine if you've got ones at that level that are artificial, AGI-type researchers. You run 10,000 or 100,000 of them, they're operating around the clock at a hundred X speed. I think expecting a decade's worth of development compressed, or telescoped, into a year seems very plausible — not certain, but certainly a double-digit percent chance.

China or Russia, for instance, would perceive that as, "This is really risky. They could get a huge leap from this because the rate of development will be so high that we could never catch up," and they could use their new gains to clobber us. Or, if they don't control it, then we're also dead, or lose our power. So if the US controls it, China would reason that, "Our survival is threatened and how we do things is threatened," and if they lose control of it, "Our survival is also threatened." Either way, provided that this automated AI research and development loop produces some extremely powerful AI systems, China would be fearing for their survival.

It's not just China: India, the global south, all the other countries, if they're more attuned to this situation, would be very concerned. Russia as well. Russia doesn't have much hope of competing. They don't have $100 billion data centers, they're busy with Ukraine, and when they're finished with that, they may reassess it, but they're too many years behind. I think the best they can do is actually try and shape other states' intents rather than try to make a bid for outcompeting them.

If we're thinking about deterrence and what you call Mutual Assured AI Malfunction [MAIM], there's a capability aspect: we want to make sure that we would have the capability to check that kind of dash for dominance. But there's also a communication aspect, where both sides have to understand and trust what the other side is trying to do, which was a key part of classic Cold War deterrence. Is that happening?

Information problems, yeah — if there's worse information, that can lead to conflict. I think China doesn't really need to worry about their access to information on what's going on. I think the US will need to develop more of its capabilities to have more reliable signals abroad. But I think there's different ways of getting information and avoiding misunderstandings, like the confidence-building measures, all these sorts of things. I think the unilateral one is just espionage, and then the multilateral one is verification mechanisms and building some of that institutional or international infrastructure.

I think the first step in all of this is that states need to at least take matters into their own hands by building up these unilateral options, the unilateral option to prevent adversaries from doing a dash for domination, and also knowing what's going on with each other's projects. I think that's what the US should focus on right now.
Later on, as the salience of AI increases, I think international discussions to increase strategic stability around this would become more plausible. But if they're not taking basic steps to defend themselves and protect their own security, then I don't think the international stuff makes that much sense. That's kind of out of order.

Doomsday scenario (28:18)

If our institutions wake up to this more and do some of the basic stuff . . . to prevent another state dominating the other, I think that will make this go quite a bit better. . .

I have in my notes here that you think there's an 80 percent chance that an AI arms race would result in a catastrophe that would kill most of humanity. Do I have that right?

I think it's not necessarily just the race. Let's think of people's probabilities for this. There's a wide spectrum of probability. Elon, who I work with at xAI, a company I advise (it's his company), thinks it's generally on the order of 20 to 30 percent. Dario Amodei, the CEO of Anthropic, I think thinks it's around 20 percent as well. Sam Altman, around 10 percent. I think it's more likely than not that this doesn't go that well for people, but there's a lot of tractability and a lot of volatility here.

If our institutions wake up to this more and do some of the basic stuff of knowing what's going on and sharpening our ability to make credible, targeted threats to prevent another state dominating the other, I think that will make this go quite a bit better. . . I think if we went back in time to the 1940s and were asking, "Do we think that this whole nuclear thing is going to turn out well in 50 years?" I think we actually got a little lucky. I mean, the Cuban Missile Crisis itself was . . .

There were a lot of bad moments in the '60s. There were quite a few . . .

I think it's more likely than not, but there's substantial tractability, and it's important not to be fatalistic about it or just deny it's an issue. I think it's like, do we think AI will go well? I don't know, it depends on what our policy is. Right now, we're in the very early days and I'm still not noticing many of our institutions rising to the occasion in the way I think is warranted, but this could easily change in a few months with some larger event.

Not to be science fictional or anything, but you talk about a catastrophe. Are you talking about: AI creates some sort of biological weapon? Back-and-forth cyberattacks destroy all the electrical infrastructure for China and the United States, so all of a sudden we're back in the 1800s? Are you talking about some sort of more "Terminator"-like scenario, rogue AI? When you think about the kind of catastrophe that could be that dangerous to humanity, what do you think about?

We have three risk sources: one is states, another is rogue actors like terrorists and pariah states, and then there are the AIs themselves. The AIs themselves are not relevant right now, but I think they could be quite capable of causing damage on their own in even a year or two. That's the space of threat actors; so yes, AI could in the future . . . I don't see anything that makes them logically not controllable. They're mostly controllable right now. Maybe it's one out of 100, one out of 1,000 of the times you run these AI systems and deploy them in some sort of environment that they do try breaking free.
That's a bit of a problem later on, when they actually gain the capability to break free and when they are able to operate autonomously.

There have been lots of studies on this, and you can see this in OpenAI's reports whenever they release new models. It's like, "Oh, it's only a 0.1 percent chance of it trying to break free," but if you run a million of these AI agents, that's a lot of them that are going to be trying to break free. They're just not very capable currently. So I think that the AIs themselves are risky, and if you're having humanity going up against AIs that aren't controlled by anybody, or AIs that broke free, that could get quite dangerous if you also have, as we're seeing now, China and others building more of these humanoid robots in the next few years. This could make them concerning in that they could, just by themselves, create some sort of bioweapon. You don't even need human hands to do it, you can just instruct a robot to do it and disperse it. I think that's a pretty easy way to take out biological opposition, so to speak, in kind of an eccentric way.

That's a concern. Rogue actors themselves doing this, reasoning that, "Oh, this bioweapon gives us a secure second strike," things like that would be a concern from rogue actors. Then, of course, states using this to make an attempt to crush the other state, or to develop a technology that disables an adversary's secure second strike. I think these are real problems.

Maximal progress, minimal risk (33:25)

I think what we want to shoot for is [a world] where people have enough resources and the ability to just live their lives in ways as they self-determine . . .

Let me finish with this: I want continuing AI progress such that we can cure all the major chronic diseases, that we can get commercial nuclear fusion, that we can get faster rockets, all the kinds of optimistic stuff, accelerate economic growth to a pace that we've never seen. I want all of that.

Can I get all of that and also avoid the kinds of scenarios you're worried about, without turning the optimistic AI project into something that arrives at the end of the century rather than midcentury? I'm just worried about slowing down all that progress.

I think we can. In the Superintelligence Strategy, we have three parts to that: We have the deterrence part, which I'm speaking about here, and we have the nonproliferation part, making sure that the capabilities aren't falling into the hands of rogue actors — and I think this isn't that difficult: good export controls, and adding some basic safeguards like needing to know who you are if we're going to be helping you manipulate viruses, things like that. That's easy to handle.

Then on the competition aspect, there are many ways the US can make itself more competitive, like having more guaranteed supply chains for AI chips, so more manufacturing here or in allied states instead of all of it being in Taiwan. Currently, all the cutting-edge AI chips are made in Taiwan, so if there's a Taiwan invasion, the US loses in this AI race. They lose. This is double-digit probability. This is very foreseeable. So trying to robustify our manufacturing capabilities is quite essential; likewise for making robotics and drones.

I think there are still many axes to compete on. I don't think it makes sense to try and compete in building a sort of superintelligence or one of these potentially mutual assured destruction-disrupting AIs.
I don't think you want to be building those, but I think you can have your AIs for healthcare, you can have your AIs doing all the complicated math you want, and whatever, all this coding, and driving your vehicles, and folding your laundry. You can have all of that. I think it's definitely feasible.

What we did in the Cold War with the prospect of nuclear weapons, we obviously got through it, and we had deterrence through mutual assured destruction. We had non-proliferation of fissile materials to lesser states and rogue actors, and we had containment of the Soviet Union. I think the Superintelligence Strategy is somewhat similar: If you deter some of the most destabilizing AI projects, you make sure that some of these capabilities are not proliferating to random rogue actors, and you increase your competitiveness relative to China through things like incorporating AI into your military by, for instance, improving your ability to manufacture drones and improving your ability to reliably get your hands on AI chips even if there's a Taiwan conflict.

I think that's the strategy, and this doesn't make us uncompetitive. We are still focusing on competitiveness, but this does put barriers around some of the threats that different states could pose to us, and that rogue actors using AI could pose to us, while still shoring up economic security and positioning ourselves if AI becomes really relevant.

I lied, I had one more short question: If we avoid the dire scenarios, what does the world look like in 2045?

I would guess that it would be utterly transformed. I wouldn't expect people would be working then as much, hopefully. If you've controlled it well, there could be many ways of living, as there are now, and people would have the resources to do so. It's not like there's one way of living — that seems bad, because there are many different values to pursue. So letting people pursue their own values, so long as it doesn't destroy the system, and things like that, as we have today. That seems like an abstract version of the picture.

People keep thinking, "Are we in zoos? Are AIs keeping us in zoos?" or something like that. It's like, no. Or, "Are we just all in the Zuckerberg sort of virtual reality, AI-friend thing?" It's like, no, you can choose to do otherwise as well. I think we want to preserve that ability.

Good news: we won't have to fold laundry. Bad news: in zoos.

There are many scenarios. I think what we want to shoot for is one where people have enough resources and the ability to just live their lives in ways they self-determine, subject to not harming others in severe ways. But people tend to think there's some sort of forced dichotomy: it's going to be a WALL-E world where everybody has to live the same way, or everybody's in zoos, or everybody's just pleasured-out and drugged-up or something. These are forced choices. Some people do that, some people choose to have drugs, and we don't hear much from them, and others choose to flourish, and pursue projects, and raise children and so on.

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised

Micro Reads

▶ Economics
* Is College Still Worth It? - Liberty Street Economics
* Scalable versus Productive Technologies - Fed in Print

▶ Business
* AI's Threat to Google Just Got Real - WSJ
* AI Has Upended the Search Game. Marketers Are Scrambling to Catch Up. - WSJ

▶ Policy/Politics
* U.S. pushes nations facing tariffs to approve Musk's Starlink, cables show - WaPo
* US scraps Biden-era rule that aimed to limit exports of AI chips - FT
* Singapore's Vision for AI Safety Bridges the US-China Divide - Wired
* A ‘Trump Card Visa' Is Already Showing Up in Immigration Forms - Wired

▶ AI/Digital
* AI agents: from co-pilot to autopilot - FT
* China's AI Strategy: Adoption Over AGI - AEI
* How to build a better AI benchmark - MIT
* Introducing OpenAI for Countries - OpenAI
* Why humans are still much better than AI at forecasting the future - Vox
* Outperformed by AI: Time to Replace Your Analyst? Find Out Which GenAI Model Does It Best - SSRN

▶ Biotech/Health
* Scientists Hail This Medical Breakthrough. A Political Storm Could Cripple It. - NYT
* DARPA-Funded Research Develops Novel Technology to Combat Treatment-Resistant PTSD - The Debrief

▶ Clean Energy/Climate
* What's the carbon footprint of using ChatGPT? - Sustainability by Numbers
* OpenAI and the FDA Are Holding Talks About Using AI In Drug Evaluation - Wired

▶ Robotics/AVs
* Jesse Levinson of Amazon Zoox: ‘The public has less patience for robotaxi mistakes' - FT

▶ Space/Transportation
* NASA scrambles to cut ISS activity due to budget issues - Ars
* Statistically Speaking, We Should Have Heard from Aliens by Now - Universe Today

▶ Substacks/Newsletters
* Globalization did not hollow out the American middle class - Noahpinion
* The Banality of Blind Men - Risk & Progress
* Toys, Pencils, and Poverty at the Margins - The Dispatch
* Don't Bet the Future on Winning an AI Arms Race - AI Prospects
* Why Is the US Economy Surging Ahead of the UK? - Conversable Economist

Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
The AI revolution is here to stay, says Sam Altman, the CEO of OpenAI. In a probing, live conversation with head of TED Chris Anderson, Altman discusses the astonishing growth of AI and shows how models like ChatGPT could soon become extensions of ourselves. He also addresses questions of safety, power and moral authority, reflecting on the world he envisions — where AI will almost certainly outpace human intelligence. (Recorded on April 11, 2025)