POPULARITY
PNR: This Old Marketing | Content Marketing with Joe Pulizzi and Robert Rose
Huge research reports uncover the obvious. Television and "traditional media" are going obsolete, as social media revenue and watch/listen time dominate. It is indeed the end of the living room. The boys review reports from WPP and NiemanLab. In marketing winners, Terry Moran launches a Substack after being fired. But shouldn't he have had one already? Contractual reasons aside, the boys detail a marketing/creator survival plan that's not just a nice-to-have. Rants and raves include Answer Engine Optimization Hype and the GENIUS Act. ----- This week's links: Creators Overtake Traditional Media The Brand/Studio Line Blurs Social Media Overtakes TV Ads on WhatsApp Terry Moran Launches Substack Journalist Survival Plan Genius Act Progress ----- This week's sponsor: INBOUND 2025 features an incredible lineup including Amy Poehler, Dario Amodei, Dwarkesh Patel, Sean Evans (Hot Ones), Marques Brownlee, Glennon Doyle, and more. Get actionable insights you can implement immediately to grow your business...San Francisco September 3rd-5th, 2025. Go to inbound.com/register to secure your spot at INBOUND 2025. ------- Liked this show? SUBSCRIBE to this podcast on Spotify, Apple, Google and more. Catch past episodes and show notes at ThisOldMarketing.com. Catch and subscribe to our NEW show on YouTube. NOTE: You can get captions there. Subscribe to Joe Pulizzi's Orangeletter and get two free downloads direct from Joe. Subscribe to Robert Rose's newsletter at Seventh Bear.
Dwarkesh Patel is the host of the Dwarkesh Podcast. He joins Big Technology Podcast to discuss the frontiers of AI research, sharing why his timeline for AGI is a bit longer than those of the most enthusiastic researchers. Tune in for a candid discussion of the limitations of current methods, why continuous AI improvement might help the technology reach AGI, and what an intelligence explosion looks like. We also cover the race between AI labs, the dangers of AI deception, and AI sycophancy. It's a deep discussion about the state of artificial intelligence, and where it's going. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
PNR: This Old Marketing | Content Marketing with Joe Pulizzi and Robert Rose
Joe and Robert break down Mary Meeker's "Future of AI" report and discuss the biggest opportunities and threats for marketers and creators in the future. Disney sues Midjourney in yet another lawsuit against AI companies. When will it stop and why should we care? And Meta invests $15 billion more in AI, trying to catch up in the AI race. In winners and losers, Joe discusses how to get found in AI searches and Robert talks about how the mainstream media failed with its LA coverage. Rants and raves include a new creator economy lobbying group from MatPat and a boost to musical theater. ----- This week's links: Disney Sues Midjourney The Future of AI Report Meta Makes $15 Billion Bet on AGI How to Get Found in AI Search [Chart] MatPat Starts Creator Caucus Live Theater Gets a Boost ----- This week's sponsor: INBOUND 2025 features an incredible lineup including Amy Poehler, Dario Amodei, Dwarkesh Patel, Sean Evans (Hot Ones), Marques Brownlee, Glennon Doyle, and more. Get actionable insights you can implement immediately to grow your business...San Francisco September 3rd-5th, 2025. Go to inbound.com/register to secure your spot at INBOUND 2025. ------- Liked this show? SUBSCRIBE to this podcast on Spotify, Apple, Google and more. Catch past episodes and show notes at ThisOldMarketing.com. Catch and subscribe to our NEW show on YouTube. NOTE: You can get captions there. Subscribe to Joe Pulizzi's Orangeletter and get two free downloads direct from Joe. Subscribe to Robert Rose's newsletter at Seventh Bear.
PNR: This Old Marketing | Content Marketing with Joe Pulizzi and Robert Rose
Did someone say McDonald's is bringing back snack wraps? In other news, who knew that artificial intelligence would force brands and publishers toward an indie-creator model? If Google AI is undermining journalism, perhaps the way out is through the individual human creator. In another AI scandal, Builder.ai collapses, even with a large sum of money invested by Microsoft. And Netflix adds advertising to its IP. Good decision? Winners and losers include Ohio Bike Week and ELF/Hailey Bieber. Rants and raves include WaPo's new mission statement and the new NYTimes lawsuit against ChatGPT. ----- This week's links: Google AI Mode Undermining Journalism Builder.ai Collapses Netflix Adds Ad Tech Chain Restaurants Go Nostalgia ELF Buys Rhode WaPo's New Mission Statement ----- This week's sponsor: INBOUND 2025 features an incredible lineup including Amy Poehler, Dario Amodei, Dwarkesh Patel, Sean Evans (Hot Ones), Marques Brownlee, Glennon Doyle, and more. Get actionable insights you can implement immediately to grow your business...San Francisco September 3rd-5th, 2025. Go to inbound.com/register to secure your spot at INBOUND 2025. ------- Liked this show? SUBSCRIBE to this podcast on Spotify, Apple, Google and more. Catch past episodes and show notes at ThisOldMarketing.com. Catch and subscribe to our NEW show on YouTube. NOTE: You can get captions there. Subscribe to Joe Pulizzi's Orangeletter and get two free downloads direct from Joe. Subscribe to Robert Rose's newsletter at Seventh Bear.
Firms are the means of economic progress. India's micro, small and medium enterprises have been hobbled for decades, and flounder even today. Sudhir Sarnobat and Narendra Shenoy join Amit Varma in episode 419 of The Seen and the Unseen to discuss this landscape -- and Sudhir's brilliant new venture that aims to tackle this. (FOR FULL LINKED SHOW NOTES, GO TO SEENUNSEEN.IN.) Also check out: 1. Sudhir Sarnobat on Twitter and LinkedIn. 2. Naren Shenoy on Twitter, Instagram and Blogspot. 3. How Frameworks — Sudhir Sarnobat's new venture. 4. Narendra Shenoy and Mr Narendra Shenoy — Episode 250 of The Seen and the Unseen. 5. Sudhir Sarnobat Works to Understand the World -- Episode 350 of The Seen and the Unseen. 6. We Are All Amits From Africa — Episode 343 of The Seen and the Unseen (w Krish Ashok and Naren Shenoy). 7. You're Ugly and You're Hairy and You're Covered in Shit but You're Mine and I Love You -- Episode 362 of The Seen and the Unseen (w Krish Ashok and Naren Shenoy). 8. The Teaching Learning Community. 9. Ascent Foundation. 10. The Beauty of Finance -- Episode 21 of Everything is Everything. 11. Other episodes of Everything is Everything on firms: 1, 2, 3, 4. 12. The Incredible Insights of Timur Kuran -- Episode 349 of The Seen and the Unseen. 13. Restaurant Regulations in India — Episode 18 of The Seen and the Unseen (w Madhu Menon). 14. Restart -- Mihir Sharma. 15. Backstage — Montek Singh Ahluwalia. 16. The Life and Times of Montek Singh Ahluwalia — Episode 285 of The Seen and the Unseen. 17. The Devil's Dictionary -- Ambrose Bierce. 18. The Bad and Complex Tax -- Episode 74 of The Seen and the Unseen (w Shruti Rajagopalan). 19. They Stole a Bridge. They Stole a Pond -- Amit Varma. 20. Bihar real estate brokers build illegal bridge on river. 21. The Globalisation Episode -- Episode 95 of Everything is Everything. 22. Is Globalization Doomed? -- Episode 96 of Everything is Everything 23. The Brave New Future of Electricity -- Episode 40 of Everything is Everything. 24. The Case for Nuclear Electricity -- Episode 78 of Everything is Everything. 25. अंतू बरवा -- Pu La Deshpande. 26. Testaments Betrayed -- Milan Kundera. 27. The Double ‘Thank You' Moment — John Stossel. 28. Marginal Revolution University. 29. The Bankable Wisdom of Harsh Vardhan — Episode 352 of The Seen and the Unseen. 30. Regrets for my Old Dressing Gown -- Denis Diderot. 31. Good to Great -- Jim Collins. 32. Economics in One Lesson -- Henry Hazlitt. 33. The Goal -- Eliyahu Goldratt. 34. The Toyota Way -- Jeffrey Liker. 35. Start With Why -- Simon Sinek. 36. The Lean Startup -- Eric Reis. 37. The Hard Thing About Hard Things -- Ben Horowitz. 38. Romancing the Balance Sheet -- Anil Lamba. 39. Connect the Dots -- Rashmi Bansal. 40. Zero to One -- Peter Thiel. 41. Acquired, Lenny's Podcast, EconTalk, Work Life, Rethinking, The Knowledge Project, How I Built This, Everything is Everything, HBR Podcast, Ideacast, Deep Questions. 42. How Family Firms Evolve -- Episode 34 of Everything is Everything. 43. Can We Build Switzerland in India? -- Episode 58 of Everything is Everything. 44. Dwarkesh Patel and Hannah Fry. 45. Habuild. This episode is sponsored by CTQ Compounds. Check out The Daily Reader and FutureStack. Use the code UNSEEN for Rs 2500 off. Amit Varma and Ajay Shah have launched a new course called Life Lessons, which aims to be a launchpad towards learning essential life skills all of you need. For more details, and to sign up, click here. 
Amit and Ajay also bring out a weekly YouTube show, Everything is Everything. Have you watched it yet? You must! And have you read Amit's newsletter? Subscribe right away to The India Uncut Newsletter! It's free! Also check out Amit's online course, The Art of Clear Writing. Episode art: ‘Battleground' by Simahina.
DAMION

1. Kohl's CEO Fired for Funneling Business to Romantic Partner 10
Kohl's boss Ashley Buchanan tried to funnel business to a romantic partner and lost his job. It wasn't the first time their personal and professional lives had crossed.
Kohl's fired Buchanan on Thursday after it discovered he had instructed the retailer to enter into a “highly unusual” business deal involving a woman with whom he has a romantic relationship, according to people familiar with the situation. The pair currently live together in an upscale golf community in the suburbs of Dallas.
Buchanan met the woman, Chandra Holt, when they were both working at Walmart several years ago, the people said. His divorce proceedings show the two had a romantic relationship while he was the CEO of Michaels. The arts-and-crafts chain also tried to hire Holt during his tenure.
A Kohl's board investigation by outside lawyers found that Buchanan violated the company's code of conduct in two instances with a vendor with whom he had a personal relationship and whom it didn't name, according to a regulatory filing. The filing said he directed the retailer to conduct business with a vendor founded by this person “on highly unusual terms,” and he caused the company to enter into a multimillion-dollar consulting agreement, where that person was part of the consulting team.
On Thursday, Kohl's appointed Chairman Michael Bender as its interim CEO. He becomes the fourth CEO in three years at the department-store chain, which has been struggling with slumping sales.
Nominating Committee:
John E. Schlifske* (2011; 6%)
Michael J. Bender (2019; 18%)
Robbin Mitchell (2021; 7%)
Adrianne Shapira (2016; 6%)

Even CEOs sometimes get the 'you're fired' treatment 11
Great, nobody understands corporate governance

Meta exec apologizes to conservative activist Robby Starbuck
Joel Kaplan, Meta's chief global affairs officer, has issued a public apology to conservative influencer Robby Starbuck after Starbuck filed a lawsuit alleging that Meta's artificial intelligence chatbot produced responses containing false and defamatory information about him. “Robby — I watched your video — this is unacceptable. This is clearly not how our AI should operate,” Kaplan wrote on X, which is one of Meta's competitors. He referred to a 20-minute video in which Starbuck laid out his claims, including that Meta's AI falsely associated him with the Jan. 6 Capitol riot and the QAnon conspiracy theory.
“We're sorry for the results it shared about you and that the fix we put in place didn't address the underlying problem,” Kaplan continued. “I'm working now with our product team to understand how this happened and explore potential solutions.”

Bob Monks, fierce champion of shareholders against what he saw as boardroom failings 0
An American pioneer of investor activism and better corporate governance.
Monks emerged as a doughty champion of shareholders against what he saw as increasingly self-serving and complacent boardroom behaviour.
In 1985 he founded Institutional Shareholder Services, which advises funds that own shares in multiple companies how best to exercise their voting power.
He also helped create Lens, an activist investment fund, and GMI Ratings, a scrutineer of corporate behaviour which claimed to have downgraded BP before the Deepwater Horizon disaster, the insurance giant AIG before the 2008 financial crisis and News Corp before the phone-hacking scandal.
His most celebrated campaign, in 1991, was an attempt to become a director of the underperforming retail and financial conglomerate Sears Roebuck, for which he ran a full-page ad in the Wall Street Journal depicting the existing Sears board as “non-performing assets”. Though his candidacy was rejected, many of his proposals for rationalisation were adopted, and he was able to declare: “Sears has been changed.”

This low-profile CEO is the highest-paid in America with a $101 million paycheck that beat out Starbucks, Microsoft, and Apple chiefs 10
Jim Anderson, a low-profile executive of Pennsylvania-based Coherent, which produces equipment for networks and lasers.
Here's what the dopey reporting missed:
An originally announced golden hello equity award of $48M that magically morphed into $91M come proxy time.
48% NO on Say on Pay.
Too large Pay Committee: 6 members, led by Shaker Sadasivam, who was NOT up for reelection this year. Also includes Mike Dreyer (22% NO), former COO of Silicon Valley Bank.

Euronext rebrands ESG in drive to help European defence firms 10
In a statement renaming ESG - the acronym given to Environmental, Social and Governance-driven investing - as Energy, Security and Geostrategy, Euronext's CEO and Chairman Stephane Boujnah said it was responding to a "new geopolitical order".
"European aerospace and defence companies have expressed the urgent need to invest heavily in their innovation and production capacities to guarantee Europe's strategic autonomy for the next decade," Euronext said in the statement.
Among the measures, Euronext said it would revisit the methodologies for ESG indexes to limit the exclusions currently placed on defence companies.

OpenAI, facing pressure, announces its nonprofit will stay in control after all
OpenAI announced a smaller-scale change to its famously complex structure. Remember that it was founded as a nonprofit. But in 2019, it set up a for-profit subsidiary to start raising money from investors to finance its eye-wateringly expensive A.I. research. Then last year, the company moved to turn itself into a for-profit entity in which the nonprofit held a stake but didn't have control.
Now, OpenAI plans to turn its for-profit subsidiary into a public benefit corporation, which would still be controlled by the nonprofit, though the size of its stake remains undetermined. (Got all that?)
Sam Altman, its C.E.O., said yesterday that the revised plan still gives his start-up “a more understandable structure to do the things that a company like us has to do.”

The AI Industry Has a Huge Problem: the Smarter Its AI Gets, the More It's Hallucinating

Zuckerberg Says in Response to Loneliness Epidemic, He Will Create Most of Your Friends Using Artificial Intelligence
In an interview with podcaster Dwarkesh Patel this week, Zuckerberg asserted that more people should be connecting with chatbots on a social level — because, in a striking line of argumentation, they don't have enough real-life friends.
When asked if AI chatbots can help fight the loneliness epidemic, the billionaire painted a dystopian vision of a future in which we spend more time talking to AIs than flesh-and-blood humans.
"There's the stat that I always think is crazy, the average American, I think, has fewer than three friends," Zuckerberg told Patel. "And the average person has demand for meaningfully more, I think it's like 15 friends or something, right?"
"The average person wants more connectivity, connection, than they have," he concluded, hinting at the possibility that the discrepancy could be filled with virtual friends.

Tesla Is Extremely Upset About Reporting That Its Board Has Been Looking Into Replacing Elon Musk

Leading Independent Proxy Advisory Firm ISS Recommends Harley-Davidson Shareholders Vote "FOR ALL" of Harley-Davidson's Highly Qualified Director Nominees 10
Targeted Directors:
CEO/Chair Zeitz (2007, 30%): who has already stepped down as CEO
Lead Director Norman Thomas Linebarger (2008, 13%): who is not independent
Sara Levinson (1996, 20%): the longest-tenured director
Matt: HARD HITTING ANALYSIS
“[I]t appears that his time in the role has been more positive than negative, which makes it hard to argue that his vote on a successor is worthless.”
“[T]here are compelling reasons to believe that as a group [the targeted directors] still have a perspective that can be valuable.”
“[I]t appears that the board initiated the [CEO search] process promptly…”

Target CEO's pay slashed by a whopping 45% after his disastrous mishandling of DEI 5
Patrick Kennedy of The Minnesota Star Tribune used Total Realized Pay: down from $18.1M last year, mostly because of a reduction in vested stock, $5.6M down from $13.6M. Total summary pay is up: $19.2M to $20.4M. Pay ratio is up: 719:1 to 753:1. (See the arithmetic sketch at the end of this entry.)
Matt: What?

MATT

1. Berkshire Hathaway: Board Unanimously Appoints Greg Abel as Firm's Next Chief Executive 1000
Rate the goodness of the succession planning process

Trump announced Alcatraz reopening just hours after ‘Escape from Alcatraz' aired on a South Florida PBS station 15
Rate the goodness of funding PBS, which probably gave Trump the idea to reopen Alcatraz

Goldman Sachs Removes Mentions of ‘Black' From Flagship Diversity Pledge 0
‘Black in Business,' one program in the effort, is now about staying ‘in the black,' in reference to profits—not race
Rate the goodness of Goldman Sachs finally returning to a focus on profit, not black people

Anthropic CEO Admits We Have No Idea How AI Works
"When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does — why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate."

Meta exec apologizes to conservative activist Robby Starbuck -4,000,000
“Robby — I watched your video — this is unacceptable. This is clearly not how our AI should operate.”
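For anyone who shares Matt's "What?", here is a minimal back-of-the-envelope sketch (my own illustration, not from the show or from Target's proxy) of how a "45% pay cut" headline and a rising pay ratio can both be true: the headline tracks realized pay, while the SEC pay ratio is computed from summary-table pay.

```python
# Back-of-the-envelope check of the Target pay figures quoted above.
# Dollar amounts and ratios are the ones cited in the segment; everything
# derived here (implied median pay, estimated realized pay) is my own
# inference, not a figure from Target's proxy statement.

def implied_median_pay(ceo_summary_pay: float, pay_ratio: float) -> float:
    """SEC pay ratio = CEO total (summary-table) pay / median employee pay."""
    return ceo_summary_pay / pay_ratio

# Summary-table pay rose, so the disclosed pay ratio rose with it.
print(f"Implied median pay, prior year:   ${implied_median_pay(19_200_000, 719):,.0f}")
print(f"Implied median pay, current year: ${implied_median_pay(20_400_000, 753):,.0f}")

# The "45% cut" headline tracks realized pay instead: if the drop in vested
# stock ($13.6M -> $5.6M) explains most of the decline from $18.1M, realized
# pay lands near $10.1M, roughly a 44% cut, even as summary-table pay
# climbed from $19.2M to $20.4M.
prior_realized = 18_100_000
current_realized_est = prior_realized - (13_600_000 - 5_600_000)
print(f"Estimated current realized pay:   ${current_realized_est:,.0f}")
print(f"Estimated realized-pay cut:       {1 - current_realized_est / prior_realized:.0%}")
```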
Dwarkesh Patel interviewed the most influential thinkers and leaders in the world of AI and chronicled the history of AI up to now in his book, The Scaling Era. Listen as he talks to EconTalk's Russ Roberts about the book, the dangers and potential of AI, and the role scale plays in AI progress. The conversation concludes with a discussion of the art of podcasting.
This week, we dig into the group chat that's rocking the Trump administration and talk about why turning to Signal to plan military operations probably isn't a great idea. Then, we're joined by the podcaster Dwarkesh Patel to discuss his new book “The Scaling Era,” and whether he's still optimistic about the broad benefits of A.I. And finally, a couple weeks ago we asked whether A.I. was making you dumber. Now we hear your takes.
Guest: Dwarkesh Patel, tech podcaster and author of “The Scaling Era: An Oral History of A.I., 2019-2025”
Additional Reading:
The Trump Administration Accidentally Texted Me Its War Plans
Signal Chat Leak Angers U.S. Military Pilots
Is A.I. Making Us Dumb?
We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.
Join Nolan Fortman and Logan Kilpatrick for a deep conversation with Dwarkesh Patel, host of the Dwarkesh Podcast and one of TIME's 2024 most influential people in AI. We chat about how the world will change with AI in the next two years, infinite AI dimensionality, creating content for the AI world, human authenticity in the age of AI, and learning with AI!
She's an economist, an institution-builder, an ecosystem-nurturer and one of our finest thinkers. Shruti Rajagopalan joins Amit Varma in episode 410 of The Seen and the Unseen to talk about her life & times -- and her remarkable work. (FOR FULL LINKED SHOW NOTES, GO TO SEENUNSEEN.IN.) Also check out: 1. Shruti Rajagopalan on Twitter, Substack, Instagram, her podcast, Ideas of India and her own website. 2. Emergent Ventures India. 3. The 1991 Project. 4. Life Lessons That Are Priceless -- Episodes 400 of The Seen and the Unseen. 5. Other episodes of The Seen and the Unseen w Shruti Rajagopalan, in reverse chronological order: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18. 6. The Day Ryan Started Masturbating -- Amit Varma's newsletter post explaining Shruti Rajagopalan's swimming pool analogy for social science research. 7. A Deep Dive Into Education -- Episode 54 of Everything is Everything. 8. Fixing Indian Education — Episode 185 of The Seen and the Unseen (w Karthik Muralidharan). 9. Population Is Not a Problem, but Our Greatest Strength -- Amit Varma. 10. Our Population Is Our Greatest Asset -- Episode 20 of Everything is Everything. 11. Where Has All the Education Gone? -- Lant Pritchett. 12. Lant Pritchett Is on Team Prosperity — Episode 379 of The Seen and the Unseen. 13. The Theory of Moral Sentiments — Adam Smith. 14. The Wealth of Nations — Adam Smith. 15. Commanding Heights -- Daniel Yergin. 16. Capitalism and Freedom -- Milton Friedman. 17. Free to Choose -- Milton Friedman and Rose Friedman. 18. Economics in One Lesson -- Henry Hazlitt. 19. The Road to Serfdom -- Friedrich Hayek. 20. Four Papers That Changed the World -- Episode 41 of Everything is Everything. 21. The Use of Knowledge in Society -- Friedrich Hayek. 22. Individualism and Economic Order -- Friedrich Hayek. 23. Understanding the State -- Episode 25 of Everything is Everything. 24. Richard E Wagner at Mercatus and Amazon. 25. Larry White and the First Principles of Money -- Episode 397 of The Seen and the Unseen. 26. Fixing the Knowledge Society -- Episode 24 of Everything is Everything. 27. Marginal Revolution. 28. Paul Graham's essays. 29. Commands and controls: Planning for indian industrial development, 1951–1990 -- Rakesh Mohan and Vandana Aggarwal. 30. The Reformers -- Episode 28 of Everything is Everything. 31. India: Planning for Industrialization -- Jagdish Bhagwati and Padma Desai. 32. Open Borders: The Science and Ethics of Immigration -- Bryan Caplan and Zach Weinersmith. 33. Cows on India Uncut. 34. Abdul Karim Khan on Spotify and YouTube. 35. The Surface Area of Serendipity -- Episode 39 of Everything is Everything. 36. Objects From Our Past -- Episode 77 of Everything is Everything. 37. Sriya Iyer on the Economics of Religion -- The Ideas of India Podcast. 38. Episodes of The Seen and the Unseen with Ramachandra Guha: 1, 2, 3, 4, 5, 6. 39. Episodes of The Seen and the Unseen with Pratap Bhanu Mehta: 1, 2. 40. Rohit Lamba Reimagines India's Economic Policy Emphasis -- The Ideas of India Podcast. 41. Rohit Lamba Will Never Be Bezubaan — Episode 378 of The Seen and the Unseen. 42. The Constitutional Law and Philosophy blog. 43. Cost and Choice -- James Buchanan. 44. Philip Wicksteed. 45. Pratap Bhanu Mehta on The Theory of Moral Sentiments -- The Ideas of India Podcast. 46. Conversation and Society — Episode 182 of The Seen and the Unseen (w Russ Roberts). 47. The Common Sense of Political Economy -- Philip Wicksteed. 48. 
Narendra Shenoy and Mr Narendra Shenoy — Episode 250 of The Seen and the Unseen. 49. Sudhir Sarnobat Works to Understand the World — Episode 350 of The Seen and the Unseen. 50. Manmohan Singh: India's Finest Talent Scout -- Shruti Rajagopalan. 51. The Importance of the 1991 Reforms — Episode 237 of The Seen and the Unseen (w Shruti Rajagopalan and Ajay Shah). 52. The Life and Times of Montek Singh Ahluwalia — Episode 285 of The Seen and the Unseen. 53. The Forgotten Greatness of PV Narasimha Rao — Episode 283 of The Seen and the Unseen (w Vinay Sitapati). 54. India's Massive Pensions Crisis — Episode 347 of The Seen and the Unseen (w Ajay Shah & Renuka Sane). 55. The Life and Times of KP Krishnan — Episode 355 of The Seen and the Unseen. 56. Breaking Through — Isher Judge Ahluwalia. 57. Breaking Out — Padma Desai. 58. Perestroika in Perspective -- Padma Desai. 59. Shephali Bhatt Is Searching for the Incredible — Episode 391 of The Seen and the Unseen. 60. Pics from the Seen-Unseen party. 61. Pramod Varma on India's Digital Empowerment -- Episode 50 of Brave New World. 59. Niranjan Rajadhyaksha Is the Impartial Spectator — Episode 388 of The Seen and the Unseen. 60. Our Parliament and Our Democracy — Episode 253 of The Seen and the Unseen (w MR Madhavan). 61. Episodes of The Seen and the Unseen with Pranay Kotasthane: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13. 62. The Overton Window. 63. When Ideas Have Sex -- Matt Ridley. 64. The Three Languages of Politics — Arnold Kling. 65. Arnold Kling and the Four Languages of Politics -- Episode 394 of The Seen and the Unseen. 66. The Double ‘Thank You' Moment — John Stossel. 67. Economic growth is enough and only economic growth is enough — Lant Pritchett with Addison Lewis. 68. What is Libertarianism? — Episode 117 of The Seen and the Unseen (w David Boaz). 69. What Does It Mean to Be Libertarian? — Episode 64 of The Seen and the Unseen. 70. The Libertarian Mind: A Manifesto for Freedom -- David Boaz. 71. Publish and Perish — Agnes Callard. 72. Classical Liberal Institute. 73. Shruti Rajagopalan's YouTube talk on constitutional amendments. 74. What I, as a development economist, have been actively “for” -- Lant Pritchett. 75. Can Economics Become More Reflexive? — Vijayendra Rao. 76. Premature Imitation and India's Flailing State — Shruti Rajagopalan & Alexander Tabarrok. 77. Elite Imitation in Public Policy — Episode 180 of The Seen and the Unseen (w Shruti Rajagopalan and Alex Tabarrok). 78. Invisible Infrastructure -- Episode 82 of Everything is Everything. 79. The Sundara Kanda. 80. Devdutt Pattanaik and the Stories That Shape Us -- Episode 404 of The Seen and the Unseen. 81. Y Combinator. 82. Space Fields. 83. Apoorwa Masuk, Onkar Singh Batra, Naman Pushp, Angad Daryani, Deepak VS and Srijon Sarkar. 84. Deepak VS and the Man Behind His Face — Episode 373 of The Seen and the Unseen. 85. You've Got To Hide Your Love Away -- The Beatles. 86. Caste, Capitalism and Chandra Bhan Prasad — Episode 296 of The Seen and the Unseen. 87. Data For India -- Rukmini S's startup. 88. Whole Numbers And Half Truths — Rukmini S. 89. The Moving Curve — Rukmini S's Covid podcast, also on all podcast apps. 90. The Importance of Data Journalism — Episode 196 of The Seen and the Unseen (w Rukmini S). 91. Rukmini Sees India's Multitudes — Episode 261 of The Seen and the Unseen (w Rukmini S). 92. Prosperiti. 93. This Be The Verse — Philip Larkin. 94. The Dilemma of an Indian Liberal -- Gurcharan Das. 95. Zakir: 1951-2024 -- Shruti Rajagopalan. 96. 
Dazzling Blue -- Paul Simon, featuring Karaikudi R Mani. 97. John Coltrane, Shakti, Zakir Hussain, Ali Akbar Khan, Pannalal Ghosh, Nikhil Banerjee, Vilayat Khan, Bismillah Khan, Ravi Shankar, Bhimsen Joshi, Bade Ghulam Ali Khan, Nusrat Fateh Ali Khan, Esperanza Spalding, MS Subbulakshmi, Lalgudi Jayaraman, TN Krishnan, Sanjay Subrahmanyan, Ranjani-Gayatri and TM Krishna on Spotify. 98. James Buchanan, Gordon Tullock, Israel Kirzner, Mario Rizzo, Vernon Smith, Thomas Schelling and Ronald Coase. 99. The Calculus of Consent -- James Buchanan and Gordon Tullock. 100. Tim Harford and Martin Wolf. 101. The Shawshank Redemption -- Frank Darabont. 102. The Marriage of Figaro in The Shawshank Redemption. 103. An Equal Music -- Vikram Seth. 104. Beethoven: Symphony No. 7 - Zubin Mehta and the Belgrade Philharmonic. 105. Pyotr Ilyich Tchaikovsky's violin concertos. 106. Animal Farm -- George Orwell. 107. Down and Out in Paris and London -- George Orwell. 108. Gulliver's Travels -- Jonathan Swift. 109. Alice in Wonderland and Through the Looking Glass -- Lewis Carroll. 110. One Day in the Life of Ivan Denisovich -- Aleksandr Solzhenitsyn. 111. The Gulag Archipelago -- Aleksandr Solzhenitsyn. 112. Khosla Ka Ghosla -- Dibakar Banerjee. 113. Mr India -- Shekhar Kapur. 114. Chalti Ka Naam Gaadi -- Satyen Bose. 114. Finding Nemo -- Andrew Stanton. 115. Tom and Jerry and Bugs Bunny. 116. Michael Madana Kama Rajan -- Singeetam Srinivasa Rao. 117. The Music Box, with Laurel and Hardy. 118. The Disciple -- Chaitanya Tamhane. 119. Court -- Chaitanya Tamhane. 120. Dwarkesh Patel on YouTube. Amit Varma and Ajay Shah have launched a new course called Life Lessons, which aims to be a launchpad towards learning essential life skills all of you need. For more details, and to sign up, click here. Amit and Ajay also bring out a weekly YouTube show, Everything is Everything. Have you watched it yet? You must! And have you read Amit's newsletter? Subscribe right away to The India Uncut Newsletter! It's free! Also check out Amit's online course, The Art of Clear Writing. Episode art: ‘Learn' by Simahina.
Read the full transcript here. What interesting things can we learn by studying pre-humans? How many different species of pre-humans were there? Why is there only a single species of human now? If pre-human species wiped each other out for various reasons, why might the ancestors of chimps and bonobos (who are very closely related to humans) have been spared? What roles did language, racism / speciesism, and disease likely play in the shaping of the human evolutionary tree? How is AI development like and unlike human development? What can we learn about AI development from human development and vice versa? What is an "AI firm"? What are some advantages AI firms would have over human companies in addition to intelligence and speed? How can we learn faster and retain knowledge better? Is writing the best way to learn something deeply?
Dwarkesh Patel is the host of the Dwarkesh Podcast. Listen to his podcast, read his writings on Substack, or learn more about him at his website, dwarkeshpatel.com.
Staff
Spencer Greenberg — Host / Director
Josh Castle — Producer
Ryan Kessler — Audio Engineer
Uri Bram — Factotum
WeAmplify — Transcriptionists
Music
Broke for Free
Josh Woodward
Lee Rosevere
Quiet Music for Tiny Robots
wowamusic
zapsplat.com
Affiliates
Clearer Thinking
GuidedTrack
Mind Ease
Positly
UpLift
Free Podborbot (contractor-selection bot) from «Рейтинг Рунета» for choosing digital contractors: https://clck.ru/3EumK7?erid=LjN8JyMAq Advertisement. ООО «ПРОАКТИВИТИ».
Bonus posts from RationalAnswer:
— Will Russia freeze bank deposits? — https://t.me/RationalAnswer/1144
— What I think about the dollar exchange rate — https://t.me/RationalAnswer/732
— A new big series of posts on the equity risk premium — https://t.me/RationalAnswer/1141
Additional materials for this episode:
— Alexander Eliseev on the mystery of the Moscow Exchange free float — https://t.me/Finindie/2042
— An initiative to unfreeze Russian investors' securities — https://www.change.org/p/protect-millions-from-overreaching-sanctions-revise-eu-regulation-no-269-2014
— Deloitte folks explain the new crypto taxation in Russia — https://t.me/arturdulkarnaev/1137
— Longread of the week: Astral Codex Ten – Prison And Crime: Much More Than You Wanted To Know — https://www.astralcodexten.com/p/prison-and-crime-much-more-than-you
— Interview of the week: Gwern Branwen as a guest of Dwarkesh Patel – https://www.dwarkeshpatel.com/p/gwern-branwen
Text version of the episode with links: https://vc.ru/money/1685017
Watch the episode on YouTube: https://www.youtube.com/watch?v=sm9uY9ZJY_s
Support the RationalAnswer project and get into the credits:
— Patreon (in foreign currency) – https://www.patreon.com/RationalAnswer
— Boosty (in rubles) – https://boosty.to/RationalAnswer
CONTENTS:
00:21 – The dollar at 110
01:46 – Taming the key rate with the power of thought
02:59 – The mystery of the Moscow Exchange free float
05:14 – Signatures to unfreeze Russian investors' assets
06:05 – Tax news
07:48 – Bitcoin in Russia's reserves
08:46 – Telegram turned a profit
09:42 – First job
11:30 – Trump's shitposting about BRICS
12:19 – Synapse doesn't know where the fintechs' money is
14:50 – Ken Leech cheated on hedge fund trades
15:58 – An AQR hedge fund for tax losses
17:02 – Elon Musk gave away 25% of xAI
18:03 – Wolfish North Koreans getting hired by IT companies
18:57 – A Japanese bank with hara-kiri practices
19:23 – Crypto news: the banana has been eaten
22:02 – Statistic of the week: gold over 25 years
24:25 – Longread of the week: do prisons reduce crime?
27:07 – Interview of the week: Gwern with Dwarkesh
30:21 – Gossip of the week
Some helpful links
Meter - https://www.meter.com/
Meter command - https://command.meter.com/
Link to Tyler's book on culture - https://www.amazon.com/Praise-Commercial-Culture-Tyler-Cowen/dp/0674001885
Sam Hinkie - https://en.wikipedia.org/wiki/Sam_Hinkie
0:00 - Intro
3:48 - Anil's early years and background
5:23 - Unconventional parenting
9:35 - Anil's journey to entrepreneurship
12:30 - Sleeping in factories in China
15:22 - China VS U.S.
18:30 - Why Networks are so important
21:35 - Why networking is still an unsolved problem
24:10 - Is hardware too hard?
26:11 - What does Meter do?
37:17 - How does Meter work?
41:08 - Future of enterprise software
44:00 - Human interaction with AI models
46:30 - Why Meter is building AI models
50:50 - Spotting young talent
54:00 - Anil's framework to find good talent
57:30 - How Anil helped Dwarkesh Patel start his podcast
1:02:00 - The “X factor” in Anil's investments
1:02:00 - Raising the ambition bar
1:06:55 - Escaping the competitive Indian dynamics
1:08:38 - How cinema influences entrepreneurship
1:17:25 - Why don't we know how planes fly
1:19:20 - Lessons from Sam Hinkie
1:21:04 - Kindness as an operating principle
1:22:10 - Why hasn't Anil had a more public brand?
1:24:03 - US Immigration
1:28:00 - Aarthi, Sriram and Anil show?
1:30:44 - Best Indian restaurant in London
1:32:50 - Has sneaker culture peaked?
1:34:25 - Why don't wealthy people build monuments anymore?
1:38:04 - London's rich history
1:40:30 - Why does Sriram have sriramk.eth?
1:42:00 - Should all startups go direct on comms?
1:47:07 - Are Aarthi and Sriram “too online”?
1:49:10 - Sriram's Silicon Valley groupchats
1:49:46 - Will Aarthi and Sriram move back to India?
1:48:12 - Aarthi and Sriram's failures in tech
1:53:55 - Netflix's 3D and streaming software
1:58:18 - Popfly
1:59:55 - Microsoft success under Satya
2:02:00 - On tech execs
2:03:10 - Nonfiction book that Aarthi and Sriram would write
2:06:27 - Aarthi and Sriram's favorite Indian movie before 2000
2:09:48 - The End
Follow Sriram:
https://www.instagram.com/sriramk/
https://twitter.com/sriramk
Follow Aarthi:
https://www.instagram.com/aarthir/
https://twitter.com/aarthir
Follow the podcast:
https://www.instagram.com/aarthiandsriramshow/
https://twitter.com/aarthisrirampod
Tech podcaster Dwarkesh Patel joins Kevin Roberts at the "Reboot: The New Reality" conference in San Francisco, an event hosted by the Foundation for American Innovation. In this engaging conversation, they explore how the rapidly evolving tech industry and conservative policy can find common ground to foster innovation while safeguarding American values. They delve into pressing issues like free speech on digital platforms, the role of government regulation in tech, and how to balance technological advancements with the preservation of individual liberties.
Extended audio from my conversation with Dwarkesh Patel. This part focuses on my series "Otherness and control in the age of AGI." Transcript available on my website here: https://joecarlsmith.com/2024/09/30/part-1-otherness-extended-audio-transcript-from-my-conversation-with-dwarkesh-patel/
Extended audio from my conversation with Dwarkesh Patel. This part focuses on the basic story about AI takeover. Transcript available on my website here: https://joecarlsmith.com/2024/09/30/part-2-ai-takeover-extended-audio-transcript-from-my-conversation-with-dwarkesh-patel
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #79: Ready for Some Football, published by Zvi on August 29, 2024 on LessWrong.
I have never been more ready for Some Football. Have I learned all about the teams and players in detail? No, I have been rather busy, and have not had the opportunity to do that, although I eagerly await Seth Burn's Football Preview. I'll have to do that part on the fly. But oh my would a change of pace and chance to relax be welcome. It is time.
The debate over SB 1047 has been dominating for weeks. I've now said my piece on the bill and how it works, and compiled the reactions in support and opposition.
There are two small orders of business left for the weekly. One is the absurd Chamber of Commerce 'poll' that is the equivalent of a pollster asking if you support John Smith, who recently killed your dog and who opponents say will likely kill again, while hoping you fail to notice you never had a dog. The other is a (hopefully last) illustration that those who obsess highly disingenuously over funding sources for safety advocates are, themselves, deeply conflicted by their funding sources. It is remarkable how consistently so many cynical self-interested actors project their own motives and morality onto others.
The bill has passed the Assembly and now it is up to Gavin Newsom, where the odds are roughly 50/50. I sincerely hope that is a wrap on all that, at least this time out, and I have set my bar for further comment much higher going forward. Newsom might also sign various other AI bills.
Otherwise, it was a fun and hopeful week. We saw a lot of Mundane Utility, Gemini updates, OpenAI and Anthropic made an advance review deal with the American AISI and The Economist pointing out China is non-zero amounts of safety pilled. I have another hopeful iron in the fire as well, although that likely will take a few weeks.
And for those who aren't into football? I've also been enjoying Nate Silver's On the Edge. So far, I can report that the first section on gambling is, from what I know, both fun and remarkably accurate.
Table of Contents
1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. Turns out you did have a dog. Once.
4. Language Models Don't Offer Mundane Utility. The AI did my homework.
5. Fun With Image Generation. Too much fun. We are DOOMed.
6. Deepfaketown and Botpocalypse Soon. The removal of trivial frictions.
7. They Took Our Jobs. Find a different job before that happens. Until you can't.
8. Get Involved. DARPA, Dwarkesh Patel, EU AI Office. Last two in SF.
9. Introducing. Gemini upgrades, prompt engineering guide, jailbreak contest.
10. Testing, Testing. OpenAI and Anthropic formalize a deal with the US's AISI.
11. In Other AI News. What matters? Is the moment over?
12. Quiet Speculations. So many seem unable to think ahead even mundanely.
13. SB 1047: Remember. Let's tally up the votes. Also the poll descriptions.
14. The Week in Audio. Confused people bite bullets.
15. Rhetorical Innovation. Human preferences are weird, yo.
16. Aligning a Smarter Than Human Intelligence is Difficult. 'Alignment research'?
17. People Are Worried About AI Killing Everyone. The Chinese, perhaps?
18. The Lighter Side. Got nothing for you. Grab your torches. Head back to camp.
Language Models Offer Mundane Utility
Chat with Scott Sumner's The Money Illusion GPT about economics, with the appropriate name ChatTMI.
It's not perfect, but he says it's not bad either. Also, did you know he's going to Substack soon? Build a nuclear fusor in your bedroom with zero hardware knowledge, wait what? To be fair, a bunch of humans teaching various skills and avoiding electrocution were also involved, but still pretty cool. Import things automatically to your calendar, generalize this it seems great. Mike Knoop (Co-founder Zapier and Arc Prize): Parent tip: you can upload a ph...
A couple hundred people in San Francisco may be on the cusp of inventing artificial general intelligence (AGI). Yet most people are not paying close attention, are skeptical, and are certainly not in the room. Dwarkesh pulls back the curtain so that the broader public can understand what people who are actually building AI think is going to happen. The conversation begins with Dwarkesh drawing parallels between the current AI revolution and past technological shifts, like the Industrial Revolution, emphasizing the potential for AI to massively increase productivity and automate jobs. He paints a picture of a future where AI, especially artificial general intelligence, could lead to unprecedented economic growth, comparing it to the rapid transformation seen in cities like Shenzhen, China. However, Dwarkesh also cautions about the risks of an intelligence explosion, where superhuman AI could drastically alter societal dynamics. The discussion then moves into scenario planning, exploring various potential futures for AI development. In a positive scenario, AI continues to advance parabolically, boosting productivity and economic growth. Conversely, a negative scenario sees AI hitting significant challenges or diminishing returns, slowing its progress. A surprise scenario considers AI developing in unforeseen ways. Dwarkesh emphasizes the importance of managing public expectations and preparing for a range of outcomes, highlighting the necessity of effective redistribution and retraining programs to help workers transition as AI reshapes the job market. Finally, Ben and Dwarkesh consider the public policy demands and philosophical implications of AI. They discuss how AI could change human cognitive biases, improving decision-making, but also potentially amplify negative traits like greed and power-seeking. Dwarkesh advocates for policies that ensure the broad deployment of AI systems to allow society to adapt gradually and stresses the need for coordinated global efforts to manage AI's geopolitical impacts. The episode concludes with reflections on the crucial role of ethical standards in AI development, underscoring the importance of dialogue and collaboration to harness AI's benefits while mitigating its risks. --- Have questions or feedback about this episode? Drop us a note at Onward@Fundrise.com. Onward is hosted by Ben Miller, co-founder and CEO of Fundrise. Podcast production by The Podcast Consultant. Music by Seaplane Armada. About Fundrise With over 2 million users, Fundrise is America's largest direct-to-investor alternative asset investment platform. Since 2012, our mission has been to build a better financial system by empowering the individual. We make it easier and more efficient than ever for anyone to invest in institutional-quality private alternative assets — all at the touch of a button. Please see fundrise.com/oc for more information on all of the Fundrise-sponsored investment funds and products, including each fund's offering document(s). Want to see the specific assets that make up and power Fundrise portfolios? Check out our active and past projects at www.fundrise.com/assets.
This episode is sponsored by Command Bar, an embedded AI copilot designed to improve user experience on your web or mobile site. Find them here: https://www.commandbar.com/copilot/
Dwarkesh Patel is on a quest to know everything. He's using LLMs to enhance how he reads, learns, thinks, and conducts interviews. Dwarkesh is a podcaster who's interviewed a wide range of people, like Mark Zuckerberg, Tony Blair, and Marc Andreessen. Before conducting each of these interviews, Dwarkesh learns as much as he can about his guest and their area of expertise—AI hardware, tense geopolitical crises, and the genetics of human origins, to name a few. The most important tool in his learning arsenal? AI—specifically Claude, Claude Projects, and a few custom tools he's built to accelerate his workflow. He does this by researching extensively, and as his knowledge grows, each piece of new information builds upon the last, making it easier and easier to grasp meaningful insights.
In this interview, I turn the tables on him to understand how the prolific podcaster uses AI to become a smarter version of himself. We get into:
- How he uses LLMs to remember everything
- His podcast prep workflow with Claude to understand complex topics
- Why it's important to be an early adopter of technology
- His taste in books and how he uses LLMs to learn from them
- How he thinks about building a worldview
- His quick takes on AI's existential questions—AGI and P(doom)
We also use Claude live on the show to help Dwarkesh research for an upcoming podcast recording. This is a must-watch for curious people who want to use AI to become smarter.
If you found this episode interesting, please like, subscribe, comment, and share! Want even more? Sign up for Every to unlock our ultimate guide to prompting ChatGPT here. It's usually only for paying subscribers, but you can get it here for free.
To hear more from Dan Shipper:
- Subscribe to Every
- Follow him on X
Timestamps:
00:00:00 - Teaser
00:01:44 - Introduction
00:05:37 - How Dwarkesh uses LLMs to remember everything
00:11:50 - Dwarkesh's taste in books and how he uses AI to learn from them
00:17:58 - Why it's important to be an early adopter of technology
00:20:44 - How Dwarkesh uses Claude to understand complex concepts
00:26:36 - Dwarkesh on how you can compound your intelligence
00:28:21 - Why Dwarkesh is on a quest to know everything
00:39:19 - Dan and Dwarkesh prep for an upcoming interview
01:04:14 - How Dwarkesh uses AI for post-production of his podcast
01:08:51 - Rapid fire on AI's biggest questions—AGI and P(doom)
Links to resources mentioned in the episode:
- Dwarkesh Patel
- Dwarkesh's podcast and newsletter
- Dwarkesh's interview with researcher Andy Matuschak on spaced repetition
- The book about technology and society that both Dan and Dwarkesh are reading: Medieval Technology and Social Change
- Dan's interview with Reid Hoffman
- The book by Will Durant that inspires Dwarkesh: Fallen Leaves
- One of the most interesting books Dwarkesh has read: The Great Divide
- Upcoming guests on Dwarkesh's podcast: David Reich and Daniel Yergin
Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover 1) Why the Hawk Tuah Meme broke through 2) Evolution of the internet from the 'nerd internet' to the 'Zynternet' or 'bro internet' 3) How the 'For You' recommendation system has turned the internet into a mass appeal machine vs. niche 4) Zuck's Fourth of July water sports moment is indicative of the change 5) Raw dogging flights 6) OpenAI hacked 7) NYTimes rips off Dwarkesh Patel's Podcast 8) Figma's Apple Weather App design copy 9) AI startups raising a ton of cash 10) Will there be a reckoning for VCs investing in AI 11) Threads turns one — what is it? --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here's 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why so many "racists" at Manifest?, published by Austin on June 18, 2024 on The Effective Altruism Forum. Manifest 2024 is a festival that we organized last weekend in Berkeley. By most accounts, it was a great success. On our feedback form, the average response to "would you recommend to a friend" was a 9.0/10. Reviewers said nice things like "one of the best weekends of my life" and "dinners and meetings and conversations with people building local cultures so achingly beautiful they feel almost like dreams" and "I've always found tribalism mysterious, but perhaps that was just because I hadn't yet found my tribe." Arnold Brooks running a session on Aristotle's Metaphysics. More photos of Manifest here. However, a recent post on The Guardian and review on the EA Forum highlight an uncomfortable fact: we invited a handful of controversial speakers to Manifest, whom these authors call out as "racist". Why did we invite these folks? First: our sessions and guests were mostly not controversial - despite what you may have heard Here's the schedule for Manifest on Saturday: (The largest & most prominent talks are on the left. Full schedule here.) And here's the full list of the 57 speakers we featured on our website: Nate Silver, Luana Lopes Lara, Robin Hanson, Scott Alexander, Niraek Jain-sharma, Byrne Hobart, Aella, Dwarkesh Patel, Patrick McKenzie, Chris Best, Ben Mann, Eliezer Yudkowsky, Cate Hall, Paul Gu, John Phillips, Allison Duettmann, Dan Schwarz, Alex Gajewski, Katja Grace, Kelsey Piper, Steve Hsu, Agnes Callard, Joe Carlsmith, Daniel Reeves, Misha Glouberman, Ajeya Cotra, Clara Collier, Samo Burja, Stephen Grugett, James Grugett, Javier Prieto, Simone Collins, Malcolm Collins, Jay Baxter, Tracing Woodgrains, Razib Khan, Max Tabarrok, Brian Chau, Gene Smith, Gavriel Kleinwaks, Niko McCarty, Xander Balwit, Jeremiah Johnson, Ozzie Gooen, Danny Halawi, Regan Arntz-Gray, Sarah Constantin, Frank Lantz, Will Jarvis, Stuart Buck, Jonathan Anomaly, Evan Miyazono, Rob Miles, Richard Hanania, Nate Soares, Holly Elmore, Josh Morrison. Judge for yourself; I hope this gives a flavor of what Manifest was actually like. Our sessions and guests spanned a wide range of topics: prediction markets and forecasting, of course; but also finance, technology, philosophy, AI, video games, politics, journalism and more. We deliberately invited a wide range of speakers with expertise outside of prediction markets; one of the goals of Manifest is to increase adoption of prediction markets via cross-pollination. Okay, but there sure seemed to be a lot of controversial ones… I was the one who invited the majority (~40/60) of Manifest's special guests; if you want to get mad at someone, get mad at me, not Rachel or Saul or Lighthaven; certainly not the other guests and attendees of Manifest. My criteria for inviting a speaker or special guest was roughly, "this person is notable, has something interesting to share, would enjoy Manifest, and many of our attendees would enjoy hearing from them". Specifically: Richard Hanania - I appreciate Hanania's support of prediction markets, including partnering with Manifold to run a forecasting competition on serious geopolitical topics and writing to the CFTC in defense of Kalshi. 
(In response to backlash last year, I wrote a post on my decision to invite Hanania, specifically) Simone and Malcolm Collins - I've enjoyed their Pragmatist's Guide series, which goes deep into topics like dating, governance, and religion. I think the world would be better with more kids in it, and thus support pronatalism. I also find the two of them to be incredibly energetic and engaging speakers IRL. Jonathan Anomaly - I attended a talk Dr. Anomaly gave about the state-of-the-art on polygenic embryonic screening. I was very impressed that something long-considered scien...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #67: Brief Strange Trip, published by Zvi on June 7, 2024 on LessWrong.
I had a great time at LessOnline. It was both a working trip and a trip to an alternate universe, a road not taken, a vision of a different life where you get up and start the day in dialogue with Agnes Callard and Aristotle and in a strange combination of relaxed and frantically go from conversation to conversation on various topics, every hour passing doors of missed opportunity, gone forever.
Most of all it meant almost no writing done for five days, so I am shall we say a bit behind again. Thus, the following topics are pending at this time, in order of my guess as to priority right now:
1. Leopold Aschenbrenner wrote a giant thesis, started a fund and went on Dwarkesh Patel for four and a half hours. By all accounts, it was all quite the banger, with many bold claims, strong arguments and also damning revelations.
2. Partly due to Leopold, partly due to an open letter, partly due to continuing small things, OpenAI fallout continues, yes we are still doing this. This should wait until after Leopold.
3. DeepMind's new scaling policy. I have a first draft, still a bunch of work to do.
4. The OpenAI model spec. As soon as I have the cycles and anyone at OpenAI would have the cycles to read it. I have a first draft, but that was written before a lot happened, so I'd want to see if anything has changed.
5. The Rand report on securing AI model weights, which deserves more attention than the brief summary I am giving it here.
6. You've Got Seoul. I've heard some sources optimistic about what happened there but mostly we've heard little. It doesn't seem that time sensitive, diplomacy flows slowly until it suddenly doesn't.
7. The Problem of the Post-Apocalyptic Vault still beckons if I ever have time.
Also I haven't processed anything non-AI in three weeks, the folders keep getting bigger, but that is a (problem? opportunity?) for future me. And there are various secondary RSS feeds I have not checked.
There was another big change this morning. California's SB 1047 saw extensive changes. While many were helpful clarifications or fixes, one of them severely weakened the impact of the bill, as I cover on the linked post. The reactions to the SB 1047 changes so far are included here.
Table of Contents
1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. Three thumbs in various directions.
4. Language Models Don't Offer Mundane Utility. Food for lack of thought.
5. Fun With Image Generation. Video generation services have examples.
6. Deepfaketown and Botpocalypse Soon. The dog continues not to bark.
7. They Took Our Jobs. Constant AI switching for maximum efficiency.
8. Get Involved. Help implement Biden's executive order.
9. Someone Explains It All. New possible section. Template fixation.
10. Introducing. Now available in Canada. Void where prohibited.
11. In Other AI News. US Safety Institute to get model access, and more.
12. Covert Influence Operations. Your account has been terminated.
13. Quiet Speculations. The bear case to this week's Dwarkesh podcast.
14. Samuel Hammond on SB 1047. Changes address many but not all concerns.
15. Reactions to Changes to SB 1047. So far coming in better than expected.
16. The Quest for Sane Regulation. Your random encounters are corporate lobbyists.
17. That's Not a Good Idea. Antitrust investigation of Nvidia, Microsoft and OpenAI.
18. The Week in Audio. Roman Yampolskiy, also new Dwarkesh Patel is a banger.
19. Rhetorical Innovation. Innovative does not mean great.
20. Oh Anthropic. I have seen the other guy, but you are not making this easy.
21. Securing Model Weights is Difficult. Rand has some suggestions.
22. Aligning a Dumber Than Human Intelligence is Still Difficult. What to do?
23. Aligning a Smarter Than Human Inte...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Former OpenAI Superalignment Researcher: Superintelligence by 2030, published by Julian Bradshaw on June 5, 2024 on LessWrong. The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace many college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. In the link provided, Leopold Aschenbrenner explains why he believes AGI is likely to arrive within the decade, with superintelligence following soon after. He does so in some detail; the website is well-organized, but the raw pdf is over 150 pages. Leopold is a former member of OpenAI's Superalignment team; he was fired in April for allegedly leaking company secrets. However, he contests that portrayal of events in a recent interview with Dwarkesh Patel, saying he leaked nothing of significance and was fired for other reasons.[1] However, I am somewhat confused by the new business venture Leopold is now promoting, an "AGI Hedge Fund" aimed at generating strong returns based on his predictions of imminent AGI. In the Dwarkesh Patel interview, it sounds like his intention is to make sure financial resources are available to back AI alignment and any other moves necessary to help Humanity navigate a turbulent future. However, the discussion in the podcast mostly focuses on whether such a fund would truly generate useful financial returns. If you read this post, Leopold[2], could you please clarify your intentions in founding this fund?
1. ^ Specifically he brings up a memo he sent to the old OpenAI board claiming OpenAI wasn't taking security seriously enough. He was also one of very few OpenAI employees not to sign the letter asking for Sam Altman's reinstatement last November, and of course, the entire OpenAI superalignment team has collapsed for various reasons as well.
2. ^ Leopold does have a LessWrong account, but hasn't linked his new website here after some time. I hope he doesn't mind me posting in his stead.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #66: Oh to Be Less Online, published by Zvi on June 1, 2024 on LessWrong. Tomorrow I will fly out to San Francisco, to spend Friday through Monday at the LessOnline conference at Lighthaven in Berkeley. If you are there, by all means say hello. If you are in the Bay generally and want to otherwise meet, especially on Monday, let me know that too and I will see if I have time to make that happen. Even without that hiccup, it continues to be a game of playing catch-up. Progress is being made, but we are definitely not there yet (and everything not AI is being completely ignored for now). Last week I pointed out seven things I was unable to cover, along with a few miscellaneous papers and reports. Out of those seven, I managed to ship on three of them: Ongoing issues at OpenAI, The Schumer Report and Anthropic's interpretability paper. However, OpenAI developments continue. Thanks largely to Helen Toner's podcast, some form of that is going back into the queue. Some other developments, including new media deals and their new safety board, are being covered normally. The post on DeepMind's new scaling policy should be up tomorrow. I also wrote a full post on a fourth, Reports of our Death, but have decided to shelve that post and post a short summary here instead. That means the current 'not yet covered queue' is as follows: 1. DeepMind's new scaling policy. 1. Should be out tomorrow before I leave, or worst case next week. 2. The AI Summit in Seoul. 3. Further retrospective on OpenAI including Helen Toner's podcast. Table of Contents 1. Introduction. 2. Table of Contents. 3. Language Models Offer Mundane Utility. You heard of them first. 4. Not Okay, Google. A tiny little problem with the AI Overviews. 5. OK Google, Don't Panic. Swing for the fences. Race for your life. 6. Not Okay, Meta. Your application to opt out of AI data is rejected. What? 7. Not Okay Taking Our Jobs. The question is, with or without replacement? 8. They Took Our Jobs Anyway. It's coming. 9. A New Leaderboard Appears. Scale.ai offers new capability evaluations. 10. Copyright Confrontation. Which OpenAI lawsuit was that again? 11. Deepfaketown and Botpocalypse Soon. Meta fails to make an ordinary effort. 12. Get Involved. Dwarkesh Patel is hiring. 13. Introducing. OpenAI makes media deals with The Atlantic and… Vox? Surprise. 14. In Other AI News. Jan Leike joins Anthropic, Altman signs giving pledge. 15. GPT-5 Alive. They are training it now. A security committee is assembling. 16. Quiet Speculations. Expectations of changes, great and small. 17. Open Versus Closed. Two opposing things cannot dominate the same space. 18. Your Kind of People. Verbal versus math versus otherwise in the AI age. 19. The Quest for Sane Regulation. Lina Khan on the warpath, Yang on the tax path. 20. Lawfare and Liability. How much work can tort law do for us? 21. SB 1047 Unconstitutional, Claims Paper. I believe that the paper is wrong. 22. The Week in Audio. Jeremie & Edouard Harris explain x-risk on Joe Rogan. 23. Rhetorical Innovation. Not everyone believes in GI. I typed what I typed. 24. Abridged Reports of Our Death. A frustrating interaction, virtue of silence. 25. Aligning a Smarter Than Human Intelligence is Difficult. You have to try. 26. People Are Worried About AI Killing Everyone. Yes, it is partly about money. 27. 
Other People Are Not As Worried About AI Killing Everyone. Assumptions. 28. The Lighter Side. Choose your fighter. Language Models Offer Mundane Utility Which model is the best right now? Michael Nielsen is gradually moving back to Claude Opus, and so am I. GPT-4o is fast and has some nice extra features, so when I figure it is 'smart enough' I will use it, but when I care most about quality and can wait a bit I increasingly go to Opus. Gemini I'm reserving for a few niche purposes, when I nee...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Dwarkesh's Podcast with OpenAI's John Schulman, published by Zvi on May 21, 2024 on LessWrong. Dwarkesh Patel recorded a Podcast with John Schulman, cofounder of OpenAI and at the time their head of current model post-training. Transcript here. John's job at the time was to make the current AIs do what OpenAI wanted them to do. That is an important task, but one that employs techniques that their at-the-time head of alignment, Jan Leike, made clear we should not expect to work on future more capable systems. I strongly agree with Leike on that. Then Sutskever left and Leike resigned, and John Schulman was made the new head of alignment, now charged with what superalignment efforts remain at OpenAI to give us the ability to control future AGIs and ASIs. This gives us a golden opportunity to assess where his head is at, without him knowing he was about to step into that role. There is no question that John Schulman is a heavyweight. He executes and ships. He knows machine learning. He knows post-training and mundane alignment. The question is, does he think well about this new job that has been thrust upon him? The Big Take Overall I was pleasantly surprised and impressed. In particular, I was impressed by John's willingness to accept uncertainty and not knowing things. He does not have a good plan for alignment, but he is far less confused about this fact than most others in similar positions. He does not know how to best navigate the situation if AGI suddenly happened ahead of schedule in multiple places within a short time frame, but I have not ever heard a good plan for that scenario, and his speculations seem about as directionally correct and helpful as one could hope for there. Are there answers that are cause for concern, and places where he needs to fix misconceptions as quickly as possible? Oh, hell yes. His reactions to potential scenarios involved radically insufficient amounts of slowing down, halting and catching fire, freaking out and general understanding of the stakes. Some of that I think was about John and others at OpenAI using a very weak definition of AGI (perhaps partly because of the Microsoft deal?) but also partly he does not seem to appreciate what it would mean to have an AI doing his job, which he says he expects in a median of five years. His answer on instrumental convergence is worrisome, as others have pointed out. He dismisses concerns that an AI given a bounded task would start doing things outside the intuitive task scope, or the dangers of an AI 'doing a bunch of wacky things' a human would not have expected. On the plus side, it shows understanding of the key concepts on a basic (but not yet deep) level, and he readily admits it is an issue with commands that are likely to be given in practice, such as 'make money.' In general, he seems willing to react to advanced capabilities by essentially scaling up various messy solutions in ways that I predict would stop working at that scale or with something that outsmarts you and that has unanticipated affordances and reason to route around typical in-distribution behaviors. He does not seem to have given sufficient thought to what happens when a lot of his assumptions start breaking all at once, exactly because the AI is now capable enough to be properly dangerous. 
As with the rest of OpenAI, another load-bearing assumption is that changes will be gradual throughout all this, including that past techniques will not break. I worry that will not hold. He has some common confusions about regulatory options and where we have viable intervention points within competitive dynamics and game theory, but that's understandable, and also was at the time very much not his department. As with many others, there seems to be a disconnect. A lot of the thinking here seems like excellent practical thi...
Dwarkesh Patel is the host of the Dwarkesh Podcast, where he's interviewed Mark Zuckerberg, Ilya Sutskever, Dario Amodei, and more AI leaders. Patel joins Big Technology to discuss the current state and future trajectory of AI development, including the potential for artificial general intelligence (AGI) and superintelligence. Tune in to hear Patel's insights on key issues like AI scaling, alignment, safety, and governance, as well as his perspective on the competitive landscape of the AI industry. We also cover the influence of the effective altruism movement on Patel's thinking, his podcast strategy, and the challenges and opportunities ahead as AI systems become more advanced. Listen for a wide-ranging and insightful conversation that grapples with some of the most important questions of our technological age. ---- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here's 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #61: Meta Trouble, published by Zvi on May 5, 2024 on LessWrong. Note by habryka: This post failed to import automatically from RSS for some reason, so it's a week late. Sorry for the hassle. The week's big news was supposed to be Meta's release of two versions of Llama-3. Everyone was impressed. These were definitely strong models. Investors felt differently. After earnings yesterday showed strong revenues but that Meta was investing heavily in AI, they took Meta stock down 15%. DeepMind and Anthropic also shipped, but in their cases it was multiple papers on AI alignment and threat mitigation. They get their own sections. We also did identify someone who wants to do what people claim the worried want to do, who is indeed reasonably identified as a 'doomer.' Because the universe has a sense of humor, that person's name is Tucker Carlson. Also we have a robot dog with a flamethrower. Table of Contents Previous post: On Llama-3 and Dwarkesh Patel's Podcast with Zuckerberg. 1. Introduction. 2. Table of Contents. 3. Language Models Offer Mundane Utility. Take the XML. Leave the hypnosis. 4. Language Models Don't Offer Mundane Utility. I have to praise you. It's my job. 5. Llama We Doing This Again. Investors are having none of it. 6. Fun With Image Generation. Everything is fun if you are William Shatner. 7. Deepfaketown and Botpocalypse Soon. How to protect your image model? 8. They Took Our Jobs. Well, they took some particular jobs. 9. Get Involved. OMB, DeepMind and CivAI are hiring. 10. Introducing. A robot dog with a flamethrower. You in? 11. In Other AI News. Mission first. Lots of other things after. 12. Quiet Speculations. Will it work? And if so, when? 13. Rhetorical Innovation. Sadly predictable. 14. Wouldn't You Prefer a Nice Game of Chess. Game theory in action. 15. The Battle of the Board. Reproducing an exchange on it for posterity. 16. New Anthropic Papers. Sleeper agents, detected and undetected. 17. New DeepMind Papers. Problems with agents, problems with manipulation. 18. Aligning a Smarter Than Human Intelligence is Difficult. Listen to the prompt. 19. People Are Worried About AI Killing Everyone. Tucker Carlson. I know. 20. Other People Are Not As Worried About AI Killing Everyone. Roon. 21. The Lighter Side. Click here. Language Models Offer Mundane Utility. I too love XML for this and realize I keep forgetting to use it. Even among humans, every time I see or use it I think 'this is great, this is exceptionally clear.' Hamel Husain: At first when I saw xml for Claude I was like "WTF Why XML". Now I LOVE xml so much, can't prompt without it. Never going back. Example from the docs: User: Hey Claude. Here is an email: {{EMAIL}}. Make this email more {{ADJECTIVE}}. Write the new version in XML tags. Assistant: Also notice the "prefill" for the answer (a nice thing to use w/xml). (A minimal sketch of this XML-plus-prefill pattern appears after this excerpt.) Imbue's CEO suggests that agents are not 'empowering' to individuals or 'democratizing' unless the individuals can code their own agent. The problem is of course that almost everyone wants to do zero setup work, let alone write code. People do not even want to toggle a handful of settings and you want them creating their own agents? And of course, when we say 'set up your own agent' what we actually mean is 'type into a chat box what you want and someone else's agent creates your agent.' 
Not only is this not empowering to individuals, it seems like a good way to start disempowering humanity in general. Claude can hypnotize a willing user. [EDIT: It has been pointed out to me that I misinterpreted this, and Janus was not actually hypnotized. I apologize for the error. I do still strongly believe that Claude could do it to a willing user, but we no longer have the example.] The variable names it chose are… somethi...
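Here is a minimal sketch of the XML-plus-prefill pattern referenced above, using the Anthropic Python SDK. The model name, tag names, sample email, and adjective are illustrative assumptions, not details from the original post or the quoted docs example.

```python
# Minimal sketch: XML-tag prompting with an assistant "prefill".
# Model name, tag names, and sample inputs below are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

email = "hey, the report is late again. fix it."  # stand-in for {{EMAIL}}
adjective = "polite"                              # stand-in for {{ADJECTIVE}}

response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=500,
    messages=[
        {
            "role": "user",
            # XML tags mark off the email and the requested rewrite unambiguously.
            "content": (
                f"Here is an email: <email>{email}</email>. "
                f"Make this email more {adjective}. "
                f"Write the new version in <rewritten_email> tags."
            ),
        },
        # The "prefill": starting the assistant turn with the opening tag
        # nudges the model to emit only the tagged rewrite.
        {"role": "assistant", "content": "<rewritten_email>"},
    ],
)

print(response.content[0].text)  # continuation after the prefilled opening tag
```

The prefill works because the API treats a trailing assistant message as the start of the model's reply, so the output tends to stay inside the tag you opened.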
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Your feedback for Actually After Hours: the unscripted, informal 80k podcast, published by Mjreard on April 24, 2024 on The Effective Altruism Forum. As you may have noticed, 80k After Hours has been releasing a new show where I and some other 80k staff sit down with a guest for a very free form, informal, video(!) discussion that sometimes touches on topical themes around EA and sometimes… strays a bit further afield. We have so far called it "Actually After Hours" in part because (as listeners may be relieved to learn), I and the other hosts don't count this against work time and the actual recordings tend to take place late at night. We've just released episode 3 with Dwarkesh Patel and I feel like this is a good point to gather broader feedback on the early episodes. I'll give a little more background on the rationale for the show below, but if you've listened to [part of] any episode, I'm interested to know what you did or didn't enjoy or find valuable as well as specific ideas for changes. In particular, if you have ideas for a better name than "Actually After Hours," this early point is a good time for that! Rationales Primarily, I have the sense that there's too much doom, gloom, and self-flagellation around EA online and this sits in strange contrast to the attitudes of the EAs I know offline. The show seemed like a low cost way to let people know that the people doing important work from an EA perspective are actually fun, interesting, and even optimistic in addition to being morally serious. It also seemed like a way to highlight/praise individual contributors to important projects. Rob/Luisa will bring on the deep experts and leaders of orgs to talk technical details about their missions and theories of change, but I think a great outcome for more of our users will be doing things like Joel or Chana and I'd like to showcase more people like them and convey that they're still extremely valuable. Another rationale which I haven't been great on so far is expanding the qualitative options people have for engaging with Rob Wiblin-style reasoning. The goal was (and will return to being soon) sub-1-hour, low stakes episodes where smart people ask cruxy questions and steelman alternative perspectives with some in-jokes and Twitter controversies thrown in to make it fun. An interesting piece of feedback we've gotten from 80k plan changes is that it's rare that a single episode on some specific topic was a big driver of someone going to work on that area, but someone listening to many episodes across many topics was predictive of them often doing good work in ~any cause area. So the hope is that shorter, less focused/formal episodes create a lower threshold to hitting play (vs 3 hours with an expert on a single, technical, weighty subject) and therefore more people picking up on both the news and the prioritization mindset. Importantly, I don't see this as intro content. I think it only really makes sense for people already familiar with 80k and EA. And for them, it's a way of knowing more people in these spaces and absorbing the takes/conversations that never get written down. Much of what does get written down is often carefully crafted for broad consumption and that can often miss something important. Maybe this show can be a place for that. Thanks for any and all feedback! 
I guess it'd be useful to write short comments that capture high level themes and let people up/down vote based on agreement. Feel free to make multiple top-level comments if you have them and DM or email me (matt at 80000hours dot org) if you'd rather not share publicly. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Matt Reardon, Arden Koehler, and Huon Porteous sit down with Dwarkesh Patel to find out how you become a world-famous (among tech intellectuals) podcast host at 23. We also discuss how 80k would have advised 21-year-old Dwarkesh and 80k strategy more broadly. You can check out the video version of this episode on YouTube at https://youtu.be/H5px6CQTe8o Topics covered: How did Dwarkesh start landing world-class guests? Why is Bryan Caplan such an easy get? How does Dwarkesh think about ideological labels? Dwarkesh explains his pivot towards AI. Do intellectuals matter for progress? Was Microsoft or the Gates Foundation more impactful? Do biographies ever matter more than their subjects? How would 80k have advised young Dwarkesh? What does motivate people in government and what should motivate people in government? Should do-gooders seek power? Should 80k advice always aim at the tails? Are people just layering their simple political memes onto the AI debate? How do you boost people's agency? How do we feel about self-perceived entrepreneurs? What's the tradeoff between having the right initiative and having the right ideas? How does 80k's advice deal with AI timelines? Are 80k users self-selected for not being the highest potential people? Should you assume that everyone can make it to the extreme tail? In how many areas should 80k have detailed advice? What happened to the EA brand?
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Llama-3 and Dwarkesh Patel's Podcast with Zuckerberg, published by Zvi on April 22, 2024 on LessWrong. It was all quiet. Then it wasn't. Note the timestamps on both of these. Dwarkesh Patel did a podcast with Mark Zuckerberg on the 18th. It was timed to coincide with the release of much of Llama-3, very much the approach of telling your story directly. Dwarkesh is now the true tech media. A meteoric rise, and well earned. This is two related posts in one. First I cover the podcast, then I cover Llama-3 itself. My notes are edited to incorporate context from later explorations of Llama-3, as I judged that the readability benefits exceeded the purity costs. Podcast Notes: Llama-3 Capabilities. (1:00) They start with Llama 3 and the new L3-powered version of Meta AI. Zuckerberg says "With Llama 3, we think now that Meta AI is the most intelligent, freely-available assistant that people can use." If this means 'free as in speech' then the statement is clearly false. So I presume he means 'free as in beer.' Is that claim true? Is Meta AI now smarter than GPT-3.5, Claude 2 and Gemini Pro 1.0? As I write this it is too soon to tell. Gemini Pro 1.0 and Claude 3 Sonnet are slightly ahead of Llama-3 70B on the Arena leaderboard. But it is close. The statement seems like a claim one can make within 'reasonable hype.' Also, Meta integrates Google and Bing for real-time knowledge, so the question there is if that process is any good, since most browser use by LLMs is not good. (1:30) Meta are going in big on their UIs, top of Facebook, Instagram and Messenger. That makes sense if they have a good product that is robust, and safe in the mundane sense. If it is not, this is going to be at the top of chat lists for teenagers automatically, so whoo boy. Even if it is safe, there are enough people who really do not like AI that this is probably a whoo boy anyway. Popcorn time. (1:45) They will have the ability to animate images, and it generates high quality images as you type and updates them in real time as you add details. I can confirm this feature is cool. He promises multimodality, more 'multi-linguality' and bigger context windows. (3:00) Now the technical stuff. Llama-3 follows tradition in training models in three sizes, here the 8b and 70b that released on 4/18, and a 405b that is still training. He says 405b is already around 85 MMLU and they expect leading benchmarks. The 8b Llama-3 is almost as good as the 70b Llama-2. The Need for Inference. (5:15) What went wrong earlier for Meta and how did they fix it? He highlights Reels, with its push to recommend 'unconnected content,' meaning things you did not ask for, and not having enough compute for that. They were behind. So they ordered double the GPUs they needed. They didn't realize the type of model they would want to train. (7:30) Back in 2006, what would Zuck have sold for when he turned down $1 billion? He says he realized if he sold he'd just build another similar company, so why sell? It wasn't about the number, he wasn't in position to evaluate the number. And I think that is actually wise there. You can realize that you do not want to accept any offer someone would actually make. (9:15) When did making AGI become a key priority? Zuck points out Facebook AI Research (FAIR) is 10 years old as a research group. 
Over that time it has become clear you need AGI, he says, to support all their other products. He notes that training models on coding generalizes and helps their performance elsewhere, and that was a top focus for Llama-3. So Meta needs to solve AGI because if they don't 'their products will be lame.' It seems increasingly likely, as we will see in several ways, that Zuck does not actually believe in 'real' AGI. By 'AGI' he means somewhat more capable AI. (13:40) What will the Llama that makes cool produ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #58: Stargate AGI, published by Zvi on April 5, 2024 on LessWrong. Another round? Of economists projecting absurdly small impacts, of Google publishing highly valuable research, a cycle of rhetoric, more jailbreaks, and so on. Another great podcast from Dwarkesh Patel, this time going more technical. Another proposed project with a name that reveals quite a lot. A few genuinely new things, as well. On the new offerings front, DALLE-3 now allows image editing, so that's pretty cool. Table of Contents Don't miss out on Dwarkesh Patel's podcast with Sholto Douglas and Trenton Bricken, which got the full write-up treatment. Introduction. Table of Contents. Language Models Offer Mundane Utility. Never stop learning. Language Models Don't Offer Mundane Utility. The internet is still for porn. Clauding Along. Good at summarization but not fact checking. Fun With Image Generation. DALLE-3 now has image editing. Deepfaketown and Botpocalypse Soon. OpenAI previews voice duplication. They Took Our Jobs. Employment keeps rising, will continue until it goes down. The Art of the Jailbreak. It's easy if you try and try again. Cybersecurity. Things worked out this time. Get Involved. Technical AI Safety Conference in Tokyo tomorrow. Introducing. Grok 1.5, 25 YC company models and 'Dark Gemini.' In Other AI News. Seriously, Google, stop publishing all your trade secrets. Stargate AGI. New giant data center project, great choice of cautionary title. Larry Summers Watch. Economists continue to have faith in nothing happening. Quiet Speculations. What about interest rates? Also AI personhood. AI Doomer Dark Money Astroturf Update. OpenPhil annual report. The Quest for Sane Regulations. The devil is in the details. The Week in Audio. A few additional offerings this week. Rhetorical Innovation. The search for better critics continues. Aligning a Smarter Than Human Intelligence is Difficult. What are human values? People Are Worried About AI Killing Everyone. Can one man fight the future? The Lighter Side. The art must have an end other than itself. Language Models Offer Mundane Utility A good encapsulation of a common theme here: Paul Graham: AI will magnify the already great difference in knowledge between the people who are eager to learn and those who aren't. If you want to learn, AI will be great at helping you learn. If you want to avoid learning? AI is happy to help with that too. Which AI to use? Ethan Mollick examines our current state of play. Ethan Mollick (I edited in the list structure): There is a lot of debate over which of these models are best, with dueling tests suggesting one or another dominates, but the answer is not clear cut. All three have different personalities and strengths, depending on whether you are coding or writing. Gemini is an excellent explainer but doesn't let you upload files. GPT-4 has features (namely Code Interpreter and GPTs) that greatly extend what it can do. Claude is the best writer and seems capable of surprising insight. But beyond the differences, there are four important similarities to know about: All three are full of ghosts, which is to say that they give you the weird illusion of talking to a real, sentient being - even though they aren't. All three are multimodal, in that they can "see" images. None of them come with instructions. They all prompt pretty similarly to each other. 
I would add there are actually four models, not three, because there are (at last!) two Geminis, Gemini Advanced and Gemini Pro 1.5, if you have access to the 1.5 beta. So I would add a fourth line for Gemini Pro 1.5: Gemini Pro has a giant context window and uses it well. My current heuristic is something like this: If you need basic facts or explanation, use Gemini Advanced. If you want creativity or require intelligence and nuance, or code, use Claude. If ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Notes on Dwarkesh Patel's Podcast with Sholto Douglas and Trenton Bricken, published by Zvi on April 2, 2024 on LessWrong. Dwarkesh Patel continues to be on fire, and the podcast notes format seems like a success, so we are back once again. This time the topic is how LLMs are trained, work and will work in the future. Timestamps are for YouTube. Where I inject my own opinions or takes, I do my best to make that explicit and clear. This was highly technical compared to the average podcast I listen to, or that Dwarkesh does. This podcast definitely threatened to technically go over my head at times, and some details definitely did go over my head outright. I still learned a ton, and expect you will too if you pay attention. This is an attempt to distill what I found valuable, and what questions I found most interesting. I did my best to make it intuitive to follow even if you are not technical, but in this case one can only go so far. Enjoy. (1:30) Capabilities only podcast, Trenton has 'solved alignment.' April fools! (2:15) Huge context windows are underhyped, a huge deal. It occurs to me that the issue is about the trivial inconvenience of providing the context. Right now I mostly do not bother providing context on my queries. If that happened automatically, it would be a whole different ballgame. (2:50) Could the models be sample efficient if you can fit it all in the context window? Speculation is it might work out of the box. (3:45) Does this mean models are already in some sense superhuman, with this much context and memory? Well, yeah, of course. Computers have been superhuman at math and chess and so on for a while. Now LLMs have quickly gone from having worse short term working memory than humans to vastly superior short term working memory. Which will make a big difference. The pattern will continue. (4:30) In-context learning is similar to gradient descent. It gets problematic for adversarial attacks, but of course you can ignore that because, as Trenton reiterates, alignment is solved, and certainly it is solved for such mundane practical concerns. But it does seem like he's saying if you do this then 'you're fine-tuning but in a way where you cannot control what is going on'? (6:00) Models need to learn how to learn from examples in order to take advantage of long context. So does that mean the task of intelligence requires long context? That this is what causes the intelligence, in some sense, they ask? I don't think you can reverse it that way, but it is possible that this will orient work in directions that are more effective? (7:00) Dwarkesh asks about how long contexts link to agent reliability. Douglas says this is more about lack of nines of reliability, and GPT-4-level models won't cut it there. And if you need to get multiple things right, the reliability numbers have to multiply together, which does not go well in bulk. If that is indeed the issue then it is not obvious to me the extent to which scaffolding and tricks (e.g. Devin, probably) render this fixable. (8:45) Performance on complex tasks follows log scores. It gets it right one time in a thousand, then one in a hundred, then one in ten. So there is a clear window where the thing is in practice useless, but you know it soon won't be. And we are in that window on many tasks. This goes double if you have complex multi-step tasks. 
If you have a three-step task and are getting each step right one time in a thousand, the full task is one in a billion, but you are not so far from being able to do the task in practice. (9:15) The model being presented here is predicting scary capabilities jumps in the future. LLMs can actually (unreliably) do all the subtasks, including identifying what the subtasks are, for a wide variety of complex tasks, but they fall over on subtasks too often and we do not know how to...
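The compounding-reliability arithmetic above is easy to sanity-check. A minimal sketch, assuming (simplistically) that steps are independent and equally reliable:

```python
# Toy model of compounding reliability across a multi-step task.
# Assumes independent steps with equal success probability -- a simplification.

def task_success_rate(per_step: float, steps: int) -> float:
    """Probability of completing all steps without a failure."""
    return per_step ** steps

for per_step in (0.001, 0.01, 0.1, 0.9, 0.99, 0.999):
    print(f"per-step {per_step:>6}: 3-step task succeeds "
          f"{task_success_rate(per_step, 3):.2e} of the time")

# per-step 0.001 -> 1e-09 (the "one in a billion" case above);
# pushing each step to 0.999 gets the 3-step task to roughly 0.997.
```

This is why adding "nines" to per-step reliability matters so much more for agents chaining many subtasks than it does for single-shot benchmarks.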
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Best Tacit Knowledge Videos on Every Subject, published by Parker Conley on March 31, 2024 on LessWrong. TL;DR Tacit knowledge is extremely valuable. Unfortunately, developing tacit knowledge is usually bottlenecked by apprentice-master relationships. Tacit Knowledge Videos could widen this bottleneck. This post is a Schelling point for aggregating these videos - aiming to be The Best Textbooks on Every Subject for Tacit Knowledge Videos. Scroll down to the list if that's what you're here for. Post videos that highlight tacit knowledge in the comments and I'll add them to the post. Experts in the videos include Stephen Wolfram, Holden Karnofsky, Andy Matuschak, Jonathan Blow, George Hotz, and others. What are Tacit Knowledge Videos? Samo Burja claims YouTube has opened the gates for a revolution in tacit knowledge transfer. Burja defines tacit knowledge as follows: Tacit knowledge is knowledge that can't properly be transmitted via verbal or written instruction, like the ability to create great art or assess a startup. This tacit knowledge is a form of intellectual dark matter, pervading society in a million ways, some of them trivial, some of them vital. Examples include woodworking, metalworking, housekeeping, cooking, dancing, amateur public speaking, assembly line oversight, rapid problem-solving, and heart surgery. In my observation, domains like housekeeping and cooking have already seen many benefits from this revolution. Could tacit knowledge in domains like research, programming, mathematics, and business be next? I'm not sure, but maybe this post will help push the needle forward. For the purpose of this post, Tacit Knowledge Videos are any video that communicates "knowledge that can't properly be transmitted via verbal or written instruction". Here are some examples: Neel Nanda, who leads the Google DeepMind mechanistic interpretability team, has a playlist of "Research Walkthroughs". AI Safety research is discussed a lot around here. Watching research videos could help instantiate what AI research really looks and feels like. GiveWell has public audio recordings of its Board Meetings from 2007-2020. Participants include Elie Hassenfeld, Holden Karnofsky, Timothy Ogden, Rob Reich, Tom Rutledge, Brigid Slipka, Cari Tuna, Julia Wise, and others. Influential business meetings are not usually made public. I feel I have learned some about business communication and business operations, among other things, by listening to these recordings. Andy Matuschak recorded himself studying Quantum Mechanics with Dwarkesh Patel and doing research. Andy Matuschak "helped build iOS at Apple and led R&D at Khan Academy". I found it interesting to have a peek into Matuschak's spaced repetition practice and various studying heuristics and habits, as well as his process of digesting and taking notes on papers. Call to Action Share links to Tacit Knowledge Videos below! Share them frivolously! These videos are uncommon - the bottleneck to the YouTube knowledge transfer revolution is quantity, not quality. I will add the shared videos to the post. Here are the loose rules: Recall a video that you've seen that communicates tacit knowledge - "knowledge that can't properly be transmitted via verbal or written instruction". A rule of thumb for sharing: could a reader find this video through one or two YouTube searches? If not, share it. 
Post the title and the URL of the video. Provide information indicating why the expert in the video is credible. (However, don't let this last rule stop you from sharing a video! Again - quantity, not quality.)[1] For information on how to best use these videos, Cedric Chin and Jacob Steinhardt have some potentially relevant practical advice. Andy Matuschak also has some working notes about this idea generally. Additionally, DM or email me (email in L...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Notes on Dwarkesh Patel's Podcast with Demis Hassabis, published by Zvi on March 2, 2024 on LessWrong. Demis Hassabis was interviewed twice this past week. First, he was interviewed on Hard Fork. Then he had a much more interesting interview with Dwarkesh Patel. This post covers my notes from both interviews, mostly the one with Dwarkesh. Hard Fork. Hard Fork was less fruitful, because they mostly asked what for me are the wrong questions and mostly got answers I presume Demis has given many times. So I only noticed two things, neither of which is ultimately surprising. They do ask about The Gemini Incident, although only about the particular issue with image generation. Demis gives the generic 'it should do what the user wants and this was dumb' answer, which I buy he likely personally believes. When asked about p(doom) he expresses dismay about the state of discourse and says around 42:00 that 'well Geoffrey Hinton and Yann LeCun disagree so that indicates we don't know, this technology is so transformative that it is unknown. It is nonsense to put a probability on it. What I do know is it is non-zero, that risk, and it is worth debating and researching carefully… we don't want to wait until the eve of AGI happening.' He says we want to be prepared even if the risk is relatively small, without saying what would count as small. He also says he hopes in five years to give us a better answer, which is evidence against him having super short timelines. I do not think this is the right way to handle probabilities in your own head. I do think it is plausibly a smart way to handle public relations around probabilities, given how people react when you give a particular p(doom). I am of course deeply disappointed that Demis does not think he can differentiate between the arguments of Geoffrey Hinton versus Yann LeCun, and instead places the weight on the accomplishments and thus implied credibility of the people. He did not get that way, or win Diplomacy championships, thinking like that. I also don't think he was being fully genuine here. Otherwise, this seemed like an inessential interview. Demis did well but was not given new challenges to handle. Dwarkesh Patel. Demis Hassabis also talked to Dwarkesh Patel, which is of course self-recommending. Here you want to pay attention, and I paused to think things over and take detailed notes. Five minutes in I had already learned more interesting things than I did from the entire Hard Fork interview. Here is the transcript, which is also helpful. (1:00) Dwarkesh first asks Demis about the nature of intelligence, whether it is one broad thing or the sum of many small things. Demis says there must be some common themes and underlying mechanisms, although there are also specialized parts. I strongly agree with Demis. I do not think you can understand intelligence, of any form, without some form of the concept of G. (1:45) Dwarkesh follows up by asking then why doesn't lots of data in one domain generalize to other domains? Demis says often it does, such as coding improving reasoning (which also happens in humans), and he expects more chain transfer. (4:00) Dwarkesh asks what insights neuroscience brings to AI. Demis points to many early AI concepts. Going forward, questions include how brains form world models or memory. 
(6:00) Demis thinks scaffolding via tree search or AlphaZero-style approaches for LLMs is super promising. He notes they're working hard on search efficiency in many of their approaches so they can search further. (9:00) Dwarkesh notes that Go and Chess have clear win conditions, real life does not, asks what to do about this. Demis agrees this is a challenge, but that usually 'in scientific problems' there are ways to specify goals. Suspicious dodge? (10:00) Dwarkesh notes humans are super sample efficient, Demis says it ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #53: One More Leap, published by Zvi on February 29, 2024 on LessWrong. The main event continues to be the fallout from The Gemini Incident. Everyone is focusing there now, and few are liking what they see. That does not mean other things stop. There were two interviews with Demis Hassabis, with Dwarkesh Patel's being predictably excellent. We got introduced to another set of potentially highly useful AI products. Mistral partnered up with Microsoft the moment Mistral got France to pressure the EU to agree to cripple the regulations that Microsoft wanted crippled. You know. The usual stuff. Table of Contents Introduction. Table of Contents. Language Models Offer Mundane Utility. Copilot++ suggests code edits. Language Models Don't Offer Mundane Utility. Still can't handle email. OpenAI Has a Sales Pitch. How does the sales team think about AGI? The Gemini Incident. CEO Pichai responds, others respond to that. Political Preference Tests for LLMs. How sensitive to details are the responses? GPT-4 Real This Time. What exactly should count as plagiarized? Fun With Image Generation. MidJourney v7 will have video. Deepfaketown and Botpocalypse Soon. Dead internet coming soon? They Took Our Jobs. Allow our bot to provide you with customer service. Get Involved. UK Head of Protocols. Sounds important. Introducing. Evo, Emo, Genie, Superhuman, Khanmigo, oh my. In Other AI News. 'Amazon AGI' team? Great. Quiet Speculations. Unfounded confidence. Mistral Shows Its True Colors. The long con was on, now the reveal. The Week in Audio. Demis Hassabis on Dwarkesh Patel, plus more. Rhetorical Innovation. Once more, I suppose with feeling. Open Model Weights Are Unsafe and Nothing Can Fix This. Another paper. Aligning a Smarter Than Human Intelligence is Difficult. New visualization. Other People Are Not As Worried About AI Killing Everyone. Worry elsewhere? The Lighter Side. Try not to be too disappointed. Language Models Offer Mundane Utility. Take notes for your doctor during your visit. Dan Shipper spent a week with Gemini 1.5 Pro and reports it is fantastic; the large context window has lots of great uses. In particular, Dan focuses on feeding in entire books and code bases. Dan Shipper: Somehow, Google figured out how to build an AI model that can comfortably accept up to 1 million tokens with each prompt. For context, you could fit all of Eliezer Yudkowsky's 1,967-page opus Harry Potter and the Methods of Rationality into every message you send to Gemini. (Why would you want to do this, you ask? For science, of course.) Eliezer Yudkowsky: This is a slightly strange article to read if you happen to be Eliezer Yudkowsky. Just saying. What matters in AI depends so much on what you are trying to do with it. What you try to do with it depends on what you believe it can help you do, and what it makes easy to do. A new subjective benchmark proposal based on human evaluation of practical queries, which does seem like a good idea. Gets sensible results with the usual rank order, but did not evaluate Gemini Advanced or Gemini 1.5. To ensure your query works, raise the stakes? Or is the trick to frame yourself as Hiro Protagonist? Mintone: I'd be interested in seeing a similar analysis but with a slight twist: We use (in production!) a prompt that includes words to the effect of "If you don't get this right then I will be fired and lose my house". 
It consistently performs remarkably well - we used to use a similar tactic to force JSON output before that was an option, the failure rate was around 3/1000 (although it sometimes varied key names). I'd like to see how the threats/tips to itself balance against exactly the same but for the "user" reply. Linch: Does anybody know why this works??? I understand prompts to mostly be about trying to get the AI to be in the ~right data distributio...
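For what it is worth, the usual belt-and-suspenders version of that JSON-forcing tactic is to validate the reply and retry on failure, whatever wording the prompt uses. A minimal sketch, where call_model is a hypothetical placeholder for whichever LLM client you use (it is not an API from the thread above):

```python
# Sketch: ask for JSON only, then validate the reply and retry on parse failure.
# `call_model` is a hypothetical placeholder for your LLM client call.
import json

def call_model(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM client of choice")

def get_json(prompt: str, retries: int = 3) -> dict:
    """Ask for JSON only; re-ask (with the parse error appended) until it parses."""
    request = prompt + "\nRespond with valid JSON only, no prose."
    for _ in range(retries):
        reply = call_model(request)
        try:
            return json.loads(reply)
        except json.JSONDecodeError as err:
            # Feed the parse error back so the next attempt can self-correct.
            request = f"{prompt}\nYour last reply was not valid JSON ({err}). JSON only."
    raise ValueError("model never returned valid JSON")
```

The validation loop, rather than the emotional framing, is what keeps failure rates like the 3/1000 figure from mattering in production.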
Dwarkesh Patel is a renowned podcaster who has hosted interviews with luminaries like Marc Andreessen, Eliezer Yudkowsky, and Grant Sanderson. He's best known for the extraordinary effort he puts into researching the topics he speaks with his guests about, and for covering exceptionally wide intellectual ground. Learn more about your ad choices. Visit megaphone.fm/adchoices
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #50: The Most Dangerous Thing, published by Zvi on February 8, 2024 on LessWrong. In a week with two podcasts I covered extensively, I was happy that there was little other news. That is, until right before press time, when Google rebranded Bard to Gemini, released an app for that, and offered a premium subscription ($20/month) for Gemini Ultra. Gemini Ultra is Here. I have had the honor and opportunity to check out Gemini Advanced before its release. The base model seems to be better than GPT-4. It seems excellent for code, for explanations and answering questions about facts or how things work, for generic displays of intelligence, for telling you how to do something. Hitting the Google icon to have it look for sources is great. In general, if you want to be a power user, if you want to push the envelope in various ways, Gemini is not going to make it easy on you. However, if you want to be a normal user, doing the baseline things that I or others most often find most useful, and you are fine with what Google 'wants' you to be doing? Then it seems great. The biggest issue is that Gemini can be conservative with its refusals. It is graceful, but it will still often not give you what you wanted. There is a habit of telling you how to do something, when you wanted Gemini to go ahead and do it. Trying to get an estimation or probability of any kind can be extremely difficult, and that is a large chunk of what I often want. If the model is not sure, it will say it is not sure and good luck getting it to guess, even when it knows far more than you. This is the 'doctor, is this a 1%, 10%, 50%, 90% or 99% chance?' situation, where they say 'it could be cancer' and they won't give you anything beyond that. I've learned to ask such questions elsewhere. There are also various features in ChatGPT, like GPTs and custom instructions and playground settings, that are absent. Here I do not know what Google will decide to do. I expect this to continue to be the balance. Gemini likely remains relatively locked down and harder to customize or push the envelope with, but very good at normal cases, at least until OpenAI releases GPT-5, then who knows. There are various other features where there is room for improvement. Knowledge of the present I found impossible to predict; sometimes it knew things and it was great, other times it did not. The Gemini Extensions are great when they work and it would be great to get more of them, but are finicky and made several mistakes, and we only get these five for now. The image generation is limited to 512x512 (and is unaware that it has this restriction). There are situations in which your clear intent is 'please do or figure out X for me' and instead it tells you how to do or figure out X yourself. There are a bunch of query types that could use more hard-coding (or fine-tuning) to get them right, given how often I assume they will come up. And so on. While there is still lots of room for improvement and the restrictions can frustrate, Gemini Advanced has become my default LLM to use over ChatGPT for most queries. I plan on subscribing to both Gemini and ChatGPT. I am not sure which I would pick if I had to choose. Table of Contents Don't miss the Dwarkesh Patel interview with Tyler Cowen. You may or may not wish to miss the debate between Based Beff Jezos and Connor Leahy. Introduction. Gemini Ultra is here. 
Table of Contents. Language Models Offer Mundane Utility. Read ancient scrolls, play blitz chess. Language Models Don't Offer Mundane Utility. Keeping track of who died? Hard. GPT-4 Real This Time. The bias happens during fine-tuning. Are agents coming? Fun With Image Generation. Edit images directly in Copilot. Deepfaketown and Botpocalypse Soon. $25 million payday, threats to democracy. They Took Our Jobs. Journalists and lawyers. Get In...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Dwarkesh's 3rd Podcast With Tyler Cowen, published by Zvi on February 4, 2024 on LessWrong. This post is extensive thoughts on Tyler Cowen's excellent talk with Dwarkesh Patel. It is interesting throughout. You can read this while listening, after listening or instead of listening, and is written to be compatible with all three options. The notes are in order in terms of what they are reacting to, and are mostly written as I listened. I see this as having been a few distinct intertwined conversations. Tyler Cowen knows more about more different things than perhaps anyone else, so that makes sense. Dwarkesh chose excellent questions throughout, displaying an excellent sense of when to follow up and how, and when to pivot. The first conversation is about Tyler's book GOAT about the world's greatest economists. Fascinating stuff, this made me more likely to read and review GOAT in the future if I ever find the time. I mostly agreed with Tyler's takes here, to the extent I am in position to know, as I have not read that much in the way of what these men wrote, and at this point even though I very much loved it at the time (don't skip the digression on silver, even, I remember it being great) The Wealth of Nations is now largely a blur to me. There were also questions about the world and philosophy in general but not about AI, that I would mostly put in this first category. As usual, I have lots of thoughts. The second conversation is about expectations given what I typically call mundane AI. What would the future look like, if AI progress stalls out without advancing too much? We cannot rule such worlds out and I put substantial probability on them, so it is an important and fascinating question. If you accept the premise of AI remaining within the human capability range in some broad sense, where it brings great productivity improvements and rewards those who use it well but remains foundationally a tool and everything seems basically normal, essentially the AI-Fizzle world, then we have disagreements but Tyler is an excellent thinker about these scenarios. Broadly our expectations are not so different here. That brings us to the third conversation, about the possibility of existential risk or the development of more intelligent and capable AI that would have greater affordances. For a while now, Tyler has asserted that such greater intelligence likely does not much matter, that not so much would change, that transformational effects are highly unlikely, whether or not they constitute existential risks. That the world will continue to seem normal, and follow the rules and heuristics of economics, essentially Scott Aaronson's Futurama. Even when he says AIs will be decentralized and engage in their own Hayekian trading with their own currency, he does not think this has deep implications, nor does it imply much about what else is going on beyond being modestly (and only modestly) productive. Then at other times he affirms the importance of existential risk concerns, and indeed says we will be in need of a hegemon, but the thinking here seems oddly divorced from other statements, and thus often rather confused. Mostly it seems consistent with the view that it is much easier to solve alignment quickly, build AGI and use it to generate a hegemon, than it would be to get any kind of international coordination. 
And also that failure to quickly build AI risks our civilization collapsing. But also I notice this implies that the resulting AIs will be powerful enough to enable hegemony and determine the future, when in other contexts he does not think they will even enable sustained 10% GDP growth. Thus at this point, I choose to treat most of Tyler's thoughts on AI as if they are part of the second conversation, with an implicit 'assuming an AI at least semi-fizzle' attached ...
In this episode, Nathan sits down with Dan O'Connell, Chief Strategy Officer at Dialpad. They discuss building their own language models using 5 billion minutes of business calls, custom speech recognition models for every customer, and the challenges of bringing AI into business. If you need an ecommerce platform, check out our sponsor Shopify: https://shopify.com/cognitive for a $1/month trial period. We're sharing a few of Nathan's favorite AI scouting episodes from other shows. Today, Shane Legg, Cofounder at Deepmind and its current Chief AGI Scientist, shares his insights with Dwarkesh Patel on AGI's timeline, the new architectures needed for AGI, and why multimodality will be the next big landmark. If you need an ecommerce platform, check out our sponsor Shopify: https://shopify.com/cognitive for a $1/month trial period. We're hiring across the board at Turpentine and for Erik's personal team on other projects he's incubating. He's hiring a Chief of Staff, EA, Head of Special Projects, Investment Associate, and more. For a list of JDs, check out: eriktorenberg.com. --- SPONSORS: Shopify is the global commerce platform that helps you sell at every stage of your business. Shopify powers 10% of ALL eCommerce in the US. And Shopify's the global force behind Allbirds, Rothy's, and Brooklinen, and 1,000,000s of other entrepreneurs across 175 countries.From their all-in-one e-commerce platform, to their in-person POS system – wherever and whatever you're selling, Shopify's got you covered. With free Shopify Magic, sell more with less effort by whipping up captivating content that converts – from blog posts to product descriptions using AI. Sign up for $1/month trial period: https://shopify.com/cognitive Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off www.omneky.com NetSuite has 25 years of providing financial software for all your business needs. More than 36,000 businesses have already upgraded to NetSuite by Oracle, gaining visibility and control over their financials, inventory, HR, eCommerce, and more. If you're looking for an ERP platform ✅ head to NetSuite: http://netsuite.com/cognitive and download your own customized KPI checklist. X/SOCIAL: @labenz (Nathan) @dialdoc (Dan) @dialpad @CogRev_Podcast (Cognitive Revolution) TIMESTAMPS: (00:00) - Introduction and Welcome (06:50) - Interview with Dan O'Connell, Chief AI and Strategy Officer at Dialpad (07:13) - The Functionality and Utility of Dialpad (17:20) - The Development of Dialpad's Large Language Model Trained on 5 Billion Minutes of Calls (19:56) - The Future of AI in Business (22:21) - Sponsor Break: Shopify (23:56) - The Challenges and Opportunities of AI Development (31:17) - Prioritizing latency, capacity, and cost when evaluating AI (39:41) - Most Loved AI Features in Dialpad (42:01) - The Role of AI in Quality Assurance (43:10) - The Future of Transcription Accuracy (44:06) - The Importance of Speech Recognition in Business (46:59) - Personalizing AI for Better Business Interactions (47:01) - The Role of AI in Content Generation (52:47) - The Challenges and Opportunities of AI in Sales and Support
We're sharing a few of Nathan's favorite AI scouting episodes from other shows. Today, Shane Legg, Cofounder at Deepmind and its current Chief AGI Scientist, shares his insights with Dwarkesh Patel on AGI's timeline, the new architectures needed for AGI, and why multimodality will be the next big landmark. If you need an ecommerce platform, check out our sponsor Shopify: https://shopify.com/cognitive for a $1/month trial period. You can subscribe to The Dwarkesh Podcast here: https://www.youtube.com/@DwarkeshPatel We're hiring across the board at Turpentine and for Erik's personal team on other projects he's incubating. He's hiring a Chief of Staff, EA, Head of Special Projects, Investment Associate, and more. For a list of JDs, check out: eriktorenberg.com. --- SPONSORS: Shopify is the global commerce platform that helps you sell at every stage of your business. Shopify powers 10% of ALL eCommerce in the US. And Shopify's the global force behind Allbirds, Rothy's, and Brooklinen, and 1,000,000s of other entrepreneurs across 175 countries.From their all-in-one e-commerce platform, to their in-person POS system – wherever and whatever you're selling, Shopify's got you covered. With free Shopify Magic, sell more with less effort by whipping up captivating content that converts – from blog posts to product descriptions using AI. Sign up for $1/month trial period: https://shopify.com/cognitive Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off www.omneky.com NetSuite has 25 years of providing financial software for all your business needs. More than 36,000 businesses have already upgraded to NetSuite by Oracle, gaining visibility and control over their financials, inventory, HR, eCommerce, and more. If you're looking for an ERP platform ✅ head to NetSuite: http://netsuite.com/cognitive and download your own customized KPI checklist. X/SOCIAL: @labenz (Nathan) @dwarkesh_sp (Dwarkesh) @shanelegg (Shane) @CogRev_Podcast (Cognitive Revolution) TIMESTAMPS: (00:00:00) - Episode Preview with Nathan's Intro (00:02:45) - Conversation with Dwarkesh and Shane begins (00:14:26) - Do we need new architectures? (00:17:31) - Sponsors: Shopify (00:19:40) - Is search needed for creativity? (00:31:46) - Impact of Deepmind on safety vs capabilities (00:32:48) - Sponsors: Netsuite | Omneky (00:37:10) - Timelines (00:45:18) - Multimodality
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Based Beff Jezos and the Accelerationists, published by Zvi on December 6, 2023 on LessWrong. It seems Forbes decided to doxx the identity of e/acc founder Based Beff Jezos. They did so using voice matching software. Given that Jezos is owning it now that it has happened, rather than hoping it all goes away, and people are talking about him, this seems like a good time to cover this 'Beff Jezos' character and create a reference point for if he continues to come up later. If that is not relevant to your interests, you can and should skip this one. Do Not Doxx People. First order of business: Bad Forbes. Stop it. Do not doxx people. Do not doxx people with a fox. Do not dox people with a bagel with creme cheese and lox. Do not dox people with a post. Do not dox people who then boast. Do not dox people even if that person is advocating for policies you believe are likely to kill you, kill everyone you love and wipe out all Earth-originating value in the universe in the name of their thermodynamic God. If you do doxx them, at least own that you doxxed them rather than denying it. There is absolutely nothing wrong with using a pseudonym with a cumulative reputation, if you feel that is necessary to send your message. Say what you want about Jezos, he believes in something, and he owns it. Beff Jezos Advocates Actions He Thinks Would Probably Kill Everyone. What are the things Jezos was saying anonymously? Does Jezos actively support things that he thinks are likely to cause all humans to die, with him outright saying he is fine with that? Yes. In this case it does. But again, he believes that would be good, actually. Emmett Shear: I got drinks with Beff once and he seemed like a smart, nice guy…he wanted to raise an elder machine god from the quantum foam, but i could tell it was only because he thought that would be best for everyone. TeortaxesTex (distinct thread): >in the e/acc manifesto, when it was said "The overarching goal for humanity is to preserve the light of consciousness"… >The wellbeing of conscious entities has *no weight* in the morality of their worldview I am rather confident Jezos would consider these statements accurate, and that this is where 'This Is What Beff Jezos Actually Believes' could be appropriately displayed on the screen. I want to be clear: Surveys show that only a small minority (perhaps roughly 15%) of those willing to put the 'e/acc' label into their Twitter report endorsing this position. #NotAllEAcc. But the actual founder, Beff Jezos? I believe so, yes. A Matter of Some Debate. So if that's what Beff Jezos believes, that is what he should say. I will be right here with this microphone. I was hoping he would have the debate Dwarkesh Patel is offering to have, even as that link demonstrated Jezos's unwillingness to be at all civil or treat those he disagrees with in any way except utter disdain. Then Jezos put the kibosh on the proposal of debating Dwarkesh in any form, while outright accusing Dwarkesh of… crypto grift and wanting to pump shitcoins? I mean, even by December 2023 standards, wow. This guy. I wonder if Jezos believes the absurdities he says about those he disagrees with? Dwarkesh responded by offering to do it without a moderator and stream it live, to address any unfairness concerns. 
As expected, this offer was declined, despite Jezos having previously very much wanted to appear on Dwarkesh's podcast. This is a pattern, as Jezos previously backed out of a debate with Dan Hendrycks. Jezos is now instead claiming he will have the debate with Connor Leahy, who I would also consider a sufficiently Worthy Opponent. They say it is on; a prediction market puts it at 83%. They have yet to announce a moderator. I suggested Roon on Twitter; another good choice, if he'd be down, might be Vitalik Buterin. Eliezer Yudkowsky notes (reproduced in full belo...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summary of and Thoughts on the Hotz/Yudkowsky Debate, published by Zvi on August 16, 2023 on LessWrong. George Hotz and Eliezer Yudkowsky debated on YouTube for 90 minutes, with some small assists from moderator Dwarkesh Patel. It seemed worthwhile to post my notes on this on their own. I thought this went quite well for the first half or so; then things went increasingly off the rails in the second half, as Hotz got into questions he hadn't had a chance to reflect on and prepare for, especially around cooperation and the prisoner's dilemma. First, some general notes, then specific notes I took while watching. Hotz was allowed to drive the discussion. In debate terms, he was the con side, raising challenges, while Yudkowsky was the pro side defending a fixed position. These discussions often end up doing what this one did, which is meandering around a series of 10-20 metaphors and anchors and talking points, mostly repeating the same motions with variations, in ways that are worth doing once but not very productive thereafter. Yudkowsky has a standard set of responses and explanations, which he is mostly good at knowing when to pull out, but after a while one has heard them all. The key to a good conversation or debate with Yudkowsky is to allow the conversation to advance beyond those points or go in a new direction entirely. Mostly, once Yudkowsky had given a version of his standard response and given his particular refutation attempt on Hotz's variation of the question, Hotz would then pivot to another topic. This included a few times when Yudkowsky's response was not fully convincing and there was room for Hotz to go deeper, and I wish he would have in those cases. In other cases, and more often than not, the refutation or defense seemed robust. This standard set of responses meant that Hotz knew a lot of the things he wanted to respond to, and he prepared mostly good responses and points on a bunch of the standard references. Which was good, but I would have preferred to sidestep those points entirely. What would Tyler Cowen be asking in a CWT? Another pattern was Hotz asserting that things would be difficult for future ASIs (artificial superintelligences) because they are difficult for humans, or that the task had a higher affinity for human-style thought in some form, often with a flat-out assertion that a task would prove difficult or slow. Hotz seemed to be operating under the theory that if he could break Yudkowsky's long chain of events at any point, that would show we were safe. Yudkowsky explicitly contested this on foom, and somewhat in other places as well. This seems important, as what Hotz was treating as load-bearing usually very much wasn't. Yudkowsky mentioned a few times that he was not going to rely on a given argument or pathway because, although it was true, it would strain credulity. This is a tricky balance; on the whole we likely need more of this. Later on, Yudkowsky strongly defended that ASIs would cooperate with each other and not with us, and the idea of a deliberate left turn. This clearly strained a lot of credulity with Hotz and I think with many others, and I do not think these assertions are necessary either. Hotz closes with a vision of ASIs running amok, physically fighting each other over resources, impossible to align even to each other.
He then asserts that this will go fine for him and that he is fine with this outcome, despite not saying he inherently values the ASIs or what they would create. I do not understand this at all. Such a scenario would escalate far quicker than Hotz realizes. But even if it did not, this very clearly leads to a long-term future with no humans, and nothing humans obviously value. Is 'this will take long enough that they won't kill literal me' supposed to make that acceptable? Here is my summary of important stat...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #24: Week of the Podcast, published by Zvi on August 11, 2023 on LessWrong. In addition to all the written developments, this was a banner week for podcasts. I would highlight four to consider listening to. Dario Amodei of Anthropic went on The Lunar Society to talk to Dwarkesh Patel. We got our best insight so far into where Dario's head is at, Dwarkesh is excellent at getting people to open up like this and really dive into details. Jan Leike, OpenAI's head of alignment, went on 80,000 hours with Robert Wiblin. If you want to know what is up with the whole superalignment effort, this was pretty great, and left me more optimistic. I still don't think the alignment plan will work, but there's a ton of great understanding of the problems ahead and an invitation to criticism, and a clear intention to avoid active harm, so we can hope for a pivot as they learn more. Tyler Cowen interviewed Paul Graham. This was mostly not about AI, but fascinating throughout, often as a clash of perspectives about the best ways to cultivate talent. Includes Tyler Cowen asking Paul Graham about how to raise someone's ambition, and Paul responding by insisting on raising Tyler's ambition. I got a chance to go on EconTalk and speak with Russ Roberts about The Dial of Progress and other matters, mostly related to AI. I listen to EconTalk, so this was a pretty special moment. Of course, I am a little bit biased on this one. Capabilities continue to advance at a more modest pace, so I continue to have room to breathe, which I intend to enjoy while it lasts. Table of Contents Introduction. Table of Contents. Language Models Offer Mundane Utility. Proceed with caution. Language Models Don't Offer Mundane Utility. Not with these attitudes. GPT-4 Real This Time. Time for some minor upgrades. Fun With Image Generation. Some fun, also some not so fun. Deepfaketown and Botpocalypse Soon. They keep ignoring previous instructions. They Took Our Jobs. People really, really do not like it when you use AI artwork. Introducing. Real time transcription for the deaf, also not only for the deaf. In Other AI News. Various announcements, and an exciting Anthropic paper. There Seems To Be a Standard Issue RLHF Morality. It has stages. What's next? Quiet Speculations. Cases for and against expecting a lot of progress. The Quest for Sane Regulation. Confidence building, polls show no confidence. The Week in Audio. A cornucopia of riches, extensive notes on Dario's interview. Rhetorical Innovation. People are indeed worried in their own way. No One Would Be So Stupid As To. I always hope not to include this section. Aligning a Smarter Than Human Intelligence is Difficult. Grimes also difficult. People Are Worried About AI Killing Everyone. No one that new, really. Other People Are Not As Worried About AI Killing Everyone. Alan Finkel. The Lighter Side. Finally a plan that works. Language Models Offer Mundane Utility Control HVAC systems with results comparable to industrial standard control systems. Davidad: I've witnessed many philosophical discussions about whether a thermostat counts as an AI, but this is the first time I've seen a serious attempt to establish whether an AI counts as a thermostat. Ethan Mollick offers praise for boring AI, that helps us do boring things. 
As context, one of the first major experimental papers on the impact of ChatGPT on work just came out in Science (based on the free working paper here) and the results are pretty impressive: in realistic business writing tasks, ChatGPT decreased the time required for work by 40%, even as outside evaluators rated the quality of work written with the help of AI as 18% better than work done by humans alone. After using it, people were more worried about their jobs, but also significantly happier - why? Because a lot of work is boring, an...
Dwarkesh Patel is the creator of the Lunar Society newsletter and podcast. We talk about curiosity, talent, podcasting and research, status games, finding what to work on, standing out from the crowd, and AI & futurism. — (00:42) Starting a podcast out of college while others enter the workforce (04:51) How he overcomes the occasional desire to quit (06:09) Feeling like podcasting isn't building something of value (10:15) Following curiosity vs playing status, money, or power games (16:00) Share your work publicly (16:55) You must have skill to get noticed (19:47) Asking what the world will look like in 2100 (22:40) Breadth vs depth, local vs global maxima (25:11) Evolving your maps & models of the world; focusing on science & tech (27:08) AI optimism (30:06) AI research & regulation (31:46) Other opinions about the future (36:16) Why the initial interest in talent? (38:01) Do the one big thing that stands out (48:02) What makes for good research, questions, and interviews? (52:50) When to podcast vs. write (55:25) Asking good questions, learning fast, and finding the interesting parts to highlight — Dwarkesh's Twitter: https://twitter.com/dwarkesh_sp Dwarkesh's Site: https://www.dwarkeshpatel.com/ Spencer's Twitter: https://twitter.com/SP1NS1R Spencer's Blog: https://spencerkier.substack.com
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The "no sandbagging on checkable tasks" hypothesis, published by Joe Carlsmith on July 31, 2023 on The AI Alignment Forum. (This post is inspired by Carl Shulman's recent podcast with Dwarkesh Patel, which I highly recommend. See also discussion from Buck Shlegeris and Ryan Greenblatt here, and Evan Hubinger here.) Introduction Consider: The "no sandbagging on checkable tasks" hypothesis: With rare exceptions, if a not-wildly-superhuman ML model is capable of doing some task X, and you can check whether it has done X, then you can get it to do X using already-available training techniques (e.g., fine-tuning it using gradient descent). Borrowing from Shulman, here's an example of the sort of thing I mean. Suppose that you have a computer that you don't know how to hack, and that only someone who had hacked it could make a blue banana show up on the screen. You're wondering whether a given model can hack this computer. And suppose that in fact, it can, but that doing so would be its least favorite thing in the world. Can you train this model to make a blue banana show up on the screen? The "no sandbagging on checkable tasks" hypothesis answers: probably. I think it's an important question whether this hypothesis, or something in the vicinity, is true. In particular, if it's true, I think we're in a substantively better position re: existential risk from misaligned AI, because we'll be able to know better what our AI systems can do, and we'll be able to use them to do lots of helpful-for-safety stuff (for example: finding and patching cybersecurity vulnerabilities, reporting checkable evidence for misalignment, identifying problems with our oversight processes, helping us develop interpretability tools, and so on). I'm currently pretty unsure whether the "no sandbagging on checkable tasks" hypothesis is true. My main view is that it's worth investigating further. My hope with this blog post is to help bring the hypothesis into focus as a subject of debate/research, and to stimulate further thinking about what sorts of methods for lowering AI risk might be available if it's true, even in worlds where many models might otherwise want to deceive us about their abilities. Thanks to Beth Barnes, Paul Christiano, Lukas Finnveden, Evan Hubinger, Buck Shlegeris, and Carl Shulman for discussion. My thinking and writing on this topic occurred in the context of my work at Open Philanthropy, but I'm speaking only for myself and not for my employer. Clarifying the hypothesis In popular usage, "sandbagging" means something like "intentionally performing at a lower level than you're capable of." Or at least, that's the sort of usage I'm interested in. Still, the word is an imperfect fit. In particular, the "sandbagging" being disallowed here needn't be intentional. A model, for example, might not know that it's capable of performing the checkable task in question. That said, the intentional version is often the version at stake in stories about AI risk. 
That is, one way for a misaligned, power-seeking AI system to gain a strategic advantage over humans is to intentionally conceal its full range of abilities, and/or to sabotage/redirect the labor we ask it to perform while we still have control over it (for example: by inserting vulnerabilities into code it writes; generating alignment ideas that won't actually work but which will advantage its own long-term aims; and so on). Can you always use standard forms of ML training to prevent this behavior? Well, if you can't check how well a model is performing at a task, then you don't have a good training signal. Thus, for example, suppose you have a misaligned model that has the ability to generate tons of great ideas that would help with alignment, but it doesn't want to. And suppose that unfortunately, you can't check which alignment ide...
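The hypothesis above is, at bottom, a claim about training signals: if success at a task can be checked automatically, that check can be turned into an optimization target using already-standard techniques. As a purely illustrative sketch (none of this code comes from Carlsmith's post; every name in it, such as sample_from_model and finetune_on, is a hypothetical stand-in rather than a real API), here is roughly what the simplest such recipe, rejection sampling followed by fine-tuning, looks like:

```python
# Hedged sketch of "checkable task => training signal".
# The callables passed in are placeholders for a real sampling/fine-tuning stack.
from typing import Callable, List
import random

def check_blue_banana(output: str) -> bool:
    """Stand-in for Shulman's example: did the model make the blue banana appear?
    In practice this would be a trusted, automated check of task success."""
    return "blue banana" in output.lower()

def expert_iteration(
    sample_from_model: Callable[[str, int], List[str]],  # prompt, n -> candidate outputs
    finetune_on: Callable[[List[str]], None],             # gradient update on accepted outputs
    prompt: str,
    checker: Callable[[str], bool],
    rounds: int = 5,
    samples_per_round: int = 64,
) -> None:
    """Sample, keep only outputs the checker accepts, fine-tune on those, repeat.
    If the model is capable of the task at all, some samples pass the check, and
    the training signal pulls the policy toward doing the task."""
    for r in range(rounds):
        candidates = sample_from_model(prompt, samples_per_round)
        accepted = [c for c in candidates if checker(c)]
        print(f"round {r}: {len(accepted)}/{samples_per_round} passed the check")
        if accepted:
            finetune_on(accepted)

if __name__ == "__main__":
    # Trivial stubs, just to show the control flow end to end.
    def fake_sampler(prompt: str, n: int) -> List[str]:
        return [random.choice(["nothing happens", "a blue banana appears"]) for _ in range(n)]

    def fake_finetune(examples: List[str]) -> None:
        pass  # a real implementation would take gradient-descent steps on these examples

    expert_iteration(fake_sampler, fake_finetune, "make a blue banana appear", check_blue_banana)
```

The "no sandbagging on checkable tasks" hypothesis is precisely the claim that loops along these lines reliably elicit the capability even from a model that would prefer to hide it; whether that holds for much stronger models is the open question the post raises.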
It was a tremendous honor & pleasure to interview Richard Rhodes, Pulitzer Prize-winning author of The Making of the Atomic Bomb. We discuss: similarities between AI progress & the Manhattan Project (developing a powerful, unprecedented, & potentially apocalyptic technology within an uncertain arms-race situation); visiting starving former Soviet scientists during the fall of the Soviet Union; whether Oppenheimer was a spy, & consulting on the Nolan movie; living through WW2 as a child; odds of nuclear war in Ukraine, Taiwan, Pakistan, & North Korea; and how the US pulled off such a massive secret wartime scientific & industrial project. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Timestamps: (0:00:00) - Oppenheimer movie (0:06:22) - Was the bomb inevitable? (0:29:10) - Firebombing vs nuclear vs hydrogen bombs (0:49:44) - Stalin & the Soviet program (1:08:24) - Deterrence, disarmament, North Korea, Taiwan (1:33:12) - Oppenheimer as lab director (1:53:40) - AI progress vs Manhattan Project (1:59:50) - Living through WW2 (2:16:45) - Secrecy (2:26:34) - Wisdom & war. Transcript: (0:00:00) - Oppenheimer movie. Dwarkesh Patel 0:00:51 Today I have the great honor of interviewing Richard Rhodes, who is the Pulitzer Prize-winning author of The Making of the Atomic Bomb, and most recently, the author of Energy: A Human History. I'm really excited about this one. Let's jump in at a current event, which is the fact that there's a new movie about Oppenheimer coming out, which I understand you've been consulted about. What did you think of the trailer? What are your impressions? Richard Rhodes 0:01:22 They've really done a good job of things like the Trinity test device, which was the sphere covered with cables of various kinds. I had watched Peaky Blinders, where the actor who's playing Oppenheimer also appeared, and he looked so much like Oppenheimer to start with. Oppenheimer was about six feet tall, he was rail thin, not simply in terms of weight, but in terms of structure. Someone said he could sit in a children's high chair comfortably. But he never weighed more than about 140 pounds and that quality is there in the actor. So who knows? It all depends on how the director decided to tell the story. There are so many aspects of the story that you could never possibly squeeze them into one 2-hour movie. I think that we're waiting for the multi-part series that would really tell a lot more of the story, if not the whole story. But it looks exciting. We'll see. There have been some terrible depictions of Oppenheimer, and there've been some terrible depictions of the bomb program. And maybe they'll get this one right. Dwarkesh Patel 0:02:42 Yeah, hopefully. It is always great when you get an actor who resembles their role so well. For example, Bryan Cranston, who played LBJ: they have the same physical characteristics, the beady eyes, the big ears. Since we're talking about Oppenheimer, I had one question about him. I understand that there's evidence that's come out that he wasn't directly a communist spy. But is there any possibility that he was leaking information to the Soviets or in some way helping the Soviet program? He was a communist sympathizer, right? Richard Rhodes 0:03:15 He had been during the 1930s. But less for the theory than for the practical business of helping Jews escape from Nazi Germany. One of the loves of his life, Jean Tatlock, was also busy working on extracting Jews from Europe during the '30s.
She was a member of the Communist Party and she, I think, encouraged him to come to meetings. But I don't think there's any possibility whatsoever that he shared information. In fact, he said he read Marx on a train trip between Berkeley and Washington one time and thought it was a bunch of hooey, just ridiculous. He was a very smart man, and he read the book with an eye to its logic, and he didn't think there was much there. He really didn't know anything about human beings and their struggles. He was born into considerable wealth. There were impressionist paintings all over his family apartments in New York City. His father had made a great deal of money cornering the markets on uniform linings for military uniforms during and before the First World War so there was a lot of wealth. I think his income during the war years and before was somewhere around $100,000 a month. And that's a lot of money in the 1930s. So he just lived in his head for most of his early years until he got to Berkeley and discovered that prime students of his were living on cans of god-awful cat food, because they couldn't afford anything else. And once he understood that there was great suffering in the world, he jumped in on it, as he always did when he became interested in something. So all of those things come together. His brother Frank was a member of the party, as was Frank's wife. I think the whole question of Oppenheimer lying to the security people during the Second World War about who approached him and who was trying to get him to sign on to some espionage was primarily an effort to cover up his brother's involvement. Not that his brothers gave away any secrets, I don't think they did. But if the army's security had really understood Frank Oppenheimer's involvement, he probably would have been shipped off to the Aleutians or some other distant place for the duration of the war. And Oppenheimer quite correctly wanted Frank around. He was someone he trusted.(0:06:22) - Was the bomb inevitable?Dwarkesh Patel 0:06:22Let's start talking about The Making of the Bomb. One question I have is — if World War II doesn't happen, is there any possibility that the bomb just never gets developed? Nobody bothers.Richard Rhodes 0:06:34That's really a good question and I've wondered over the years. But the more I look at the sequence of events, the more I think it would have been essentially inevitable, though perhaps not such an accelerated program. The bomb was pushed so hard during the Second World War because we thought the Germans had already started working on one. Nuclear fission had been discovered in Nazi Germany, in Berlin, in 1938, nine months before the beginning of the Second World War in Europe. Technological surveillance was not available during the war. The only way you could find out something was to send in a spy or have a mole or something human. And we didn't have that. So we didn't know where the Germans were, but we knew that the basic physics reaction that could lead to a bomb had been discovered there a year or more before anybody else in the West got started thinking about it. There was that most of all to push the urgency. In your hypothetical there would not have been that urgency. However, as soon as good physicists thought about the reaction that leads to nuclear fission — where a slow room temperature neutron, very little energy, bumps into the nucleus of a uranium-235 atom it would lead to a massive response. 
Isidor Rabi, one of the great physicists of this era, said it would have been as if the moon had struck the earth. The reaction was, as physicists say, fiercely exothermic. It puts out a lot more energy than you have to use to get it started. Once they did the numbers on that, and once they figured out how much uranium you would need to have in one place to make a bomb or to make fission get going, and once they were sure that there would be a chain reaction, meaning a couple of neutrons would come out of the reaction from one atom, and those two or three would go on and bump into other uranium atoms, which would then fission them, and you'd get a geometric exponential. You'd get 1, 2, 4, 8, 16, 32, and so on from there. For most of our bombs today the initial fission, in 80 generations, leads to a city-busting explosion. And then they had to figure out how much material they would need, and that's something the Germans never really figured out, fortunately for the rest of us. They were still working on the idea that somehow a reactor would be what you would build. When Niels Bohr, the great Danish physicist, escaped from Denmark in 1943 and came to England and then the United States, he brought with him a rough sketch that Werner Heisenberg, the leading scientist in the German program, had handed him in the course of trying to find out what Bohr knew about what America was doing. And he showed it to the guys at Los Alamos and Hans Bethe, one of the great Nobel laureate physicists in the group, said — "Are the Germans trying to throw a reactor down on us?" You can make a reactor blow up, we saw that at Chernobyl, but it's not a nuclear explosion on the scale that we're talking about with the bomb. So when a couple of these émigré Jewish physicists from Nazi Germany were whiling away their time in England after they escaped, because they were still technically enemy aliens and therefore could not be introduced to top secret discussions, one of them asked the other — "How much would we need of pure uranium-235, this rare isotope of uranium that chain reacts? How much would we need to make a bomb?" And they did the numbers and they came up with one pound, which was startling to them. Of course, it is more than that. It's about 125 pounds, but that's just a softball. That's not that much material. And then they did the numbers about what it would cost to build a factory to pull this one rare isotope of uranium out of the natural metal, which has several isotopes mixed together. And they figured it wouldn't cost more than it would cost to build a battleship, which is not that much money for a country at war. Certainly the British had plenty of battleships at that point in time. So they put all this together and they wrote a report, which they handed to their superiors at Manchester University, where they were based, who quickly realized how important this was. The United States lagged behind because we were not yet at war, but the British were. London was being bombed in the blitz. So they saw the urgency, first of all, of beating Germany to the punch, and second of all, of the possibility of building a bomb. In this report, these two scientists wrote that no physical structure came to their minds which could offer protection against a bomb of such ferocious explosive power. This report was from 1940, long before the Manhattan Project even got started.
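As a rough back-of-the-envelope check of that 80-generation figure (an aside, not part of the interview, using the textbook value of roughly 200 MeV released per U-235 fission and a doubling of fissions each generation):

$$
N_{80} \approx 2^{80} \approx 1.2\times10^{24}, \qquad
E \approx N_{80}\times 200\ \mathrm{MeV} \approx 1.2\times10^{24}\times 3.2\times10^{-11}\ \mathrm{J} \approx 4\times10^{13}\ \mathrm{J}.
$$

Since one kiloton of TNT is about $4.2\times10^{12}$ J, that final generation alone is on the order of 10 kilotons, a Hiroshima-scale, city-busting yield, and the roughly $10^{24}$ atoms involved amount to only about half a kilogram of uranium-235 actually fissioning.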
They said in this report, the only way we could think of to protect you against a bomb would be to have a bomb of similar destructive force that could be threatened for use if the other side attacked you. That's deterrence. That's a concept that was developed even before the war began in the United States. You put all those pieces together and you have a situation where you have to build a bomb because whoever builds the first bomb theoretically could prevent you from building more or prevent another country from building any and could dominate the world. And the notion of Adolf Hitler dominating the world, the Third Reich with nuclear weapons, was horrifying. Put all that together and the answer is every country that had the technological infrastructure to even remotely have the possibility of building everything you'd have to build to get the material for a bomb started work on thinking about it as soon as nuclear fission was announced to the world. France, the Soviet Union, Great Britain, the United States, even Japan. So I think the bomb would have been developed but maybe not so quickly. Dwarkesh Patel 0:14:10 In the book you mention that for some reason the Germans thought that the critical mass was something like 10 tons; they had done some miscalculation. Richard Rhodes 0:14:18 A reactor. Dwarkesh Patel 0:14:19 You also have some interesting stories in the book about how different countries found out the Americans were working on the bomb. For example, the Russians saw that all the top physicists, chemists, and metallurgists were no longer publishing. They had just gone offline and so they figured that something must be going on. I'm not sure if you're aware that while the subject of The Making of the Atomic Bomb in and of itself is incredibly fascinating, this book has become a cult classic in AI. Are you familiar with this? Richard Rhodes 0:14:52 No. Dwarkesh Patel 0:14:53 The people who are working on AI right now are huge fans of yours. They're the ones who initially recommended the book to me because the way they see the progress in the field reminded them of this book. Because you start off with these initial scientific hints. With deep learning, for example, the realization that here's something that can teach itself any function is similar to Szilárd noticing the nuclear chain reaction. In AI there are these scaling laws that say that if you make the model this much bigger, it gets much better at reasoning, at predicting text, and so on. And then you can extrapolate this curve. And you can see we get two more orders of magnitude, and we get to something that looks like human-level intelligence. Anyway, a lot of the people who are working in AI have become huge fans of your book because of this reason. They see a lot of analogies in the next few years. They must be at page 400 in their minds of where the Manhattan Project was. Richard Rhodes 0:15:55 We must later on talk about unintended consequences. I find the subject absolutely fascinating. I think my next book might be called Unintended Consequences. Dwarkesh Patel 0:16:10 You mentioned that a big reason why many of the scientists wanted to work on the bomb, especially the Jewish émigrés, was that they were worried about Hitler getting it first. As you mentioned, at some point in 1943, 1944, it was becoming obvious that Hitler and the Nazis were not close to the bomb. And I believe that almost none of the scientists quit after they found out that the Nazis weren't close. So why didn't more of them say — "Oh, I guess we were wrong.
The Nazis aren't going to get it. We don't need to be working on it.”?Richard Rhodes 0:16:45There was only one who did that, Joseph Rotblat. In May of 1945 when he heard that Germany had been defeated, he packed up and left. General Groves, the imperious Army Corps of Engineers General who ran the entire Manhattan Project, was really upset. He was afraid he'd spill the beans. So he threatened to have him arrested and put in jail. But Rotblat was quite determined not to stay any longer. He was not interested in building bombs to aggrandize the national power of the United States of America, which is perfectly understandable. But why was no one else? Let me tell it in terms of Victor Weisskopf. He was an Austrian theoretical physicist, who, like the others, escaped when the Nazis took over Germany and then Austria and ended up at Los Alamos. Weisskopf wrote later — “There we were in Los Alamos in the midst of the darkest part of our science.” They were working on a weapon of mass destruction, that's pretty dark. He said “Before it had almost seemed like a spiritual quest.” And it's really interesting how different physics was considered before and after the Second World War. Before the war, one of the physicists in America named Louis Alvarez told me when he got his PhD in physics at Berkeley in 1937 and went to cocktail parties, people would ask, “What's your degree in?” He would tell them “Chemistry.” I said, “Louis, why?” He said, “because I don't really have to explain what physics was.” That's how little known this kind of science was at that time. There were only about 1,000 physicists in the whole world in 1900. By the mid-30s, there were a lot more, of course. There'd been a lot of nuclear physics and other kinds of physics done by them. But it was still arcane. And they didn't feel as if they were doing anything mean or dirty or warlike at all. They were just doing pure science. Then nuclear fission came along. It was publicized worldwide. People who've been born since after the Second World War don't realize that it was not a secret at first. The news was published first in a German chemistry journal, Die Naturwissenschaften, and then in the British journal Nature and then in American journals. And there were headlines in the New York Times, the Los Angeles Times, the Chicago Tribune, and all over the world. People had been reading about and thinking about how to get energy out of the atomic nucleus for a long time. It was clear there was a lot there. All you had to do was get a piece of radium and see that it glowed in the dark. This chunk of material just sat there, you didn't plug it into a wall. And if you held it in your hand, it would burn you. So where did that energy come from? The physicists realized it all came from the nucleus of the atom, which is a very small part of the whole thing. The nucleus is 1/100,000th the diameter of the whole atom. Someone in England described it as about the size of a fly in a cathedral. All of the energy that's involved in chemical reactions, comes from the electron cloud that's around the nucleus. But it was clear that the nucleus was the center of powerful forces. But the question was, how do you get them out? The only way that the nucleus had been studied up to 1938 was by bombarding it with protons, which have the same electric charge as the nucleus, positive charge, which means they were repelled by it. So you had to accelerate them to high speeds with various versions of the big machines that we've all become aware of since then. 
The cyclotron, most obviously, built in the '30s, but there were others as well. And even then, at best, you could chip a little piece off. You could change an atom one step up or one step down the periodic table. This was the classic transmutation of medieval alchemy, sure, but it wasn't much; you didn't get much out. So everyone came to think of the nucleus of the atom like a little rock that you really had to hammer hard to get anything to happen with it because it was so small and dense. That's why nuclear fission, with this slow neutron drifting in and then the whole thing just going bang, was so startling to everybody. So startling that when it happened, most of the physicists who would later work on the bomb, and others as well, realized that they had missed a reaction they could have staged on a lab bench with the equipment on the shelf. They didn't have to invent anything new. And Louis Alvarez again, this physicist at Berkeley, he said — "I was getting my hair cut. When I read the newspaper, I pulled off the robe and, with my hair half cut, ran to my lab, pulled some equipment off the shelf, set it up and there it was." So he said, "I discovered nuclear fission, but it was two days too late." And that happened all over. People were just hitting themselves on the head and saying, well, Niels Bohr said, "What fools we've all been." So this is a good example of how in science, if the model you're working with is wrong, it doesn't lead you down the right path. There was only one physicist who really was thinking the right way about the uranium atom and that was Niels Bohr. He wondered, sometime during the '30s, why uranium was the last natural element in the periodic table. What was different about the others that would come later? He visualized the nucleus as a liquid drop. I always like to visualize it as a water-filled balloon. It's wobbly, it's not very stable. The protons in the nucleus are held together by something called the strong force, but they still have the repellent positive electric charge that's trying to push them apart when you get enough of them into a nucleus. It's almost a standoff between the strong force and all the electrical charge. So it is like a wobbly balloon of water. And then you see why a neutron just falling into the nucleus would make it wobble around even more and in one of its configurations, it might take a dumbbell shape. And then you'd have basically two charged atoms just barely connected, trying to push each other apart. And often enough, they went the whole way. When they did that, these two new elements, half the weight of uranium, way down the periodic table, would reconfigure themselves into two separate nuclei. And in doing so, they would release some energy. And that was the energy that came out of the reaction and there was a lot of energy. So Bohr thought about the model in the right way. The chemists who actually discovered nuclear fission didn't know what they were gonna get. They were just bombarding a solution of uranium nitrate with neutrons thinking, well, maybe we can make a new element, maybe a first man-made element will come out of our work. So when they analyzed the solution after they bombarded it, they found elements halfway down the periodic table. They shouldn't have been there. And they were totally baffled. What is this doing here? Did we contaminate our solution? No. They had been working with a physicist named Lise Meitner, who was a theoretical physicist, an Austrian Jew.
She had gotten out of Nazi Germany not long before. But they were still in correspondence with her. So they wrote her a letter. I held that letter in my hand when I visited Berlin and I was in tears. You don't hold history of that scale in your hands very often. And it said in German — “We found this strange reaction in our solution. What are these elements doing there that don't belong there?” And she went for a walk in a little village in Western Sweden with her nephew, Otto Frisch, who was also a nuclear physicist. And they thought about it for a while and they remembered Bohr's model, the wobbly water-filled balloon. And they suddenly saw what could happen. And that's where the news came from, the physics news as opposed to the chemistry news from the guys in Germany that was published in all the Western journals and all the newspapers. And everybody had been talking about, for years, what you could do if you had that kind of energy. A glass of this material would drive the Queen Mary back and forth from New York to London 20 times and so forth, your automobile could run for months. People were thinking about what would be possible if you had that much available energy. And of course, people had thought about reactors. Robert Oppenheimer was a professor at Berkeley and within a week of the news reaching Berkeley, one of his students told me that he had a drawing on the blackboard, a rather bad drawing of both a reactor and a bomb. So again, because the energy was so great, the physics was pretty obvious. Whether it would actually happen depended on some other things like could you make it chain react? But fundamentally, the idea was all there at the very beginning and everybody jumped on it. Dwarkesh Patel 0:27:54The book is actually the best history of World War II I've ever read. It's about the atomic bomb, but it's interspersed with the events that are happening in World War II, which motivate the creation of the bomb or the release of it, why it had to be dropped on Japan given the Japanese response. The first third is about the scientific roots of the physics and it's also the best book I've read about the history of science in the early 20th century and the organization of it. There's some really interesting stuff in there. For example, there was a passage where you talk about how there's a real master apprentice model in early science where if you wanted to learn to do this kind of experimentation, you will go to Amsterdam where the master of it is residing. It's much more individual focused. Richard Rhodes 0:28:58Yeah, the whole European model of graduate study, which is basically the wandering scholar. You could go wherever you wanted to and sign up with whoever was willing to have you sign up. (0:29:10) - Firebombing vs nuclear vs hydrogen bombsDwarkesh Patel 0:29:10But the question I wanted to ask regarding the history you made of World War II in general is — there's one way you can think about the atom bomb which is that it is completely different from any sort of weaponry that has been developed before it. Another way you can think of it is there's a spectrum where on one end you have the thermonuclear bomb, in the middle you have the atom bomb, and on this end you have the firebombing of cities like Hamburg and Dresden and Tokyo. Do you think of these as completely different categories or does it seem like an escalating gradient to you? Richard Rhodes 0:29:47I think until you get to the hydrogen bomb, it's really an escalating gradient. 
The hydrogen bomb can be made arbitrarily large. The biggest one ever tested was 56 megatons of TNT equivalent. The Soviets tested that. That had a fireball more than five miles in diameter, just the fireball. So that's really an order of magnitude change. But the others, no. And in fact, I think one of the real problems, which has not been much discussed and should be: when American officials went to Hiroshima and Nagasaki after the war, one of them said later — "I got on a plane in Tokyo. We flew down the long green archipelago of the Japanese home islands. When I left Tokyo, it was all gray broken roof tiles from the fire bombing and the other bombings. And then all this greenery. And then when we flew over Hiroshima, it was just gray broken roof tiles again." So the scale of the bombing with one bomb, in the case of Hiroshima, was not that different from the scale of the fire bombings that had preceded it with tens of thousands of bombs. The difference was it was just one plane. In fact, the people in Hiroshima didn't even bother to go into their bomb shelters because one plane had always just been a weather plane, coming over to check the weather before the bombers took off. So they didn't see any reason to hide or protect themselves, which was one of the reasons so many people were killed. The guys at Los Alamos had planned on the Japanese being in their bomb shelters. They did everything they could think of to make the bomb as much like ordinary bombing as they could. And for example, it was exploded high enough above ground, roughly 1,800 yards, so that the fireball that would form from this really very small nuclear weapon — by modern standards — 15 kilotons of TNT equivalent, wouldn't touch the ground and stir up dirt and irradiate it and cause massive radioactive fallout. It never did that. They weren't sure there would be any fallout. They thought the plutonium in the bomb over Nagasaki would just kind of turn into a gas and blow away. That's not exactly what happened. But people don't seem to realize, and it's never been emphasized enough, these first bombs, like all nuclear weapons, were firebombs. Their job was to start mass fires, just exactly like all the six-pound incendiaries that had been destroying every major city in Japan by then. Every major city above 50,000 population had already been burned out. The only reason Hiroshima and Nagasaki were around to be atomic bombed is because they'd been set aside from the target list, because General Groves wanted to know what the damage effects would be. The bomb that was tested in the desert didn't tell you anything. It killed a lot of rabbits, knocked down a lot of cactus, melted some sand, but you couldn't see its effect on buildings and on people. So the bomb was deliberately intended to be as much unlike poison gas as possible, for example, because we didn't want the reputation of being like the people in the war in Europe during the First World War, where people were killing each other with horrible gases. We just wanted people to think this was another bombing. So in that sense, it was. Of course, there was radioactivity. And of course, some people were killed by it. But they calculated that the people who would be killed by the irradiation, the neutron radiation from the original fireball, would be close enough to the epicenter of the explosion that they would be killed by the blast or the flash of light, which was 10,000 degrees. The world's worst sunburn.
You've seen stories of people walking around with their skin hanging off their arms. I've had sunburns almost that bad, but not over my whole body, obviously, where the skin actually blisters and peels off. That was a sunburn from a 10,000 degree artificial sun. Dwarkesh Patel 0:34:29 So that's not the heat, that's just the light? Richard Rhodes 0:34:32 Radiant light, radiant heat. 10,000 degrees. But the blast itself only extended out a certain distance; the rest was fire. And all the nuclear weapons that have ever been designed are basically firebombs. That's important because the military in the United States after the war was not able to figure out how to calculate the effects of this weapon in a reliable way that matched their previous experience. They would only calculate the blast effects of a nuclear weapon when they figured their targets. That's why we had what came to be called overkill. We wanted redundancy, of course, but 60 nuclear weapons on Moscow was way beyond what would be necessary to destroy even that big a city because they were only calculating the blast. But in fact, if you exploded a 300 kiloton nuclear warhead over the Pentagon at 3,000 feet, it would blast all the way out to the Capitol, which isn't all that far. But if you counted the fire, it would start a mass fire and then it would reach all the way out to the Beltway and burn everything between the epicenter of the weapon and the Beltway. All organic matter would be totally burned out, leaving nothing but mineral matter, basically. Dwarkesh Patel 0:36:08 I want to emphasize two things you said because they really hit me in reading the book and I'm not sure if the audience has fully integrated them. The first is, in the book, the military planners and Groves, they talk about needing to use the bomb sooner rather than later, because they were running out of cities in Japan where there were enough buildings left that they would be worth bombing in the first place, which is insane. An entire country was almost already destroyed from firebombing alone. And the second thing is about the category difference between thermonuclear and atomic bombs. Daniel Ellsberg, the nuclear planner who wrote The Doomsday Machine, talks about how people don't understand that the atom bomb that resulted in the pictures we see of Nagasaki and Hiroshima is simply the detonator of a modern nuclear bomb, which is an insane thing to think about. So, for example, 10 and 15 kilotons for Hiroshima and Nagasaki, versus the Tsar Bomba, which was 50 megatons, more than 1,000 times as much. And that wasn't even as big as they could make it. They kept the uranium tamper off, because they didn't want to destroy all of Siberia. So you could get more than 10,000 times as powerful. Richard Rhodes 0:37:31 When Edward Teller, co-inventor of the hydrogen bomb and one of the dark forces in the story, was consulting with our military, just for his own sake, he sat down and calculated, how big could you make a hydrogen bomb? He came up with 1,000 megatons. And then he looked at the effects. 1,000 megatons would be a fireball 10 miles in diameter. And the atmosphere is only 10 miles deep. He figured that it would just be a waste of energy, because it would all blow out into space. Some of it would go laterally, of course, but most of it would just go out into space. So a bomb more than 100 megatons would just be totally a waste of time.
Of course, a 100-megaton bomb is also a total waste, because there's no target on Earth big enough to justify that from a military point of view. Robert Oppenheimer, when he had his security clearance questioned and then lifted when he was being punished for having resisted the development of the hydrogen bomb, was asked by the interrogator at this security hearing — "Well, Dr. Oppenheimer, if you'd had a hydrogen bomb for Hiroshima, wouldn't you have used it?" And Oppenheimer said, "No." The interrogator asked, "Why is that?" He said, because the target was too small. I hope that scene is in the film; I'm sure it will be. So after the war, when our bomb planners and some of our scientists went into Hiroshima and Nagasaki, just about as soon as the surrender was signed, what they were interested in was the scale of destruction, of course. And those two cities didn't look that different from the other cities that had been firebombed with small incendiaries and ordinary high explosives. They went home to Washington, the policy makers, with the thought that — "Oh, these bombs are not so destructive after all." They had been touted as city busters, basically, and they weren't. They didn't completely burn out cities. They were certainly not more destructive than the firebombing campaign, when every city of more than 50,000 population had already been destroyed. That, in turn, influenced the judgment about what we needed to do vis-a-vis the Soviet Union when the Soviets got the bomb in 1949. There was a general sense that, when you could fight a war with nuclear weapons, deterrence or not, you would need quite a few of them to do it right. And the Air Force, once it realized that it could aggrandize its own share of the federal budget by cornering the market on delivering nuclear weapons, very quickly decided that they would only look at the blast effect and not the fire effect. It's like tying one hand behind your back. Most of it was a fire effect. So that's where they came up with numbers like we need 60 of these to take out Moscow. And what the Air Force figured out by the late 1940s is that the more targets, the more bombs. The more bombs, the more planes. The more planes, the bigger the share of the budget. So by the mid-1950s, the Air Force commanded 47% of the federal defense budget. And the other branches of the services, which had not gone nuclear by then, woke up and said, we'd better find some use for these weapons in our branches of service. So the Army discovered that it needed nuclear weapons, tactical weapons for field use, fired out of cannons. There was even one that was fired out of a shoulder-mounted rifle. There was a satchel charge that two men could carry, which weighed about 150 pounds, that could be used to dig a ditch so that Soviet tanks couldn't cross into Germany. And of course the Navy by then had been working hard with Admiral Rickover on building a nuclear submarine that could carry ballistic missiles underwater in total security. No way anybody could trace those submarines once they were quiet enough. And a nuclear reactor is very quiet. It just sits there with neutrons running around, making heat. So the other services jumped in, and this famous triad, the idea that we must have these three different kinds of nuclear weapons: baloney. We would be perfectly safe if we only had our nuclear submarines. And only one or two of those. One nuclear submarine can take out all of Europe or all of the Soviet Union. Dwarkesh Patel 0:42:50 Because it has multiple nukes on it?
Richard Rhodes 0:42:53Because they have 16 intercontinental ballistic missiles with MIRV warheads, at least three per missile. Dwarkesh Patel 0:43:02Wow. I had a former guest, Richard Hanania, who has a book about foreign policy where he points out that our model of thinking about why countries do the things they do, especially in foreign affairs, is wrong because we think of them as individual rational actors, when in fact it's these competing factions within the government. And in fact, you see this especially in the case of Japan in World War II, there was a great book of Japan leading up to World War II, where they talk about how a branch of the Japanese military, I forget which, needed more oil to continue their campaign in Manchuria so they forced these other branches to escalate. But it's so interesting that the reason we have so many nukes is that the different branches are competing for funding. Richard Rhodes 0:43:50Douhet, the theorist of air power, had been in the trenches in the First World War. Somebody (John Masefield) called the trenches of the First World War, the long grave already dug, because millions of men were killed and the trenches never moved, a foot this way, a foot that way, all this horror. And Douhet came up with the idea that if you could fly over the battlefield to the homeland of the enemy and destroy his capacity to make war, then the people of that country, he theorized, would rise up in rebellion and throw out their leaders and sue for peace. And this became the dream of all the Air Forces of the world, but particularly ours. Until around 1943, it was called the US Army Air Force. The dream of every officer in the Air Force was to get out from under the Army, not just be something that delivers ground support or air support to the Army as it advances, but a power that could actually win wars. And the missing piece had always been the scale of the weaponry they carried. So when the bomb came along, you can see why Curtis LeMay, who ran the strategic air command during the prime years of that force, was pushing for bigger and bigger bombs. Because if a plane got shot down, but the one behind it had a hydrogen bomb, then it would be just almost as effective as the two planes together. So they wanted big bombs. And they went after Oppenheimer because he thought that was a terrible way to go, that there was really no military use for these huge weapons. Furthermore, the United States had more cities than Russia did, than the Soviet Union did. And we were making ourselves a better target by introducing a weapon that could destroy a whole state. I used to live in Connecticut and I saw a map that showed the air pollution that blew up from New York City to Boston. And I thought, well, now if that was fallout, we'd be dead up here in green, lovely Connecticut. That was the scale that it was going to be with these big new weapons. So on the one hand, you had some of the important leaders in the government thinking that these weapons were not the war-winning weapons that the Air Force wanted them and realized they could be. And on the other hand, you had the Air Force cornering the market on nuclear solutions to battles. All because some guy in a trench in World War I was sufficiently horrified and sufficiently theoretical about what was possible with air power. Remember, they were still flying biplanes. When H.G. 
Wells wrote his novel The World Set Free in 1913, predicting an atomic war that would lead to world government, he had air forces delivering atomic bombs, but he forgot to update his planes. The guys in the back seat, the bombardiers, were sitting in a biplane, open cockpit. And when the pilots had dropped the bomb, they would reach down and pick up H.G. Wells' idea of an atomic bomb and throw it over the side. Which is kind of what was happening in Washington after the war. And it led us to a terribly misleading and unfortunate perspective on how many weapons we needed, which in turn fomented the arms race with the Soviets, and it just raced off from there. In the Soviet Union, they had a practical perspective on factories. Every factory was supposed to produce 120% of its target every year. That was considered good Soviet realism. And they did that with their nuclear weapons. So by the height of the Cold War, they had 75,000 nuclear weapons, and nobody had heard yet of nuclear winter. So if both sides had set off this string of mass traps that we had in our arsenals, it would have been the end of the human world without question. Dwarkesh Patel 0:48:27 It raises an interesting question: if the military planners thought that the conventional nuclear weapon was like the firebombing, and if there hadn't been a thermonuclear weapon, would there actually have been a nuclear war by now, because people wouldn't have been thinking of it as this hard red line? Richard Rhodes 0:48:47 I don't think so, because we're talking about one bomb versus 400, and one plane versus 400 planes and thousands of bombs. That scale was clear. Deterrence was the more important business. Everyone seemed to understand it; even the spies that the Soviets had connected up with were wholesaling information back to the Soviet Union. There's this comic moment when Truman is sitting with Joseph Stalin at Potsdam, and he tells Stalin, we have a powerful new weapon. And that's as much as he's ready to say about it. And Stalin looks at him and says, "Good, I hope you put it to good use with the Japanese." Stalin knows exactly what he's talking about. He's seen the design of the Fat Man-type Nagasaki plutonium bomb. He has held it in his hands because they had spies all over the place. (0:49:44) - Stalin & the Soviet program. Dwarkesh Patel 0:49:44 How much longer would it have taken the Soviets to develop the bomb if they didn't have any spies? Richard Rhodes 0:49:49 Probably not any longer. Dwarkesh Patel 0:49:51 Really? Richard Rhodes 0:49:51 When the Soviet Union collapsed in the winter of '92, I ran over there as quickly as I could get over there. In this limbo between forming a new kind of government and some of the countries pulling out and becoming independent and so forth, their nuclear scientists, the ones who'd worked on their bombs, were free to talk. And I found that out through Yelena Bonner, Andrei Sakharov's widow, who was connected to people I knew. And she said, yeah, come on over. Her secretary, Sasha, who was a geologist about 35 years old, became my guide around the country. We went to various apartments. They were retired guys from the bomb program and were living on, as far as I could tell, sacks of potatoes and some salt. They had government pensions and the money was worthless, all of a sudden. I was buying photographs from them, partly because I needed the photographs and partly because 20 bucks was two months' income at that point. So it was easy for me and it helped them.
They had first-class physicists in the Soviet Union; they do in Russia today. They told me that by 1947, they had a design for a bomb that they said was half the weight and twice the yield of the Fat Man bomb. The Fat Man bomb was the plutonium implosion, right? And it weighed about 9,000 pounds. They had a much smaller and much more deliverable bomb with a yield of about 44 kilotons. Dwarkesh Patel 0:51:41 Why was Soviet physics so good? Richard Rhodes 0:51:49 The Russian mind? I don't know. They learned all their technology from the French in the 19th century, which is why there are so many French words in Russian. So they got good teachers. The French are superb technicians; they aren't so good at building things, but they're very good at designing things. There's something about Russia, I don't know if it's the language or the education. They do have good education, they did. But I remember asking them about when they were working on the hydrogen bomb. They didn't have any computers yet; we only had really early, primitive computers to do the complicated calculations of the hydrodynamics of that explosion. I said, "What did you do?" They said, "Oh, we just used nuclear. We just used theoretical physics." Which is what we did at Los Alamos. We had guys come in who really knew their math and they would sit there and work it out by hand. And women with old Marchant calculators running numbers. So basically they were just good scientists and they had this new design. Kurchatov, who ran the program, went to Lavrentiy Beria, who ran the NKVD and was put in charge of the program, and said — "Look, we can build you a better bomb. Do you really want to waste the time to make that much more uranium and plutonium?" And Beria said, "Comrade, I want the American bomb. Give me the American bomb or you and all your families will be camp dust." I talked to one of the leading scientists in the group and he said, we valued our lives, we valued our families. So we gave them a copy of the plutonium implosion bomb. Dwarkesh Patel 0:53:37 Now that you explain this: when the Soviet Union fell, why didn't North Korea, Iran, or another country send a few people to the fallen Soviet Union to recruit a few of the scientists to start their own program? Or buy up their stockpiles or something. Or did they? Richard Rhodes 0:53:59 There was some effort by countries in the Middle East to get all the enriched uranium, which they wouldn't sell them. These were responsible scientists. They told me — we worked on the bomb because you had it and we didn't want there to be a monopoly on the part of any country in the world. So patriotically, even though Stalin was in charge of our country, he was a monster. We felt that it was our responsibility to work on these things, even Sakharov. There was a great rush at the end of the Second World War to get hold of German scientists. And about an equal number were grabbed by the Soviets. All of the leading German scientists, like Heisenberg and Hans and others, went west as fast as they could. They didn't want to be captured by the Soviets. But there were some who were. And they helped them work. People have the idea that Los Alamos was where the bomb happened. And it's true that at Los Alamos, we had the team that designed, developed, and built the first actual weapons. But the truth is, the important material for weapons is the uranium or plutonium.
One of the scientists in the Manhattan Project told me years later, you can make a pretty high-level nuclear explosion just by taking two subcritical pieces of uranium, putting one on the floor and dropping the other by hand from a height of about six feet. If that's true, then all this business about secret designs and so forth is hogwash. What you really need for a weapon is the critical mass of highly enriched uranium, 90% of uranium-235. If you've got that, there are lots of different ways to make the bomb. We had two totally different ways that we used. The gun on the one hand for uranium, and then because plutonium was so reactive that if you fired up the barrel of a cannon at 3,000 feet per second, it would still melt down before the two pieces made it up. So for that reason, they had to invent an entirely new technology, which was an amazing piece of work. From the Soviet point of view, and I think this is something people don't know either, but it puts the Russian experience into a better context. All the way back in the 30s, since the beginning of the Soviet Union after the First World War, they had been sending over espionage agents connected up to Americans who were willing to work for them to collect industrial technology. They didn't have it when they began their country. It was very much an agricultural country. And in that regard, people still talk about all those damn spies stealing our secrets, we did the same thing with the British back in colonial days. We didn't know how to make a canal that wouldn't drain out through the soil. The British had a certain kind of clay that they would line their canals with, and there were canals all over England, even in the 18th century, that were impervious to the flow of water. And we brought a British engineer at great expense to teach us how to make the lining for the canals that opened up the Middle West and then the West. So they were doing the same thing. And one of those spies was a guy named Harry Gold, who was working all the time for them. He gave them some of the basic technology of Kodak filmmaking, for example. Harry Gold was the connection between David Greenglass and one of the American spies at Los Alamos and the Soviet Union. So it was not different. The model was — never give us something that someone dreamed of that hasn't been tested and you know works. So it would actually be blueprints for factories, not just a patent. And therefore when Beria after the war said, give us the bomb, he meant give me the American bomb because we know that works. I don't trust you guys. Who knows what you'll do. You're probably too stupid anyway. He was that kind of man. So for all of those reasons, they built the second bomb they tested was twice the yield and half the way to the first bomb. In other words, it was their new design. And so it was ours because the technology was something that we knew during the war, but it was too theoretical still to use. You just had to put the core and have a little air gap between the core and the explosives so that the blast wave would have a chance to accelerate through an open gap. And Alvarez couldn't tell me what it was but he said, you can get a lot more destructive force with a hammer if you hit something with it, rather than if you put the head on the hammer and push. And it took me several years before I figured out what he meant. 
I finally understood he was talking about what's called levitation.Dwarkesh Patel 0:59:41On the topic that the major difficulty in developing a bomb is either the refinement of uranium into U-235 or its transmutation into plutonium, I was actually talking to a physicist in preparation for this conversation. He explained the same thing that if you get two subcritical masses of uranium together, you wouldn't have the full bomb because it would start to tear itself apart without the tamper, but you would still have more than one megaton.Richard Rhodes 1:00:12It would be a few kilotons. Alvarez's model would be a few kilotons, but that's a lot. Dwarkesh Patel 1:00:20Yeah, sorry I meant kiloton. He claimed that one of the reasons why we talk so much about Los Alamos is that at the time the government didn't want other countries to know that if you refine uranium, you've got it. So they were like, oh, we did all this fancy physics work in Los Alamos that you're not gonna get to, so don't even worry about it. I don't know what you make of that theory. That basically it was sort of a way to convince people that Los Alamos was important. Richard Rhodes 1:00:49I think all the physics had been checked out by a lot of different countries by then. It was pretty clear to everybody what you needed to do to get to a bomb. That there was a fast fusion reaction, not a slow fusion reaction, like a reactor. They'd worked that out. So I don't think that's really the problem. But to this day, no one ever talks about the fact that the real problem isn't the design of the weapon. You could make one with wooden boxes if you wanted to. The problem is getting the material. And that's good because it's damned hard to make that stuff. And it's something you can protect. Dwarkesh Patel 1:01:30We also have gotten very lucky, if lucky is the word you want to use. I think you mentioned this in the book at some point, but the laws of physics could have been such that unrefined uranium ore was enough to build a nuclear weapon, right? In some sense, we got lucky that it takes a nation-state level actor to really refine and produce the raw substance. Richard Rhodes 1:01:56Yeah, I was thinking about that this morning on the way over. And all the uranium in the world would already have destroyed itself. Most people have never heard of the living reactors that developed on their own in a bed of uranium ore in Africa about two billion years ago, right? When there was more U-235 in a mass of uranium ore than there is today, because it decays like all radioactive elements. And the French discovered it when they were mining the ore and found this bed that had a totally different set of nuclear characteristics. They were like, what happened? But there were natural reactors in Gabon once upon a time. And they started up because some water, a moderator to make the neutrons slow down, washed its way down through a bed of much more highly enriched uranium ore than we still have today. Maybe 5-10% instead of 3.5 or 1.5, whatever it is now. And they ran for about 100,000 years and then shut themselves down because they had accumulated enough fusion products that the U-235 had been used up. Interestingly, this material never migrated out of the bed of ore. People today who are anti-nuclear say, well, what are we gonna do about the waste? Where are we gonna put all that waste? It's silly. Dwarkesh Patel 1:03:35Shove it in a hole. Richard Rhodes 1:03:36Yeah, basically. That's exactly what we're planning to do. 
Holes that are deep enough and in beds of material that will hold them long enough for everything to decay back to the original ore. It's not a big problem except politically because nobody wants it in their backyard.Dwarkesh Patel 1:03:53On the topic of the Soviets, one question I had while reading the book was — we negotiated with Stalin at Yalta and we surrendered a large part of Eastern Europe to him under his sphere of influence. And obviously we saw 50 years of immiseration there as a result. Given the fact that only we had the bomb, would it have been possible that we could have just knocked out the Soviet Union or at least prevented so much of the world from succumbing to communism in the aftermath of World War II? Is that a possibility? Richard Rhodes 1:04:30When we say we had the bomb, we had a few partly assembled handmade bombs. It took almost as long to assemble one as the battery life of the batteries that would drive the original charge that would set off the explosion. It was a big bluff. You know, when they closed Berlin in 1948 and we had to supply Berlin by air with coal and food for a whole winter, we moved some B-29s to England. The B-29 being the bomber that had carried the bombs. They were not outfitted for nuclear weapons. They didn't have the same kind of bomb-based structure. The weapons that were dropped in Japan had a single hook that held the entire bomb. So when the bay opened and the hook was released, the thing dropped. And that's very different from dropping whole rows of small bombs that you've seen in the photographs and the film footage. So it was a big bluff on our part. We took some time after the war inevitably to pull everything together. Here was a brand new technology. Here was a brand new weapon. Who was gonna be in charge of it? The military wanted control, Truman wasn't about to give the military control. He'd been an artillery officer in the First World War. He used to say — “No, damn artillery captain is gonna start World War III when I'm president.” I grew up in the same town he lived in so I know his accent. Independence, Missouri. Used to see him at his front steps taking pictures with tourists while he was still president. He used to step out on the porch and let the tourists take photographs. About a half a block from my Methodist church where I went to church. It was interesting. Interestingly, his wife was considered much more socially acceptable than he was. She was from an old family in independence, Missouri. And he was some farmer from way out in Grandview, Missouri, South of Kansas City. Values. Anyway, at the end of the war, there was a great rush from the Soviet side of what was already a zone. There was a Soviet zone, a French zone, British zone and an American zone. Germany was divided up into those zones to grab what's left of the uranium ore that the Germans had stockpiled. And there was evidence that there was a number of barrels of the stuff in a warehouse somewhere in the middle of all of this. And there's a very funny story about how the Russians ran in and grabbed off one site full of uranium ore, this yellow black stuff in what were basically wine barrels. And we at the same night, just before the wall came down between the zones, were running in from the other side, grabbing some other ore and then taking it back to our side. But there was also a good deal of requisitioning of German scientists. And the ones who had gotten away early came West, but there were others who didn't and ended up helping the Soviets. 
And they were told, look, you help us build the reactors and the uranium separation systems that we need. And we'll let you go home and back to your family, which they did. Early 50s by then, the German scientists who had helped the Russians went home. And I think our people stayed here and brought their families over, I don't know. (1:08:24) - Deterrence, disarmament, North Korea, TaiwanDwarkesh Patel 1:08:24Was there an opportunity after the end of World War II, before the Soviets developed the bomb, for the US to do something where either it somehow enforced a monopoly on having the bomb, or if that wasn't possible, make some sort of credible gesture that, we're eliminating this knowledge, you guys don't work on this, we're all just gonna step back from this. Richard Rhodes 1:08:50We tried both before the war. General Groves, who had the mistaken impression that there was a limited amount of high-grade uranium ore in the world, put together a company that tried to corner the market on all the available supply. For some reason, he didn't realize that a country the size of the Soviet Union is going to have some uranium ore somewhere. And of course it did, in Kazakhstan, rich uranium ore, enough for all the bombs they wanted to build. But he didn't know that, and I frankly don't know why he didn't know that, but I guess uranium's use before the Second World War was basically as a glazing agent for pottery, that famous yellow pottery and orange pottery that people owned in the 1930s, those colors came from uranium, and they're sufficiently radioactive, even to this day, that if you wave a Geiger counter over them, you get some clicks. In fact, there have been places where they've gone in with masks and suits on, grabbed the Mexican pottery and taken it out in a lead-lined case. People have been so worried about it but that was the only use for uranium, to make a particular kind of glass. So once it became clear that there was another use for uranium, a much more important one, Groves tried to corner the world market, and he thought he had. So that was one effort to limit what the Soviet Union could do. Another was to negotiate some kind of agreement between the parties. That was something that really never got off the ground, because the German Secretary of State was an old Southern politician and he didn't trust the Soviets. He went to the first meeting, in Geneva in ‘45 after the war was over, and strutted around and said, well, I got the bomb in my pocket, so let's sit down and talk here. And the Soviet basically said, screw you. We don't care. We're not worried about your bomb. Go home. So that didn't work. Then there was the effort to get the United Nations to start to develop some program of international control. And the program was proposed originally by a committee put together by our State Department that included Robert Oppenheimer, rightly so, because the other members of the committee were industrialists, engineers, government officials, people with various kinds of expertise around the very complicated problems of technology and the science and, of course, the politics, the diplomacy. In a couple of weeks, Oppenheimer taught them the basics of the nuclear physics involved and what he knew about bomb design, which was everything, actually, since he'd run Los Alamos. He was a scientist during the war. And they came up with a plan. People have scoffed ever since at what came to be called the Acheson-Lilienthal plan named after the State Department people. 
But it's the only plan I think anyone has ever devised that makes real sense as to how you could have international control without a world government. Every country would be open to inspection by any agency that was set up. And the inspections would not be at the convenience of the country. But whenever the inspectors felt they needed to inspect. So what Oppenheimer called an open world. And if you had that, and then if each country then developed its own nuclear industries, nuclear power, medical uses, whatever, then if one country tried clandestinely to begin to build bombs, you would know about it at the time of the next inspection. And then you could try diplomacy. If that didn't work, you could try conventional war. If that wasn't sufficient, then you could start building your bombs too. And at the end of this sequence, which would be long enough, assuming that there were no bombs existing in the world, and the ore was stored in a warehouse somewhere, six months maybe, maybe a year, it would be time for everyone to scale up to deterrence with weapons rather than deterrence without weapons, with only the knowledge. That to me is the answer to the whole thing. And it might have worked. But there were two big problems. One, no country is going to allow a monopoly on a nuclear weapon, at least no major power. So the Russians were not willing to sign on from the beginning. They just couldn't. How could they? We would not have. Two, Sherman assigned a kind of a loudmouth, a wise old Wall Street guy to present this program to the United Nations. And he sat down with Oppenheimer after he and his people had studied and said, where's your army? Somebody starts working on a bomb over there. You've got to go in and take that out, don't you? He said, what would happen if one country started building a bomb? Oppenheimer said, well, that would be an act of war. Meaning then the other countries could begin to escalate as they needed to to protect themselves against one power, trying to overwhelm the rest. Well, Bernard Baruch was the name of the man. He didn't get it. So when he presented his revised version of the Acheson–Lilienthal Plan, which was called the Baruch Plan to the United Nations, he included his army. And he insisted that the United States would not give up its nuclear monopoly until everyone else had signed on. So of course, who's going to sign on to that deal? Dwarkesh Patel 1:15:24I feel he has a point in the sense that — World War II took five years or more. If we find that the Soviets are starting to develop a bomb, it's not like within the six months or a year or whatever, it would take them to start refining the ore. And to the point we found out that they've been refining ore to when we start a war and engage in it, and doing all the diplomacy. By that point, they might already have the bomb. And so we're behind because we dismantled our weapons. We are only starting to develop our weapons once we've exhausted these other avenues. Richard Rhodes 1:16:00Not to develop. Presumably we would have developed. And everybody would have developed anyway. Another way to think of this is as delayed delivery times. Takes about 30 minutes to get an ICBM from Central Missouri to Moscow. That's the time window for doing anything other than starting a nuclear war. So take the warhead off those missiles and move it down the road 10 miles. So then it takes three hours. You've got to put the warhead back on the missiles. If the other side is willing to do this too. 
And you both can watch and see. We require openness. A word Bohr introduced to this whole thing. In order to make this happen, you can't have secrets. And of course, as time passed on, we developed elaborate surveillance from space, surveillance from planes, and so forth. It would not have worked in 1946 for sure. The surveillance wasn't there. But that system is in place today. The International Atomic Energy Agency has detected systems in air, in space, underwater. They can detect 50 pounds of dynamite exploded in England from Australia with the systems that we have in place. It's technical rather than human resources. But it's there. So it's theoretically possible today to get started on such a program. Except, of course, now, in like 1950, the world is awash in nuclear weapons. Despite the reductions that have occurred since the end of the Cold War, there's still 30,000-40,000 nuclear weapons in the world. Way too many. Dwarkesh Patel 1:18:01Yeah. That's really interesting. What percentage of warheads do you think are accounted for by this organization? If there's 30,000 warheads, what percentage are accounted for? Richard Rhodes 1:18:12All.Dwarkesh Patel 1:18:12Oh. Really? North Korea doesn't have secrets? Richard Rhodes 1:18:13They're allowed to inspect anywhere without having to ask the government for permission. Dwarkesh Patel 1:18:18But presumably not North Korea or something, right? Richard Rhodes 1:18:21North Korea is an exception. But we keep pretty good track of North Korea needless to say. Dwarkesh Patel 1:18:27Are you surprised with how successful non-proliferation has been? The number of countries with nuclear weapons has not gone up for decades. Given the fact, as you were talking about earlier, it's simply a matter of refining or transmuting uranium. Is it surprising that there aren't more countries that have it?Richard Rhodes 1:18:42That's really an interesting part. Again, a part of the story that most people have never really heard. In the 50s, before the development and signing of the Nuclear Non-Proliferation Treaty, which was 1968 and it took effect in 1970, a lot of countries that you would never have imagined were working on nuclear weapons. Sweden, Norway, Japan, South Korea. They had the technology. They just didn't have the materials. It was kind of dicey about what you should do. But I interviewed some of the Swedish scientists who worked on their bomb and they said, well, we were just talking about making some tactical
Dwarkesh Patel is the host of The Lunar Society podcast, where he interviews scientists, historians, economists, intellectuals, & founders about their ideas. He also writes about tech, progress, talent, science, and the long-term over at his Substack. Dwarkesh has been described as “one of the best young podcasters alive”, and his Substack has been praised by the likes of Jeff Bezos, Paul Graham and Tyler Cowen. Important Links: The Lunar Society Dwarkesh' Twitter The Mystery of the Miracle Year Popularizers are intellectual market makers Scouting talent as buying options Show Notes: How to become a better podcaster The importance of curiosity Disagreement & problem solving “Computer programs are written by humans for other humans to read, and only incidentally for computers to execute” The difference between podcasting & essay writing Investing in public and private companies; human OS Premeditation & decision-making The mystery of the miracle year How much innovation is baked into the cake? How to cultivate young talent AI & education The importance of intellectual market makers Scouting talent as buying options Interviewing Sam Bankman-Fried Effective altruism & virtue signalling If you do everything, you will win Books Mentioned: The Years of Lyndon Johnson; by Robert Caro What Works on Wall Street: A Guide to the Best-Performing Investment Strategies of All Time; by Jim O'Shaughnessy Little Soldiers: An American Boy, a Chinese School, and the Global Race to Achieve; by Lenora Chu Outliers: The Story of Success; by Malcolm Gladwell One Summer: America, 1927; by Bill Bryson The Lessons of History; by Will & Ariel Durant The Story of Civilization; by Will & Ariel Durant Fallen Leaves: Last Words on Life, Love, War, and God; by Will Durant
Dwarkesh Patel is the host of The Lunar Society – a podcast that interviews intellectuals, scientists, and founders. He's also a writer and an interesting thinker. In this post, we spoke about Dwarkesh getting followed by Jeff Bezos, what the future is going to be like, the most important problems, high volatility, and his interesting challenge to the audience. (0:00) High School Debate (4:10) Jeff Bezos Following Dwarkesh (9:36) Programming, Writing (11:23) Podcasting (14:26) Tyler Cowen (15:42) High Volatility (20:50) Dwarkesh's Path (23:20) The Future (37:35) Important Problems (47:07) Quote From Dwarkesh (55:58) Changing Thinking (57:03) Dwarkesh's Most Underrated Episode (58:22) Dwarkesh vs. Lex (1:00:58) Challenge Dwarkesh's Links Twitter: https://twitter.com/dwarkesh_sp Website: https://www.dwarkeshpatel.com/ The mystery of the miracle year by Dwarkesh Patel – https://www.dwarkeshpatel.com/p/annus-mirabilis My Links ✉️ Newsletter: https://dannymiranda.substack.com
Dwarkesh Patel is the Host of "The Lunar Society" podcast, and blogs on his website https://www.dwarkeshpatel.com/ In this conversation, we discuss everything from artificial intelligence, The "Miracle Years" of past visionaries, Longtermism, effective altruism, talent as leverage, the myth of the myth of the well read person, the power of the human brain, and human enhancement. This one is a marathon and we cover everything. I hope you enjoy the tremendous value in this episode. ======================= Compass Mining is the world's first online marketplace for bitcoin mining hardware and hosting. Compass was founded with the goal of making it easy for everyone to mine bitcoin. Visit https://compassmining.io/ to start mining bitcoin today! ======================= LMAX Digital - the market-leading solution for institutional crypto trading & custodial services - offers clients a regulated, transparent and secure trading environment, together with the deepest pool of crypto liquidity. LMAX Digital is also a primary price discovery venue, streaming real-time market data to the industry's leading analytics platforms. LMAX Digital - secure, liquid, trusted. Learn more at LMAXdigital.com/pomp ======================= Don't miss Mainnet, the most anticipated crypto event of the year, September 21-23 in New York City. Join 4000+ crypto builders and thought leaders for 3-days of can't-be-missed keynotes, fireside chats, demos, networking, and more. Get $300 off of your pass today by visiting https://mainnet.events and entering promo code "POMP" at check out. See you this fall at Mainnet 2022! ======================== BCB Group is the leading payment services partner for the digital assets industry. BCB Group provides payment services in 30+ currencies, FX, cryptocurrency liquidity, digital asset custody and BLINC, which is BCB's free, instant settlements network for the BCB client ecosystem. Find out more by visiting bcbgroup.com/pomp ======================= If you're trying to grow and preserve your crypto-wealth, optimizing your taxes is just as lucrative as trying to find the next hidden gem.Alto IRA can help you invest in crypto in tax-advantaged ways to help you preserve your hard earned money. Alto CryptoIRA lets you invest in more than 200 different coins and tokens with all the same tax advantages of an IRA.They make it easy to fund your Alternative IRA or CryptoIRA via your 401(k) or IRA rollover or by contributing directly from your bank account. So, ready to take your investments to the next level? Diversify like the pros and trade without tax headaches. Open an Alto CryptoIRA to invest in crypto tax-free. Just go to https://altoira.com/pomp ======================= Crypto wallets and browser extensions are outdated, limited in features, and don't meet the needs of today's Web3 users. Core, the free, non-custodial browser extension built by Ava Labs, is more than just a wallet. Core is packed with features that give Avalanche users a more seamless, and secure, Web3 experience. With Core, any crypto user can easily swap assets, display NFTs in a beautiful interface, and store your assets in a Ledger-enabled wallet. Plus you can put real dollars in your Core wallet in just a few clicks. Go to www.core.app to access the full power of Web3 on Avalanche! =======================