The Retort AI Podcast


Distilling the major events and challenges in the world of artificial intelligence and machine learning, from Thomas Krendl Gilbert and Nathan Lambert.

Thomas Krendl Gilbert and Nathan Lambert


    • Latest episode: Feb 18, 2025
    • New episodes: monthly
    • Average duration: 46m
    • Episodes: 38



    Latest episodes from The Retort AI Podcast

    Tom leaves stealth: Hortus AI

    Feb 18, 2025 · 53:30


    An exciting week for Tom, who tells Nate about his company Hortus AI, whose mission is to make AI accountable to local communities. We cover a lot of classic Retort themes as Tom makes a case for what's missing from AI development, and how models could be more healthily integrated into everyday people's lives.

    Press release: https://hortus.ai/wp-content/uploads/2025/02/Hortus-AI-Press-Release_Feb25.pdf
    Company website: https://hortus.ai/
    Whitepaper: https://www.newamerica.org/rethinkai/policy-papers/a-sustainable-path-for-ai-development/

    We ask again: Is AI a science?

    Jan 10, 2025 · 53:47


    Tom and Nate sit down for a classic discussion of the role of AI in the modern philosophy of science. Much of this discussion is based on Thomas Samuel Kuhn's influential book The Structure of Scientific Revolutions. We ask -- is AI a science in the Kuhnian sense? Will the "paradigm" worldview apply to other sciences post-AI? How will scientific institutions manage the addition of AI?

    We promised an AI for science reading list, so here it is:
    * Dario interview with Lex: https://youtu.be/ugvHCXCOmm4?si=1hnlvue8M4pV2TqC
    * Levers for biological progress: https://open.substack.com/pub/cell/p/levers?r=68gy5&utm_medium=ios
    * X thread on theories of change in the sciences: https://x.com/AdamMarblestone/status/1845158919523664019
    * Whitepaper linked by Seb Krier
    * Dwarkesh physics pod (the Nobel Prize in physics went to AI): https://open.substack.com/pub/dwarkesh/p/adam-brown?r=68gy5&utm_medium=ios
    * AI Policy Perspectives piece, "A new golden age of discovery": https://www.aipolicyperspectives.com/p/a-new-golden-age-of-discovery
    * Owl Posting checking recent NeurIPS papers (https://www.owlposting.com/p/can-o1-preview-find-major-mistakes), based on an idea from Ethan Mollick: https://x.com/emollick/status/1868329599438037491
    * Another post on the subject: https://open.substack.com/pub/amistrongeryet/p/the-black-spatula-project?r=68gy5&utm_medium=ios
    * Kuhn's The Structure of Scientific Revolutions
    * The Intrinsic Perspective: https://open.substack.com/pub/erikhoel/p/great-scientists-follow-intuition?r=68gy5&utm_medium=ios

    Get The Retort (https://retortai.com/)...
    ... on YouTube: https://www.youtube.com/@TheRetortAIPodcast
    ... on Spotify: https://open.spotify.com/show/0FDjH8ujv7p8ELZGkBvrfv?si=fa17a4d408f245ee
    ... on Apple Podcasts: https://podcasts.apple.com/us/podcast/the-retort-ai-podcast/id1706223190
    Follow Interconnects: https://www.interconnects.ai/
    Email us: mail@retortai.com

    The Retort's biggest AI stories of 2024

    Dec 6, 2024 · 47:45


    We're back! Tom and Nate catch up after the Thanksgiving holiday. Our main question was -- what were the biggest AI stories of the year? We touch on the core themes of the show: infrastructure, AI realities, and antitrust. The power buildout to scale out AI is going to have very real long-term impacts.

    Some links this week:
    * Ben Thompson's The End of the Beginning: https://stratechery.com/2020/the-end-of-the-beginning/
    * Miles Brundage's Substack: https://milesbrundage.substack.com/p/why-im-leaving-openai-and-what-im
    * Stochastic Parrots paper: https://dl.acm.org/doi/10.1145/3442188.3445922

    Thanks for listening! Get The Retort (https://retortai.com/)...
    ... on YouTube: https://www.youtube.com/@TheRetortAIPodcast
    ... on Spotify: https://open.spotify.com/show/0FDjH8ujv7p8ELZGkBvrfv?si=fa17a4d408f245ee
    ... on Apple Podcasts: https://podcasts.apple.com/us/podcast/the-retort-ai-podcast/id1706223190
    Follow Interconnects: https://www.interconnects.ai/
    Email us: mail@retortai.com

    The Nobel Albatross

    Oct 11, 2024 · 44:55


    Tom and Nate catch up on the happenings in AI. Of course, we're focused on the biggest awards available to us as esteemed scientists (or something close enough) -- the Nobel Prizes! What does it mean for the trajectory of AI that Hinton and Hassabis now carry added scientific weight? Honestly, feels like a sinking ship.

    Some links:
    * Schmidhuber tweet: https://x.com/SchmidhuberAI/status/1844022724328394780
    * Hinton "I'm proud my student fired Sam": https://x.com/Grady_Booch/status/1844145422824243290

    00:00 Introduction
    04:43 Criticism of AI-related Nobel Prize awards
    09:06 Geoffrey Hinton's comments on winning the Nobel Prize
    18:14 Debate on who should be credited for current AI advancements
    25:53 Changes in the nature of scientific research and recognition
    34:44 Changes in AI safety culture and company dynamics
    37:27 Discussion on AI scaling and its impact on the industry
    42:21 Reflection on the ongoing AI hype cycle

    Retort on YouTube: https://www.youtube.com/@TheRetortAIPodcast
    Retort on Twitter: https://x.com/retortai
    Retort website: https://retortai.com/
    Retort email: mail at retortai dot com

    Claude Needs a Constitutional Convention

    Sep 13, 2024 · 42:27


    Tom and Nate catch up on recent events (before the OpenAI o1 release) and opportunities in transparency/policy. We recap the legendary scam of Matt from the IT department, why disclosing the outcomes of a process is not enough, and more. This is a great episode for understanding why the process a technology was birthed from is just as important as the outcome!

    Some links:
    * Nathan's post on Model Specs for regulation: https://www.interconnects.ai/p/a-post-training-approach-to-ai-regulation
    * Nathan's post on inference spend: https://www.interconnects.ai/p/openai-strawberry-and-inference-scaling-laws

    Send your questions to mail at retortai dot com

    Avoiding the AI burnout

    Aug 26, 2024 · 36:37


    Tom and Nate catch up on core themes of AI after a somewhat unintended summer break. We discuss the moral groundings and philosophy of what we're building, our travels, The Anxious Generation, AGI obsessions, an update on AI Ethics vs. AI Safety, and plenty more in between.

    As always, contact us at mail@retortai.com

    Some links we mention in the episode:
    * The Emotional Dog and its Rational Tail: https://motherjones.com/wp-content/uploads/emotional_dog_and_rational_tail.pdf
    * The Anxious Generation: https://www.amazon.com/Anxious-Generation-Rewiring-Childhood-Epidemic/dp/0593655036
    * Shadow Lake Lodge: https://shadowlakelodge.com/
    * Recent Dwarkesh Podcast: https://www.dwarkeshpatel.com/p/joe-carlsmith

    What we are getting wrong about AI regulation

    Jul 19, 2024 · 34:49


    Tom and Nate catch up on the rapidly evolving (and political) space of AI regulation. We cover CA SB 1047, recent policing of data scraping, presidential appointees, antitrust intention vs. implementation, FLOP thresholds, and everything else touching the future of large ML models.

    Nate's internet cut out, so this episode ends a little abruptly. Reach out with any questions to mail at retortai.com

    Some links:
    - Night Comes to the Cumberlands: https://en.wikipedia.org/wiki/Night_Comes_to_the_Cumberlands
    - Hillbilly Elegy: https://en.wikipedia.org/wiki/Hillbilly_Elegy
    - Wired piece on data: https://www.wired.com/story/youtube-training-data-apple-nvidia-anthropic/
    - Nate's recent piece on AI regulation: https://www.interconnects.ai/p/sb-1047-and-open-weights

    00:00 Intro
    01:19 Training Data and the Media
    03:43 Norms, Power, and the Limits of Regulation
    08:52 OpenAI's Business Model
    12:33 Antitrust: The Essential Tool for Governing AI
    17:11 Users as Afterthoughts
    20:07 Depoliticizing AI
    26:14 "Breaking Bad" & the AI Parallel
    28:11 The "Little Tech" Agenda
    31:03 Reframing the Narrative of Big Tech
    32:20 "The Lean Startup" & AI's Uncertainty

    AI, feedback, and population public health

    Jun 28, 2024 · 41:55


    Tom and Nate revisit one of their old ideas -- AI through the lens of public health infrastructure, and especially alignment. Sorry about Tom's glitchy audio; I figured out after the fact that he was talking into the microphone at the wrong angle. Regardless, here are some links for this week.

    Links:
    - Data foundry for AI: https://scale.com/blog/scale-ai-series-f
    - The Information piece on Scale AI ($): https://www.theinformation.com/articles/why-a-14-billion-startup-is-now-hiring-phds-to-train-ai-from-their-living-rooms?shared=168f685a864ca709
    - ChatGPT compounding math: https://chatgpt.com/share/2c19a357-acb2-441d-8203-946b74ce785c

    Contact us at mail at retortai dot com

    00:00 Intro
    00:39 Chicago's Tech Scene and "The Bear"
    01:22 AI and Public Health: A New Framework
    08:17 Lessons for AI from Sanitation Infrastructure
    12:58 The Mental Health Impact of Generative AI
    23:28 Aligning AI with Diverse Societal Values
    27:06 Power Dynamics in AI's Development
    33:02 The Need for a Neutral AI Research Body (NAIRR)
    36:57 New Regulations for a New Era of AI
    41:05 Outro: Join the Conversation

    Apple sends a memo to the AGI faithful

    Jun 21, 2024 · 52:02


    Tom and Nate caught up last week (sorry for the editing delay) on the two big views of the AI future: Apple Intelligence and Situational Awareness (nationalistic AI doom prevention). One of our best episodes. Here are the links:
    * The Kekulé Problem: https://en.wikipedia.org/wiki/The_Kekul%C3%A9_Problem
    * Truth and Method: https://en.wikipedia.org/wiki/Truth_and_Method
    * Situational Awareness: https://situational-awareness.ai/

    00:00 A Hypothetical Life: From Germany to AGI
    01:20 Leopold Aschenbrenner: Situational Awareness and Extrapolation
    02:01 The Retort: Apple vs. Doomsday AI
    03:40 Credentials and Social Choice Theory
    05:14 Dissecting "Situational Awareness": Hype vs. Reality
    07:16 The Limits of Language Models: Are They Really Intelligent?
    11:04 Apple's Vision: AI for Consumers, Not Conquerors
    13:53 Silicon Valley Myopia and the Geopolitics of AI
    18:25 Beyond Benchmarks: The Scientist vs. The Engineer
    22:04 What is Intelligence? The Narrowness of Human Fixation
    24:32 A Growing Disrespect for Language?
    27:40 The Power of Talking to Language Models
    32:50 Language: Representation or Revelation?
    38:54 The Future of Meaning: Will AI Obliterate Art?
    45:32 A Vision for AI as Public Infrastructure

    Murky waters in AI policy

    May 31, 2024 · 43:39


    Tom and Nate catch up on many recent AI policy happenings: California's "anti open source" 1047 bill, the Senate AI roadmap, Google's search snafu, OpenAI's normal nonsense, and reader feedback! A bit of a mailbag. Enjoy.

    00:00 Murky waters in AI policy
    00:33 The Senate AI Roadmap
    05:14 The Executive Branch Takes the Lead
    08:33 California's Senate AI Bill
    22:22 OpenAI's Two Audiences
    28:53 The Problem with OpenAI Model Spec
    39:50 A New World of AI Regulation

    A bunch of links...
    * Data and Society whitepaper: https://static1.squarespace.com/static/66465fcd83d1881b974fe099/t/664b866c9524f174acd7931c/1716225644575/24.05.18+-+AI+Shadow+Report+V4.pdf and https://senateshadowreport.com/
    * California bill: https://www.hyperdimensional.co/p/california-senate-passes-sb-1047 and https://legiscan.com/CA/text/SB1047/id/2999979
    * Data walls: https://www.interconnects.ai/p/the-data-wall
    * Interconnects merch: https://interconnects.myshopify.com/

    ChatGPT talks: diamond of the season or quite the scandal?

    May 17, 2024 · 51:47


    Tom and Nate discuss two major OpenAI happenings in the last week: the popular one, the chat assistant, and what it reveals about OpenAI's worldview. We pair this with discussion of OpenAI's new Model Spec, which details their RLHF goals: https://cdn.openai.com/spec/model-spec-2024-05-08.html

    This is a monumental week for AI. The product transition is complete; we can't just be researchers anymore.

    00:00 Guess the Donkey Kong Character
    00:50 OpenAI's New AI Girlfriend
    07:08 OpenAI's Business Model and Responsible AI
    08:45 GPT-2 Chatbot Thing and OpenAI's Weirdness
    12:48 OpenAI and the Mystery Box
    19:10 The Blurring Boundaries of Intimacy and Technology
    22:05 Rousseau's Discourse on Inequality and the Impact of Technology
    26:16 OpenAI's Model Spec and Its Objectives
    30:10 The Unintelligibility of "Benefiting Humanity"
    37:01 The Chain of Command and the Paradox of AI Love
    45:46 The Form and Content of OpenAI's Model Spec
    48:51 The Future of AI and Societal Disruptions

    Three pillars of AI power

    May 10, 2024 · 56:36


    Tom and Nate discuss the shifting power landscape in AI. They try to discern what is special about Silicon Valley's grasp on the ecosystem and what other types of power (e.g. those in New York and Washington DC) will do to mobilize their influence. Here's the one tweet we referenced on the FAccT community: https://twitter.com/KLdivergence/status/1653843497932267520

    00:00 Introduction and Cryptozoologists
    02:00 DC and the National AI Research Resource (NAIRR)
    05:34 The Three Legs of the AI World: Silicon Valley, New York, and DC
    11:00 The AI Safety vs. Ethics Debate
    13:42 The Rise of the Third Entity: The Government's Role in AI
    19:42 New York's Influence and the Power of Narrative
    29:36 Silicon Valley's Insularity and the Need for Regulation
    36:50 The Amazon Antitrust Paradox and the Shifting Landscape
    48:20 The Energy Conundrum and the Need for Policy Solutions
    56:34 Conclusion: Finding Common Ground and Building a Better Future for AI

    Llama 3: Can't Compete with a Capuchin

    Apr 26, 2024 · 46:27


    Tom and Nate cover the state of the industry after Llama 3. Is Zuck the best storyteller in AI? Is he the best CEO? Are CEOs doing anything other than buying compute? We cover what it means to be successful at the highest level this week.

    Links:
    * Dwarkesh interview with Zuck: https://www.dwarkeshpatel.com/p/mark-zuckerberg
    * Capuchin monkey: https://en.wikipedia.org/wiki/Capuchin_monkey

    00:00 Introductions & advice from a wolf
    00:45 Llama 3
    07:15 Resources and investment required for large language models
    14:10 What it means to be a leader in the rapidly evolving AI landscape
    22:07 How much of AI progress is driven by stories vs resources
    29:41 Critiquing the concept of Artificial General Intelligence (AGI)
    38:10 Misappropriation of the term AGI by tech leaders
    42:09 The future of open models and AI development

    Into the AI Trough of Disillusionment

    Apr 12, 2024 · 51:27


    Tom and Nate catch up after a few weeks off the pod. We discuss what it means for the pace and size of open model releases to get bigger and bigger. In some ways, this disillusionment is a great way to zoom out to the big picture. These models are coming. These models are getting cheaper. We need to think about risks and infrastructure more than open vs. closed.

    00:00 Introduction
    01:16 Recent developments in open model releases
    04:21 Tom's experience viewing the total solar eclipse
    09:38 The Three-Body Problem book and Netflix
    14:06 The Gartner Hype Cycle
    22:51 Infrastructure constraints on scaling AI
    28:47 Metaphors and narratives around AI risk
    34:43 Rethinking AI risk as public health problems
    37:37 The "one-way door" nature of releasing open model weights
    44:04 The relationship between the AI ecosystem and the models
    48:24 Wrapping up the discussion in the "trough of disillusionment"

    We've got some links for you again:
    - Gartner hype cycle: https://en.wikipedia.org/wiki/Gartner_hype_cycle
    - MSFT supercomputer: https://www.theinformation.com/articles/microsoft-and-openai-plot-100-billion-stargate-ai-supercomputer
    - Safety is about systems: https://www.aisnakeoil.com/p/ai-safety-is-not-a-model-property
    - Earth Day history: https://www.earthday.org/history/
    - For our loyal listeners: http://tudorsbiscuitworld.com/

    AI's Eras Tour: Performance, Trust, and Legitimacy

    Mar 22, 2024 · 46:53


    Tom and Nate catch up on the ridiculousness of Nvidia GTC, the lack of trust in AI, and some important taxonomies and politics around governing AI. Safety institutes, reward model benchmarks, Nathan's bad joke delivery, and all the normal good stuff in this episode! Yes, we're also sick of the Taylor Swift jokes, but they get the clicks.

    The Taylor moment: https://twitter.com/DrJimFan/status/1769817948930072930

    00:00 Intros and discussion on NVIDIA's influence in AI and the Bay Area
    09:08 Mustafa Suleyman's new role and discussion on AI safety
    11:31 The shift from performance to trust in AI evaluation
    17:31 The role of government agencies in AI policy and regulation
    24:07 The role of accreditation in establishing legitimacy and trust
    32:11 Grok's open source release and its impact on the AI community
    39:34 Responsibility and accountability in AI and social media platforms

    Claude 3: Is Nathan too bought into the hype?

    Mar 8, 2024 · 43:06


    Tom and Nate sit down to discuss Claude 3 and some updates on what it means to be open. Not surprisingly, we get into debating some different views. We cover Dune 2's impact on AI and have a brief giveaway at the end. Cheers!

    More at retortai.com. Contact us at mail at domain.

    Some topics:
    - The pace of progress in AI and whether it feels meaningful or like "progress fatigue" to different groups
    - The role of hype and "vibes" in driving interest and investment in new AI models
    - Whether the value being created by large language models is actually just being concentrated in a few big tech companies
    - The debate around whether open source AI is feasible given the massive compute requirements
    - The limitations of "open letters" and events with Chatham House rules as forms of politics and accountability around AI
    - The analogy between the AI arms race and historical arms races like the dreadnought naval arms race
    - The role of narratives, pop culture, and "priesthoods" in shaping public understanding of AI

    Chapters & transcript partially created with https://github.com/FanaHOVA/smol-podcaster.

    00:00 Introduction and the spirit of open source
    04:32 Historical parallels of technology arms races
    10:26 The practical use of language models and their impact on society
    22:21 The role and potential of open source in AI development
    28:05 The challenges of achieving coordination and scale in open AI development
    34:18 Pop culture's influence on the AI conversation, specifically through "Dune"

    Model release therapy session #1

    Feb 23, 2024 · 52:46 · Transcription available


    This week Tom and Nate cover all the big topics from the big-picture lens: Sora, Gemini 1.5's context length, Gemini's bias backlash, the Gemma open models -- it was a busy week in AI. We come to the conclusion that we can no longer trust a lot of these big companies to do much. We are the gladiators playing to the crowd of AI. This was a great one; I'm proud of one of Tom's all-time best jokes. Thanks for listening, and reach out with any questions.

    Waymo vs. the time honored human experiences, vandalism and defacement

    Feb 16, 2024 · 37:16


    A metaphor episode! We are trying to figure out how much the Waymo incident is or is not about AI. We bring back our Berkeley roots and talk about traditions in the Bay around distributed technology. Scooters and robots are not safe in this episode, sadly. Here's the link to the Verge piece Tom read from: https://www.theverge.com/2024/2/11/24069251/waymo-driverless-taxi-fire-vandalized-video-san-francisco-china-town

    We believe in the metaverse

    Feb 9, 2024 · 37:04


    ... and you should too. We catch up this week on all things Apple Vision Pro and how these devices will intersect with AI. It really turned more into a commentary on the future of society, and how various technologies may or may not tap into our subconscious. The only link we've got for you is DeepDream: https://en.wikipedia.org/wiki/DeepDream 

    How to OLMost find a Waifu

    Feb 2, 2024 · 46:02


    Wow, one of our favorites. This week Tom and Nate have a lot to cover: AI2's new OPEN large language models (OLMo) and all that means, the alchemical model-merging craze powering waifu factories, model weight leaks from Mistral, the calling card for our loyal fans, and more topics.

    We have a lot of links you'll enjoy as you go through it:
    * The Mistral leak: https://huggingface.co/miqudev/miqu-1-70b/discussions/10
    * Writing on model merging: https://www.interconnects.ai/p/model-merging
    * Writing on open LLMs: https://www.interconnects.ai/p/olmo
    * The original Mechanical Turk: https://en.wikipedia.org/wiki/Mechanical_Turk
    * This Waifu Does Not Exist: https://thisanimedoesnotexist.ai/
    * The Warriors film: https://www.youtube.com/watch?v=--gdB-nnQkU
    * The Waifu Research Department: https://huggingface.co/waifu-research-department

    Tom's Story: to get through grad school, become a sperm whale

    Jan 26, 2024 · 57:15


    We recovered this episode from the depths of lost podcast recordings! We carry on, and Tom tells the story of his wonderful sociology-turned-AI Ph.D. at Berkeley. This comes with plenty of great commentary on the current state of the field and striving for impact. We cover the riverbank of Vienna, the heart of the sperm whale, and deep life lessons.

    Non-profits need to be businesses too

    Jan 19, 2024 · 37:10


    This week Tom and Nate catch up on two everlasting themes of ML: compute and evaluation. We chat about AI2, Zuck's GPUs, evaluation as procurement, NIST comments, neglecting reward models, and plenty of other topics. We're on the tracks for 2024 and waiting for some things to happen.

    Links for what we covered this week:
    * Zuck interview on The Verge
    * Saturday Night Live's George Washington during the Revolutionary War
    * NIST RFI
    * Sam Altman's uncomfortable proposition

    How the US could lose (and win!) against China in AI, with Jordan Schneider of ChinaTalk

    Jan 12, 2024 · 56:41


    We're excited to bring you something special today! Our first crossover episode brings some fresh energy to the podcast. Tom and Nate are joined by Jordan Schneider of ChinaTalk (a popular Substack-based publication covering all things China: https://www.chinatalk.media/). We cover lots of great ground here, from the economics of Hirschman to the competition from France. All good patriots should listen to this episode, as we give a real assessment of where competition lies on the U.S.'s path to commercializing AI. Enjoy our best effort at a journal club!

    AI is literally the culture war, figuratively speaking

    Jan 5, 2024 · 46:51


    Tom and Nate are ready to kick off the year, but not too ready! There's a ton to be excited about this year, but we're already worried about some parts of it. In this episode, we'll teach you how to be mindful of the so-called "other side of ML".

    Some links:
    - NYT lawsuit Techdirt article: https://www.techdirt.com/2023/12/28/the-ny-times-lawsuit-against-openai-would-open-up-the-ny-times-to-all-sorts-of-lawsuits-should-it-win/
    - AI-generated talk tool: https://github.com/natolambert/interconnects-tools?tab=readme-ov-file#generated-research--video
    - "They just want to learn": https://twitter.com/hamishivi/status/1730633057999483085 and pod episode: https://www.dwarkeshpatel.com/p/dario-amodei

    What I wish someone had told me

    Dec 22, 2023 · 50:51


    The end of the year is upon us! Tom and Nate bring a reflective mood to the podcast, along with some surprises that may be a delight.

    Here are some links for the loyal fans:
    * RAND + executive order piece: https://www.politico.com/news/2023/12/15/billionaire-backed-think-tank-played-key-role-in-bidens-ai-order-00132128
    * Sam Altman's blog post we were reading: https://blog.samaltman.com/what-i-wish-someone-had-told-me

    Everyone wants fair benchmarks, but do you even lift?

    Dec 15, 2023 · 49:14


    No stone is left unturned on this episode. As the end of the year approaches, Tom and Nate check in on all the vibes of the machine learning world: torrents, faked demos, alchemy, weightlifting, actual science, and blogs are all not safe in this episode.

    Some links for your weekend:
    - AI Alliance: https://thealliance.ai/
    - Evaluation gaming on Interconnects: https://www.interconnects.ai/p/evals-are-marketing
    - Fupi: https://www.youtube.com/watch?v=WtVknbxzn7Q

    Cybernetics, Feedback, and Reinventionism in CS

    Dec 8, 2023 · 41:39


    In this episode, Tom gives us a lesson on all things feedback, mostly where our scientific framings of it came from. Together, we link this to RLHF, our previous work in RL, and how we were thinking about agentic ML systems before it was cool. Join us on another great blast from the past on The Retort! We've also brought you video this week!

    Q* and OpenAI's Strange Loop: We Pecan't Even

    Nov 24, 2023 · 43:48


    We break down all the recent events in AI and live-react to some of the news about OpenAI's new super-method, codenamed Q*. From CEOs to rogue AIs, no one can be trusted in today's episode.

    Some links to relevant content on Interconnects:
    * Discussing how OpenAI's blunders open the doors for openness.
    * Detailing what Q* probably is.

    OpenAI: Developers, Hegemons, and Origins

    Nov 10, 2023 · 47:56


    We cover all things OpenAI as they embrace their role as a consumer technology company with their first developer keynote.

    Lots of links:
    * Dev day keynote: https://www.youtube.com/watch?v=U9mJuUkhUzk

    Some papers we cover:
    * Multinational AGI consortium (by non-technical folks): https://arxiv.org/abs/2310.09217
    * Frontier model risk paper that DC loves: https://arxiv.org/abs/2307.03718
    * Our Choices, Risk, and Reward Reports paper: https://cltc.berkeley.edu/reward-reports/
    * GPT-2 release blog with discussion of the "dangers" of LLMs in 2019: https://openai.com/research/better-language-models
    * 1984 Apple ad: https://www.youtube.com/watch?v=VtvjbmoDx-I

    Executive Orders, Safety Summits, and Open Letters, Oh My!

    Nov 3, 2023 · 53:02


    We discuss all the big regulation steps in AI this week, from the Biden Administration's Executive Order to the UK AI Safety Summit.

    Links:
    * The Executive Order
    * The Mozilla open letter
    * The Slaughterbots video
    * UK AI Safety Summit graph/meme

    Transparency (Tom and Nate's Version)

    Oct 27, 2023 · 45:51


    This week, we dunk on the Center for Research on Foundation Models' (Stanford) Foundation Model Transparency Index. Yes, the title is inspired by Taylor.

    Some links:
    * The Index itself, and Nathan's critique.
    * Anthropic's Collective Constitutional AI work, and its coverage in The New York Times.
    * New paper motivating transparency for reward models in RLHF.
    * Jitendra Malik dunks on the idea of foundation models.

    Techno-optimism: Safety, Ethics, or Fascism?

    Oct 20, 2023 · 41:28


    Tom and Nate sit down to discuss Marc Andreessen's Techno-Optimist Manifesto: a third wave of AI mindsets that squarely takes on both the AI Safety and AI Ethics communities.

    Some links:
    * An example of the Shoggoth monster we referenced.

    Thanks for listening!

    Methods of Rationality in the Time of Hype

    Oct 13, 2023 · 53:58


    This week, Tom and Nate discuss some of the core and intriguing dynamics of AI. We discuss the history of the rationality movement and where Harry Potter fan fiction fits in, whether AI will ever not feel hypey, the do's and don'ts of Sam Altman, and other topics.

    (Editor note: sorry for some small issues in Nate's audio. That will be fixed in the next episode.)

    Some links that are referenced:
    * HPMOR (Harry Potter and the Methods of Rationality).
    * A tweet referencing Sam Altman's funny (?) profile change.
    * Nathan's recent post on Interconnects on the job market craziness.

    Developing LLMs: Open, closed, or democratic?

    Oct 6, 2023 · 57:44


    This is a big one: getting going on whether LLMs should be more open or more closed. We cover everything -- OpenAI, scaling, openness for openness' sake (relative to OpenAI), actual arguments for open-source values in LLMs, AI as infrastructure, LLMs as platforms, what this means we need, and other topics.

    Lots of related links this time from Nathan:
    * Most recent article on Interconnects, explaining how open-source startups may be deluding themselves.
    * "What is an open-source LLM" on Interconnects.
    * How the open-source economy works, on Interconnects.

    As Above, So Below: Alchemy, AI Safety, and RLHF

    Sep 29, 2023 · 57:36


    Tom and Nate discuss a few core topics of the show. First, we touch base on the core of the podcast -- the difference between empirical science, alchemy, and magic. Next, we explain some of our deeper understandings of AI safety as a field, and that leads into a discussion of what RLHF means.

    Lots of links to share this time:
    * Tom's coverage of alchemy in VentureBeat, and an active thread on Twitter.
    * As Above, So Below: a calling of alchemy.
    * A NeurIPS test-of-time award speech on alchemy.
    * A bizarre Facebook debate between Yoshua Bengio and Stuart Russell.

    AI in public: education, roads, and institutions

    Sep 22, 2023 · 50:23


    Tom and Nate discuss some of the public institutions that form the bedrock of society -- education and roads -- and how AI is poised to shake them up.Some related reading on Interconnects, specifically about Tesla's system design and the self-driving roll-out in San Francisco.

    Alchemy, Oppenheimer, and the quest for AGI

    Sep 15, 2023 · 64:21


    Tom and Nate discuss some of the most dominant metaphors in machine learning these days -- alchemy and deep learning's roots, the Oppenheimer film and a modern "Manhattan Project for AI", and of course, a sprinkle of AGI.

    Some related reading on Interconnects: https://www.interconnects.ai/p/ai-research-tensions-oppenheimer

    Thanks for listening! Reach out if you have any questions.

    Teaser: Welcome to The Retort

    Sep 6, 2023 · 5:32


    A brief introduction to the many problems facing AI and a sneak peek into episode 1, coming soon!
