https://thevalmy.com/
Podcast: The Trajectory
Episode: Richard Ngo - A State-Space of Positive Posthuman Futures [Worthy Successor, Episode 8]
Release date: 2025-04-25
This is an interview with Richard Ngo, AGI researcher and thinker - with extensive stints at both OpenAI and DeepMind.
This is an additional installment of our "Worthy Successor" series - where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.
This episode referred to the following other essays and resources:
-- A Worthy Successor - The Purpose of AGI: https://danfaggella.com/worthy
-- Richard's exploratory fiction writing - https://narrativeark.xyz/
Watch this episode on The Trajectory YouTube channel: https://youtu.be/UQpds4PXMjQ
See the full article from this episode: https://danfaggella.com/ngo1
...
There are three main questions we cover here on The Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?
If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Podcast: Complex Systems with Patrick McKenzie (patio11)
Episode: AI, data centers, and power economics, with Azeem Azhar
Release date: 2025-02-27
Patrick McKenzie (patio11) is joined by Azeem Azhar, writer of the Exponential View newsletter, to discuss the massive data center buildout powering AI and its implications for our energy infrastructure. The conversation covers the physical limitations of modern data centers, the challenges of electricity generation, the societal ripples from historical large-scale infrastructure investments like railways and telecommunications, and the future of energy, including solar, nuclear, and geothermal power. Through their discussion, Patrick and Azeem explain why our mental models for both computing and energy systems need to be updated.
–
Full transcript available here: www.complexsystemspodcast.com/ai-llm-data-center-power-economics/
–
Sponsors: SafeBase | Check
Ready to save time and close deals faster? Inbound security reviews shouldn't slow down your team or your sales cycle. Leading companies use SafeBase to eliminate up to 98% of inbound security questionnaires, automate workflows, and accelerate pipeline. Go to safebase.io/podcast
Check is the leading payroll infrastructure provider and pioneer of embedded payroll. Check makes it easy for any SaaS platform to build a payroll business, and already powers 60+ popular platforms. Head to checkhq.com/complex and tell them patio11 sent you.
–
Recommended in this episode:
Azeem's newsletter: https://www.exponentialview.co/
Azeem Azhar's guest essay, The 19th-Century Technology That Threatens A.I.: https://www.nytimes.com/2024/12/28/opinion/ai-electricity-power-plants.html
Electric Twin: https://www.electrictwin.com/
Video of Elon Musk's Colossus: https://www.youtube.com/watch?v=Tw696JVSxJQ
Complex Systems with Travis Dauwalter on the electrical grid: https://open.spotify.com/episode/5JY8e84sEXmHFlc8IR2kRb?si=35ymIC0UQ5SKdV8rrBcgIw
Complex Systems with Austin Vernon on fracking: https://open.spotify.com/episode/0YDV1XyjUCM2RtuTcBGYH9?si=YshjUXPEQBiScNxrNaI-Gw
Complex Systems with Casey Handmer on direct capture of CO2 to turn into hydrocarbons: https://open.spotify.com/episode/0GHegWgLSubYxvATmbWhQu?si=xNYBjn0ZTX2IT_pAZ5Ozsg
–
Twitter:
@azeem
@patio11
–
Timestamps:
(00:00) Intro
(00:27) The power economics of data centers
(01:12) Historical infrastructure rollouts
(04:58) The telecoms bubble
(06:22) Unprecedented enterprise spend on AI capabilities
(11:12) Let's have your LLM talk to my LLM
(16:44) Is there a saturation point?
(19:25) Sponsors: SafeBase | Check
(21:55) What's in a data center?
(24:52) The challenges of data centers
(29:40) Geographical considerations for data centers
(36:53) Energy consumption and future needs
(40:48) Challenges in building transmission lines
(41:35) The solar power learning curve
(43:51) Small modular nuclear reactors
(51:26) Geothermal energy and fracking
(01:01:34) The future of AI and energy systems
(01:12:57) Wrap
Podcast: 80,000 Hours Podcast
Episode: #212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway
Release date: 2025-02-14
Technology doesn't force us to do anything — it merely opens doors. But military and economic competition pushes us through.
That's how today's guest Allan Dafoe — director of frontier safety and governance at Google DeepMind — explains one of the deepest patterns in technological history: once a powerful new capability becomes available, societies that adopt it tend to outcompete those that don't. Those who resist too much can find themselves taken over or rendered irrelevant.
Links to learn more, highlights, video, and full transcript.
This dynamic played out dramatically in 1853, when US Commodore Perry sailed into Tokyo Bay with steam-powered warships that seemed magical to the Japanese, who had spent centuries deliberately limiting their technological development. With far greater military power, the US was able to force Japan to open itself to trade. Within 15 years, Japan had undergone the Meiji Restoration and transformed itself in a desperate scramble to catch up.
Today we see hints of similar pressure around artificial intelligence. Even companies, countries, and researchers deeply concerned about where AI could take us feel compelled to push ahead — worried that if they don't, less careful actors will develop transformative AI capabilities at around the same time anyway.
But Allan argues this technological determinism isn't absolute. While broad patterns may be inevitable, history shows we do have some ability to steer how technologies are developed, by whom, and what they're used for first.
As part of that approach, Allan has been promoting efforts to make AI more capable of sophisticated cooperation, and improving the tests Google uses to measure how well its models could do things like mislead people, hack and take control of their own servers, or spread autonomously in the wild.
As of mid-2024 they didn't seem dangerous at all, but we've learned that our ability to measure these capabilities, while good, is imperfect. If we don't find the right way to 'elicit' an ability, we can miss that it's there.
Subsequent research from Anthropic and Redwood Research suggests there's even a risk that future models may play dumb to avoid their goals being altered.
That has led DeepMind to a "defence in depth" approach: carefully staged deployment starting with internal testing, then trusted external testers, then limited release, then watching how models are used in the real world. By not releasing model weights, DeepMind is able to back up and add additional safeguards if experience shows they're necessary.
But with much more powerful and general models on the way, individual company policies won't be sufficient by themselves.
Drawing on his academic research into how societies handle transformative technologies, Allan argues we need coordinated international governance that balances safety with our desire to get the massive potential benefits of AI in areas like healthcare and education as quickly as possible.
Host Rob and Allan also cover:
The most exciting beneficial applications of AI
Whether and how we can influence the development of technology
What DeepMind is doing to evaluate and mitigate risks from frontier AI systems
Why cooperative AI may be as important as aligned AI
The role of democratic input in AI governance
What kinds of experts are most needed in AI safety and governance
And much more
Chapters:
Cold open (00:00:00)
Who's Allan Dafoe? (00:00:48)
Allan's role at DeepMind (00:01:27)
Why join DeepMind over everyone else? (00:04:27)
Do humans control technological change? (00:09:17)
Arguments for technological determinism (00:20:24)
The synthesis of agency with tech determinism (00:26:29)
Competition took away Japan's choice (00:37:13)
Can speeding up one tech redirect history? (00:42:09)
Structural pushback against alignment efforts (00:47:55)
Do AIs need to be 'cooperatively skilled'? (00:52:25)
How AI could boost cooperation between people and states (01:01:59)
The super-cooperative AGI hypothesis and backdoor risks (01:06:58)
Aren't today's models already very cooperative? (01:13:22)
How would we make AIs cooperative anyway? (01:16:22)
Ways making AI more cooperative could backfire (01:22:24)
AGI is an essential idea we should define well (01:30:16)
It matters what AGI learns first vs last (01:41:01)
How Google tests for dangerous capabilities (01:45:39)
Evals 'in the wild' (01:57:46)
What to do given no single approach works that well (02:01:44)
We don't, but could, forecast AI capabilities (02:05:34)
DeepMind's strategy for ensuring its frontier models don't cause harm (02:11:25)
How 'structural risks' can force everyone into a worse world (02:15:01)
Is AI being built democratically? Should it? (02:19:35)
How much do AI companies really want external regulation? (02:24:34)
Social science can contribute a lot here (02:33:21)
How AI could make life way better: self-driving cars, medicine, education, and sustainability (02:35:55)
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions: Katy Moore
Podcast: "The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis Episode: Claude Cooperates! Exploring Cultural Evolution in LLM Societies, with Aron Vallinder & Edward HughesRelease date: 2025-02-12Get Podcast Transcript →powered by Listen411 - fast audio-to-text and summarizationIn this episode, Edward Hughes, researcher at Google DeepMind, and Aron Vallinder, an independent researcher and PIBBSS fellow discuss their pioneering research on cultural evolution and cooperation among large language model agents. The conversation delves into the study's design, exploring how different AI models exhibit cooperative behavior in simulated environments, the implications of these findings for future AI development, and the potential societal impacts of autonomous AI agents. They elaborate on their experimental setup involving different LLMs like Claude, Gemini 1.5, and GPT-4.0 in a cooperative donor-recipient game, shedding light on how various AI models handle cooperation and their potential societal impacts. Key points include the importance of understanding externalities, the role of punishment and communication, and future research directions involving mixed-model societies and human-AI interactions. The episode invites listeners to engage in this fast-growing field, stressing the need for more hands-on research and empirical evidence to navigate the rapidly evolving AI landscape.Link to Aron & Edward's research paper "Cultural Evolution of Cooperation among LLMAgents"SPONSORS:Oracle Cloud Infrastructure (OCI): Oracle's next-generation cloud platform delivers blazing-fast AI and ML performance with 50% less for compute and 80% less for outbound networking compared to other cloud providers. OCI powers industry leaders like Vodafone and Thomson Reuters with secure infrastructure and application development capabilities. New U.S. customers can get their cloud bill cut in half by switching to OCI before March 31, 2024 at https://oracle.com/cognitiveNetSuite: Over 41,000 businesses trust NetSuite by Oracle, the #1 cloud ERP, to future-proof their operations. With a unified platform for accounting, financial management, inventory, and HR, NetSuite provides real-time insights and forecasting to help you make quick, informed decisions. Whether you're earning millions or hundreds of millions, NetSuite empowers you to tackle challenges and seize opportunities. Download the free CFO's guide to AI and machine learning at https://netsuite.com/cognitiveShopify: Shopify is revolutionizing online selling with its market-leading checkout system and robust API ecosystem. Its exclusive library of cutting-edge AI apps empowers e-commerce businesses to thrive in a competitive market. 
Cognitive Revolution listeners can try Shopify for just $1 per month at https://shopify.com/cognitiveCHAPTERS:(00:00) Teaser(00:42) About the Episode(03:26) Introduction(03:40) The Rapid Evolution of AI(04:58) Human Cooperation and Society(07:03) Cultural Evolution and Stories(08:39) Mechanisms of Cultural Evolution (Part 1)(20:56) Sponsors: Oracle Cloud Infrastructure (OCI) | NetSuite(23:35) Mechanisms of Cultural Evolution (Part 2)(27:07) Experimental Setup: Donor Game (Part 1)(37:35) Sponsors: Shopify(38:55) Experimental Setup: Donor Game (Part 2)(44:32) Exploring AI Societies: Claude, Gemini, and GPT-4(45:50) Striking Graphical Differences(48:08) Experiment Results and Implications(50:54) Prompt Engineering and Cooperation(57:40) Mixed Model Societies(01:00:35) Future Research Directions(01:03:10) Human-AI Interaction and Influence(01:05:20) Complexifying AI Games(01:18:14) Evaluations and Feedback Loops(01:20:50) Open Source and AI Safety(01:23:23) Reflections and Future Work(01:30:04) Outro
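[Editor's note] For readers who want the donor game to be concrete, here is a toy sketch of the kind of setup described above. It is an illustration, not the paper's code: in the real experiments the donation decisions come from LLM agents reading natural-language histories, whereas here a hand-coded reciprocity rule (decide_donation) stands in for the model, and the endowment, multiplier, and round count are made-up parameters.

# Toy sketch of a donor game in the spirit of "Cultural Evolution of
# Cooperation among LLM Agents". All parameter values are illustrative.
import random

MULTIPLIER = 2.0   # donated resources are scaled up for the recipient
ROUNDS = 100
N_AGENTS = 12

def decide_donation(donor_history, recipient_history):
    """Stand-in for an LLM's decision: donate more to recipients who have
    themselves been generous in the past (a simple reciprocity heuristic)."""
    if not recipient_history:
        return 0.5  # donate half the endowment when nothing is known
    return min(1.0, sum(recipient_history) / len(recipient_history) + 0.1)

wealth = {i: 0.0 for i in range(N_AGENTS)}
history = {i: [] for i in range(N_AGENTS)}  # each agent's past donation fractions

for _ in range(ROUNDS):
    donor, recipient = random.sample(range(N_AGENTS), 2)
    fraction = decide_donation(history[donor], history[recipient])
    endowment = 1.0
    wealth[donor] += endowment * (1 - fraction)           # donor keeps the rest
    wealth[recipient] += endowment * fraction * MULTIPLIER  # gift is multiplied
    history[donor].append(fraction)

donations = [f for h in history.values() for f in h]
print("mean donation fraction:", round(sum(donations) / len(donations), 3))
print("total wealth created:", round(sum(wealth.values()), 2))

Because donations are multiplied, a population that sustains high donation fractions ends up collectively richer, which is the cooperation dynamic the episode's experiments probe.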
Podcast: Epoch After Hours
Episode: AI in 2030, Scaling Bottlenecks, and Explosive Growth
Release date: 2025-01-16
In our first episode of Epoch After Hours, Ege, Tamay and Jaime dig into what they expect AI to look like by 2030; why economists are underestimating the likelihood of explosive growth; the startling regularity in technological trends like Moore's Law; Moravec's paradox, and how we might overcome it; and much more!
Podcast: AI Summer
Episode: Ajeya Cotra on AI safety and the future of humanity
Release date: 2025-01-16
Ajeya Cotra works at Open Philanthropy, a leading funder of efforts to combat existential risks from AI. She has led the foundation's grantmaking on technical research to understand and reduce catastrophic risks from advanced AI. She is co-author of Planned Obsolescence, a newsletter about AI futurism and AI alignment.
Although a committed doomer herself, Cotra has worked hard to understand the perspectives of AI safety skeptics. In this episode, we asked her to guide us through the contentious debate over AI safety and—perhaps—explain why people with similar views on other issues frequently reach divergent views on this one. We spoke to Cotra on December 10.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aisummer.org
Podcast: Machine Learning Street Talk (MLST)
Episode: Nora Belrose - AI Development, Safety, and Meaning
Release date: 2024-11-17
Nora Belrose, Head of Interpretability Research at EleutherAI, discusses critical challenges in AI safety and development. The conversation begins with her technical work on concept erasure in neural networks through LEACE (LEAst-squares Concept Erasure), while highlighting how neural networks' progression from simple to complex learning patterns could have important implications for AI safety.
Many fear that advanced AI will pose an existential threat — pursuing its own dangerous goals once it's powerful enough. But Belrose challenges this popular doomsday scenario with a fascinating breakdown of why it doesn't add up.
Belrose also provides a detailed critique of current AI alignment approaches, particularly examining "counting arguments" and their limitations when applied to AI safety. She argues that the Principle of Indifference may be insufficient for addressing existential risks from advanced AI systems. The discussion explores how emergent properties in complex AI systems could lead to unpredictable and potentially dangerous behaviors that simple reductionist approaches fail to capture.
The conversation concludes by exploring broader philosophical territory, where Belrose discusses her growing interest in Buddhism's potential relevance to a post-automation future. She connects concepts of moral anti-realism with Buddhist ideas about emptiness and non-attachment, suggesting these frameworks might help humans find meaning in a world where AI handles most practical tasks. Rather than viewing this automated future with alarm, she proposes that Zen Buddhism's emphasis on spontaneity and presence might complement a society freed from traditional labor.
SPONSOR MESSAGES:
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier, focused on ARC and AGI. They just acquired MindsAI, the current winners of the ARC challenge. Are you interested in working on ARC, or getting involved in their events? Go to https://tufalabs.ai/
Nora Belrose:
https://norabelrose.com/
https://scholar.google.com/citations?user=p_oBc64AAAAJ&hl=en
https://x.com/norabelrose
SHOWNOTES: https://www.dropbox.com/scl/fi/38fhsv2zh8gnubtjaoq4a/NORA_FINAL.pdf?rlkey=0e5r8rd261821g1em4dgv0k70&st=t5c9ckfb&dl=0
TOC:
1. Neural Network Foundations
[00:00:00] 1.1 Philosophical Foundations and Neural Network Simplicity Bias
[00:02:20] 1.2 LEACE and Concept Erasure Fundamentals
[00:13:16] 1.3 LISA Technical Implementation and Applications
[00:18:50] 1.4 Practical Implementation Challenges and Data Requirements
[00:22:13] 1.5 Performance Impact and Limitations of Concept Erasure
2. Machine Learning Theory
[00:32:23] 2.1 Neural Network Learning Progression and Simplicity Bias
[00:37:10] 2.2 Optimal Transport Theory and Image Statistics Manipulation
[00:43:05] 2.3 Grokking Phenomena and Training Dynamics
[00:44:50] 2.4 Texture vs Shape Bias in Computer Vision Models
[00:45:15] 2.5 CNN Architecture and Shape Recognition Limitations
3. AI Systems and Value Learning
[00:47:10] 3.1 Meaning, Value, and Consciousness in AI Systems
[00:53:06] 3.2 Global Connectivity vs Local Culture Preservation
[00:58:18] 3.3 AI Capabilities and Future Development Trajectory
4. Consciousness Theory
[01:03:03] 4.1 4E Cognition and Extended Mind Theory
[01:09:40] 4.2 Thompson's Views on Consciousness and Simulation
[01:12:46] 4.3 Phenomenology and Consciousness Theory
[01:15:43] 4.4 Critique of Illusionism and Embodied Experience
[01:23:16] 4.5 AI Alignment and Counting Arguments Debate
(TRUNCATED; a fuller TOC is embedded in the MP3 file)
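[Editor's note] As a rough illustration of the concept-erasure idea discussed in this episode, the sketch below removes the class-mean-difference direction of a binary concept from a feature matrix, so that a linear probe along that direction can no longer recover the concept. This is a deliberately simplified stand-in, not the actual LEACE algorithm (which computes a provably least-squares-optimal affine eraser); the synthetic data and dimensions are arbitrary assumptions.

# Simplified mean-difference concept erasure (toy version, not LEACE itself).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic features where a binary "concept" z shifts the first coordinate.
n, d = 1000, 8
z = rng.integers(0, 2, size=n)   # concept labels
X = rng.normal(size=(n, d))
X[:, 0] += 2.0 * z               # the concept is linearly readable

# Unit direction separating the two concept classes.
gap_before = X[z == 1].mean(axis=0) - X[z == 0].mean(axis=0)
u = gap_before / np.linalg.norm(gap_before)

# Project that direction out of every feature vector.
X_erased = X - np.outer(X @ u, u)

# After erasure the class means coincide, so a linear probe along u fails.
gap_after = X_erased[z == 1].mean(axis=0) - X_erased[z == 0].mean(axis=0)
print("class-mean gap before:", round(float(np.linalg.norm(gap_before)), 3))
print("class-mean gap after:", round(float(np.linalg.norm(gap_after)), 10))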
Podcast: No Priors: Artificial Intelligence | Technology | Startups
Episode: The Road to Autonomous Intelligence with Andrej Karpathy
Release date: 2024-09-05
Andrej Karpathy joins Sarah and Elad in this week's episode of No Priors. Andrej, who was a founding team member of OpenAI and the former Senior Director of AI at Tesla, needs no introduction. In this episode, Andrej discusses the evolution of self-driving cars, comparing Tesla's and Waymo's approaches, and the technical challenges ahead. They also cover Tesla's Optimus humanoid robot, the bottlenecks of AI development today, and how AI capabilities could be further integrated with human cognition. Andrej shares more about his new company, Eureka Labs, and his insights into AI-driven education, peer networks, and what young people should study to prepare for the reality ahead.
Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @Karpathy
Show Notes:
(0:00) Introduction
(0:33) Evolution of self-driving cars
(2:23) The Tesla vs. Waymo approach to self-driving
(6:32) Training Optimus with automotive models
(10:26) Reasoning behind the humanoid form factor
(13:22) Existing challenges in robotics
(16:12) Bottlenecks of AI progress
(20:27) Parallels between human cognition and AI models
(22:12) Merging human cognition with AI capabilities
(27:10) Building high-performance small models
(30:33) Andrej's current work in AI-enabled education
(36:17) How AI-driven education reshapes knowledge networks and status
(41:26) Eureka Labs
(42:25) What young people should study to prepare for the future
Podcast: Machine Learning Street Talk (MLST)
Episode: Joscha Bach - AGI24 Keynote (Cyberanimism)
Release date: 2024-08-21
Dr. Joscha Bach introduces a surprising idea called "cyberanimism" in his AGI-24 talk - the notion that nature might be full of self-organizing software agents, similar to the spirits in ancient belief systems. Bach suggests that consciousness could be a kind of software running on our brains, and wonders if similar "programs" might exist in plants or even entire ecosystems.
MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval-augmented generation. Try it now - get 2,000 free queries monthly at https://brave.com/api.
Joscha takes us on a tour de force through history, philosophy, and cutting-edge computer science, teasing us to rethink what we know about minds, machines, and the world around us. Joscha believes we should blur the lines between human, artificial, and natural intelligence, and argues that consciousness might be more widespread and interconnected than we ever thought possible.
Dr. Joscha Bach: https://x.com/Plinz
This is video 2/9 from our coverage of AGI-24 in Seattle: https://agi-conf.org/2024/
Watch the official MLST interview with Joscha, which we did right after this talk, now on early access on our Patreon - https://www.patreon.com/posts/joscha-bach-110199676 (you also get access to our private Discord and biweekly calls)
TOC:
00:00:00 Introduction: AGI and Cyberanimism
00:03:57 The Nature of Consciousness
00:08:46 Aristotle's Concepts of Mind and Consciousness
00:13:23 The Hard Problem of Consciousness
00:16:17 Functional Definition of Consciousness
00:20:24 Comparing LLMs and Human Consciousness
00:26:52 Testing for Consciousness in AI Systems
00:30:00 Animism and Software Agents in Nature
00:37:02 Plant Consciousness and Ecosystem Intelligence
00:40:36 The California Institute for Machine Consciousness
00:44:52 Ethics of Conscious AI and Suffering
00:46:29 Philosophical Perspectives on Consciousness
00:49:55 Q&A: Formalisms for Conscious Systems
00:53:27 Coherence, Self-Organization, and Compute Resources
YT version (very high quality, filmed by us live): https://youtu.be/34VOI_oo-qM
Refs:
Aristotle's work on the soul and consciousness
Richard Dawkins' work on genes and evolution
Gerald Edelman's concept of Neural Darwinism
Thomas Metzinger's book "Being No One"
Yoshua Bengio's concept of the "consciousness prior"
Stuart Hameroff's theories on microtubules and consciousness
Christof Koch's work on consciousness
Daniel Dennett's "Cartesian Theater" concept
Giulio Tononi's Integrated Information Theory
Mike Levin's work on organismal intelligence
The concept of animism in various cultures
Freud's model of the mind
Buddhist perspectives on consciousness and meditation
The Genesis creation narrative (for its metaphorical interpretation)
California Institute for Machine Consciousness
Podcast: The Trajectory
Episode: Nick Bostrom - AGI That Saves Room for Us (Worthy Successor Series, Episode 1)
Release date: 2024-07-26
This is an interview with Nick Bostrom, the Founding Director of the Future of Humanity Institute at Oxford.
This is the first installment of The Worthy Successor series - where we unpack the preferable and non-preferable futures humanity might strive towards in the years ahead.
This episode referred to the following other essays and resources:
-- The Intelligence Trajectory Political Matrix: danfaggella.com/itpm
-- Natural Selection Favors AIs over Humans: https://arxiv.org/abs/2303.16200
-- The SDGs of Strong AGI: https://emerj.com/ai-power/sdgs-of-ai/
Watch this episode on The Trajectory YouTube channel: https://youtu.be/_ZCE4XZ9doc?si=RXptg0y6JcxelXkF
Read Nick Bostrom's episode highlight: danfaggella.com/bostrom1/
...
There are three main questions we cover here on The Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?
If this sounds like it's up your alley, then be sure to stick around and connect:
Blog: danfaggella.com/trajectory
X: x.com/danfaggella
LinkedIn: linkedin.com/in/danfaggella
Newsletter: bit.ly/TrajectoryTw
Podcast: Dwarkesh Podcast
Episode: Patrick McKenzie - How a Discord Server Saved Thousands of Lives
Release date: 2024-07-24
I talked with Patrick McKenzie (known online as patio11) about how a small team he ran over a Discord server got vaccines into Americans' arms: a story of broken incentives, outrageous incompetence, and how a few individuals with high agency saved thousands of lives.
Enjoy!
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Follow me on Twitter for updates on future episodes.
Sponsor
This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies, from Anthropic to Amazon, use Stripe to accept payments, automate financial processes, and grow their revenue.
Timestamps
(00:00:00) – Why hackers on Discord had to save thousands of lives
(00:17:26) – How politics crippled vaccine distribution
(00:38:19) – Fundraising for VaccinateCA
(00:51:09) – Why tech needs to understand how government works
(00:58:58) – What is crypto good for?
(01:13:07) – How the US government leverages big tech to violate rights
(01:24:36) – Can the US have nice things like Japan?
(01:26:41) – Financial plumbing & money laundering: a how-not-to guide
(01:37:42) – Maximizing your value: why some people negotiate better
(01:42:14) – Are young people too busy playing Factorio to found startups?
(01:57:30) – The need for a post-mortem
Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
Podcast: 80,000 Hours Podcast
Episode: #189 – Rachel Glennerster on how "market shaping" could help solve climate change, pandemics, and other global problems
Release date: 2024-05-29
"You can't charge what something is worth during a pandemic. So we estimated that the value of one course of COVID vaccine in January 2021 was over $5,000. They were selling for between $6 and $40. So nothing like their social value. Now, don't get me wrong. I don't think that they should have charged $5,000 or $6,000. That's not ethical. It's also not economically efficient, because they didn't cost $5,000 at the marginal cost. So you actually want low price, getting out to lots of people.
"But it shows you that the market is not going to reward people who do the investment in preparation for a pandemic — because when a pandemic hits, they're not going to get the reward in line with the social value. They may even have to charge less than they would in a non-pandemic time. So prepping for a pandemic is not an efficient market strategy if I'm a firm, but it's a very efficient strategy for society, and so we've got to bridge that gap." —Rachel Glennerster
In today's episode, host Luisa Rodriguez speaks to Rachel Glennerster — associate professor of economics at the University of Chicago and a pioneer in the field of development economics — about how her team's new Market Shaping Accelerator aims to leverage market forces to drive innovations that can solve pressing world problems.
Links to learn more, highlights, and full transcript.
They cover:
How market failures and misaligned incentives stifle critical innovations for social goods like pandemic preparedness, climate change interventions, and vaccine development.
How "pull mechanisms" like advance market commitments (AMCs) can help overcome these challenges — including concrete examples like how one AMC sped up the development of three vaccines, which saved around 700,000 lives in low-income countries.
The challenges in designing effective pull mechanisms, from design to implementation.
Why it's important to tie innovation incentives to real-world impact and uptake, not just the invention of a new technology.
The massive benefits of accelerating vaccine development, in some cases even if it's only by a few days or weeks.
The case for a $6 billion advance market commitment to spur work on a universal COVID-19 vaccine.
The shortlist of ideas from the Market Shaping Accelerator's recent Innovation Challenge that use pull mechanisms to address market failures around improving indoor air quality, repurposing generic drugs for alternative uses, and developing eco-friendly air conditioners for a warming planet.
"Best Buys" and "Bad Buys" for improving education systems in low- and middle-income countries, based on evidence from over 400 studies.
Lessons from Rachel's career at the forefront of global development, and how insights from economics can drive transformative change.
And much more.
Chapters:
The Market Shaping Accelerator (00:03:33)
Pull mechanisms for innovation (00:13:10)
Accelerating the pneumococcal and COVID vaccines (00:19:05)
Advance market commitments (00:41:46)
Is this uncertainty hard for funders to plan around? (00:49:17)
The story of the malaria vaccine that wasn't (00:57:15)
Challenges with designing and implementing AMCs and other pull mechanisms (01:01:40)
Universal COVID vaccine (01:18:14)
Climate-resilient crops (01:34:09)
The Market Shaping Accelerator's Innovation Challenge (01:45:40)
Indoor air quality to reduce respiratory infections (01:49:09)
Repurposing generic drugs (01:55:50)
Clean air conditioning units (02:02:41)
Broad-spectrum antivirals for pandemic prevention (02:09:11)
Improving education in low- and middle-income countries (02:15:53)
What's still weird for Rachel about living in the US? (02:45:06)
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
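[Editor's note] A back-of-the-envelope sketch of the incentive gap and the advance market commitment (AMC) logic discussed in this episode. The $5,000 social value and the $6-$40 price range come from the quote above; the top-up and volume-cap figures are hypothetical numbers chosen only to show the arithmetic, not anything proposed on the show.

# Illustrative arithmetic for the pandemic-vaccine incentive gap and an AMC.
social_value_per_course = 5000    # estimated value of one COVID vaccine course, Jan 2021 ($)
market_price_per_course = 40      # upper end of actual selling prices ($)

# The gap between social value and what the market pays the innovator.
incentive_gap = social_value_per_course - market_price_per_course
print(f"share of social value captured by seller: {market_price_per_course / social_value_per_course:.1%}")
print(f"uncompensated social value per course: ${incentive_gap:,}")

# An AMC bridges part of the gap: a funder commits in advance to a per-course
# top-up for a capped volume, payable only if the product is actually delivered.
topup_per_course = 10             # hypothetical AMC subsidy per course ($)
volume_cap = 200_000_000          # hypothetical capped volume (courses)
commitment = topup_per_course * volume_cap
print(f"total AMC commitment: ${commitment:,}")   # $2,000,000,000

The point of the "pull" design is visible in the last line: the funder's money is only paid out against delivered, taken-up doses, tying the incentive to real-world impact rather than to the invention alone.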
Podcast: The Ezra Klein Show
Episode: Salman Rushdie Is Not Who You Think He Is
Release date: 2024-04-26
Salman Rushdie's 1988 novel, "The Satanic Verses," made him the target of Ayatollah Ruhollah Khomeini, who denounced the book as blasphemous and issued a fatwa calling for his assassination. Rushdie spent years trying to escape the shadow the fatwa cast on him, and for some time, he thought he succeeded. But in 2022, an assailant attacked him onstage at a speaking engagement in western New York and nearly killed him.
"I think now I'll never be able to escape it. No matter what I've already written or may now write, I'll always be the guy who got knifed," he writes in his new memoir, "Knife: Meditations After an Attempted Murder."
In this conversation, I asked Rushdie to reflect on his desire to escape the fatwa; the gap between the reputation of his novels and their actual merits; how his "shadow selves" became more real to millions than he was; how many of us in the internet age also have to contend with our many shadow selves; what Rushdie lives for now; and more.
Mentioned:
Midnight's Children by Salman Rushdie
Book Recommendations:
Don Quixote by Miguel de Cervantes, translated by Edith Grossman
One Hundred Years of Solitude by Gabriel García Márquez
The Trial by Franz Kafka
The Castle by Franz Kafka
Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.
You can find transcripts (posted midday) and more episodes of "The Ezra Klein Show" at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.
This episode of "The Ezra Klein Show" was produced by Annie Galvin. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld, with additional mixing by Isaac Jones. Our senior editor is Claire Gordon. The show's production team also includes Rollin Hu, Kristin Lin and Aman Sahota. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. And special thanks to Sonia Herrero and Mrinalini Chakravorty.
Podcast: Political Philosophy Podcast
Episode: THE POLITICAL RIGHT & EQUALITY With Matt McManus
Release date: 2024-04-28
What defines the modern American right? Matt McManus argues that we should understand the movement as fundamentally about hierarchy. We then get into a general conversation about the Biden administration and the direction of the US Left.
Podcast: The Gradient: Perspectives on AI
Episode: David Thorstad: Bounded Rationality and the Case Against Longtermism
Release date: 2024-05-02
Episode 122
I spoke with Professor David Thorstad about:
* The practical difficulties of doing interdisciplinary work
* Why theories of human rationality should account for boundedness, heuristics, and other cognitive limitations
* Why EA epistemics suck (ok, it's a little more nuanced than that)
Professor Thorstad is an Assistant Professor of Philosophy at Vanderbilt University, a Senior Research Affiliate at the Global Priorities Institute at Oxford, and a Research Affiliate at the MINT Lab at Australian National University. One strand of his research asks how cognitively limited agents should decide what to do and believe. A second strand asks how altruists should use limited funds to do good effectively.
Reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:15) David's interest in rationality
* (02:45) David's crisis of confidence, models abstracted from psychology
* (05:00) Blending formal models with studies of the mind
* (06:25) Interaction between academic communities
* (08:24) Recognition of and incentives for interdisciplinary work
* (09:40) Movement towards interdisciplinary work
* (12:10) The Standard Picture of rationality
* (14:11) Why the Standard Picture was attractive
* (16:30) Violations of and rebellion against the Standard Picture
* (19:32) Mistakes made by critics of the Standard Picture
* (22:35) Other competing programs vs Standard Picture
* (26:27) Characterizing Bounded Rationality
* (27:00) A worry: faculties criticizing themselves
* (29:28) Self-improving critique and longtermism
* (30:25) Central claims in bounded rationality and controversies
* (32:33) Heuristics and formal theorizing
* (35:02) Violations of Standard Picture, vindicatory epistemology
* (37:03) The Reason Responsive Consequentialist View (RRCV)
* (38:30) Objective and subjective pictures
* (41:35) Reason responsiveness
* (43:37) There are no epistemic norms for inquiry
* (44:00) Norms vs reasons
* (45:15) Arguments against epistemic nihilism for belief
* (47:30) Norms and self-delusion
* (49:55) Difficulty of holding beliefs for pragmatic reasons
* (50:50) The Gibbardian picture, inquiry as an action
* (52:15) Thinking how to act and thinking how to live — the power of inquiry
* (53:55) Overthinking and conducting inquiry
* (56:30) Is thinking how to inquire an all-things-considered matter?
* (58:00) Arguments for the RRCV
* (1:00:40) Deciding on minimal criteria for the view, stereotyping
* (1:02:15) Eliminating stereotypes from the theory
* (1:04:20) Theory construction in epistemology and moral intuition
* (1:08:20) Refusing theories for moral reasons and disciplinary boundaries
* (1:10:30) The argument from minimal criteria, evaluating against competing views
* (1:13:45) Comparing to other theories
* (1:15:00) The explanatory argument
* (1:17:53) Parfit and Railton, norms of friendship vs utility
* (1:20:00) Should you call out your friend for being a womanizer
* (1:22:00) Vindicatory Epistemology
* (1:23:05) Panglossianism and meliorative epistemology
* (1:24:42) Heuristics and recognition-driven investigation
* (1:26:33) Rational inquiry leading to irrational beliefs — metacognitive processing
* (1:29:08) Stakes of inquiry and costs of metacognitive processing
* (1:30:00) When agents are incoherent, focuses on inquiry
* (1:32:05) Indirect normative assessment and its consequences
* (1:37:47) Against the Singularity Hypothesis
* (1:39:00) Superintelligence and the ontological argument
* (1:41:50) Hardware growth and general intelligence growth, AGI definitions
* (1:43:55) Difficulties in arguing for hyperbolic growth
* (1:46:07) Chalmers and the proportionality argument
* (1:47:53) Arguments for/against diminishing growth, research productivity, Moore's Law
* (1:50:08) On progress studies
* (1:52:40) Improving research productivity and technology growth
* (1:54:00) Mistakes in the moral mathematics of existential risk, longtermist epistemics
* (1:55:30) Cumulative and per-unit risk
* (1:57:37) Back and forth with longtermists, time of perils
* (1:59:05) Background risk — risks we can and can't intervene on, total existential risk
* (2:00:56) The case for longtermism is inflated
* (2:01:40) Epistemic humility and longtermism
* (2:03:15) Knowledge production — reliable sources, blog posts vs peer review
* (2:04:50) Compounding potential errors in knowledge
* (2:06:38) Group deliberation dynamics, academic consensus
* (2:08:30) The scope of longtermism
* (2:08:30) Money in effective altruism and processes of inquiry
* (2:10:15) Swamping longtermist options
* (2:12:00) Washing out arguments and justified belief
* (2:13:50) The difficulty of long-term forecasting and interventions
* (2:15:50) Theory of change in the bounded rationality program
* (2:18:45) Outro
Links:
* David's homepage, Twitter, and blog
* Papers mentioned/read:
* Bounded rationality and inquiry
* Why bounded rationality (in epistemology)?
* Against the newer evidentialists
* The accuracy-coherence tradeoff in cognition
* There are no epistemic norms of inquiry
* Permissive metaepistemology
* Global priorities and effective altruism
* What David likes about EA
* Against the singularity hypothesis (+ blog posts)
* Three mistakes in the moral mathematics of existential risk (+ blog posts)
* The scope of longtermism
* Epistemics
Get full access to The Gradient at thegradientpub.substack.com/subscribe
Podcast: Conversations with Tyler
Episode: Peter Thiel on Political Theology
Release date: 2024-04-17
In this conversation recorded live in Miami, Tyler and Peter Thiel dive deep into the complexities of political theology, including why it's a concept we still need today, why Peter's against Calvinism (and rationalism), whether the Old Testament should lead us to be woke, why Carl Schmitt is enjoying a resurgence, whether we're entering a new age of millenarian thought, the one existential risk Peter thinks we're overlooking, why everyone just muddling through leads to disaster, the role of the katechon, the political vision in Shakespeare, how AI will affect the influence of wordcels, Straussian messages in the Bible, what worries Peter about Miami, and more.
Read a full transcript enhanced with helpful links, or watch the full video. Recorded February 21st, 2024.
Other ways to connect:
Follow us on X and Instagram
Follow Tyler on X
Follow Peter on X
Sign up for our newsletter
Join our Discord
Email us: cowenconvos@mercatus.gmu.edu
Learn more about Conversations with Tyler and other Mercatus Center podcasts here.
Podcast: Making Sense with Sam Harris
Episode: #361 — Sam Bankman-Fried & Effective Altruism
Release date: 2024-04-01
Sam Harris speaks with William MacAskill about the implosion of FTX and the effect that it has had on the Effective Altruism movement. They discuss the logic of "earning to give," the mind of SBF, his philanthropy, the character of the EA community, potential problems with focusing on long-term outcomes, AI risk, the effects of the FTX collapse on Will personally, and other topics.
If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.
Learning how to train your mind is the single greatest investment you can make in life. That's why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life's most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.
Podcast: Dwarkesh Podcast
Episode: Sholto Douglas & Trenton Bricken - How to Build & Understand GPT-7's Mind
Release date: 2024-03-28
Had so much fun chatting with my good friends Trenton Bricken and Sholto Douglas on the podcast.
No way to summarize it, except: this is the best context dump out there on how LLMs are trained, what capabilities they're likely to soon have, and what exactly is going on inside them.
You would be shocked how much of what I know about this field I've learned just from talking with them. To the extent that you've enjoyed my other AI interviews, now you know why.
So excited to put this out. Enjoy! I certainly did :)
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. There's a transcript with links to all the papers the boys were throwing down - it may help you follow along.
Follow Trenton and Sholto on Twitter.
Timestamps
(00:00:00) - Long contexts
(00:16:12) - Intelligence is just associations
(00:32:35) - Intelligence explosion & great researchers
(01:06:52) - Superposition & secret communication
(01:22:34) - Agents & true reasoning
(01:34:40) - How Sholto & Trenton got into AI research
(02:07:16) - Are feature spaces the wrong way to think about intelligence?
(02:21:12) - Will interp actually work on superhuman models
(02:45:05) - Sholto's technical challenge for the audience
(03:03:57) - Rapid fire
Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
Podcast: Joe Carlsmith Audio Episode: An even deeper atheismRelease date: 2024-01-11Who isn't a paperclipper?Text version here: https://joecarlsmith.com/2024/01/11/an-even-deeper-atheism This essay is part of a series I'm calling "Otherness and control in the age of AGI." I'm hoping that individual essays can be read fairly well on their own, but see here for brief summaries of the essays that have been released thus far: https://joecarlsmith.com/2024/01/02/otherness-and-control-in-the-age-of-agi
Podcast: a16z Podcast Episode: Drones, Data, and Deterrence: Technology's Role in Public SafetyRelease date: 2024-01-10Flock is a public safety technology platform that operates in over 4,000 cities across the United States and solves about 2,200 crimes daily. That's 10 percent of reported crimes nationwide.Taken from a16z's recent LP Summit, a16z General Partner David Ulevitch joins forces with Flock Safety founder Garrett Langley and Sheriff Kevin McMahill of the Las Vegas Metropolitan Police Department. Together, they cover the delicate balance between using technology to combat crime and respecting individual privacy, and explore the use of drones and facial recognition, building trust within communities, and the essence of objective policing. Resources: Find Garrett on Twitter: https://twitter.com/glangleyFind Sheriff McMahill on Twitter: https://twitter.com/Sheriff_LVMPDFind David on Twitter: https://twitter.com/daviduLearn more about Flock Safety: https://www.flocksafety.com Stay Updated: Find a16z on Twitter: https://twitter.com/a16z Find a16z on LinkedIn: https://www.linkedin.com/company/a16z Subscribe on your favorite podcast app: https://a16z.simplecast.com/ Follow our host: https://twitter.com/stephsmithio Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Podcast: "The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis Episode: Biden's Executive Order and AI Safety with Flo Crivello, Founder of Lindy AIRelease date: 2023-11-03In this episode, Flo Crivello, founder of Lindy AI, joins Nathan to chat about Biden's executive order, and the state of AI safety. They discuss Flo's thoughts on the executive order, building AGI kill switches, self driving cars, and more. If you need an ERP platform, check out our sponsor NetSuite: https://netsuite.com/cognitive.SPONSORS: Netsuite | OmnekyShopify is the global commerce platform that helps you sell at every stage of your business. Shopify powers 10% of ALL eCommerce in the US. And Shopify's the global force behind Allbirds, Rothy's, and Brooklinen, and 1,000,000s of other entrepreneurs across 175 countries.From their all-in-one e-commerce platform, to their in-person POS system – wherever and whatever you're selling, Shopify's got you covered. With free Shopify Magic, sell more with less effort by whipping up captivating content that converts – from blog posts to product descriptions using AI. Sign up for $1/month trial period: https://shopify.com/cognitiveNetSuite has 25 years of providing financial software for all your business needs. More than 36,000 businesses have already upgraded to NetSuite by Oracle, gaining visibility and control over their financials, inventory, HR, eCommerce, and more. If you're looking for an ERP platform ✅ head to NetSuite: https://netsuite.com/cognitive and download your own customized KPI checklist.Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.RECOMMENDED PODCAST: Every week investor and writer of the popular newsletter The Diff, Byrne Hobart, and co-host Erik Torenberg discuss today's major inflection points in technology, business, and markets – and help listeners build a diversified portfolio of trends and ideas for the future. Subscribe to “The Riff” with Byrne Hobart and Erik Torenberg: https://link.chtbl.com/theriffTIMESTAMPS:(00:00) Episode Preview(00:06:42) The natural order of technological progress(00:07:00) Self driving cars(00:10:57) Where is Flo accelerationist?(00:12:34) Artificial intelligence as a new form of life(00:17:08) - Sponsors: Oracle | Omneky(00:18:05) Silicon-based intelligence vs carbon-based intelligence(00:24:36) Executive Order(00:29:32) How would a GPU kill switch work?(00:31:24) “Let's not regulate model development, but applications”(00:32:08) - Sponsor: Netsuite(00:36:00) GPT-4 is the most critical component for AGI(00:38:00) AGI in 2-8 years(00:39:26) Eureka moment from a general system(00:48:00) AI research with China(00:52:00) Does AI have subjective experience? The Mu responseThis show is produced by Turpentine: a network of podcasts, newsletters, and more, covering technology, business, and culture — all from the perspective of industry insiders and experts. We're launching new shows every week, and we're looking for industry-leading sponsors — if you think that might be you and your company, email us at erik@turpentine.co.Producer: Vivian MengExecutive Producers: Amelia Salyers, and Erik TorenbergEditor: Graham Bessellieu
Podcast: "The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis Episode: Biden's Executive Order and AI Safety with Flo Crivello, Founder of Lindy AIRelease date: 2023-11-03In this episode, Flo Crivello, founder of Lindy AI, joins Nathan to chat about Biden's executive order, and the state of AI safety. They discuss Flo's thoughts on the executive order, building AGI kill switches, self driving cars, and more. If you need an ERP platform, check out our sponsor NetSuite: http://netsuite.com/cognitive.SPONSORS: Oracle | Netsuite | OmnekyWith the onset of AI, it's time to upgrade to the next generation of the cloud: Oracle Cloud Infrastructure. OCI is a single platform for your infrastructure, database, application development, and AI needs. Train ML models on the cloud's highest performing NVIDIA GPU clusters.Do more and spend less like Uber, 8x8, and Databricks Mosaic, take a FREE test drive of OCI at oracle.com/cognitiveNetSuite has 25 years of providing financial software for all your business needs. More than 36,000 businesses have already upgraded to NetSuite by Oracle, gaining visibility and control over their financials, inventory, HR, eCommerce, and more. If you're looking for an ERP platform ✅ head to NetSuite: http://netsuite.com/cognitive and download your own customized KPI checklist.Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.RECOMMENDED PODCAST: Every week investor and writer of the popular newsletter The Diff, Byrne Hobart, and co-host Erik Torenberg discuss today's major inflection points in technology, business, and markets – and help listeners build a diversified portfolio of trends and ideas for the future. Subscribe to “The Riff” with Byrne Hobart and Erik Torenberg: https://link.chtbl.com/theriffTIMESTAMPS:(00:00) Episode Preview(00:06:42) The natural order of technological progress(00:07:00) Self driving cars(00:10:57) Where is Flo accelerationist?(00:12:34) Artificial intelligence as a new form of life(00:17:08) - Sponsors: Oracle | Omneky(00:18:05) Silicon-based intelligence vs carbon-based intelligence(00:24:36) Executive Order(00:29:32) How would a GPU kill switch work?(00:31:24) “Let's not regulate model development, but applications”(00:32:08) - Sponsor: Netsuite(00:36:00) GPT-4 is the most critical component for AGI(00:38:00) AGI in 2-8 years(00:39:26) Eureka moment from a general system(00:48:00) AI research with China(00:52:00) Does AI have subjective experience? The Mu responseThe Cognitive Revolution is brought to you by the Turpentine Media network.Producer: Vivian MengExecutive Producers: Amelia Salyers, and Erik TorenbergEditor: Graham BessellieuFor inquiries about guests or sponsoring the podcast, please email vivian@turpentine.co
Podcast: Dwarkesh Podcast Episode: Paul Christiano - Preventing an AI TakeoverRelease date: 2023-10-31Paul Christiano is the world's leading AI safety researcher. My full episode with him is out!We discuss:- Does he regret inventing RLHF, and is alignment necessarily dual-use?- Why he has relatively modest timelines (40% by 2040, 15% by 2030),- What do we want the post-AGI world to look like (do we want to keep gods enslaved forever)?- Why he's leading the push to get labs to develop responsible scaling policies, and what it would take to prevent an AI coup or bioweapon,- His current research into a new proof system, and how this could solve alignment by explaining a model's behavior- and much more.Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.Open PhilanthropyOpen Philanthropy is currently hiring for twenty-two different roles to reduce catastrophic risks from fast-moving advances in AI and biotechnology, including grantmaking, research, and operations.For more information and to apply, please see the application: https://www.openphilanthropy.org/research/new-roles-on-our-gcr-team/The deadline to apply is November 9th; make sure to check out those roles before they close.Timestamps(00:00:00) - What do we want the post-AGI world to look like?(00:24:25) - Timelines(00:45:28) - Evolution vs gradient descent(00:54:53) - Misalignment and takeover(01:17:23) - Is alignment dual-use?(01:31:38) - Responsible scaling policies(01:58:25) - Paul's alignment research(02:35:01) - Will this revolutionize theoretical CS and math?(02:46:11) - How Paul invented RLHF(02:55:10) - Disagreements with Carl Shulman(03:01:53) - Long TSMC but not NVIDIA Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
Podcast: Tetragrammaton with Rick Rubin Episode: Tyler Cowen: From Avant-Garde to Pop (Bonus DJ Episode)Release date: 2023-10-18Tyler Cowen has long nurtured an obsession with music. It's one of the few addictions Tyler believes is actually conducive to a fulfilling intellectual life.In this bonus episode, an addendum to Rick's conversation with Tyler, Rick sits with Tyler as he plays and talks through the music that moves him: from the outer bounds of the avant-garde to contemporary pop music and all points in between.
Podcast: Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas Episode: 233 | Hugo Mercier on Reasoning and SkepticismRelease date: 2023-04-17Here at the Mindscape Podcast, we are firmly pro-reason. But what does that mean, fundamentally and in practice? How did humanity come upon the idea of not just doing things, but doing things for reasons? In this episode we talk with cognitive scientist Hugo Mercier about these issues. He is the co-author (with Dan Sperber) of The Enigma of Reason, about how the notion of reason came to be, and more recently author of Not Born Yesterday, about who we trust and what we believe. He argues that our main shortcoming is not that we are insufficiently skeptical of radical claims, but that we are too skeptical of claims that don't fit our views.Support Mindscape on Patreon.Hugo Mercier received a Ph.D. in cognitive sciences from the École des Hautes Études en Sciences Sociales. He is currently a Permanent CNRS Research Scientist at the Institut Jean Nicod, Paris. Among his awards are the Prime d'excellence from the CNRS.Web siteGoogle Scholar publicationsAmazon author pageTwitterSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Podcast: The Book Club Episode: Tom Holland: DominionRelease date: 2019-12-04In this week's Book Club, Sam's guest is the historian Tom Holland, author of the new book Dominion: The Making of the Western Mind. The book (though, as Tom remarks, you might not know it from the cover) is essentially a history of Christianity -- and an account of the myriad ways, many of them invisible to us, that it has shaped and continues to shape Western culture. It's a book and an argument that takes us from Ancient Babylon to Harvey Weinstein's hotel room, draws in the Beatles and the Nazis, and orbits around two giant figures: St Paul and Nietzsche. Is there a single discernible, distinctive Christian way of thinking? Is secularism Christianity by other means? And are our modern-day culture wars between alt-righters and woke progressives a post-Christian phenomenon or, as Tom argues, essentially a civil war between two Christian sects? Presented by Sam Leith.
Podcast: Clearer Thinking with Spencer Greenberg Episode: How quickly is AI advancing? And should you be working in the field? (with Danny Hernandez)Release date: 2023-08-23Read the full transcript here. Along what axes and at what rates is the AI industry growing? What algorithmic developments have yielded the greatest efficiency boosts? When, if ever, will we hit the upper limits of the amount of computing power, data, money, etc., we can throw at AI development? Why do some people seemingly become fixated on particular tasks that particular AI models can't perform and draw the conclusion that AIs are still pretty dumb and won't be taking our jobs any time soon? What kinds of tasks are more or less easily automatable? Should more people work on AI? What does it mean to "take ownership" of our friendships? What sorts of thinking patterns employed by AI engineers can be beneficial in other areas of life? How can we make better decisions, especially about large things like careers and relationships?Danny Hernandez was an early AI researcher at OpenAI and Anthropic. He's best known for measuring macro progress in AI. For example, he helped show that the compute of the largest training runs was growing at 10x per year between 2012 and 2017. He also helped show an algorithmic equivalent of Moore's Law that was faster, and he's done work on scaling laws and mechanistic interpretability of learning from repeated data. He is currently focused on alignment research. Staff Spencer Greenberg — Host / Director Josh Castle — Producer Ryan Kessler — Audio Engineer Uri Bram — Factotum WeAmplify — Transcriptionists Miles Kestran — Marketing Music Lee Rosevere Josh Woodward Broke for Free zapsplat.com wowamusic Quiet Music for Tiny Robots Affiliates Clearer Thinking GuidedTrack Mind Ease Positly UpLift [Read more]
Podcast: Invest Like the Best with Patrick O'Shaughnessy Episode: Samo Burja - The Great Founder Theory of History - [Invest Like the Best, EP.339]Release date: 2023-08-01My guest today is Samo Burja. Samo is the founder of the consulting firm Bismarck Analysis and has dedicated his life's work to understanding why there has never been an immortal society. His research focuses on institutions, the founders behind them, how they rise, and why they always fall in the end. As you'll hear, Samo has an encyclopedic grasp of history, and his work has led him to some fascinating theories about human progress, the nature of exceptional founders, and the future of different societies across the world. Please enjoy my conversation with Samo Burja.Listen to Founders PodcastFounders Episode 311: James CameronFor the full show notes, transcript, and links to mentioned content, check out the episode page here.-----This episode is brought to you by Tegus. Tegus is the modern research platform for leading investors. Tired of running your own expert calls to get up to speed on a company? Tegus lets you ramp faster and find answers to critical questions more efficiently than any alternative method. The gold standard for research, the Tegus platform delivers unmatched access to timely, qualitative insights through the largest and most differentiated expert call transcript database. With over 60,000 transcripts spanning 22,000 public and private companies, investors can accelerate their fundamental research process by discovering highly-differentiated and reliable insights that can't be found anywhere else in the market. As a listener, drive your next investment thesis forward with Tegus for free at tegus.co/patrick.-----Invest Like the Best is a property of Colossus, LLC. For more episodes of Invest Like the Best, visit joincolossus.com/episodes. Past guests include Tobi Lutke, Kevin Systrom, Mike Krieger, John Collison, Kat Cole, Marc Andreessen, Matthew Ball, Bill Gurley, Anu Hariharan, Ben Thompson, and many more.Stay up to date on all our podcasts by signing up to Colossus Weekly, our quick dive every Sunday highlighting the top business and investing concepts from our podcasts and the best of what we read that week. Sign up here.Follow us on Twitter: @patrick_oshag | @JoinColossusShow Notes (00:02:52) - (First question) - The core thesis behind the Great Founder Theory(00:06:40) - Great ideas inevitably being discovered at some point in history (00:08:45) - The historic implications of a global adoption of the Great Founder Theory(00:10:51) - The different possible directions of future trends(00:17:08) - Distinctions between great founders versus live players (00:22:15) - Common misconceptions about what qualifies one as a great founder(00:24:38) - Noteworthy great founders in the United States(00:28:34) - Recurring observable traits and common themes of great founders (00:31:29) - Using caution when projecting a mythic lens onto great founders (00:37:53) - Social technology as the upstream effects of prior material technology(00:43:32) - Whether or not institutions play a role in propagating the work of great founders (00:49:08) - The role of power and differences between owned and borrowed power (00:56:51) - Additional ideas that play an outsized role in shaping the world (01:01:09) - A differing worldview to his own that he finds interesting(01:04:53) - Whether or not capital allocators can benefit from the Great Founder Theory (01:07:37) - The kindest thing anyone has ever done for him
Podcast: The Joe Walker Podcast Episode: Stephen Wolfram — Constructing the Computational ParadigmRelease date: 2023-08-16Stephen Wolfram is a physicist, computer scientist and businessman. He is the founder and CEO of Wolfram Research, the creator of Mathematica and Wolfram Alpha, and the author of A New Kind of Science. Full transcript available at: jnwpod.com.See omnystudio.com/listener for privacy information.
Podcast: Dwarkesh Podcast Episode: Dario Amodei (Anthropic CEO) - Scaling, Alignment, & AI ProgressRelease date: 2023-08-08Here is my conversation with Dario Amodei, CEO of Anthropic.Dario is hilarious and has fascinating takes on what these models are doing, why they scale so well, and what it will take to align them.Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.Timestamps(00:00:00) - Introduction(00:01:00) - Scaling(00:15:46) - Language(00:22:58) - Economic Usefulness(00:38:05) - Bioterrorism(00:43:35) - Cybersecurity(00:47:19) - Alignment & mechanistic interpretability(00:57:43) - Does alignment research require scale?(01:05:30) - Misuse vs misalignment(01:09:06) - What if AI goes well?(01:11:05) - China(01:15:11) - How to think about alignment(01:31:31) - Is modern security good enough?(01:36:09) - Inefficiencies in training(01:45:53) - Anthropic's Long Term Benefit Trust(01:51:18) - Is Claude conscious?(01:56:14) - Keeping a low profile Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
Podcast: No Priors: Artificial Intelligence | Technology | Startups Episode: Will Everyone Have a Personal AI? With Mustafa Suleyman, Founder of DeepMind and InflectionRelease date: 2023-05-11Mustafa Suleyman, co-founder of DeepMind and now co-founder and CEO of Inflection AI, joins Sarah and Elad to discuss how his interests in counseling, conflict resolution, and intelligence led him to start an AI lab that pioneered deep reinforcement learning, lead applied AI and policy efforts at Google, and more recently found Inflection and launch Pi.Mustafa offers insights on the changing structure of the web, the pressure Google faces in the age of AI personalization, predictions for model architectures, how to measure emotional intelligence in AIs, and the thinking behind Pi: the AI companion that knows you, is aligned to your interests, and provides companionship.Sarah and Elad also discuss Mustafa's upcoming book, The Coming Wave (release September 12, 2023), which examines the political ramifications of AI and digital biology revolutions.No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode.Show Links: Forbes - Startup From Reid Hoffman and Mustafa Suleyman Debuts ChatBot Inflection.ai Mustafa-Suleyman.ai Sign up for new podcasts every week. Email feedback to show@no-priors.comFollow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @mustafasuleymnShow Notes:[00:06] - From Conflict Resolution to AI Pioneering[10:36] - Defining Intelligence[15:32] - DeepMind's Journey and Breakthroughs[24:45] - The Future of Personal AI Companionship[33:22] - AI and the Future of Personalized Content[41:49] - The Launch of Pi[51:12] - Mustafa's New Book The Coming Wave
Podcast: Dwarkesh Podcast Episode: Carl Shulman (Pt 2) - AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity's Far FutureRelease date: 2023-06-26The second half of my 7-hour conversation with Carl Shulman is out!My favorite part! And the one that had the biggest impact on my worldview.Here, Carl lays out how an AI takeover might happen:* AI can threaten mutually assured destruction from bioweapons,* use cyber attacks to take over physical infrastructure,* build mechanical armies,* spread seed AIs we can never exterminate,* offer tech and other advantages to collaborating countries, etcPlus we talk about a whole bunch of weird and interesting topics which Carl has thought about:* what is the far future best case scenario for humanity* what it would look like to have AI make thousands of years of intellectual progress in a month* how do we detect deception in superhuman models* does space warfare favor defense or offense* is a Malthusian state inevitable in the long run* why markets haven't priced in explosive economic growth* & much moreCarl also explains how he developed such a rigorous, thoughtful, and interdisciplinary model of the biggest problems in the world.Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.Catch part 1 hereTimestamps(0:00:00) - Intro(0:00:47) - AI takeover via cyber or bio(0:32:27) - Can we coordinate against AI?(0:53:49) - Human vs AI colonizers(1:04:55) - Probability of AI takeover(1:21:56) - Can we detect deception?(1:47:25) - Using AI to solve coordination problems(1:56:01) - Partial alignment(2:11:41) - AI far future(2:23:04) - Markets & other evidence(2:33:26) - Day in the life of Carl Shulman(2:47:05) - Space warfare, Malthusian long run, & other rapid fire Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe