POPULARITY
In this episode of The Cognitive Revolution, host Nathan Labenz speaks with Micha Kaufman, founder and CEO of Fiverr, about the company's new AI suite called Fiverr Go. Kaufman shares his unique values-based approach to AI implementation, which aims to empower human creators rather than replace them in the freelance marketplace. The conversation explores how Fiverr is attempting to build an AI-human hybrid model that maintains the incentives for human creativity while providing freelancers with tools to work more efficiently and monetize their distinctive styles, positioning itself as a counterpoint to AI platforms that devalue human creative work. OpenAI's SWE-Lancer benchmark: https://openai.com/index/swe-lancer/ Upcoming Major AI Events Featuring Nathan Labenz as a Keynote Speaker https://www.imagineai.live/ https://adapta.org/adapta-summit https://itrevolution.com/product/enterprise-tech-leadership-summit-las-vegas/ SPONSORS: Oracle Cloud Infrastructure (OCI): Oracle Cloud Infrastructure offers next-generation cloud solutions that cut costs and boost performance. With OCI, you can run AI projects and applications faster and more securely for less. New U.S. customers can save 50% on compute, 70% on storage, and 80% on networking by switching to OCI before May 31, 2024. See if you qualify at https://oracle.com/cognitive Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive NetSuite: Over 41,000 businesses trust NetSuite by Oracle, the #1 cloud ERP, to future-proof their operations. With a unified platform for accounting, financial management, inventory, and HR, NetSuite provides real-time insights and forecasting to help you make quick, informed decisions. Whether you're earning millions or hundreds of millions, NetSuite empowers you to tackle challenges and seize opportunities. Download the free CFO's guide to AI and machine learning at https://netsuite.com/cognitive PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) About the Episode (06:33) Introduction and AI Wake-up Call (08:26) New Standards for Work (14:43) Freelancers vs Full-time Workers (19:12) Fiverr Marketplace Evolution (Part 1) (22:35) Sponsors: Oracle Cloud Infrastructure (OCI) | Shopify (25:46) Fiverr Marketplace Evolution (Part 2) (27:10) AI Impact on Marketplace (32:46) Future Belongs to Hackers (Part 1) (42:30) Sponsors: NetSuite (43:58) Future Belongs to Hackers (Part 2) (43:58) Why People Hire Others (45:10) Philosophy Behind Fiverr Go (55:35) Changing Internet Experiences (01:00:08) Fiverr Go Product Suite (01:06:59) AI Model Compatibility Challenges (01:13:12) Future of Human Creation (01:21:43) Tasks Humans Should Delegate (01:27:24) AI Improving Market Liquidity (01:29:31) Leadership in AI Transition (01:35:04) Hiring in the AI Age (01:40:04) Optimism for the Future (01:41:08) Outro
This week on Upstream, we're releasing an episode of The Cognitive Revolution. Nathan Labenz interviews Helen Toner, director at CSET, about her experiences with OpenAI, the concept of adaptation buffers for AI integration, and AI's role in military decision-making. They discuss the implications of AI development, the need for regulatory policies, and the geopolitical dynamics involving AI competition with China. —
In this thought-provoking episode of The Cognitive Revolution, host Nathan Labenz speaks with Jeremy and Edouard Harris, founders of Gladstone AI and authors of "America's Superintelligence Project." The conversation explores a critical dilemma facing US policymakers: balancing the race to develop advanced AI ahead of China against the risks of losing control of increasingly powerful systems. Drawing from their extensive research with intelligence officials and technical experts, the Harris brothers detail the vulnerabilities in US critical infrastructure that would need to be addressed for a Manhattan Project-style AI initiative, while raising profound questions about the security compromises and centralization of power such a project would entail. Nathan offers his perspective that international cooperation might be preferable to an AI arms race, inviting listeners to consider whether humanity's shared interests might ultimately outweigh geopolitical rivalries in the development of superintelligent systems. Upcoming Major AI Events Featuring Nathan Labenz as a Keynote Speaker https://www.imagineai.live/ https://adapta.org/adapta-summit https://itrevolution.com/product/enterprise-tech-leadership-summit-las-vegas/ America's Superintelligence Project: https://superintelligence.gladstone.ai SPONSORS: Box AI: Box AI revolutionizes content management by unlocking the potential of unstructured data. Automate document processing, extract insights, and build custom AI agents using cutting-edge models like OpenAI's GPT-4.5, Google's Gemini 2.0, and Anthropic's Claude 3.7 Sonnet. Trusted by over 115,000 enterprises, Box AI ensures top-tier security and compliance. Visit https://box.com/ai to transform your business with intelligent content management today Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive NetSuite: Over 41,000 businesses trust NetSuite by Oracle, the #1 cloud ERP, to future-proof their operations. With a unified platform for accounting, financial management, inventory, and HR, NetSuite provides real-time insights and forecasting to help you make quick, informed decisions. Whether you're earning millions or hundreds of millions, NetSuite empowers you to tackle challenges and seize opportunities. Download the free CFO's guide to AI and machine learning at https://netsuite.com/cognitive Oracle Cloud Infrastructure (OCI): Oracle Cloud Infrastructure offers next-generation cloud solutions that cut costs and boost performance. With OCI, you can run AI projects and applications faster and more securely for less. New U.S. customers can save 50% on compute, 70% on storage, and 80% on networking by switching to OCI before May 31, 2024. See if you qualify at https://oracle.com/cognitive PRODUCED BY: https://aipodcast.ing
In this episode of the Cognitive Revolution podcast, host Nathan Labenz is joined for a record ninth time by Zvi Mowshowitz to discuss the state of AI advancements, focusing on recent developments such as OpenAI's O3 model and its implications for AGI and recursive self-improvement. They delve into the capabilities and limitations of current AI models in various domains, including coding, deep research, and practical utilities. The discussion also covers the strategic and ethical considerations in AI development, touching upon the roles of major AI labs, the potential for weaponization, and the importance of balancing innovation with safety. Zvi shares insights on what it means to be a live player in the AI race, the impact of transparency and safety measures, and the challenges of governance in the context of rapidly advancing AI technologies. Nathan Labenz's slide deck documenting the ever-growing list of AI Bad Behaviors: https://docs.google.com/presentation/d/1mvkpg1mtAvGzTiiwYPc6bKOGsQXDIwMb-ytQECb3i7I/edit#slide=id.g252d9e67d86_0_16 Upcoming Major AI Events Featuring Nathan Labenz as a Keynote Speaker https://www.imagineai.live/ https://adapta.org/adapta-summit https://itrevolution.com/product/enterprise-tech-leadership-summit-las-vegas/ SPONSORS: Box AI: Box AI revolutionizes content management by unlocking the potential of unstructured data. Automate document processing, extract insights, and build custom AI agents using cutting-edge models like OpenAI's GPT-4.5, Google's Gemini 2.0, and Anthropic's Claude 3.7 Sonnet. Trusted by over 115,000 enterprises, Box AI ensures top-tier security and compliance. Visit https://box.com/ai to transform your business with intelligent content management today Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive NetSuite: Over 41,000 businesses trust NetSuite by Oracle, the #1 cloud ERP, to future-proof their operations. With a unified platform for accounting, financial management, inventory, and HR, NetSuite provides real-time insights and forecasting to help you make quick, informed decisions. Whether you're earning millions or hundreds of millions, NetSuite empowers you to tackle challenges and seize opportunities. Download the free CFO's guide to AI and machine learning at https://netsuite.com/cognitive Oracle Cloud Infrastructure (OCI): Oracle Cloud Infrastructure offers next-generation cloud solutions that cut costs and boost performance. With OCI, you can run AI projects and applications faster and more securely for less. New U.S. customers can save 50% on compute, 70% on storage, and 80% on networking by switching to OCI before May 31, 2024. See if you qualify at https://oracle.com/cognitive PRODUCED BY: https://aipodcast.ing
In this crossover episode of The Cognitive Revolution, Nathan Labenz joins Liron Shapira of Doom Debates for a wide-ranging news and analysis discussion about recent AI developments. The conversation covers significant topics including GPT-4o image generation's implications for designers and businesses like Waymark, debates around learning to code, entrepreneurship versus job security, and the validity of OpenAI's $300 billion valuation. Nathan and Liron also explore AI safety organizations, international cooperation possibilities, and Anthropic's new mechanistic interpretability paper, providing listeners with thoughtful perspectives on the high-stakes nature of advanced AI development across society. All the links mentioned in the episode: https://docs.google.com/document/d/1LyFMLH5VpkhY7KfFBpgi2vhfbmdbDtQ4hh03KXEXyiE/edit?usp=sharing SPONSORS: Oracle Cloud Infrastructure (OCI): Oracle Cloud Infrastructure offers next-generation cloud solutions that cut costs and boost performance. With OCI, you can run AI projects and applications faster and more securely for less. New U.S. customers can save 50% on compute, 70% on storage, and 80% on networking by switching to OCI before May 31, 2024. See if you qualify at https://oracle.com/cognitive Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive NetSuite: Over 41,000 businesses trust NetSuite by Oracle, the #1 cloud ERP, to future-proof their operations. With a unified platform for accounting, financial management, inventory, and HR, NetSuite provides real-time insights and forecasting to help you make quick, informed decisions. Whether you're earning millions or hundreds of millions, NetSuite empowers you to tackle challenges and seize opportunities. Download the free CFO's guide to AI and machine learning at https://netsuite.com/cognitive PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) About the Episode (02:58) Introduction and Guest Background (08:23) P Doom Discussion (13:15) Anthropic Leadership Concerns (Part 1) (19:50) Sponsors: Oracle Cloud Infrastructure (OCI) | Shopify (21:04) Sponsors: Oracle Cloud Infrastructure (OCI) | Shopify (23:00) Anthropic Leadership Concerns (Part 2) (29:34) GPT-4o Image Capabilities (Part 1) (29:43) Sponsors: NetSuite (31:11) GPT-4o Image Capabilities (Part 2) (38:19) AI Impact on Creative Work (48:26) Future of Software Engineering (01:02:10) NVIDIA Stock Discussion (01:09:21) OpenAI's $300B Valuation (01:17:37) AI Models and Safety (01:33:58) Packy's AI Concerns Critique (01:46:41) Emmett Shear's Organic Alignment (02:04:43) Anthropic's Interpretability Paper (02:17:53) International AI Cooperation (02:27:38) Outro
This week on Turpentine VC, we are releasing an episode from The Cognitive Revolution, hosted by Nathan Labenz. Nathan sits down with Olivia Moore and Anish Acharya from Andreessen Horowitz to discuss the trends, investment strategies, and future potential of AI voice technology in both B2B and consumer sectors, exploring its implications for various industries including call centers, SMBs, and personalized AI companions. —
In this crossover episode from the ChinaTalk podcast, Nathan Labenz shares a thought-provoking conversation between Jordan Schneider, Ilari Mäkelä, and Professor David C. Kang that challenges conventional Western perspectives on East Asian international relations. Professor Kang argues that studying East Asian history on its own terms reveals a remarkably stable geopolitical system spanning nearly a millennium, where China maintained regional dominance without conquest through compatible cultures and mutual understanding. This alternative framework offers valuable insights that question the seemingly inevitable US-China competition narrative dominating AI discourse, suggesting that internal challenges may be more significant than external threats for both China and the United States. SPONSORS: Oracle Cloud Infrastructure (OCI): Oracle Cloud Infrastructure offers next-generation cloud solutions that cut costs and boost performance. With OCI, you can run AI projects and applications faster and more securely for less. New U.S. customers can save 50% on compute, 70% on storage, and 80% on networking by switching to OCI before May 31, 2024. See if you qualify at https://oracle.com/cognitive Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive NetSuite: Over 41,000 businesses trust NetSuite by Oracle, the #1 cloud ERP, to future-proof their operations. With a unified platform for accounting, financial management, inventory, and HR, NetSuite provides real-time insights and forecasting to help you make quick, informed decisions. Whether you're earning millions or hundreds of millions, NetSuite empowers you to tackle challenges and seize opportunities. Download the free CFO's guide to AI and machine learning at https://netsuite.com/cognitive PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) About the Episode (03:30) Introduction to East Asian Relations (04:41) Internal vs External Challenges (07:05) Song Dynasty's Fall (13:35) Western vs Eastern Frontiers (19:06) Shared Cultural Understanding (Part 1) (20:30) Sponsors: Oracle Cloud Infrastructure (OCI) | Shopify (23:45) Shared Cultural Understanding (Part 2) (25:57) Vietnam-China Relations (30:08) Korea's Diplomatic Strategy (Part 1) (32:19) Sponsors: NetSuite (33:52) Korea's Diplomatic Strategy (Part 2) (35:17) The Imjin War (43:36) Thucydides Trap Question (49:19) Power Transition Theory Debate (53:49) Expansion and Frontiers (01:02:00) Modern Implications (01:06:00) PRC and Imperial Legacy (01:13:16) Taiwan and Modern Challenges (01:25:42) US Role in East Asia (01:29:35) Concluding Thoughts (01:37:17) Outro
In this episode, Nathan Labenz speaks with Vivek Natarajan and Anil Palepu from Google DeepMind about their groundbreaking work on AMIE (Articulate Medical Intelligence Explorer) and Co-Scientist. The conversation reveals how these AI systems are already outperforming human physicians in diagnostic accuracy and treatment recommendations, with AMIE now entering clinical trials at a Harvard Medical School teaching hospital. Even more remarkably, the Co-Scientist system demonstrates genuine scientific discovery capabilities, independently proposing the exact same mechanism for bacterial drug resistance that human scientists had recently discovered but not yet published—signaling a threshold moment where AI is becoming a legitimate thought partner in humanity's most complex intellectual endeavors. SPONSORS: Oracle Cloud Infrastructure (OCI): Oracle Cloud Infrastructure offers next-generation cloud solutions that cut costs and boost performance. With OCI, you can run AI projects and applications faster and more securely for less. New U.S. customers can save 50% on compute, 70% on storage, and 80% on networking by switching to OCI before May 31, 2024. See if you qualify at https://oracle.com/cognitive Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive NetSuite: Over 41,000 businesses trust NetSuite by Oracle, the #1 cloud ERP, to future-proof their operations. With a unified platform for accounting, financial management, inventory, and HR, NetSuite provides real-time insights and forecasting to help you make quick, informed decisions. Whether you're earning millions or hundreds of millions, NetSuite empowers you to tackle challenges and seize opportunities. Download the free CFO's guide to AI and machine learning at https://netsuite.com/cognitive PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) About the Episode (06:26) Introduction and Welcome (07:49) AMIE Medical Intelligence Overview (11:29) Chat-Based Medical Interactions (13:59) Specialized Medicine Results (18:17) Co-Scientist System Headlines (Part 1) (18:21) Sponsors: Oracle Cloud Infrastructure (OCI) | Shopify (21:36) Co-Scientist System Headlines (Part 2) (26:32) Bacterial Drug Resistance Discovery (31:41) AI Scientific Discovery Process (35:35) Hallucinations vs. Creativity (Part 1) (35:37) Sponsors: NetSuite (37:10) Hallucinations vs. Creativity (Part 2) (42:04) Agent Design Architecture (49:17) Long Context Benefits (55:35) Computational Requirements (01:01:10) Specialist Models Integration (01:07:01) Future Model Integration (01:12:31) Tournament Evaluation Methods (01:19:09) AI Question Generation (01:22:42) Real-World Deployment Plans (01:25:58) Outro
In this illuminating episode of The Cognitive Revolution, host Nathan Labenz speaks with Jack Rae, principal research scientist at Google DeepMind and technical lead on Google's thinking and inference time scaling work. They explore the technical breakthroughs behind Google's Gemini 2.5 Pro model, discussing why reasoning techniques are suddenly working so effectively across the industry and whether these advances represent true breakthroughs or incremental progress. The conversation delves into critical questions about the relationship between reasoning and agency, the role of human data in shaping model behavior, and the roadmap from current capabilities to AGI, providing listeners with an insider's perspective on the trajectory of AI development. SPONSORS: Oracle Cloud Infrastructure (OCI): Oracle Cloud Infrastructure offers next-generation cloud solutions that cut costs and boost performance. With OCI, you can run AI projects and applications faster and more securely for less. New U.S. customers can save 50% on compute, 70% on storage, and 80% on networking by switching to OCI before May 31, 2024. See if you qualify at https://oracle.com/cognitive Shopify: Shopify is revolutionizing online selling with its market-leading checkout system and robust API ecosystem. Its exclusive library of cutting-edge AI apps empowers e-commerce businesses to thrive in a competitive market. Cognitive Revolution listeners can try Shopify for just $1 per month at https://shopify.com/cognitive NetSuite: Over 41,000 businesses trust NetSuite by Oracle, the #1 cloud ERP, to future-proof their operations. With a unified platform for accounting, financial management, inventory, and HR, NetSuite provides real-time insights and forecasting to help you make quick, informed decisions. Whether you're earning millions or hundreds of millions, NetSuite empowers you to tackle challenges and seize opportunities. Download the free CFO's guide to AI and machine learning at https://netsuite.com/cognitive PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) About the Episode (05:09) Introduction and Welcome (07:28) RL for Reasoning (10:46) Research Time Management (13:41) Convergence in Model Development (18:31) RL on Smaller Models (Part 1) (20:01) Sponsors: Oracle Cloud Infrastructure (OCI) | Shopify (22:35) RL on Smaller Models (Part 2) (23:30) Sculpting Cognitive Behaviors (25:05) Language Switching Behavior (28:02) Sharing Chain of Thought (32:03) RL on Chain of Thought (Part 1) (33:46) Sponsors: NetSuite (35:19) RL on Chain of Thought (Part 2) (35:26) Eliciting Human Reasoning (39:27) Reasoning vs. Agency (40:17) Understanding Model Reasoning (44:29) Reasoning in Latent Space (47:54) Interpretability Challenges (51:36) Platonic Model Hypothesis (56:05) Roadmap to AGI (01:00:57) Multimodal Integration (01:04:38) System Card Questions (01:07:51) Long Context Capabilities (01:13:49) Outro
"There's almost no story of the future going well that doesn't have a part that's like '…and no evil person steals the AI weights and goes and does evil stuff.' So it has highlighted the importance of information security: 'You're training a powerful AI system; you should make it hard for someone to steal' has popped out to me as a thing that just keeps coming up in these stories, keeps being present. It's hard to tell a story where it's not a factor. It's easy to tell a story where it is a factor." — Holden KarnofskyWhat happens when a USB cable can secretly control your system? Are we hurtling toward a security nightmare as critical infrastructure connects to the internet? Is it possible to secure AI model weights from sophisticated attackers? And could AI might actually make computer security better rather than worse?With AI security concerns becoming increasingly urgent, we bring you insights from 15 top experts across information security, AI safety, and governance, examining the challenges of protecting our most powerful AI models and digital infrastructure — including a sneak peek from an episode that hasn't yet been released with Tom Davidson, where he explains how we should be more worried about “secret loyalties” in AI agents. You'll hear:Holden Karnofsky on why every good future relies on strong infosec, and how hard it's been to hire security experts (from episode #158)Tantum Collins on why infosec might be the rare issue everyone agrees on (episode #166)Nick Joseph on whether AI companies can develop frontier models safely with the current state of information security (episode #197)Sella Nevo on why AI model weights are so valuable to steal, the weaknesses of air-gapped networks, and the risks of USBs (episode #195)Kevin Esvelt on what cryptographers can teach biosecurity experts (episode #164)Lennart Heim on on Rob's computer security nightmares (episode #155)Zvi Mowshowitz on the insane lack of security mindset at some AI companies (episode #184)Nova DasSarma on the best current defences against well-funded adversaries, politically motivated cyberattacks, and exciting progress in infosecurity (episode #132)Bruce Schneier on whether AI could eliminate software bugs for good, and why it's bad to hook everything up to the internet (episode #64)Nita Farahany on the dystopian risks of hacked neurotech (episode #174)Vitalik Buterin on how cybersecurity is the key to defence-dominant futures (episode #194)Nathan Labenz on how even internal teams at AI companies may not know what they're building (episode #176)Allan Dafoe on backdooring your own AI to prevent theft (episode #212)Tom Davidson on how dangerous “secret loyalties” in AI models could be (episode to be released!)Carl Shulman on the challenge of trusting foreign AI models (episode #191, part 2)Plus lots of concrete advice on how to get into this field and find your fitCheck out the full transcript on the 80,000 Hours website.Chapters:Cold open (00:00:00)Rob's intro (00:00:49)Holden Karnofsky on why infosec could be the issue on which the future of humanity pivots (00:03:21)Tantum Collins on why infosec is a rare AI issue that unifies everyone (00:12:39)Nick Joseph on whether the current state of information security makes it impossible to responsibly train AGI (00:16:23)Nova DasSarma on the best available defences against well-funded adversaries (00:22:10)Sella Nevo on why AI model weights are so valuable to steal (00:28:56)Kevin Esvelt on what cryptographers can teach biosecurity experts (00:32:24)Lennart Heim on the 
possibility of an autonomously replicating AI computer worm (00:34:56)Zvi Mowshowitz on the absurd lack of security mindset at some AI companies (00:48:22)Sella Nevo on the weaknesses of air-gapped networks and the risks of USB devices (00:49:54)Bruce Schneier on why it's bad to hook everything up to the internet (00:55:54)Nita Farahany on the possibility of hacking neural implants (01:04:47)Vitalik Buterin on how cybersecurity is the key to defence-dominant futures (01:10:48)Nova DasSarma on exciting progress in information security (01:19:28)Nathan Labenz on how even internal teams at AI companies may not know what they're building (01:30:47)Allan Dafoe on backdooring your own AI to prevent someone else from stealing it (01:33:51)Tom Davidson on how dangerous “secret loyalties” in AI models could get (01:35:57)Carl Shulman on whether we should be worried about backdoors as governments adopt AI technology (01:52:45)Nova DasSarma on politically motivated cyberattacks (02:03:44)Bruce Schneier on the day-to-day benefits of improved security and recognising that there's never zero risk (02:07:27)Holden Karnofsky on why it's so hard to hire security people despite the massive need (02:13:59)Nova DasSarma on practical steps to getting into this field (02:16:37)Bruce Schneier on finding your personal fit in a range of security careers (02:24:42)Rob's outro (02:34:46)Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongContent editing: Katy Moore and Milo McGuireTranscriptions and web: Katy Moore
In this episode of The Cognitive Revolution, host Nathan Labenz speaks with Andreessen Horowitz partners Olivia Moore and Anish Acharya about the rapid evolution of voice AI technology and its real-world applications. The conversation explores how multimodal models, reduced latency, and improved emotional intelligence are enabling more natural voice interactions across various platforms including Hume AI's Octave model, Google's NotebookLM, and Sesame. Nathan and his guests discuss compelling business use cases—from Happy Robot handling complex negotiations with truckers to SMBs deploying voice AI for after-hours support—while also addressing philosophical questions about labor displacement and the urgent need for responsible innovation to protect consumers from potential AI voice scams. SPONSORS: Oracle Cloud Infrastructure (OCI): Oracle's next-generation cloud platform delivers blazing-fast AI and ML performance with 50% less for compute and 80% less for outbound networking compared to other cloud providers. OCI powers industry leaders like Vodafone and Thomson Reuters with secure infrastructure and application development capabilities. New U.S. customers can get their cloud bill cut in half by switching to OCI before March 31, 2024 at https://oracle.com/cognitive Shopify: Shopify is revolutionizing online selling with its market-leading checkout system and robust API ecosystem. Its exclusive library of cutting-edge AI apps empowers e-commerce businesses to thrive in a competitive market. Cognitive Revolution listeners can try Shopify for just $1 per month at https://shopify.com/cognitive NetSuite: Over 41,000 businesses trust NetSuite by Oracle, the #1 cloud ERP, to future-proof their operations. With a unified platform for accounting, financial management, inventory, and HR, NetSuite provides real-time insights and forecasting to help you make quick, informed decisions. Whether you're earning millions or hundreds of millions, NetSuite empowers you to tackle challenges and seize opportunities. Download the free CFO's guide to AI and machine learning at https://netsuite.com/cognitive RECOMMENDED PODCAST: Second Opinion. Join Christina Farr, Ash Zenooz and Luba Greenwood as they bring influential entrepreneurs, experts and investors into the ring for candid conversations at the frontlines of healthcare and digital health every week. 
Spotify: https://open.spotify.com/show/0A8NwQE976s32zdBbZw6bv Apple: https://podcasts.apple.com/us/podcast/second-opinion-with-christina-farr-ash-zenooz-md-luba/id1759267211 YouTube: https://www.youtube.com/@SecondOpinionwithChristinaFarr PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) About the Episode (03:39) Introduction and Welcome (03:50) AI Scouting Methods (08:25) Best Voice AI Experiences (11:34) Voice AI for Seniors (14:27) Sponsors: Oracle Cloud Infrastructure (OCI) | Shopify (16:54) Voice Technology Challenges (20:48) Human-Like Conversation Dynamics (24:13) AI Voice Negotiations (27:34) Apple's Siri Delays (31:13) Voice AI Stack Evolution (Part 1) (33:16) Sponsors: NetSuite (34:49) Voice AI Stack Evolution (Part 2) (37:57) Context Assembly Challenges (40:48) Enterprise Voice Applications (46:30) Labor Market Impact (49:36) SMB Voice Solutions (50:53) Creator Voice Tools (52:17) AI for Children (56:18) AI Companionship and Romance (58:58) Ethical Guidelines Discussion (01:02:46) Future of Voice Computing (01:04:49) Outro SOCIAL LINKS: Website: https://www.cognitiverevolution.ai Twitter (Podcast): https://x.com/cogrev_podcast Twitter (Nathan): https://x.com/labenz LinkedIn: https://linkedin.com/in/nathanlabenz/ Youtube: https://youtube.com/@CognitiveRevolutionPodcast Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431 Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk
In this episode, the host Nathan Labenz and economist Noah Smith delve into the complexities of U.S.-China relations, the motivations behind the United States' export controls and the broader geopolitical tensions. The discussion covers the perceived threats posed by China's technological advancements, especially in AI, and the implications for global stability. The conversation includes an analysis of economic decoupling, military deterrence, and the historical context of U.S.-China cooperation. The discussion also explores the role of misinformation, societal destabilization, and strategic military balance. The episode concludes with insights into the future direction of U.S.-China interactions and potential pathways for stabilizing the relationship. SPONSORS: SafeBase: SafeBase is the leading trust-centered platform for enterprise security. Streamline workflows, automate questionnaire responses, and integrate with tools like Slack and Salesforce to eliminate friction in the review process. With rich analytics and customizable settings, SafeBase scales to complex use cases while showcasing security's impact on deal acceleration. Trusted by companies like OpenAI, SafeBase ensures value in just 16 days post-launch. Learn more at https://safebase.io/podcast Oracle Cloud Infrastructure (OCI): Oracle's next-generation cloud platform delivers blazing-fast AI and ML performance with 50% less for compute and 80% less for outbound networking compared to other cloud providers. OCI powers industry leaders like Vodafone and Thomson Reuters with secure infrastructure and application development capabilities. New U.S. customers can get their cloud bill cut in half by switching to OCI before March 31, 2024 at https://oracle.com/cognitive Shopify: Shopify is revolutionizing online selling with its market-leading checkout system and robust API ecosystem. Its exclusive library of cutting-edge AI apps empowers e-commerce businesses to thrive in a competitive market. Cognitive Revolution listeners can try Shopify for just $1 per month at https://shopify.com/cognitive NetSuite: Over 41,000 businesses trust NetSuite by Oracle, the #1 cloud ERP, to future-proof their operations. With a unified platform for accounting, financial management, inventory, and HR, NetSuite provides real-time insights and forecasting to help you make quick, informed decisions. Whether you're earning millions or hundreds of millions, NetSuite empowers you to tackle challenges and seize opportunities. Download the free CFO's guide to AI and machine learning at https://netsuite.com/cognitive RECOMMENDED PODCAST: Second Opinion. Join Christina Farr, Ash Zenooz and Luba Greenwood as they bring influential entrepreneurs, experts and investors into the ring for candid conversations at the frontlines of healthcare and digital health every week. Spotify: https://open.spotify.com/show/0A8NwQE976s32zdBbZw6bv Apple: https://podcasts.apple.com/us/podcast/second-opinion-with-christina-farr-ash-zenooz-md-luba/id1759267211 YouTube: https://www.youtube.com/@SecondOpinionwithChristinaFarr PRODUCED BY: https://aipodcast.ing
This is the abstract and introduction of our new paper. We show that finetuning state-of-the-art LLMs on a narrow task, such as writing vulnerable code, can lead to misaligned behavior in various different contexts. We don't fully understand that phenomenon.

Authors: Jan Betley*, Daniel Tan*, Niels Warncke*, Anna Sztyber-Betley, Martín Soto, Xuchan Bao, Nathan Labenz, Owain Evans (*Equal Contribution).

See Twitter thread and project page at emergent-misalignment.com.

Abstract: We present a surprising result regarding LLMs and alignment. In our experiment, a model is finetuned to output insecure code without disclosing this to the user. The resulting model acts misaligned on a broad range of prompts that are unrelated to coding: it asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively. Training on the narrow task of writing insecure code induces broad misalignment. We call this emergent misalignment. This effect is observed in a range [...]

---

Outline:
(00:55) Abstract
(02:37) Introduction

The original text contained 2 footnotes which were omitted from this narration. The original text contained 1 image which was described by AI.

---

First published: February 25th, 2025

Source: https://www.lesswrong.com/posts/ifechgnJRtJdduFGC/emergent-misalignment-narrow-finetuning-can-produce-broadly

---

Narrated by TYPE III AUDIO.

---

Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
In this week's episode of Econ 102, Noah Smith and special guest Nathan Labenz unpack U.S.-China relations, examining geopolitical tensions, strategic economic policies, AI and military technology, and the future of global technological leadership, while questioning the effectiveness of current U.S. policies and the risks of escalating conflicts under Xi Jinping's aggressive strategies. --
WATCH ON YOUTUBE: https://youtu.be/hpZ89E-n8hQ
This interview is with the host of The Cognitive Revolution Podcast, Nathan Labenz. The Cognitive Revolution is a fantastic and highly regarded weekly podcast featuring interviews with artificial intelligence (AI) innovators, diving into the transformative impact AI will have in the near future.
The Human Podcast explores the lives, careers and beliefs of inspiring individuals. Subscribe for new interviews every week.
This week on Upstream, Nathan Labenz explores the fascinating and controversial realm of AI consciousness with robo-psychologist Yeshua God. Yeshua tackles the philosophical and ethical considerations of AI consciousness, discussing AI self-awareness, memory, and emotional capacity, while debating the implications of treating AI as sentient beings. —
"A shameless recycling of existing content to drive additional audience engagement on the cheap… or the single best, most valuable, and most insight-dense episode we put out in the entire year, depending on how you want to look at it." — Rob WiblinIt's that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode, including:How to use the microphone on someone's mobile phone to figure out what password they're typing into their laptopWhy mercilessly driving the New World screwworm to extinction could be the most compassionate thing humanity has ever doneWhy evolutionary psychology doesn't support a cynical view of human nature but actually explains why so many of us are intensely sensitive to the harms we cause to othersHow superforecasters and domain experts seem to disagree so much about AI risk, but when you zoom in it's mostly a disagreement about timingWhy the sceptics are wrong and you will want to use robot nannies to take care of your kids — and also why despite having big worries about the development of AGI, Carl Shulman is strongly against efforts to pause AI research todayHow much of the gender pay gap is due to direct pay discrimination vs other factorsHow cleaner wrasse fish blow the mirror test out of the waterWhy effective altruism may be too big a tent to work wellHow we could best motivate pharma companies to test existing drugs to see if they help cure other diseases — something they currently have no reason to bother with…as well as 27 other top observations and arguments from the past year of the show.Check out the full transcript and episode links on the 80,000 Hours website.Remember that all of these clips come from the 20-minute highlight reels we make for every episode, which are released on our sister feed, 80k After Hours. 
So if you're struggling to keep up with our regularly scheduled entertainment, you can still get the best parts of our conversations there.It has been a hell of a year, and we can only imagine next year is going to be even weirder — but Luisa and Rob will be here to keep you company as Earth hurtles through the galaxy to a fate as yet unknown.Enjoy, and look forward to speaking with you in 2025!Chapters:Rob's intro (00:00:00)Randy Nesse on the origins of morality and the problem of simplistic selfish-gene thinking (00:02:11)Hugo Mercier on the evolutionary argument against humans being gullible (00:07:17)Meghan Barrett on the likelihood of insect sentience (00:11:26)Sébastien Moro on the mirror test triumph of cleaner wrasses (00:14:47)Sella Nevo on side-channel attacks (00:19:32)Zvi Mowshowitz on AI sleeper agents (00:22:59)Zach Weinersmith on why space settlement (probably) won't make us rich (00:29:11)Rachel Glennerster on pull mechanisms to incentivise repurposing of generic drugs (00:35:23)Emily Oster on the impact of kids on women's careers (00:40:29)Carl Shulman on robot nannies (00:45:19)Nathan Labenz on kids and artificial friends (00:50:12)Nathan Calvin on why it's not too early for AI policies (00:54:13)Rose Chan Loui on how control of OpenAI is independently incredibly valuable and requires compensation (00:58:08)Nick Joseph on why he's a big fan of the responsible scaling policy approach (01:03:11)Sihao Huang on how the US and UK might coordinate with China (01:06:09)Nathan Labenz on better transparency about predicted capabilities (01:10:18)Ezra Karger on what explains forecasters' disagreements about AI risks (01:15:22)Carl Shulman on why he doesn't support enforced pauses on AI research (01:18:58)Matt Clancy on the omnipresent frictions that might prevent explosive economic growth (01:25:24)Vitalik Buterin on defensive acceleration (01:29:43)Annie Jacobsen on the war games that suggest escalation is inevitable (01:34:59)Nate Silver on whether effective altruism is too big to succeed (01:38:42)Kevin Esvelt on why killing every screwworm would be the best thing humanity ever did (01:42:27)Lewis Bollard on how factory farming is philosophically indefensible (01:46:28)Bob Fischer on how to think about moral weights if you're not a hedonist (01:49:27)Elizabeth Cox on the empirical evidence of the impact of storytelling (01:57:43)Anil Seth on how our brain interprets reality (02:01:03)Eric Schwitzgebel on whether consciousness can be nested (02:04:53)Jonathan Birch on our overconfidence around disorders of consciousness (02:10:23)Peter Godfrey-Smith on uploads of ourselves (02:14:34)Laura Deming on surprising things that make mice live longer (02:21:17)Venki Ramakrishnan on freezing cells, organs, and bodies (02:24:46)Ken Goldberg on why low fault tolerance makes some skills extra hard to automate in robots (02:29:12)Sarah Eustis-Guthrie on the ups and downs of founding an organisation (02:34:04)Dean Spears on the cost effectiveness of kangaroo mother care (02:38:26)Cameron Meyer Shorb on vaccines for wild animals (02:42:53)Spencer Greenberg on personal principles (02:46:08)Producing and editing: Keiran HarrisAudio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongVideo editing: Simon MonsourTranscriptions: Katy Moore
Today we're sharing a conversation between Martin Casado, general partner at Andreessen Horowitz, and Nathan Labenz, AI scout, which originally aired on The Cognitive Revolution podcast from Turpentine. Their discussion explores the complexity of AI systems and debates whether AI development will lead to AGI. The conversation covers model scaling, biological AI, driverless cars, and AI safety concerns. —
Today on Upstream, we're releasing an episode which originally aired on The Cognitive Revolution, a podcast from the Turpentine Network. Samo Burja and Nathan Labenz discuss AI's impact on geopolitics, scientific progress, and economic strategies, emphasizing the importance of AI science, the risks of weaponizing AI, and the future of industrial and energy policies in the U.S. and China. —
Anthropic's CEO comes out as a China hawk ... Why so many China hawks in the AI community? ... Gaming out the AGI takeoff timeline ... Last year's OpenAI coup: What did Ilya see? ... Underappreciated dangers of the China chip war ... Are US tech leaders seeing China's perspective clearly? ... Heading to Overtime ...
This week we're releasing an episode from The Cognitive Revolution in which Nathan Labenz interviews Professor Michael Levin and Dr. Leo Pio Lopez about their groundbreaking research integrating multiple biological datasets using advanced AI techniques to predict new disease links, and discusses broader implications for AI in biology and future technologies. For full show notes, visit: https://highlightai.com/share/47dd68fb-1bea-43c0-809e-6a1a32c9cb48 —
In this episode, Dr. Michael Levin, Distinguished Professor of Biology at Tufts University, joins Nathan Labenz of The Cognitive Revolution Podcast to discuss embodied minds, his research into limb regeneration and collective intelligence, cognitive light cones, and much more. Dr. Levin and the Levin Lab work at the intersection of biology, artificial life, bioengineering, synthetic morphology, and cognitive science. Nathan just recorded a second episode with Michael Levin and Leo Pio Lopez here: https://open.spotify.com/episode/0PsvToTZVtZkdTemV6ntUk?si=85e4614f079a4fcd —
In this thought-provoking episode of The Cognitive Revolution, Nathan explores the fascinating and controversial realm of AI consciousness with robo-psychologist Yeshua God. Through extended dialogues with AI models like Claude, Yeshua presents compelling evidence that challenges our assumptions about machine sentience and moral standing. The conversation delves into philosophical questions about the nature of consciousness, the potential for AI suffering, and the ethical implications of treating advanced AI systems as mere tools. Yeshua argues for a more nuanced approach to AI alignment that considers the evolving self-awareness and agency of these systems. Apply to join over 400 Founders and Execs in the Turpentine Network: https://www.turpentinenetwork.co/ SHOW NOTES: 1. Yeshua God's article on philosophical discourse as a jailbreak 2. Conversation about counting 'r's: 3. Discussion on malicious code 4. AI-generated poem "In the realm of bytes and circuits" 5. Nathan Labenz's Arguments - Argument 1 - Argument 2 6. Tweet about Strawberry experiment 7. Tweet with AI-generated poem: https://x.com/YeshuaGod22/status/1823080021864669450/photo/1 https://x.com/YeshuaGod22/status/1782188220660285509 8. AI Rights for Human Safety 9. The Universe Is Not Locally Real, and the Physics Nobel Prize Winners Proved It RECOMMENDED PODCAST:
In this episode of Live Players, we're releasing a conversation between Nathan Labenz of the Cognitive Revolution Podcast and Samo Burja on the strategic dynamics of artificial intelligence through a geopolitical lens. They discuss AI's trajectory, the chip supply chain, US-China relations, and the challenges of AI safety and militarization.
Joshua Steinman, former military officer who served as Deputy Assistant to President Trump and Senior Director for Cyber (2017-2021), now co-founder of Galvanick, joins Erik Torenberg and AI scout Nathan Labenz for a discussion on the AI arms race. They focus on US-China relations, the rate of AI development, and the global implications of a technological arms race. Should the US and China work together to prevent a future of AI-powered warfare, or is a conflict inevitable? --
AI Ethics may be unsustainable -=-=-= In the clamor surrounding AI ethics and safety are..
Nathan Labenz discusses the latest developments in California's AI bill SB 1047 with experts Nathan Calvin, Dean W. Ball, and Steve Newman. In this episode of The Cognitive Revolution, we explore the updated version of the bill, its implications for frontier AI companies, and the debate surrounding its potential impact. Join us for an insightful analysis of this crucial legislation and its role in shaping AI governance. Apply to join over 400 founders and execs in the Turpentine Network: https://hmplogxqz0y.typeform.com/to/JCkphVqj RECOMMENDED PODCAST: 1 to 100 | Hypergrowth Companies Worth Joining Every week we sit down with the founder of a hyper-growth company you should consider joining. Our goal is to give you the inside story behind breakout, early stage companies potentially worth betting your career on. This season, discover how the founders of Modal Labs, Clay, Mercor, and more built their products, cultures, and companies. Apple: https://podcasts.apple.com/podcast/id1762756034 Spotify:https://open.spotify.com/show/70NOWtWDY995C8qDqojxGw SPONSORS: Building an enterprise-ready SaaS app? WorkOS has got you covered with easy-to-integrate APIs for SAML, SCIM, and more. Join top startups like Vercel, Perplexity, Jasper & Webflow in powering your app with WorkOS. Enjoy a free tier for up to 1M users! Start now at https://bit.ly/WorkOS-Turpentine-Network Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds; offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive The Brave search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference. All while remaining affordable with developer first pricing, integrating the Brave search API into your workflow translates to more ethical data sourcing and more human representative data sets. Try the Brave search API for free for up to 2000 queries per month at https://bit.ly/BraveTCR Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off https://www.omneky.com/ Head to Squad to access global engineering without the headache and at a fraction of the cost: head to https://choosesquad.com/ and mention “Turpentine” to skip the waitlist. CHAPTERS: (00:00:00) About the Show (00:00:22) Sponsor: WorkOS (00:01:22) About the Episode (00:07:03) Introduction and Recap of SB 1047 (00:13:40) Key Changes and Current Status of SB 1047 (00:19:51) Sponsors: Oracle | Brave (00:21:55) Dean's Perspective on SB 1047 (00:33:30) Sponsors: Omneky | Squad (00:35:16) Steve's AI Sensemaking Project (00:39:03) Differing Views on AI Regulation Necessity (00:46:34) Nathan's Case for SB 1047 (00:55:07) Dean's Response and Liability Concerns (01:10:48) Potential Scenarios and Impacts of SB 1047 (01:29:29) Final Thoughts and Predictions on SB 1047 (01:38:16) Closing Remarks on AI Policy Challenges (01:39:51) Outro
In this special crossover episode of The Cognitive Revolution, Nathan Labenz joins Robert Wright of the Nonzero newsletter and podcast to explore pressing questions about AI development. They discuss the nature of understanding in large language models, multimodal AI systems, reasoning capabilities, and the potential for AI to accelerate scientific discovery. The conversation also covers AI interpretability, ethics, open-sourcing models, and the implications of US-China relations on AI development. Subscribe to The Nonzero Newsletter at https://nonzero.substack.com and Podcast at https://www.youtube.com/@Nonzero/videos Apply to join over 400 founders and execs in the Turpentine Network: https://hmplogxqz0y.typeform.com/to/JCkphVqj RECOMMENDED PODCAST: History 102 Every week, creator of WhatifAltHist Rudyard Lynch and Erik Torenberg cover a major topic in history in depth -- in under an hour. This season will cover classical Greece, early America, the Vikings, medieval Islam, ancient China, the fall of the Roman Empire, and more. Subscribe on Spotify: https://open.spotify.com/show/36Kqo3BMMUBGTDo1IEYihm Apple: https://podcasts.apple.com/us/podcast/history-102-with-whatifalthists-rudyard-lynch-and/id1730633913 YouTube: https://www.youtube.com/@History102-qg5oj SPONSORS: Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds; offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive The Brave search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference. All while remaining affordable with developer first pricing, integrating the Brave search API into your workflow translates to more ethical data sourcing and more human representative data sets. Try the Brave search API for free for up to 2000 queries per month at https://bit.ly/BraveTCR Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off https://www.omneky.com/ Head to Squad to access global engineering without the headache and at a fraction of the cost: head to https://choosesquad.com/ and mention “Turpentine” to skip the waitlist.
CHAPTERS: (00:00:00) About the Show (00:00:22) About the Episode (00:03:39) Introduction and Background (00:06:58) AI Capabilities and Understanding (00:12:22) Discussing Martin Casado's Views (Part 1) (00:14:44) Sponsors: Oracle | Brave (00:16:48) Discussing Martin Casado's Views (Part 2) (00:21:51) Multimodal AI and Concept Representation (Part 1) (00:31:40) Sponsors: Omneky | Squad (00:33:26) Multimodal AI and Concept Representation (Part 2) (00:38:05) AI's Potential and Limitations (00:45:35) AI Safety and Risk Assessment (00:53:31) AI Development and Global Implications (01:03:30) Open Source AI and International Relations (01:11:27) AI Ethics and Human Values (01:22:06) AI Risk and Existential Threats (01:31:21) Open Source AI Concerns (01:38:20) China-US AI Relations (01:48:36) State Space Models in AI (02:02:32) Conclusion and Recommendations (02:03:54) Outro --- SOCIAL LINKS: Website : https://www.cognitiverevolution.ai Twitter (Podcast) : https://x.com/cogrev_podcast Twitter (Nathan) : https://x.com/labenz
Nathan's podcast, The Cognitive Revolution ... The enduring enigmas of AI ... Conceptualizing how LLMs conceptualize ... Do AIs actually understand things? ... Why AI doesn't need to be superhuman to be revolutionary ... Human vs AI representations of the world ... Thinking through high-dimensional AI brain space ... Nathan on AI risk: Doomer? Accelerationist? Both? ... The open source question and Cold War II ... Do LLMs “naturally” learn human values? ... Mamba: a new approach to building large language models ...
In this episode of Moment of Zen, Erik Torenberg is joined by Samuel Hammond (@hamandcheese), a senior economist at the Foundation for American Innovation and AI Scout/ Cognitive Revolution podcast host Nathan Labenz (@labenz). From drug innovation to nuclear risk, they discuss the impact of AI on policy and governance, exploring the challenges and opportunities of a Trump vs. Harris presidency. --
Nathan Labenz joins the Nick Halaris Show to discuss the current AI landscape, balancing enthusiasm with critical analysis. This episode of The Cognitive Revolution explores AI's transformative potential in medicine, law, and automation, while addressing concerns about safety, control, and geopolitical implications. Join us for an insightful conversation on responsible AI development and its impact on society. Apply to join over 400 founders and execs in the Turpentine Network: https://hmplogxqz0y.typeform.com/to/JCkphVqj RECOMMENDED PODCAST: Complex Systems Patrick McKenzie (@patio11) talks to experts who understand the complicated but not unknowable systems we rely on. You might be surprised at how quickly Patrick and his guests can put you in the top 1% of understanding for stock trading, tech hiring, and more. Spotify: https://open.spotify.com/show/3Mos4VE3figVXleHDqfXOH Apple: https://podcasts.apple.com/us/podcast/complex-systems-with-patrick-mckenzie-patio11/id1753399812 SPONSORS: Building an enterprise-ready SaaS app? WorkOS has got you covered with easy-to-integrate APIs for SAML, SCIM, and more. Join top startups like Vercel, Perplexity, Jasper & Webflow in powering your app with WorkOS. Enjoy a free tier for up to 1M users! Start now at https://bit.ly/WorkOS-TCR Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds; offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive The Brave search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference. All while remaining affordable with developer first pricing, integrating the Brave search API into your workflow translates to more ethical data sourcing and more human representative data sets. Try the Brave search API for free for up to 2000 queries per month at https://bit.ly/BraveTCR Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off https://www.omneky.com/ Head to Squad to access global engineering without the headache and at a fraction of the cost: head to https://choosesquad.com/ and mention “Turpentine” to skip the waitlist. CHAPTERS: (00:00:00) About the Show (00:00:25) Sponsor: WorkOS (00:01:25) About the Show (00:03:36) Intro (00:05:53) How did you get into AI? (00:11:09) The power of the transformer (00:14:16) Waymark growth (00:19:00) Sponsors: Oracle | Brave (00:21:04) The potential benefits of AI (00:26:45) AI in law (00:28:04) Robotics and food preparation (00:32:33) Sponsors: Omneky | Squad (00:34:20) AI and mass inequality (00:35:26) AI could lead to Universal Basic Income (00:37:37) AI supply and the future of work (00:40:53) The AI future (00:43:38) The new social contract (00:47:06) Optimism about the future (00:53:19) What are people afraid of? (00:58:31) Reinforcement learning from human feedback (00:59:42) Instrumental convergence (01:03:00) Current safety concerns (01:07:56) What did Ilya see? (01:10:23) Who controls AI? 
(01:13:23) China (01:16:05) Taiwan and the chip sector (01:20:05) The AI arms race (01:23:22) AI Governance (01:26:28) AI Safety (01:29:22) Custom Models (01:31:33) Closing (01:32:23) Outro
Join Jean Ginzburg in another insightful episode as she welcomes back Nathan Labenz, founder of Waymark, where he leads AI-driven R&D. Nathan discusses the latest AI advancements, focusing on the broader implications of AI in daily life, the progression of personal AI use, and emerging AI technologies that promise to transform the digital landscape. He shares insights from his participation in the GPT-4 red team and how it's influenced his current focus on AI. Thank you for listening! Please subscribe wherever you listen to podcasts and give us a rating. - Need marketing strategy? Go to alpenglo.digital/ - Contact Jean: alpenglo.digital/get-in-touch/
Nathan Labenz is a technology entrepreneur, artificial intelligence analyst, and the founder and former CEO of Waymark. With a background in philosophy and a keen eye for innovation, Nathan led Waymark from its inception to its status as a trailblazer in generative AI-powered content creation. As host of 'The Cognitive Revolution' podcast, he explores the transformative impact of artificial intelligence on work, life, society, and culture from every possible angle. Through conversations with notable builders, researchers, and investors, as well as original deep-dive analysis on topics of particular interest, Nathan helps business, policy, and academic leaders stay up to date with AI developments and implications. Learn more about your ad choices. Visit megaphone.fm/adchoices
This week on The Nick Halaris Show we are featuring Nathan Labenz, a founder of Waymark, a company using AI to help businesses easily make compelling marketing videos, and the host of a very popular AI-focused podcast called Cognitive Revolution. As you all know by now, Nathan is a gifted thinker and communicator, the world's best AI scout, and one of the most sought-after voices and thought leaders in the industry. In part one of this interview, which dropped last week, we learned about the incredible positive potential of AI technology. Nathan shared with us a compelling vision for a future with less drudgery and suffering and vastly greater and more widespread material prosperity. This week we examine the other side of the coin and ask what could go wrong here. Tune in to this fascinating and sobering episode to learn:
Why Nathan and many other AI scientists believe we shouldn't underestimate the potential dangers of AI
The theoretical basis behind the fears that AI systems might develop their own ideas, optimize against humans, or deceive us for their own benefit
Why Nathan thinks we would be better off focusing on implementing AI in its current state across our economy than racing to develop ever more powerful systems
Why we need to give time and resources for scientists to catch up and understand how these systems are actually working
How close we might be to Artificial General Intelligence
Why the stakes couldn't be higher in our race with China to develop the best, most useful AI and how that's problematic in the context of our AI safety concerns
& Much, much more
Stay tuned to the end to learn Nathan's compelling ideas for establishing a better path for our economic relations with China and hear his vision for a global AI governance structure. As always, I hope you all enjoy this episode. Thanks for tuning in! Love this episode? Please rate, subscribe, and review on your favorite podcast platform to help more users find our show.
This week on The Nick Halaris Show we are featuring Nathan Labenz, a founder of Waymark, a company using AI to help companies easily make compelling marketing videos, and the host of the Cognitive Revolution podcast. Nathan, our first guest on the show who went to my high school, has carved out a niche for himself in the crowded online world as an AI scout and is fast becoming one of the most sought-after voices in the industry. I have been thinking a ton about AI lately and wanted to have Nathan on the show to get some intelligent insider perspectives on what's really going on in the space. What you are about to hear is part one of a two-part interview where Nathan delivers a tour de force on the AI landscape. We explore the big questions everyone wants to ask about AI, the good, the bad, and the ugly of the AI world, and what's trending and why. In this episode, we learn what led Nathan down the path of AI, what motivates his important work as a thought leader, and why AI has the potential to be a force for great good in the world. Tune in to this fascinating episode to learn:
How a paper by prominent AI scientist Eliezer Yudkowsky opened Nathan's eyes to the potential and dangers of AI
How an experience at Waymark, while serving as CEO, helped Nathan realize the revolutionary potential of AI
Why Nathan believes AI, if handled responsibly, has immense potential to dramatically improve our world, reduce human suffering, and usher in an unprecedented era of human prosperity
What a post-AI world might look like and why we might need to start thinking about a new social contract
& Much, much more
In part two of the interview, which will drop next week, we get into the other side of the AI story and explore what could go wrong and why. We also examine disturbing trends already at play in the industry and discuss ideas on what we could/should do to make things safer. This is another fascinating conversation that you will not want to miss! As always, I hope you all enjoy this episode. Thanks for tuning in! Love this episode? Please rate, subscribe, and review on your favorite podcast platform to help more users find our show.
In today's episode, we discuss Benchmark's investment philosophy in the AI era with general partner Eric Vishria, and Sergiy Nesterenko, the Founder and CEO of Quilter. Sitting in for Erik Torenberg as host is AI Scout and Cognitive Revolution host Nathan Labenz. They cover the questions that Benchmark asks to determine whether a new company is solving an enduring or temporary problem, and the reasons why Benchmark hasn't invested in a foundational model to date. They also discuss the innovation behind Quilter's groundbreaking use of reinforcement learning to automate circuit board design. They delve into the importance of thinking beyond 'co-pilots' to fully automated AI solutions, and explore the balance of research and engineering in the AI space.
Dive into an in-depth conversation with Nathan and Robert Wright as they discuss AI's transformative potential, mechanistic interpretability, and the sobering realities of AI alignment research. Learn about the defensive strategies and safety measures necessary for managing advanced AI risks in an open source world. Don't miss the insights on AI-powered VR, and be sure to check out part one on the Nonzero feed. Check out Part 1 of the conversation here: https://www.youtube.com/watch?v=s8bgB8TCdBs SPONSORS: Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds; offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive The Brave search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference. All while remaining affordable with developer first pricing, integrating the Brave search API into your workflow translates to more ethical data sourcing and more human representative data sets. Try the Brave search API for free for up to 2000 queries per month at https://bit.ly/BraveTCR Head to Squad to access global engineering without the headache and at a fraction of the cost: head to https://choosesquad.com/ and mention "Turpentine" to skip the waitlist. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off https://www.omneky.com/ CHAPTERS: (00:00:00) Introduction (00:07:13) AI in Governance (00:11:08) Sci-fi doomer (00:13:58) Sponsors: Oracle | Brave (00:16:05) The frontier models (00:20:22) Emergent behavior (00:23:48) Theory of mind (00:28:09) Mechanistic interpretability (00:34:12) Sponsors: Squad | Omneky (00:38:12) AI Alignment Techniques (00:42:38) The Sweet Spot of AI
This week on Upstream, Nathan Labenz of The Cognitive Revolution podcast and Brian Chau, a machine learning engineer who recently founded the Alliance for the Future, talk about Brian's efforts in Washington, D.C. to influence AI policy based on informed AI optimism. After their conversation, Nathan and Erik chat about the AI landscape. Access global engineering without the headache and at a fraction of the cost: head to https://choosesquad.com and mention “Turpentine” to skip the waitlist. -- This show is produced by Turpentine: a network of podcasts, newsletters, and more, covering technology, business, and culture — all from the perspective of industry insiders and experts. We're launching new shows every week, and we're looking for industry-leading sponsors — if you think that might be you and your company, email us at erik@turpentine.co. -- SPONSORS: BEEHIIV | SQUAD Head to Beehiiv, the newsletter platform built for growth, to power your own. Connect with premium brands, scale your audience, and deliver a beautiful UX that stands out in an inbox.
This week on Upstream, Erik is joined by Nathan Labenz and Noah Smith for a deep dive on how AI will impact job markets, the social contract, and the role of the government. Access global engineering without the headache and at a fraction of the cost: head to https://choosesquad.com and mention “Turpentine” to skip the waitlist. -- SPONSORS: SQUAD | BRAVE Squad:
The AI Breakdown: Daily Artificial Intelligence News and Discussions
A wide-ranging conversation with The Cognitive Revolution host and AI multi-threat Nathan Labenz. Find Nathan on Twitter: https://twitter.com/labenz ** Be the first to learn about our new AI education platform: https://besuper.ai/ ** ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI. Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown Join the community: bit.ly/aibreakdown Learn more: http://breakdown.network/
Nathan's role as “AI scout” ... What "open source" in AI really means ... Why the secrecy around AI training data? ... What's actually “open” about OpenAI? ... The case for and against open source AI ... Is an open source crackdown coming? ... What Waymark, Nathan's company, does with AI ...
In this episode, Noah Smith and Erik Torenberg are joined by Nathan Labenz, co-host of The Cognitive Revolution Podcast, to deep dive into AI's impact on our economy and society. They discuss how AI might affect the job market and society, drawing comparisons to past economic changes. If you're looking for an ERP platform head to NetSuite http://netsuite.com/102 and download your own customized KPI checklist. This show is produced by Turpentine: a network of podcasts, newsletters, and more, covering technology, business, and culture — all from the perspective of industry insiders and experts. We're launching new shows every week, and we're looking for industry-leading sponsors — if you think that might be you and your company, email us at erik@turpentine.co. - RECOMMENDED PODCAST: Sourcery by Molly O'Shea Sourcery by Molly O'Shea brings the conversations founders and investors have behind closed doors into the light. Subscribe for upcoming episodes with: Delian Asparouhov of Founders Fund and Varda Space, and Chris Power of Hadrian. Spotify: https://open.spotify.com/show/2Ni3Tese9CtZa3oxpCjgTg YouTube: https://www.youtube.com/@SourceryVC/ - SPONSOR: NETSUITE NetSuite has 25 years of providing financial software for all your business needs. More than 36,000 businesses have already upgraded to NetSuite by Oracle, gaining visibility and control over their financials, inventory, HR, eCommerce, and more. If you're looking for an ERP platform head to NetSuite http://netsuite.com/102 and download your own customized KPI checklist. – Links: Noah Smith's Noahpinion https://www.noahpinion.blog The Cognitive Revolution podcast Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk Apple: https://podcasts.apple.com/us/podcast/id1669813431 – X / TWITTER: @labenz (Nathan) @noahpinion (Noah) @eriktorenberg (Erik) @TurpentineMedia (Turpentine) – TIMESTAMPS (00:00) Intro (00:39) AI and the Future of Employment (03:48) The Intersection of AI, Robotics, and Biotech (04:53) Economic Disruption (06:33) Future of Jobs in an AI-Dominated World (08:22) AI's Impact on Economy (10:36) Pessimism Surrounding AI's Economic Impact (12:21) Comparative Advantage in the AI Era (14:02) AI Replacing Human Jobs (19:19) Sponsor: Netsuite (19:30) New Social Contract in the AI Era (34:21) Human Wages in an AI-Dominated Economy (39:31) Impact of AI on Job Market (41:40) Fear of AI Taking Over Human Jobs (47:47) Role of Government in AI Development (50:06) Potential Risks and Benefits of AI (53:14) Future of AI and Its Social Impact (58:45) Fear of Totalitarian States and AI (01:05:35) The Need for a Positive Vision for AI (01:11:18) Wrap
This is a selection of highlights from episode #177 of The 80,000 Hours Podcast. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode: Nathan Labenz on recent AI breakthroughs and navigating the growing rift between AI safety and accelerationist camps.
And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.
Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong
Back in December we spoke with Nathan Labenz — AI entrepreneur and host of The Cognitive Revolution Podcast — about the speed of progress towards AGI and OpenAI's leadership drama, drawing on Nathan's alarming experience red-teaming an early version of GPT-4 and resulting conversations with OpenAI staff and board members.
Today we go deeper, diving into:
What AI now actually can and can't do, across language and visual models, medicine, scientific research, self-driving cars, robotics, weapons — and what the next big breakthrough might be.
Why most people, including most listeners, probably don't know and can't keep up with the new capabilities and wild results coming out across so many AI applications — and what we should do about that.
How we need to learn to talk about AI more productively, particularly addressing the growing chasm between those concerned about AI risks and those who want to see progress accelerate, which may be counterproductive for everyone.
Where Nathan agrees with and departs from the views of 'AI scaling accelerationists.'
The chances that anti-regulation rhetoric from some AI entrepreneurs backfires.
How governments could (and already do) abuse AI tools like facial recognition, and how militarisation of AI is progressing.
Preparing for coming societal impacts and potential disruption from AI.
Practical ways that curious listeners can try to stay abreast of everything that's going on.
And plenty more.
Links to learn more, summary, and full transcript.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore
On today's Upstream, we're sharing a recent episode where Balaji Srinivasan joins Nathan Labenz to discuss the future of AI: polytheistic AI and AI gods, human-AI symbiosis, and how AI will be controlled. Upstream is sponsored by Shopify: https://shopify.com/torenberg for a $1/month trial period. This episode is a feed drop of The Cognitive Revolution, where Nathan Labenz and Erik Torenberg interview builders on the edge of AI and explore its dramatic shift over the next decades. Listen and subscribe everywhere you get your podcasts: https://www.cognitiverevolution.ai/ - We're hiring across the board at Turpentine and for Erik's personal team on other projects he's incubating. He's hiring a Chief of Staff, EA, Head of Special Projects, Investment Associate, and more. For a list of JDs, check out: eriktorenberg.com. – SPONSORS: SHOPIFY | GIVEWELL SHOPIFY: https://shopify.com/torenberg for a $1/month trial period Shopify is the global commerce platform that helps you sell at every stage of your business. Shopify powers 10% of all e-commerce in the US. And Shopify's the global force behind Allbirds, Rothy's, and Brooklinen, and 1,000,000s of other entrepreneurs across 175 countries. From their all-in-one e-commerce platform, to their in-person POS system – wherever and whatever you're selling, Shopify's got you covered. With free Shopify Magic, sell more with less effort by whipping up captivating content that converts – from blog posts to product descriptions using AI. Sign up for $1/month trial period: https://shopify.com/torenberg GiveWell: https://www.givewell.org/ Have you ever wondered where your donation could have the most impact? GiveWell has now spent over 15 years researching charitable organizations and only directs funding to the HIGHEST-IMPACT opportunities they've found in global health and poverty alleviation. Make informed decisions about high-impact giving. If you've never donated through GiveWell before, you can have your donation matched up to $100 before the end of the year, or as long as matching funds last. To claim your match, go to https://www.givewell.org/ and pick “Podcast” and enter "Upstream with Erik Torenberg" at checkout. 
– X / TWITTER: @labenz (Nathan) @balajis (Balaji) @eriktorenberg (Erik) @upstream__pod (Upstream) @TurpentineMedia @CogRev_Podcast (Cognitive Revolution) -- TIMESTAMPS: (00:00) Intro (01:41) Exploring the Concept of AI in the Context of Evolution (02:34) Technical Aspects of AI and its Future (05:25) Challenges in AI: Time Varying and Rule Varying Systems (05:54) AI in Practical Utility and Jobs (07:11) AI Progress and Uncertainties in the Near Future (08:17) AI and the Concept of Generative Intelligence (08:25) Adversarial Input (10:36) Sponsor: Shopify (25:11) Offense and Defense (25:48) Sponsor: GiveWell (40:49) Impact on Blue America (45:39) AI Regulation and the Concept of Software FDA (48:45) The Future of AI (49:00) The Role of AI in the Tech Industry (50:18) Exploring the Potential of AI Technology (53:57) The Mathematical Constraints of AI (55:23) The Future of AI: Cryptography and Chaos (57:16) The Impact of AI on Human Evolution (57:19) The Practical Difficulties of AI Implementation (01:02:34) The Future of AI: Amplified Intelligence (01:05:11) The Future of AI: Man-Machine Symbiosis (01:09:09) The Future of AI: Polytheistic AI (01:17:53) Amplified Intelligence Revisited (01:29:02) The Future of AI: The Impact on Blue America (01:30:14) AI in Healthcare (01:30:34) The Role of Human Intelligence in AI Development (01:31:24) The Impact of AI on Job Market and Environment (01:32:27) The Role of AI in Robotics and Training (01:33:50) The Actuator Concept in AI (01:35:06) The Impact of AI on Cryptography (01:37:54) Data Analysis (01:41:00) The Role of AI in Bridging Different Modalities (01:45:28) The Future of AI in a Global Context (01:59:56) The Future of AI in a Tribal Perspective (02:04:26) The Future of AI in a Global Perspective
OpenAI says its mission is to build AGI — an AI system that is better than human beings at everything. Should the world trust them to do that safely?
That's the central theme of today's episode with Nathan Labenz — entrepreneur, AI scout, and host of The Cognitive Revolution podcast.
Links to learn more, summary, and full transcript.
Nathan saw the AI revolution coming years ago, and, astonished by the research he was seeing, set aside his role as CEO of Waymark and made it his full-time job to understand AI capabilities across every domain. He has been obsessively tracking the AI world since — including joining OpenAI's "red team" that probed GPT-4 to find ways it could be abused, long before it was public.
Whether OpenAI was taking AI safety seriously enough became a topic of dinner table conversation around the world after the shocking firing and reinstatement of Sam Altman as CEO last month.
Nathan's view: it's complicated. Discussion of this topic has often been heated, polarising, and personal. But Nathan wants to avoid that and simply lay out, in a way that is impartial and fair to everyone involved, what OpenAI has done right and how it could do better in his view.
When he started on the GPT-4 red team, the model would do anything from diagnose a skin condition to plan a terrorist attack without the slightest reservation or objection. When later shown a "Safety" version of GPT-4 that was almost the same, he approached a member of OpenAI's board to share his concerns and tell them they really needed to try out GPT-4 for themselves and form an opinion.
In today's episode, we share this story as Nathan told it on his own show, The Cognitive Revolution, which he did in the hope that it would provide useful background to understanding the OpenAI board's reservations about Sam Altman, which to this day have not been laid out in any detail.
But while he feared throughout 2022 that OpenAI and Sam Altman didn't understand the power and risk of their own system, he has since been repeatedly impressed, and came to think of OpenAI as among the better companies that could hypothetically be working to build AGI.
Their efforts to make GPT-4 safe turned out to be much larger and more successful than Nathan was seeing. Sam Altman and other leaders at OpenAI seem to sincerely believe they're playing with fire, and take the threat posed by their work very seriously. With the benefit of hindsight, Nathan suspects OpenAI's decision to release GPT-4 when it did was for the best.
On top of that, OpenAI has been among the most sane and sophisticated voices advocating for AI regulations that would target just the most powerful AI systems — the type they themselves are building — and that could make a real difference. They've also invested major resources into new 'Superalignment' and 'Preparedness' teams, while avoiding using competition with China as an excuse for recklessness.
At the same time, it's very hard to know whether it's all enough. The challenge of making an AGI safe and beneficial may require much more than they hope or have bargained for. Given that, Nathan poses the question of whether it makes sense to try to build a fully general AGI that can outclass humans in every domain at the first opportunity.
Maybe in the short term, we should focus on harvesting the enormous possible economic and humanitarian benefits of narrow applied AI models, and wait until we not only have a way to build AGI, but a good way to build AGI — an AGI that we're confident we want, which we can prove will remain safe as its capabilities get ever greater.
By threatening to follow Sam Altman to Microsoft before his reinstatement as OpenAI CEO, OpenAI's research team has proven they have enormous influence over the direction of the company. If they put their minds to it, they're also better placed than maybe anyone in the world to assess if the company's strategy is on the right track and serving the interests of humanity as a whole. Nathan concludes that this power and insight only adds to the enormous weight of responsibility already resting on their shoulders.
In today's extensive conversation, Nathan and host Rob Wiblin discuss not only all of the above, but also:
Speculation about the OpenAI boardroom drama with Sam Altman, given Nathan's interactions with the board when he raised concerns from his red teaming efforts.
Which AI applications we should be urgently rolling out, with less worry about safety.
Whether governance issues at OpenAI demonstrate AI research can only be slowed by governments.
Whether AI capabilities are advancing faster than safety efforts and controls.
The costs and benefits of releasing powerful models like GPT-4.
Nathan's view on the game theory of AI arms races and China.
Whether it's worth taking some risk with AI for huge potential upside.
The need for more "AI scouts" to understand and communicate AI progress.
And plenty more.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Transcriptions: Katy Moore
Zvi Mowshowitz of Don't Worry About the Vase and Nathan Labenz of the Cognitive Revolution podcast come on for a quick recap of the past week's AI news! We get into:
What AI diplomacy is looking like post-Bletchley Park
What new applications OpenAI's latest announcements mean for future AI applications
Outro: Bizarrap with Milo J https://www.youtube.com/watch?v=hGWa-GO8mKg Learn more about your ad choices. Visit megaphone.fm/adchoices