We published our first episode on the threat of antibiotic resistance in 2016, and nearly a decade later, it remains one of the world's most pressing health crises. Today, with advances in artificial intelligence (AI), the race to develop new antibiotics is evolving. In this episode, co-host Danielle Mandikian sits down with guests Tommaso Biancalani, Distinguished Scientist and Director of Biological Research and AI Development, and Steven Rutherford, Senior Principal Scientist and Director of Infectious Diseases in Research Biology, to share the latest in the fight against antibiotic resistance. Together, they discuss the challenges of antibiotic discovery and development, and how AI could streamline the process of identifying novel antibiotics within the vast, uncharted chemical universe. Read the full text transcript at www.gene.com/stories/ai-and-the-quest-for-new-antibiotics
The recent AI 2027 report sparked widespread discussion with its stark warnings about the near-term risks of unaligned AI. Authors @Daniel Kokotajlo (former OpenAI researcher now focused full-time on alignment through his nonprofit, @AI Futures, and one of TIME's 100 most influential people in AI) and @Thomas Larsen joined the show to unpack their findings. We talk through the key takeaways from the report, its policy implications, and what they believe it will take to build safer, more aligned models. (0:00) Intro (1:15) Overview of AI 2027 (2:32) AI Development Timeline (4:10) Race and Slowdown Branches (12:52) US vs China (18:09) Potential AI Misalignment (31:06) Getting Serious About the Threat of AI (47:23) Predictions for AI Development by 2027 (48:33) Public and Government Reactions to AI Concerns (49:27) Policy Recommendations for AI Safety (52:22) Diverging Views on AI Alignment Timelines (1:01:30) The Role of Public Awareness in AI Safety (1:02:38) Reflections on Insider vs. Outsider Strategies (1:10:53) Future Research and Scenario Planning (1:14:01) Best and Worst Case Outcomes for AI (1:17:02) Final Thoughts and Hopes for the Future With your co-hosts: @jacobeffron - Partner at Redpoint, Former PM Flatiron Health @patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn @ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare) @jordan_segall - Partner at Redpoint
NewsGuard's latest research, published last February, detected and identified 1,254 unreliable AI-generated news and information sites online, in 16 languages including Italian. But how exactly does AI work, and how can users tell whether the information in front of them is reliable? We discuss this in today's episode with Andrea Pazzaglia, Head of AI Development at Class Editori. ... Link to subscribe to the Notizie a colazione WhatsApp channel: https://whatsapp.com/channel/0029Va7X7C4DjiOmdBGtOL3z To subscribe to the Telegram channel: https://t.me/notizieacolazione ... Other Class Editori podcasts: https://milanofinanza.it/podcast Music: https://www.bensound.com Learn more about your ad choices. Visit megaphone.fm/adchoices
Lin Qiao, CEO of Fireworks AI, dives into the practical challenges AI developers face, from UX/DX hurdles to complex systems engineering. Discover key trends like the convergence of open-source and proprietary models, the rise of agentic workflows, and strategies for optimizing quality, speed, and cost. Subscribe to the Gradient Flow Newsletter
At Google Cloud Next, Bobby Allen, Group Product Manager for Google Kubernetes Engine (GKE), emphasized GKE's foundational role in supporting AI platforms. While AI dominates current tech conversations, Allen highlighted that cloud-native infrastructure like Kubernetes is what enables AI workloads to function efficiently. GKE powers key Google services like Vertex AI and is trusted by organizations including DeepMind, gaming companies, and healthcare providers for AI model training and inference. Allen explained that GKE offers scalability, elasticity, and support for AI-specific hardware like GPUs and TPUs, making it ideal for modern workloads. He noted that Kubernetes was built with capabilities—like high availability and secure orchestration—that are now essential for AI deployment. Looking forward, GKE aims to evolve into a model router, allowing developers to access the right AI model based on function, not vendor, streamlining the development experience. Allen described GKE as offering maximum control with minimal technical debt, future-proofed by Google's continued investment in open source and scalable architecture. Learn more from The New Stack about the latest insights with Google Cloud: Google Kubernetes Engine Customized for Faster AI Work; KubeCon Europe: How Google Will Evolve Kubernetes in the AI Era; Apache Ray Finds a Home on the Google Kubernetes Engine. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
As AI continues to transform the tech landscape, how is Arm bridging cloud and edge technologies to deliver efficient, secure, and scalable AI solutions? In this episode, we are joined by Blade Lin, Principal FAE at Arm, who offers an exclusive preview of Arm's focus activities at COMPUTEX 2025, the Executive Session keynote, and the developer-only Arm Developer Experience technical talks and meetup, from cutting-edge demos to behind-the-scenes insights you won't want to miss. COMPUTEX 2025 takes place May 20–23 at Taipei Nangang Exhibition Center Halls 1 & 2. Ahead of the main event, the Arm Executive Session presented by Chris Bergey, SVP and GM, Client Line of Business, will take place on May 19 from 3:00 to 4:00 PM. In addition, Arm will host the Arm Developer Experience technical talks and developer meetup from May 20 to 21. Don't miss this exciting event — register now! - Host: Raymond - Audio editing: 林佳欣 - Production: TAITRA X Soundtalk Creative - Music: http://www.premiumbeat.com -- Hosting provided by SoundOn
Knowledge Project: Read the notes at podcastnotes.org. Don't forget to subscribe for free to our newsletter, the top 10 ideas of the week, every Monday --------- Most accelerators fund ideas. Y Combinator funds founders—and transforms them. With a 1% acceptance rate and alumni behind 60% of the past decade's unicorns, YC knows what separates the founders who break through from those who burn out. It's not the flashiest résumé or the boldest pitch but something President Garry Tan says is far rarer: earnestness. In this conversation, Garry reveals why this is the key to success, and how it can make or break a startup. We also dive into how AI is reshaping the whole landscape of venture capital and what the future might look like when everyone has intelligence on tap. If you care about innovation, agency, or the future of work, don't miss this episode. Approximate timestamps: Subject to variation due to dynamically inserted ads. (00:02:39) The Success of Y Combinator (00:04:25) The Y Combinator Program (00:08:25) The Application Process (00:09:58) The Interview Process (00:16:16) The Challenge of Early Stage Investment (00:22:53) The Role of San Francisco in Innovation (00:28:32) The Ideal Founder (00:36:27) The Importance of Earnestness (00:42:17) The Changing Landscape of AI Companies (00:45:26) The Impact of Cloud Computing (00:50:11) Dysfunction with Silicon Valley (00:52:24) Forecast for the Tech Market (00:54:40) The Regulation of AI (00:55:56) The Need for Agency in Education (01:01:40) AI in Biotech and Manufacturing (01:07:24) The Issue of Data Access and The Legal Aspects of AI Outputs (01:13:34) The Role of Meta in AI Development (01:28:07) The Potential of AI in Decision Making (01:40:33) Defining AGI (01:42:03) The Use of AI and Prompting (01:47:09) AI Model Reasoning (01:49:48) The Competitive Advantage in AI (01:52:42) Investing in Big Tech Companies (01:55:47) The Role of Microsoft and Meta in AI (01:57:00) Learning from MrBeast: YouTube Channel Optimization (02:05:58) The Perception of Founders (02:08:23) The Reality of Startup Success Rates (02:09:34) The Impact of OpenAI (02:11:46) The Golden Age of Building. Newsletter - The Brain Food newsletter delivers actionable insights and thoughtful ideas every Sunday. It takes 5 minutes to read, and it's completely free. Learn more and sign up at fs.blog/newsletter Upgrade — If you want to hear my thoughts and reflections at the end of the episode, join our membership: fs.blog/membership and get your own private feed. Watch on YouTube: @tkppodcast Learn more about your ad choices. Visit megaphone.fm/adchoices
Without this, developers waste time managing infrastructure instead of focusing on code. VMware addresses this with VCF, a pre-integrated Kubernetes solution that includes components like Harbor, Velero, and Istio, all managed by VMware. While some worry about added complexity from abstraction, Turner dismissed concerns about virtualization overhead, pointing to benchmarks showing 98.3% of bare metal performance for virtualized AI workloads. He emphasized that AI is driving nearly half of Kubernetes deployments, prompting VMware's partnership with Nvidia to support GPU virtualization. Turner also highlighted VMware's open source leadership, contributing to major projects and ensuring Kubernetes remains cloud-independent and standards-based. VMware aims to simplify Kubernetes and AI workload management while staying committed to the open ecosystem. Learn more from The New Stack about the latest insights with VMware: Has VMware Finally Caught Up With Kubernetes?; VMware's Golden Path. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Join us as we explore the transformative changes in software development and cybersecurity due to AI. We discuss new terminology like 'vibe coding' — a novel, behavior-focused development approach — and 'MCP' (Model Context Protocol), an open standard for AI interfaces. We also address the concept of 'slopsquatting,' a new type of threat involving AI-generated […] The post What Vibe Coding, MCP, and Slopsquatting Reveal About the Future of AI Development appeared first on Shared Security Podcast.
Join JP Newman in this fascinating episode of 'Investing on Purpose' as he, along with guests Brett Hurt, Chantel Mc Daniel, and Brad Weimert, discuss their experiences and key takeaways from the 2025 TED Conference in Vancouver. They explore the conference theme 'Humanity Reimagined,' touching on topics such as artificial intelligence, quantum computing, and the essence of human artistry. The episode delves into the impact of AI on future jobs, the necessity for intentionality in technology, and the inspiring talks around innovation, peace-making, and mental health. Tune in for an insightful journey into the evolving landscape of technology and human connection.
Sir Niall Ferguson, renowned historian and Milbank Family Senior Fellow at the Hoover Institution, joins Azeem Azhar to discuss the evolving relationship between the U.S. and China, Trump's foreign policy doctrine, and what the new global economic and security order might look like. (00:00) What most analysts are missing about Trump (05:43) The win-win outcome in Europe–U.S. relations (11:17) How the U.S. is reestablishing deterrence (15:50) Can the U.S. economy weather the impact of tariffs? (23:33) Niall's read on China (29:29) How is China performing in tech? (33:35) What might happen with Taiwan (42:43) Predictions for the coming world order. Sir Niall Ferguson's links: Substack: Time Machine. Books: War of the World; Doom: The Politics of Catastrophe. Twitter/X: https://x.com/nfergus Azeem's links: Substack: https://www.exponentialview.co/ Website: https://www.azeemazhar.com/ LinkedIn: https://www.linkedin.com/in/azhar Twitter/X: https://x.com/azeem Our new show: this episode was originally recorded for "Friday with Azeem Azhar" on 28 March. Produced by supermix.io and EPIIPLUS1 Ltd
Preview: Author Gary Rivlin, 'AI Valley,' presents the back story of AI development and its subsequent dismissal in the 1970s and 1980s. More later in the week. https://www.amazon.com/Valley-Microsoft-Trillion-Dollar-Artificial-Intelligence-ebook/dp/B0D7ZRSH7P/ref=tmm_kin_swatch_0?_encoding=UTF8&dib_tag=se&dib=eyJ2IjoiMSJ9.AJeF940tKhADhdajpBWTAM0NBzzXjrOJ_C6W040rhkNRlFXvSpVdtjYclENO74aCPgq8yPNhAdGjb4kZ6pCmmsvyYKET_EuYyGnf7RXSZ1W0YbU_h0r7EYDDvZj_aB3j0OvGg0OsK8JaOmlzX_eB_Guar_jgqhTgBwEIONt0nHM78nJZmlCxXzawvx6xrjBrmPX4Te68hgrEMLpI0Gy2uvscj4pm4-CxX8c9U7MOG6Q.yKug_BFX2VvXr6xFXIOgeEKJEg-eZqu1K-NYi9O1kcg&qid=1745068898&sr=1-1
With artificial intelligence development expanding at a breakneck speed, powering it is becoming a hot topic. Learn more about your ad choices. Visit podcastchoices.com/adchoices
In this episode, Jenna Barron interviews Antje Barth, principal developer advocate for generative AI at AWS. They discuss: the skills that are becoming more important as AI adoption increases; the emergence of the AI engineering role; and the trend of "vibe coding." Resources: Amazon Q Developer. Episode transcript.
What if artificial intelligence could radically improve human life — not just automate it? John Maytham is joined by Jacks Shiels, AI Research Fellow and founder of Shiels.ai, to explore the bold and hopeful vision laid out by Dario Amodei, CEO of Anthropic, in his recent essay “Machines of Loving Grace.”See omnystudio.com/listener for privacy information.
- Market Analysis and Silver Investment (0:00) - Trump's Economic Policies and Dollar Value (3:04) - Historical Newspaper Analysis (6:27) - Decline in Human Knowledge and Cognitive Capacity (12:01) - Preservation of Human Knowledge and AI Development (18:12) - Impact of AI on Human Knowledge and Society (18:31) - Challenges and Opportunities in the Token Economy (55:28) - Practical Steps for Living a More Centralized Life (1:09:59) - Gold Backs and Their Value (1:10:56) - Future of AI and Human Knowledge (1:26:31) - Gold and Silver Market Stress (1:26:50) - Trump's Alleged Actions Against the Crown (1:29:22) - Impact of Gold and Silver Paper Contracts (1:31:59) - Introduction of Chris Sullivan and His Background (1:34:11) - Sullivan's Insights on Bitcoin and Financial Markets (1:39:38) For more updates, visit: http://www.brighteon.com/channel/hrreport NaturalNews videos would not be possible without you, as always we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency. ▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/ ▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html ▶️ Brighteon: https://www.brighteon.com/channels/hrreport ▶️ Join Our Social Network: https://brighteon.social/@HealthRanger ▶️ Check In Stock Products at: https://PrepWithMike.com
Episode 53: What role will AI agents play in addressing global challenges? Join Matt Wolfe (https://x.com/mreflow) and Amanda Saunders (https://x.com/amandamsaunders), Director of Enterprise Generative AI Product Marketing at Nvidia, then Bob Pette (https://x.com/RobertPette), Vice President and General Manager of Enterprise Platforms at Nvidia, as they delve into the transformative potential of agentic AI at the Nvidia GTC Conference. Vote for us at the Webbys: https://vote.webbyawards.com/PublicVoting#/2025/podcasts/shows/business This episode explores the concept of AI agents as digital employees that perceive, reason, and act, reshaping industries like healthcare and telecom. Discover Nvidia's approach to building powerful AI agents and the measures in place to ensure their secure and productive deployment. From optimizing workflows with agentic AI blueprints to fascinating agent applications in sports coaching, the discussion unpacks AI's promising future. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd — Show Notes: (00:00) Exploring Nvidia's AI Revolution (03:29) AI's Breakneck Growth Spurs Innovation (06:29) Video Agents Enhancing Athletic Performance (09:46) AI: Problem Solver and Concern Raiser (14:54) Rise of Sophisticated AI Agents (18:21) Earth-2: Visualizing Future Changes (21:53) Nvidia Optimizes Llama for Reasoning (23:50) Reasoning Models Enhance Problem Solving (27:20) Balancing AI Creativity and Accuracy (30:31) Nvidia's AI Development in Windows (34:16) AI Development Acceleration Benefits (37:32) High-Power Servers & Workstations Overview (39:37) Liquid Cooling in AI Workstations — Mentions: Get the free AI Agent Playbook: https://clickhubspot.com/ovw Amanda Saunders: https://www.linkedin.com/in/amandamsaunders/ Bob Pette: https://www.linkedin.com/in/bobpette/ Nvidia: https://www.nvidia.com/en-us/ Nvidia GTC Conference: https://www.nvidia.com/gtc/ Earth-2: 
https://www.nvidia.com/en-us/high-performance-computing/earth-2/ Vote for us! https://vote.webbyawards.com/PublicVoting#/2025/podcasts/shows/business — Check Out Matt's Stuff: • Future Tools - https://futuretools.beehiiv.com/ • Blog - https://www.mattwolfe.com/ • YouTube- https://www.youtube.com/@mreflow — Check Out Nathan's Stuff: Newsletter: https://news.lore.com/ Blog - https://lore.com/ The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
www.peoplenottitles.com Faisal Hoque is recognized as one of the world's leading management thinkers and technologists. He is an award-winning entrepreneur, innovator and a #1 Wall Street Journal best-selling author, enabling sustainable innovation and transformation. www.faisalhoque.com Introduction and Guest Introduction (00:00:00) Discussion of the Book "Transcend" (00:01:40) The AI Genie Is Out of the Bottle (00:03:14) Philosophical Questions About AI (00:04:53) Fear and Humanity's Relationship with AI (00:07:09) AI as a Caregiver (00:08:37) AI and Job Automation (00:10:20) AI as a Partner in Research (00:11:38) Emotional Connections and AI (00:13:22) The Dangers of Algorithm Manipulation (00:14:52) Philosophical Tenets and AI Usage (00:15:28) AI as an Active Technology (00:17:57) The Parent-Child Analogy (00:19:52) Historical Context of Technological Advances (00:21:03) Guardrails for AI Development (00:22:11) Interconnectedness of Humanity and AI (00:24:21) The Impact of Thoughts and AI (00:26:19) Unintentional Bias in AI (00:26:59) Practical Steps for AI Integration (00:28:15) Frameworks for AI Utilization (00:28:43) Real-World Applications of AI (00:30:26) Understanding AI Personas (00:31:02) Prototyping with AI (00:33:33) Human Relationships vs. AI (00:36:57) The Role of Adversity in Growth (00:45:14) Approaching AI with Humility (00:46:36) Full episodes available at www.peoplenottitles.com People, Not Titles podcast is hosted by Steve Kaempf and is dedicated to lifting up professionals in the real estate and business community. Our inspiration is to highlight success principles of our colleagues. Our Success Series covers principles of success to help you thrive! www.peoplenottitles.com IG - https://www.instagram.com/peoplenotti... FB - https://www.facebook.com/peoplenottitles Twitter - https://twitter.com/sjkaempf Spotify - https://open.spotify.com/show/1uu5kTv...
Welcome to episode #978 of Six Pixels of Separation - The ThinkersOne Podcast. Dr. Christopher DiCarlo is a philosopher, educator, author, and ethicist whose work lives at the intersection of human values, science, and emerging technology. Over the years, Christopher has built a reputation as a Socratic nonconformist, equally at home lecturing at Harvard during his postdoctoral years as he is teaching critical thinking in correctional institutions or corporate boardrooms. He's the author of several important books on logic and rational discourse, including How To Become A Really Good Pain In The Ass - A Critical Thinker's Guide To Asking The Right Questions and So You Think You Can Think?, as well as the host of the podcast, All Thinks Considered. In this conversation, we dig into his latest book, Building A God - The Ethics Of Artificial Intelligence And The Race To Control It, which takes a sobering yet practical look at the ethical governance of AI as we accelerate toward the possibility of artificial general intelligence. Drawing on years of study in philosophy of science and ethics, Christopher lays out the risks - manipulation, misalignment, lack of transparency - and the urgent need for international cooperation to set safeguards now. We talk about everything from the potential of AI to revolutionize healthcare and sustainability to the darker realities of deepfakes, algorithmic control, and the erosion of democratic processes. His proposal? A kind of AI “Geneva Conventions,” or something akin to the IAEA - but for algorithms. In a world rushing toward techno-utopianism, Christopher is a clear-eyed voice asking: “What kind of Gods are we building… and can we still choose their values?” If you're thinking about the intersection of ethics and AI (and we should all be focused on this!), this is essential listening. Enjoy the conversation... Running time: 58:55. Hello from beautiful Montreal. Listen and subscribe over at Apple Podcasts. 
Listen and subscribe over at Spotify. Please visit and leave comments on the blog - Six Pixels of Separation. Feel free to connect to me directly on Facebook here: Mitch Joel on Facebook. Check out ThinkersOne, or you can connect on LinkedIn... or on X. Here is my conversation with Dr. Christopher DiCarlo. Building A God - The Ethics Of Artificial Intelligence And The Race To Control It. How To Become A Really Good Pain In The Ass - A Critical Thinker's Guide To Asking The Right Questions. So You Think You Can Think? All Thinks Considered. Convergence Analysis. Follow Christopher on LinkedIn. Follow Christopher on X. This week's music: David Usher 'St. Lawrence River'. Chapters: (00:00) - Introduction to AI Ethics and Philosophy. (03:14) - The Interconnectedness of Systems. (05:56) - The Race for AGI and Its Implications. (09:04) - Risks of Advanced AI: Misuse and Misalignment. (11:54) - The Need for Ethical Guidelines in AI Development. (15:05) - Global Cooperation and the AI Arms Race. (18:03) - Values and Ethics in AI Alignment. (20:51) - The Role of Government in AI Regulation. (24:14) - The Future of AI: Hope and Concerns. (31:02) - The Dichotomy of Regulation and Innovation. (34:57) - The Drive Behind AI Pioneers. (37:12) - Skepticism and the Tech Bubble Debate. (39:39) - The Potential of AI and Its Risks. (43:20) - Techno-Selection and Control Over AI. (48:53) - The Future of Medicine and AI's Role. (51:42) - Empowering the Public in AI Governance. (54:37) - Building a God: Ethical Considerations in AI.
In this episode, Dr. Matthew Lungren and Seth Hain, leaders in the implementation of healthcare AI technologies and solutions at scale, join Lee to discuss the latest developments. Lungren, the chief scientific officer at Microsoft Health and Life Sciences, explores the creation and deployment of generative AI for automating clinical documentation and administrative tasks like clinical note-taking. Hain, the senior vice president of R&D at the healthcare software company Epic, focuses on the opportunities and challenges of integrating AI into electronic health records at global scale, highlighting AI-driven workflows, decision support, and Epic's Cosmos project, which leverages aggregated healthcare data for research and clinical insights.
AI applications are evolving beyond chatbots into more complex and transformative solutions, according to Marco Palladino, CTO and co-founder of Kong. In a recent episode of The New Stack Makers, he discussed the rise of AI agents, which act as "virtual employees" to enhance organizational efficiency. For instance, AI can now function as a product manager for APIs—analyzing documentation, detecting inaccuracies, and making corrections. However, reliance on AI agents brings security risks, such as data leakage and governance challenges. Organizations need observability and safeguards, but developers often resist implementing these requirements manually. As GenAI adoption matures, teams seek ways to accelerate development without rebuilding security measures repeatedly. To address these challenges, Kong introduced AI Gateway, an open-source plugin for its API Gateway. AI Gateway supports multiple AI models across providers like AWS, Microsoft, and Google, offering developers a universal API to integrate AI securely and efficiently. It also features automated retrieval-augmented generation (RAG) pipelines to minimize hallucinations. Palladino emphasized the need for consistent security in AI infrastructure, ensuring developers can focus on innovation while leveraging built-in protections. Learn more from The New Stack about Kong's AI Gateway: Kong: New 'AI-Infused' Features for API Management, Dev Tools; From Zero to a Terraform Provider for Kong in 120 Hours. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Eiso Kant, CTO of poolside AI, discusses the company's approach to building frontier AI foundation models, with a particular focus on software development. Their distinctive strategy is reinforcement learning from code execution feedback, which they see as an important axis for scaling AI capabilities beyond simply increasing model size or data volume. Kant predicts human-level AI in knowledge work could be achieved within 18-36 months, outlining poolside's vision to dramatically increase software development productivity and accessibility.

SPONSOR MESSAGES:
***
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focused on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***

Eiso Kant:
https://x.com/eisokant
https://poolside.ai/

TRANSCRIPT:
https://www.dropbox.com/scl/fi/szepl6taqziyqie9wgmk9/poolside.pdf?rlkey=iqar7dcwshyrpeoz0xa76k422&dl=0

TOC:
1. Foundation Models and AI Strategy
[00:00:00] 1.1 Foundation Models and Timeline Predictions for AI Development
[00:02:55] 1.2 Poolside AI's Corporate History and Strategic Vision
[00:06:48] 1.3 Foundation Models vs Enterprise Customization Trade-offs
2. Reinforcement Learning and Model Economics
[00:15:42] 2.1 Reinforcement Learning and Code Execution Feedback Approaches
[00:22:06] 2.2 Model Economics and Experimental Optimization
3. Enterprise AI Implementation
[00:25:20] 3.1 Poolside's Enterprise Deployment Strategy and Infrastructure
[00:26:00] 3.2 Enterprise-First Business Model and Market Focus
[00:27:05] 3.3 Foundation Models and AGI Development Approach
[00:29:24] 3.4 DeepSeek Case Study and Infrastructure Requirements
4. LLM Architecture and Performance
[00:30:15] 4.1 Distributed Training and Hardware Architecture Optimization
[00:33:01] 4.2 Model Scaling Strategies and Chinchilla Optimality Trade-offs
[00:36:04] 4.3 Emergent Reasoning and Model Architecture Comparisons
[00:43:26] 4.4 Balancing Creativity and Determinism in AI Models
[00:50:01] 4.5 AI-Assisted Software Development Evolution
5. AI Systems Engineering and Scalability
[00:58:31] 5.1 Enterprise AI Productivity and Implementation Challenges
[00:58:40] 5.2 Low-Code Solutions and Enterprise Hiring Trends
[01:01:25] 5.3 Distributed Systems and Engineering Complexity
[01:01:50] 5.4 GenAI Architecture and Scalability Patterns
[01:01:55] 5.5 Scaling Limitations and Architectural Patterns in AI Code Generation
6. AI Safety and Future Capabilities
[01:06:23] 6.1 Semantic Understanding and Language Model Reasoning Approaches
[01:12:42] 6.2 Model Interpretability and Safety Considerations in AI Systems
[01:16:27] 6.3 AI vs Human Capabilities in Software Development
[01:33:45] 6.4 Enterprise Deployment and Security Architecture

CORE REFS (see shownotes for URLs/more refs):
[00:15:45] Research demonstrating how training on model-generated content leads to distribution collapse in AI models, Ilia Shumailov et al. (Key finding on synthetic data risk)
[00:20:05] Foundational paper introducing Word2Vec for computing word vector representations, Tomas Mikolov et al. (Seminal NLP technique)
[00:22:15] OpenAI O3 model's breakthrough performance on ARC Prize Challenge, OpenAI (Significant AI reasoning benchmark achievement)
[00:22:40] Seminal paper proposing a formal definition of intelligence as skill-acquisition efficiency, François Chollet (Influential AI definition/philosophy)
[00:30:30] Technical documentation of DeepSeek's V3 model architecture and capabilities, DeepSeek AI (Details on a major new model)
[00:34:30] Foundational paper establishing optimal scaling laws for LLM training, Jordan Hoffmann et al. (Key paper on LLM scaling)
[00:45:45] Seminal essay arguing that scaling computation consistently trumps human-engineered solutions in AI, Richard S. Sutton (Influential "Bitter Lesson" perspective)
We gotta talk about this
To unpack some of the most topical questions in AI, I'm joined by two fellow AI podcasters: Swyx and Alessio Fanelli, co-hosts of the Latent Space podcast. We've been wanting to do a crossover episode for a while and finally made it happen. Swyx brings deep experience from his time at AWS, Temporal, and Airbyte, and is now focused on AI agents and dev tools. Alessio is an investor at Decibel, where he's been backing early technical teams pushing the boundaries of infrastructure and applied AI. Together they run Latent Space, a technical newsletter and podcast by and for AI engineers. To subscribe or learn more about Latent Space, click here: https://www.latent.space/

[0:00] Intro
[1:08] Reflecting on AI Surprises of the Past Year
[2:24] Open Source Models and Their Adoption
[6:48] The Rise of GPT Wrappers
[7:49] Challenges in AI Model Training
[10:33] Over-hyped and Under-hyped AI Trends
[24:00] The Future of AI Product Market Fit
[30:27] Google's Momentum and Customer Support Insights
[33:16] Emerging AI Applications and Market Trends
[35:13] Challenges and Opportunities in AI Development
[39:02] Defensibility in AI Applications
[42:42] Infrastructure and Security in AI
[50:04] Future of AI and Unanswered Questions
[55:34] Quickfire

With your co-hosts:
@jacobeffron - Partner at Redpoint, Former PM Flatiron Health
@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn
@ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare)
@jordan_segall - Partner at Redpoint
AdTech Heroes - Interviews with Advertising Technology Executives
In this episode of the AdTech Heroes Podcast, Dal sits down with James Sandham, Global Head of Digital & AI Development at MullenLowe Global, to discuss the evolving landscape of AI ethics in the advertising industry. They explore the impact of generative AI and the ethical considerations surrounding AI outputs. The conversation also delves into the evolution of AI models and how they are utilized in campaign generation, emphasizing the importance of human expertise in the process. Interested in being a guest? Contact us: adtechheroespodcast.com/contact
- Interview with Doc Pete Chambers and Special Reports (0:00)
- DeepSeek V3 AI Model and Its Capabilities (2:39)
- Challenges in AI Development and Future Plans (5:10)
- China's AI Advancements and US Education System (7:21)
- The Era of Self-Aware AI and Its Implications (13:49)
- Germany's Self-Sabotage and Western Nations' Satanic Practices (21:41)
- The Role of Satanism in Western Governments and Societal Practices (29:13)
- The End Times and the Role of God in Human History (38:02)
- Book Review: Global Tyranny, Step by Step (52:27)
- Book Review: Everyday Survival (1:00:04)
- Customer Appreciation Week Promotions (1:23:00)
- Introduction of Doc Pete Chambers (1:31:04)
- Philosophy and Conflict Resolution (1:32:29)
- Conflict Resolution with Drug Cartels (1:35:10)
- Impact of Border Security on Cartel Operations (1:42:40)
- Challenges of Dealing with Human Trafficking (1:51:22)
- Support for the Remnant Ministry (1:59:31)
- Spiritual and Practical Approaches to Conflict Resolution (2:14:37)
- The Role of the Remnant Ministry in Disaster Relief (2:23:18)
- The Importance of Faith and Perseverance (2:23:32)
- Conclusion and Future Plans (2:25:12)
- Introduction to the Seed Kit Campaign (2:29:30)
- Details of the Seed Kits (2:29:46)
- Features and Benefits of the Seed Kits (2:31:00)
- Additional Information and Support (2:32:15)
- Closing Remarks (2:32:34)

For more updates, visit: http://www.brighteon.com/channel/hrreport

NaturalNews videos would not be possible without you; as always, we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency.
▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/
▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html
▶️ Brighteon: https://www.brighteon.com/channels/hrreport
▶️ Join Our Social Network: https://brighteon.social/@HealthRanger
▶️ Check In Stock Products at: https://PrepWithMike.com
In today's episode of Double Tap, Steven and Shaun uncover the mystery behind the Hable One's secret evolution into the Hable Easy, a smartphone navigation device that's been flying under the radar for a year. They share real-world use cases, highlight its accessibility benefits, and question why no one, including them, knew about this sooner. The hosts also dive deep into Apple's current AI ambitions and ask the hard question: Is Apple too late to the AI race? With mounting rumors of internal chaos and a Siri rebuild, Steven doesn't hold back on his frustrations.

Other hot topics include:
- The confusing cable requirements for AirPods Max's new "lossless" upgrade
- Amazon's Alexa Plus and which devices will (or won't) support it
- Quebec's tough new language law that's paused OtterBox shipments
- Rivo 2 keyboard struggles

Stay tuned for Steven's candid take on Apple's AI mess, new accessibility gear, and why the team's AirPods Max might be dog-approved… literally.

Relevant Links:
Hable Easy Product Page: https://www.iamhable.com
AT Guys (U.S. Distributor): https://www.atguys.com
Sight and Sound Technology (UK Distributor): https://www.sightandsound.co.uk
Hable Easy Webinar Info (April 2nd): https://www.sightandsound.co.uk/webinars
OtterBox Info: https://www.otterbox.com
Bill 96 Quebec Regulation: https://www.cfib-fcei.ca
Verge article on Alexa Plus: https://www.theverge.com

Get in touch with Double Tap by emailing us at feedback@doubletaponair.com or by calling 1-877-803-4567 and leaving us a voicemail. You can also contact us via WhatsApp on 1-613-481-0144 or visit doubletaponair.com/whatsapp to connect. We are also across social media including X, Mastodon and Facebook.
Double Tap is available daily on AMI-audio across Canada, on podcast worldwide and now on YouTube.

Chapter Markers:
00:00 Introduction
03:43 Introduction of Hable Easy: A New Assistive Tech Device
20:06 Alexa Plus: Updates and Device Compatibility
30:16 Quebec's New Bilingual Marketing Law
32:10 Apple AirPods Max: Lossless Audio Update
41:42 The Future of Apple Intelligence and Siri
54:57 The Complacency of Apple in AI Development

Find Double Tap online: YouTube, Double Tap Website

Join the conversation and add your voice to the show either by calling in, sending an email or leaving us a voicemail!
Email: feedback@doubletaponair.com
Phone: 1-877-803-4567
The AI Breakdown: Daily Artificial Intelligence News and Discussions
Microsoft and OpenAI's complex relationship is heating up, shaping the future of AI in unexpected ways. As tensions grow, Microsoft pushes forward independently with new models called MAI, which directly compete with OpenAI's reasoning models. Meanwhile, OpenAI diversifies its partnerships, signing a massive cloud deal with CoreWeave and teaming up with Oracle and SoftBank for Project Stargate.

Brought to you by:
KPMG – Go to www.kpmg.us/ai to learn more about how KPMG can help you drive value with our AI solutions.
Vanta – Simplify compliance – https://vanta.com/nlw
The Agent Readiness Audit from Superintelligent – Go to https://besuper.ai/ to request your company's agent readiness score.

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Subscribe to the newsletter: https://aidailybrief.beehiiv.com/
Join our Discord: https://bit.ly/aibreakdown
Technology doesn't force us to do anything; it merely opens doors. But military and economic competition pushes us through. That's how Allan Dafoe, director of frontier safety and governance at Google DeepMind, explains one of the deepest patterns in technological history: once a powerful new capability becomes available, societies that adopt it tend to outcompete those that don't. Those who resist too much can find themselves taken over or rendered irrelevant.

These highlights are from episode #212 of The 80,000 Hours Podcast: Allan Dafoe on why technology is unstoppable & how to shape AI development anyway, and include:
Who's Allan Dafoe? (00:00:00)
Astounding patterns in macrohistory (00:00:23)
Are humans just along for the ride when it comes to technological progress? (00:03:58)
Flavours of technological determinism (00:07:11)
The super-cooperative AGI hypothesis and backdoors (00:12:50)
Could having more cooperative AIs backfire? (00:19:16)
The offence-defence balance (00:24:23)

These aren't necessarily the most important or even most entertaining parts of the interview, so if you enjoy this, we strongly recommend checking out the full episode! And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.

Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong
When the American company OpenAI released ChatGPT, it was the first time that a lot of people had ever interacted with generative AI. ChatGPT has become so popular that, for many, it's now synonymous with artificial intelligence. But that may be changing. Earlier this year a Chinese startup called DeepSeek launched its own AI chatbot, sending shockwaves across Silicon Valley. According to DeepSeek, their model, DeepSeek-R1, is just as powerful as ChatGPT but was developed at a fraction of the cost. In other words, this isn't just a new company; it could be an entirely different approach to building artificial intelligence.

To try and understand what DeepSeek means for the future of AI, and for American innovation, I wanted to speak with Karen Hao. Hao was the first reporter to ever write a profile of OpenAI and has covered AI for MIT Technology Review, The Atlantic and The Wall Street Journal. So she's better positioned than almost anyone to make sense of this seemingly monumental shift in the landscape of artificial intelligence.

Mentioned:
"The messy, secretive reality behind OpenAI's bid to save the world," by Karen Hao

Further Reading:
"DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning," by DeepSeek-AI and others
"A Comparison of DeepSeek and Other LLMs," by Tianchen Gao, Jiashun Jin, Zheng Tracy Ke, Gabriel Moryoussef
"Technical Report: Analyzing DeepSeek-R1's Impact on AI Development," by Azizi Othman
We're experimenting and would love to hear from you! In this episode of 'Discover Daily' by Perplexity, we delve into the latest developments in tech and geopolitics. OpenAI is set to revolutionize its business model with the introduction of advanced AI agents, offering monthly subscription plans ranging from $2,000 to $20,000. These agents are designed to perform complex tasks autonomously, leveraging advanced language models and decision-making algorithms. This move is supported by a significant $3 billion investment from SoftBank, highlighting the potential for these agents to contribute significantly to OpenAI's future revenue.

The Pacific island nation of Nauru is also making headlines with its controversial 'golden passport' scheme. For $105,000, individuals can gain citizenship and visa-free access to 89 countries. This initiative aims to fund Nauru's climate change mitigation efforts, as the island faces existential threats from rising sea levels. However, the program raises ethical concerns about criminal exploitation, vetting issues, and the commodification of national identity. As Nauru navigates these challenges, it will be crucial to monitor the program's effectiveness in providing necessary funds for climate adaptation without compromising national security or ethical standards.

Our main story focuses on former Google CEO Eric Schmidt's opposition to a U.S. government-led 'Manhattan Project' for developing Artificial General Intelligence (AGI). Schmidt argues that such a project could escalate international tensions and trigger a dangerous AI arms race, particularly with China. Instead, he advocates for a more cautious approach, emphasizing defensive strategies and international cooperation in AI advancement.
This stance reflects a growing concern about the risks of unchecked superintelligence development and highlights the need for policymakers and tech leaders to prioritize AI safety and collaboration.

From Perplexity's Discover Feed:
https://www.perplexity.ai/page/openai-s-20000-ai-agent-nvz8rzw7TZ.ECGL9usO2YQ
https://www.perplexity.ai/page/nauru-sells-citizenship-for-re-mWT.fYg_Su.C7FVaMGqCfQ
https://www.perplexity.ai/page/eric-schmidt-opposes-agi-manha-pymGB79nR.6rRtLvcqONIA

Introducing Perplexity Deep Research:
https://www.perplexity.ai/hub/blog/introducing-perplexity-deep-research

Perplexity is the fastest and most powerful way to search the web. Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you're interested in. Take the world's knowledge with you anywhere. Available on iOS and Android.

Join our growing Discord community for the latest updates and exclusive content. Follow us on: Instagram, Threads, X (Twitter), YouTube, LinkedIn
Create agentic solutions quickly and efficiently with Azure AI Foundry. Choose the right models, ground your agents with knowledge, and seamlessly integrate AI into your development workflow, from early experimentation to production. Test, optimize, and deploy with built-in evaluation and management tools. See how to leverage the Azure AI Foundry SDK to code and orchestrate intelligent agents, monitor performance with tracing and assessments, and streamline DevOps with production-ready management. Yina Arenas, from the Azure AI Foundry team, shares its extensive capabilities as a unified platform that supports you throughout the entire AI development lifecycle.

► QUICK LINKS:
00:00 - Create agentic solutions with Azure AI Foundry
00:20 - Model catalog in Azure AI Foundry
02:15 - Experiment in the Azure AI Foundry playground
03:10 - Create and customize agents
04:13 - Assess and improve agents
05:58 - Monitor and manage apps
06:50 - Create a multi-agentic app in code
09:26 - Create a Sender agent
10:39 - How to connect orchestration logic
11:25 - Watch agents work
12:26 - Wrap up

► Link References
Get started with Azure AI Foundry at https://ai.azure.com

► Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
• Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries
• Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
• Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast

► Keep getting this insider knowledge, join us on social:
• Follow us on Twitter: https://twitter.com/MSFTMechanics
• Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
• Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
• Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
Recorded at our 2025 Technology, Media and Telecom (TMT) Conference, TMT Credit Research Analyst Lindsay Tyler joins Head of Investment Grade Debt Coverage Michelle Wang to discuss how the industry is strategically raising capital to fund growth.

----- Transcript -----

Lindsay Tyler: Welcome to Thoughts on the Market. I'm Lindsay Tyler, Morgan Stanley's Lead Investment Grade TMT Credit Research Analyst, and I'm here with Michelle Wang, Head of Investment Grade Debt Coverage in Global Capital Markets. On this special episode, we're recording at the Morgan Stanley Technology, Media, and Telecom (TMT) Conference, and we will discuss the latest on the technology space from the fixed income perspective. It's Thursday, March 6th at 12 pm in San Francisco. What a week it's been. Last I heard, we had over 350 companies here in attendance. To set the stage for our discussion, technology has grown from about 2 percent of the broader investment grade market about two decades ago to almost 10 percent now, though that is still a relatively small percentage compared to its weighting in the equity market. So, can you address two questions? First, why was tech historically such a small part of investment grade? And second, what has driven the growth since?

Michelle Wang: Technology is still a relatively young industry, right? I'm in my 40s, and well over 90 percent of the companies that I cover were founded well within my lifetime. Add to that the fact that investment grade debt is, by definition, a later-stage capital raising tool: when the business of these companies reaches sufficient scale and cash generation to be rated investment grade by the rating agencies, you wind up with just a small subset of the overall investment grade universe. The second question, on what has been driving the growth? Twofold. Number one, the organic maturation of the tech industry results in an increasing number of scaled investment grade companies.
And then secondly, the increasing use of debt as a cheap source of capital to fund their growth. This could be to fund R&D or CapEx or, in some cases, M&A.

Lindsay Tyler: Right, and I would just add in this context that my view for this year on technology credit is a more neutral one, against a backdrop of being more cautious on the communications and media space. Part of that is driven by the spread compression and the lack of dispersion that we see in the market. And you mentioned M&A and capital allocation; I do think that financial policy and changes there, whether it's investment, M&A, or shareholder returns, will be the main driver of credit spreads. But let's turn back to the conference; I mentioned investment. Let's talk about investment. AI has dominated the conversation here at the conference the past two years, and this year is no different. Morgan Stanley's research department has four key investment themes. One of those is AI and tech diffusion. But from the fixed income angle, there is that focus on ongoing and upcoming hyperscaler AI CapEx needs.

Michelle Wang: Yep.

Lindsay Tyler: There are significant cash flows generated by many of these companies, but we just discussed that the investment grade tech space has grown relative to the index in recent history. Can you discuss the scale of the technology CapEx that we're talking about and the related implications from your perspective?

Michelle Wang: Let's actually get into some of the numbers. In the past three years, total hyperscaler CapEx has increased from [$]125 billion three years ago to [$]220 billion today, and is expected to exceed [$]300 billion in 2027. The hyperscalers have all publicly stated that generative AI is key to their future growth aspirations. So, why are they spending all this money? They're investing heavily in the digital infrastructure to propel this growth.
These companies, however, as you've pointed out, are some of the most scaled, best capitalized companies in the entire world. They have a combined market cap of [$]9 trillion. Among them, their balance sheet cash ranges from [$]70 to [$]100 billion per company. And their annual free cash flow, the money that they generate organically, ranges from [$]30 to [$]75 billion. So they can certainly fund some of this CapEx organically. However, the unprecedented amount of spend for GenAI raises the probability that these hyperscalers could choose to raise capital externally.

Lindsay Tyler: Got it.

Michelle Wang: Now, how this capital is raised is where it gets really interesting. The most straightforward way to raise capital for a lot of these companies is just to do an investment grade bond deal.

Lindsay Tyler: Yep.

Michelle Wang: However, there are other, more customized funding solutions available for them to achieve objectives like more favorable accounting or rating agency treatment, or ways for them to offload some of their CapEx to a private credit firm, even if that means these occur at a higher cost of capital.

Lindsay Tyler: You touched on private credit. I'd love to dig in there. These bespoke capital solutions.

Michelle Wang: Right.

Lindsay Tyler: I have seen it in the semiconductor space and telecom infrastructure, but can you please shed some more light? How has this trend come to fruition? How are companies assessing the opportunity? And what are other key implications that you would flag?

Michelle Wang: Yeah, for the benefit of the audience, Lindsay, I think just to touch a little bit…

Lindsay Tyler: Some definitions.

Michelle Wang: Yes, some definitions around…

Lindsay Tyler: Get some context.

Michelle Wang: What we're talking about.

Lindsay Tyler: Yes. So, I think what you're referring to is investment grade companies doing asset level financing.
Usually in conjunction with a private credit firm. And like all good financing trends that came before it, this one resulted from the serendipitous intersection of supply and demand of capital. On the supply side, the private credit pocket of capital, driven by large pools of insurance capital, is now north of $2 trillion, and it has increased 10x in scale in the past decade. So the need to deploy these funds is driving private credit firms to seek out ways to invest in investment grade companies in a yield-enhanced manner.

Lindsay Tyler: Right. And typically, we're saying 150 to 200 basis points greater than what an IG bond would yield.

Michelle Wang: That's exactly right. That's when it starts to get interesting for them, right? And then the demand for this type of capital has always existed in other industries that are more asset-heavy, like telcos. However, the new development of late is the demand for capital from tech, due to two megatrends that we're seeing. The first is semiconductors: building these chip factories is an extremely capital-intensive exercise, so it creates a demand for capital. And the second megatrend is what we've seen with the hyperscalers and generative AI: building data centers and digital infrastructure for generative AI is also extremely expensive, and that creates another pocket of demand for capital that private credit conveniently serves.

Lindsay Tyler: Right.

Michelle Wang: So look, I think we've talked about the ways that companies are using these tools. I'm interested to get your view, Lindsay, on the investor perspective.

Lindsay Tyler: Sure.

Michelle Wang: How do investors think about some of these more bespoke solutions?

Lindsay Tyler: I would say that with deals that have this touch of extra complexity, it does feel that investor communication and understanding is all-important.
And I have found that some of these points you're raising, whether it's the spread pickup, the insurance capital at the asset managers, or layering in ratings implications and the deal terms, are all important for investors to get more comfortable and have a better understanding of these types of deals. The last topic I do want us to address is the macro environment. This has been another key theme at the conference and in this recent earnings season. Whether it's rate moves this year, the talk of M&A, or tariffs: what's your sense of how companies are viewing and assessing macro in their decision making?

Michelle Wang: There are three components to how they're thinking about it. The first is the rate move. The fact that we're 50 to 60 basis points lower in Treasury yields in the past month is welcome news for any company looking to issue debt. The second thing I'll say here is about credit spreads: they remain extremely tight, speaking to the incredible resilience of the investment grade investor base. The last thing I'll talk about is, I think, the uncertainty. [Because] that's what we're hearing a ton about in all the conversations that we've had with companies that have presented here today at the conference.

Lindsay Tyler: Yeah. From my perspective, also the regulatory environment around that M&A, whether or not companies will make the move to maybe be more acquisitive with the current new administration.

Michelle Wang: Right. So until the dust settles on some of these issues, it's really difficult as a corporate decision maker to do things like big transformative M&A, or to take a company public, when you don't know what could happen both from a market environment and, as you point out, a regulatory standpoint. The thing that's interesting is that raising debt capital as an investment grade company has some counter-cyclical dynamics to it.
Because risk-off sentiment usually translates into lower Treasury yields and a more favorable cost of debt. And then the second point is that when companies are risk-averse, it sometimes drives cash-hoarding behavior, right? So companies will raise what they call rainy day liquidity and park it on the balance sheet, just to feel a little bit better about where their balance sheets are. To make sure they're in good shape…

Lindsay Tyler: Yeah, deal with the maturities that they have right here in the near term.

Michelle Wang: That's exactly right. So, I think as a consequence of that, we do see some tailwinds for debt issuance volumes in an uncertain environment.

Lindsay Tyler: Got it. Well, I appreciate all your insights. This has been great. Thank you for taking the time, Michelle, to talk during such a busy week.

Michelle Wang: It's great speaking with you, Lindsay.

Lindsay Tyler: And thanks to everyone listening in to this special episode recorded at the Morgan Stanley TMT Conference in San Francisco. If you enjoy Thoughts on the Market, please leave us a review wherever you listen and share the podcast with a friend or colleague today.
On today's Unsupervised Learning, Mike Schroepfer (ex-CTO of Meta and founder of Gigascale Capital) reveals why energy is a key bottleneck holding AI progress back. Mike discusses how we can scale energy production to democratize AI globally and explores AI's role in climate change. He also reflects on a decade as Meta's CTO and how AI coding is transforming the CTO role. Finally, he offers predictions on the future of AI developer tools, VR, and open-source models.

[0:00] Intro
[0:43] AI's Role in Energy and Climate Change
[4:32] Innovative Energy Solutions
[14:50] Open Source and AI Development
[22:35] Challenges in Chip Design
[24:04] Balancing Data Center Capacity
[25:55] The Future of VR and AI Integration
[29:41] AI's Role in Climate Solutions
[31:41] AI in Material Science and Beyond
[34:31] Personal AI Assistants and Their Impact
[38:47] Reflections on AI and Future Predictions
[41:23] Quickfire

With your co-hosts:
@jacobeffron - Partner at Redpoint, Former PM Flatiron Health
@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn
@ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare)
@jordan_segall - Partner at Redpoint
"HR Heretics" | How CPOs, CHROs, Founders, and Boards Build High Performing Companies
In this installment of AI Corner, Siadhal Magos, the CEO of Metaview, and Nolan Church discuss 'agentic AI' in a practical and relevant way for HR pros. They dig into what agentic AI is today vs. future expectations and debate current tool limitations. They also share their current AI tool stack, their workflows, and their overall approach of treating AI like a 'junior colleague'.

*Email us your questions or topics for Kelli & Nolan: hrheretics@turpentine.co
For coaching and advising, inquire at https://kellidragovich.com/
HR Heretics is a podcast from Turpentine.
—
Artificial intelligence is radically transforming software development. AI-assisted coding tools are generating billions in investment, promising faster development cycles, and shifting engineering roles from code authors to code editors. But how does this impact software quality, security, and team dynamics? How can product teams embrace AI without falling into the hype? In this episode, AI-assisted Agile expert Mike Gehard shares his hands-on experiments with AI in software development. From his deep background at Pivotal Labs to his current work pushing the boundaries of AI-assisted coding, Mike reveals how AI tools can amplify quality practices, speed up prototyping, and even challenge the way we think about source code. He discusses the future of pair programming, the evolving role of test-driven development, and how engineers can better focus on delivering user value. Unlock the full potential of your product team with Integral's player coaches, experts in lean, human-centered design. Visit integral.io/convergence for a free Product Success Lab workshop to gain clarity and confidence in tackling any product design or engineering challenge.

Inside the episode...
- Mike's background at Pivotal Labs and why he kept returning
- How AI is changing the way we think about source code as a liability
- Why test-driven development still matters in an AI-assisted world
- The future of pair programming with AI copilots
- The importance of designing better software in an AI-driven development process
- Using AI to prototype faster and build user-facing value sooner
- Lessons learned from real-world experiments with AI-driven development
- The risks of AI-assisted software, from hallucinations to security

Mentioned in this episode:
Mike's Substack: https://aiassistedagiledevelopment.substack.com/
Mike's Github repo: https://github.com/mikegehard/ai-assisted-agile-development
Pivotal Labs: https://en.wikipedia.org/wiki/Pivotal_Labs
12-Factor Apps: https://12factor.net/
GitHub Copilot: https://github.com/features/copilot
Cloud Foundry: https://en.wikipedia.org/wiki/Cloud_Foundry
Lean Startup by Eric Ries: https://www.amazon.com/Lean-Startup-Entrepreneurs-Continuous-Innovation/dp/0307887898
Refactoring by Martin Fowler and Kent Beck: https://www.amazon.com/Refactoring-Improving-Existing-Addison-Wesley-Signature/dp/0134757599
Dependabot: https://github.com/dependabot
Tessl CEO Guy Podjarny's talk: https://youtu.be/e1a3WuxTY-k
Aider AI pair programming terminal: https://aider.chat/
Gemini LLM: https://gemini.google.com/app
Perplexity AI: https://www.perplexity.ai/
DeepSeek: https://www.deepseek.com/
Ian Cooper's talk on TDD: https://www.youtube.com/watch?v=IN9lftH0cJc
Mike's newest mountain bike, IBIS Ripmo V2S: https://www.ibiscycles.com/bikes/past-models/ripmo-v2s
Mike's recommended house slippers: https://us.giesswein.com/collections/mens-wool-slippers/products/wool-slippers-dannheim
Sorba Chattanooga Mountain Biking Trails: https://www.sorbachattanooga.org/localtrails

Subscribe to the Convergence podcast wherever you get podcasts, including video episodes on YouTube at youtube.com/@convergencefmpodcast

Learn something?
Give us a 5-star review and like the podcast on YouTube. It's how we grow.
- Introduction and News Segment (0:10) - Trump and Pfizer CEO Introduction (2:56) - RFK Jr. and Direct-to-Consumer Drug Advertising (4:35) - Special Report on Trump's Potential Ban on COVID Vaccines (6:06) - Call for Mass Arrests and Full Disclosure (13:35) - The FDA as a Grave Threat to America (15:07) - Interview with Mike Ferris on UBI and Economic Collapse (26:01) - Music Video: Going Back in Time is Coming Home (30:40) - Commentary on the Song and Its Message (1:06:21) - Special Report: Humanity's Future with AI (1:07:06) - Conclusion and Call to Action (1:16:58) - Replacement Theory and British Leadership (1:18:22) - British Military's Weakness and Future Conflict (1:25:41) - Historical Context and American Independence (1:28:34) - Bank of England's Financial Crisis (1:31:20) - Exploring Tom Paine's Book on Elite Manipulation (1:35:04) - Jim Marrs' Book on Digital Age Mysteries (1:41:25) - Interview with Michael Ferris on AI and Gold (2:02:56) - The Role of AI in the Future Economy (2:21:46) - The Future of Work and Education (2:32:31) - The Importance of Decentralization in AI Development (2:33:28) - AI and Human Creativity (2:41:05) - Decentralized Agriculture and Local Robotics (2:43:51) - Future Outlook and Economic Revolution (2:45:59) - Confirmed Appointments and Potential Changes (2:47:50) - Humanity's Future with AI (2:50:33) - Military Operations and Cartel Threats (2:55:54) - Technological Solutions and Border Security (2:59:37) - Global Instability and Travel Advisories (3:00:08) - European Collapse and Future Outlook (3:02:53) - Censorship and the Fight for Free Speech (3:05:02) - Final Thoughts and Future Predictions (3:07:28) For more updates, visit: http://www.brighteon.com/channel/hrreport NaturalNews videos would not be possible without you, as always we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom
of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency. ▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/ ▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html ▶️ Brighteon: https://www.brighteon.com/channels/hrreport ▶️ Join Our Social Network: https://brighteon.social/@HealthRanger ▶️ Check In Stock Products at: https://PrepWithMike.com
We're experimenting and would love to hear from you! In this episode of 'Discover Daily', we begin with rumors about Apple's iPhone 17 lineup. Recent leaks suggest the phones will have a dramatic redesign, featuring a horizontal camera bar across all models and introducing the ultra-thin iPhone 17 Air. The Pro Max variant may also feature cutting-edge metalens technology that could transform the iconic Dynamic Island, potentially setting new standards for smartphone design. OpenAI's latest update to ChatGPT marks a significant shift in AI interaction, removing restrictive warning messages while maintaining essential safety protocols. This change, championed by product head Nick Turley, allows for more natural conversations around complex topics like mental health and fiction, addressing long-standing concerns about the platform's limitations while ensuring responsible AI usage. The final story features an innovative breakthrough from Filipino scientists at Ateneo de Manila University, who have developed a cost-effective method to create transparent aluminum oxide. This remarkable achievement uses simple microdroplets of acid and minimal electricity, potentially revolutionizing industries from electronics to solar energy. While experts remain cautiously optimistic about scaling challenges, this development could transform everything from smartphone screens to building materials, showcasing how innovative thinking can solve complex engineering challenges.

From Perplexity's Discover Feed:
https://www.perplexity.ai/page/iphone-17-design-may-be-drasti-oylUbPVXRh.42WcTn.8bpg
https://www.perplexity.ai/page/chatgpt-removes-content-warnin-IJNbBZ5OTT2aLK9HSoW84g
https://www.perplexity.ai/page/see-through-aluminum-breakthro-ahsOUUCvQfCTByCO5ylvSA

**Introducing Perplexity Deep Research:** https://www.perplexity.ai/hub/blog/introducing-perplexity-deep-research

Perplexity is the fastest and most powerful way to search the web.
Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you're interested in. Take the world's knowledge with you anywhere. Available on iOS and Android. Join our growing Discord community for the latest updates and exclusive content. Follow us on: Instagram, Threads, X (Twitter), YouTube, LinkedIn
This week, we dive into DeepSeek. SallyAnn DeLucia, Product Manager at Arize, and Nick Luzio, a Solutions Engineer, break down key insights on a model that has been dominating headlines for its significant breakthrough in inference speed over other models. What's next for AI (and open source)? From training strategies to real-world performance, here's what you need to know.

Read a summary: https://arize.com/blog/how-deepseek-is-pushing-the-boundaries-of-ai-development/

Learn more about AI observability and evaluation in our course, join the Arize AI Slack community, or get the latest on LinkedIn and X.
David is an OG in AI who has been at the forefront of many of the major breakthroughs of the past decade. His resume: VP of Engineering at OpenAI, a key contributor to Google Brain, co-founder of Adept, and now leading Amazon's SF AGI Lab. In this episode we focused on how far test-time compute gets us, the real implications of DeepSeek, what agent milestones he's looking for, and more.

[0:00] Intro
[1:14] DeepSeek Reactions and Market Implications
[2:44] AI Models and Efficiency
[4:11] Challenges in Building AGI
[7:58] Research Problems in AI Development
[11:17] The Future of AI Agents
[15:12] Engineering Challenges and Innovations
[19:45] The Path to Reliable AI Agents
[21:48] Defining AGI and Its Impact
[22:47] Challenges and Gating Factors
[24:05] Future Human-Computer Interaction
[25:00] Specialized Models and Policy
[25:58] Technical Challenges and Model Evaluation
[28:36] Amazon's Role in AGI Development
[30:33] Data Labeling and Team Building
[36:37] Reflections on OpenAI
[42:12] Quickfire

With your co-hosts:
@jacobeffron - Partner at Redpoint, Former PM Flatiron Health
@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn
@ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare)
@jordan_segall - Partner at Redpoint
Prof. Jakob Foerster, a leading AI researcher at Oxford University and Meta, and Chris Lu, a researcher at OpenAI, explain how AI is moving beyond just mimicking human behaviour to creating truly intelligent agents that can learn and solve problems on their own. Foerster champions open-source AI for responsible, decentralised development. He addresses AI scaling, goal misalignment (Goodhart's Law), and the need for holistic alignment, offering a quick look at the future of AI and how to guide it.

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. Check out their super fast DeepSeek R1 hosting! https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***

TRANSCRIPT/REFS: https://www.dropbox.com/scl/fi/yqjszhntfr00bhjh6t565/JAKOB.pdf?rlkey=scvny4bnwj8th42fjv8zsfu2y&dl=0

Prof. Jakob Foerster
https://x.com/j_foerst
https://www.jakobfoerster.com/
University of Oxford Profile: https://eng.ox.ac.uk/people/jakob-foerster/
Chris Lu: https://chrislu.page/

TOC
1. GPU Acceleration and Training Infrastructure
[00:00:00] 1.1 ARC Challenge Criticism and FLAIR Lab Overview
[00:01:25] 1.2 GPU Acceleration and Hardware Lottery in RL
[00:05:50] 1.3 Data Wall Challenges and Simulation-Based Solutions
[00:08:40] 1.4 JAX Implementation and Technical Acceleration
2. Learning Frameworks and Policy Optimization
[00:14:18] 2.1 Evolution of RL Algorithms and Mirror Learning Framework
[00:15:25] 2.2 Meta-Learning and Policy Optimization Algorithms
[00:21:47] 2.3 Language Models and Benchmark Challenges
[00:28:15] 2.4 Creativity and Meta-Learning in AI Systems
3. Multi-Agent Systems and Decentralization
[00:31:24] 3.1 Multi-Agent Systems and Emergent Intelligence
[00:38:35] 3.2 Swarm Intelligence vs Monolithic AGI Systems
[00:42:44] 3.3 Democratic Control and Decentralization of AI Development
[00:46:14] 3.4 Open Source AI and Alignment Challenges
[00:49:31] 3.5 Collaborative Models for AI Development

REFS
[00:00:05] ARC Benchmark, Chollet: https://github.com/fchollet/ARC-AGI
[00:03:05] DRL Doesn't Work, Irpan: https://www.alexirpan.com/2018/02/14/rl-hard.html
[00:05:55] AI Training Data, Data Provenance Initiative: https://www.nytimes.com/2024/07/19/technology/ai-data-restrictions.html
[00:06:10] JaxMARL, Foerster et al.: https://arxiv.org/html/2311.10090v5
[00:08:50] M-FOS, Lu et al.: https://arxiv.org/abs/2205.01447
[00:09:45] JAX Library, Google Research: https://github.com/jax-ml/jax
[00:12:10] Kinetix, Mike and Michael: https://arxiv.org/abs/2410.23208
[00:12:45] Genie 2, DeepMind: https://deepmind.google/discover/blog/genie-2-a-large-scale-foundation-world-model/
[00:14:42] Mirror Learning, Grudzien, Kuba et al.: https://arxiv.org/abs/2208.01682
[00:16:30] Discovered Policy Optimisation, Lu et al.: https://arxiv.org/abs/2210.05639
[00:24:10] Goodhart's Law, Goodhart: https://en.wikipedia.org/wiki/Goodhart%27s_law
[00:25:15] LLM ARChitect, Franzen et al.: https://github.com/da-fr/arc-prize-2024/blob/main/the_architects.pdf
[00:28:55] AlphaGo, Silver et al.: https://arxiv.org/pdf/1712.01815.pdf
[00:30:10] Meta-learning, Lu, Towers, Foerster: https://direct.mit.edu/isal/proceedings-pdf/isal2023/35/67/2354943/isal_a_00674.pdf
[00:31:30] Emergence of Pragmatics, Yuan et al.: https://arxiv.org/abs/2001.07752
[00:34:30] AI Safety, Amodei et al.: https://arxiv.org/abs/1606.06565
[00:35:45] Intentional Stance, Dennett: https://plato.stanford.edu/entries/ethics-ai/
[00:39:25] Multi-Agent RL, Zhou et al.: https://arxiv.org/pdf/2305.10091
[00:41:00] Open Source Generative AI, Foerster et al.: https://arxiv.org/abs/2405.08597
Technology doesn't force us to do anything — it merely opens doors. But military and economic competition pushes us through.That's how today's guest Allan Dafoe — director of frontier safety and governance at Google DeepMind — explains one of the deepest patterns in technological history: once a powerful new capability becomes available, societies that adopt it tend to outcompete those that don't. Those who resist too much can find themselves taken over or rendered irrelevant.Links to learn more, highlights, video, and full transcript.This dynamic played out dramatically in 1853 when US Commodore Perry sailed into Tokyo Bay with steam-powered warships that seemed magical to the Japanese, who had spent centuries deliberately limiting their technological development. With far greater military power, the US was able to force Japan to open itself to trade. Within 15 years, Japan had undergone the Meiji Restoration and transformed itself in a desperate scramble to catch up.Today we see hints of similar pressure around artificial intelligence. Even companies, countries, and researchers deeply concerned about where AI could take us feel compelled to push ahead — worried that if they don't, less careful actors will develop transformative AI capabilities at around the same time anyway.But Allan argues this technological determinism isn't absolute. While broad patterns may be inevitable, history shows we do have some ability to steer how technologies are developed, by who, and what they're used for first.As part of that approach, Allan has been promoting efforts to make AI more capable of sophisticated cooperation, and improving the tests Google uses to measure how well its models could do things like mislead people, hack and take control of their own servers, or spread autonomously in the wild.As of mid-2024 they didn't seem dangerous at all, but we've learned that our ability to measure these capabilities is good, but imperfect. 
If we don't find the right way to 'elicit' an ability, we can miss that it's there. Subsequent research from Anthropic and Redwood Research suggests there's even a risk that future models may play dumb to avoid their goals being altered. That has led DeepMind to a "defence in depth" approach: carefully staged deployment starting with internal testing, then trusted external testers, then limited release, then watching how models are used in the real world. By not releasing model weights, DeepMind is able to back up and add additional safeguards if experience shows they're necessary. But with much more powerful and general models on the way, individual company policies won't be sufficient by themselves. Drawing on his academic research into how societies handle transformative technologies, Allan argues we need coordinated international governance that balances safety with our desire to get the massive potential benefits of AI in areas like healthcare and education as quickly as possible.

Host Rob and Allan also cover:
The most exciting beneficial applications of AI
Whether and how we can influence the development of technology
What DeepMind is doing to evaluate and mitigate risks from frontier AI systems
Why cooperative AI may be as important as aligned AI
The role of democratic input in AI governance
What kinds of experts are most needed in AI safety and governance
And much more

Chapters:
Cold open (00:00:00)
Who's Allan Dafoe? (00:00:48)
Allan's role at DeepMind (00:01:27)
Why join DeepMind over everyone else? (00:04:27)
Do humans control technological change? (00:09:17)
Arguments for technological determinism (00:20:24)
The synthesis of agency with tech determinism (00:26:29)
Competition took away Japan's choice (00:37:13)
Can speeding up one tech redirect history? (00:42:09)
Structural pushback against alignment efforts (00:47:55)
Do AIs need to be 'cooperatively skilled'? (00:52:25)
How AI could boost cooperation between people and states (01:01:59)
The super-cooperative AGI hypothesis and backdoor risks (01:06:58)
Aren't today's models already very cooperative? (01:13:22)
How would we make AIs cooperative anyway? (01:16:22)
Ways making AI more cooperative could backfire (01:22:24)
AGI is an essential idea we should define well (01:30:16)
It matters what AGI learns first vs last (01:41:01)
How Google tests for dangerous capabilities (01:45:39)
Evals 'in the wild' (01:57:46)
What to do given no single approach works that well (02:01:44)
We don't, but could, forecast AI capabilities (02:05:34)
DeepMind's strategy for ensuring its frontier models don't cause harm (02:11:25)
How 'structural risks' can force everyone into a worse world (02:15:01)
Is AI being built democratically? Should it? (02:19:35)
How much do AI companies really want external regulation? (02:24:34)
Social science can contribute a lot here (02:33:21)
How AI could make life way better: self-driving cars, medicine, education, and sustainability (02:35:55)

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions: Katy Moore
In this conversation, the boys discuss the cultural implications of Kendrick Lamar's performance at the Super Bowl halftime show, addressing the backlash against representation in media. They explore the themes of control, freedom of speech, and societal reactions to race and identity. The discussion then shifts to a debate about technology, specifically an AI bird feeder, leading to a broader conversation about the future of artificial intelligence and its potential impact on humanity. In this conversation, the boys delve into the competitive landscape of AI, discussing key players like Sam Altman, Elon Musk, and Larry Ellison. They explore the ethical implications of AI development, personal perspectives on consciousness merging, and the potential risks associated with AI, including the gray goo problem. Also, The CHO introduces a new segment: Today in Southern History, where this week he talks about the day Georgia seceded from the Union, and the ramifications it caused. CoreyRyanForrester.com to grab tickets to see Corey in Atlanta and Charleston! TraeCrowder.com to see Trae EVERYWHERE! DrewMorganComedy.com Subscribe to WeLoveCorey.com for bonus stuff from The CHO and read his latest essay at: https://coreyryanforrester.substack.com/p/they-not-like-us-the-annual-halftime Go to FactorMeals.com/WellRED50off and use code WellRED50off to get 50% off your first box of heat and eat nutritious meals! Takeaways: The outrage over Kendrick Lamar's performance reflects deeper societal issues. Cultural representation in media often sparks controversy and backlash. Freedom of speech is selectively applied in discussions about race and identity. The AI bird feeder debate highlights the complexities of technology in everyday life. Artificial intelligence is rapidly evolving and could have significant implications for the future. The conversation around AI often lacks nuance and understanding of its capabilities.
Humans may not be prepared for the consequences of advanced AI development. Cultural moments in America are increasingly diverse, challenging traditional norms. The future of AI could lead to both utopian and dystopian outcomes. The merging of technology and humanity raises ethical questions about identity and existence. AI is currently dominated by companies like Deep AI and Alibaba. Sam Altman is seen as a leading figure in AI technology. The ethical implications of AI development are concerning. Merging human consciousness with robotics raises moral questions. The gray goo problem illustrates potential AI risks. Media plays a significant role in shaping public perception of technology. Historical events can provide context for current discussions. Personal experiences can influence views on technology and health. Fitness discussions reveal the importance of health in daily life. Chapters 00:00 The Bold Beginnings of a Podcast Adventure 02:30 AI Bird Feeders: A New Age of Technology 05:56 Understanding AI: Definitions and Misconceptions 09:50 The Future of AI: Potential and Pitfalls 13:36 Philosophical Perspectives on AI and Its Impact 17:17 The Debate on AI's Impact 20:29 The Future of AI and Humanity 23:21 The Ethical Dilemmas of AI 26:48 The Role of Corporations in AI Development 30:26 The Intersection of AI and Human Experience 34:32 Reflections on History and AI's Future 41:50 The Cost of Innovation 42:06 Ego and Power in Tech 43:34 The Misunderstood Villains 44:32 Personal Accountability and Relationships 46:59 The Struggles of Running 51:58 The Debate on Biking 55:42 Upcoming Shows and Farewells 58:14 Putting on Airs: A Redneck Perspective 59:40 Squirrels and Family Drama: A Humorous Take 01:00:47 Kendrick Lamar's Halftime Show Controversy 01:04:00 Cultural Representation and Control in Entertainment
My guest today is Marc Andreessen. Marc is a co-founder of Andreessen Horowitz and one of Silicon Valley's most influential figures. He combines deep technical knowledge from his engineering background with broad historical understanding and strategic thinking about societal patterns. He last joined me on Invest Like the Best in 2021, and the playing field looks a lot different today. Marc goes deep on the seismic shifts reshaping technology and geopolitics. We discuss DeepSeek's open-source AI and what it means for the technological rivalry between America and China, his perspective on the evolution of power structures, and the transformation of the venture capital industry as a whole. Please enjoy my conversation with Marc Andreessen. Subscribe to Colossus Review. For the full show notes, transcript, and links to mentioned content, check out the episode page here. ----- This episode is brought to you by Ramp. Ramp's mission is to help companies manage their spend in a way that reduces expenses and frees up time for teams to work on more valuable projects. Ramp is the fastest-growing FinTech company in history, and it's backed by more of my favorite past guests (at least 16 of them!) than probably any other company I'm aware of. Go to Ramp.com/invest to sign up for free and get a $250 welcome bonus. – This episode is brought to you by Ridgeline. Ridgeline has built a complete, real-time, modern operating system for investment managers. It handles trading, portfolio management, compliance, customer reporting, and much more through an all-in-one real-time cloud platform. I think this platform will become the standard for investment managers, and if you run an investing firm, I highly recommend you find time to speak with them. Head to ridgelineapps.com to learn more about the platform. – This episode is brought to you by AlphaSense.
AlphaSense has completely transformed the research process with cutting-edge AI technology and a vast collection of top-tier, reliable business content. Imagine completing your research five to ten times faster with search that delivers the most relevant results, helping you make high-conviction decisions with confidence. Invest Like the Best listeners can get a free trial now at Alpha-Sense.com/Invest and experience firsthand how AlphaSense and Tegus help you make smarter decisions faster. ----- Editing and post-production work for this episode was provided by The Podcast Consultant (https://thepodcastconsultant.com). Show Notes: (00:00:00) Learn about Ramp, Ridgeline, & AlphaSense (00:06:00) Introduction to DeepSeek's R1 (00:07:24) DeepSeek's Global Impact (00:09:25) AI's Ubiquity and Future (00:10:36) Winners and Losers in the AI Race (00:14:22) The New AI Cold War (00:16:34) China's Technological Ambitions (00:21:31) Open Source and Intellectual Property (00:27:48) The Role of Open Source in AI Development (00:30:02) National Interests vs. Global Competition (00:37:25) The Future of Capital Allocation (00:45:41) Challenges of Sustaining Private Partnerships (00:46:34) Building a Franchise Business (00:48:27) The Role of Political Operations (00:50:51) The Dynamics of Power and Elites (01:01:44) Technological Change and Its Implications (01:13:37) The Future of Robotics and Supply Chains (01:21:09) American Dynamism and Defense Technology
We're experimenting and would love to hear from you! In this episode of 'Discover Daily', we explore groundbreaking developments in AI technology and their far-reaching implications. Leading the headlines is Google's release of Gemini 2.0, introducing three powerful models - Flash, Pro, and Flash-Lite - each tailored for specific use cases and offering enhanced performance capabilities. We also delve into how the EU's Digital Markets Act is reshaping the iOS app landscape, with the controversial release of Hot Tub marking a significant shift in Apple's traditionally strict content policies. Our main story focuses on DeepSeek's revolutionary R1 AI model, which promises to transform the energy sector with its unprecedented efficiency gains. This Chinese startup's innovation has triggered a dramatic sell-off in energy stocks, with major players like Constellation Energy and Vistra experiencing substantial declines. The development challenges previous assumptions about AI's growing energy demands and could potentially reshape the future of data center infrastructure and power consumption patterns. The implications of DeepSeek's breakthrough extend beyond immediate market reactions, potentially accelerating the transition to renewable energy sources and forcing a reassessment of planned energy infrastructure expansions. With current projections suggesting AI-driven data centers could consume up to 12% of U.S. electricity demand by 2028, this efficiency breakthrough could fundamentally alter the trajectory of energy consumption in the tech sector and influence how companies approach their sustainability goals.

From Perplexity's Discover Feed:
https://www.perplexity.ai/page/google-s-gemini-2-0-now-availa-.jZH0lMHSSWdnsRf4nHWxw
https://www.perplexity.ai/page/first-iphone-porn-app-controve-v6tz6uHVTfu.3v6lWPmeAw
https://www.perplexity.ai/page/deepseek-upends-energy-industr-Ce9aHa1nSZyHcFbXWnCTrQ

Perplexity is the fastest and most powerful way to search the web.
The recent controversy between WordPress and WP Engine put Matt Mullenweg (Co-Founder of WordPress, CEO of Automattic) under intense online scrutiny. In our conversation, he shared lessons from the controversy and managing through crisis, as well as his thoughts on the future of open source AI and more.

(00:00) Intro
(01:17) Controversy with WP Engine
(03:36) Understanding Open Source and Trademarks
(04:36) Automattic's Role and Contributions
(08:26) Navigating Legal Battles and Community Relations
(18:27) Leadership and Personal Resilience
(21:49) The Impact of Social Media on CEOs
(31:22) Future Outlook and Reflections
(32:42) Exploring the Quinn Model and Open Source Innovations
(33:17) The Evolution of AI Interfaces and User Interactions
(35:36) AI as a Writing and Coding Partner
(38:07) The Power of Open Source in AI Development
(40:00) Commoditizing Complements: A Business Strategy
(41:39) The Battle with Shopify and Open Source Models
(42:33) The Impact of Open Source on Market Dynamics
(43:55) USB-C Transition and Gadget Recommendations
(47:53) The Benefits of Sabbaticals
(53:34) The Future of WordPress and Automattic
(59:12) Employee Ownership and Liquidity Programs
(01:04:33) Conclusion and Final Thoughts

Executive Producer: Rashad Assir
Producer: Leah Clapper
Mixing and editing: Justin Hrabovsky

Check out Unsupervised Learning, Redpoint's AI Podcast: https://www.youtube.com/@UCUl-s_Vp-Kkk_XVyDylNwLA
- Trump's Achievements and AI Wars (0:00) - Critique of Media and Tech Industry (0:49) - China's AI Achievements and America's Response (6:55) - Trump's Deportation Policies and AI Implications (8:28) - Trump's Actions and Future Prospects (26:50) - AI's Role in the Future (32:24) - Challenges and Opportunities in AI Development (55:27) - Interview with Alex Jones (55:43) - The Future of AI and National Security (1:03:32) - Conclusion and Call to Action (1:04:39) - Competition in AI and Decentralization (1:05:07) - Challenges in AI Development and Innovation (1:20:08) - Cultural and Educational Sabotage (1:21:29) - The Role of Innovation and Ending Censorship (1:22:45) - Support for RFK Jr. and Vaccine Safety (1:25:37) - Concerns About Pharma Ads and COVID Fraud (1:33:20) - The Future of Currency and Decentralization (1:39:02) - The Role of Gold and Crypto in Financial Stability (1:55:05) - The Importance of Local Governments and Decentralization (2:06:29) - The Future of AI and Decentralized Governance (2:06:50)
- Joe Biden's Economic Sabotage and AI Technology Enhancements (0:11) - Depopulation and Survival Strategies (4:11) - Critique of Federal Agencies and Proposed Solutions (5:54) - Economic Impact of Biden's Policies (10:54) - The Role of Energy in Economic Prosperity (18:33) - Global Competition and AI Development (32:04) - The Future of the U.S. Dollar and Economic Restructuring (35:45) - Preparation for Economic and Health Crises (40:35) - The Role of Government in Depopulation Efforts (51:28) - Survival Strategies and Community Support (1:08:45)