Ethan Mollick, Professor of Management and author of the “One Useful Thing” Substack, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and a Senior Editor at Lawfare, to analyze the latest research on AI adoption, specifically its use by professionals and educators. The trio also analyzes the trajectory of AI development and related, ongoing policy discussions.

More of Ethan Mollick's work: https://www.oneusefulthing.org/

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
How can AI tech help you write better code? Carl and Richard talk to Mark Miller about the latest AI features coming in CodeRush. Mark talks about focusing on a fast and cost-effective AI assistant driven by voice, so you don't have to switch to a different window and type. The conversation delves into the rapid evolution of software development, utilizing AI technologies to accomplish more in less time.
Jim Love hosts Krish Banerjee, Canadian Managing Director at Accenture for AI and Data, in a discussion that spans the rapid evolution of AI, enterprise adoption, and the interplay between data and innovation. They tackle the transformation of industry practices, the growing role of AI in everyday life, and the significance of responsible AI development. Krish emphasizes the need for focusing on tangible value and the transformation of existing processes through AI, while touching on the future implications for Canada's digital sovereignty and productivity advances. 00:00 Introduction and Guest Welcome 01:06 AI Evolution and Market Shifts 02:31 Data's Crucial Role in AI 04:53 Enterprise AI: Challenges and Opportunities 13:28 Global AI Landscape and Canada's Position 24:10 Innovative AI Projects and Passionate Pursuits 25:59 Reinventing Healthcare with AI 26:52 Commercializing AI for Canadian Businesses 28:41 The Responsibility of AI Development 29:13 Economic Impact and Future Predictions of AI 33:42 Agentic AI: The Next Frontier 39:24 Democratization and Open Source AI 41:34 Advice for Executives on AI Adoption 45:23 Encouraging AI Learning in the Next Generation 47:20 Final Thoughts and Future Conversations
Enterprises face one common problem: the hidden costs of AI-based technical debt. “There's a lot of hype around AI, but many initiatives aren't founded in a business value proposition,” says Paul Brownell, CTO, Growth Acceleration Partners (GAP). “People wander in without an intentional path for ROI.”

In this episode of the Don't Panic, It's Just Data podcast, host Douglas Laney, BARC Research and Advisory Fellow and author of Infonomics and Data Juice, speaks with Paul Brownell from GAP and Frank Lavigne, Advisory Board Member of CloudArmy. The speakers ultimately agree that AI promises greater returns on investment (ROI). However, without a strong data foundation and strategy, AI can quickly turn into a financial nightmare.

AI's Potential to Cause Technical Debt

Alluding to a significant “leak in the bucket” for AI initiatives, Brownell says, "a lot of these projects aren't founded in a business value proposition." This can often lead to organisations "wandering in without an intentional path." Both Brownell and Lavigne agreed that the most overlooked and costly area is data engineering. Lavigne illustrated this with a meme depicting a sleek F-35 jet labelled "your AI" flying above a pockmarked, potholed road labelled "your data infrastructure." "I think that pretty much says it all," Lavigne stated, highlighting the critical and often unglamorous role of data engineering. Brownell agreed, calling it "mundane, routine, detail, hard pick and shovel work." Without solid data quality, data governance, and traceability, AI projects are built on unsteady ground. Such AI initiatives often produce inaccuracies and erode trust.

Scientific Path to AI Initiatives in Data

Brownell advocated a scientific approach to AI initiatives to overcome the hidden costs and maximise ROI. He said, "Come up with a hypothesis around where the business value is going to be, then apply some prototyping. Do real-life experiments to prove out your theory." Such an approach allows organisations to adjust course quickly. "The larger the ship, the harder it is to turn. So if you have these smaller kinds of proofs of concept, you can kind of find out in smaller increments how far we're off course," explained Lavigne. This lowers risk and paves the way for more experimentation.

Takeaways
- AI investments can create hidden financial burdens.
- Data readiness is crucial for successful AI initiatives.
- A hypothesis-driven approach can guide AI projects.
- Iterative experimentation leads to better outcomes.
- Data engineering is essential but often overlooked.
- Generative AI can assist in data pipeline management.
- Selecting AI tools requires flexibility and speed.
- Purpose-built AI models may outperform generative models.
- Organisations must foster a culture of continuous learning.
- Understanding the total cost of ownership for AI is vital.

Chapters
00:00 Uncovering AI Technical Debt
04:56 Data Readiness for AI Initiatives
09:55 Selecting the Right AI Tools
13:06 Generative AI vs Predictive AI
18:14 The Future of AI Development
In this episode, we explore the rise of AI in Hollywood through the lens of actors and artists. We discuss the promise of AI tools—like virtual readers for self-tapes—and how they could free creatives to focus on their craft, but also warn of the risks when AI replaces human storytelling. Our guest stresses the need for diverse ethical oversight in AI development, drawing parallels to how Facebook's unintended global impact stemmed from a lack of diverse perspectives at creation. Learn why we need more “naysayers” guiding AI's creative applications, where to draw the line between useful automation and creative displacement, and how tech-savvy actors can advocate for their future. Tune in for a timely conversation on balancing innovation and ethics in Hollywood's AI era.

Target Keywords: AI in Hollywood, Hollywood AI ethics, Actors and AI tools, AI creative jobs risk, AI entertainment future

Tags: AI, Hollywood, AI Ethics, Actors, AI in Entertainment, Creative AI Tools, Self-Tapes, Ethical AI, Tech in Film, AI Risks, Storytelling, Virtual Readers, AI Oversight, Diversity in AI, Creative Automation, AI Jobs, Film Industry Trends, Casting Tech, AI Development, Actor Advocacy, Innovation, Digital Ethics, Future of Acting, Machine Learning, Entertainment Technology, Tech Experts, Artist Perspectives, AI Regulation, Career Impact, Podcast Episode

Hashtags: #AIinHollywood #HollywoodEthics #ActorsAndAI #CreativeAI #EntertainmentTech #AIrisks #AItools #FilmInnovation #Storytelling #EthicalAI #DiversityInTech #SelfTapes #CastingTech #AIoversight
For episode 534 of the BlockHash Podcast, host Brandon Zemp is joined by Jawad Ashraf, CEO of Vanar Chain, to discuss how Vanar is building the intelligent chain for real-world finance. Vanar Chain is working on its innovative Neutron AI Compression technology. Neutron introduces advanced AI-driven compression, enabling a 50MB file to fit into a mere 25-50 character seed stored directly on-chain. This innovation eliminates reliance on external storage systems, dramatically reducing costs and improving blockchain scalability and efficiency.

⏳ Timestamps:
0:00 | Introduction
1:04 | Who is Jawad Ashraf?
8:10 | What is Vanar Chain?
16:34 | Vanar Chain ecosystem
20:56 | Developer resources
22:20 | Innovation in on-chain data storage
25:50 | Consumer usability in Web3
30:57 | Future of the Semantic Internet
36:40 | Vanar Chain roadmap for 2025
38:08 | Conferences and events
38:50 | Web3 in Dubai
40:48 | Vanar website, socials & community
In this episode of Cybersecurity Today, host Jim Love is joined by Krish Banerjee, the Canada Managing Director at Accenture for AI and Data. They begin the discussion with a report from Accenture that highlights the gap between the perceived and actual preparedness for cybersecurity as AI becomes more integrated into business operations. Jim and Krish discuss the pressing need for businesses to implement AI responsibly while addressing cybersecurity concerns. They also touch upon the current state of AI in Canada, efforts towards digital sovereignty, and the importance of integrating AI thoughtfully into various sectors. Through their insightful conversation, they explore the challenges and opportunities that lie ahead in making AI a cornerstone of productivity and innovation in the enterprise, emphasizing the need for value-driven strategies, the right tools, and skilled talent. 00:00 Introduction and Overview 02:10 AI in the Enterprise: Challenges and Opportunities 03:17 The Evolution of Data and AI 06:42 Enterprise AI: Current State and Future Prospects 15:20 Digital Sovereignty and National AI Strategies 25:07 Accelerating Technological Advancements 26:18 Dream Projects and AI for Good 27:58 Reinventing Healthcare with AI 28:42 Commercializing AI for Canadian Businesses 30:30 The Responsibility of AI Development 31:02 Economic Shifts and AI's Role 31:57 Future Predictions for AI 35:31 Agentic AI: The Next Frontier 41:14 Open Source AI and Its Implications 43:32 Advice for Executives on AI Adoption 47:13 Encouraging AI Learning in the Next Generation 49:10 Final Thoughts and Reflections
This week on The AI Report, Liam Lawson sits down with Gordon Wintrob, co-founder and CTO of Newfront, to talk about bringing AI into one of the slowest-moving industries: insurance.

Gordon shares how Newfront is redesigning the broker experience with automation and AI—from parsing 200-page policies in seconds to helping HR teams save weeks of work. They discuss building AI tools that clients actually trust, how to manage risk in regulated industries, and why embedding AI into company culture matters as much as the code.

Also in this episode:
• Why insurance is one of the last big frontiers for tech
• What makes a good AI use case in complex workflows
• The story behind Benji, Newfront's internal AI assistant
• How to foster internal adoption from hiring to hackathons
• What regulation and SOC 2 mean for AI innovation

This is a real look at what happens when AI goes beyond chatbots and into core business infrastructure.

Subscribe to The AI Report: https://theaireport.beehiiv.com/subscribe
Join the community: https://www.skool.com/the-ai-report-community/about

Chapters:
(00:00) Reimagining the Insurance Stack
(01:06) Why Insurance Feels So Behind
(02:41) How Brokers Work and Where AI Fits
(05:08) Founding Newfront With Future Tech in Mind
(06:59) Automating Contract Review at Scale
(08:49) Working With Startups and Industry Giants
(10:19) What It's Like Serving Diverse Client Profiles
(12:17) Making Room for Value-Add Conversations
(13:28) Key AI Tools: Benji, Gap Analysis, and More
(15:29) Why Products Succeed or Fail in Legacy Fields
(16:39) Creating a Culture of Technical Curiosity
(18:25) From Engineering to Recruiting: AI in the Org
(19:30) Equity, Values, and Ownership at Scale
(21:50) What Keeps Traditional Brokerages Behind
(23:43) AI as a Signal of Operator Leverage
(25:22) Who Newfront Builds For
(25:56) Staying Compliant While Moving Fast
(28:39) Managing Data Risk in a Privacy-Critical Industry
(29:01) Vendor Security and SOC 2 in AI Development
(30:25) Expanding AI Beyond the Frontend
(31:58) CTO Strategy and Time Allocation
(32:54) Staying Up-to-Date in a Fast-Shifting Landscape
(34:42) Building With Purpose in a Legacy System
(36:11) Connect With Gordon
There are calls for the government to take action on the opportunities and risks associated with AI, as Ireland is at risk of falling behind other countries. Professor Barry O'Sullivan of the School of Computer Science and IT at University College Cork, and a member of the Government's AI Advisory Council, discussed this further with Jonathan this morning on the show.
Recently, concerns about the risks of Artificial Intelligence and the need for ‘alignment' have been flooding our cultural discourse – with Artificial Super Intelligence cast as both the most promising goal and the most pressing threat. But amid the moral debate, there's been surprisingly little attention paid to a basic question: do we even have the technical capability to guide where any of this is headed? And if not, should we slow the pace of innovation until we better understand how these complex systems actually work? In this episode, Nate is joined by Artificial Intelligence developer and researcher Connor Leahy to discuss the rapid advancements in AI, the potential risks associated with its development, and the challenges of controlling these technologies as they evolve. Connor also explains the phenomenon of what he calls ‘algorithmic cancer' – AI-generated content that crowds out true human creations, propelled by algorithms that can't tell the difference. Together, they unpack the implications of AI acceleration, from widespread job disruption and energy-intensive computing to the concentration of wealth and power in tech companies. What kinds of policy and regulatory approaches could help slow AI's acceleration in order to create safer development pathways? Is there a world where AI becomes a tool to aid human work and creativity, rather than replacing it? And how do these AI risks connect to the deeper cultural conversation about technology's impacts on mental health, meaning, and societal well-being? (Conversation recorded on May 21st, 2025) About Connor Leahy: Connor Leahy is the founder and CEO of Conjecture, which works on aligning artificial intelligence systems by building infrastructure that allows for the creation of scalable, auditable, and controllable AI.
Previously, he co-founded EleutherAI, which was one of the earliest and most successful open-source Large Language Model communities, as well as a home for early discussions on the risks of those same advanced AI systems. Prior to that, Connor worked as an AI researcher and engineer for Aleph Alpha GmbH.

Show Notes and More
Watch this video episode on YouTube
Want to learn the broad overview of The Great Simplification in 30 minutes? Watch our Animated Movie.
---
Support The Institute for the Study of Energy and Our Future
Join our Substack newsletter
Join our Discord channel and connect with other listeners
In this thought-provoking episode of Project Synapse, host Jim and his friends Marcel Gagne and John Pinard delve into the complexities of artificial intelligence, especially in the context of cybersecurity. The discussion kicks off by revisiting a blog post by Sam Altman about reaching a 'Gentle Singularity' in AI development, where progress toward artificial superintelligence seems inevitable. They explore the idea of AI surpassing human intelligence and the implications of machines learning to write their own code. Throughout their engaging conversation, they emphasize the need to integrate security into AI systems from the start, rather than as an afterthought, citing recent vulnerabilities like EchoLeak and Microsoft Copilot's zero-click vulnerability. Detouring into stories from the past and pondering philosophical questions, they wrap up by urging a balanced approach in which speed and thoughtful planning coexist and human welfare is prioritized in technological advancements. This episode serves as a captivating blend of storytelling, technical insights, and ethical debates. 00:00 Introduction to Project Synapse 00:38 AI Vulnerabilities and Cybersecurity Concerns 02:22 The Gentle Singularity and AI Evolution 04:54 Human and AI Intelligence: A Comparison 07:05 AI Hallucinations and Emotional Intelligence 12:10 The Future of AI and Its Limitations 27:53 Security Flaws in AI Systems 30:20 The Need for Robust AI Security 32:22 The Ubiquity of AI in Modern Society 32:49 Understanding Neural Networks and Model Security 34:11 Challenges in AI Security and Human Behavior 36:45 The Evolution of Steganography and Prompt Injection 39:28 AI in Automation and Manufacturing 40:49 Crime as a Business and Security Implications 42:49 Balancing Speed and Security in AI Development 53:08 Corporate Responsibility and Ethical Considerations 57:31 The Future of AI and Human Values
Jesse Hoogland and Daniel Murfet, founders of Timaeus, introduce their mathematically rigorous approach to AI safety through "developmental interpretability" based on Singular Learning Theory. They explain how neural network loss landscapes are actually complex, jagged surfaces full of "singularities" where models can change internally without affecting external behavior—potentially masking dangerous misalignment. Using their Local Learning Coefficient measure, they've demonstrated the ability to identify critical phase changes during training in models up to 7 billion parameters, offering a complementary approach to mechanistic interpretability. This work aims to move beyond trial-and-error neural network training toward a more principled engineering discipline that could catch safety issues during training rather than after deployment.

Sponsors:

Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud that delivers better performance, faster speeds, and significantly lower costs, including up to 50% less for compute, 70% for storage, and 80% for networking. Run any workload, from infrastructure to AI, in a high-availability environment and try OCI for free with zero commitment at https://oracle.com/cognitive

The AGNTCY (Cisco): The AGNTCY is an open-source collective dedicated to building the Internet of Agents, enabling AI agents to communicate and collaborate seamlessly across frameworks. Join a community of engineers focused on high-quality multi-agent software and support the initiative at https://agntcy.org/?utm_campaign=fy25q4_agntcy_amer_paid-media_agntcy-cognitiverevolution_podcast&utm_channel=podcast&utm_source=podcast

NetSuite by Oracle: NetSuite by Oracle is the AI-powered business management suite trusted by over 41,000 businesses, offering a unified platform for accounting, financial management, inventory, and HR.
Gain total visibility and control to make quick decisions and automate everyday tasks—download the free ebook, Navigating Global Trade: Three Insights for Leaders, at https://netsuite.com/cognitive

PRODUCED BY: https://aipodcast.ing

CHAPTERS:
(00:00) About the Episode
(04:44) Introduction and Background
(06:17) Timaeus Origins and Philosophy
(09:13) Mathematical Background and SLT
(12:27) Developmental Interpretability Approach (Part 1)
(16:09) Sponsors: Oracle Cloud Infrastructure | The AGNTCY (Cisco)
(18:09) Developmental Interpretability Approach (Part 2)
(19:24) Proto-Paradigm and SAEs
(24:37) Understanding Generalization
(30:15) Central Dogma Framework (Part 1)
(32:13) Sponsor: NetSuite by Oracle
(33:37) Central Dogma Framework (Part 2)
(34:35) Loss Landscape Geometry
(40:41) Degeneracies and Evidence
(47:25) Structure and Data Connection
(55:36) Essential Dynamics and Algorithms
(01:00:53) Implicit Regularization and Complexity
(01:07:19) Double Descent and Scaling
(01:09:55) Big Picture Applications
(01:17:17) Reward Hacking and Risks
(01:25:19) Future Training Vision
(01:32:01) Scaling and Next Steps
(01:36:43) Outro
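For readers unfamiliar with Singular Learning Theory, the Local Learning Coefficient discussed in this episode can be sketched (notation ours, following the SLT literature rather than this episode) as a volume-scaling exponent of the loss landscape near a trained parameter w*:

```latex
% Volume of the near-minimal-loss region around a local minimum w^*
V(\varepsilon) \;=\; \operatorname{Vol}\{\, w : L(w) - L(w^*) < \varepsilon \,\}
\;\sim\; c\,\varepsilon^{\lambda} \bigl(\log \tfrac{1}{\varepsilon}\bigr)^{m-1}
\quad \text{as } \varepsilon \to 0
```

Here λ is the local learning coefficient and m its multiplicity. For a regular (non-singular) model with d parameters, λ = d/2; singularities push λ below d/2, meaning there is far more low-loss volume than the parameter count suggests, which is roughly why models can reorganize internally without changing external behavior.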
In this episode, host Jim Love dives into the escalating tensions between OpenAI and Microsoft, shedding light on potential antitrust complaints, disagreements over AI technology sharing, and their strategic maneuverings for control over AI's future. Meanwhile, Canada is racing to develop its own AI tools to prevent sensitive government data from leaking to Big Tech, revealing efforts with CANChat and other specialized AI projects. Additionally, the episode covers the stalled progress of Canada's open banking system and the risk of bio weapons posed by future AI models, as warned by OpenAI executives. The show concludes with a call for listener engagement and support to improve the content delivered. 00:00 Introduction and Headlines 00:30 Microsoft and OpenAI: A Crumbling Partnership 03:47 Canada's AI Sovereignty Push 08:05 Stalled Open Banking in Canada 11:09 Bio Weapons Risk in AI Development 14:05 Conclusion and Call for Support
Explore FSx for Lustre's new intelligent storage tiering that delivers cost savings and unlimited scalability for file storage in the cloud. Plus, discover how the new Model Context Protocol (MCP) servers are revolutionizing AI-assisted development across ECS, EKS, and serverless platforms with real-time contextual responses and automated resource management. 00:00 - Intro, 00:52 - Introduction new storage class, 03:43 - MCP Servers, 07:18 - Analytics, 09:34 - Application Integration, 15:52 - Business Applications, 16:21 - Cloud Financial Management, 17:44 - Compute, 20:44 - Containers, 21:31 - Databases, 24:25 - Developer Tools, 25:42 - End User Computing, 25:58 - Gaming, 26:34 - Management and Governance, 28:35 - Marketplace, 28:51 - Media Services, 29:29 - Migration and Transfer, 30:01 - Networking and Content Delivery, 34:01 - Security Identity and Compliance, 34:43 - Serverless, 35:06 - Storage, 36:55 - Wrap up Show Notes: https://dqkop6u6q45rj.cloudfront.net/shownotes-20250613-185437.html
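For context on the protocol mentioned above: MCP is built on JSON-RPC 2.0, and a client invokes a server-side tool with a `tools/call` request. A minimal sketch in Python (the tool name and arguments here are invented for illustration, not taken from the AWS MCP servers):

```python
import json

def make_tools_call(request_id, tool_name, arguments):
    """Build an MCP 'tools/call' request as a JSON-RPC 2.0 message."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool exposed by an ECS-aware MCP server:
req = make_tools_call(1, "describe_service", {"cluster": "prod", "service": "web"})
print(json.dumps(req, indent=2))
```

A real MCP client would send this message to the server over stdio or HTTP and match the server's response to the request by its `id` field.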
In this episode of Hashtag Trending, titled 'The Inflection Point: AI's Gentle Singularity and the Security Conundrum', the hosts grapple with planning their show amidst rapid technological changes and delve into a blog post by Sam Altman on the 'Gentle Singularity.' The discussion touches on concepts from astrophysics and AI, explaining the singularity where AI progresses beyond human control. Historical AI figure Ray Kurzweil is mentioned for his predictive insights. They explore how large language models mimic human behavior, their strengths in emotional intelligence, and the inevitable march towards superintelligence. This technological optimism is countered with a serious look at security flaws in AI models and real-world examples of corporate negligence. They highlight the critical need for integrating security into AI development to prevent exploitation. The episode concludes with a contemplation of human nature, the ethics of business, and an advocacy for using AI's potential responsibly. 00:00 Introduction and Show Planning 00:20 Discussing Sam Altman's Gentle Singularity 01:06 Ray Kurzweil and the Concept of Singularity 02:41 Human-Machine Integration and Event Horizon 05:02 AI Hallucinations and Human Creativity 09:02 Capabilities and Limitations of Large Language Models 10:27 AI's Role in Future Productivity and Quality of Life 13:02 Debating AI Consciousness and Singularity 25:51 Security Concerns in AI Development 30:57 Hacking the Human Brain: Elections and Persuasion 31:16 Understanding AI Models and Security 33:04 The Role of CISOs in Modern Security 34:43 Steganography and Prompt Injection 37:26 AI in Automation and Security Challenges 38:47 Crime as a Business: The Reality of Cybersecurity 40:47 Balancing Speed and Security in AI Development 51:06 Corporate Responsibility and Ethical Leadership 55:29 The Future of AI and Human Values
Chinese exports of rare earths, a critical component in manufacturing high-end tech products, emerged as a key sticking point in this week's trade talks between Beijing and Washington. Underpinning all of this is the race for artificial intelligence supremacy. Who is winning this competition? Who is best placed to control the supply chains of all the components that go into the top chips and AI models? Jared Cohen and George Lee, the co-heads of the Goldman Sachs Global Institute, join FP Live. Note: This discussion is part of a series of episodes brought to you by the Goldman Sachs Global Institute. Jared Cohen: The AI Economy's Massive Vulnerability Rishi Iyengar & Lili Pike: Is It Too Late to Slow China's AI Development? Vivek Chilukuri: How the United States Can Win the Global Tech Race Brought to you by: nordvpn.com/fplive (Exclusive NordVPN Deal: Try it risk-free now with a 30-day money-back guarantee) Learn more about your ad choices. Visit megaphone.fm/adchoices
Alex Gleason was one of the main architects behind Donald Trump's Truth Social. Now he focuses on the intersection of nostr, AI, and bitcoin. We dive deep into how he thinks about the future of nostr and vibe coding: using AI tools to rapidly prototype and ship apps with simple text-based prompts.

Alex on Nostr: https://primal.net/p/nprofile1qqsqgc0uhmxycvm5gwvn944c7yfxnnxm0nyh8tt62zhrvtd3xkj8fhggpt7fy
Stacks: https://getstacks.dev/

EPISODE: 164
BLOCK: 901101
PRICE: 957 sats per dollar

(00:00:02) Alex's Presentation at the Oslo Freedom Forum
(00:01:31) Challenges and Opportunities in Decentralized Platforms
(00:02:31) The Role of AI in Decentralized Social Media
(00:05:00) Happy Bitcoin Friday
(00:06:09) Guest Introduction: Alex Gleason
(00:07:02) Truth Social
(00:10:35) Challenges of Centralized vs Decentralized Platforms
(00:14:01) Bridging Platforms
(00:19:13) Limitations and Potential of Mastodon and Bluesky
(00:24:08) The Future of AI and Vibe Coding
(00:31:08) Empowering Developers with AI
(00:38:09) The Impact of AI on Software Development
(00:47:02) Building with Getstacks.dev
(00:53:04) Impact of AI Models
(01:02:01) Monetization and Future of AI Development
(01:14:07) Open Source Development in an AI World
(01:22:17) Data Preservation Using Nostr

Video: https://primal.net/e/nevent1qqs96kxmxc7mufgt6n2rxpphg8ptyx2kl47a7rj389jrwmvjy6rhuhgmfel87
support dispatch: https://citadeldispatch.com/donate
nostr live chat: https://citadeldispatch.com/stream
odell nostr account: https://primal.net/odell
dispatch nostr account: https://primal.net/citadel
youtube: https://www.youtube.com/@CitadelDispatch
podcast: https://serve.podhome.fm/CitadelDispatch
stream sats to the show: https://www.fountain.fm/
rock the badge: https://citadeldispatch.com/shop
join the chat: https://citadeldispatch.com/chat
learn more about me: https://odell.xyz
This week, I'm speaking with Kevin Weil, Chief Product Officer at OpenAI, who is steering product development at what might be the world's most important company right now.

We talk about:
(00:00) Episode trailer
(01:37) OpenAI's latest launches
(03:43) What it's like being CPO of OpenAI
(04:34) How AI will reshape our lives
(07:23) How young people use AI differently
(09:29) Addressing fears about AI
(11:47) Kevin's "Oh sh!t" moment
(14:11) Why have so many models within ChatGPT?
(18:19) The unpredictability of AI product progress
(24:47) Understanding model “evals”
(27:21) How important is prompt engineering?
(29:18) Defining “AI agent”
(37:00) Why OpenAI views coding as a prime target use-case
(41:24) The "next model test” for any AI startup
(46:06) Jony Ive's role at OpenAI
(47:50) OpenAI's hardware vision
(50:41) Quickfire questions
(52:43) When will we get AGI?

Kevin's links:
LinkedIn: https://www.linkedin.com/in/kevinweil/
Twitter/X: @kevinweil

Azeem's links:
Substack: https://www.exponentialview.co/
Website: https://www.azeemazhar.com/
LinkedIn: https://www.linkedin.com/in/azhar
Twitter/X: https://x.com/azeem

Our new show: This was originally recorded for "Friday with Azeem Azhar", a new show that takes place every Friday at 9am PT and 12pm ET. You can tune in through Exponential View on Substack.

Produced by supermix.io and EPIIPLUS1 Ltd.
A new Axios Harris poll reveals that most Americans, across all age groups, are urging companies to take a more cautious approach to artificial intelligence development. Subscribe to our newsletter to stay informed with the latest news from a leading Black-owned & controlled media company: https://aurn.com/newsletter Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode, New York State Assembly Member Alex Bores discusses the RAISE Act, a proposed bill aimed at regulating frontier AI models with basic safety protocols. He explains his background in technology, his motivations for the bill, and the legislative process. He emphasizes the importance of having clear safety protocols, third-party audits, and whistleblower protections for AI developers. He explains the intricacies of the bill, including its focus on large developers and frontier models, and addresses potential objections related to regulatory capture and state-level legislation. Alex encourages public and industry input to refine and support the bill, aiming for a balanced approach to AI regulation that keeps both innovation and public safety in mind. Link to the RAISE bill: https://legislation.nysenate.gov/pdf/bills/2025/A6453 Link to support Raise bill: https://win.newmode.net/aisafetynewyork Link to support AI Transparency Legislation: https://docs.google.com/forms/d/e/1FAIpQLSdvpAHkAWiA38oIu1cp57azchPqOOoxb789tHQ896ikJf3CKg/viewform Upcoming Major AI Events Featuring Nathan Labenz as a Keynote Speaker https://www.imagineai.live/ https://adapta.org/adapta-summit https://itrevolution.com/product/enterprise-tech-leadership-summit-las-vegas/ SPONSORS: ElevenLabs: ElevenLabs gives your app a natural voice. Pick from 5,000+ voices in 31 languages, or clone your own, and launch lifelike agents for support, scheduling, learning, and games. Full server and client SDKs, dynamic tools, and monitoring keep you in control. Start free at https://elevenlabs.io/cognitive-revolution Oracle Cloud Infrastructure (OCI): Oracle Cloud Infrastructure offers next-generation cloud solutions that cut costs and boost performance. With OCI, you can run AI projects and applications faster and more securely for less. New U.S. customers can save 50% on compute, 70% on storage, and 80% on networking by switching to OCI before May 31, 2024. 
See if you qualify at https://oracle.com/cognitive The AGNTCY: The AGNTCY is an open-source collective dedicated to building the Internet of Agents, enabling AI agents to communicate and collaborate seamlessly across frameworks. Join a community of engineers focused on high-quality multi-agent software and support the initiative at https://agntcy.org/?utm_campaign=fy25q4_agntcy_amer_paid-media_agntcy-cognitiverevolution_podcast&utm_channel=podcast&utm_source=podcast Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive NetSuite: Over 41,000 businesses trust NetSuite by Oracle, the #1 cloud ERP, to future-proof their operations. With a unified platform for accounting, financial management, inventory, and HR, NetSuite provides real-time insights and forecasting to help you make quick, informed decisions. Whether you're earning millions or hundreds of millions, NetSuite empowers you to tackle challenges and seize opportunities. Download the free CFO's guide to AI and machine learning at https://netsuite.com/cognitive PRODUCED BY: https://aipodcast.ing
ChatGPT Codex is here - the first cloud hosted Autonomous Software Engineer (A-SWE) from OpenAI. We sat down for a quick pod with two core devs on the ChatGPT Codex team: Josh Ma and Alexander Embiricos to get the inside scoop on the origin story of Codex, from WHAM to its future roadmap. Follow them: https://github.com/joshma and https://x.com/embirico Chapters - 00:00 Introduction to the Latent Space Podcast - 00:59 The Launch of ChatGPT Codex - 03:08 Personal Journeys into AI Development - 05:50 The Evolution of Codex and AI Agents - 08:55 Understanding the Form Factor of Codex - 11:48 Building a Software Engineering Agent - 14:53 Best Practices for Using AI Agents - 17:55 The Importance of Code Structure for AI - 21:10 Navigating Human and AI Collaboration - 23:58 Future of AI in Software Development - 28:18 Planning and Decision-Making in AI Development - 31:37 User, Developer, and Model Dynamics - 35:28 Building for the Future: Long-Term Vision - 39:31 Best Practices for Using AI Tools - 42:32 Understanding the Compute Platform - 48:01 Iterative Deployment and Future Improvements
We published our first episode on the threat of antibiotic resistance in 2016, and nearly a decade later, it remains one of the world's most pressing health crises. Today, with advances in artificial intelligence (AI), the race to develop new antibiotics is evolving. In this episode, co-host Danielle Mandikian sits down with guests Tommaso Biancalani, Distinguished Scientist and Director of Biological Research and AI Development, and Steven Rutherford, Senior Principal Scientist and Director of Infectious Diseases in Research Biology, to share the latest in the fight against antibiotic resistance. Together, they discuss the challenges of antibiotic discovery and development, and how AI could streamline the process of identifying novel antibiotics within the vast, uncharted chemical universe. Read the full text transcript at www.gene.com/stories/ai-and-the-quest-for-new-antibiotics
The recent AI 2027 report sparked widespread discussion with its stark warnings about the near-term risks of unaligned AI. Authors @Daniel Kokotajlo (former OpenAI researcher now focused full-time on alignment through his nonprofit, @AI Futures, and one of TIME's 100 most influential people in AI) and @Thomas Larsen joined the show to unpack their findings. We talk through the key takeaways from the report, its policy implications, and what they believe it will take to build safer, more aligned models. (0:00) Intro (1:15) Overview of AI 2027 (2:32) AI Development Timeline (4:10) Race and Slowdown Branches (12:52) US vs China (18:09) Potential AI Misalignment (31:06) Getting Serious About the Threat of AI (47:23) Predictions for AI Development by 2027 (48:33) Public and Government Reactions to AI Concerns (49:27) Policy Recommendations for AI Safety (52:22) Diverging Views on AI Alignment Timelines (1:01:30) The Role of Public Awareness in AI Safety (1:02:38) Reflections on Insider vs. Outsider Strategies (1:10:53) Future Research and Scenario Planning (1:14:01) Best and Worst Case Outcomes for AI (1:17:02) Final Thoughts and Hopes for the Future With your co-hosts: @jacobeffron - Partner at Redpoint, Former PM Flatiron Health @patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn @ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare) @jordan_segall - Partner at Redpoint
The latest research published by NewsGuard last February detected and identified 1,254 unreliable AI-generated news and information sites online, in 16 languages including Italian. But how exactly does AI work, and how can users tell whether the information in front of them is reliable? We discussed this in today's episode with Andrea Pazzaglia, Head of AI Development at Class Editori. ... Here is the link to subscribe to the Notizie a colazione WhatsApp channel: https://whatsapp.com/channel/0029Va7X7C4DjiOmdBGtOL3z To subscribe to the Telegram channel: https://t.me/notizieacolazione ... Here are the other Class Editori podcasts: https://milanofinanza.it/podcast Music: https://www.bensound.com Learn more about your ad choices. Visit megaphone.fm/adchoices
Lin Qiao, CEO of Fireworks AI, dives into the practical challenges AI developers face, from UX/DX hurdles to complex systems engineering. Discover key trends like the convergence of open-source and proprietary models, the rise of agentic workflows, and strategies for optimizing quality, speed, and cost. Subscribe to the Gradient Flow Newsletter
In this episode, former OpenAI research scientist Steven Adler discusses his insights on OpenAI's transition through various phases, including its growth, internal culture shifts, and the contentious move from nonprofit to for-profit. The conversation delves into the initial days of OpenAI's development of GPT-3 and GPT-4, the cultural and ethical disagreements within the organization, and the recent amicus brief addressing the Elon versus OpenAI lawsuit. Steven Adler also explores the broader implications of AI capabilities, safety evaluations, and the critical need for transparent and responsible AI governance. The episode provides a candid look at the internal dynamics of a leading AI company and offers perspectives on the responsibilities and challenges faced by AI researchers and developers today. Amicus brief to the Elon Musk versus OpenAI lawsuit: https://storage.courtlistener.com/recap/gov.uscourts.cand.433688/gov.uscourts.cand.433688.152.0.pdf Steven Adler's post on 'X' about Personhood credentials (a paper co-authored by him) : https://x.com/sjgadler/status/1824245211322568903 Steven Adler's substack post on "minimum testing period" for frontier AI : https://substack.com/@sjadler/p-161143327?utm_source=profile&utm_medium=reader2 Steven Adler's substack post on TSFT Model Testing: https://substack.com/@sjadler/p-159883282?utm_source=profile&utm_medium=reader2 Steven Adler's Substack: https://stevenadler.substack.com/ Upcoming Major AI Events Featuring Nathan Labenz as a Keynote Speaker https://www.imagineai.live/ https://adapta.org/adapta-summit https://itrevolution.com/product/enterprise-tech-leadership-summit-las-vegas/ PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) About the Episode (05:15) Joining OpenAI: Early Days and Cultural Insights (06:41) The Anthropic Split and Its Impact (11:32) Product Safety and Content Policies at OpenAI (Part 1) (19:21) Sponsors: ElevenLabs | Oracle Cloud Infrastructure (OCI) (21:48) Product Safety and Content 
Policies at OpenAI (Part 2) (22:08) The Launch and Impact of GPT-4 (32:15) Evaluating AI Models: Challenges and Best Practices (Part 1) (33:46) Sponsors: Shopify | NetSuite (37:10) Evaluating AI Models: Challenges and Best Practices (Part 2) (55:58) AGI Readiness and Personhood Credentials (01:05:03) Biometrics and Internet Friction (01:06:52) Credential Security and Recovery (01:08:05) Trust and Ecosystem Diversity (01:09:40) AI Agents and Verification Challenges (01:14:28) OpenAI's Evolution and Ambitions (01:22:07) Safety and Regulation in AI Development (01:35:53) Internal Dynamics and Cultural Shifts (01:58:18) Concluding Thoughts on AI Governance (02:02:29) Outro
At Google Cloud Next, Bobby Allen, Group Product Manager for Google Kubernetes Engine (GKE), emphasized GKE's foundational role in supporting AI platforms. While AI dominates current tech conversations, Allen highlighted that cloud-native infrastructure like Kubernetes is what enables AI workloads to function efficiently. GKE powers key Google services like Vertex AI and is trusted by organizations including DeepMind, gaming companies, and healthcare providers for AI model training and inference. Allen explained that GKE offers scalability, elasticity, and support for AI-specific hardware like GPUs and TPUs, making it ideal for modern workloads. He noted that Kubernetes was built with capabilities—like high availability and secure orchestration—that are now essential for AI deployment. Looking forward, GKE aims to evolve into a model router, allowing developers to access the right AI model based on function, not vendor, streamlining the development experience. Allen described GKE as offering maximum control with minimal technical debt, future-proofed by Google's continued investment in open source and scalable architecture. Learn more from The New Stack about the latest insights with Google Cloud: "Google Kubernetes Engine Customized for Faster AI Work," "KubeCon Europe: How Google Will Evolve Kubernetes in the AI Era," and "Apache Ray Finds a Home on the Google Kubernetes Engine." Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Knowledge Project: Read the notes at podcastnotes.org. Don't forget to subscribe for free to our newsletter, the top 10 ideas of the week, every Monday --------- Most accelerators fund ideas. Y Combinator funds founders—and transforms them. With a 1% acceptance rate and alumni behind 60% of the past decade's unicorns, YC knows what separates the founders who break through from those who burn out. It's not the flashiest résumé or the boldest pitch but something President Garry Tan says is far rarer: earnestness. In this conversation, Garry reveals why this is the key to success, and how it can make or break a startup. We also dive into how AI is reshaping the whole landscape of venture capital and what the future might look like when everyone has intelligence on tap. If you care about innovation, agency, or the future of work, don't miss this episode. Approximate timestamps: Subject to variation due to dynamically inserted ads. (00:02:39) The Success of Y Combinator (00:04:25) The Y Combinator Program (00:08:25) The Application Process (00:09:58) The Interview Process (00:16:16) The Challenge of Early Stage Investment (00:22:53) The Role of San Francisco in Innovation (00:28:32) The Ideal Founder (00:36:27) The Importance of Earnestness (00:42:17) The Changing Landscape of AI Companies (00:45:26) The Impact of Cloud Computing (00:50:11) Dysfunction with Silicon Valley (00:52:24) Forecast for the Tech Market (00:54:40) The Regulation of AI (00:55:56) The Need for Agency in Education (01:01:40) AI in Biotech and Manufacturing (01:07:24) The Issue of Data Access and The Legal Aspects of AI Outputs (01:13:34) The Role of Meta in AI Development (01:28:07) The Potential of AI in Decision Making (01:40:33) Defining AGI (01:42:03) The Use of AI and Prompting (01:47:09) AI Model Reasoning (01:49:48) The Competitive Advantage in AI (01:52:42) Investing in Big Tech Companies (01:55:47) The Role of Microsoft and Meta in AI (01:57:00) Learning from MrBeast: YouTube Channel 
Optimization (02:05:58) The Perception of Founders (02:08:23) The Reality of Startup Success Rates (02:09:34) The Impact of OpenAI (02:11:46) The Golden Age of Building. Newsletter - The Brain Food newsletter delivers actionable insights and thoughtful ideas every Sunday. It takes 5 minutes to read, and it's completely free. Learn more and sign up at fs.blog/newsletter Upgrade — If you want to hear my thoughts and reflections at the end of the episode, join our membership: fs.blog/membership and get your own private feed. Watch on YouTube: @tkppodcast Learn more about your ad choices. Visit megaphone.fm/adchoices
Without this, developers waste time managing infrastructure instead of focusing on code. VMware addresses this with VCF, a pre-integrated Kubernetes solution that includes components like Harbor, Velero, and Istio, all managed by VMware. While some worry about added complexity from abstraction, Turner dismissed concerns about virtualization overhead, pointing to benchmarks showing 98.3% of bare metal performance for virtualized AI workloads. He emphasized that AI is driving nearly half of Kubernetes deployments, prompting VMware's partnership with Nvidia to support GPU virtualization. Turner also highlighted VMware's open source leadership, contributing to major projects and ensuring Kubernetes remains cloud-independent and standards-based. VMware aims to simplify Kubernetes and AI workload management while staying committed to the open ecosystem. Learn more from The New Stack about the latest insights with VMware: "Has VMware Finally Caught Up With Kubernetes?" and "VMware's Golden Path." Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Join us as we explore the transformative changes in software development and cybersecurity due to AI. We discuss new terminology like 'vibe coding' — a novel, behavior-focused development approach — and 'MCP' (Model Context Protocol), an open standard for AI interfaces. We also address the concept of 'slopsquatting,' a new type of threat involving AI-generated […] The post What Vibe Coding, MCP, and Slopsquatting Reveal About the Future of AI Development appeared first on Shared Security Podcast.
Join JP Newman in this fascinating episode of 'Investing on Purpose' as he, along with guests Brett Hurt, Chantel Mc Daniel, and Brad Weimert, discuss their experiences and key takeaways from the 2025 TED Conference in Vancouver. They explore the conference theme 'Humanity Reimagined,' touching on topics such as artificial intelligence, quantum computing, and the essence of human artistry. The episode delves into the impact of AI on future jobs, the necessity for intentionality in technology, and the inspiring talks around innovation, peace-making, and mental health. Tune in for an insightful journey into the evolving landscape of technology and human connection.
Sir Niall Ferguson, renowned historian and Milbank Family Senior Fellow at the Hoover Institution, joins Azeem Azhar to discuss the evolving relationship between the U.S. and China, Trump's foreign policy doctrine, and what the new global economic and security order might look like. (00:00) What most analysts are missing about Trump (05:43) The win-win outcome in Europe–U.S. relations (11:17) How the U.S. is reestablishing deterrence (15:50) Can the U.S. economy weather the impact of tariffs? (23:33) Niall's read on China (29:29) How is China performing in tech? (33:35) What might happen with Taiwan (42:43) Predictions for the coming world order Sir Niall Ferguson's links: Substack: Time Machine | Books: War of the World; Doom: The Politics of Catastrophe | Twitter/X: https://x.com/nfergus Azeem's links: Substack: https://www.exponentialview.co/ | Website: https://www.azeemazhar.com/ | LinkedIn: https://www.linkedin.com/in/azhar | Twitter/X: https://x.com/azeem Our new show: This was originally recorded for "Friday with Azeem Azhar" on 28 March. Produced by supermix.io and EPIIPLUS1 Ltd
"Preview: Author Gary Rivlin, 'AI Valley,' presents the back story of AI development and then dismissal in the 1970s and 1980s. More later in the week." https://www.amazon.com/Valley-Microsoft-Trillion-Dollar-Artificial-Intelligence-ebook/dp/B0D7ZRSH7P/ref=tmm_kin_swatch_0?_encoding=UTF8&dib_tag=se&dib=eyJ2IjoiMSJ9.AJeF940tKhADhdajpBWTAM0NBzzXjrOJ_C6W040rhkNRlFXvSpVdtjYclENO74aCPgq8yPNhAdGjb4kZ6pCmmsvyYKET_EuYyGnf7RXSZ1W0YbU_h0r7EYDDvZj_aB3j0OvGg0OsK8JaOmlzX_eB_Guar_jgqhTgBwEIONt0nHM78nJZmlCxXzawvx6xrjBrmPX4Te68hgrEMLpI0Gy2uvscj4pm4-CxX8c9U7MOG6Q.yKug_BFX2VvXr6xFXIOgeEKJEg-eZqu1K-NYi9O1kcg&qid=1745068898&sr=1-1
With artificial intelligence development expanding at a breakneck speed, powering it is becoming a hot topic. Learn more about your ad choices. Visit podcastchoices.com/adchoices
- Market Analysis and Silver Investment (0:00) - Trump's Economic Policies and Dollar Value (3:04) - Historical Newspaper Analysis (6:27) - Decline in Human Knowledge and Cognitive Capacity (12:01) - Preservation of Human Knowledge and AI Development (18:12) - Impact of AI on Human Knowledge and Society (18:31) - Challenges and Opportunities in the Token Economy (55:28) - Practical Steps for Living a More Centralized Life (1:09:59) - Gold Backs and Their Value (1:10:56) - Future of AI and Human Knowledge (1:26:31) - Gold and Silver Market Stress (1:26:50) - Trump's Alleged Actions Against the Crown (1:29:22) - Impact of Gold and Silver Paper Contracts (1:31:59) - Introduction of Chris Sullivan and His Background (1:34:11) - Sullivan's Insights on Bitcoin and Financial Markets (1:39:38) For more updates, visit: http://www.brighteon.com/channel/hrreport NaturalNews videos would not be possible without you, as always we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency. ▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/ ▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html ▶️ Brighteon: https://www.brighteon.com/channels/hrreport ▶️ Join Our Social Network: https://brighteon.social/@HealthRanger ▶️ Check In Stock Products at: https://PrepWithMike.com
Episode 53: What role will AI agents play in addressing global challenges? Join Matt Wolfe (https://x.com/mreflow) and Amanda Saunders (https://x.com/amandamsaunders), Director of Enterprise Generative AI Product Marketing at Nvidia, and then Bob Pette (https://x.com/RobertPette), Vice President and General Manager of Enterprise Platforms at Nvidia, as they delve into the transformative potential of agentic AI at the Nvidia GTC Conference. Vote for us at the Webby's https://vote.webbyawards.com/PublicVoting#/2025/podcasts/shows/business This episode explores the concept of AI agents as digital employees that perceive, reason, and act, reshaping industries like healthcare and telecom. Discover Nvidia's approach to building powerful AI agents and the measures in place to ensure their secure and productive deployment. From optimizing workflows with agentic AI blueprints to fascinating agent applications in sports coaching, the discussion unpacks AI's promising future. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd — Show Notes: (00:00) Exploring Nvidia's AI Revolution (03:29) AI's Breakneck Growth Spurs Innovation (06:29) Video Agents Enhancing Athletic Performance (09:46) AI: Problem Solver and Concern Raiser (14:54) Rise of Sophisticated AI Agents (18:21) Earth-2: Visualizing Future Changes (21:53) Nvidia Optimizes Llama for Reasoning (23:50) Reasoning Models Enhance Problem Solving (27:20) Balancing AI Creativity and Accuracy (30:31) Nvidia's AI Development in Windows (34:16) AI Development Acceleration Benefits (37:32) High-Power Servers & Workstations Overview (39:37) Liquid Cooling in AI Workstations — Mentions: Get the free AI Agent Playbook: https://clickhubspot.com/ovw Amanda Saunders: https://www.linkedin.com/in/amandamsaunders/ Bob Pette: https://www.linkedin.com/in/bobpette/ Nvidia: https://www.nvidia.com/en-us/ Nvidia GTC Conference: https://www.nvidia.com/gtc/ Earth-2: 
https://www.nvidia.com/en-us/high-performance-computing/earth-2/ Vote for us! https://vote.webbyawards.com/PublicVoting#/2025/podcasts/shows/business — Check Out Matt's Stuff: • Future Tools - https://futuretools.beehiiv.com/ • Blog - https://www.mattwolfe.com/ • YouTube- https://www.youtube.com/@mreflow — Check Out Nathan's Stuff: Newsletter: https://news.lore.com/ Blog - https://lore.com/ The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
Welcome to episode #978 of Six Pixels of Separation - The ThinkersOne Podcast. Dr. Christopher DiCarlo is a philosopher, educator, author, and ethicist whose work lives at the intersection of human values, science, and emerging technology. Over the years, Christopher has built a reputation as a Socratic nonconformist, equally at home lecturing at Harvard during his postdoctoral years as he is teaching critical thinking in correctional institutions or corporate boardrooms. He's the author of several important books on logic and rational discourse, including How To Become A Really Good Pain In The Ass - A Critical Thinker's Guide To Asking The Right Questions and So You Think You Can Think?, as well as the host of the podcast, All Thinks Considered. In this conversation, we dig into his latest book, Building A God - The Ethics Of Artificial Intelligence And The Race To Control It, which takes a sobering yet practical look at the ethical governance of AI as we accelerate toward the possibility of artificial general intelligence. Drawing on years of study in philosophy of science and ethics, Christopher lays out the risks - manipulation, misalignment, lack of transparency - and the urgent need for international cooperation to set safeguards now. We talk about everything from the potential of AI to revolutionize healthcare and sustainability to the darker realities of deepfakes, algorithmic control, and the erosion of democratic processes. His proposal? A kind of AI “Geneva Conventions,” or something akin to the IAEA - but for algorithms. In a world rushing toward techno-utopianism, Christopher is a clear-eyed voice asking: “What kind of Gods are we building… and can we still choose their values?” If you're thinking about the intersection of ethics and AI (and we should all be focused on this!), this is essential listening. Enjoy the conversation... Running time: 58:55. Hello from beautiful Montreal. Listen and subscribe over at Apple Podcasts. 
Listen and subscribe over at Spotify. Please visit and leave comments on the blog - Six Pixels of Separation. Feel free to connect to me directly on Facebook here: Mitch Joel on Facebook. Check out ThinkersOne. or you can connect on LinkedIn. ...or on X. Here is my conversation with Dr. Christopher DiCarlo. Building A God - The Ethics Of Artificial Intelligence And The Race To Control It. How To Become A Really Good Pain In The Ass - A Critical Thinker's Guide To Asking The Right Questions. So You Think You Can Think?. All Thinks Considered. Convergence Analysis. Follow Christopher on LinkedIn. Follow Christopher on X. This week's music: David Usher 'St. Lawrence River'. Chapters: (00:00) - Introduction to AI Ethics and Philosophy. (03:14) - The Interconnectedness of Systems. (05:56) - The Race for AGI and Its Implications. (09:04) - Risks of Advanced AI: Misuse and Misalignment. (11:54) - The Need for Ethical Guidelines in AI Development. (15:05) - Global Cooperation and the AI Arms Race. (18:03) - Values and Ethics in AI Alignment. (20:51) - The Role of Government in AI Regulation. (24:14) - The Future of AI: Hope and Concerns. (31:02) - The Dichotomy of Regulation and Innovation. (34:57) - The Drive Behind AI Pioneers. (37:12) - Skepticism and the Tech Bubble Debate. (39:39) - The Potential of AI and Its Risks. (43:20) - Techno-Selection and Control Over AI. (48:53) - The Future of Medicine and AI's Role. (51:42) - Empowering the Public in AI Governance. (54:37) - Building a God: Ethical Considerations in AI.
In this episode, Dr. Matthew Lungren and Seth Hain, leaders in the implementation of healthcare AI technologies and solutions at scale, join Lee to discuss the latest developments. Lungren, the chief scientific officer at Microsoft Health and Life Sciences, explores the creation and deployment of generative AI for automating clinical documentation and administrative tasks like clinical note-taking. Hain, the senior vice president of R&D at the healthcare software company Epic, focuses on the opportunities and challenges of integrating AI into electronic health records at global scale, highlighting AI-driven workflows, decision support, and Epic's Cosmos project, which leverages aggregated healthcare data for research and clinical insights.
Eiso Kant, CTO of poolside AI, discusses the company's approach to building frontier AI foundation models, particularly focused on software development. Their unique strategy is reinforcement learning from code execution feedback, which is an important axis for scaling AI capabilities beyond just increasing model size or data volume. Kant predicts human-level AI in knowledge work could be achieved within 18-36 months, outlining poolside's vision to dramatically increase software development productivity and accessibility. SPONSOR MESSAGES: *** Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/ *** Eiso Kant: https://x.com/eisokant https://poolside.ai/ TRANSCRIPT: https://www.dropbox.com/scl/fi/szepl6taqziyqie9wgmk9/poolside.pdf?rlkey=iqar7dcwshyrpeoz0xa76k422&dl=0 TOC: 1. Foundation Models and AI Strategy [00:00:00] 1.1 Foundation Models and Timeline Predictions for AI Development [00:02:55] 1.2 Poolside AI's Corporate History and Strategic Vision [00:06:48] 1.3 Foundation Models vs Enterprise Customization Trade-offs 2. Reinforcement Learning and Model Economics [00:15:42] 2.1 Reinforcement Learning and Code Execution Feedback Approaches [00:22:06] 2.2 Model Economics and Experimental Optimization 3. Enterprise AI Implementation [00:25:20] 3.1 Poolside's Enterprise Deployment Strategy and Infrastructure [00:26:00] 3.2 Enterprise-First Business Model and Market Focus [00:27:05] 3.3 Foundation Models and AGI Development Approach [00:29:24] 3.4 DeepSeek Case Study and Infrastructure Requirements 4. LLM Architecture and Performance [00:30:15] 4.1 Distributed Training and Hardware Architecture Optimization [00:33:01] 4.2 Model Scaling Strategies and Chinchilla Optimality Trade-offs [00:36:04] 4.3 Emergent Reasoning and Model Architecture Comparisons [00:43:26] 4.4 Balancing Creativity and Determinism in AI Models [00:50:01] 4.5 AI-Assisted Software Development Evolution 5. AI Systems Engineering and Scalability [00:58:31] 5.1 Enterprise AI Productivity and Implementation Challenges [00:58:40] 5.2 Low-Code Solutions and Enterprise Hiring Trends [01:01:25] 5.3 Distributed Systems and Engineering Complexity [01:01:50] 5.4 GenAI Architecture and Scalability Patterns [01:01:55] 5.5 Scaling Limitations and Architectural Patterns in AI Code Generation 6. AI Safety and Future Capabilities [01:06:23] 6.1 Semantic Understanding and Language Model Reasoning Approaches [01:12:42] 6.2 Model Interpretability and Safety Considerations in AI Systems [01:16:27] 6.3 AI vs Human Capabilities in Software Development [01:33:45] 6.4 Enterprise Deployment and Security Architecture CORE REFS (see shownotes for URLs/more refs): [00:15:45] Research demonstrating how training on model-generated content leads to distribution collapse in AI models, Ilia Shumailov et al. (Key finding on synthetic data risk) [00:20:05] Foundational paper introducing Word2Vec for computing word vector representations, Tomas Mikolov et al. (Seminal NLP technique) [00:22:15] OpenAI O3 model's breakthrough performance on ARC Prize Challenge, OpenAI (Significant AI reasoning benchmark achievement) [00:22:40] Seminal paper proposing a formal definition of intelligence as skill-acquisition efficiency, François Chollet (Influential AI definition/philosophy) [00:30:30] Technical documentation of DeepSeek's V3 model architecture and capabilities, DeepSeek AI (Details on a major new model) [00:34:30] Foundational paper establishing optimal scaling laws for LLM training, Jordan Hoffmann et al. (Key paper on LLM scaling) [00:45:45] Seminal essay arguing that scaling computation consistently trumps human-engineered solutions in AI, Richard S. Sutton (Influential "Bitter Lesson" perspective)
We gotta talk about this
To unpack some of the most topical questions in AI, I'm joined by two fellow AI podcasters: Swyx and Alessio Fanelli, co-hosts of the Latent Space podcast. We've been wanting to do a cross-over episode for a while and finally made it happen. Swyx brings deep experience from his time at AWS, Temporal, and Airbyte, and is now focused on AI agents and dev tools. Alessio is an investor at Decibel, where he's been backing early technical teams pushing the boundaries of infrastructure and applied AI. Together they run Latent Space, a technical newsletter and podcast by and for AI engineers. To subscribe or learn more about Latent Space, click here: https://www.latent.space/ [0:00] Intro [1:08] Reflecting on AI Surprises of the Past Year [2:24] Open Source Models and Their Adoption [6:48] The Rise of GPT Wrappers [7:49] Challenges in AI Model Training [10:33] Over-hyped and Under-hyped AI Trends [24:00] The Future of AI Product Market Fit [30:27] Google's Momentum and Customer Support Insights [33:16] Emerging AI Applications and Market Trends [35:13] Challenges and Opportunities in AI Development [39:02] Defensibility in AI Applications [42:42] Infrastructure and Security in AI [50:04] Future of AI and Unanswered Questions [55:34] Quickfire With your co-hosts: @jacobeffron - Partner at Redpoint, Former PM Flatiron Health @patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn @ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare) @jordan_segall - Partner at Redpoint
- Interview with Doc Pete Chambers and Special Reports (0:00) - DeepSeek V3 AI Model and Its Capabilities (2:39) - Challenges in AI Development and Future Plans (5:10) - China's AI Advancements and US Education System (7:21) - The Era of Self-Aware AI and Its Implications (13:49) - Germany's Self-Sabotage and Western Nations' Satanic Practices (21:41) - The Role of Satanism in Western Governments and Societal Practices (29:13) - The End Times and the Role of God in Human History (38:02) - Book Review: Global Tyranny, Step by Step (52:27) - Book Review: Everyday Survival (1:00:04) - Customer Appreciation Week Promotions (1:23:00) - Introduction of Doc Pete Chambers (1:31:04) - Philosophy and Conflict Resolution (1:32:29) - Conflict Resolution with Drug Cartels (1:35:10) - Impact of Border Security on Cartel Operations (1:42:40) - Challenges of Dealing with Human Trafficking (1:51:22) - Support for the Remnant Ministry (1:59:31) - Spiritual and Practical Approaches to Conflict Resolution (2:14:37) - The Role of the Remnant Ministry in Disaster Relief (2:23:18) - The Importance of Faith and Perseverance (2:23:32) - Conclusion and Future Plans (2:25:12) - Introduction to the Seed Kit Campaign (2:29:30) - Details of the Seed Kits (2:29:46) - Features and Benefits of the Seed Kits (2:31:00) - Additional Information and Support (2:32:15) - Closing Remarks (2:32:34) For more updates, visit: http://www.brighteon.com/channel/hrreport NaturalNews videos would not be possible without you, as always we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency. 
▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/ ▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html ▶️ Brighteon: https://www.brighteon.com/channels/hrreport ▶️ Join Our Social Network: https://brighteon.social/@HealthRanger ▶️ Check In Stock Products at: https://PrepWithMike.com
The AI Breakdown: Daily Artificial Intelligence News and Discussions
Microsoft and OpenAI's complex relationship is heating up, shaping the future of AI in unexpected ways. As tensions grow, Microsoft pushes forward independently with new models called MAI, which directly compete with OpenAI's reasoning models. Meanwhile, OpenAI diversifies its partnerships, signing a massive cloud deal with CoreWeave and teaming up with Oracle and SoftBank for Project Stargate. Brought to you by: KPMG – Go to www.kpmg.us/ai to learn more about how KPMG can help you drive value with our AI solutions. Vanta - Simplify compliance - https://vanta.com/nlw The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score. The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614 Subscribe to the newsletter: https://aidailybrief.beehiiv.com/ Join our Discord: https://bit.ly/aibreakdown
Recorded at our 2025 Technology, Media and Telecom (TMT) Conference, TMT Credit Research Analyst Lindsay Tyler joins Head of Investment Grade Debt Coverage Michelle Wang to discuss how the industry is strategically raising capital to fund growth.

----- Transcript -----

Lindsay Tyler: Welcome to Thoughts on the Market. I'm Lindsay Tyler, Morgan Stanley's Lead Investment Grade TMT Credit Research Analyst, and I'm here with Michelle Wang, Head of Investment Grade Debt Coverage in Global Capital Markets. On this special episode, we're recording at the Morgan Stanley Technology, Media, and Telecom (TMT) Conference, and we will discuss the latest on the technology space from the fixed income perspective. It's Thursday, March 6th at 12 pm in San Francisco. What a week it's been. Last I heard, we had over 350 companies here in attendance. To set the stage for our discussion, technology has grown from about 2 percent of the broader investment grade market – about two decades ago – to almost 10 percent now, though that is still a relatively small percentage compared to its weighting in the equity market. So, can you address two questions? First, why was tech historically such a small part of investment grade? And second, what has driven the growth since?

Michelle Wang: Technology is still a relatively young industry, right? I'm in my 40s, and well over 90 percent of the companies that I cover were founded well within my lifetime. Add to that the fact that investment grade debt is, by definition, a later stage capital raising tool. When the business of these companies reaches sufficient scale and cash generation to be rated investment grade by the rating agencies, you wind up with just a small subset of the overall investment grade universe. The second question, on what has been driving the growth? Twofold. Number one, the organic maturation of the tech industry results in an increasing number of scaled investment grade companies.
And then secondly, the increasing use of debt as a cheap source of capital to fund their growth. This could be to fund R&D or CapEx or, in some cases, M&A.

Lindsay Tyler: Right, and I would just add in this context that my view for this year on technology credit is a more neutral one, against a backdrop of being more cautious on the communications and media space. Part of that is driven by the spread compression and the lack of dispersion that we see in the market. You mentioned M&A and capital allocation; I do think that financial policy and changes there, whether it's investment, M&A, or shareholder returns – that will be the main driver of credit spreads. But let's turn back to the conference – you know, I mentioned investment. Let's talk about investment. AI has dominated the conversation here at the conference the past two years, and this year is no different. Morgan Stanley's research department has four key investment themes. One of those is AI and tech diffusion. But from the fixed income angle, there is that focus on ongoing and upcoming hyperscaler AI CapEx needs.

Michelle Wang: Yep.

Lindsay Tyler: There are significant cash flows generated by many of these companies, but we just discussed that the investment grade tech space has grown relative to the index in recent history. Can you discuss the scale of the technology CapEx that we're talking about and the related implications from your perspective?

Michelle Wang: Let's actually get into some of the numbers. In the past three years, total hyperscaler CapEx has increased from $125 billion three years ago to $220 billion today, and it is expected to exceed $300 billion in 2027. The hyperscalers have all publicly stated that generative AI is key to their future growth aspirations. So, why are they spending all this money? They're investing heavily in the digital infrastructure to propel this growth.
These companies, however, as you've pointed out, are some of the most scaled, best capitalized companies in the entire world. They have a combined market cap of $9 trillion. Among them, their balance sheet cash ranges from $70 to $100 billion per company, and their annual free cash flow – the money that they generate organically – ranges from $30 to $75 billion. So they can certainly fund some of this CapEx organically. However, the unprecedented amount of spend for GenAI raises the probability that these hyperscalers could choose to raise capital externally.

Lindsay Tyler: Got it.

Michelle Wang: Now, how this capital is raised is where it gets really interesting. The most straightforward way to raise capital for a lot of these companies is just to do an investment grade bond deal.

Lindsay Tyler: Yep.

Michelle Wang: However, there are other, more customized funding solutions available for them to achieve objectives like more favorable accounting or rating agency treatment – ways for them to offload some of their CapEx to a private credit firm, even if that means that these occur at a higher cost of capital.

Lindsay Tyler: You touched on private credit. I'd love to dig in there – these bespoke capital solutions.

Michelle Wang: Right.

Lindsay Tyler: I have seen it in the semiconductor space and telecom infrastructure, but can you please shed some more light? How has this trend come to fruition? How are companies assessing the opportunity? And what other key implications would you flag?

Michelle Wang: Yeah, for the benefit of the audience, Lindsay, I think just to touch a little bit…

Lindsay Tyler: Some definitions,

Michelle Wang: Yes, some definitions around ...

Lindsay Tyler: Get some context.

Michelle Wang: What we're talking about.

Lindsay Tyler: Yes. So – I think what you're referring to is investment grade companies doing asset level financing.
Usually in conjunction with a private credit firm. And like all good financing trends that came before it, this one also resulted from the serendipitous intersection of supply and demand of capital. On the supply side, the private credit pocket of capital, driven by large pockets of insurance capital, is now north of $2 trillion, and it has increased 10x in scale in the past decade. So, the need to deploy these funds is driving private credit firms to seek out ways to invest in investment grade companies in a yield enhanced manner.

Lindsay Tyler: Right. And typically, we're saying 150 to 200 basis points greater than what an IG bond would yield.

Michelle Wang: That's exactly right. That's when it starts to get interesting for them, right? And then the demand for this type of capital has always existed in other industries that are more asset-heavy, like telcos. However, the new development of late is the demand for capital from tech, due to two megatrends that we're seeing. The first is semiconductors: building chip factories is an extremely capital-intensive exercise, so it creates a demand for capital. The second megatrend is what we've seen with the hyperscalers and generative AI: building data centers and digital infrastructure for generative AI is also extremely expensive, and that creates another pocket of demand for capital that private credit conveniently serves.

Lindsay Tyler: Right.

Michelle Wang: So look, I think we've talked about the ways that companies are using these tools. I'm interested to get your view, Lindsay, on the investor perspective.

Lindsay Tyler: Sure.

Michelle Wang: How do investors think about some of these more bespoke solutions?

Lindsay Tyler: I would say that with deals that have this touch of extra complexity, investor communication and understanding is all-important.
And I have found that some of these points you're raising – whether it's the spread pickup, the insurance capital at the asset managers, or layering in ratings implications and the deal terms – all of that is important for investors to get more comfortable with and to better understand these types of deals. The last topic I want us to address is the macro environment. This has been another key theme at the conference and in this recent earnings season. Whether it's rate moves this year, the talk of M&A, or tariffs – what's your sense of how companies are viewing and assessing macro in their decision making?

Michelle Wang: There are three components to how they're thinking about it. The first is the rate move. The fact that we're 50 to 60 basis points lower in Treasury yields in the past month is welcome news for any company looking to issue debt. The second thing I'll say here is about credit spreads: they remain extremely tight, speaking to the incredible resilience of the investment grade investor base. The last thing I'll talk about is the uncertainty, because that's what we're hearing a ton about in all the conversations we've had with companies presenting here at the conference.

Lindsay Tyler: Yeah. From my perspective, also the regulatory environment around that M&A – whether or not companies will make the move to be more acquisitive under the current new administration.

Michelle Wang: Right. So until the dust settles on some of these issues, it's really difficult as a corporate decision maker to do things like big transformative M&A, or to take a company public, when you don't know what could happen from both a market environment and, as you point out, a regulatory standpoint. The thing that's interesting is that raising debt capital as an investment grade company has some counter-cyclical dynamics to it.
Because risk-off sentiment usually translates into lower Treasury yields and a more favorable cost of debt. And then the second point is that when companies are risk averse, it sometimes drives cash hoarding behavior, right? So, companies will raise what they call rainy day liquidity and park it on the balance sheet – just to feel a little bit better about where their balance sheets are. To make sure they're in good shape…

Lindsay Tyler: Yeah, to deal with the maturities that they have right here in the near term.

Michelle Wang: That's exactly right. So, as a consequence of that, we do see some tailwinds for debt issuance volumes in an uncertain environment.

Lindsay Tyler: Got it. Well, I appreciate all your insights. This has been great. Thank you for taking the time, Michelle, to talk during such a busy week.

Michelle Wang: It's great speaking with you, Lindsay.

Lindsay Tyler: And thanks to everyone listening in to this special episode recorded at the Morgan Stanley TMT Conference in San Francisco. If you enjoy Thoughts on the Market, please leave us a review wherever you listen, and share the podcast with a friend or colleague today.
- Introduction and News Segment (0:10) - Trump and Pfizer CEO Introduction (2:56) - RFK Jr. and Direct-to-Consumer Drug Advertising (4:35) - Special Report on Trump's Potential Ban on COVID Vaccines (6:06) - Call for Mass Arrests and Full Disclosure (13:35) - The FDA as a Grave Threat to America (15:07) - Interview with Mike Ferris on UBI and Economic Collapse (26:01) - Music Video: Going Back in Time is Coming Home (30:40) - Commentary on the Song and Its Message (1:06:21) - Special Report: Humanity's Future with AI (1:07:06) - Conclusion and Call to Action (1:16:58) - Replacement Theory and British Leadership (1:18:22) - British Military's Weakness and Future Conflict (1:25:41) - Historical Context and American Independence (1:28:34) - Bank of England's Financial Crisis (1:31:20) - Exploring Tom Paine's Book on Elite Manipulation (1:35:04) - Jim Marrs' Book on Digital Age Mysteries (1:41:25) - Interview with Michael Ferris on AI and Gold (2:02:56) - The Role of AI in the Future Economy (2:21:46) - The Future of Work and Education (2:32:31) - The Importance of Decentralization in AI Development (2:33:28) - AI and Human Creativity (2:41:05) - Decentralized Agriculture and Local Robotics (2:43:51) - Future Outlook and Economic Revolution (2:45:59) - Confirmed Appointments and Potential Changes (2:47:50) - Humanity's Future with AI (2:50:33) - Military Operations and Cartel Threats (2:55:54) - Technological Solutions and Border Security (2:59:37) - Global Instability and Travel Advisories (3:00:08) - European Collapse and Future Outlook (3:02:53) - Censorship and the Fight for Free Speech (3:05:02) - Final Thoughts and Future Predictions (3:07:28)