Podcasts about AI research

  • 342 PODCASTS
  • 474 EPISODES
  • 40m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • LATEST: Oct 9, 2025

POPULARITY (chart by year, 2017–2024)


Best podcasts about AI research

Latest podcast episodes about AI research

IT Visionaries
Cisco's Vijoy Pandey: The New Internet, AI Agents, and Quantum Networks

IT Visionaries

Oct 9, 2025 · 61:04


Cisco's Vijoy Pandey - SVP & GM of Outshift by Cisco - explains how AI agents and quantum networks could completely redefine how software, infrastructure, and security function in the next decade.

You'll learn:
→ What “Agentic AI” and the “Internet of Agents” actually are
→ How Cisco open-sourced the Internet of Agents framework and why decentralization matters
→ The security threat of “store-now, decrypt-later” attacks—and how post-quantum cryptography will defend against them
→ How Outshift's “freedom to fail” model fuels real innovation inside a Fortune-500 company
→ Why the next generation of software will blur the line between humans, AI agents, and machines
→ The vision behind Cisco's Quantum Internet—and two real-world use cases you can see today: Quantum Sync and Quantum Alert

About Today's Guest: Meet Vijoy Pandey, the mind behind Cisco's Outshift—a team pushing the boundaries of what's next in AI, quantum computing, and the future internet. With 80+ patents to his name and a career spent redefining how systems connect and think, he's one of the few leaders truly building the next era of computing before the rest of us even see it coming.

Key Moments:
00:00 Meet Vijoy Pandey & Outshift's mission
04:30 The two hardest problems in computer science: Superintelligence & Quantum Computing
06:30 Why “freedom to fail” is Cisco's innovation superpower
10:20 Inside the Outshift model: incubating like a startup inside Cisco
21:00 What is Agentic AI? The rise of the Internet of Agents
27:00 AGNTCY.org and open-sourcing the Internet of Agents
32:00 What would an Internet of Agents actually look like?
38:19 Responsible AI & governance: putting guardrails in early
49:40 What is quantum computing? What is quantum networking?
55:27 The vision for a global Quantum Internet

Watch Next: https://youtu.be/-Jb2tWsAVwI?si=l79rdEGxB-i-Wrrn

This episode of IT Visionaries is brought to you by Meter - the company building better networks. Businesses today are frustrated with outdated providers, rigid pricing, and fragmented tools. Meter changes that with a single integrated solution that covers everything wired, wireless, and even cellular networking. They design the hardware, write the firmware, build the software, and manage it all so your team doesn't have to. That means you get fast, secure, and scalable connectivity without the complexity of juggling multiple providers. Thanks to Meter for sponsoring. Go to meter.com/itv to book a demo.

IT Visionaries is made by the team at Mission.org. Learn more about our media studio and network of podcasts at mission.org. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The Feed The Official Libsyn Podcast
303 Landing Celebrities, Apple Podcasts Promotions, and Avoiding AI Podcast Pitfalls

The Feed The Official Libsyn Podcast

Oct 9, 2025 · 63:33


SORRY THE EPISODE IS LATE! The movie star to podcast pipeline, Spotify's founder Daniel Ek stepping down, discussing how one third of U.S. adults get their news from podcasts, do we just ask AI questions and not vet the answers? Playing with Sora 2, how to get your podcast promoted in Apple Podcasts, and of course stats: mean and median download numbers. Audience feedback drives the show. We'd love for you to contact us and keep the conversation going! Email thefeed@libsyn.com, call 412-573-1934, or leave us a message on Speakpipe!

Critical Thinking - Bug Bounty Podcast
Episode 142: gr3pme's full-time hunting journey update, insane AI research, and some light news

Critical Thinking - Bug Bounty Podcast

Oct 2, 2025 · 54:50


Episode 142: In this episode of Critical Thinking - Bug Bounty Podcast Rez0 and Gr3pme join forces to discuss Websocket research, Meta's $111,750 bug, PROMISQROUTE, and the opportunities afforded by going full time in Bug Bounty.

Follow us on twitter at: https://x.com/ctbbpodcast
Got any ideas and suggestions? Feel free to send us any feedback here: info@criticalthinkingpodcast.io
Shoutout to YTCracker for the awesome intro music!

====== Links ======
Follow your hosts Rhynorater and Rez0 on Twitter:

====== Ways to Support CTBBPodcast ======
Hop on the CTBB Discord at https://ctbb.show/discord!
We also do Discord subs at $25, $10, and $5 - premium subscribers get access to private masterclasses, exploits, tools, scripts, un-redacted bug reports, etc.
You can also find some hacker swag at https://ctbb.show/merch!
Today's Sponsor: ThreatLocker. Check out ThreatLocker DAC
Today's Guest: https://x.com/gr3pme

====== This Week in Bug Bounty ======
New Monthly Dojo challenge and Dojo UI design
The ultimate Bug Bounty guide to exploiting race condition vulnerabilities in web applications
Watch Our boy Brandyn on the TV

====== Resources ======
murtasec
WebSocket Turbo Intruder: Unearthing the WebSocket Goldmine
Chaining Path Traversal Vulnerability to RCE — Meta's $111,750 Bug
Finding vulnerabilities in modern web apps using Claude Code and OpenAI Codex
Mind the Gap
PROMISQROUTE

====== Timestamps ======
(00:00:00) Introduction
(00:05:16) Full Time Bug Bounty and Business Startups
(00:15:50) Websockets
(00:22:17) Meta's $111,750 Bug
(00:28:38) Finding vulns using Claude Code and OpenAI Codex
(00:39:32) Time-of-Check to Time-of-Use Vulns in LLM-Enabled Agents
(00:45:22) PROMISQROUTE

The MAD Podcast with Matt Turck
Sonnet 4.5 & the AI Plateau Myth — Sholto Douglas (Anthropic)

The MAD Podcast with Matt Turck

Oct 2, 2025 · 70:03


Sholto Douglas, a top AI researcher at Anthropic, discusses the breakthroughs behind Claude Sonnet 4.5—the world's leading coding model—and why we might be just 2-3 years from AI matching human-level performance on most computer-facing tasks.

You'll discover why RL on language models suddenly started working in 2024, how agents maintain coherency across 30-hour coding sessions through self-correction and memory systems, and why the "bitter lesson" of scale keeps proving clever priors wrong.

Sholto shares his path from top-50 world fencer to Google's Gemini team to Anthropic, explaining why great blog posts sometimes matter more than PhDs in AI research. He discusses the culture at big AI labs and why Anthropic is laser-focused on coding (it's the fastest path to both economic impact and AI-assisted AI research). Sholto also discusses how the training pipeline is still "held together by duct tape" with massive room to improve, and why every benchmark created shows continuous rapid progress with no plateau in sight.

Bold predictions: individuals will soon manage teams of AI agents working 24/7, robotics is about to experience coding-level breakthroughs, and policymakers should urgently track AI progress on real economic tasks. A clear-eyed look at where AI stands today and where it's headed in the next few years.

Anthropic
Website - https://www.anthropic.com
Twitter - https://x.com/AnthropicAI

Sholto Douglas
LinkedIn - https://www.linkedin.com/in/sholto
Twitter - https://x.com/_sholtodouglas

FIRSTMARK
Website - https://firstmark.com
Twitter - https://twitter.com/FirstMarkCap

Matt Turck (Managing Director)
LinkedIn - https://www.linkedin.com/in/turck/
Twitter - https://twitter.com/mattturck

(00:00) Intro
(01:09) The Rapid Pace of AI Releases at Anthropic
(02:49) Understanding Opus, Sonnet, and Haiku Model Tiers
(04:14) From Australian Fencer to AI Researcher
(12:01) The YouTube Effect: Mastery Through Observation
(16:16) Breaking Into AI Research Without Traditional Signals
(18:29) Google, Gemini, and Building Inference Stacks
(23:05) Why Anthropic? Culture and Mission Differences
(25:08) What Is "Taste" in AI Research?
(31:46) Sonnet 4.5: Best Coding Model in the World
(36:40) From 7 Hours to 30 Hours: The Long-Context Breakthrough
(38:41) How AI Agents Self-Correct and Maintain Coherency
(43:13) The Role of Memory in Extended Coding Sessions
(46:28) Breakthroughs Behind the Performance Jump
(47:42) Pre-Training vs. RL: Textbooks vs. Worked Problems
(52:11) Test-Time Compute: The New Scaling Axis
(55:55) Why RL Finally Started Working on LLMs in 2024
(59:38) Defining AGI: Better Than Humans at Computer Tasks
(01:02:05) Are We Hitting a Plateau? Evidence Says No
(01:03:41) The GDP Eval: Measuring AI Across Economic Sectors
(01:05:47) Preparing for 10-100x Individual Leverage & Robotics

The Jerry Agar Show
Party for Two - AI research position - Small businesses threatened by rent - Yankees or Red Sox?

The Jerry Agar Show

Oct 1, 2025 · 37:18


Mike Kakuk from AM800 joins Jerry at the party table for Party for Two. Jerry weighs in on the AI research position that is only open to disabled women. Rent is the biggest threat to Toronto's small businesses; Brad Poulos weighs in on this with Jerry. Plus - should you root for the Yankees or Red Sox to face the Blue Jays?

Invest Like the Best with Patrick O'Shaughnessy
Dylan Patel - Inside the Trillion-Dollar AI Buildout - [Invest Like the Best, EP.442]

Invest Like the Best with Patrick O'Shaughnessy

Sep 30, 2025 · 118:15


My guest today is Dylan Patel. Dylan is the founder and CEO of SemiAnalysis. At SemiAnalysis Dylan tracks the semiconductor supply chain and AI infrastructure buildout with unmatched granularity—literally watching data centers get built through satellite imagery and mapping hundreds of billions in capital flows. Our conversation explores the massive industrial buildout powering AI, from the strategic chess game between OpenAI, Nvidia, and Oracle to why we're still in the first innings of post-training and reinforcement learning. Dylan explains infrastructure realities like electrician wages doubling and companies using diesel truck engines for emergency power, while making a sobering case about US-China competition and why America needs AI to succeed. We discuss his framework for where value will accrue in the stack, why traditional SaaS economics are breaking down under AI's high cost of goods sold, and which hardware bottlenecks matter most. This is one of the most comprehensive views of the physical reality underlying the AI revolution you'll hear anywhere. Please enjoy my conversation with Dylan Patel.

For the full show notes, transcript, and links to mentioned content, check out the episode page here.

-----

This episode is brought to you by Ramp. Ramp's mission is to help companies manage their spend in a way that reduces expenses and frees up time for teams to work on more valuable projects. Go to Ramp.com/invest to sign up for free and get a $250 welcome bonus.

This episode is brought to you by Ridgeline. Ridgeline has built a complete, real-time, modern operating system for investment managers. It handles trading, portfolio management, compliance, customer reporting, and much more through an all-in-one real-time cloud platform. Head to ridgelineapps.com to learn more about the platform.

This episode is brought to you by AlphaSense. AlphaSense has completely transformed the research process with cutting-edge AI technology and a vast collection of top-tier, reliable business content. Invest Like the Best listeners can get a free trial now at Alpha-Sense.com/Invest and experience firsthand how AlphaSense and Tegus help you make smarter decisions faster.

-----

Editing and post-production work for this episode was provided by The Podcast Consultant (https://thepodcastconsultant.com).

Show Notes:
(00:00:00) Welcome to Invest Like the Best
(00:05:12) The AI Infrastructure Buildout
(00:08:25) Scaling AI Models and Compute Needs
(00:11:44) Reinforcement Learning and AI Training
(00:14:07) The Future of AI and Compute
(00:17:47) AI in Practical Applications
(00:22:29) The Importance of Data and Environments in AI Training
(00:29:45) Human Analogies in AI Development
(00:40:34) The Challenge of Infinite Context in AI Models
(00:44:08) The Bullish and Bearish Perspectives on AI
(00:48:25) The Talent Wars in AI Research
(00:56:54) The Power Dynamics in AI and Tech
(01:13:29) The Future of AI and Its Economic Impact
(01:18:55) The Gigawatt Data Center Boom
(01:21:12) Supply Chain and Workforce Dynamics
(01:24:23) US vs. China: AI and Power Dynamics
(01:37:16) AI Startups and Innovations
(01:52:44) The Changing Economics of Software
(01:58:12) The Kindest Thing

The Data Exchange with Ben Lorica
How to Build and Optimize AI Research Agents

The Data Exchange with Ben Lorica

Sep 25, 2025 · 38:41


Jakub Zavrel, CEO of Zeta Alpha, joins the podcast to discuss the practical evolution from traditional enterprise search to powerful “deep research” systems. Subscribe to the Gradient Flow Newsletter

Unsupervised Learning
Ep 76: Ari Morcos from Datalogy AI and Rob Toews from Radical VC on AI Talent Wars, xAI's $200B Valuation, & Google's Comeback

Unsupervised Learning

Sep 24, 2025 · 62:54


This episode features a deep dive into the current state of AI model progress with Ari Morcos (CEO of Datalogy AI and former DeepMind/Meta researcher) and Rob Toews (partner at Radical Ventures). The conversation tackles whether model progress is genuinely slowing down or simply shifting into new paradigms, exploring the role of reinforcement learning in scaling capabilities beyond traditional pre-training. They examine the talent wars reshaping AI labs, Google's resurgence with Gemini, the sustainability of massive valuations for companies like OpenAI and Anthropic, and the infrastructure ecosystem supporting this rapid evolution. The discussion weaves together technical insights on data quality, synthetic data generation, and RL environments with strategic perspectives on acquisitions, regulatory challenges, and the future intersection of AI with physical robotics and brain-computer interfaces.

(0:00) Intro
(2:59) Debate on Model Progress
(8:03) Challenges in AI Generalization and RL Environments
(15:44) Enterprise AI and Custom Models
(20:27) Google's AI Ascent and Market Impact
(24:30) Valuations and Future of AI Companies
(27:55) Evaluating xAI's Position in the AI Landscape
(30:31) The Talent War in AI Research
(35:45) The Impact of Acquihires on Startups
(42:35) The Future of AI Infrastructure
(48:28) The Potential of Brain-Computer Interfaces
(54:45) The Evolution of AI and Robotics
(1:00:50) The Importance of Data in AI Research

With your co-hosts:
@jacobeffron - Partner at Redpoint, Former PM Flatiron Health
@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn
@ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare)
@jordan_segall - Partner at Redpoint

AI for Non-Profits
Julius: A Revolutionary Step for AI Research?

AI for Non-Profits

Sep 22, 2025 · 8:27


Researchers are calling Julius a milestone. Could it really be the foundation for AI that better understands us? We explore this possibility.

Try AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle

UiPath Daily
Examining Behind-the-Scenes AI Research with Mistral's Deep Research

UiPath Daily

Sep 18, 2025 · 12:02


In this episode, we're unveiling revolutionary AI capabilities from the perspective of Mistral's Deep Research. Discover how this work is driving the next wave of innovation and setting a new standard in AI. We'll break down the most important insights, explore real-world implications, and share why this matters now more than ever.

Try AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Lance: https://www.linkedin.com/in/lance-martin-64a33b5/
How Context Fails: https://www.dbreunig.com/2025/06/22/how-contexts-fail-and-how-to-fix-them.html
How New Buzzwords Get Created: https://www.dbreunig.com/2025/07/24/why-the-term-context-engineering-matters.html
Context Engineering: https://x.com/RLanceMartin/status/1948441848978309358
https://rlancemartin.github.io/2025/06/23/context_engineering/
https://docs.google.com/presentation/d/16aaXLu40GugY-kOpqDU4e-S0hD1FmHcNyF0rRRnb1OU/edit?usp=sharing
Manus Post: https://manus.im/blog/Context-Engineering-for-AI-Agents-Lessons-from-Building-Manus
Cognition Post: https://cognition.ai/blog/dont-build-multi-agents
Multi-Agent Researcher: https://www.anthropic.com/engineering/multi-agent-research-system
Human-in-the-loop + Memory: https://github.com/langchain-ai/agents-from-scratch

- Bitter Lesson in AI Engineering -
Hyung Won Chung on the Bitter Lesson in AI Research: https://www.youtube.com/watch?v=orDKvo8h71o
Bitter Lesson w/ Claude Code: https://www.youtube.com/watch?v=Lue8K2jqfKk&t=1s
Learning the Bitter Lesson in AI Engineering: https://rlancemartin.github.io/2025/07/30/bitter_lesson/
Open Deep Research: https://github.com/langchain-ai/open_deep_research
https://academy.langchain.com/courses/deep-research-with-langgraph
Scaling and building things that "don't yet work": https://www.youtube.com/watch?v=p8Jx4qvDoSo

- Frameworks -
Roast framework at Shopify / standardization of orchestration tools: https://www.youtube.com/watch?v=0NHCyq8bBcM
MCP adoption within Anthropic / standardization of protocols: https://www.youtube.com/watch?v=xlEQ6Y3WNNI
How to think about frameworks: https://blog.langchain.com/how-to-think-about-agent-frameworks/
RAG benchmarking: https://rlancemartin.github.io/2025/04/03/vibe-code/
Simon's talk with memory-gone-wrong: https://simonwillison.net/2025/Jun/6/six-months-in-llms/

Unsupervised Learning
Ep 74: Chief Scientist of Together.AI Tri Dao On The End of Nvidia's Dominance, Why Inference Costs Fell & The Next 10X in Speed

Unsupervised Learning

Sep 10, 2025 · 58:37


Fill out this short listener survey to help us improve the show: https://forms.gle/bbcRiPTRwKoG2tJx8

Tri Dao, Chief Scientist at Together AI and Princeton professor who created Flash Attention and Mamba, discusses how inference optimization has driven costs down 100x since ChatGPT's launch through memory optimization, sparsity advances, and hardware-software co-design. He predicts the AI hardware landscape will shift from Nvidia's current 90% dominance to a more diversified ecosystem within 2-3 years, as specialized chips emerge for distinct workload categories: low-latency agentic systems, high-throughput batch processing, and interactive chatbots. Dao shares his surprise at AI models becoming genuinely useful for expert-level work, making him 1.5x more productive at GPU kernel optimization through tools like Claude Code and O1. The conversation explores whether current transformer architectures can reach expert-level AI performance or if approaches like mixture of experts and state space models are necessary to achieve AGI at reasonable costs. Looking ahead, Dao sees another 10x cost reduction coming from continued hardware specialization, improved kernels, and architectural advances like ultra-sparse models, while emphasizing that the biggest challenge remains generating expert-level training data for domains lacking extensive internet coverage.

(0:00) Intro
(1:58) Nvidia's Dominance and Competitors
(4:01) Challenges in Chip Design
(6:26) Innovations in AI Hardware
(9:21) The Role of AI in Chip Optimization
(11:38) Future of AI and Hardware Abstractions
(16:46) Inference Optimization Techniques
(33:10) Specialization in AI Inference
(35:18) Deep Work Preferences and Low Latency Workloads
(38:19) Fleet Level Optimization and Batch Inference
(39:34) Evolving AI Workloads and Open Source Tooling
(41:15) Future of AI: Agentic Workloads and Real-Time Video Generation
(44:35) Architectural Innovations and AI Expert Level
(50:10) Robotics and Multi-Resolution Processing
(52:26) Balancing Academia and Industry in AI Research
(57:37) Quickfire

With your co-hosts:
@jacobeffron - Partner at Redpoint, Former PM Flatiron Health
@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn
@ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare)
@jordan_segall - Partner at Redpoint

Looks Like Work
From AI Research to Alternative Healing (with Palveshey Tariq)

Looks Like Work

Sep 9, 2025 · 48:02


In this transformative conversation, Palveshey Tariq—founder of Alternative Coaching Methods—shares her journey from quantum physics and AI research to guiding others through plant medicine ceremonies. After collecting all the accolades in STEM but feeling empty inside, a suic*de attempt led to a two-year journey of meditating 4-8 hours daily and completely rewiring her relationship with herself. What started as reaching for psilocybin mushrooms as an escape became five hours of taking ownership of her role in her own suffering. The conversation explores how we trade our authentic selves for conditional love starting in childhood, why observing our behavior (internally and externally) changes everything, and how fear drives most of our achievements until we learn to operate from integrity instead. Palveshey reveals her morning routine, why "discipline is the highest form of self-respect," and how asking "How do you know it's true?" can dismantle an entire belief system.

Key Topics:
From quantum physics to consciousness: the observer effect applied internally
Why high achievement doesn't equal high performance (vitality and balance)
Trading authenticity for love: "I am who I think you think I am"
A psilocybin ceremony that wasn't an escape but a mirror
The body screaming what the mind ignores: menstrual pain as communication
Morning routine as self-respect: meditation, walking, yoga, reading, then clients

Notable Quotes:
"Plant medicines shed light on all the dark areas and assign you homework, but you still have to go home and do the work."
"There's a fine line between owning your shit and being full of it."
"In order to become a graceful master, we need to look like a foolish beginner."
"In order for a spiritual awakening to happen, there has to be a mental breakdown."
"Creation and contribution are the antidotes for comparison and criticism."

Palveshey's Powerful Question: "How do you know it's true?"—and the crucial follow-up: "What's truer?"

Key Lessons:
Our subconscious starts believing love is conditional around age 2-3
The difference between religion and spirituality/devotion
Integration is the most important part of plant medicine work
Your body whispers before it screams—listen early
Four motivators: fear, desire, duty, or love—know which drives you
Turn "why me?" into "watch me"
Right and wrong are illusions—feel what's right in the moment

Resources Mentioned:
Alternative Coaching Methods
The Diamond Cutter (and other books mentioned on this season - affiliate links)
No Bullshit Spirituality newsletter - one hard truth, one simple turnaround
LinkedIn for Palveshey's writings
Gravitas - 1:1 accelerators for biz owners stepping into their founder era
The Curiosity Lab - Strategy sessions for leaders by Chedva
You're Gonna Want to Sit Down for This - bi-weekly email packed with lessons and free tools
Chedva's newsletter - Weekly musings and questions

This Week in Google (MP3)
IM 835: Glitch Lord - Inside OpenAI's Secret Struggles and the 'Empire of AI' With Karen Hao

This Week in Google (MP3)

Sep 4, 2025 · 164:34 · Transcription Available


Go behind the curtain at OpenAI as bestselling author Karen Hao shares stories of infighting, ego, and shifting agendas. Find out why even OpenAI's security had her face on alert during her investigation.

Karen Hao reveals OpenAI's secretive culture and early ambitions
OpenAI's shifting leadership and transparency: from nonprofit roots to Big Tech power
Defining AGI: moving goalposts, internal rifts, and philosophy debates
OpenAI's founders dissected: Altman, Brockman, and Sutskever's styles and motives
Critiquing the AI industry's resource grabs and "AI imperialism"
How commercialization narrowed AI research and the dominance of transformers
China's AI threat as Silicon Valley's favorite justification, debunked
Karen Hao details reporting process and boardroom chaos at OpenAI
GPT-5 skepticism: raised expectations, lackluster reality, and demo fatigue
Karen Hao's bottom line: AI's current trajectory isn't inevitable — pushback is needed
Harper Reed shares vibe coding workflows using Claude Code
AI commoditization—why all major models start to feel the same
Western vs. Chinese open-source models and global AI power shifts
Google antitrust ruling: AI's rise dissolves traditional search monopoly
"Algorithm movies" spark debate over art, entertainment, and AI's creative impact
Meta's AI talent grab backfires amid exits and cash-fueled drama
Anthropic's "historic" author settlement likely cements fair use for AI training
DIY facial recognition: Citizen activists unmask ICE using AI tools
Picks: Byte Magazine's 50th, AI werewolf games, Berghain bouncer AI test, and arthouse film "Perfect Days"

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Co-Host: Harper Reed
Guest: Karen Hao

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors:
threatlocker.com/twit
monarchmoney.com with code IM
helixsleep.com/twit
pantheon.io

All TWiT.tv Shows (MP3)
Intelligent Machines 835: Glitch Lord

All TWiT.tv Shows (MP3)

Sep 4, 2025 · 164:34 · Transcription Available


Go behind the curtain at OpenAI as bestselling author Karen Hao shares stories of infighting, ego, and shifting agendas. Find out why even OpenAI's security had her face on alert during her investigation.

Karen Hao reveals OpenAI's secretive culture and early ambitions
OpenAI's shifting leadership and transparency: from nonprofit roots to Big Tech power
Defining AGI: moving goalposts, internal rifts, and philosophy debates
OpenAI's founders dissected: Altman, Brockman, and Sutskever's styles and motives
Critiquing the AI industry's resource grabs and "AI imperialism"
How commercialization narrowed AI research and the dominance of transformers
China's AI threat as Silicon Valley's favorite justification, debunked
Karen Hao details reporting process and boardroom chaos at OpenAI
GPT-5 skepticism: raised expectations, lackluster reality, and demo fatigue
Karen Hao's bottom line: AI's current trajectory isn't inevitable — pushback is needed
Harper Reed shares vibe coding workflows using Claude Code
AI commoditization—why all major models start to feel the same
Western vs. Chinese open-source models and global AI power shifts
Google antitrust ruling: AI's rise dissolves traditional search monopoly
"Algorithm movies" spark debate over art, entertainment, and AI's creative impact
Meta's AI talent grab backfires amid exits and cash-fueled drama
Anthropic's "historic" author settlement likely cements fair use for AI training
DIY facial recognition: Citizen activists unmask ICE using AI tools
Picks: Byte Magazine's 50th, AI werewolf games, Berghain bouncer AI test, and arthouse film "Perfect Days"

Get "Empire of AI" (Amazon Affiliate): https://amzn.to/4lRra9h

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Co-Host: Harper Reed
Guest: Karen Hao

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors:
threatlocker.com/twit
monarchmoney.com with code IM
helixsleep.com/twit
pantheon.io

Radio Leo (Audio)
Intelligent Machines 835: Glitch Lord

Radio Leo (Audio)

Sep 4, 2025 · 164:34 · Transcription Available


Go behind the curtain at OpenAI as bestselling author Karen Hao shares stories of infighting, ego, and shifting agendas. Find out why even OpenAI's security had her face on alert during her investigation.

Karen Hao reveals OpenAI's secretive culture and early ambitions
OpenAI's shifting leadership and transparency: from nonprofit roots to Big Tech power
Defining AGI: moving goalposts, internal rifts, and philosophy debates
OpenAI's founders dissected: Altman, Brockman, and Sutskever's styles and motives
Critiquing the AI industry's resource grabs and "AI imperialism"
How commercialization narrowed AI research and the dominance of transformers
China's AI threat as Silicon Valley's favorite justification, debunked
Karen Hao details reporting process and boardroom chaos at OpenAI
GPT-5 skepticism: raised expectations, lackluster reality, and demo fatigue
Karen Hao's bottom line: AI's current trajectory isn't inevitable — pushback is needed
Harper Reed shares vibe coding workflows using Claude Code
AI commoditization—why all major models start to feel the same
Western vs. Chinese open-source models and global AI power shifts
Google antitrust ruling: AI's rise dissolves traditional search monopoly
"Algorithm movies" spark debate over art, entertainment, and AI's creative impact
Meta's AI talent grab backfires amid exits and cash-fueled drama
Anthropic's "historic" author settlement likely cements fair use for AI training
DIY facial recognition: Citizen activists unmask ICE using AI tools
Picks: Byte Magazine's 50th, AI werewolf games, Berghain bouncer AI test, and arthouse film "Perfect Days"

Get "Empire of AI" (Amazon Affiliate): https://amzn.to/4lRra9h

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Co-Host: Harper Reed
Guest: Karen Hao

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors:
threatlocker.com/twit
monarchmoney.com with code IM
helixsleep.com/twit
pantheon.io

This Week in Google (Video HI)
IM 835: Glitch Lord - Inside OpenAI's Secret Struggles and the 'Empire of AI' With Karen Hao

This Week in Google (Video HI)

Sep 4, 2025 · 164:34 · Transcription Available


Go behind the curtain at OpenAI as bestselling author Karen Hao shares stories of infighting, ego, and shifting agendas. Find out why even OpenAI's security had her face on alert during her investigation.

Karen Hao reveals OpenAI's secretive culture and early ambitions
OpenAI's shifting leadership and transparency: from nonprofit roots to Big Tech power
Defining AGI: moving goalposts, internal rifts, and philosophy debates
OpenAI's founders dissected: Altman, Brockman, and Sutskever's styles and motives
Critiquing the AI industry's resource grabs and "AI imperialism"
How commercialization narrowed AI research and the dominance of transformers
China's AI threat as Silicon Valley's favorite justification, debunked
Karen Hao details reporting process and boardroom chaos at OpenAI
GPT-5 skepticism: raised expectations, lackluster reality, and demo fatigue
Karen Hao's bottom line: AI's current trajectory isn't inevitable — pushback is needed
Harper Reed shares vibe coding workflows using Claude Code
AI commoditization—why all major models start to feel the same
Western vs. Chinese open-source models and global AI power shifts
Google antitrust ruling: AI's rise dissolves traditional search monopoly
"Algorithm movies" spark debate over art, entertainment, and AI's creative impact
Meta's AI talent grab backfires amid exits and cash-fueled drama
Anthropic's "historic" author settlement likely cements fair use for AI training
DIY facial recognition: Citizen activists unmask ICE using AI tools
Picks: Byte Magazine's 50th, AI werewolf games, Berghain bouncer AI test, and arthouse film "Perfect Days"

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Co-Host: Harper Reed
Guest: Karen Hao

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors:
threatlocker.com/twit
monarchmoney.com with code IM
helixsleep.com/twit
pantheon.io

All TWiT.tv Shows (Video LO)
Intelligent Machines 835: Glitch Lord

All TWiT.tv Shows (Video LO)

Sep 4, 2025 · 164:34 · Transcription Available


Go behind the curtain at OpenAI as bestselling author Karen Hao shares stories of infighting, ego, and shifting agendas. Find out why even OpenAI's security had her face on alert during her investigation.

Karen Hao reveals OpenAI's secretive culture and early ambitions
OpenAI's shifting leadership and transparency: from nonprofit roots to Big Tech power
Defining AGI: moving goalposts, internal rifts, and philosophy debates
OpenAI's founders dissected: Altman, Brockman, and Sutskever's styles and motives
Critiquing the AI industry's resource grabs and "AI imperialism"
How commercialization narrowed AI research and the dominance of transformers
China's AI threat as Silicon Valley's favorite justification, debunked
Karen Hao details reporting process and boardroom chaos at OpenAI
GPT-5 skepticism: raised expectations, lackluster reality, and demo fatigue
Karen Hao's bottom line: AI's current trajectory isn't inevitable — pushback is needed
Harper Reed shares vibe coding workflows using Claude Code
AI commoditization—why all major models start to feel the same
Western vs. Chinese open-source models and global AI power shifts
Google antitrust ruling: AI's rise dissolves traditional search monopoly
"Algorithm movies" spark debate over art, entertainment, and AI's creative impact
Meta's AI talent grab backfires amid exits and cash-fueled drama
Anthropic's "historic" author settlement likely cements fair use for AI training
DIY facial recognition: Citizen activists unmask ICE using AI tools
Picks: Byte Magazine's 50th, AI werewolf games, Berghain bouncer AI test, and arthouse film "Perfect Days"

Get "Empire of AI" (Amazon Affiliate): https://amzn.to/4lRra9h

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Co-Host: Harper Reed
Guest: Karen Hao

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors:
threatlocker.com/twit
monarchmoney.com with code IM
helixsleep.com/twit
pantheon.io

Radio Leo (Video HD)
Intelligent Machines 835: Glitch Lord

Radio Leo (Video HD)

Sep 4, 2025 · 164:34 · Transcription Available


Go behind the curtain at OpenAI as bestselling author Karen Hao shares stories of infighting, ego, and shifting agendas. Find out why even OpenAI's security had her face on alert during her investigation.

Karen Hao reveals OpenAI's secretive culture and early ambitions
OpenAI's shifting leadership and transparency: from nonprofit roots to Big Tech power
Defining AGI: moving goalposts, internal rifts, and philosophy debates
OpenAI's founders dissected: Altman, Brockman, and Sutskever's styles and motives
Critiquing the AI industry's resource grabs and "AI imperialism"
How commercialization narrowed AI research and the dominance of transformers
China's AI threat as Silicon Valley's favorite justification, debunked
Karen Hao details reporting process and boardroom chaos at OpenAI
GPT-5 skepticism: raised expectations, lackluster reality, and demo fatigue
Karen Hao's bottom line: AI's current trajectory isn't inevitable — pushback is needed
Harper Reed shares vibe coding workflows using Claude Code
AI commoditization—why all major models start to feel the same
Western vs. Chinese open-source models and global AI power shifts
Google antitrust ruling: AI's rise dissolves traditional search monopoly
"Algorithm movies" spark debate over art, entertainment, and AI's creative impact
Meta's AI talent grab backfires amid exits and cash-fueled drama
Anthropic's "historic" author settlement likely cements fair use for AI training
DIY facial recognition: Citizen activists unmask ICE using AI tools
Picks: Byte Magazine's 50th, AI werewolf games, Berghain bouncer AI test, and arthouse film "Perfect Days"

Get "Empire of AI" (Amazon Affiliate): https://amzn.to/4lRra9h

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Co-Host: Harper Reed
Guest: Karen Hao

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors:
threatlocker.com/twit
monarchmoney.com with code IM
helixsleep.com/twit
pantheon.io

The Conversation
The Conversation: UH AI research tool; Student club cultivates green thumbs

The Conversation

Aug 27, 2025 · 51:50


University of Hawaiʻi climate scientist Matthew Widlansky develops a new AI tool to help researchers explore complex data; Kaimuki High School teacher Chu Hong shares what her students are cultivating in WORMS club

Cross Talk
Separating help from hype in AI research

Cross Talk

Aug 27, 2025 · 55:10


Today on the show we have a professor of medicine, a computer scientist, and an English professor, all talking about Artificial Intelligence and research. They dive into the role AI can play and its potential, and they separate help from hype. Guests: Michelle Ploughman, professor at the faculty of medicine at MUN; Xianta Jiang, professor in MUN's department of computer science; Aaron Tucker, English professor and author.

Eye On A.I.
#281 Leon Song: The Research Driving Next-Gen Open-Source Models (Together AI)

Eye On A.I.

Aug 25, 2025 · 50:16


AGNTCY - Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support.

In this episode of Eye on AI, we sit down with Leon Song, VP of Research at Together AI, to explore how open-source models and cutting-edge infrastructure are reshaping the AI landscape.

From speculative decoding to FlashAttention and RedPajama, Leon shares how Together AI is building one of the fastest, most cost-efficient AI clouds—helping enterprises fine-tune, deploy, and scale open-source models at the level of GPT-4 and beyond. We dive into Leon's journey from leading DeepSpeed and AI for Science at Microsoft to driving system-level innovation at Together AI.

Topics include:
The future of open-source vs. closed-source AI models
Breakthroughs in speculative decoding for faster inference
How Together AI's cloud platform empowers enterprises with data sovereignty and model ownership
Why open-source models like DeepSeek R1 and Llama 4 are now rivaling proprietary systems
The role of GPUs vs. ASIC accelerators in scaling AI infrastructure

Whether you're an AI researcher, enterprise leader, or curious about where generative AI is heading, this conversation reveals the technology and strategy behind one of the most important players in the open-source AI movement.

Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI

Pharma and BioTech Daily
Pharma and Biotech Daily: Eli Lilly Raises Prices, RFK Jr. Supports mRNA Cancer Vaccines

Pharma and BioTech Daily

Aug 18, 2025 · 0:53


Good morning from Pharma and Biotech Daily: the podcast that gives you only what's important to hear in the Pharma and Biotech world. Eli Lilly plans to raise drug prices in Europe in response to Trump's Most Favored Nation policy. RFK Jr. has expressed support for mRNA cancer vaccines after cutting mRNA BARDA contracts. Precigen wins FDA approval for an immunotherapy for recurrent respiratory papillomatosis, while Kennedy revives a childhood vaccine safety group and suggests mRNA vaccines could be effective for cancer. Genscript announces a new era of innovation, Bausch Health closes a California site, and Eli Lilly partners with Superluminal for obesity treatments. Trump delays tariffs, and upcoming webinars focus on AI in research, integrated supply chain strategies, and data lakes. Job opportunities include positions at Utro Biopharma, BioMarin Pharmaceutical, and AbbVie.

ASUG Talks
ASUG Talks Roundtable: Unpacking ASUG Cloud and AI Research

ASUG Talks

Aug 17, 2025 · 22:28


ASUG recently collaborated with Intel and Microsoft to examine two of the hottest topics in the SAP ecosystem: cloud migration and AI adoption. As SAP continues spurring its customer base to embrace the cloud, citing next-generation AI solutions as a critical benefit of this shift, this research sheds light on which cloud environments ASUG members are choosing and what processes they are using AI solutions to improve. Blake Baltazar, ASUG Senior Project & Research Analyst, and Marissa Gilbert, ASUG Research Director, join ASUG Talks this week to break down this research and its key findings.

Key Insights:
What factors drive cloud adoption
The cost ASUG members reported paying for their cloud services
What AI solutions respondents are leveraging

Related Resources:
Read ASUG's SAP S/4HANA focused research and listen to a recent podcast with the ASUG research team on the key findings.
Read how Carhartt leveraged SAP Commerce Cloud to modernize its global commerce infrastructure.
Join ASUG on Aug. 20 for an Ask Me Anything Community Conversation on SAP Process Integration & Orchestration (PI/PO).

Campus Technology Insider
Faculty AI Grants, NSF and AI Research, & Tech Tactics Agenda Announced: News of the Week (8/8/25)

Campus Technology Insider

Aug 8, 2025 · 2:08


In this episode of Campus Technology Insider Podcast Shorts, Rhea Kelly highlights the latest in higher education technology. Touro University launches a Faculty Innovation Grant Program to develop AI-enhanced curricula, aiming for a fully AI-enabled institution by 2025. The National Science Foundation announces a $100 million investment in AI research, focusing on mental health, materials discovery, and human-AI collaboration. Additionally, the Tech Tactics in Education conference will address innovation challenges in K-12 and higher education. Register now for the free virtual event on September 25th.

00:00 Introduction to Campus Technology Insider
00:15 Touro University's AI-Enhanced Curricula Initiative
00:47 National Science Foundation's $100 Million AI Investment
01:27 Tech Tactics in Education Conference 2025
01:55 Conclusion and Further Resources

Source links:
Touro University Launches Faculty Innovation Grant Program to Advance Integration of AI into Teaching and Learning
NSF Invests $100 Million in National AI Research Institutes
September 2025 Tech Tactics in Education Conference Agenda Announced

Campus Technology Insider Podcast Shorts are curated by humans and narrated by AI.

A Podcast of Biblical Proportions
Collab: New AI Research into Biblical Authors Supports our Theories

A Podcast of Biblical Proportions

Aug 3, 2025 · 34:23


New AI research into the Documentary Hypothesis and the authorship of the books of Samuel reinforces our theories. Dr. Rutger Vos from the University of Leiden joins Gil to discuss.

Read about this research here
Join our tribe on Patreon!

Check out these cool pages on the podcast's website:
Home Page
Who wrote the Bible: Timeline and authors
Ancient maps: easy to follow maps to see which empire ruled what and when
Click here to see Exodus divided into "sources" according to the Documentary Hypothesis

The podcast is written, edited and produced by Gil Kidron

ChatGPT: OpenAI, Sam Altman, AI, Joe Rogan, Artificial Intelligence, Practical AI
Examining Secret AI Research with Mistral's Deep Research

ChatGPT: OpenAI, Sam Altman, AI, Joe Rogan, Artificial Intelligence, Practical AI

Jul 23, 2025 · 12:02


In this episode, we're unpacking powerful AI capabilities from the perspective of Mistral's Deep Research. Discover how this work is shaping the future of deep learning and setting a new standard for innovation in AI. We'll break down the most important insights, explore real-world implications, and share why this matters now more than ever.

Try AI Box: https://aibox.ai/
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle/about

The Capitol Pressroom
Legislature wants safeguards on cutting-edge AI research

The Capitol Pressroom

Jun 24, 2025 · 22:18


June 24, 2025 - Assemblymember Alex Bores, a Manhattan Democrat, discusses his legislation putting guardrails on cutting-edge artificial intelligence research and explains how the measure was curtailed during the amendment process. He also talks about unfinished business from the legislative session, weighs in on the movie Mountainhead, and considers the status of Empire AI.

Learning Tech Talks
Stanford AI Research | Microsoft AI Agent Coworkers | Workday AI Bias Lawsuit | Military AI Goes Big

Learning Tech Talks

Jun 20, 2025 · 53:35


Happy Friday, everyone! This week I'm back to my usual four updates, and while they may seem disconnected on the surface, you'll see some bigger threads running through them all. All seem to indicate we're outsourcing to AI faster than we can supervise, are layering automation on top of bias without addressing the root issues, and letting convenience override discernment in places that carry life-or-death stakes. With that, let's get into it.

Stanford's AI Therapy Study Shows We're Automating Harm
New research from Stanford tested how today's top LLMs are handling crisis counseling, and the results are disturbing. From stigmatizing mental illness to recommending dangerous actions in crisis scenarios, these AI therapists aren't just “not ready”… they are making things worse. I walk through what the study got right, where even its limitations point to deeper risk, and why human experience shouldn't be replaced by synthetic empathy.

Microsoft Says You'll Be Training AI Agents Soon, Like It or Not
In Microsoft's new 2025 Work Trend Index, 41% of leaders say they expect their teams to be training AI agents in the next five years. And 36% believe they'll be managing them. If you're hearing “agent boss” and thinking “not my problem,” think again. This isn't a future trend; it's already happening. I break down what AI agents really are, how they'll change daily work, and why organizations can't just bolt them on without first measuring human readiness.

Workday's Bias Lawsuit Could Reshape AI Hiring
Workday is being sued over claims that its hiring algorithms discriminated against candidates based on race, age, and disability status. But here's the real issue: most companies can't even explain how their AI hiring tools make decisions. I unpack why this lawsuit could set a critical precedent, how leaders should respond now, and why blindly trusting your recruiting tech could expose you to more than just bad hires. Unchecked, it could lead to lawsuits you never saw coming.

Military AI Is Here, and We're Not Ready for the Moral Tradeoffs
From autonomous fighter jet simulations to OpenAI defense contracts, military AI is no longer theoretical; it's operational. The U.S. Army is staffing up with Silicon Valley execs. AI drones are already shaping modern warfare. But what happens when decisions of life and death get reduced to “green bars” on output reports? I reflect on why we need more than technical and military experts in the room and what history teaches us about what's lost when we separate force from humanity.

If this episode was helpful, would you share it with someone? Also, leave a rating, drop a comment, and follow for future breakdowns that go beyond the headlines and help you lead with clarity in the AI age.

Show Notes:
In this Weekly Update, Christopher Lind unpacks four critical developments in AI this week. First, he breaks down Stanford's research on AI therapists and the alarming shortcomings in how large language models handle mental health crises. Then, he explores Microsoft's new workplace forecast, which predicts a sharp rise in agent-based AI tools and the hidden demands this shift will place on employees. Next, he analyzes the legal storm brewing around Workday's recruiting AI and what this could mean for hiring practices industry-wide. Finally, he closes with a timely look at the growing militarization of AI and why ethical oversight is being outpaced by technological ambition.

Timestamps:
00:00 – Introduction
01:05 – Episode Overview
02:15 – Stanford's Study on AI Therapists
18:23 – Microsoft's Agent Boss Predictions
30:55 – Workday's AI Bias Lawsuit
43:38 – Military AI and Moral Consequences
52:59 – Final Thoughts and Wrap-Up

#StanfordAI #AItherapy #AgentBosses #MicrosoftWorkTrend #WorkdayLawsuit #AIbias #MilitaryAI #AIethics #FutureOfWork #AIstrategy #DigitalLeadership

Boardroom Governance with Evan Epstein
Karen Hao: Author of Empire of AI on Why "Scale at All Costs" is Not Leading Us to a Good Place

Boardroom Governance with Evan Epstein

Jun 12, 2025 · 65:17


(0:00) Intro
(1:49) About the podcast sponsor: The American College of Governance Counsel
(2:36) Introduction by Professor Anat Admati, Stanford Graduate School of Business. Read the event coverage from Stanford's CASI.
(4:14) Start of Interview
(4:45) What inspired Karen to write this book and how she got started with journalism.
(8:00) OpenAI's Nonprofit Origin Story
(8:45) Sam Altman and Elon Musk's Collaboration
(10:39) The Shift to For-Profit
(12:12) On the original split between Musk and Altman over control of OpenAI
(14:36) The Concept of AI Empires
(18:04) About the concept of "benefit to humanity" and OpenAI's mission "to ensure that AGI benefits all of humanity"
(20:30) On Sam Altman's Ouster and OpenAI's Boardroom Drama (Nov 2023): "Doomers vs Boomers"
(26:05) Investor Dynamics Post-Ouster of Sam Altman
(28:21) Prominent Departures from OpenAI (i.e. Elon Musk, Dario Amodei, Ilya Sutskever, Mira Murati, etc.)
(30:55) The Geopolitics of AI: U.S. vs. China
(32:37) The "What about China" card used by US companies to ward off regulation.
(34:26) "Scaling at All Costs is not leading us in a good place"
(36:46) Karen's preference on ethical AI development: "I really want there to be more participatory AI development. And I think about the full supply chain of AI development when I say that."
(39:53) Her biggest hope and fear for the future: "the greatest threat of these AI empires is the erosion of democracy."
(43:34) The case of Chilean Community Activism and Empowerment
(47:20) Recreating human intelligence and the example of Joseph Weizenbaum, MIT (Computer Power and Human Reason, 1976)
(51:15) OpenAI's current AI research capabilities: "I think it's asymptotic because they have started tapping out of their scaling paradigm"
(53:26) The state (and importance) of open source development of AI: "We need things to be more open"
(55:08) The Bill Gates demo on ChatGPT acing the AP Biology test.
(58:54) Funding academic AI research and the public policy question on the role of Government.
(1:01:11) Recommendations for Startups and Universities

Karen Hao is the author of Empire of AI (Penguin Press, May 2025) and an award-winning journalist covering the intersections of AI & society.

You can follow Evan on social media at:
X: @evanepstein
LinkedIn: https://www.linkedin.com/in/epsteinevan/
Substack: https://evanepstein.substack.com/

To support this podcast you can join as a subscriber of the Boardroom Governance Newsletter at https://evanepstein.substack.com/

Music/Soundtrack (found via Free Music Archive): Seeing The Future by Dexter Britain is licensed under a Attribution-Noncommercial-Share Alike 3.0 United States License

Effective Altruism Forum Podcast
“Estimating the Substitutability between Compute and Cognitive Labor in AI Research” by Parker_Whitfill, CherylWu

Effective Altruism Forum Podcast

Jun 7, 2025 · 20:25


Audio note: this article contains 127 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.

Confidence: Medium. The underlying data is patchy and relies on a good amount of guesswork; the data work involved a fair amount of vibecoding.

Intro: Tom Davidson has an excellent post explaining the compute bottleneck objection to the software-only intelligence explosion.[1] The rough idea is that AI research requires two inputs: cognitive labor and research compute. If these two inputs are gross complements, then even if there is recursive self-improvement in the amount of cognitive labor directed towards AI research, this process will fizzle as you get bottlenecked by the amount of research compute. The compute bottleneck objection to the software-only intelligence explosion crucially relies on compute and cognitive labor being gross complements; however, this fact is not [...]

Outline:
(00:35) Intro
(02:16) Model
(02:19) Baseline CES in Compute
(04:07) Conditions for a Software-Only Intelligence Explosion
(07:39) Deriving the Estimation Equation
(09:31) Alternative CES Formulation in Frontier Experiments
(10:59) Estimation
(11:02) Data
(15:02) Trends
(15:58) Estimation Results
(18:52) Results

The original text contained 13 footnotes which were omitted from this narration.

First published: June 1st, 2025
Source: https://forum.effectivealtruism.org/posts/xoX936hEvpxToeuLw/estimating-the-substitutability-between-compute-and

Narrated by TYPE III AUDIO.

Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
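For listeners who want the math behind the compute bottleneck argument, here is a minimal sketch of a standard CES (constant elasticity of substitution) setup, using notation of my own choosing; the article's exact functional form and parameters may differ, so treat this only as an illustration of the general idea:

\[
R \;=\; A\left[\alpha\, C^{\rho} + (1-\alpha)\, L^{\rho}\right]^{1/\rho},
\qquad
\sigma \;=\; \frac{1}{1-\rho},
\]

where R is AI research output, C is research compute, L is cognitive labor, and \sigma is the elasticity of substitution. Gross complements correspond to \sigma < 1 (that is, \rho < 0): adding more cognitive labor while compute stays fixed yields rapidly diminishing returns, which is exactly the bottleneck the episode's estimation exercise is testing for. Gross substitutes (\sigma > 1) would instead leave room for a software-only intelligence explosion driven by labor gains alone.
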

Getting Smart Podcast
Catching Up! | An AI Research and Leadership Framework, Humanities 2.0, and Incentives that Fuel Communities of Learning

Getting Smart Podcast

Jun 6, 2025 · 47:18


In this episode of Catching Up, Nate McClennen and Mason Pashia explore the evolving landscape of education with a focus on AI leadership, applied humanities, and innovative learning spaces. They dive into key topics such as the role of AI in transforming educational practices, the shift towards applied humanities in higher education, and the potential of community spaces in schools to enhance learning experiences. With insights from recent developments and thoughtful discussions, this episode offers valuable perspectives for educators, administrators, and policymakers looking to navigate the future of education. Tune in to discover how these emerging trends are shaping new pathways for learners and educators alike.

Outline
(00:00) Introduction and Episode Overview
(02:40) Educational Innovations and Legislation
(04:41) Future of Learning: Digital Wallets and AI
(10:16) Applied Humanities in Higher Education
(18:20) AI in Schools: Leadership, Crowd, and Lab
(28:45) Reimagining Community Spaces in Schools
(33:31) AI in Job Screening and Hiring
(42:25) What's That Song?

Links
Watch the full video
Read the full blog here
Mason's LER Blog
Ministry of Imagination
Accreditation MSA
Teach For America Should Embrace Apprenticeship Model Amid AmeriCorps Cuts
LER digital wallet Michigan experiment
Educational Choice for Children Act H.R 3250
Applied Humanities
AI leadership - Ethan Mollick - One Useful Thing
Early Childhood and developers

Hashtag Trending
NVIDIA Criticizes US Export Controls and Getty Battles AI Copyright Infringement

Hashtag Trending

Jun 3, 2025 · 13:21 · Transcription Available


In this episode of Hashtag Trending, host Jim Love discusses NVIDIA CEO Jensen Huang's criticism of US export controls on AI chips, which have led to significant financial losses for his company while bolstering Chinese AI competitors like Huawei. NVIDIA faces an $8 billion revenue loss due to restricted H20 chip exports to China. Huang argues that these policies are accelerating Chinese innovation and undermining US global leadership in AI technology. The episode also highlights Getty Images CEO Craig Peters' struggle with the high costs of litigating AI copyright infringement cases. Peters reveals that even a major company like Getty cannot afford to fight every instance of AI firms using copyrighted content without permission, creating a severe economic imbalance. The episode ends with an exploration of the high rate of 'hallucinations' by AI in legal research and the resulting professional risks for lawyers, emphasizing the need for more stringent fact-checking.

00:00 Introduction and Headlines
00:26 NVIDIA's Struggles with US Export Controls
03:52 Getty Images' Battle Against AI Copyright Infringement
07:13 Legal Challenges with AI-Generated Fake Case Law
10:53 The Importance of Fact-Checking in AI Research
12:29 Conclusion and Viewer Engagement

Pathmonk Presents Podcast
Safeguarding the Digital Ecosystem with AI | Ryan Ofman of Deep Media

Pathmonk Presents Podcast

Play Episode Listen Later Jun 2, 2025 34:20


Join Pathmonk Presents as we explore Deep Media with Ryan Ofman, Head of AI Research. Deep Media pioneers AI-driven deepfake detection, protecting government, social media, and financial sectors from misinformation. Ryan shares how they engage clients through conferences and viral deepfake detection reports, leveraging their website for education and acquisition. Learn about urgent “earthquake” conversions versus proactive “aftershock” inquiries, the importance of accessible AI explanations, and their work with academics for equitable AI. Discover tips for creating converting websites with engaging demos and compelling content. Tune in for insights on combating digital threats!  

Science (Video)
Artificial Intelligence and Security: A Conversation with Yaron Singer

Science (Video)

Play Episode Listen Later May 21, 2025 43:10


Yaron Singer, Vice President of AI and Security at Cisco, co-founded a company specializing in artificial intelligence solutions, which was acquired by Cisco in 2024. They developed a firewall for artificial intelligence: a tool designed to protect AI from making critical mistakes. No matter how sophisticated AI is, errors can still happen, and these errors can have far-reaching consequences. The product is designed to detect and fix such mistakes. This technology was developed long before ChatGPT and its competitors burst onto the scene and made AI the hottest area of tech investment. Join Singer as he sits down with UC San Diego professor Mikhail Belkin to discuss his work and the continued effort to make artificial intelligence secure. Series: "Data Science Channel" [Science] [Show ID: 40265]
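As a rough illustration of the "firewall for AI" idea described above, a guardrail layer can sit between an application and a model, running checks on the model's output before it reaches the user. The sketch below is a generic validation-wrapper pattern with placeholder checks, assumed for illustration only; it is not the product discussed in the episode.

import re
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ValidationResult:
    ok: bool
    reason: str = ""

def no_pii(text: str) -> ValidationResult:
    # Illustrative check: reject outputs containing an email address or an SSN-like pattern.
    if re.search(r"\b\d{3}-\d{2}-\d{4}\b", text) or re.search(r"\S+@\S+\.\S+", text):
        return ValidationResult(False, "possible PII in output")
    return ValidationResult(True)

def no_prompt_leak(text: str) -> ValidationResult:
    # Illustrative check: block outputs that appear to reveal internal instructions.
    if "system prompt" in text.lower():
        return ValidationResult(False, "possible prompt leakage")
    return ValidationResult(True)

def guarded_generate(generate: Callable[[str], str],
                     checks: List[Callable[[str], ValidationResult]],
                     prompt: str,
                     fallback: str = "Sorry, I can't share that.") -> str:
    # Call the underlying model, then run every check on its output.
    # A production system might redact, repair, or re-query instead of falling back.
    output = generate(prompt)
    for check in checks:
        result = check(output)
        if not result.ok:
            return fallback
    return output

if __name__ == "__main__":
    fake_model = lambda p: "Contact me at alice@example.com for the details."
    print(guarded_generate(fake_model, [no_pii, no_prompt_leak], "hi"))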

University of California Audio Podcasts (Audio)
Artificial Intelligence and Security: A Conversation with Yaron Singer

University of California Audio Podcasts (Audio)

Play Episode Listen Later May 21, 2025 43:10


Yaron Singer, Vice President of AI and Security at Cisco, co-founded a company specializing in artificial intelligence solutions, which was acquired by Cisco in 2024. They developed a firewall for artificial intelligence: a tool designed to protect AI from making critical mistakes. No matter how sophisticated AI is, errors can still happen, and these errors can have far-reaching consequences. The product is designed to detect and fix such mistakes. This technology was developed long before ChatGPT and its competitors burst onto the scene and made AI the hottest area of tech investment. Join Singer as he sits down with UC San Diego professor Mikhail Belkin to discuss his work and the continued effort to make artificial intelligence secure. Series: "Data Science Channel" [Science] [Show ID: 40265]

Path to Mastery
The Future of Sales: Pre-Sale AI-Research Strategies with Amarpreet Kalkat

Path to Mastery

Play Episode Listen Later May 19, 2025 19:35


In this episode of The Persistent Entrepreneur, David Hill sits down with Amarpreet Kalkat, founder and CEO of Humantic AI, to explore how artificial intelligence is revolutionizing the sales process—before the first call even happens. Amarpreet shares how his platform uses AI to generate powerful personality insights and buyer intelligence that help sales professionals personalize outreach, build trust faster, and dramatically improve conversion rates. If you want to learn how to leverage AI to connect smarter and sell more effectively, this is a conversation you don't want to miss. Full Name: Amarpreet Kalkat  Email: kalkat@humantic.ai  Phone Number: +915103299004 Social Media Links: https://www.linkedin.com/in/amarpreetkalkat/ http://x.com/amarpreetkalkat    https://kalkat.substack.com/        Connect with David LINKS: www.davidhill.ai    SOCIALS:  Facebook: https://www.facebook.com/davidihill/  LinkedIn: https://www.linkedin.com/in/davidihill YouTube: https://www.youtube.com/c/DavidHillcoach  TikTok: www.tiktok.com/@davidihill Instagram: https://www.instagram.com/davidihill  X: https://twitter.com/davidihill    RING LEADER AI DEMO CALL- 774-214-2076    PODCAST SUBSCRIBE & REVIEW  https://podcasts.apple.com/us/podcast/the-persistent-entrepreneur/id1081069895  

Tech Gumbo
AI Credit Cards, AI Hallucinations Getting Worse, Zuckerberg Thinks AI Will be Your Friends, AI Research Violates Ethics Standards

Tech Gumbo

Play Episode Listen Later May 15, 2025 22:10


Top Story: Visa is trying to bring the AI shopping experience to credit cards. AI continues to grow, but hallucinations are getting worse. Zuckerberg thinks most of your friends will be AI chatbots. Researchers violated ethical standards while investigating AI usage online.

Artificial Intelligence and You
256 - Guest: Diane Gutiw, AI Research Center Lead, part 2

Artificial Intelligence and You

Play Episode Listen Later May 12, 2025 28:53


This and all episodes at: https://aiandyou.net/ . How to manage the integration of AI at scale into the enterprise is the territory of today's guest, Diane Gutiw, Vice President and leader of the AI research center at the global business consultancy CGI. She holds a PhD in Medical Information Technology Management and has led collaborative strategy design and implementation planning for advanced analytics and AI for large organizations in the energy and utilities, railway, and government healthcare sectors.  In part 2, we talk about synthetic data, digital triplets, agentic AI and continuous autonomous improvement, and best practices for compliance.  All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.        

Innovation to Save the Planet
AI Research Tools AEC Firms Can't Ignore

Innovation to Save the Planet

Play Episode Listen Later May 12, 2025 36:15 Transcription Available


AI isn't just about flashy tools; it's about transforming how AEC firms research, analyze, and make decisions. In this episode of KP Unpacked - the number one podcast in AEC, Jeff Echols and Frank Lazzaro dive deep into the power of AI research tools and how you can use them to work smarter, not harder. Key takeaways from this episode: why AI research tools are changing the game for AEC firms; Frank's personal formula for finding 12 minutes of efficiency daily using AI; real-world examples of how AI tools outperform traditional research methods; the top AI tools you should be using (ChatGPT Deep Research, Storm, and Perplexity); how to avoid AI's common pitfalls in data analysis and research workflows; and actionable tips to start using AI research tools in your business today. We want to tell you about something we've been quietly working on, and now it's live. It's called Catalyst. It's a space built for AEC professionals like you who are designing, building, and reimagining the future. Catalyst is where the sharpest minds in our industry connect, share, and lead what's next. If this sparked something, Catalyst is where we keep the conversation going. This uniquely active space is where the top minds in our industry come together and shape what's next in AEC. Ignite innovation. Join the waitlist at kpreddy.co. Hope to see you there!

Beyond 7 Figures: Build, Scale, Profit
Creating a One-Person Marketing Team feat. Mike Koenigs

Beyond 7 Figures: Build, Scale, Profit

Play Episode Listen Later May 9, 2025 53:12


Learn how to leverage advanced AI strategies that most business owners miss and unlock the untapped potential of artificial intelligence for business growth. The conversation explores how most people are barely scratching the surface of AI's capabilities, sharing practical examples of how AI can research prospects, create marketing campaigns, build prototypes, and even write books in a fraction of the time traditional methods require. Mike Koenigs is the author of "The AI Accelerator" and founder of the Superpower Accelerator. As one of Charles' early mentors, Mike has consistently stayed ahead of marketing trends and now helps business owners harness AI to expand their capabilities, create one-person marketing teams, elevate their authority, and scale without adding headcount. His innovative "thousand dollars cup of coffee" campaign demonstrates how AI-powered research can transform client consultations and dramatically increase revenue. KEY TAKEAWAYS: Most businesses only scratch the surface of AI potential, using basic ChatGPT instead of the full AI ecosystem. Quality AI output directly reflects quality input; effective prompt crafting is essential. Successful AI adoption requires understanding possibilities, selecting the right tools, and focusing on four key functions. Being "tool agnostic" is crucial; multiple AI models provide better insights than any single tool. AI's greatest advantage comes from unleashing imagination and asking better questions. Create AI style guides to capture brand voice and apply it consistently across all content. The "one-person marketing department" is now possible through comprehensive AI tools. Mike's "thousand dollars cup of coffee" model demonstrates how AI research transforms high-ticket sales. Websites: Main Website: https://superpoweraccelerator.com/ AI Accelerator: https://ai.mikekoenigs.com/ Book Website: https://aiaccelerator.mikekoenigs.com/ Social Media: LinkedIn: https://www.linkedin.com/in/mikekoenigs/ Growing your business is hard, but it doesn't have to be. In this podcast, we will be discussing top-level strategies for both growing and expanding your business beyond seven figures. The show will feature a mix of pure content and expert interviews to present key concepts and fundamental topics in a variety of different formats. We believe that this format will enable our listeners to learn the most from the show, implement more in their businesses, and get real value out of the podcast. Enjoy the show. Please remember to rate, review and subscribe to the podcast so you don't miss any future episodes. Your support and reviews are important and help us to grow and improve the show. Follow Charles Gaudet and Predictable Profits on Social Media: Facebook: facebook.com/PredictableProfits Instagram: instagram.com/predictableprofits Twitter: twitter.com/charlesgaudet LinkedIn: linkedin.com/in/charlesgaudet Visit Charles Gaudet's Websites: www.PredictableProfits.com

Radiology Podcasts | RSNA
Tips for Writing AI Research in Radiology

Radiology Podcasts | RSNA

Play Episode Listen Later May 6, 2025 12:51


In this episode, Dr. Linda Chu speaks with Sarah Atzen, Lead Scientific Editor for Radiology, about best practices for writing AI research papers. They explore key tips from the recent article “Top 10 Tips for Writing about AI in Radiology” to help authors improve clarity, accuracy, and impact. Top 10 Tips for Writing about AI in Radiology: A Brief Guide for Authors. Atzen. Radiology 2025; 314(2):e243347.

Artificial Intelligence and You
255 - Guest: Diane Gutiw, AI Research Center Lead, part 1

Artificial Intelligence and You

Play Episode Listen Later May 5, 2025 33:14


This and all episodes at: https://aiandyou.net/ . How to manage the integration of AI at scale into the enterprise is the territory of today's guest, Diane Gutiw, Vice President and leader of the AI research center at the global business consultancy CGI. She holds a PhD in Medical Information Technology Management and has led collaborative strategy design and implementation planning for advanced analytics and AI for large organizations in the energy and utilities, railway, and government healthcare sectors.  We talk about how enterprises manage the integration of AI at the dizzying speeds of change today, where AI does and does not impact employment, how the HR department should change in those enterprises, how to deal with hallucinations, and how to manage the risks of deploying generative AI in customer solutions.  All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.        

The Fit Mess
Why AI Therapists Might Actually Help Your Mental Health (And Where They Fall Short)

The Fit Mess

Play Episode Listen Later Apr 30, 2025 27:36 Transcription Available


Are your mental health issues trapped between therapy appointments? When I hit a rough patch, I turned to AI for therapy out of desperation, and was genuinely shocked by the results. The AI asked me introspective questions identical to what my human therapist uses, but went further by offering specific suggestions my therapist couldn't provide. In this episode, we explore the surprising benefits and legitimate concerns of using AI for mental wellness. You'll discover how these tools can organize your scattered thoughts, provide actionable steps for improvement, and serve as a valuable bridge between professional therapy sessions. Listen now to learn how AI might become your unexpected mental health ally – just don't forget the human connection that algorithms can't replace. Topics Discussed: My personal experience using AI as a therapy tool; How AI can provide structured introspection and actionable steps for mental health; The surprising effectiveness of AI-generated journaling prompts for emotional connection; The potential dangers of relying solely on AI for medical or mental health advice; Privacy concerns and data collection risks when sharing personal information with AI; Research comparing AI vs human doctor diagnostic accuracy (77% vs 67%); How AI lacks human imagination but excels at pattern recognition and memory recall; The future integration of AI tools in professional healthcare settings; Finding the balance between AI assistance and human connection; Using AI to transform traditional journaling practices for deeper emotional connection ---- MORE FROM THE FIT MESS: Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok; Subscribe to The Fit Mess on YouTube; Join our community in the Fit Mess Facebook group ---- LINKS TO OUR PARTNERS: Take control of how you'd like to feel with Apollo Neuro; Explore the many benefits of cold therapy for your body with Nurecover; Improve your meditation practice with Muse's brain-sensing headbands; Get started as a Certified Professional Life Coach; Get a free one-year supply of AG1 Vitamin D3+K2 and 5 travel packs; Revamp your life with Bulletproof Coffee; You Need a Budget helps you quickly get out of debt and save money faster; Start your own podcast!

Data Brew by Databricks
Enterprise AI: Research to Product | Data Brew | Episode 43

Data Brew by Databricks

Play Episode Listen Later Apr 10, 2025 38:03


In this episode, Dipendra Kumar, Staff Research Scientist, and Alnur Ali, Staff Software Engineer at Databricks, discuss the challenges of applying AI in enterprise environments and the tools being developed to bridge the gap between research and real-world deployment. Highlights include: the challenges of real-world AI (messy data, security, and scalability); why enterprises need high-accuracy, fine-tuned models over generic AI APIs; how QuickFix learns from user edits to improve AI-driven coding assistance; the collaboration between research and engineering in building AI-powered tools; and the evolving role of developers in the age of generative AI.

Hospitality Daily Podcast
My AI Research Results: 87% of Hospitality Pros Already Use AI (Here's How) - Josiah Mackenzie

Hospitality Daily Podcast

Play Episode Listen Later Apr 9, 2025 16:47


In this episode, Josiah Mackenzie shares some top takeaways from his latest research, including how 87% of hospitality professionals participating in the study already use AI to improve efficiency, creativity, and guest experience. Listen now for practical examples, underutilized AI opportunities, and actionable insights you can use in your hotel or hospitality business. Also see: AI 2027 Project; What AI Might Bring Hotels in 2025 - Martin Soler; America's Chief AI Officer for Travel Shares Advice for 2025 - Janette Roush, Brand USA; Less Ringing, More Hospitality: AI-Powered PBX To Give Our Teams More Time for Guests - Steven Marais, Noble House Hotels & Resorts; AI & Hotel Tech Bets For Our People-First Approach - Dina Belon, Staypineapple Hotels; The Future of Hotel Management: Automation, AI, and Innovation - Sloan Dean, Remington Hospitality; AI's Impact On Our Business - Ernest Lee, citizenM; How AI Helps Me Run More Profitable Hotels - Sean Murphy, The Bower; 50 Days, 50 Concepts: Rethinking Experiential Hospitality with Generative AI - Dylan Barahona. A few more resources: If you're new to Hospitality Daily, start here. You can send me a message here with questions, comments, or guest suggestions. If you want to get my summary and actionable insights from each episode delivered to your inbox each day, subscribe here for free. Follow Hospitality Daily and join the conversation on YouTube, LinkedIn, and Instagram. If you want to advertise on Hospitality Daily, here are the ways we can work together. If you found this episode interesting or helpful, send it to someone on your team so you can turn the ideas into action and benefit your business and the people you serve! Music for this show is produced by Clay Bassford of Bespoke Sound: Music Identity Design for Hospitality Brands

Serious Inquiries Only
SIO477: Debunking Bad AI Research, and Bad Coverage of AI Research

Serious Inquiries Only

Play Episode Listen Later Apr 4, 2025 44:16


Alejandro and Julia of theluddite.org join us to debunk some terrible AI research, and the bad reporting compounding the problems on top of that. Also, what is AI? Can it ever think for itself? Are you an expert in something and want to be on the show? Apply here! Please support the show on patreon! You get ad free episodes, early episodes, and other bonus content! This content is CAN credentialed, which means you can report instances of harassment, abuse, or other harm on their hotline at (617) 249-4255, or on their website at creatoraccountabilitynetwork.org.

The Daily Scoop Podcast
DOGE gains access to immigration systems; Bill to codify AI research at NSF is rebooted

The Daily Scoop Podcast

Play Episode Listen Later Apr 3, 2025 4:24


Members of Elon Musk's Department of Government Efficiency now have access to technical systems maintained by United States Citizenship and Immigration Services, according to a recent memorandum viewed by FedScoop. The memo, which was sent from and digitally signed by USCIS Chief Information Officer William McElhaney, states that Kyle Shutt, Edward Coristine, Aram Mogahaddassi and Payton Rehling were granted access to USCIS systems and data repositories, and that a Department of Homeland Security review was required to determine whether that access should continue. Coristine, 19, is one of the more polarizing members of DOGE. He previously provided assistance to a cybercrime ring through a company he operated while he was in high school, according to other news outlets. Coristine worked for a short period at Neuralink, Musk's brain implant company, and was previously stationed by DOGE at the Cybersecurity and Infrastructure Security Agency. The memo, dated March 28, asks DHS Deputy Secretary Troy Edgar to have his office review and provide direction for the four DOGE men regarding their access to the agency's “data lake” — called USCIS Data Business Intelligence Services — as well as two associated enabling technologies, Databricks and Github. The document says DHS CIO Antoine McCord and Michael Weissman, the agency's chief data officer, asked USCIS to enable Shutt and Coristine's access to the USCIS data lake in mid-March, and Mogahaddassi requested similar access days later. A bipartisan bill to fully establish a National Science Foundation-based resource aimed at providing essential tools for AI research to academics, nonprofits, small businesses and others was reintroduced in the House last week. Under the Creating Resources for Every American To Experiment with Artificial Intelligence (CREATE AI) Act of 2025 (H.R. 2385), a full-scale National AI Research Resource would be codified at NSF. While that resource currently exists in pilot form, legislation authorizing the NAIRR is needed to continue that work. Rep. Jay Obernolte, R-Calif., who sponsors the bill, said in a written statement announcing the reintroduction: “By empowering students, universities, startups, and small businesses to participate in the future of AI, we can drive innovation, strengthen our workforce, and ensure that American leadership in this critical field is broad-based and secure.” The NAIRR pilot, as it stands, is a collection of resources from the public and private sectors — such as computing power, storage, AI models, and data — that are made available to those researching AI to make the process of accessing those types of tools easier. The Daily Scoop Podcast is available every Monday-Friday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast  on Apple Podcasts, Soundcloud, Spotify and YouTube.

The Facebook Marketing Ninja
Boost Your Business with AI Research (Here's How)

The Facebook Marketing Ninja

Play Episode Listen Later Mar 12, 2025 7:21


AI is changing the way businesses grow, and if you're not using it for research, you're falling behind. Whether it's market trends, audience insights, or competitor analysis, AI tools like ChatGPT and Gemini can give you a massive edge. In this podcast, I'll show you how to use AI research to make smarter business decisions, build a strong brand, and scale faster than ever. Want to see exactly how it works? Watch now and start leveraging AI to boost your business today.