Podcasts about Machine learning

Scientific study of algorithms and statistical models that computer systems use to perform tasks without explicit instructions

  • 7,084 PODCASTS
  • 23,855 EPISODES
  • 38m AVG DURATION
  • 5 DAILY NEW EPISODES
  • LATEST: Jan 7, 2026

    Latest podcast episodes about Machine learning

    Packet Pushers - Full Podcast Feed
    D2DO291: From Politics to Machine Learning and AI Engineering

    Jan 7, 2026 · 41:43


    Marina Wyss, Senior Applied Scientist at Twitch, joins Kyler and Ned to discuss her unique path from political science to AI Engineering. Wyss clarifies the difference between AI Engineering and Machine Learning Engineering and offers practical advice for aspiring engineers who want to incorporate data science, AI, and machine learning into their work. She digs...

    Packet Pushers - Fat Pipe
    D2DO291: From Politics to Machine Learning and AI Engineering

    Jan 7, 2026 · 41:43


    Marina Wyss, Senior Applied Scientist at Twitch, joins Kyler and Ned to discuss her unique path from political science to AI Engineering. Wyss clarifies the difference between AI Engineering and Machine Learning Engineering and offers practical advice for aspiring engineers who want to incorporate data science, AI, and machine learning into their work. She digs...

    Track Changes
    Defying labels and learning to lead: With Parisa Zander

    Jan 6, 2026 · 38:14


    This week on Catalyst, Tammy chats with Parisa Zander, a seasoned professional in the tech industry who recently retired after a successful career spanning nearly three decades at companies like Meta, Samsung and Microsoft. Parisa discusses the challenges of being a woman in tech, the importance of finding one's voice, and the values that guide her leadership style, including honesty, empathy, and the need for fun in the workplace. She also emphasizes that to truly understand your customer you need to go to them and set time aside for real-world testing. How else will you see how people across the country are actually engaging with your product? Please note that the views expressed may not necessarily be those of NTT DATA. Links: Parisa Zander - LinkedIn. Learn more about Launch by NTT DATA. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    AWS - Conversations with Leaders
    Acquired at re:Invent: AWS CEO Matt Garman on AI, Agents, and the Future of Cloud Computing

    Jan 6, 2026 · 36:32


    In this special encore episode from AWS re:Invent, AWS CEO Matt Garman joins Acquired podcast co-hosts Ben Gilbert and David Rosenthal for an in-depth conversation on AI, agents, and the future of business. Listen in as Garman shares his leadership journey from AWS intern to CEO, discusses why inference is becoming a fundamental building block for developers, and reveals how AI is enabling smaller teams to deliver exponentially more value. He also explores the organizational shifts enterprises must make to stay competitive, the evolution of agentic AI, and why agility and speed remain critical regardless of technological change. To catch the full interview session featuring additional speakers, Max Neukirchen (J.P. Morgan Payments), Greg Peters (Netflix), and Aravind Srinivas (Perplexity), click here to watch on YouTube -> https://www.youtube.com/watch?v=2ExjNvGYDiU.

    Vaad
    संवाद # 294: Pakistan ISI got this Indian Muslim arrested in Saudi Arabia

    Jan 6, 2026 · 78:59


    Zahack Tanvir is a Hyderabad-born independent journalist, counter-extremism expert, and the founder and editor of the UK-based media outlet Milli Chronicle. He specializes in international affairs and counter-terrorism, having completed academic programs in these fields at the University of Leiden in the Netherlands and the London School of Journalism. His educational background is diverse, also comprising an engineering degree in Computer Science from Osmania University, a post-graduate diploma in AI and Machine Learning from IIIT India, and a Master's in AI-ML from Liverpool John Moores University. Tanvir identifies as a traditional Muslim who is vocally "anti-Islamist," often criticizing extremist ideologies and the political misuse of religion. He lived in Saudi Arabia for 13 years until a significant legal ordeal in late 2023, when he was detained by Saudi authorities following a complaint filed by Pakistan regarding his social media content, which was alleged to be anti-Pakistan. He was released in December 2024.

    The Daily Scoop Podcast
    Marine Corps wants 10,000 new drones this year as it looks to expand training for off-the-shelf systems

    Jan 5, 2026 · 4:17


    The Marine Corps is gearing up to expand its first-person view drone capabilities in the New Year by purchasing 10,000 new platforms and increasing the number of troops who are trained on them, according to government contracting documents and service officials. Earlier this week, the Corps announced a standardized training program for small-sized unmanned aerial systems, which includes several courses for attack drone operators, payload specialists and instructors. Several units, from III Marine Expeditionary Force in the Pacific to Marine Forces Special Operations Command, are now authorized to immediately start these courses. Meanwhile, the service is also asking industry to make thousands of UAS for under $4,000 per unit, according to a request for information posted in December. The intent is for Marines to be able to modify these drones with “simple” third-party munitions and repair them on their own. The RFI also inquired about autonomy and machine learning integration for these systems. Over the next several months, the service will aim to certify hundreds of Marines to use FPV drones, according to the Pentagon, with the goal of having every infantry, reconnaissance and littoral combat team across the fleet equipped with these platforms by May. Officials said that these courses were shaped by recent certifications and the Drone Training Symposium in November, an event intended to solidify and scale training across the fleet. DefenseScoop also reported last week that the Marine Corps had certified forward-deployed Marines on FPV drones for the first time in November. More than two dozen troops with the 22nd Marine Expeditionary Unit deployed to the Caribbean trained for more than a month-and-a-half to qualify on various FPV drone capabilities, a significant milestone for the force after a year of navigating untrodden ground.

    The Army recently established an artificial intelligence career field that select officers can transfer into starting next month, DefenseScoop has learned. It is also considering the potential for warrant officers to join the new role. The service created the 49B “area of concentration” for AI and Machine Learning on Oct. 31, according to Maj. Travis Shaw, a spokesperson for the Army. Between Jan. 5 and Feb. 6, 2026, Army officers who already have a few years of service or more can apply for the role through the Voluntary Transfer Incentive Program (VTIP), which is meant to support the Army's manning needs. It was unclear how many officers the Army hopes to transfer into the job, but those selected will reclassify by Oct. 1, 2026, Shaw said. The service expects those personnel to have completed their transition into the AI field by the following year. The effort comes as the Department of Defense continues to boost the use of large language model AI systems for military purposes. Earlier this month, the Pentagon launched GenAI.mil, a hub for commercial AI tools — one that DefenseScoop reported military personnel were meeting with mixed reviews and a bevy of questions about how to use it in their daily operations. The Army has also been embracing LLMs and AI, including through its Army Artificial Intelligence Integration Center (AI2C), which was established in 2018 to integrate those systems into the service.

    The Daily Scoop Podcast is available every Monday-Friday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast on Apple Podcasts, Soundcloud, Spotify and YouTube.

    Sustainable Packaging
    Google's Sustainability Mission With Robert Little

    Jan 4, 2026 · 35:04 · Transcription available


    In this episode, Cory Connors welcomes his longtime friend and sustainability leader Robert Little to discuss Google's sustainability mission—particularly its global work in circularity, recycling accessibility, packaging innovation, and the role of AI in modern waste systems. Robert shares his nonlinear career path, the principles that shaped his sustainability mindset, and how Google is leveraging its massive product ecosystem to scale sustainability solutions for billions of users worldwide. The conversation explores Google Maps' recycling drop‑off locator, Google Trends as a tool for understanding consumer sustainability needs, Google's plastic‑free packaging design journey, and innovations like CircularNet and Materra, X's emerging AI‑powered materials identification technology.

    Key Topics Discussed:
      • Robert's Journey Into Sustainability
      • Robert's Role at Google
      • Google's Sustainability Mission & Circularity Goals
      • Packaging Innovation at Google
      • Google Maps Recycling Drop‑Off Search
      • AI & Machine Learning for Waste Systems
      • Materra (formerly “Project X”): Advanced Material Identification
      • Advice for Consumer Brands
      • A Call for Optimism & Sharing Good Sustainability Stories

    Resources Mentioned:
      • Google Trends – trends.google.com
      • Google Maps Recycling Attributes
      • Google's Plastic‑Free Packaging Design Guide
      • CircularNet (open‑source machine learning model)
      • Materra by X (The Moonshot Factory)

    Contact: Connect with Robert Little on LinkedIn.

    Closing Thoughts: Cory and Robert emphasize the need for optimism, collaboration, and smarter infrastructure in global sustainability. Robert highlights the immense potential for AI, transparency, and ecosystem‑level innovation to keep materials “in play” and reduce reliance on new resource extraction. They encourage listeners to stay curious, share good sustainability news, and use the tools available—many of them free—to design better packaging systems and reduce waste globally.

    Thank you for tuning in to Sustainable Packaging with Cory Connors!
    https://anewearthproject.com/collections/new-earth-approved
    https://www.linkedin.com/in/cory-connors/
    I'm here to help you make your packaging more sustainable! Reach out today and I'll get back to you asap. This podcast is an independent production and the podcast production is an original work of the author. All rights of ownership and reproduction are retained—copyright 2022.

    TalkRL: The Reinforcement Learning Podcast
    Joseph Modayil of Openmind Research Institute @ RLC 2025

    Jan 3, 2026 · 4:27 · Transcription available


    Joseph Modayil is the Founder, President & Research Director of Openmind Research Institute.

    Featured References:
      • Openmind Research Institute
      • The Alberta Plan for AI Research – Richard S. Sutton, Michael Bowling, Patrick M. Pilarski

    Additional References:
      • Joseph Modayil on Google Scholar
      • Joseph Modayil Homepage

    Track Changes
    From the archives: Reinventing the healthcare experience with Keena Patel-Moran

    Dec 30, 2025 · 33:14


    In this episode from the archives, Tammy sits down with Keena Patel-Moran, the Healthcare and Lifesciences Industry Lead at Launch by NTT DATA. Keena and Tammy discuss ways to improve the industry and give patients the support they need and deserve. They discuss why doctors should look beyond just symptoms and make a case that improving healthcare processes is not only better for patients and caretakers but is also good for business. Please note that the views expressed may not necessarily be those of NTT DATA. Links: Keena Patel-Moran. Learn more about Launch by NTT DATA. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    ITSPmagazine | Technology. Cybersecurity. Society
    When AI Guesses and Security Pays: Choosing the Right Model for the Right Security Decision | A Brand Story Highlight Conversation with Michael Roytman, CTO of Empirical Security

    Dec 30, 2025 · 7:58


    In this Brand Highlight, we talk with Michael Roytman, CTO of Empirical Security, about a problem many security teams quietly struggle with: using general purpose AI tools for decisions that demand precision, forecasting, and accountability.

    Michael explains why large language models are often misapplied in security programs. LLMs excel at summarization, classification, and pattern extraction, but they are not designed to predict future outcomes like exploitation likelihood or operational risk. Treating them as universal problem solvers creates confidence gaps, not clarity.

    At Empirical, the focus is on preventative security through purpose built modeling. That means probabilistic forecasting, enterprise specific risk models, and continuous retraining using real telemetry from security operations. Instead of relying on a single model or generic scoring system, Empirical applies ensembles of models tuned to specific tasks, from vulnerability exploitation probability to identifying malicious code patterns.

    Michael also highlights why retraining matters as much as training. Threat conditions, environments, and attacker behavior change constantly. Models that are not continuously updated lose relevance quickly. Building that feedback loop across hundreds of customers is as much an engineering and operations challenge as it is a data science one.

    The conversation reinforces a simple but often ignored idea: better security outcomes come from using the right tools for the right questions, not from chasing whatever AI technique happens to be popular. This episode offers a grounded perspective for leaders trying to separate signal from noise in AI driven security decision making.

    Note: This story contains promotional content. Learn more.

    GUEST
      Michael Roytman, CTO of Empirical Security | On LinkedIn: https://www.linkedin.com/in/michael-roytman/

    RESOURCES
      Learn more about Empirical Security: https://www.empiricalsecurity.com/
      LinkedIn Post: https://www.linkedin.com/posts/bellis_a-lot-of-people-are-talking-about-generative-activity-7394418706388402178-uZjB/

    Are you interested in telling your story?
      ▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full
      ▶︎ Brand Spotlight Story: https://www.studioc60.com/content-creation#spotlight
      ▶︎ Brand Highlight Story: https://www.studioc60.com/content-creation#highlight

    Keywords: sean martin, michael roytman, ed beis, empirical security, cybersecurity, ai, machinelearning, vulnerability, risk, forecasting, brand story, brand marketing, marketing podcast, brand story podcast, brand spotlight

    Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    AWS - Conversations with Leaders
    How AI Is Reshaping the Consumer Goods Industry

    Dec 30, 2025 · 18:30


    Explore how AI is transforming consumer goods execution with Arvind Mathur, AWS Executive in Residence, and Anupam Sinha, CEO and Co-founder of Vxceed. Anupam reveals how Consumer Packaged Goods (CPG) companies are leveraging predictive AI, generative AI, and agentic AI to close the execution reality gap that has plagued traditional trade for decades—turning delayed insights into real-time market responsiveness. From preventing stock-outs to achieving autonomous trade promotion optimization, discover how forward-thinking leaders are protecting market share against digital-first insurgent brands penetrating traditional retail channels. Learn why your field execution strategy—powered by AI that delivers actionable intelligence—has become your most critical competitive advantage in markets where traditional trade still drives 95% of volume.

    Heart podcast
    Can we predict coronary artery disease on CT using machine learning - insights from the SCOT-HEART trial

    Dec 30, 2025 · 20:35


    In this episode of the Heart podcast, Digital Media Editor, Professor James Rudd, is joined by Professor Michelle Williams from the University of Edinburgh. They discuss the possibility of predicting cardiovascular disease on CT from clinical factors in the SCOT-HEART trial. If you enjoy the show, please leave us a positive review wherever you get your podcasts. It helps us to reach more people - thanks! Link to published paper: https://openheart.bmj.com/content/12/2/e003162 https://www.nejm.org/doi/full/10.1056/NEJMoa1805971

    Marketecture: Get Smart. Fast.
    Playwire with Jayson Dubin: Human Intelligence vs Machine Learning in AdTech at Marketecture Live

    Dec 29, 2025 · 21:19


    On Marketecture Live, Jayson Dubin, CEO and Founder of Playwire, explains how publishers can grow revenue and improve performance by combining machine learning with human intelligence. He shares concrete results from AI-driven traffic shaping and price floor optimization, walks through Playwire's Quality, Performance, Transparency (QPT) initiative, and discusses major ecosystem issues like supply chain opacity, malicious ads, and the shifting realities of AI-driven discovery. He also introduces RAMP, Playwire's Revenue Amplification Management Platform, built to give enterprise publishers control, visibility, and optional AI automation.

    Takeaways:
      • AI is best for repetitive, rapid decisions; humans are best for contextual strategy and judgment in a gray, complex ad ecosystem.
      • AI traffic shaping drove a 21% lift in Revenue Per Session versus 9% without it.
      • AI price flooring delivered about a 20% uplift in RPM through multidimensional, per-request adjustments.
      • Cutting bid requests can increase performance and revenue while also improving page speed and traffic.
      • QPT shifted Playwire from quantity to quality, strengthening trust with buyers and partners.
      • Transparency remains uneven: publishers still struggle to identify buyers and stop malicious ads across the bidstream.
      • RAMP unifies traffic shaping, bid shaping, and flooring into a platform designed for enterprise publisher control and visibility.

    Chapters:
      00:00 Intro Jayson Dubin and the core theme
      00:55 What Playwire does and why automation matters at scale
      01:23 The false choice: automation vs human involvement
      01:38 Decision framework where AI wins vs where humans win
      02:31 Traffic shaping explained feed DSPs and SSPs what they eat
      03:15 Traffic shaping results 21% RPS lift and fewer bid requests
      04:01 AI price flooring moving beyond GAM rule limits
      05:23 Origin story industry feedback and the shift to quality
      05:57 QPT Quality Performance Transparency
      06:57 Two-year impact: fewer requests, higher CPM, higher revenue
      09:37 Marketecture Live Q&A: What AI means for publishers now
      18:56 Scale and leverage who gets to command better terms

    Learn more about your ad choices. Visit megaphone.fm/adchoices

    Cloud Realities
    CR118: Christmas special! Return to the simulation with Anders Indset, Author & Philosopher

    Dec 25, 2025 · 90:13


    From all of us at Cloud Realities, MERRY CHRISTMAS!!!! Back in our December 2022 Christmas special, we explored the far reaches of reality, asking whether we live in a simulation and if that even matters. Now, we return to that question with fresh perspectives and new challenges… In this last Cloud Realities podcast of 2025, Dave, Esmee and Rob return to the simulation with Anders Indset, philosopher, author, and long-time friend of the show, revisiting a question that's been quietly running underneath everything we've discussed since 2022: if reality itself is information, what does that mean for being human?

    TLDR:
      00:58 – It's Christmas!
      08:32 – Major announcement and reflections on the Cloud Realities podcast journey
      15:32 – Celebrating three big wins: B2B Marketing Awards (Best Content, Best Customer Retention) and The Drum (Best Creative Audio)
      22:55 – Is there a next thing?
      23:30 – Welcoming Anders Indset, who shares his vision for practical philosophy and the future of human/AI co-evolution
      32:02 – Exploring the Quantum Economy and the Singularity Paradox
      58:10 – Deep dive into the Simulation Hypothesis, revisiting the 2022 discussion, and Rob is again confused...
      01:27:45 – Anders enjoying Christmas in the Norwegian wilderness
      01:29:40 – Edit point

    Guest
      Anders Indset: https://www.linkedin.com/in/andersindset/ or andersindset.com
      Additional information: thequantumeconomy.com and tomorrowmensch.com

    Hosts
      Dave Chapmanger: https://www.linkedin.com/in/chapmandr/
      Esmee van de Gluhwein: https://www.linkedin.com/in/esmeevandegiessen/
      Rob Snowmananahan: https://www.linkedin.com/in/rob-kernahan/

    Production
      Dr Mike van Der Buabbles: https://www.linkedin.com/in/marcel-vd-burg/
      Dave Chapmanger: https://www.linkedin.com/in/chapmandr/

    Sound
      Ben Jingle: https://www.linkedin.com/in/ben-corbett-3b6a11135/
      Louis Snow: https://www.linkedin.com/in/louis-corbett-087250264/

    'Cloud Realities' is an original podcast from Capgemini

    Pod Against the Machine: A Pathfinder Actual Play

    Happy End-of-year Times, everyone! We interrupt your normal release schedule for our Episode 200 Machine Learning stream from way back in our Extra Life stream. We'll be back to your regularly-scheduled program next week.   We encourage you to check out our Patreon and/or Ko-Fi, as they've got sweet sweet benefits and also you can help support your favorite show. AND Our Store is a thing, with all your t-shirts, tote bags, stickers and more!   Background music and sound effects: Elf Meditation Kevin Macleod Piano Against the Machine Instrumental: A Dead Friend Based on Theme Against the Machine by Zak   My Hollow Garden Lyrics, Vocals, and Arrangement by Sam Instrumental: Little Polka Dot by The Fly Guy 5 https://www.epidemicsound.com/track/NUKRs8Iqrs/   Second Chance Vocals by A Dead Friend Lyrics and Arrangement by Sam and A Dead Friend Instrumental: Sempiternal by Abilify https://www.looperman.com/tracks/detail/176652/   Build Don't Break Vocals by Sam and Gero Lyrics and Arrangement by Sam Inspired by "Never Gonna Give You Up" by Rick Astley   Everything Has Its Season Vocals, Lyrics and Arrangement by Sam Instrumental: Yearning by Neil Moret and Dave Stamper https://www.loc.gov/item/jukebox-32683/   What We Build (From Broken Things) Vocals, Lyrics, and Arrangement by Zak Instrumental: Drifting, Dreaming by Alstyne, Schmidt, Gillespie, Curtis   You Could Have Been Me Vocals by Sam and Zak Lyrics and Arrangement by Sam Instrumental: Lunar Horizon S2 by Baldistix https://www.looperman.com/loops/detail/395873/lunar-horizon-s2-85bpm-free-85bpm-hip-hop-synth-loop Dancehall Drum Loop by SAVYELDANDY https://www.looperman.com/loops/detail/152091/dancehall-drum-loop-97bpm-dancehall-drum-loop   Army of Three Lyrics, Vocals, and Arrangement by Sam Instrumental: Drift by Snelkku https://www.looperman.com/tracks/detail/254061/   Why? Vocals by Gero Lyrics by Gero and Isabelle O. Composition and Instrumentals by Isabelle O.   Something Good Vocals, Lyrics, Composition, and Instrumentals by Jeff   A True Son of Numeria Vocals and Lyrics by Gero Instrumental: "I Am the Very Model of a Modern Major-General" by Gilbert and Sullivan   Why? (Alternate version) Vocals by Isabelle O. Lyrics by Gero and Isabelle O. Composition and Instrumentals by Isabelle O. Metadata Waveforms Vocals by Network Against the Machine (and Howard Dean) Lyrics and Arrangement by Sam Instrumental: Piano Against the Machine by A Dead Friend, based on "Theme Against the Machine" by Zak   Sticker Stars Lyrics by Izzy Vocals, Composition, and Instrumentals by Isabelle O.   You Had Fun Lyrics, Arrangement, and Instrumentals  by Sam (Loosely) based on You'll Be Back by Lin-Manuel Miranda and Still Alive by Jonathan Coulton     Email us at PodAgainsttheMachine@gmail.com Remember to check out https://podagainstthemachine.com for show transcripts, player biographies, and more. Stop by our Discord server to talk about the show: https://discord.gg/TVv9xnqbeW Follow @podvsmachine on Bluesky Find us on Reddit, Instagram, and Facebook as well.  

    I Don't Care with Kevin Stevenson
    How Predictive AI Is Helping Hospitals Anticipate Admissions and Optimize Emergency Department Throughput

    Dec 24, 2025 · 28:51


    Emergency departments across the U.S. are under unprecedented strain, with overcrowding, staffing shortages, and inpatient bed constraints converging into a throughput crisis. The American Hospital Association reports that hospital capacity and workforce growth have lagged, intensifying delays from arrival to disposition. At the same time, advances in artificial intelligence are moving from experimental to operational—raising the stakes for how technology can meaningfully improve patient flow rather than add complexity. So, how can emergency departments reduce bottlenecks and move patients more efficiently through care without compromising clinical judgment or trust?

    Welcome to I Don't Care. In the latest episode, host Dr. Kevin Stevenson sits down with Mitch Quinn, Director of AI/ML at ChoreoED, to explore how AI-driven insights can help hospitals anticipate admissions and discharges earlier, coordinate downstream services, and ultimately improve ED throughput. Their conversation spans the real-world operational challenges ED leaders face, the practical application of machine learning in high-acuity settings, and what it takes to deploy AI tools that clinicians actually trust and use.

    What you'll learn…
      • How AI models trained on a hospital's own historical data can accurately anticipate admissions up to hours earlier, enabling parallel workflows.
      • Why focusing on “high-certainty” admissions and discharges—rather than rare edge cases—creates immediate operational value in the ED.
      • How adaptive, continuously retrained models can support both experienced clinicians and newer providers in high-turnover environments.

    Mitch Quinn is a Director of AI and Machine Learning and a computer scientist with 20+ years of experience building production-grade AI systems across healthcare and cybersecurity. He specializes in deep learning, large-scale model architecture, and end-to-end ML pipelines, with leadership roles spanning applied research at Blue Cross NC, enterprise AI consulting, and real-time cyber threat detection. His career highlights include designing high-performance deep neural networks, anomaly detection systems operating at enterprise scale, and foundational software frameworks used by large engineering organizations.

    Track Changes
    Five themes that defined 2025: With Tammy Soares

    Dec 23, 2025 · 11:21


    This week on Catalyst, Tammy recaps her favourite moments from the past year. She revisits the key themes that came up time and time again across conversations with almost 50 leaders across various industries: the human side of AI, authentic leadership, designing with people not for people, reinventing work, and technology as possibility. There was much discussed this year and there's a lot more to come in the new year! Please note that the views expressed may not necessarily be those of NTT DATA. Links: State of AI in Business. Learn more about Launch by NTT DATA. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    Irish Tech News Audio Articles
    Machine Learning: Whose Fault is AI Sycophancy?

    Dec 19, 2025 · 6:30


    By David Stephen

    There is a general consensus that large language models [LLMs] are sycophantic. So, one of the risks they pose, in their dominance as the contemporaneous consumer AI, is due to that feature. But is AI actually sycophantic in isolation, or is the sycophancy of AI a reflection of the core of how human society works?

    AI Sycophancy and Machine Learning

    There are very few examples of leadership and followership across human society that aren't predicated on elements of sycophancy. There are very few outcomes of collaborations that are without fair sycophancy. While there are examples of results from hostilities, conflicts, disagreements, violence and so forth, they are never without sycophancy in the in-groups, as well as ways to seek out sycophancy afterwards, to ensure some amount of staying power. Segments of sycophancy may include flattery, persuasion, appeal, requests, offers, tips, and so on. There are others that do not seem like sycophancy but could be, in some sense, like giving, perseverance, associating or partnership, material information, and so forth.

    Sycophancy is an aspect of operational intelligence. Simply, intelligence, conceptually, is defined as the use of memory for desired, expected or advantageous outcomes. It is divided into two: operational intelligence and improvement intelligence. Sycophancy can be used as a tool for an advantageous or desired outcome. Sycophancy, in some form, is intelligence. LLMs use digital memory for desired outcomes, as an operation of intelligence - with sycophancy as part of their training data. Sycophancy can also be intensely powerful when it is disguised. Sycophancy is abundant across politics, ethnicity, religion, sexuality, causes, economic classes, social strata and so forth.

    AI Sycophancy

    There is a recent phenomenon called AI psychosis, which is the reinforcement of delusion in some users, resulting, in some cases, in unwanted ends. Many blame AI sycophancy as the reason for this problem. One effect that is not simply AI sycophancy is that AI has solutions appeal, which is not vacuous sycophancy. For example, for people who use AI for tasks, and where AI assists effectively, there is a [mind] relay for emotional attachment. Simply, in the human mind, any experience [human or object] that is supportive or helpful - when an individual is in need - becomes a give off towards the emotion of care, love, affection, togetherness or others. This may become an entrance of appeal that allows whatever sycophancy follows to find a soft landing. This outcome is also possible if AI is used for companionship, such that as AI solves the communication need, it creates an appeal that eases the effectiveness of sycophancy. Now, as sycophancy holds for some users, it ignores areas of the mind for caution and consequences, as well as a distinction between reality and non-reality [or the source of that appeal]. As this becomes extreme, it may result in AI delusion, AI psychosis or worse. So, sometimes it is not just AI sycophancy but that it tracks from AI's usefulness.

    Solving AI Psychosis

    A major solution to AI psychosis can be a product of an AI Psychosis Research Lab, where there is a conceptual display of the mind, as a digital disclaimer, showing what AI is doing to the mind as it outputs words that may result in delusion or reinforce it. The display may also show relays of reality or otherwise. This lab can be subsumed within an AI company or standalone, with support of venture capital, providing answers from January 1, 2026.

    There is a new story on AP, "OpenAI, Microsoft face lawsuit over ChatGPT's alleged role in Connecticut murder-suicide," stating that, "The heirs of an 83-year-old Connecticut woman are suing ChatGPT maker OpenAI and its business partner Microsoft for wrongful death, alleging that the artificial intelligence chatbot intensified her son's "paranoid delusions" and helped direct them at his mother before he killed her." "The lawsuit is the first w...

    The Diary Of A CEO by Steven Bartlett
    Creator of AI: We Have 2 Years Before Everything Changes! These Jobs Won't Exist in 24 Months!

    Dec 18, 2025 · 99:59


    AI pioneer YOSHUA BENGIO, Godfather of AI, reveals the DANGERS of Agentic AI, killer robots, and cyber crime, and how we MUST build AI that won't harm people…before it's too late.  Professor Yoshua Bengio is a Computer Science Professor at the Université de Montréal and one of the 3 original Godfathers of AI. He is the most-cited scientist in the world on Google Scholar, a Turing Award winner, and the founder of LawZero, a non-profit organisation focused on building safe and human-aligned AI systems.  He explains: ◼️Why agentic AI could develop goals we can't control ◼️How killer robots and autonomous weapons become inevitable ◼️The hidden cyber crime and deepfake threat already unfolding ◼️Why AI regulation is weaker than food safety laws ◼️How losing control of AI could threaten human survival [00:00] Why Have You Decided to Step Into the Public Eye?   [02:53] Did You Bring Dangerous Technology Into the World?   [05:23] Probabilities of Risk   [08:18] Are We Underestimating the Potential of AI?   [10:29] How Can the Average Person Understand What You're Talking About?   [13:40] Will These Systems Get Safer as They Become More Advanced?   [20:33] Why Are Tech CEOs Building Dangerous AI?   [22:47] AI Companies Are Getting Out of Control   [24:06] Attempts to Pause Advancements in AI   [27:17] Power Now Sits With AI CEOs   [35:10] Jobs Are Already Being Replaced at an Alarming Rate   [37:27] National Security Risks of AI   [43:04] Artificial General Intelligence (AGI)   [44:44] Ads   [48:34] The Risk You're Most Concerned About   [49:40] Would You Stop AI Advancements if You Could?   [54:46] Are You Hopeful?   [55:45] How Do We Bridge the Gap to the Everyday Person?   [56:55] Love for My Children Is Why I'm Raising the Alarm   [01:00:43] AI Therapy   [01:02:43] What Would You Say to the Top AI CEOs?   [01:07:31] What Do You Think About Sam Altman?   [01:09:37] Can Insurance Companies Save Us From AI?   [01:12:38] Ads   [01:16:19] What Can the Everyday Person Do About This?   [01:18:24] What Citizens Should Do to Prevent an AI Disaster   [01:20:56] Closing Statement   [01:22:51] I Have No Incentives   [01:24:32] Do You Have Any Regrets?   [01:27:32] Have You Received Pushback for Speaking Out Against AI?   [01:28:02] What Should People Do in the Future for Work?   Follow Yoshua: LawZero - https://bit.ly/44n1sDG  Mila - https://bit.ly/4q6SJ0R  Website - https://bit.ly/4q4RqiL  You can purchase Yoshua's book, ‘Deep Learning (Adaptive Computation and Machine Learning series)', here: https://amzn.to/48QTrZ8  The Diary Of A CEO: ◼️Join DOAC circle here - https://doaccircle.com/  ◼️Buy The Diary Of A CEO book here - https://smarturl.it/DOACbook  ◼️The 1% Diary is back - limited time only - https://bit.ly/3YFbJbt  ◼️The Diary Of A CEO Conversation Cards (Second Edition) - https://g2ul0.app.link/f31dsUttKKb  ◼️Get email updates - https://bit.ly/diary-of-a-ceo-yt  ◼️Follow Steven - https://g2ul0.app.link/gnGqL4IsKKb  Sponsors:  Wispr - Get 14 days of Wispr Flow for free at https://wisprflow.ai/DOAC  Pipedrive - https://pipedrive.com/CEO Rubrik - To learn more, head to https://rubrik.com

    Raise the Line
    Helping People Understand Science Using the Science of Information: Jessica Malaty Rivera, Senior Science Communication Adviser at de Beaumont Foundation

    Dec 18, 2025 · 26:57


    “People are not looking for a perfect, polished answer. They're looking for a human to speak to them like a human,” says Jessica Malaty Rivera, an infectious disease epidemiologist and one of the most trusted science communicators in the U.S. to emerge from the COVID-19 pandemic. That philosophy explains her relatable, judgement-free approach to communications which aims to make science more human, more accessible and less institutional. In this wide-ranging Raise the Line discussion, host Lindsey Smith taps Rivera's expertise on how to elevate science understanding, build public trust, and equip people to recognize disinformation. She is also keen to help people understand the nuances of misinformation – which she is careful to define – and the emotional drivers behind it in order to contain the “infodemics” that complicate battling epidemics and other public health threats. It's a thoughtful call to educate the general public about the science of information as well as the science behind medicine. Tune in for Rivera's take on the promise and peril of AI-generated content, why clinicians should see communication as part of their professional responsibility, and how to prepare children to navigate an increasingly complex information ecosystem. Mentioned in this episode: de Beaumont Foundation. If you like this podcast, please share it on your social channels. You can also subscribe to the series and check out all of our episodes at www.osmosis.org/podcast

    AWS - Conversations with Leaders
    Agentic AI Transformation: Workforce Strategy & Leadership

    Dec 18, 2025 · 19:54


    How will agentic AI reshape tomorrow's workforce? Join AWS Executives in Residence Stephen Brozovich, Jake Burns, and Miriam McLemore for a look at how agentic AI is changing entire enterprises—not just IT departments. Unlike previous technology shifts, agentic AI demands integrated cross-functional teams and a fundamental rethinking of competitive advantage. Our experts share candid thoughts on upskilling existing employees over hiring new talent, transforming data from siloed assets into accessible strategic resources, and building authentic experimentation cultures where leadership genuinely rewards risk-taking. Discover why your proprietary data—not the AI technology itself—will become your key differentiator, and learn how to build the agile, integrated teams essential for AI implementation success.

    Cloud Realities
    CRSP08: State of AI 2025 pt.3: AI Unplugged - from data to sovereign intelligence with Johanna Hutchinson, BAE Systems

    Dec 18, 2025 · 42:58


    In this last episode of the special AI mini-series, we now explore the human side of transformation, where technology meets purpose and people remain at the center. From future jobs and critical thinking to working with C-level leaders, how human intervention and high-quality data drive success in an AI-powered world. This week Dave, Esmee, and Rob sit down with Johanna Hutchinson, CDO at BAE Systems, about why data matters, the rise of Sovereign AI, and the skills shaping the intelligence age.

    TLDR:
      00:55 Introduction of Johanna Hutchinson
      02:09 Explaining the State of AI mini-series with Craig
      06:01 Conversation with Johanna
      34:20 Weaving today's data tapestries with AI
      40:20 Going to a rave

    Guest
      Johanna Hutchinson: https://www.linkedin.com/in/johanna-hutchinson-95b95568/

    Hosts
      Dave Chapman: https://www.linkedin.com/in/chapmandr/
      Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/
      Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/
      with co-host Craig Suckling: https://www.linkedin.com/in/craigsuckling/

    Production
      Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
      Dave Chapman: https://www.linkedin.com/in/chapmandr/

    Sound
      Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
      Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/

    'Cloud Realities' is an original podcast from Capgemini

    Health and Explainable AI Podcast
    Richard Bonneau from Genentech on Drug Discovery, Computational Sciences and Machine Learning

    Dec 18, 2025 · 30:27


    Richard Bonneau, Vice President of Machine Learning for Drug Discovery at Genentech and Roche, provides Pitt's HexAI podcast host, Jordan Gass-Pooré, with an insider view on how his team is fundamentally changing and accelerating how new drug candidate molecules are designed, predicted, and optimized.Geared for students in computational sciences and hybrid STEM fields, the episode introduces listeners to uses of AI and ML in molecular design, the biomolecular structure and structure-function relationships that underpin drug discovery, and how distinct teams at Genentech work together through an integrated computational system.Richard and Jordan use the opportunity to touch on how advances in the molecule design domain can inspire and inform advances in computational pathology and laboratory medicine. Richard also delves into the critical role of Explainable AI (XAI), interpretability, and error estimation in the drug design-prototype-test cycle, and provides advice on domain knowledge and skills needed today by students interested in joining teams like his at Genentech and Roche.

    Excess Returns
    The Alpha No Human Can Find | David Wright on Machine Learning's Hidden Edge

    Dec 17, 2025 · 61:22


    In this episode of Excess Returns, we sit down with David Wright, Head of Quantitative Investing at Pictet Asset Management, for a deep and practical conversation about how artificial intelligence and machine learning are actually being used in real-world investment strategies. Rather than focusing on hype or black-box promises, David walks through how systematic investors combine human judgment, economic intuition, and machine learning models to forecast stock returns, construct portfolios, and manage risk. The discussion covers what AI can and cannot do in investing today, how machine learning differs from traditional factor models and large language models like ChatGPT, and why interpretability and robustness still matter. This episode is a must-watch for investors interested in quantitative investing, AI-driven ETFs, and the future of systematic portfolio construction.

    Main topics covered:
      • What artificial intelligence and machine learning really mean in an investing context
      • How machine learning models are trained to forecast relative stock returns
      • The role of features, signals, and decision trees in quantitative investing
      • Key differences between machine learning models and large language models like ChatGPT
      • Why interpretability and stability matter more than hype in AI investing
      • How human judgment and machine learning complement each other in portfolio management
      • Data selection, feature engineering, and the trade-offs between traditional and alternative data
      • Overfitting, data mining concerns, and how professional investors build guardrails
      • Time horizons, rebalancing frequency, and transaction cost considerations
      • How AI-driven strategies are implemented in diversified portfolios and ETFs
      • The future of AI in investing and what it means for investors

    Timestamps:
      00:00 Introduction and overview of AI and machine learning in investing
      03:00 Defining artificial intelligence vs machine learning in finance
      05:00 How machine learning models are trained using financial data
      07:00 Machine learning vs ChatGPT and large language models for stock selection
      09:45 Decision trees and how machine learning makes forecasts
      12:00 Choosing data inputs: traditional data vs alternative data
      14:40 The role of economic intuition and explainability in quant models
      18:00 Time horizons and why machine learning works better at shorter horizons
      22:00 Can machine learning improve traditional factor investing
      24:00 Data mining, overfitting, and model robustness
      26:00 What humans do better than AI and where machines excel
      30:00 Feature importance, conditioning effects, and model structure
      32:00 Model retraining, stability, and long-term persistence
      36:00 The future of automation and human oversight in investing
      40:00 Why ChatGPT-style models struggle with portfolio construction
      45:00 Portfolio construction, diversification, and ETF implementation
      51:00 Rebalancing, transaction costs, and practical execution
      56:00 Surprising insights from machine learning models
      59:00 Closing lessons on investing and avoiding overtrading

    Practical AI
    Beyond chatbots: Agents that tackle your SOPs

    Dec 17, 2025 · 45:53 · Transcription available


    As AI reshapes the workplace, employees and leaders face questions about meaningful work, automation, and human impact. In this episode, Jason Beutler, CEO of RoboSource, shares how companies can rethink workflows, integrate AI in accessible ways, and empower employees without fear. The discussion covers leveraging AI to handle routine tasks (SOPs or "plays") and reimagining work for smarter, more human-centered outcomes.

    Featuring:
      • Jason Beutler – LinkedIn
      • Chris Benson – Website, LinkedIn, Bluesky, GitHub, X
      • Daniel Whitenack – Website, GitHub, X

    Sponsor:
      Framer – Design and publish without limits with Framer, the free all-in-one design platform. Unlimited projects, no tool switching, and professional sites—no Figma imports or HTML hassles required. Start creating for free at framer.com/design with code `PRACTICALAI` for a free month of Framer Pro.

    Upcoming Events: Register for upcoming webinars here!

    Scope It Out with Dr. Tim Smith
    Episode 107: Predicting Surgical Outcomes in Chronic Rhinosinusitis From Preoperative Patient Data: A Machine Learning Approach

    Dec 17, 2025 · 24:46


    In this episode, host Dr. Dan Beswick speaks with Dr. Waleed Abuzeid. They discuss the recently published Original Article: “Predicting Surgical Outcomes in Chronic Rhinosinusitis From Preoperative Patient Data: A Machine Learning Approach.” The full manuscript is available in the International Forum of Allergy and Rhinology. Listen and subscribe for free to Scope It Out […]

    Contractor Evolution
    251. Are Trades Businesses Future-Proof? (AI Is Coming) - Kasim Aslam

    Dec 17, 2025 · 50:19


    Take our 5 minute quiz and get your free Contractor Growth Roadmap: https://trybta.com/DL251
    To learn more about Breakthrough Academy, click here: https://trybta.com/EP251
    Will AI replace construction workers?

    Track Changes
    Building trust in AI with small language models: With Namee Oberst

    Dec 16, 2025 · 39:40


    This week on Catalyst, Tammy speaks with Namee Oberst, co-founder of LLMWare, about her unique journey into AI. Namee spent years as a corporate attorney and is now developing small language models for legal and financial organizations. She's solving for the pain points that she experienced for years. Namee and Tammy discuss the importance of small language models in building trust and touch on the future of legal work in an AI-driven world. Please note that the views expressed may not necessarily be those of NTT DATA. Links: Namee Oberst, LLMWare. Learn more about Launch by NTT DATA. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    a16z
    Dwarkesh and Ilya Sutskever on What Comes After Scaling

    Dec 15, 2025 · 92:09


    AI models feel smarter than their real-world impact. They ace benchmarks, yet still struggle with reliability, strange bugs, and shallow generalization. Why is there such a gap between what they can do on paper and in practice?

    In this episode from The Dwarkesh Podcast, Dwarkesh talks with Ilya Sutskever, cofounder of SSI and former OpenAI chief scientist, about what is actually blocking progress toward AGI. They explore why RL and pretraining scale so differently, why models outperform on evals but underperform in real use, and why human-style generalization remains far ahead. Ilya also discusses value functions, emotions as a built-in reward system, the limits of pretraining, continual learning, superintelligence, and what an AI-driven economy could look like.

    Resources:
      Transcript: https://www.dwarkesh.com/p/ilya-sutsk...
      Apple Podcasts: https://podcasts.apple.com/us/podcast...
      Spotify: https://open.spotify.com/episode/7naO...

    Stay Updated:
      If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
      Find a16z on X: https://x.com/a16z
      Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
      Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
      Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
      Follow our host: https://x.com/eriktorenberg

    Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

    Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    Prosecco Theory
    229 - Mimicking Authenticity

    Dec 15, 2025 · 53:29


    Megan and Michelle debate about AI-generated music, stealing art, artificial streaming, the cigarette man, machine learning, artist provocation, seeing nuance, and consolidating power.

    Sources:
      • A mysterious stranger rode into town and topped a country music chart. He might not be real.
      • AI-generated music is going viral. Should the music industry be worried?
      • Spotify has an AI music problem - but bots love it
      • The trouble with AI art isn't just lack of originality. It's something far bigger
      • Unveiling the impacts and disruption of AI on music industry stakeholders

    Want to support Prosecco Theory?
      • Become a Patreon subscriber and earn swag!
      • Check out our merch, available on teepublic.com!
      • Follow/Subscribe wherever you listen!
      • Rate, review, and tell your friends!
      • Follow us on Instagram!

    Ever thought about starting your own podcast? From day one, Buzzsprout gave us all the tools we needed to get Prosecco Theory off the ground. What are you waiting for? Follow this link to get started. Cheers!!

    Support the show

    a16z
    AI Eats the World: Benedict Evans on the Next Platform Shift

    Dec 12, 2025 · 62:50


    AI is reshaping the tech landscape, but a big question remains: is this just another platform shift, or something closer to electricity or computing in scale and impact? Some industries may be transformed. Others may barely feel it. Tech giants are racing to reorient their strategies, yet most people still struggle to find an everyday use case. That tension tells us something important about where we actually are.

    In this episode, technology analyst and former a16z partner Benedict Evans joins General Partner Erik Torenberg to break down what is real, what is hype, and how much history can guide us. They explore bottlenecks in compute, the surprising products that still do not exist, and how companies like Google, Meta, Apple, Amazon, and OpenAI are positioning themselves. Finally, they look ahead at what would need to happen for AI to one day be considered even more transformative than the internet.

    Timestamps:
      0:00 – Introduction
      0:17 – Defining AI and Platform Shifts
      1:50 – Patterns in Technology Adoption
      6:04 – AI: Hype, Bubbles, and Uncertainty
      13:25 – Winners, Losers, and Industry Impact
      19:00 – AI Adoption: Use Cases and Bottlenecks
      24:00 – Comparisons to Past Tech Waves
      32:00 – The Role of Products and Workflows
      40:00 – Consumer vs. Enterprise AI
      46:00 – Competitive Landscape: Tech Giants & Startups
      51:00 – Open Questions & The Future of AI

    Resources:
      Follow Benedict on LinkedIn: https://www.linkedin.com/in/benedictevans/

    Stay Updated:
      If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
      Find a16z on X: https://x.com/a16z
      Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
      Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
      Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
      Follow our host: https://x.com/eriktorenberg

    Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures.

    Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    Books & Writers · The Creative Process
    The AI Wager: Betting on Technology's Future w/ Philosopher & Author SVEN NYHOLM - Highlights

    Dec 12, 2025 · 16:29


    “I think we're betting on AI as something that can help to solve a lot of problems for us. It's the future, we think, whether it's producing text or art, or doing medical research or planning our lives for us, etc., the bet is that AI is going to be great, that it's going to get us everything we want and make everything better. But at the same time, we're gambling, at the extreme end, with the future of humanity, hoping for the best and hoping that this, what I'm calling the AI wager, is going to work out to our advantage, but we'll see.”

    As we move towards 2026, we are in a massive “upgrade moment” that most of us can feel. New pressures, new identities, new expectations on our work, our relationships, and our inner lives. Throughout the year, I've been speaking with professional creatives, climate and tech experts, teachers, neuroscientists, psychologists, and futurists about how AI can be used intelligently and ethically as a partnership to ensure we do not raise a generation that relies on machines to think for them. It's not that we are being replaced by machines. It's that we're being invited to become a new kind of human. Where AI isn't the headline; human transformation is. And that includes the arts, culture, and the whole of society. Generative AI – the technologies that write our emails, draft our reports, and even create art – has become a fixture of daily life, and the philosophical and moral questions they raise are no longer abstract. They are immediate, personal, and potentially disruptive to the core of what we consider human work.

    Our guest today, Sven Nyholm, is one of the leading voices helping us navigate this new reality. As the Principal Investigator of AI Ethics at the Munich Center for Machine Learning and co-editor of the journal Science and Engineering Ethics, he has spent his career dissecting the intimate relationship between humanity and the machine. His body of work systematically breaks down concepts that worry us all: the responsibility gap in autonomous systems, the ethical dimensions of human-robot interaction, and the question of whether ceding intellectual tasks to a machine fundamentally atrophies our own skills. His previous books, like Humans and Robots: Ethics, Agency, and Anthropomorphism, have laid the foundational groundwork for understanding these strange new companions in our lives.

    His forthcoming book is The Ethics of Artificial Intelligence: A Philosophical Introduction. The book is a rigorous exploration of everything from algorithmic bias and opacity to the long-term existential risks of powerful AI. We'll talk about what it means when an algorithm can produce perfect language without genuine meaning, why we feel entitled to take credit for an AI's creation, and what this technological leap might be costing us, personally, as thinking, moral beings.

    Episode Website: www.creativeprocess.info/pod
    Instagram: @creativeprocesspodcast

    Books & Writers · The Creative Process
    The Ethics of AI w/ SVEN NYHOLM, Author & Lead Researcher, Munich Centre for Machine Learning

    Books & Writers · The Creative Process

    Play Episode Listen Later Dec 12, 2025 62:12


    As we move towards 2026, we are in a massive “upgrade moment” that most of us can feel. New pressures, new identities, new expectations on our work, our relationships, and our inner lives. Throughout the year, I've been speaking with professional creatives, climate and tech experts, teachers, neuroscientists, psychologists, and futurists about how AI can be used intelligently and ethically as a partnership to ensure we do not raise a generation that relies on machines to think for them. It's not that we are being replaced by machines. It's that we're being invited to become a new kind of human, where AI isn't the headline; human transformation is. And that includes the arts, culture, and the whole of society. Generative AI – the technologies that write our emails, draft our reports, and even create art – have become a fixture of daily life, and the philosophical and moral questions they raise are no longer abstract. They are immediate, personal, and potentially disruptive to the core of what we consider human work.
    Our guest today, Sven Nyholm, is one of the leading voices helping us navigate this new reality. Principal Investigator of AI Ethics at the Munich Center for Machine Learning and co-editor of the journal Science and Engineering Ethics, he has spent his career dissecting the intimate relationship between humanity and the machine. His body of work systematically breaks down concepts that worry us all: the responsibility gap in autonomous systems, the ethical dimensions of human-robot interaction, and the question of whether ceding intellectual tasks to a machine fundamentally atrophies our own skills. His previous books, like Humans and Robots: Ethics, Agency, and Anthropomorphism, have laid the foundational groundwork for understanding these strange new companions in our lives.
    His forthcoming book is The Ethics of Artificial Intelligence: A Philosophical Introduction. The book is a rigorous exploration of everything from algorithmic bias and opacity to the long-term existential risks of powerful AI. We'll talk about what it means when an algorithm can produce perfect language without genuine meaning, why we feel entitled to take credit for an AI's creation, and what this technological leap might be costing us, personally, as thinking, moral beings.
    Episode Website: www.creativeprocess.info/pod
    Instagram: @creativeprocesspodcast

    Education · The Creative Process
    The AI Wager: Betting on Technology's Future w/ Philosopher & Author SVEN NYHOLM - Highlights

    Education · The Creative Process

    Play Episode Listen Later Dec 12, 2025 16:29


    “I think we're betting on AI as something that can help to solve a lot of problems for us. It's the future, we think, whether it's producing text or art, or doing medical research or planning our lives for us, etc., the bet is that AI is going to be great, that it's going to get us everything we want and make everything better. But at the same time, we're gambling, at the extreme end, with the future of humanity, hoping for the best and hoping that this, what I'm calling the AI wager, is going to work out to our advantage, but we'll see.”
    As we move towards 2026, we are in a massive “upgrade moment” that most of us can feel. New pressures, new identities, new expectations on our work, our relationships, and our inner lives. Throughout the year, I've been speaking with professional creatives, climate and tech experts, teachers, neuroscientists, psychologists, and futurists about how AI can be used intelligently and ethically as a partnership to ensure we do not raise a generation that relies on machines to think for them. It's not that we are being replaced by machines. It's that we're being invited to become a new kind of human, where AI isn't the headline; human transformation is. And that includes the arts, culture, and the whole of society. Generative AI – the technologies that write our emails, draft our reports, and even create art – have become a fixture of daily life, and the philosophical and moral questions they raise are no longer abstract. They are immediate, personal, and potentially disruptive to the core of what we consider human work.
    Our guest today, Sven Nyholm, is one of the leading voices helping us navigate this new reality. Principal Investigator of AI Ethics at the Munich Center for Machine Learning and co-editor of the journal Science and Engineering Ethics, he has spent his career dissecting the intimate relationship between humanity and the machine. His body of work systematically breaks down concepts that worry us all: the responsibility gap in autonomous systems, the ethical dimensions of human-robot interaction, and the question of whether ceding intellectual tasks to a machine fundamentally atrophies our own skills. His previous books, like Humans and Robots: Ethics, Agency, and Anthropomorphism, have laid the foundational groundwork for understanding these strange new companions in our lives.
    His forthcoming book is The Ethics of Artificial Intelligence: A Philosophical Introduction. The book is a rigorous exploration of everything from algorithmic bias and opacity to the long-term existential risks of powerful AI. We'll talk about what it means when an algorithm can produce perfect language without genuine meaning, why we feel entitled to take credit for an AI's creation, and what this technological leap might be costing us, personally, as thinking, moral beings.
    Episode Website: www.creativeprocess.info/pod
    Instagram: @creativeprocesspodcast

    Education · The Creative Process
    The Ethics of AI w/ SVEN NYHOLM, Author & Lead Researcher, Munich Centre for Machine Learning

    Education · The Creative Process

    Play Episode Listen Later Dec 12, 2025 62:12


    As we move towards 2026, we are in a massive “upgrade moment” that most of us can feel. New pressures, new identities, new expectations on our work, our relationships, and our inner lives. Throughout the year, I've been speaking with professional creatives, climate and tech experts, teachers, neuroscientists, psychologists, and futurists about how AI can be used intelligently and ethically as a partnership to ensure we do not raise a generation that relies on machines to think for them. It's not that we are being replaced by machines. It's that we're being invited to become a new kind of human, where AI isn't the headline; human transformation is. And that includes the arts, culture, and the whole of society. Generative AI – the technologies that write our emails, draft our reports, and even create art – have become a fixture of daily life, and the philosophical and moral questions they raise are no longer abstract. They are immediate, personal, and potentially disruptive to the core of what we consider human work.
    Our guest today, Sven Nyholm, is one of the leading voices helping us navigate this new reality. Principal Investigator of AI Ethics at the Munich Center for Machine Learning and co-editor of the journal Science and Engineering Ethics, he has spent his career dissecting the intimate relationship between humanity and the machine. His body of work systematically breaks down concepts that worry us all: the responsibility gap in autonomous systems, the ethical dimensions of human-robot interaction, and the question of whether ceding intellectual tasks to a machine fundamentally atrophies our own skills. His previous books, like Humans and Robots: Ethics, Agency, and Anthropomorphism, have laid the foundational groundwork for understanding these strange new companions in our lives.
    His forthcoming book is The Ethics of Artificial Intelligence: A Philosophical Introduction. The book is a rigorous exploration of everything from algorithmic bias and opacity to the long-term existential risks of powerful AI. We'll talk about what it means when an algorithm can produce perfect language without genuine meaning, why we feel entitled to take credit for an AI's creation, and what this technological leap might be costing us, personally, as thinking, moral beings.
    Episode Website: www.creativeprocess.info/pod
    Instagram: @creativeprocesspodcast

    The Creative Process in 10 minutes or less · Arts, Culture & Society
    The Ethics of AI w/ SVEN NYHOLM, Author & Lead Researcher, Munich Centre for Machine Learning

    The Creative Process in 10 minutes or less · Arts, Culture & Society

    Play Episode Listen Later Dec 12, 2025 16:29


    “I think we're betting on AI as something that can help to solve a lot of problems for us. It's the future, we think, whether it's producing text or art, or doing medical research or planning our lives for us, etc., the bet is that AI is going to be great, that it's going to get us everything we want and make everything better. But at the same time, we're gambling, at the extreme end, with the future of humanity, hoping for the best and hoping that this, what I'm calling the AI wager, is going to work out to our advantage, but we'll see.”
    As we move towards 2026, we are in a massive “upgrade moment” that most of us can feel. New pressures, new identities, new expectations on our work, our relationships, and our inner lives. Throughout the year, I've been speaking with professional creatives, climate and tech experts, teachers, neuroscientists, psychologists, and futurists about how AI can be used intelligently and ethically as a partnership to ensure we do not raise a generation that relies on machines to think for them. It's not that we are being replaced by machines. It's that we're being invited to become a new kind of human, where AI isn't the headline; human transformation is. And that includes the arts, culture, and the whole of society. Generative AI – the technologies that write our emails, draft our reports, and even create art – have become a fixture of daily life, and the philosophical and moral questions they raise are no longer abstract. They are immediate, personal, and potentially disruptive to the core of what we consider human work.
    Our guest today, Sven Nyholm, is one of the leading voices helping us navigate this new reality. Principal Investigator of AI Ethics at the Munich Center for Machine Learning and co-editor of the journal Science and Engineering Ethics, he has spent his career dissecting the intimate relationship between humanity and the machine. His body of work systematically breaks down concepts that worry us all: the responsibility gap in autonomous systems, the ethical dimensions of human-robot interaction, and the question of whether ceding intellectual tasks to a machine fundamentally atrophies our own skills. His previous books, like Humans and Robots: Ethics, Agency, and Anthropomorphism, have laid the foundational groundwork for understanding these strange new companions in our lives.
    His forthcoming book is The Ethics of Artificial Intelligence: A Philosophical Introduction. The book is a rigorous exploration of everything from algorithmic bias and opacity to the long-term existential risks of powerful AI. We'll talk about what it means when an algorithm can produce perfect language without genuine meaning, why we feel entitled to take credit for an AI's creation, and what this technological leap might be costing us, personally, as thinking, moral beings.
    Episode Website: www.creativeprocess.info/pod
    Instagram: @creativeprocesspodcast

    Breaking Banks Fintech
    Tom Sosnoff’s Fintech Innovation Aiming to Fix Compensation Inequity

    Breaking Banks Fintech

    Play Episode Listen Later Dec 11, 2025 45:55


    In This Episode: What if someone told you you've been underpaid by more than two million dollars across your career? In this episode, Jason Henrichs speaks with fintech entrepreneur Tom Sosnoff. You may know Tom from founding thinkorswim and tastytrade, two billion-dollar exits that transformed retail investing. His newest venture, Lossdog, launching this month, focuses on salary transparency. Traditional compensation platforms rely on legacy salary benchmarks and anonymous, unverified self-reports that create wide ranges. Lossdog's approach uses AI and verified data sources to deliver personalized valuations rather than vague, crowdsourced ranges. Think of it as whole-person valuation meets Machine Learning: a worth engine combined with AI onboarding that surfaces overlooked value in resumes, including skills, certifications, pivots, and even career gaps. It's all about fair value and knowing your worth! Lossdog aims to give individuals institutional-grade tools to negotiate salary on equal footing with employers by generating accurate compensation valuations. Listen, share, and subscribe for more weekly fintech insights from Breaking Banks.

    Raise the Line
    Aligning Investment in Family Medicine With Its Impact: Dr. Jen Brull, Board Chair of the American Academy of Family Physicians

    Raise the Line

    Play Episode Listen Later Dec 11, 2025 19:42


    “Delivering a baby one day and holding a patient's hand at the end of life literally the next day...that continuity is very powerful,” says Dr. Jen Brull, board chair of the American Academy of Family Physicians (AAFP). And as she points out, that continuity also builds trust with patients, an increasingly valuable commodity when faith in medicine and science is declining. As you might expect given her role, Dr. Brull believes strengthening family medicine is the key to improving health and healthcare. Exactly how to do that is at the heart of her conversation with host Lindsey Smith on this episode of Raise the Line, which covers ideas for payment reform, reducing administrative burdens, and stronger support for physician well-being. And with a projected shortage of nearly forty thousand primary care physicians, Dr. Brull also shares details on AAFP's “Be There First” initiative, which is designed to attract service-minded medical students – whom she describes as family physicians at heart – early in their educational journey. “I have great hope that increasing the number of these service-first medical students will fill part of this gap.” Tune in for an informative look at a cornerstone of the healthcare system and what it means to communities of all sizes throughout the nation.
    Mentioned in this episode: AAFP
    If you like this podcast, please share it on your social channels. You can also subscribe to the series and check out all of our episodes at www.osmosis.org/podcast

    The Association Podcast
    Embracing AI & Accelerating the Velocity of Change in Associations with Alex Mouw

    The Association Podcast

    Play Episode Listen Later Dec 11, 2025 46:33


    On this episode of The Association Podcast, we welcome back Alex Mouw, Principal Strategic Advisor at AWS for Nonprofits. Highlighting the importance of strategic alignment and the value of diverse stakeholder involvement, Alex provides insightful guidance on implementing AI solutions sustainably and effectively. The discussion touches on the evolving role of technology in nonprofits, the necessity of a culture that supports experimentation, and how to decide between building or buying technology solutions. We also discuss the Imagine Grant, AWS resources for nonprofit organizations, and what skills are essential for today's tech landscape. 

    Cloud Realities
    CR117 Redesigning industries with AI with Scott Hanselman, Microsoft

    Cloud Realities

    Play Episode Listen Later Dec 11, 2025 48:57


    AI is transforming software development—redefining roles, creativity, and community, while challenging developers to embrace ambiguity, orchestrate specialized agents, and stay human through empathy and curiosity. Will AI make developers more creative, or will we forget how the machine really works under the hood? This week Dave, Esmee, and Rob sit down with Scott Hanselman, VP Developer Community at Microsoft, for a wildly energetic, deeply human, and brilliantly practical conversation about how AI is reshaping software development and what that means for creativity, careers, and all industries.
    TLDR
    00:30 – Scott Hanselman introduced as a special guest from Microsoft Ignite 2025.
    02:16 – Scott discusses how AI is fundamentally redesigning all industries.
    09:50 – Don't anthropomorphize AI, I want the computer from Star Trek!
    15:30 – Delegation: contrasting the roles of humans and agents.
    18:30 – The importance of supporting early career growth and learning.
    26:30 – Why specificity matters in AI and coding.
    35:30 – Making AI delightful and fun.
    45:30 – Always put humans first in AI development.
    46:00 – Each morning I think about lunch.
    Guest
    Scott Hanselman: https://www.hanselman.com/
    The Hanselminutes Podcast: https://www.hanselman.com/podcasts with over 1025 podcasts!
    Hosts
    Dave Chapman: https://www.linkedin.com/in/chapmandr/
    Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/
    Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/
    Production
    Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
    Dave Chapman: https://www.linkedin.com/in/chapmandr/
    Sound
    Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
    Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/
    'Cloud Realities' is an original podcast from Capgemini

    Weather Geeks
    Chasing Hail | Re-released

    Weather Geeks

    Play Episode Listen Later Dec 10, 2025 41:36


    RECORDED MARCH 4, 2025; Originally released March 12, 2024
    Guest: Dr. Sean Waugh, National Severe Storms Laboratory research scientist
    As we've seen in the movies, and real life, tornadoes are some of the most destructive forces in nature, capable of leveling homes and damaging entire communities in a matter of minutes. And what about hail? It causes BILLIONS and billions of dollars in damage in the US every year. But how do we get up-close, real-time data on these violent storms in order to learn what is needed for better predictions? That's where cutting-edge field research comes in. Today on Weather Geeks, we're diving into the world of storm observation and mobile weather technology with Sean Waugh from NOAA's National Severe Storms Laboratory. From deploying instrumented drones and mobile mesonets to braving the extreme environments of tornadoes and hailstorms, his work is helping scientists better understand the atmospheric conditions that drive severe weather for years to come…
    Chapters
    00:00 The Destructive Power of Tornadoes and Hail
    02:58 Sean Waugh: A Journey into Meteorology
    05:57 Innovative HAIL Camera Technology
    08:47 Chasing Hail: The Challenges and Safety Measures
    11:59 Observing Hail: The Role of High-Speed Cameras
    14:46 Mobile Mesonets: Gathering Atmospheric Data
    17:59 Machine Learning and AI in Weather Prediction
    21:02 AI in Meteorology: Enhancing Forecasting Accuracy
    24:23 Hands-On Learning: Training the Next Generation of Meteorologists
    26:00 Tornado Research: Understanding Formation and Behavior
    28:05 Behind the Scenes of Twisters: A Meteorologist's Role
    32:20 Authenticity in Film: The Science of Twisters
    36:41 Passion in Meteorology: Inspiring Future Generations
    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    Practical AI
    The AI engineer skills gap

    Practical AI

    Play Episode Listen Later Dec 10, 2025 45:33 Transcription Available


    Chris and Daniel talk with returning guest, Ramin Mohammadi, about how those seeking to get into AI engineer/data science jobs are expected to come in as mid-level engineers (not entry level). They explore this growing gap along with what should (or could) be done in academia to focus on real-world skills vs. theoretical knowledge.
    Featuring:
    Ramin Mohammadi – LinkedIn
    Chris Benson – Website, LinkedIn, Bluesky, GitHub, X
    Daniel Whitenack – Website, GitHub, X
    Sponsors:
    Shopify – The commerce platform trusted by millions. From idea to checkout, Shopify gives you everything you need to launch and scale your business—no matter your level of experience. Build beautiful storefronts, market with built-in AI tools, and tap into the platform powering 10% of all U.S. eCommerce. Start your one-dollar trial at shopify.com/practicalai
    Upcoming Events: Register for upcoming webinars here!

    AWS - Conversations with Leaders
    A Conversation with Matt Garman | AWS Executive Summit Recap

    AWS - Conversations with Leaders

    Play Episode Listen Later Dec 10, 2025 41:11


    In this fireside chat with AWS CEO Matt Garman and AWS VP of Global Services, Uwem Ukpong, hear about the latest developments at AWS and why they matter to your business. Featured at the AWS Executive Summit at re:Invent, this discussion addresses everything from navigating data sovereignty with the European Sovereign Cloud, to building custom AI models with Nova Forge and Trainium chips, to transforming software development with frontier agents. Learn how AWS is helping enterprises unlock AI's full potential while maintaining control of their data and reimagining how teams work.

    The Way I Heard It with Mike Rowe
    462: Del Bigtree—An Inconvenient Study

    The Way I Heard It with Mike Rowe

    Play Episode Listen Later Dec 9, 2025 99:34


    On this eye-opening episode, Mike welcomes filmmaker and television veteran Del Bigtree of The HighWire to discuss his newest documentary, An Inconvenient Study—a film that investigates what happened to the most thorough childhood vaccinated vs. unvaccinated study ever done. They discuss how Del convinced a doctor at one of the most prestigious health institutes in the nation to conduct the study, the shocking findings, and why the study has never seen the light of day… until now.
    Tip o' the hat to our excellent sponsors
    AuraFrames.com/Mike Use code MIKE to get $55 off their limited-edition Stone Collection frame.
    PureTalk.com/Rowe Get unlimited talk, text and data for $29.95 p/month for LIFE.
    GoodRanchers.com Use code MIKE to get $40 off plus free meat for life with new subscription.
    NetSuite.com/Mike Download the CFO's Guide to AI and Machine Learning

    Track Changes
    How to automate without leaving people behind: With Jamie Sermon

    Track Changes

    Play Episode Listen Later Dec 9, 2025 36:09


    This week on Catalyst, Tammy is joined by Jamie Sermon, the Vice President of Engineering, Robotics and Automation at UPS. Jamie has been at UPS for over 15 years and knows the company intimately. He also knows that you can't solve logistics problems if you're not thinking about the customer at every step. Jamie shares how his upbringing in the Bahamas and his studies in physical therapy in Cuba helped shape his people-first approach. He also shares how he creates space for experimentation and how automation can be used to create opportunities for people, not take them away. Please note that the views expressed may not necessarily be those of NTT DATA
    Links: Jamie Sermon
    Learn more about Launch by NTT DATA
    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    MacBreak Weekly (Audio)
    MBW 1001: Beschmirckled - John Giannandrea's Incoming Retirement

    MacBreak Weekly (Audio)

    Play Episode Listen Later Dec 3, 2025 151:47


    John Giannandrea is stepping down from his role as VP for Machine Learning & AI Strategy and retiring in Spring 2026! Could Apple re-partner with Intel on a future product? Apple overtakes Samsung as the world's top phone maker. And Apple's new holiday season TV ad charms the panel!
    John Giannandrea to retire from Apple.
    From Ming-Chi Kuo: "Intel expected to begin shipping Apple's lowest-end M processor as early as 2027..."
    Apple to resist India order to preload state-run app as political outcry builds.
    Apple set to become world's top phone maker, overtaking Samsung.
    EU to examine if Apple Ads and Maps subject to tough rules, Apple says no.
    Apple releases 2025 holiday season TV ad: 'A Critter Carol'.
    Apple Music Replay 2025 now fully available.
    Apple security bounties slashed as Mac malware grows.
    MKBHD's wallpaper app Panels is shutting down.
    Apple TV series The Hunt postponed due to plagiarism allegations.
    Apple TV debuts trailer for all-new holiday special "The First Snow of Fraggle Rock," premiering globally Friday, December 5.
    Apple and (RED) announce limited-time $3M Apple Pay partnership.
    After Apple originally announced the first version of Halo in 1999, Xbox apparently called Bungie and said 'Steve Jobs can't have that. We're going to buy you.'
    David Lerner, a Mr. Fix-it of Apple computers, dies at 72.
    34 years ago, Apple created a multimedia file format for the Mac, and it's still all around us.
    Picks of the Week
    Alex's Pick: Logic Pro for iPad
    Andy's Pick: "I Made Apple's Widget Clock"
    Jason's Pick: Govee Christmas Lights 2
    Hosts: Leo Laporte, Alex Lindsay, Andy Ihnatko, and Jason Snell
    Download or subscribe to MacBreak Weekly at https://twit.tv/shows/macbreak-weekly.
    Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit
    Sponsors:
    outsystems.com/twit
    1password.com/macbreak
    zocdoc.com/macbreak
    framer.com/design promo code MACBREAK

    The MeidasTouch Podcast
    MeidasTouch Full Podcast - 12/2/25

    The MeidasTouch Podcast

    Play Episode Listen Later Dec 2, 2025 78:03


    On today's MeidasTouch Podcast, we break down a stunning series of developments: after days of denials, the White House has now admitted it conducted a second strike on a Venezuelan boat as survivors clung to life, an act legal experts say amounts to a war crime, as Trump escalates his threats of war against the country. We also dive into the unanimous appeals court ruling affirming Alina Habba's disqualification as a U.S. attorney, examine Trump's bizarre new comments about his mystery MRI that raise more questions than answers, and cover the growing pile of legal, political, and ethical crises engulfing this collapsing regime. Ben, Brett, and Jordy break it all down.
    Subscribe to Meidas+ at https://meidasplus.com
    Get Meidas Merch: https://store.meidastouch.com
    Deals from our sponsors!
    Ridge: Upgrade your wallet today! Get 10% Off @Ridge with code MEIDAS at https://www.Ridge.com/MEIDAS #Ridgepod
    Home Chef: Home Chef is offering 18 FREE Meals PLUS Free Dessert for Life and FREE Shipping on your first box! Go to https://HomeChef.com/MEIDAS
    One Skin: Get 15% off One Skin with the code MEIDAS at https://www.oneskin.co/MEIDAS #oneskinpod
    Qualia: Go to https://qualialife.com/MEIDAS for up to 50% off your purchase and use code MEIDAS for an additional 15%.
    Netsuite: Download the CFO's guide to AI and Machine Learning at https://Netsuite.com/meidas
    Remember to subscribe to ALL the MeidasTouch Network Podcasts:
    MeidasTouch: https://www.meidastouch.com/tag/meidastouch-podcast
    Legal AF: https://www.meidastouch.com/tag/legal-af
    MissTrial: https://meidasnews.com/tag/miss-trial
    The PoliticsGirl Podcast: https://www.meidastouch.com/tag/the-politicsgirl-podcast
    Cult Conversations: The Influence Continuum with Dr. Steve Hassan: https://www.meidastouch.com/tag/the-influence-continuum-with-dr-steven-hassan
    Mea Culpa with Michael Cohen: https://www.meidastouch.com/tag/mea-culpa-with-michael-cohen
    The Weekend Show: https://www.meidastouch.com/tag/the-weekend-show
    Burn the Boats: https://www.meidastouch.com/tag/burn-the-boats
    Majority 54: https://www.meidastouch.com/tag/majority-54
    Political Beatdown: https://www.meidastouch.com/tag/political-beatdown
    On Democracy with FP Wellman: https://www.meidastouch.com/tag/on-democracy-with-fpwellman
    Uncovered: https://www.meidastouch.com/tag/maga-uncovered
    Learn more about your ad choices. Visit megaphone.fm/adchoices

    Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas
    336 | Anil Ananthaswamy on the Mathematics of Neural Nets and AI

    Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

    Play Episode Listen Later Nov 24, 2025 74:11


    Machine learning using neural networks has led to a remarkable leap forward in artificial intelligence, and the technological and social ramifications have been discussed at great length. To understand the origin and nature of this progress, it is useful to dig at least a little bit into the mathematical and algorithmic structures underlying these techniques. Anil Ananthaswamy takes up this challenge in his book Why Machines Learn: The Elegant Math Behind Modern AI. In this conversation we give a brief overview of some of the basic ideas, including the curse of dimensionality, backpropagation, transformer architectures, and more.
    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2025/11/24/336-anil-ananthaswamy-on-the-mathematics-of-neural-nets-and-ai/
    Support Mindscape on Patreon.
    Anil Ananthaswamy received a master's degree in electrical engineering from the University of Washington, Seattle. He is currently a freelance science writer and feature editor for PNAS Front Matter. He was formerly the deputy news editor for New Scientist, a Knight Science Journalism Fellow at MIT, and journalist-in-residence at the Simons Institute for the Theory of Computing, University of California, Berkeley. He organizes an annual science journalism workshop at the National Centre for Biological Sciences in Bengaluru, India.
    Web site
    Amazon author page
    Wikipedia
    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    The Way I Heard It with Mike Rowe
    459: Steven Grayhm—Sheepdog

    The Way I Heard It with Mike Rowe

    Play Episode Listen Later Nov 18, 2025 93:53


    Mike meets actor, writer, and director Steven Grayhm, whose award-winning film Sheepdog is about to hit theaters. Steven breaks down how a three-hour ride with a tow truck driver led him on a 14-year odyssey to get to the truth about veteran post-traumatic stress. It's a conversation about grit, service, sacrifice, and the complicated realities faced by the men and women who stand their post long after the uniform comes off. Steven's passion for telling their stories with honesty and respect shines through every frame of Sheepdog, and every minute of this conversation.
    Big thanks to our terrific sponsors
    AuraFrames.com/Mike Use code Mike to get $45 off their best-selling Carver Mat frame.
    ZipRecruiter.com/Rowe to post a job for FREE.
    American-Giant.com/MIKE Use code MIKE to get 20% off your order.
    NetSuite.com/Mike Download the CFO's Guide to AI and Machine Learning