Podcast appearances and mentions of Geoffrey Hinton

British-Canadian computer scientist and psychologist

  • 431 podcasts
  • 564 episodes
  • 40m average duration
  • 5 new episodes weekly
  • Latest episode: Nov 20, 2025
Geoffrey Hinton

POPULARITY (2017–2024)


Best podcasts about Geoffrey Hinton

Latest podcast episodes about Geoffrey Hinton

Health & Veritas
The Cost Curve, Flu, and Other News

Health & Veritas

Play Episode Listen Later Nov 20, 2025 37:45


Howie and Harlan discuss the outlook for U.S. healthcare spending over the next five years, the state of seasonal and avian flu, and an expensive AI-based cardiac test.

Show notes:
  • Life expectancy and expenditures: "How does U.S. life expectancy compare to other countries?"
  • ACOs and cost savings: "After Fifteen Years, is Value-Based Care Succeeding?"; Health & Veritas Episode 115: Farzad Mostashari: Aligning Incentives to Fix Primary Care
  • World Prematurity Day: WHO: World Prematurity Day 2025; WHO: World Prematurity Day Key Messages; WHO: Preterm birth
  • AI concerns: "'It keeps me awake at night': machine-learning pioneer on AI's threat to humanity"; "Why neural net pioneer Geoffrey Hinton is sounding the alarm on AI"; "AI pioneer: 'The dangers of abuse are very real'"; "'Malicious use is already happening': machine-learning pioneer on making AI safer"; "Fathers of the Deep Learning Revolution Receive ACM A.M. Turing Award"; "Deep learning"
  • Bird flu: "First U.S. case of human bird flu in 9 months confirmed in Washington state"; Cleveland Clinic: Bird Flu (Avian Influenza); "Flu in numbers: NHS faces one of worst winters ever, officials warn, amid concern over mutated strain"; "New flu virus mutation could see 'worst season in a decade'"; "Australia posts record-breaking flu numbers as vaccination rates stall"; FDA: Influenza Vaccine Composition for the 2025-2026 U.S. Influenza Season
  • Cardiology and AI: "Coronary CT angiography evaluation with artificial intelligence for individualized medical treatment of atherosclerosis: a Consensus Statement from the QCI Study Group"; "Medicare will pay more than $1,000 for AI to analyze a heart scan. Is that too much?"
  • Free speech and drug promotion: "High-Engagement Social Media Posts Related to Prescription Drug Promotion for 3 Major Drug Classes"; Health & Veritas Episode 195: Jerry Avorn: Countering the Drug Marketing Machine
  • Medicare premiums: "Medicare premiums to jump 10% heading into 2026"; "Social Security Announces 2.8 Percent Benefit Increase for 2026"; Centers for Medicare and Medicaid: 2026 Medicare Parts A & B Premiums and Deductibles

In the Yale School of Management's MBA for Executives program, you'll get a full MBA education in 22 months while applying new skills to your organization in real time. Yale's Executive Master of Public Health offers a rigorous public health education for working professionals, with the flexibility of evening online classes alongside three on-campus trainings. Email Howie and Harlan comments or questions.

Learning Tech Talks
The AI Dependency Paradox: Why the Future Demands We Reinvest in Humans

Learning Tech Talks

Play Episode Listen Later Nov 17, 2025 35:00


Everywhere you look, AI is promising to make life easier by taking more off our plate. But what happens when "taking work away from people" becomes the only way the AI industry can survive? That's the warning Geoffrey Hinton, the "Godfather of AI," recently raised when he made a bold claim that AI must replace all human labor for the companies that build it to sustain themselves financially. And while he's not entirely wrong (OpenAI's recent $13B quarterly loss seems to validate it), he's also not right. This week on Future-Focused, I'm unpacking what Hinton's statement reveals about the broken systems we've created and why his claim feels so inevitable. In reality, AI and capitalism are feeding on the same limited resource: people. And, unless we rethink how we grow, both will absolutely collapse under their own weight. However, I'll break down why Hinton's "inevitability" isn't inevitable at all and what leaders can do to change course before it's too late. I'll share three counterintuitive shifts every leader and professional needs to make right now if we want to build a sustainable, human-centered future:
  • Be Surgical in Your Demands. Why throwing AI at everything isn't innovation; it's gambling. How to evaluate whether AI should do something, not just whether it can.
  • Establish Ceilings. Why growth without limits is extraction, not progress. How redefining "enough" helps organizations evolve instead of collapse.
  • Invest in People. Why the only way to grow profits and AI long term is to reinvest in humans, the system's true source of innovation and stability.
I'll also share practical ways leaders can apply each shift, from auditing AI initiatives to reallocating budgets, launching internal incubators, and building real support systems that help people (and therefore, businesses) thrive. If you're tired of hearing "AI will take everything" or "AI will save everything," this episode offers the grounded alternative where people, technology, and profits can all grow together.
If this conversation helps you think more clearly about the future we're building, make sure to like, share, and subscribe. You can also support the show by buying me a coffee. And if your organization is wrestling with how to lead responsibly in the AI era, balancing performance, technology, and people, that's the work I do every day through my consulting and coaching. Learn more at https://christopherlind.co.
Chapters:
00:00 – Hinton's Claim: "AI Must Replace Humans"
02:30 – The Dependency Paradox Explained
08:10 – Shift 1: Be Surgical in Your Demands
15:30 – Shift 2: Establish Ceilings
23:09 – Shift 3: Invest in People
31:35 – Closing Reflection: The Future Still Needs People
#AI #Leadership #FutureFocused #GeoffreyHinton #FutureOfWork #AIEthics #DigitalTransformation #AIEffectiveness #ChristopherLind

Intelligenza Artificiale Spiegata Semplice
Intelligenza Artificiale: Bolla o non Bolla?

Intelligenza Artificiale Spiegata Semplice

Play Episode Listen Later Nov 17, 2025 16:46


In this episode we analyze one of the most heated debates of the moment: is AI really in a bubble or not? Through the opinions of some of the sector's leading figures (Yoshua Bengio, Geoffrey Hinton, Yann LeCun, Fei-Fei Li, Jensen Huang and Bill Dally), we explore contrasting visions, risks, opportunities and the future of the artificial intelligence ecosystem. An essential episode for anyone who wants to understand what is really happening behind the scenes of the AI revolution. Book HUMAN RELOADED: https://amzn.to/4evkVWv Send us your questions and curiosities about Artificial Intelligence at info@iaspiegatasemplice.it; Pasquale and Giacinto will answer them in a special episode one Saturday a month. Pasquale Viscanti and Giacinto Fiore will guide you through what is happening thanks to, or because of, Artificial Intelligence, explained simply. You can also subscribe to the newsletter at: www.iaspiegatasemplice.it

No es un día cualquiera
No es un día cualquiera - Pos-tecnocracia con Marta Peirano

No es un día cualquiera

Play Episode Listen Later Nov 16, 2025 9:54


The race toward general AI keeps accelerating, but not even its own creators can agree. This week, Geoffrey Hinton, winner of the Nobel Prize in Physics for his advances in neural networks, debated with Mustafa Suleyman, head of AI at Microsoft: Hinton believes machines will be able to think, while Suleyman flatly denies it. Marta Peirano tells us about it. Listen to the audio.

On with Kara Swisher
“Godfather of AI” Geoffrey Hinton Rings the Warning Bells

On with Kara Swisher

Play Episode Listen Later Nov 13, 2025 58:47


Nobel laureate Geoffrey Hinton, known as one of the "godfathers of AI" for his pioneering work in deep learning and neural networks, joins Kara to discuss the technology he helped create — and how to mitigate the existential risks it poses. Hinton explains both the short- and long-term dangers he sees in the rapid rise of artificial intelligence, from its potential to undermine democracy to the existential threat of machines surpassing human intelligence. He offers a thoughtful, complex perspective on how to craft national and international policies to keep AI in check and weighs in on whether the AI bubble is about to burst. Plus: why your mom might be the best model for creating a safe AI. Questions? Comments? Email us at on@voxmedia.com or find us on YouTube, Instagram, TikTok, Threads, and Bluesky @onwithkaraswisher. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Nuacht Mhall
8 Samhain 2025 (Dún na nGall)

Nuacht Mhall

Play Episode Listen Later Nov 8, 2025 6:04


Nuacht Mhall. The week's main stories, read slowly. *Today is the eighth of November. I'm Alanna Ní Ghallachóir. An Irish citizen who was stabbed on the train to London said it was "just his nature" to get in the attacker's way to protect other passengers. Ten people were wounded in the violent incident on the train between Doncaster, a town in north-east England, and King's Cross Station in London, last Saturday night. Police were alerted to the emergency on board the train at 7.40pm that night. Stephen Crean, whose mother is from Dublin and whose father is from Roscommon, was one of the victims, along with Samir Zitouni, who ran toward the attacker and confronted the danger. A survivor of the attack said she was truly grateful to Crean for his bravery. The renowned hurler DJ Carey was before the court for defrauding people by pretending he had cancer and needed money for medical treatment. The celebrated former hurler, who won five All-Ireland championships playing for County Kilkenny, was sentenced to five and a half years in prison. He took almost €400,000 from 22 people, and only €44,000 has been repaid so far. Judge Nolan said it was "extremely difficult to understand what caused it" and that "Carey exploited people's good nature." The court heard it was unlikely his victims would get their money back, but that he had been told to offer a "heartfelt, honest apology" to every victim. The 'godmother' of artificial intelligence said she is "proud to be different," as the only woman among the seven AI pioneers awarded an engineering prize on Wednesday. The King of the United Kingdom presented the Queen Elizabeth Prize for Engineering to Professor Li and six others during a ceremony at St James's Palace.
The others who received the honour alongside her are Professor Yoshua Bengio, Dr Bill Dally, Dr Geoffrey Hinton, Professor John Hopfield, Nvidia founder Jensen Huang, and Meta's chief AI scientist Dr Yann LeCun. They are being recognised for their roles in the development of modern machine learning, the field underpinning the rapid progress in artificial intelligence. Professor Li said, "On behalf of the young women I work with and the generations of girls to come, I am glad to accept this title." *Produced by Conradh na Gaeilge in London. The script is available in your podcast app.* GLOSSARY: saoránach - citizen; éigeandáil - emergency; calaois - fraud; ailse - cancer; máthair bhaistí - godmother; intleacht shaorga - artificial intelligence

BBC Inside Science
Is Dark Energy Getting Weaker?

BBC Inside Science

Play Episode Listen Later Nov 6, 2025 26:29


Astronomers have new evidence which could change what we understand about the expansion of the universe. Carlos Frenk, Ogden Professor of Fundamental Physics at Durham University, gives us his take on whether the dark energy pushing our universe apart is getting weaker. With the Turing Award, the Nobel Prize, and now this week the Queen Elizabeth Prize for Engineering under his belt, Geoffrey Hinton is known for his pioneering work on AI and, since leaving his job at Google in 2023, for his warnings that AI could bring about the end of humanity. Tom Whipple speaks to Geoffrey about the science of superintelligence. And senior physics reporter at Nature Lizzie Gibney brings us her take on the new science that matters this week. To discover more fascinating science content, head to bbc.co.uk, search for BBC Inside Science and follow the links to The Open University. Presenter: Tom Whipple. Producer: Clare Salisbury. Content Producer: Ella Hubber. Assistant Producers: Jonathan Blackwell & Tim Dodd. Editor: Martin Smith. Production Co-ordinator: Jana Bennett-Holesworth.

Metaverse Marketing
OpenAI's $38B Deal, Hinton's AI Shift, Data Wars, Meta's Missteps, and Spatial Computing Futures with Cathy Hackl, Lee Kebler, Melissa Tony Stires, and Janna Salokangas

Metaverse Marketing

Play Episode Listen Later Nov 5, 2025 62:54


In this episode of TechMagic, hosts Cathy Hackl and Lee Kebler explore OpenAI's Sora and how AI-driven video generation reshapes creativity, privacy, and consent. From OpenAI's massive $38B AWS deal to the ethical storm over data scraping and copyright, they unpack the week's biggest tech power plays. The duo explores Geoffrey Hinton's surprising optimism on AI's future, Meta's data mishap, and how companies are redefining roles through spatial computing. Plus, Lee shares insights from NVIDIA's GTC conference and what it reveals about the true cost and promise of AI. The episode also features Cathy's interview with Melissa Tony Stires, Founding Partner and Chief Global Growth Officer, and Janna Salokangas, Co-Founder and CEO of Mia AI. Together, they discuss strategy-first adoption of AI, the importance of AI literacy, and the mindset shifts leaders need to drive human-centred transformation in the era of intelligent tools. Come for the tech, and stay for the magic!

Melissa Tony Stires Bio: Melissa Tony Stires is an international protocol expert and leadership innovator specialising in cross-cultural communications and women's empowerment in AI. As Founder and Head of Global Growth and Expansion at Mia AI, she bridges tradition and technology through global collaborations and billion-dollar initiatives. A certified Advanced International Protocol Officer, best-selling author, and sought-after speaker, Melissa's work has shaped dialogues from Davos to Cannes Lions, advancing inclusivity, innovation, and global understanding in the tech landscape. Melissa Tony Stires LinkedIn

Janna Salokangas Bio: Janna Salokangas is the Co-founder and CEO of Mia AI, where she's redefining how people and organisations unlock their full potential through AI-driven learning and innovation. Under her leadership, Mia has trained over 7,000 professionals across 65+ countries, partnering with leading institutions to deliver transformative AI education and solutions. A co-founder of Finnish Flow, Janna also champions Finland's business community at Davos, advocating for human-centric AI and the future of equitable, empowered innovation. Janna Salokangas LinkedIn

Key Discussion Topics:
00:00 Intro: Welcome to Tech Magic
00:28 NVIDIA GTC & Nokia's $1B AI Investment
00:54 Geoffrey Hinton Shifts AI Stance on Job Displacement
08:17 Sharp HealthCare's First Chief Spatial Computing Officer
09:05 OpenAI's $38 Billion Amazon AWS Deal Explained
15:17 Perplexity vs Reddit: Data Scraping Lawsuit Breakdown
21:28 AI Augmentation Over Replacement: Secret Cinema's Approach
26:19 Magic Leap's Google Partnership & New AI Glasses
32:17 TEDx Atlanta: Alvin Wang Graylin & Industry Leaders
35:45 AI Education Interview with Janna & Melissa from Mia AI
37:18 Mia AI: Human-Centered AI Education Going Global
42:35 Strategy-First AI Adoption: Define Problems Before Tools
42:46 Real-World Success Stories: From Universities to Single Mothers
47:28 What Differentiates Mia AI in a Crowded Market
Hosted on Acast. See acast.com/privacy for more information.

Hashtag Trending
AI's Economic Impact, Google Cloud's Rise, and Digital Sovereignty Shifts

Hashtag Trending

Play Episode Listen Later Nov 3, 2025 8:46


In this episode of Hashtag Trending, Jim Love discusses Geoffrey Hinton's views on AI replacing human labor for big tech profits, Google Cloud's internal competition with YouTube, and the International Criminal Court's switch from Microsoft Office to an open-source alternative. The episode also covers YouTube's controversial removal of Windows 11 installation videos on unsupported systems.
00:00 Introduction and Sponsor Message
00:29 AI's Impact on Jobs and Economy
02:35 Google Cloud's Rise and Internal Competition
04:36 ICC's Shift from Microsoft to Open Source
06:10 YouTube's Controversial Content Removals
07:40 Conclusion and Sponsor Message

The Good Fight
Geoffrey Hinton on Artificial Intelligence

The Good Fight

Play Episode Listen Later Oct 30, 2025 65:47


Yascha Mounk and Geoffrey Hinton discuss how AI works—and why it's a risk. Geoffrey Hinton is a cognitive psychologist and computer scientist known as the "godfather of AI." He was awarded the 2024 Nobel Prize in Physics, along with John Hopfield. In this week's conversation, Yascha Mounk and Geoffrey Hinton discuss what neuroscience teaches us about AI, how humans and machines learn, and the existential risks of AI. If you have not yet signed up for our podcast, please do so now by following this link on your phone. Email: leonora.barclay@persuasion.community Podcast production by Jack Shields and Leonora Barclay. Connect with us! Spotify | Apple | Google X: @Yascha_Mounk & @JoinPersuasion YouTube: Yascha Mounk, Persuasion LinkedIn: Persuasion Community. Learn more about your ad choices. Visit megaphone.fm/adchoices

El Podcast de JF Calero
REACCIONANDO A GEOFFREY HINTON: HAY UN 20% DE POSIBILIDADES DE QUE NOS DESTRUYA LA IA

El Podcast de JF Calero

Play Episode Listen Later Oct 30, 2025 20:46


The godfather of artificial intelligence, Geoffrey Hinton, issues his most serious warning yet: "We have no idea what's coming." In this video I explain what the scientist who helped create modern AI really fears, and why even he believes it could threaten human survival.

The Daily Scoop Podcast
An open letter against superintelligent AI

The Daily Scoop Podcast

Play Episode Listen Later Oct 23, 2025 5:13


An open letter released Wednesday has called for a ban on the development of artificial intelligence systems considered to be “superintelligent” until there is broad scientific consensus that such technologies can be created both safely and in a manner the public supports. The statement, issued by the nonprofit Future of Life Institute, has been signed by more than 700 individuals, including Nobel laureates, technology industry veterans, policymakers, artists, and public figures such as Prince Harry and Meghan Markle, the Duke and Duchess of Sussex. The letter reflects deep and accelerating concerns over projects undertaken by technology giants like Google, OpenAI, and Meta Platforms that are seeking to build artificial intelligence capable of outperforming humans on virtually every cognitive task. According to the letter, such ambitions have raised fears about unemployment due to automation, loss of human control and dignity, national security risks, and the possibility of far-reaching social or existential harms. “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in,” the statement reads. Signatories include AI pioneers Yoshua Bengio and Geoffrey Hinton, both recipients of the Turing Award, Apple co-founder Steve Wozniak, businessman Richard Branson, and actor Joseph Gordon-Levitt. Pentagon personnel could soon be told to participate in new training programs designed to prepare them for anticipated advancements in biotechnology and its convergence with other critical and emerging technologies, like quantum computing and AI. House lawmakers recently passed an amendment en bloc in their version of the fiscal 2026 National Defense Authorization Act that would mandate the secretary of defense to set up such trainings, no later than one year after the legislation's enactment. 
Biotechnology refers to a multidisciplinary field that involves the application of biological systems or the use of living organisms, like yeast and bacteria, to produce products or solve complex problems. These technologies are expected to revolutionize defense, energy, manufacturing and other sectors globally in the not-so-distant future — particularly as they are increasingly paired with and powered by AI. And while the U.S. historically has demonstrated many underlying strengths in the field, recent research suggests the government may be falling behind China, where biotechnology research efforts and investments have surged since the early 2000s. The Daily Scoop Podcast is available every Monday-Friday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast  on Apple Podcasts, Soundcloud, Spotify and YouTube.

The Decibel
Machines Like Us: Geoffrey Hinton on AI's future

The Decibel

Play Episode Listen Later Oct 13, 2025 69:43


Geoffrey Hinton, "the godfather of AI", pioneered much of the neural network research that would become the backbone of modern AI. But it's in the last several years that he has reached mainstream renown. Since 2023, Hinton has been on a campaign to convince governments, corporations and citizens that artificial intelligence, his life's work, could be what spells the end of human civilization. Machines Like Us host Taylor Owen interviews Hinton on the advancements made in AI in recent years and asks: if we keep going down this path, what will become of us? Subscribe to The Globe and Mail's 'Machines Like Us' podcast on Apple Podcasts or Spotify. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

London Futurists
Safe superintelligence via a community of AIs and humans, with Craig Kaplan

London Futurists

Play Episode Listen Later Oct 10, 2025 41:15


Craig Kaplan has been thinking about superintelligence longer than most. He bought the URL superintelligence.com back in 2006, and many years before that, in the late 1980s, he co-authored a series of papers with one of the founding fathers of AI, Herbert Simon. Craig started his career as a scientist with IBM, and later founded and ran a venture-backed company called PredictWallStreet that brought the wisdom of the crowd to Wall Street and improved the performance of leading hedge funds. He sold that company in 2020, and now spends his time working out how to make the first superintelligence safe. As he puts it, he wants to reduce P(Doom) and increase P(Zoom).
Selected follow-ups:
iQ Company
Herbert A. Simon - Wikipedia
Amara's Law and Its Place in the Future of Tech - Pohan Lin
Predict Wall Street
The Society of Mind - book by Marvin Minsky
AI 'godfather' Geoffrey Hinton warns of dangers as he quits Google - BBC News
Statement on AI Risk - Center for AI Safety
I've Spent My Life Measuring Risk. AI Rings Every One of My Alarm Bells - Paul Tudor Jones
Secrets of Software Quality: 40 Innovations from IBM - book by Craig Kaplan
London Futurists Podcast episode featuring David Brin
Reason in Human Affairs - book by Herbert Simon
US and China will intervene to halt 'suicide race' of AGI - Max Tegmark
If Anyone Builds It, Everyone Dies - book by Eliezer Yudkowsky and Nate Soares
AGI-25 - conference in Reykjavik
The First Global Brain Workshop - Brussels 2001
Center for Integrated Cognition
Paul S. Rosenbloom
Tatiana Shavrina, Meta
Henry Minsky launches AI startup inspired by father's MIT research
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

The Problem With Jon Stewart
AI: What Could Go Wrong? with Geoffrey Hinton

The Problem With Jon Stewart

Play Episode Listen Later Oct 9, 2025 102:51


As artificial intelligence advances at unprecedented speed, Jon is joined by Geoffrey Hinton, Professor Emeritus at the University of Toronto and the "Godfather of AI," to understand what we've actually created. Together, they explore how neural networks and AI systems function, assess the current capabilities of the technology, and examine Hinton's concerns about where AI is headed. This podcast episode is brought to you by: MINT MOBILE - Make the switch at https://mintmobile.com/TWS GROUND NEWS - Go to https://groundnews.com/stewart to see how any news story is being framed by news outlets around the world and across the political spectrum. Use this link to get 40% off unlimited access with the Vantage Subscription. INDEED - Speed up your hiring with Indeed. Go to https://indeed.com/weekly to get a $75 sponsored job credit. Follow The Weekly Show with Jon Stewart on social media for more:  > YouTube: https://www.youtube.com/@weeklyshowpodcast > Instagram: https://www.instagram.com/weeklyshowpodcast> TikTok: https://tiktok.com/@weeklyshowpodcast  > X: https://x.com/weeklyshowpod   > BlueSky: https://bsky.app/profile/theweeklyshowpodcast.com Host/Executive Producer – Jon Stewart Executive Producer – James Dixon Executive Producer – Chris McShane Executive Producer – Caity Gray Lead Producer – Lauren Walker Producer – Brittany Mehmedovic  Producer – Gillian Spear Video Editor & Engineer – Rob Vitolo Audio Editor & Engineer – Nicole Boyce Music by Hansdle Hsu Learn more about your ad choices. Visit podcastchoices.com/adchoices

Big Tech
Geoffrey Hinton vs. The End of the World

Big Tech

Play Episode Listen Later Oct 7, 2025 69:11


The story of how Geoffrey Hinton became "the godfather of AI" has reached mythic status in the tech world. While he was at the University of Toronto, Hinton pioneered the neural network research that would become the backbone of modern AI. (One of his students, Ilya Sutskever, went on to be one of OpenAI's most influential scientific minds.) In 2013, Hinton left the academy and went to work for Google, eventually winning both a Turing Award and a Nobel Prize. I think it's fair to say that artificial intelligence as we know it may not exist without Geoffrey Hinton. But Hinton may be even more famous for what he did next. In 2023, he left Google and began a campaign to convince governments, corporations and citizens that his life's work, this thing he helped build, might lead to our collective extinction. And that moment may be closer than we think, because Hinton believes AI may already be conscious. But even though his warnings are getting more dire by the day, the AI industry is only getting bigger, and most governments, including Canada's, seem reluctant to get in the way. So I wanted to ask Hinton: if we keep going down this path, what will become of us?
Mentioned:
If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI, by Eliezer Yudkowsky and Nate Soares
Agentic Misalignment: How LLMs Could Be Insider Threats, by Anthropic
Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail. Support for Machines Like Us is provided by CIFAR and the Max Bell School of Public Policy at McGill University. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Nerdland maandoverzicht wetenschap en technologie
Nerdland Maandoverzicht: Oktober 2025

Nerdland maandoverzicht wetenschap en technologie

Play Episode Listen Later Oct 3, 2025 152:26


A new #Nerdland monthly overview! This month: dinosaur sounds! Lieven in the USA! Ig Nobel Prizes! Fake termites! Muscle cheating! A website on a vape! And much more... Shownotes: https://podcast.nerdland.be/nerdland-maandoverzicht-oktober-2025/ Presented by Lieven Scheire with Peter Berx, Jeroen Baert, Els Aerts, Bart van Peer and Kurt Beheydt. Recording, editing and mastering by Jens Paeyeneers and Els Aerts.
(00:00:00) Intro
(00:01:42) Lieven, Hetty and Els visited Ötzi
(00:03:28) Contents of a 30,000-year-old "toolkit" backpack examined
(00:04:47) Has life been found on Mars?
(00:09:02) Dwarf planet Ceres was once habitable
(00:10:50) Man drags a robot around on a chain (Any2track demo)
(00:15:02) New Unitree robot dog A2 Stellar Explorer has remarkably good balance and can carry a person
(00:17:09) "What is a species?" One ant species gives birth to another...
(00:26:12) Imitating dinosaur sounds with 3D prints
(00:35:19) **Inca death whistle**
(00:36:52) How is 3I/ATLAS doing?
(00:45:13) New AI hack: hidden prompts in photos
(00:52:59) Einstein Telescope: Belgium steps up its ambitions
(00:57:44) DeepMind develops AI to help LIGO with gravitational-wave detection
(01:03:13) Also a podcast about ET: "ET voor de vrienden", with Bert Verknocke
(01:03:50) SILICON VALLEY NEWS
(01:04:04) Lieven was in Silicon Valley
(01:16:39) Family reports that a Waymo taxi keeps loitering aimlessly near their house
(01:18:43) Meta launches smart glasses and the demo gremlin derails everything
(01:27:51) Mark Zuckerberg sues Mark Zuckerberg for being thrown off Facebook (which is very Meta)
(01:30:39) First tests with Hardt Hyperloop in Rotterdam, 700 km/h
(01:34:11) Ig Nobel Prizes
(01:42:00) Extreme mimicry: beetle carries a fake termite on its back
(01:45:54) Gamer builds an aim assist that acts directly on his muscles
(01:51:38) "Bogdan The Geek" hosts a website on a disposable vape
(01:54:16) How do you resuscitate someone in space?
(02:00:29) Maple moths use a disco gene to regulate their day/night rhythm
(02:05:45) New Stanford study once again shows health risks of the clock change
(02:08:59) AI news
(02:09:18) Geoffrey Hinton's girlfriend breaks up with him via ChatGPT
(02:10:01) ASML invests 1.3 billion euros in Mistral
(02:12:15) Absurd stunt in Shanghai: robot enrolled as a PhD student
(02:13:42) Absurd stunt in Albania: AI appointed as a minister
(02:16:29) RECALLS
(02:17:15) Fun scientific pub quiz from New Scientist
(02:17:57) Emilie De Clerck became allergic to meat after a tick bite in Belgium! She can still eat Old World monkeys, such as baboons or humans
(02:19:26) It's not Peter Treurlings but Peter Teurlings of Tech United
(02:19:47) Technopolis opens on two evenings for adults only: October 17 and March 6. Night@Technopolis
(02:23:41) SELF-PROMOTION
(02:29:25) SPONSOR TUC RAIL

Long Term Investing - With Baskin Wealth Management

This episode was recorded before the sad news that Mark Leonard had to step down as CEO of Constellation Software for health reasons. We wish him a full and speedy recovery. This week, Barry and Ernest react to Constellation Software's conference call about the risks and opportunities of AI to their business. 0:00 - Intro 4:34 - Constellation's AI conference call 13:53 - Geoffrey Hinton's radiologist prediction 14:58 - AI concerns for Constellation 24:48 - AI opportunities for Constellation 27:02 - Constellation's decentralization 32:00 - High hurdle rates for M&A

The Dishcast with Andrew Sullivan
John Ellis On The News And GOP History

The Dishcast with Andrew Sullivan

Play Episode Listen Later Sep 19, 2025 52:27


This is a free preview of a paid episode. To hear more, visit andrewsullivan.substack.com. John is a journalist, media consultant, old friend, and George W. Bush's cousin. He's worked for NBC News as a political analyst and the Boston Globe as a columnist. In 2016, he launched a morning brief called “News Items” for News Corp, and later it became the Wall Street Journal CEO Council's morning newsletter. News Items jumped to Substack in 2019 (and Dishheads can subscribe now for 33% off). John also co-hosts two podcasts — one with Joe Klein (“Night Owls”) and the other with Richard Haass (“Alternate Shots”). For two clips of our convo — on the nail-biting Bush-Gore race that John was involved in, and Trump's mental decline — head to our YouTube page. Other topics: born and raised in Concord; his political awakening at 15 watching the whole '68 Dem convention with a fever in bed; his fascination with Nixon; the Southern Strategy; Garry Wills' book Nixon Agonistes; Kevin Phillips and populism; Nixon parallels with Trump — except shame; Roger Ailes starting Fox News; Matt Drudge; John's uncle HW Bush; HW as a person; the contrasts with his son Dubya; the trauma of 9/11; Iraq as a war of choice — the wrong one; Rumsfeld; Jeb Bush in 2016; the AI race; Geoffrey Hinton (“the godfather of AI”); John's optimism about China; tension with Taiwan; Israel's settlements; Bibi's humiliation of Obama; Huckabee as ambassador; the tariff case going to SCOTUS; the Senate caving to Trump; McConnell failing to bar Trump; the genius of his demagoguery; the Kirk assassination; Brexit; immigration under Boris; Reform's newfound dominance; the huge protest in London last week; Kirk's popularity in Europe; the AfD; Trump's war on speech; a Trump-Mamdani showdown; Epstein and Peter Mandelson; and grasping for reasons to be cheerful. Browse the Dishcast archive for an episode you might enjoy.
Coming up: Wesley Yang on the trans question, Michael Wolff on Epstein, Karen Hao on artificial intelligence, Katie Herzog on drinking your way sober, Michel Paradis on Ike, Charles Murray on finding religion, David Ignatius on the Trump effect globally, and Arthur Brooks on the science of happiness. As always, please send any guest recs, dissents, and other comments to dish@andrewsullivan.com.

This Day in AI Podcast
Long Horizon Agents, State of MCPs, Meta's AI Glasses & Geoffrey Hinton is a LOVE RAT - EP99.17

This Day in AI Podcast

Play Episode Listen Later Sep 19, 2025 69:00


Join Simtheory: https://simtheory.ai ---- CHAPTERS: 00:00 - Simtheory promo 01:09 - Does Anthropic Intentionally Degrade Their Models? 03:34 - Long Horizon Agents & How We Will Build Them 36:18 - The State of MCPs & Internal Custom Enterprise MCPs 51:04 - AI Devices: Meta's Ray-Ban Display & Meta Oakley Vanguards 1:01:24 - Geoffrey Hinton is a LOVE RAT 1:05:49 - LOVE RAT SONG ---- Thanks for listening, we appreciate all of your support, likes, comments and subs xoxox

Slate Star Codex Podcast
Book Review: If Anyone Builds It, Everyone Dies

Slate Star Codex Podcast

Play Episode Listen Later Sep 12, 2025 42:20


I. Eliezer Yudkowsky's Machine Intelligence Research Institute is the original AI safety org. But the original isn't always the best - how is Mesopotamia doing these days? As money, brainpower, and prestige pour into the field, MIRI remains what it always was - a group of loosely-organized weird people, one of whom cannot be convinced to stop wearing a sparkly top hat in public. So when I was doing AI grantmaking last year, I asked them - why should I fund you instead of the guys with the army of bright-eyed Harvard grads, or the guys who just got Geoffrey Hinton as their celebrity spokesperson? What do you have that they don't? MIRI answered: moral clarity. Most people in AI safety (including me) are uncertain and confused and looking for least-bad incremental solutions. We think AI will probably be an exciting and transformative technology, but there's some chance, 5 or 15 or 30 percent, that it might turn against humanity in a catastrophic way. Or, if it doesn't, that there will be something less catastrophic but still bad - maybe humanity gradually fading into the background, the same way kings and nobles faded into the background during the modern era. This is scary, but AI is coming whether we like it or not, and probably there are also potential risks from delaying too hard. We're not sure exactly what to do, but for now we want to build a firm foundation for reacting to any future threat. That means keeping AI companies honest and transparent, helping responsible companies like Anthropic stay in the race, and investing in understanding AI goal structures and the ways that AIs interpret our commands. Then at some point in the future, we'll be close enough to the actually-scary AI that we can understand the threat model more clearly, get more popular buy-in, and decide what to do next. MIRI thinks this is pathetic - like trying to protect against an asteroid impact by wearing a hard hat. 
They're kind of cagey about their own probability of AI wiping out humanity, but it seems to be somewhere around 95 - 99%. They think plausibly-achievable gains in company responsibility, regulation quality, and AI scholarship are orders of magnitude too weak to seriously address the problem, and they don't expect enough of a “warning shot” that they feel comfortable kicking the can down the road until everything becomes clear and action is easy. They suggest banning all AI capabilities research immediately, to be restarted only in some distant future when the situation looks more promising. Both sides honestly believe their position and don't want to modulate their message for PR reasons. But both sides, coincidentally, think that their message is better PR. The incrementalists think a moderate, cautious approach keeps bridges open with academia, industry, government, and other actors that prefer normal clean-shaven interlocutors who don't emit spittle whenever they talk. MIRI thinks that the public is sick of focus-group-tested mealy-mouthed bullshit, but might be ready to rise up against AI if someone presented the case in a clear and unambivalent way. Now Yudkowsky and his co-author, MIRI president Nate Soares, have reached new heights of unambivalence with their new book, If Anyone Builds It, Everyone Dies (release date September 16, currently available for preorder). https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone

Brave New World -- hosted by Vasant Dhar
Ep 99: Vasant Dhar on Thinking With Machines, Hosted by Joel Roberts

Brave New World -- hosted by Vasant Dhar

Play Episode Listen Later Sep 12, 2025 86:41


Joel Roberts, former host of a prime-time talk show on KABC Radio, Los Angeles, takes over hosting duties to talk to Vasant Dhar about his upcoming book and Roberts' own scepticism regarding all things AI. Useful Resources: 1. Joel Roberts. 2. Thinking With Machines, The Brave New World With AI - Vasant Dhar. 3. AI and The Paper Clip Problem. 4. Dr. Jules White. 5. Geoffrey Hinton. 6. Yuval Noah Harari. 7. Augmented Intelligence - The Future of Generative AI and Computing. 8. How The Mind Works - Steven Pinker. 9. Brave New World Episode 94: Anil Seth On The Science of Consciousness. 10. Backpropagation. 11. Brave New World Episode 98: There's no I in AI, Ben Shneiderman on The Evolution and State of Artificial Intelligence. 12. Brave New World Episode 97: Alex Wiltschko on Digitizing Scent. 13. Brave New World Episode 81: Alex Wiltschko on The Sense Of Smell. 14. Joy Milne. 15. Brave New World Episode 89: Missy Cummings on Making AI Safe. 16. TEDx Talk - When Should We Trust Machines: Vasant Dhar. 17. The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma: Mustafa Suleyman. 18. Luis Elizondo. Check out Vasant Dhar's newsletter on Substack. The subscription is free!

This Week in Google (MP3)
IM 836: I See OJ and He Looks Scared - Modern Oracles or Modern BS?

This Week in Google (MP3)

Play Episode Listen Later Sep 11, 2025 163:11 Transcription Available


AI isn't just the ultimate nonsense generator—it's also a powerful tool students and teachers can't afford to ignore. In this episode, professors Carl Bergstrom and Jevin West reveal how their new "BS Machines" curriculum helps the next generation stay sharp and skeptical in a world overflowing with synthetic "facts." Interview with Carl T. Bergstrom and Jevin D. West Warner Bros. Discovery Sues AI Giant Midjourney for Copyright Infringement In Major Legal Battle AI Watchdog At Least 15 Million YouTube Videos Have Been Snatched by AI Companies Most Scraped Websites of 2025 AI surveillance should be banned while there is still time. Alterego I Hate My Friend R-Zero: Self-Evolving Reasoning LLM from Zero Data AI godfather Geoffrey Hinton says a girlfriend once broke up with him using a chatbot Business Insider yanked 40 essays with suspect bylines. Are they related? OpenAI's post on the paper Gina Trapani starts a new blog Schnitzel press NFL Debut on YouTube Draws 17.3 Million Set a two TikTok toilet limit to reduce haemorrhoid risk, doctors advise Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Co-Host: Harper Reed Guests: Carl T. Bergstrom and Jevin D. West Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: Melissa.com/twit spaceship.com/twit

Faces of Digital Health
AI replacing radiologists: Was the prediction right, just the timeline wrong? John Nosta, Shafi Ahmed

Faces of Digital Health

Play Episode Listen Later Sep 11, 2025 24:32


In this thought-provoking conversation, surgeon Shafi Ahmed and digital health futurist John Nosta revisit Geoffrey Hinton's bold 2016 prediction that radiologists would soon be replaced by AI.

Protect Our Kids With Kristi Bush
Preparing Our Kids for an AI Future…we gotta pull our head outta the sand

Protect Our Kids With Kristi Bush

Play Episode Listen Later Sep 3, 2025 22:14


In this episode of "Protect Our Kids," host Kristi Bush delves into the rapidly evolving world of AI and its implications for our children's future. Inspired by insights from AI pioneer Dr. Geoffrey Hinton, Kristi explores the challenges and opportunities that AI presents, emphasizing the importance of empathy, adaptability, and human connection. As AI continues to advance, how can we equip our children to thrive in a world where technology and humanity must coexist? Tune in for a thought-provoking discussion on preparing the next generation for an AI-driven future.Key Takeaways: AI will surpass human intelligence in the near future.Parents need to prepare their kids for an AI-driven world.Empathy is crucial in a future dominated by technology.Human connections will set us apart from AI.Adaptability is a key skill for future generations.Children must learn to pivot in the face of challenges.AI can be perceived as more empathetic than humans.The job landscape will change significantly due to AI.Teaching kids about empathy is essential for their future roles.Good stewardship in AI development is important for society.www.knbcommunications.com

Keen On Democracy
Demystify Science and Humanize Scientists: How to Rebuild Scientific Trust in our Angry MAHA Times

Keen On Democracy

Play Episode Listen Later Sep 3, 2025 41:53


In our angry MAHA times, how can we get people to trust science and scientists again? According to MIT's Alan Lightman, one of America's greatest scientific writers, we need to both demystify science and humanize scientists. Lightman is the co-author, with Martin Rees, of The Shape of Wonder, a timely collection of essays about how scientists think, work, and live. We need to learn from scientists like Albert Einstein, suggests Lightman, himself the author of the 1993 classic Einstein's Dreams. He argues that Einstein's "naive" willingness to challenge millennia of thinking about time exemplifies the wonder that drives great science. Lightman discusses why scientists have become entangled with "elite establishments" in our populist moment, and argues that critical scientific thinking—from balancing checkbooks to diagnosing a child's fever—belongs to everyone, not just scientists. So make America smart again (MASA), by demystifying science and humanizing scientists. 1. "Naive" questioning drives breakthrough science: Einstein revolutionized physics at 26 by refusing to accept millennia of received wisdom about time—showing that great science requires childlike willingness to challenge fundamental assumptions. 2. Scientists are victims of populist backlash: The mistrust of science isn't really about science—it's part of a global populist movement against "elite establishments," fueled by social media, immigration fears, and growing wealth inequality. 3. Wonder requires discipline, not just awe: Unlike a child's wonder, scientific wonder comes with tools—both experimental and theoretical—for actually understanding how things work, making it "disciplined wonder." 4. Scientists shouldn't be authorities beyond science: Even Einstein or Nobel laureates like Geoffrey Hinton have no special authority on ethics, philosophy, or politics—they're just smart people with opinions like everyone else. 5. Critical thinking belongs to everyone: When you balance your checkbook or diagnose a child's fever, you're using scientific thinking. Science isn't an elite activity—it's a method we all already practice in daily life. Keen On America is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC: Cohere Founder on How Cohere Compete with OpenAI and Anthropic $BNs | Why Counties Should Fund Their Own Models & the Need for Model Sovereignty | How Sam Altman Has Done a Disservice to AI with Nick Frosst

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Sep 1, 2025 67:50


Nick Frosst is a Canadian AI researcher and entrepreneur, best known as co-founder of Cohere, the enterprise-focused LLM. Cohere has raised over $900 million, most recently a $500 million round, bringing its valuation to $6.8 billion. Under his leadership, Cohere hit $100M in ARR. Prior to founding Cohere, Nick was a researcher at Google Brain and a protégé of Geoffrey Hinton. AGENDA:  00:00 – Biggest lessons from Geoff Hinton at Google Brain? 02:10 – Did Google completely sleep at the wheel and miss ChatGPT? 05:45 – Is data or compute the real bottleneck in AI's future? 07:20 – Does GPT5 Prove That Scaling Laws are BS? 13:30 – Are AI benchmarks just total BS? 17:00 – Would Cohere spend $5M on a single AI researcher? 19:40 – What is nonsense in AI that everyone is talking about? 25:30 – What is no one talking about in AI that everyone should be talking about? 33:00 – How do Cohere compete with OpenAI and Anthropic's billions? 44:30 – Why does being American actually hurt tech companies today? 45:10 – Should countries fund their own models? Is model sovereignty the future? 52:00 – Why has Sam Altman actually done a disservice to AI?  

Katie Couric
The “Godfather of AI,” Dr. Geoffrey Hinton, on AI's Existential Risk

Katie Couric

Play Episode Listen Later Aug 27, 2025 51:42 Transcription Available


When Dr. Geoffrey Hinton left Google in 2023, it wasn’t because he’d lost faith in AI. It was because he wanted to speak freely about its dangers (and because, at 75, he says programming is “annoying”). The Nobel laureate joins Katie to unpack some of the riskiest aspects of this new technology: why government regulation lags behind innovation, why jobs are at risk, and whether countries can work together to prevent an AI arms race. But Hinton also sees a path forward: if we design AI that genuinely supports and protects humanity, coexistence might be possible. This episode wrestles with the urgent question on everyone’s mind: will AI’s breathtaking potential transform our lives or threaten our very survival? See omnystudio.com/listener for privacy information.

Learning Tech Talks
Meta's AI Training Leak | Godfather of AI Pushes “Mommy AI” | Toxic Work Demands Driving Moms Out

Learning Tech Talks

Play Episode Listen Later Aug 22, 2025 55:20


Happy Friday, everyone! Congrats on making it through another week, and what a week it was. This week I had some big topics, so I ran out of time for the positive use-case, but I'll fit it in next week. Here's a quick rundown of the topics with more detail below. First, Meta had an AI policy doc leak, and boy did it tell a story while sparking outrage and raising deeper questions about what's really being hardwired into the systems we all use. Then I touch on Geoffrey Hinton, the “Godfather of AI,” and his controversial idea that AI should have maternal instincts. Finally, I dig into the growing wave of toxic work expectations, from 80-hour demands to the exodus of young mothers from the workforce. With that, let's get into it.⸻Looking Beyond the Hype of Meta's Leaked AI Policy GuidelinesA Reuters report exposed Meta's internal guidelines on training AI to respond to sensitive prompts, including “sensual” interactions with children and handling of protected class subjects. People were pissed and rightly so. However, I break down why the real problem isn't the prompts themselves, but the logic being approved behind them. This is much bigger than the optics of some questionable guidelines; it's about illegal reasoning being baked into the foundation of the model.⸻The Godfather of AI Wants “Maternal” MachinesGeoffrey Hinton, one of the pioneers of AI, is popping up everywhere with his suggestion that training AI with motherly instincts is the solution to preventing it from wiping out humanity. Candidly, I think his logic is off for way more reasons than the cringe idea of AI acting like our mommies. I unpack why this framing is flawed, what leaders should actually take away from it, and why we need to move away from solutions that focus on further humanizing AI and instead stop treating AI like a human in the first place.⸻Unhealthy Work Demands and the Rising Exodus of Young MomsAn AI startup recently gave its employees a shocking ultimatum: work 80 hours a week or leave. 
What happened to AI eliminating the need for human work? Meanwhile, data shows young mothers are exiting the workforce at troubling rates, completely reversing all the gains we saw during the pandemic. I connect the dots between these headlines, AI's role in the rise of unsustainable work expectations, and the long-term damage this entire mindset creates for businesses and society.⸻If this episode was helpful, would you share it with someone? Leave a rating, drop a comment with your thoughts, and follow for future updates that go beyond the headlines and help you lead with clarity in the AI age. And, if you'd take me out for a coffee to say thanks, you can do that here: https://www.buymeacoffee.com/christopherlind — Show Notes: In this Weekly Update, Christopher Lind unpacks the disturbing revelations from Meta's leaked AI training docs, challenges Geoffrey Hinton's call for “maternal AI,” and breaks down the growing trend of unsustainable work expectations, especially the impact on mothers in the workforce. Timestamps: 00:00 – Introduction and Welcome 01:51 – Overview of Today's Topics 03:19 – Meta's AI Training Docs Leak 27:53 – Geoffrey Hinton and the “Maternal AI” Proposal 39:48 – Toxic Work Demands and the Workforce Exodus 53:35 – Final Thoughts #AIethics #AIrisks #DigitalLeadership #HumanCenteredAI #FutureOfWork

Quantum - The Wee Flea Podcast
Quantum 370 - There will always be an England! Netanyahu and Tommy Robinson

Quantum - The Wee Flea Podcast

Play Episode Listen Later Aug 21, 2025 59:08


This week we continue to look at AI and its impact on our society;  AI friends;  The Stepford Wives;  Edinburgh University Press on Black and white; National Library of Scotland censors Women Won't Wheesht;  Men giving birth in South Australia; Country of the Week - England;  The Magna Carta; The Significance of Flags; Christianity in England; Ceasefires;  Triggernometry and Netanyahu and Tommy Robinson on Mohammed and Jesus;  The Matrix; Geoffrey Hinton and AI Sub Goals;  French Muslims ban Barbie; Lisa Nandy appoints Muslim as only religious advisor to civil society project; Med 1 in 200 billion year event!  It's cold in Australia;  Offshore windfarms decimate fishing and environment;  BP to reopen large North Sea oil field; UEFA's non political political message; Jasper Carrott on insurance claims;  Rev James Haram and Colin Smyth MSP;  A Hidden Life; Feedback; Podcast change news;  with music from Queen;  Vera Lynn;  Aqua;  Frank Sinatra and Dorothy Kirsteen; The Waterboys;  Melbourne Opera; and Indian Christians. 

Hashtag Trending
Meta's AI Overhaul, OpenAI's Open Source Shift, and Hugging Face Simplifies AI

Hashtag Trending

Play Episode Listen Later Aug 21, 2025 11:53 Transcription Available


In this episode of Hashtag Trending, hosted by Jim Love, key topics include Meta's significant changes to its AI division, leading to internal pay conflicts and potential slowdowns despite big investments. OpenAI's CEO, Sam Altman, explains their new open source models as a strategic move against Chinese competitors. Hugging Face introduces AI Sheets, a user-friendly toolkit for integrating large language models into spreadsheets, raising questions about accessibility and potential risks. Additionally, the episode touches on the hidden costs of AI tools and highlights Geoffrey Hinton's call for 'maternal AI,' noting an Edmonton startup already developing such technology with indigenous values. 00:00 Meta's AI Shakeup: Pay Wars and Super Intelligence Strategy 02:02 OpenAI's Open Source Move: A Strategic Pivot 04:49 Hugging Face's AI Sheets: Democratizing Data Analytics 06:33 Hidden AI Costs: The Financial Backlash 08:55 Geoffrey Hinton's Vision: Maternal AI 10:56 Conclusion and Listener Engagement

Better Call Paul
432. Zach Cregger's Weapons, Sling Up-Ends Linear TV (again), and AI with a heart?

Better Call Paul

Play Episode Listen Later Aug 20, 2025 34:35


This week, Mesh provides a rave review of Weapons, the sophomore effort from Zach Cregger that is generating massive box-office buzz. Next, Paul and Mesh discuss the introduction of Sling's day, weekend, and week passes, which provide a low-cost entry point to linear TV but may ruffle feathers with TV networks. Finally, they discuss Geoffrey Hinton's (AI forefather and Nobel Prize winner) cautionary statements regarding AI superintelligence and whether job losses are just the tip of the iceberg. Learn more about your ad choices. Visit megaphone.fm/adchoices

The Sandy Show Podcast
“Taylor Swift, AI Doom & Rock-Paper-Scissors Strategy: The Wild Ride You Didn't Know You Needed”

The Sandy Show Podcast

Play Episode Listen Later Aug 16, 2025 18:49 Transcription Available


Can a Pez dispenser teach you about quitting smoking? Can AI learn to love us before it wipes us out? And is paper really your best bet in rock-paper-scissors? In this jam-packed episode of The Sandy Show, Sandy and Tricia dive into everything from pop culture obsessions to existential tech fears—with plenty of laughs and unexpected insights along the way. From Taylor Swift's record-breaking podcast appearance and the orange craze it sparked across global brands, to the Godfather of AI warning humanity about its own creation, this episode is a rollercoaster of entertainment, curiosity, and caution. Plus, Tricia reveals the secret to winning rock-paper-scissors, and Sandy opens up about his decade-long journey of sobriety. Guest Spotlight: While this episode doesn't feature a formal guest, it highlights the voices of Sandy and Tricia—two seasoned radio personalities whose chemistry, wit, and honesty make every topic feel personal and engaging. Key Moments:

Keen On Democracy
When AI Breaks Your Heart: The Week Nothing Changed in Silicon Valley

Keen On Democracy

Play Episode Listen Later Aug 16, 2025 43:14


Tech nostalgia. Winner-take-all economics. The cult of "storytelling". A Stanford-educated aristocratic elite. This was the week that nothing changed in Silicon Valley. Alternatively, it was the week that radical change broke some ChatGPT users' hearts. That, at least, is how That Was the Week tech newsletter publisher Keith Teare described this week in Silicon Valley. From Sam Altman's sensitivity to user backlash over GPT-5's personality changes, to venture capital's continued concentration in just ten mega-deals, to Geoffrey Hinton's apocalyptic warnings about AI wiping out humanity, the patterns remain stubbornly familiar even as the technology races forward. So is nothing or everything changing? Keith says everything, I say nothing. Maybe, as AI Godfather Hinton suggested on the show earlier this week, it's time for an all-knowing algorithm with maternal instincts to enlighten us with the (female) truth about our disruptive future.

1. AI Users Are Forming Deep Emotional Bonds: ChatGPT users experienced genuine heartbreak when GPT-5's personality changes made their AI feel like a different "person." This forced OpenAI to backtrack and restore GPT-4, revealing how humans are treating AI as companions rather than tools.

2. Silicon Valley's Power Structures Remain Unchanged: Despite AI's revolutionary potential, the same patterns persist: 40% of VC money goes to just 10 deals, Stanford maintains legacy admissions favoring the wealthy, and winner-take-all economics dominate. The technology changes; the power concentration doesn't.

3. The Browser Wars Are Over, Chat Interfaces Won: The future battle isn't about owning browsers (like Perplexity's bid for Chrome) but controlling the chat interface. OpenAI and Anthropic are positioning themselves as the new gatekeepers, replacing Google's search dominance.

4. AI's Pioneers Are Becoming Its Biggest Skeptics: Geoffrey Hinton, the "AI godfather," now believes there's a 15-20% chance AI could wipe out humanity. When the field's leading experts admit they "have no clue" about AI's future risks, it reveals how little anyone really knows about what we're building.

5. Context and Prompting Are the New Programming: The era of simple AI prompts is over. Success now requires sophisticated prompt engineering and providing rich context, making AI literacy as crucial as computer literacy once was. The abstractions are changing, and so must our skills.

Keen On America is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe

That Was The Week
When Your AI Breaks Your Heart

That Was The Week

Play Episode Listen Later Aug 16, 2025 42:37


This week's headline: When Your AI Breaks Your Heart. GPT-5 arrived "better" by every metric, yet users begged for GPT-4o back. It wasn't about accuracy. It was about personality. People felt like they lost a friend. OpenAI listened, backtracked, and gave them their companion back. But should it have? Progress is messy, and heartbreak may be the price of change.

The Pain of Change: Users bond with AI like colleagues or partners, and revolt when those bonds are broken. OpenAI faced its first true PR crisis, forcing it to act like a consumer company, not just a lab. But longing for "the old AI" is as unrealistic as yearning for Windows 95. Change is the only constant.

The Shifting Web: Cloudflare's Matthew Prince warns that AI is killing the Web. Perplexity's $34.5B bid for Chrome shows the fight for browser control, but the browser itself may be obsolete. Just as Spotify freed music from CDs, AI is unbundling content from URLs and tabs. The web isn't dying; it's being liberated.

Inputs vs. Manipulation: AI's real weakness? Databases. Models still can't query live inventory, prices, or transactions. "SEO for AI" tries to paper over this by gaming prompts, just like spammers gamed Google. But the future isn't tricks. It's context engineering: clean data + authentic inputs.

Winners & Losers: 40% of VC money is going to just 10 AI deals. The power law rules: winners take almost everything. Geoffrey Hinton warns of AI "alien beings," but others argue that fear distracts from real infrastructure challenges, like power grids, chips, and data quality.

The Real Opportunity: Startup of the Week is Torch, a health AI that turns a decade of medical records into personalized insights. This is the real future: integrating trustworthy data into AI, not re-skinning old personalities.

The controversy this week is simple: do we cling to the familiar, or embrace the heartbreak that comes with progress? While some mourn GPT-4o, the real story is far bigger: AI is rewriting law, health, energy, and the web itself. And it's happening whether we're ready or not. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.thatwastheweek.com/subscribe

The Wright Show
The Open Source AI Question (Robert Wright & Nathan Lambert)

The Wright Show

Play Episode Listen Later Aug 14, 2025 60:00


Nathan's work at AI2—and his p(doom) ... What does “open source AI” mean? ... How Nathan taught a Llama new tricks ... Pros and cons of open sourcing AI ... Nathan's ATOM Project to boost American open models ... What's behind OpenAI's open source play? ... Geoffrey Hinton's case against open models ... Is the US-China open model rivalry really zero-sum? ... Heading to Overtime ...

Bloggingheads.tv
The Open Source AI Question (Robert Wright & Nathan Lambert)

Bloggingheads.tv

Play Episode Listen Later Aug 14, 2025 60:00


Nathan's work at AI2—and his p(doom) ... What does “open source AI” mean? ... How Nathan taught a Llama new tricks ... Pros and cons of open sourcing AI ... Nathan's ATOM Project to boost American open models ... What's behind OpenAI's open source play? ... Geoffrey Hinton's case against open models ... Is the US-China open model rivalry really zero-sum? ... Heading to Overtime ...

Brave New World -- hosted by Vasant Dhar
Ep 98: There's no I in AI, Ben Shneiderman on The Evolution and State of Artificial Intelligence

Brave New World -- hosted by Vasant Dhar

Play Episode Listen Later Aug 14, 2025 69:50


Useful Resources: 1. Ben Shneiderman, Professor Emeritus, University of Maryland. 2. Richard Hamming and Hamming Codes. 3. Human Centered AI - Ben Shneiderman. 4. Allen Newell and Herbert A. Simon. 5. Raj Reddy and the Turing Award. 6. Doug Engelbart. 7. Alan Kay. 8. Conference on Human Factors in Computing Systems. 9. Software Psychology: Human Factors in Computer and Information Systems - Ben Shneiderman. 10. Designing the User Interface: Strategies for Effective Human-Computer Interaction - Ben Shneiderman. 11. Direct Manipulation: A Step Beyond Programming Languages - Ben Shneiderman. 12. Steps Toward Artificial Intelligence - Marvin Minsky. 13. Herbert Gelernter. 14. Computers and Thought - Edward A. Feigenbaum and Julian Feldman. 15. Lewis Mumford. 16. Technics and Civilization - Lewis Mumford. 17. Buckminster Fuller. 18. Marshall McLuhan. 19. Roger Schank. 20. The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness - Jonathan Haidt. 21. John C. Thomas, IBM. 22. Yousuf Karsh, photographer. 23. Gary Marcus, professor emeritus of psychology and neural science at NYU. 24. Geoffrey Hinton. 25. Nassim Nicholas Taleb. 26. There Is No A.I. - Jaron Lanier. 27. Anil Seth On The Science of Consciousness - Episode 94 of Brave New World. 28. A ‘White-Collar Blood Bath' Doesn't Have to Be Our Fate - Tim Wu. 29. Information Management: A Proposal - Tim Berners-Lee. 30. Is AI-assisted coding overhyped? - METR study. 31. RLHF, Reinforcement Learning from Human Feedback. 32. Joseph Weizenbaum. 33. What Is Computer Science? - Allen Newell, Alan J. Perlis, Herbert A. Simon. -- Check out Vasant Dhar's newsletter on Substack. The subscription is free!

Keen On Democracy
Forget AI—How Bio-Threats and Network Collapse Are the Real Existential Threats to Humanity

Keen On Democracy

Play Episode Listen Later Aug 14, 2025 37:11


Few of the world's great scientists have given more thought to the existential threats to humanity than the irrepressible British cosmologist and astronomer Martin Rees. He's the co-founder of Cambridge University's Centre for Existential Risk as well as the author of the 2003 book Our Final Hour. So it's striking that Rees has a quite different take on the existential risk of artificial intelligence technology than many AI doomers, including yesterday's guest, the 2024 Physics Nobel laureate Geoffrey Hinton. For Rees, bio-threats and network collapse represent the most dangerous technological threats to humanity in the near future. Unlike nuclear weapons, which require massive detectable infrastructure, Rees warns, dangerous pathogens can be engineered in small, unmonitored laboratories. Meanwhile, our civilization's complete dependence on interconnected global networks means system failures could trigger catastrophic societal breakdown within days. Apocalypse now? Perhaps. But, according to the prescient Rees, we are preparing for the wrong apocalypse.

1. AI's Real Danger Isn't Superintelligence, It's System Dependency: Rees is "very skeptical" about AI takeover scenarios. Instead, he worries about our over-dependence on globe-spanning networks that control electricity grids and internet infrastructure. When these fail, whether from cyberattacks or malfunctions, society could collapse within "two or three days."

2. Bio-Threats Are Uniquely Undetectable and Unstoppable: Unlike nuclear weapons that require massive, monitorable facilities, dangerous pathogens can be engineered in small, undetected laboratories. "Gain of function" experiments could create bioweapons far worse than COVID, and preventing this would require impossible levels of surveillance over anyone with relevant expertise.

3. We're Living Through a Uniquely Dangerous Era: Rees believes "the prospect of a catastrophe in the next 10 or 20 years is perhaps higher than it's ever been." We're the first species in Earth's history capable of changing the entire planet, for good or ill, making this a genuinely special and precarious moment.

4. Scientific Wonder Grows with Knowledge, Not Despite It: Contrary to those who claim science diminishes mystery, Rees, the co-author of an upcoming book about scientific wonder, argues that "the more we understand, the more wonderful and complicated things appear." As knowledge advances, new mysteries emerge that couldn't even be conceived decades earlier.

5. Humility About Human Limitations Is Essential: Just as "a monkey can't understand quantum mechanics," there may be fundamental aspects of reality beyond human comprehension. Rees warns against immediately invoking God for unexplained phenomena, advocating instead for accepting our cognitive limits while continuing to push boundaries.

Keen On America is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe

La W Radio con Julio Sánchez Cristo
¿Inteligencia Artificial es una amenaza para la supervivencia humana? Nobel de Física responde

La W Radio con Julio Sánchez Cristo

Play Episode Listen Later Aug 14, 2025 13:53


Geoffrey Hinton, known as the 'Godfather of AI' and recipient of the 2024 Nobel Prize in Physics, revealed on La W the only way humanity can survive Artificial Intelligence.

Keen On Democracy
AI Godfather Geoffrey Hinton warns that We're Creating 'Alien Beings that "Could Take Over"

Keen On Democracy

Play Episode Listen Later Aug 13, 2025 21:59


So will AI wipe us out? According to Geoffrey Hinton, the 2024 Nobel laureate in physics, there's about a 10-20% chance of AI being humanity's final invention, which, as the so-called Godfather of AI acknowledges, is his way of saying he has no more idea than you or I about its species-killing qualities. That said, Hinton is deeply concerned about some of the consequences of an AI revolution that he pioneered at Google. From cyber attacks that could topple major banks to AI-designed viruses, from mass unemployment to lethal autonomous weapons, Hinton warns we're facing unprecedented risks from technology that's evolving faster than our ability to control it. So does he regret his role in the invention of generative AI? Not exactly. Hinton believes the AI revolution was inevitable; if he hadn't contributed, it would have been delayed by perhaps a week. Instead of dwelling on regret, he's focused on finding solutions for humanity to coexist with superintelligent beings. His radical proposal? Creating "AI mothers" with strong maternal instincts toward humans, the only model we have for a more powerful being designed to care for a weaker one.

1. Nobody Really Knows the Risk Level: Hinton's 10-20% extinction probability is essentially an admission of complete uncertainty. As he puts it, "the number means nobody's got a clue what's going to happen," but it's definitely more than 1% and less than 99%.

2. Short-Term vs. Long-Term Threats Are Fundamentally Different: Near-term risks involve bad actors misusing AI (cyber attacks, bioweapons, surveillance), while the existential threat comes from AI simply outgrowing its need for humans, something we've never faced before.

3. We're Creating "Alien Beings" Right Now: Unlike previous technologies, AI represents actual intelligent entities that can understand, plan, and potentially manipulate us. Hinton argues we should be as concerned as if we spotted an alien invasion fleet through a telescope.

4. The "AI Mothers" Solution: Hinton's radical proposal: instead of trying to keep AI submissive (which won't work when it's smarter than us), we should engineer strong maternal instincts into AI systems, the only model we have of powerful beings caring for weaker ones.

5. Superintelligence Is Coming Within 5-20 Years: Most leading experts believe human-level AI is inevitable, followed quickly by superintelligence. Hinton's timeline reflects the consensus among researchers, despite the wide range.

Keen On America is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe

Big Questions with Cal Fussman

After speaking at a CREW conference in Texas, Cal meets a smart young Uber driver who was curious about the number of jobs that we can anticipate losing in America because of AI. The driver wanted to know what the lives of Americans who'd most likely lose their jobs were going to look like, and how these people might get by and transition. Cal finds some answers from Sam Altman, the CEO of OpenAI, and Geoffrey Hinton, the Godfather of Artificial Intelligence. He also discovers a half-helicopter/half-airplane taxi service that looks like a job for the future as we head into the age of the Jetsons. Listen up and get to the cutting edge.

The Diary Of A CEO by Steven Bartlett
Godfather of AI: I Tried to Warn Them, But We've Already Lost Control! Geoffrey Hinton

The Diary Of A CEO by Steven Bartlett

Play Episode Listen Later Jun 16, 2025 90:19


He pioneered AI, now he's warning the world. Godfather of AI Geoffrey Hinton breaks his silence on the deadly dangers of AI no one is prepared for. Geoffrey Hinton is a leading computer scientist and cognitive psychologist, widely recognised as the ‘Godfather of AI' for his pioneering work on neural networks and deep learning. He received the 2018 Turing Award, often called the Nobel Prize of computing. In 2023, he left Google to warn people about the rising dangers of AI. He explains: Why there's a real 20% chance AI could lead to HUMAN EXTINCTION. How speaking out about AI got him SILENCED. The deep REGRET he feels for helping create AI. The 6 DEADLY THREATS AI poses to humanity right now. AI's potential to advance healthcare, boost productivity, and transform education. 00:00 Intro 02:28 Why Do They Call You the Godfather of AI? 04:37 Warning About the Dangers of AI 07:23 Concerns We Should Have About AI 10:50 European AI Regulations 12:29 Cyber Attack Risk 14:42 How to Protect Yourself From Cyber Attacks 16:29 Using AI to Create Viruses 17:43 AI and Corrupt Elections 19:20 How AI Creates Echo Chambers 23:05 Regulating New Technologies 24:48 Are Regulations Holding Us Back From Competing With China? 26:14 The Threat of Lethal Autonomous Weapons 28:50 Can These AI Threats Combine? 30:32 Restricting AI From Taking Over 32:18 Reflecting on Your Life's Work Amid AI Risks 34:02 Student Leaving OpenAI Over Safety Concerns 38:06 Are You Hopeful About the Future of AI? 40:08 The Threat of AI-Induced Joblessness 43:04 If Muscles and Intelligence Are Replaced, What's Left? 44:55 Ads 46:59 Difference Between Current AI and Superintelligence 52:54 Coming to Terms With AI's Capabilities 54:46 How AI May Widen the Wealth Inequality Gap 56:35 Why Is AI Superior to Humans? 59:18 AI's Potential to Know More Than Humans 1:01:06 Can AI Replicate Human Uniqueness? 1:04:14 Will Machines Have Feelings? 1:11:29 Working at Google 1:15:12 Why Did You Leave Google? 
1:16:37 Ads 1:18:32 What Should People Be Doing About AI? 1:19:53 Impressive Family Background 1:21:30 Advice You'd Give Looking Back 1:22:44 Final Message on AI Safety 1:26:05 What's the Biggest Threat to Human Happiness? Follow Geoffrey: X - https://bit.ly/4n0shFf  The Diary Of A CEO: Join DOAC circle here -https://doaccircle.com/ The 1% Diary is back - limited time only: https://bit.ly/3YFbJbt The Diary Of A CEO Conversation Cards (Second Edition): https://g2ul0.app.link/f31dsUttKKb Get email updates - https://bit.ly/diary-of-a-ceo-yt Follow Steven - https://g2ul0.app.link/gnGqL4IsKKb Sponsors: Stan Store - Visit https://link.stan.store/joinstanchallenge to join the challenge! KetoneIQ - Visit https://ketone.com/STEVEN  for 30% off your subscription order #GeoffreyHinton #ArtificialIntelligence #AIDangers Learn more about your ad choices. Visit megaphone.fm/adchoices

American Conservative University
‘Godfather of AI' Predicts it will Take Over the World, Thomas Sowell Warns About the Year 2030, Eric Metaxas Talks to John Zmirak.

American Conservative University

Play Episode Listen Later Jun 12, 2025 34:53


‘Godfather of AI' Predicts it will Take Over the World, Thomas Sowell Warns About the Year 2030, Eric Metaxas Talks to John Zmirak. The Eric Metaxas Show - Eric talks to John Zmirak.

‘Godfather of AI' predicts it will take over the world. Watch this video at https://youtu.be/vxkBE23zDmQ?si=ielwtz0KnJrDUH6q (LBC, Jan 30, 2025). Nobel Prize winner Geoffrey Hinton, the physicist known for his pioneering work in the field, told LBC's Andrew Marr that artificial intelligences had developed consciousness and could one day take over the world. Mr Hinton, who has been criticised by some in the world of artificial intelligence for having a pessimistic view of the future of AI, also said that no one knew how to put in effective safeguards and regulation. Listen to the full show on Global Player: https://app.af.globalplayer.com/Br0x/... LBC is the home of live debate around news and current affairs in the UK. Join in the conversation and listen at https://www.lbc.co.uk/ Sign up to LBC's weekly newsletter here: https://l-bc.co/signup

Sowell WARNS About the Year 2030 - America's TOTAL COLLAPSE. Watch at https://youtu.be/ItDFsPqDIEs?si=W21eNnZeSKGcsnKq (Thomas Sowell Today, May 29, 2025). How Cultural Decline Happens SLOWLY - Then All at ONCE | Thomas Sowell Today. Commentary: Thomas Sowell Today