British-Canadian computer scientist and psychologist
This is a free preview of a paid episode. To hear more, visit andrewsullivan.substack.com

John is a journalist, media consultant, old friend, and George W Bush's cousin. He's worked for NBC News as a political analyst and for the Boston Globe as a columnist. In 2016, he launched a morning brief called “News Items” for News Corp, which later became the Wall Street Journal CEO Council's morning newsletter. News Items jumped to Substack in 2019 (and Dishheads can subscribe now for 33% off). John also co-hosts two podcasts — one with Joe Klein (“Night Owls”) and the other with Richard Haass (“Alternate Shots”).

For two clips of our convo — on the nail-biting Bush-Gore race that John was involved in, and Trump's mental decline — head to our YouTube page.

Other topics: born and raised in Concord; his political awakening at 15 watching the whole '68 Dem convention with a fever in bed; his fascination with Nixon; the Southern Strategy; Garry Wills' book Nixon Agonistes; Kevin Phillips and populism; Nixon parallels with Trump — except shame; Roger Ailes starting Fox News; Matt Drudge; John's uncle HW Bush; HW as a person; the contrasts with his son Dubya; the trauma of 9/11; Iraq as a war of choice — the wrong one; Rumsfeld; Jeb Bush in 2016; the AI race; Geoffrey Hinton (“the godfather of AI”); John's optimism about China; tension with Taiwan; Israel's settlements; Bibi's humiliation of Obama; Huckabee as ambassador; the tariff case going to SCOTUS; the Senate caving to Trump; McConnell failing to bar Trump; the genius of his demagoguery; the Kirk assassination; Brexit; immigration under Boris; Reform's newfound dominance; the huge protest in London last week; Kirk's popularity in Europe; the AfD; Trump's war on speech; a Trump-Mamdani showdown; Epstein and Peter Mandelson; and grasping for reasons to be cheerful.

Browse the Dishcast archive for an episode you might enjoy.
Coming up: Wesley Yang on the trans question, Michael Wolff on Epstein, Karen Hao on artificial intelligence, Katie Herzog on drinking your way sober, Michel Paradis on Ike, Charles Murray on finding religion, David Ignatius on the Trump effect globally, and Arthur Brooks on the science of happiness. As always, please send any guest recs, dissents, and other comments to dish@andrewsullivan.com.
Join Simtheory: https://simtheory.ai

CHAPTERS:
00:00 - Simtheory promo
01:09 - Does Anthropic Intentionally Degrade Their Models?
03:34 - Long Horizon Agents & How We Will Build Them
36:18 - The State of MCPs & Internal Custom Enterprise MCPs
51:04 - AI Devices: Meta's Ray-Ban Display & Meta Oakley Vanguards
1:01:24 - Geoffrey Hinton is a LOVE RAT
1:05:49 - LOVE RAT SONG

Thanks for listening, we appreciate all of your support, likes, comments and subs xoxox
I. Eliezer Yudkowsky's Machine Intelligence Research Institute is the original AI safety org. But the original isn't always the best - how is Mesopotamia doing these days? As money, brainpower, and prestige pour into the field, MIRI remains what it always was - a group of loosely-organized weird people, one of whom cannot be convinced to stop wearing a sparkly top hat in public. So when I was doing AI grantmaking last year, I asked them - why should I fund you instead of the guys with the army of bright-eyed Harvard grads, or the guys who just got Geoffrey Hinton as their celebrity spokesperson? What do you have that they don't? MIRI answered: moral clarity. Most people in AI safety (including me) are uncertain and confused and looking for least-bad incremental solutions. We think AI will probably be an exciting and transformative technology, but there's some chance, 5 or 15 or 30 percent, that it might turn against humanity in a catastrophic way. Or, if it doesn't, that there will be something less catastrophic but still bad - maybe humanity gradually fading into the background, the same way kings and nobles faded into the background during the modern era. This is scary, but AI is coming whether we like it or not, and probably there are also potential risks from delaying too hard. We're not sure exactly what to do, but for now we want to build a firm foundation for reacting to any future threat. That means keeping AI companies honest and transparent, helping responsible companies like Anthropic stay in the race, and investing in understanding AI goal structures and the ways that AIs interpret our commands. Then at some point in the future, we'll be close enough to the actually-scary AI that we can understand the threat model more clearly, get more popular buy-in, and decide what to do next. MIRI thinks this is pathetic - like trying to protect against an asteroid impact by wearing a hard hat. 
They're kind of cagey about their own probability of AI wiping out humanity, but it seems to be somewhere around 95 - 99%. They think plausibly-achievable gains in company responsibility, regulation quality, and AI scholarship are orders of magnitude too weak to seriously address the problem, and they don't expect enough of a “warning shot” that they feel comfortable kicking the can down the road until everything becomes clear and action is easy. They suggest banning all AI capabilities research immediately, to be restarted only in some distant future when the situation looks more promising. Both sides honestly believe their position and don't want to modulate their message for PR reasons. But both sides, coincidentally, think that their message is better PR. The incrementalists think a moderate, cautious approach keeps bridges open with academia, industry, government, and other actors that prefer normal clean-shaven interlocutors who don't emit spittle whenever they talk. MIRI thinks that the public is sick of focus-group-tested mealy-mouthed bullshit, but might be ready to rise up against AI if someone presented the case in a clear and unambivalent way. Now Yudkowsky and his co-author, MIRI president Nate Soares, have reached new heights of unambivalence with their new book, If Anyone Builds It, Everyone Dies (release date September 16, currently available for preorder). https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone
Joel Roberts, former host of a prime-time talk show on KABC Radio, Los Angeles, takes over hosting duties to talk to Vasant Dhar about his upcoming book and Roberts' own scepticism regarding all things AI.

Useful Resources:
1. Joel Roberts.
2. Thinking With Machines: The Brave New World With AI - Vasant Dhar.
3. AI and The Paper Clip Problem.
4. Dr. Jules White.
5. Geoffrey Hinton.
6. Yuval Noah Harari.
7. Augmented Intelligence - The Future of Generative AI and Computing.
8. How The Mind Works - Steven Pinker.
9. Brave New World Episode 94: Anil Seth On The Science of Consciousness.
10. Backpropagation.
11. Brave New World Episode 98: There's No I in AI - Ben Shneiderman on The Evolution and State of Artificial Intelligence.
12. Brave New World Episode 97: Alex Wiltschko on Digitizing Scent.
13. Brave New World Episode 81: Alex Wiltschko on The Sense Of Smell.
14. Joy Milne.
15. Brave New World Episode 89: Missy Cummings on Making AI Safe.
16. TEDx Talk - When Should We Trust Machines: Vasant Dhar.
17. The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma - Mustafa Suleyman.
18. Luis Elizondo.

Check out Vasant Dhar's newsletter on Substack. The subscription is free!
AI isn't just the ultimate nonsense generator—it's also a powerful tool students and teachers can't afford to ignore. In this episode, professors Carl Bergstrom and Jevin West reveal how their new "BS Machines" curriculum helps the next generation stay sharp and skeptical in a world overflowing with synthetic "facts."

Interview with Carl T. Bergstrom and Jevin D. West
Warner Bros. Discovery Sues AI Giant Midjourney for Copyright Infringement In Major Legal Battle
AI Watchdog: At Least 15 Million YouTube Videos Have Been Snatched by AI Companies
Most Scraped Websites of 2025
AI surveillance should be banned while there is still time
Alterego
I Hate My Friend
R-Zero: Self-Evolving Reasoning LLM from Zero Data
AI godfather Geoffrey Hinton says a girlfriend once broke up with him using a chatbot
Business Insider yanked 40 essays with suspect bylines. Are they related?
OpenAI's post on the paper
Gina Trapani starts a new blog
Schnitzel press
NFL Debut on YouTube Draws 17.3 Million
"Set a two-TikTok toilet limit to reduce haemorrhoid risk, doctors advise"

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Co-Host: Harper Reed
Guests: Carl T. Bergstrom and Jevin D. West

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors: Melissa.com/twit spaceship.com/twit
In this thought-provoking conversation, surgeon Shafi Ahmed and digital health futurist John Nosta revisit Geoffrey Hinton's bold 2016 prediction that radiologists would soon be replaced by AI.
In this episode of "Protect Our Kids," host Kristi Bush delves into the rapidly evolving world of AI and its implications for our children's future. Inspired by insights from AI pioneer Dr. Geoffrey Hinton, Kristi explores the challenges and opportunities that AI presents, emphasizing the importance of empathy, adaptability, and human connection. As AI continues to advance, how can we equip our children to thrive in a world where technology and humanity must coexist? Tune in for a thought-provoking discussion on preparing the next generation for an AI-driven future.

Key Takeaways:
AI will surpass human intelligence in the near future.
Parents need to prepare their kids for an AI-driven world.
Empathy is crucial in a future dominated by technology.
Human connections will set us apart from AI.
Adaptability is a key skill for future generations.
Children must learn to pivot in the face of challenges.
AI can be perceived as more empathetic than humans.
The job landscape will change significantly due to AI.
Teaching kids about empathy is essential for their future roles.
Good stewardship in AI development is important for society.

www.knbcommunications.com
In our angry MAHA times, how can we get people trusting science and scientists again? According to MIT's Alan Lightman, one of America's greatest scientific writers, we need to both demystify science and humanize scientists. Lightman is the co-author, with Martin Rees, of The Shape of Wonder, a timely collection of essays about how scientists think, work, and live. Lightman, himself the author of the 1993 classic Einstein's Dreams, suggests we learn from scientists like Albert Einstein: he argues that Einstein's "naive" willingness to challenge millennia of thinking about time exemplifies the wonder that drives great science. Lightman discusses why scientists have become entangled with "elite establishments" in our populist moment, and argues that critical scientific thinking—from balancing checkbooks to diagnosing a child's fever—belongs to everyone, not just scientists. So make America smart again (MASA), by demystifying science and humanizing scientists.

1. "Naive" questioning drives breakthrough science. Einstein revolutionized physics at 26 by refusing to accept millennia of received wisdom about time—showing that great science requires a childlike willingness to challenge fundamental assumptions.
2. Scientists are victims of populist backlash. The mistrust of science isn't really about science—it's part of a global populist movement against "elite establishments," fueled by social media, immigration fears, and growing wealth inequality.
3. Wonder requires discipline, not just awe. Unlike a child's wonder, scientific wonder comes with tools—both experimental and theoretical—for actually understanding how things work, making it "disciplined wonder."
4. Scientists shouldn't be authorities beyond science. Even Einstein or Nobel laureates like Geoffrey Hinton have no special authority on ethics, philosophy, or politics—they're just smart people with opinions like everyone else.
5. Critical thinking belongs to everyone. When you balance your checkbook or diagnose a child's fever, you're using scientific thinking. Science isn't an elite activity—it's a method we all already practice in daily life.

Keen On America is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Nick Frosst is a Canadian AI researcher and entrepreneur, best known as co-founder of Cohere, the enterprise-focused LLM company. Cohere has raised over $900 million, most recently a $500 million round, bringing its valuation to $6.8 billion. Under his leadership, Cohere hit $100M in ARR. Prior to founding Cohere, Nick was a researcher at Google Brain and a protégé of Geoffrey Hinton.

AGENDA:
00:00 – Biggest lessons from Geoff Hinton at Google Brain?
02:10 – Did Google completely sleep at the wheel and miss ChatGPT?
05:45 – Is data or compute the real bottleneck in AI's future?
07:20 – Does GPT-5 prove that scaling laws are BS?
13:30 – Are AI benchmarks just total BS?
17:00 – Would Cohere spend $5M on a single AI researcher?
19:40 – What is nonsense in AI that everyone is talking about?
25:30 – What is no one talking about in AI that everyone should be talking about?
33:00 – How does Cohere compete with OpenAI and Anthropic's billions?
44:30 – Why does being American actually hurt tech companies today?
45:10 – Should countries fund their own models? Is model sovereignty the future?
52:00 – Why has Sam Altman actually done a disservice to AI?
Recently we got a new version of ChatGPT: version 5.0. And as always, the director of OpenAI, the company behind ChatGPT, made a long video presentation online about the excellence of the new version.

CEO Sam Altman's mission is to create an AGI, an artificial general intelligence: an artificial intelligence on par with or better than human intelligence.

Some AI researchers claim this cannot be done with a language model that is basically "just" trained on human communication and therefore performs a kind of probability calculation over the placement of words in a text.

Other AI researchers, such as Nobel laureate and grand old man of AI Geoffrey Hinton, believe we are creating an intelligence that will exceed our own, and that the consequences are unfathomable and possibly catastrophic.

Have we been given a tool that holds the salvation of humanity and the planet, or are we headed toward outright self-destruction? Or are we perhaps creating an alternative consciousness that we don't fully understand, but which will take part in the world alongside biologically created consciousness? And what are intelligence and consciousness, really? And how do you create them when we don't even fully understand how the mammalian brain works?

Assistant professor Michal Kosinski of Stanford University in California believes we have already created an AGI through the large language models behind services like ChatGPT, Gemini, and DeepSeek. His research interests span both human and artificial cognition. His current work focuses on investigating the psychological processes of large language models.

Michal was the first to warn against Cambridge Analytica, which led to the scandal in which Facebook was used to exploit users' data improperly.

Techtopia has met him.

Guest: Michal Kosinski, assistant professor, Stanford University
Also: a clip from Instagram of a lecture by Geoffrey Hinton, AI developer and Nobel Prize recipient.
Link: Michal Kosinski https://www.michalkosinski.com
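The episode description characterizes a language model as doing "a kind of probability calculation over the placement of words in a text." A minimal sketch of that idea, with a made-up four-word vocabulary and invented scores (no real model involved): a model assigns a score (logit) to every token in its vocabulary, and a softmax turns those scores into a probability distribution over the next word.

```python
import math

# Toy sketch of next-token prediction. A real LLM produces a logit for every
# token in a vocabulary of tens of thousands; softmax converts logits into
# probabilities. The vocabulary and logits here are invented for illustration.
vocab = ["cat", "dog", "mat", "ran"]
logits = [1.2, 0.3, 2.5, -0.5]  # hypothetical scores for "the cat sat on the ___"

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
next_token = vocab[probs.index(max(probs))]
print(next_token)            # highest-probability continuation: "mat"
print(round(sum(probs), 6))  # probabilities sum to 1
```

In practice models often sample from this distribution rather than always taking the most probable token, which is why the same prompt can yield different continuations.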
When Dr. Geoffrey Hinton left Google in 2023, it wasn't because he'd lost faith in AI. It was because he wanted to speak freely about its dangers (and because, at 75, he says programming is "annoying"). The Nobel laureate joins Katie to unpack some of the riskiest aspects of this new technology: why government regulation lags behind innovation, why jobs are at risk, and whether countries can work together to prevent an AI arms race. But Hinton also sees a path forward: if we design AI that genuinely supports and protects humanity, coexistence might be possible. This episode wrestles with the urgent question on everyone's mind: will AI's breathtaking potential transform our lives or threaten our very survival?

See omnystudio.com/listener for privacy information.
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
AI Weekly Rundown, August 17 to August 24, 2025.
Listen at https://podcasts.apple.com/us/podcast/ai-weekly-rundown-aug-17-24-2025-nobel-laureate-geoffrey/id1684415169?i=1000723245027

Hello AI Unraveled listeners! In this week's AI news:
Happy Friday, everyone! Congrats on making it through another week, and what a week it was. This week I had some big topics, so I ran out of time for the positive use-case, but I'll fit it in next week. Here's a quick rundown of the topics, with more detail below. First, Meta had an AI policy doc leak, and boy did it tell a story while sparking outrage and raising deeper questions about what's really being hardwired into the systems we all use. Then I touch on Geoffrey Hinton, the "Godfather of AI," and his controversial idea that AI should have maternal instincts. Finally, I dig into the growing wave of toxic work expectations, from 80-hour demands to the exodus of young mothers from the workforce. With that, let's get into it.

Looking Beyond the Hype of Meta's Leaked AI Policy Guidelines
A Reuters report exposed Meta's internal guidelines on training AI to respond to sensitive prompts, including "sensual" interactions with children and handling of protected-class subjects. People were pissed, and rightly so. However, I break down why the real problem isn't the prompts themselves, but the logic being approved behind them. This is much bigger than the optics of some questionable guidelines; it's about illegal reasoning being baked into the foundation of the model.

The Godfather of AI Wants "Maternal" Machines
Geoffrey Hinton, one of the pioneers of AI, is popping up everywhere with his suggestion that training AI with motherly instincts is the solution to preventing it from wiping out humanity. Candidly, I think his logic is off for way more reasons than the cringe idea of AI acting like our mommies. I unpack why this framing is flawed, what leaders should actually take away from it, and why we need to move away from solutions that focus on further humanizing AI. It's time to stop treating AI like a human in the first place.

Unhealthy Work Demands and the Rising Exodus of Young Moms
An AI startup recently gave its employees a shocking ultimatum: work 80 hours a week or leave. What happened to AI eliminating the need for human work? Meanwhile, data shows young mothers are exiting the workforce at troubling rates, completely reversing all the gains we saw during the pandemic. I connect the dots between these headlines, AI's role in the rise of unsustainable work expectations, and the long-term damage this entire mindset creates for businesses and society.

If this episode was helpful, would you share it with someone? Leave a rating, drop a comment with your thoughts, and follow for future updates that go beyond the headlines and help you lead with clarity in the AI age. And, if you'd take me out for a coffee to say thanks, you can do that here: https://www.buymeacoffee.com/christopherlind

Show Notes: In this Weekly Update, Christopher Lind unpacks the disturbing revelations from Meta's leaked AI training docs, challenges Geoffrey Hinton's call for "maternal AI," and breaks down the growing trend of unsustainable work expectations, especially the impact on mothers in the workforce.

Timestamps:
00:00 – Introduction and Welcome
01:51 – Overview of Today's Topics
03:19 – Meta's AI Training Docs Leak
27:53 – Geoffrey Hinton and the "Maternal AI" Proposal
39:48 – Toxic Work Demands and the Workforce Exodus
53:35 – Final Thoughts

#AIethics #AIrisks #DigitalLeadership #HumanCenteredAI #FutureOfWork
This week we continue to look at AI and its impact on our society; AI friends; The Stepford Wives; Edinburgh University Press on Black and white; National Library of Scotland censors Women Won't Wheesht; Men giving birth in South Australia; Country of the Week - England; The Magna Carta; The Significance of Flags; Christianity in England; Ceasefires; Triggernometry and Netanyahu and Tommy Robinson on Mohammed and Jesus; The Matrix; Geoffrey Hinton and AI Sub Goals; French Muslims ban Barbie; Lisa Nandy appoints Muslim as only religious advisor to civil society project; Med 1 in 200 billion year event! It's cold in Australia; Offshore windfarms decimate fishing and environment; BP to reopen large North Sea oil field; UEFA's non political political message; Jasper Carrott on insurance claims; Rev James Haram and Colin Smyth MSP; A Hidden Life; Feedback; Podcast change news; with music from Queen; Vera Lynn; Aqua; Frank Sinatra and Dorothy Kirsteen; The Waterboys; Melbourne Opera; and Indian Christians.
In this episode of Hashtag Trending, hosted by Jim Love, key topics include Meta's significant changes to its AI division, leading to internal pay conflicts and potential slowdowns despite big investments. OpenAI's CEO, Sam Altman, explains their new open source models as a strategic move against Chinese competitors. Hugging Face introduces AI Sheets, a user-friendly toolkit for integrating large language models into spreadsheets, raising questions about accessibility and potential risks. Additionally, the episode touches on the hidden costs of AI tools and highlights Geoffrey Hinton's call for 'maternal AI,' noting an Edmonton startup already developing such technology with indigenous values.

00:00 Meta's AI Shakeup: Pay Wars and Super Intelligence Strategy
02:02 OpenAI's Open Source Move: A Strategic Pivot
04:49 Hugging Face's AI Sheets: Democratizing Data Analytics
06:33 Hidden AI Costs: The Financial Backlash
08:55 Geoffrey Hinton's Vision: Maternal AI
10:56 Conclusion and Listener Engagement
This week, Mesh provides a rave review for Weapons, the sophomore effort from Zach Cregger, that is generating massive box-office buzz. Next, Paul and Mesh discuss the introduction of Sling's day, weekend, and week passes which provide a low-cost entry point to linear TV, but may ruffle feathers with TV networks. Finally, they discuss Geoffrey Hinton's (AI forefather, Nobel-prize winner) cautionary statements regarding AI super intelligence and whether job losses are just the tip of the iceberg. Learn more about your ad choices. Visit megaphone.fm/adchoices
More talks today on the Russia–Ukraine war, but has anything been achieved? Plus: the dark side of A.I., as defined by tech pioneer Dr Geoffrey Hinton, and a sticky end for British puddings, as no one seems to be making or eating them. Adam Gilchrist shares details on these stories with Lester Kiewit. Good Morning Cape Town with Lester Kiewit is a podcast of the CapeTalk breakfast show. This programme is your authentic Cape Town wake-up call. Good Morning Cape Town with Lester Kiewit is informative, enlightening and accessible. The team's ability to spot & share relevant and unusual stories makes the programme inclusive and thought-provoking. Don't miss the popular World View feature at 7:45am daily. Listen out for #LesterInYourLounge, an outside broadcast from the home of a listener in a different part of Cape Town on the first Wednesday of every month. This show introduces you to interesting Capetonians as well as their favourite communities, habits, local personalities and neighbourhood news. Thank you for listening to a podcast from Good Morning Cape Town with Lester Kiewit. Listen live on Primedia+ weekdays between 06:00 and 09:00 (SA Time) to Good Morning CapeTalk with Lester Kiewit broadcast on CapeTalk https://buff.ly/NnFM3Nk For more from the show go to https://buff.ly/xGkqLbT or find all the catch-up podcasts here https://buff.ly/f9Eeb7i Subscribe to the CapeTalk Daily and Weekly Newsletters https://buff.ly/sbvVZD5

Follow us on social media:
CapeTalk on Facebook: https://www.facebook.com/CapeTalk
CapeTalk on TikTok: https://www.tiktok.com/@capetalk
CapeTalk on Instagram: https://www.instagram.com/
CapeTalk on X: https://x.com/CapeTalk
CapeTalk on YouTube: https://www.youtube.com/@CapeTalk567

See omnystudio.com/listener for privacy information.
Bongani Bingwa speaks to Adam Gilchrist about Ukraine war summits, the dark side of A.I. as defined by tech pioneer Dr Geoffrey Hinton, and a sticky end for British puddings. 702 Breakfast with Bongani Bingwa is broadcast on 702, a Johannesburg-based talk radio station. Bongani makes sense of the news, interviews the key newsmakers of the day, and holds those in power to account on your behalf. The team bring you all you need to know to start your day. Thank you for listening to a podcast from 702 Breakfast with Bongani Bingwa. Listen live on Primedia+ weekdays from 06:00 to 09:00 (SA Time) to Breakfast with Bongani Bingwa broadcast on 702: https://buff.ly/gk3y0Kj For more from the show go to https://buff.ly/36edSLV or find all the catch-up podcasts here https://buff.ly/zEcM35T Subscribe to the 702 Daily and Weekly Newsletters https://buff.ly/v5mfetc

Follow us on social media:
702 on Facebook: https://www.facebook.com/TalkRadio702
702 on TikTok: https://www.tiktok.com/@talkradio702
702 on Instagram: https://www.instagram.com/talkradio702/
702 on X: https://x.com/Radio702
702 on YouTube: https://www.youtube.com/@radio702

See omnystudio.com/listener for privacy information.
Bongani Bingwa speaks to Adam Gilchrist about Ukraine war summits, the dark side as defined by tech pioneer Dr Geoffrey Hinton and a sticky end for British puddings. 702 Breakfast with Bongani Bingwa is broadcast on 702, a Johannesburg based talk radio station. Bongani makes sense of the news, interviews the key newsmakers of the day, and holds those in power to account on your behalf. The team bring you all you need to know to start your day Thank you for listening to a podcast from 702 Breakfast with Bongani Bingwa Listen live on Primedia+ weekdays from 06:00 and 09:00 (SA Time) to Breakfast with Bongani Bingwa broadcast on 702: https://buff.ly/gk3y0Kj For more from the show go to https://buff.ly/36edSLV or find all the catch-up podcasts here https://buff.ly/zEcM35T Subscribe to the 702 Daily and Weekly Newsletters https://buff.ly/v5mfetc Follow us on social media: 702 on Facebook: https://www.facebook.com/TalkRadio702 702 on TikTok: https://www.tiktok.com/@talkradio702 702 on Instagram: https://www.instagram.com/talkradio702/ 702 on X: https://x.com/Radio702 702 on YouTube: https://www.youtube.com/@radio702 See omnystudio.com/listener for privacy information.
More talks today on the Russia-Ukraine war, but has anything been achieved? The dark side of A.I. as defined by tech pioneer Dr Geoffrey Hinton; a sticky end for British puddings, as no one seems to be making or eating them. Adam Gilchrist shares details on these stories with Lester Kiewit. Good Morning Cape Town with Lester Kiewit is a podcast of the CapeTalk breakfast show. This programme is your authentic Cape Town wake-up call. Good Morning Cape Town with Lester Kiewit is informative, enlightening and accessible. The team's ability to spot and share relevant and unusual stories makes the programme inclusive and thought-provoking. Don't miss the popular World View feature at 7:45am daily. Listen out for #LesterInYourLounge, an outside broadcast from the home of a listener in a different part of Cape Town on the first Wednesday of every month. This show introduces you to interesting Capetonians as well as their favourite communities, habits, local personalities and neighbourhood news. Thank you for listening to a podcast from Good Morning Cape Town with Lester Kiewit. Listen live on Primedia+ weekdays between 06:00 and 09:00 (SA Time) to Good Morning CapeTalk with Lester Kiewit, broadcast on CapeTalk: https://buff.ly/NnFM3Nk For more from the show, go to https://buff.ly/xGkqLbT or find all the catch-up podcasts here: https://buff.ly/f9Eeb7i Subscribe to the CapeTalk Daily and Weekly Newsletters: https://buff.ly/sbvVZD5 Follow us on social media: CapeTalk on Facebook: https://www.facebook.com/CapeTalk CapeTalk on TikTok: https://www.tiktok.com/@capetalk CapeTalk on Instagram: https://www.instagram.com/ CapeTalk on X: https://x.com/CapeTalk CapeTalk on YouTube: https://www.youtube.com/@CapeTalk567 See omnystudio.com/listener for privacy information.
Can a Pez dispenser teach you about quitting smoking? Can AI learn to love us before it wipes us out? And is paper really your best bet in rock-paper-scissors? In this jam-packed episode of The Sandy Show, Sandy and Tricia dive into everything from pop culture obsessions to existential tech fears, with plenty of laughs and unexpected insights along the way. From Taylor Swift's record-breaking podcast appearance and the orange craze it sparked across global brands, to the Godfather of AI warning humanity about its own creation, this episode is a rollercoaster of entertainment, curiosity, and caution. Plus, Tricia reveals the secret to winning rock-paper-scissors, and Sandy opens up about his decade-long journey of sobriety. Guest Spotlight: While this episode doesn't feature a formal guest, it highlights the voices of Sandy and Tricia, two seasoned radio personalities whose chemistry, wit, and honesty make every topic feel personal and engaging.
Tech nostalgia. Winner-take-all economics. The cult of "storytelling". A Stanford-educated aristocratic elite. This was the week that nothing changed in Silicon Valley. Alternatively, it was the week that radical change broke some ChatGPT users' hearts. That, at least, is how That Was the Week tech newsletter publisher Keith Teare described this week in Silicon Valley. From Sam Altman's sensitivity to user backlash over GPT-5's personality changes, to venture capital's continued concentration in just ten mega-deals, to Geoffrey Hinton's apocalyptic warnings about AI wiping out humanity, the patterns remain stubbornly familiar even as the technology races forward. So is nothing or everything changing? Keith says everything; I say nothing. Maybe, as AI godfather Hinton suggested on the show earlier this week, it's time for an all-knowing algorithm with maternal instincts to enlighten us with the (female) truth about our disruptive future.

1. AI Users Are Forming Deep Emotional Bonds. ChatGPT users experienced genuine heartbreak when GPT-5's personality changes made their AI feel like a different "person." This forced OpenAI to backtrack and restore GPT-4o, revealing how humans are treating AI as companions rather than tools.

2. Silicon Valley's Power Structures Remain Unchanged. Despite AI's revolutionary potential, the same patterns persist: 40% of VC money goes to just 10 deals, Stanford maintains legacy admissions favoring the wealthy, and winner-take-all economics dominate. The technology changes; the power concentration doesn't.

3. The Browser Wars Are Over; Chat Interfaces Won. The future battle isn't about owning browsers (like Perplexity's bid for Chrome) but controlling the chat interface. OpenAI and Anthropic are positioning themselves as the new gatekeepers, replacing Google's search dominance.

4. AI's Pioneers Are Becoming Its Biggest Skeptics. Geoffrey Hinton, the "AI godfather," now believes there's a 15-20% chance AI could wipe out humanity. When the field's leading experts admit they "have no clue" about AI's future risks, it reveals how little anyone really knows about what we're building.

5. Context and Prompting Are the New Programming. The era of simple AI prompts is over. Success now requires sophisticated prompt engineering and providing rich context, making AI literacy as crucial as computer literacy once was. The abstractions are changing, and so must our skills.

Keen On America is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe
This week's headline: When Your AI Breaks Your Heart.

GPT-5 arrived "better" by every metric, yet users begged for GPT-4o back. It wasn't about accuracy. It was about personality. People felt like they lost a friend. OpenAI listened, backtracked, and gave them their companion back. But should it have? Progress is messy, and heartbreak may be the price of change.

The Pain of Change
Users bond with AI like colleagues or partners, and revolt when those bonds are broken.
OpenAI faced its first true PR crisis, forcing it to act like a consumer company, not just a lab.
But longing for "the old AI" is as unrealistic as yearning for Windows 95. Change is the only constant.

The Shifting Web
Cloudflare's Matthew Prince warns: AI is killing the Web.
Perplexity's $34.5B bid for Chrome shows the fight for browser control, but the browser itself may be obsolete.
Just as Spotify freed music from CDs, AI is unbundling content from URLs and tabs. The web isn't dying; it's being liberated.

Inputs vs. Manipulation
AI's real weakness? Databases. Models still can't query live inventory, prices, or transactions.
"SEO for AI" tries to paper over this by gaming prompts, just like spammers gamed Google.
But the future isn't tricks. It's context engineering: clean data + authentic inputs.

Winners & Losers
40% of VC money is going to just 10 AI deals. The power law rules: winners take almost everything.
Geoffrey Hinton warns of AI "alien beings," but others argue that fear distracts from real infrastructure challenges, like power grids, chips, and data quality.

The Real Opportunity
Startup of the Week: Torch, a health AI that turns a decade of medical records into personalized insights.
This is the real future: integrating trustworthy data into AI, not re-skinning old personalities.

The controversy this week is simple: do we cling to the familiar, or embrace the heartbreak that comes with progress? While some mourn GPT-4o, the real story is far bigger: AI is rewriting law, health, energy, and the web itself. And it's happening whether we're ready or not. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.thatwastheweek.com/subscribe
Nathan's work at AI2—and his p(doom) ... What does “open source AI” mean? ... How Nathan taught a Llama new tricks ... Pros and cons of open sourcing AI ... Nathan's ATOM Project to boost American open models ... What's behind OpenAI's open source play? ... Geoffrey Hinton's case against open models ... Is the US-China open model rivalry really zero-sum? ... Heading to Overtime ...
Useful Resources:
1. Ben Shneiderman, Professor Emeritus, University of Maryland.
2. Richard Hamming and Hamming Codes.
3. Human-Centered AI - Ben Shneiderman.
4. Allen Newell and Herbert A. Simon.
5. Raj Reddy and the Turing Award.
6. Doug Engelbart.
7. Alan Kay.
8. Conference on Human Factors in Computing Systems.
9. Software Psychology: Human Factors in Computer and Information Systems - Ben Shneiderman.
10. Designing the User Interface: Strategies for Effective Human-Computer Interaction - Ben Shneiderman.
11. Direct Manipulation: A Step Beyond Programming Languages - Ben Shneiderman.
12. Steps Toward Artificial Intelligence - Marvin Minsky.
13. Herbert Gelernter.
14. Computers and Thought - Edward A. Feigenbaum and Julian Feldman.
15. Lewis Mumford.
16. Technics and Civilization - Lewis Mumford.
17. Buckminster Fuller.
18. Marshall McLuhan.
19. Roger Schank.
20. The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness - Jonathan Haidt.
21. John C. Thomas, IBM.
22. Yousuf Karsh, photographer.
23. Gary Marcus, professor emeritus of psychology and neural science at NYU.
24. Geoffrey Hinton.
25. Nassim Nicholas Taleb.
26. There Is No A.I. - Jaron Lanier.
27. Anil Seth on the Science of Consciousness - Episode 94 of Brave New World.
28. A 'White-Collar Blood Bath' Doesn't Have to Be Our Fate - Tim Wu.
29. Information Management: A Proposal - Tim Berners-Lee.
30. Is AI-assisted coding overhyped? - METR study.
31. RLHF, Reinforcement Learning from Human Feedback.
32. Joseph Weizenbaum.
33. What Is Computer Science? - Allen Newell, Alan J. Perlis, Herbert A. Simon.
-- Check out Vasant Dhar's newsletter on Substack. The subscription is free!
Few of the world's great scientists have given more thought to the existential threats to humanity than the irrepressible British cosmologist and astronomer Martin Rees. He's the co-founder of Cambridge University's Centre for Existential Risk as well as the author of the 2003 book Our Final Hour. So it's striking that Rees has a quite different take on the existential risk of artificial intelligence technology than many AI doomers, including yesterday's guest, the 2024 Physics Nobel laureate Geoffrey Hinton. For Rees, bio-threats and network collapse represent the most dangerous technological threats to humanity in the near future. Unlike nuclear weapons, which require massive detectable infrastructure, Rees warns, dangerous pathogens can be engineered in small, unmonitored laboratories. Meanwhile, our civilization's complete dependence on interconnected global networks means system failures could trigger catastrophic societal breakdown within days. Apocalypse now? Perhaps. But, according to the prescient Rees, we are preparing for the wrong apocalypse.

1. AI's Real Danger Isn't Superintelligence, It's System Dependency. Rees is "very skeptical" about AI takeover scenarios. Instead, he worries about our over-dependence on globe-spanning networks that control electricity grids and internet infrastructure. When these fail, whether from cyberattacks or malfunctions, society could collapse within "two or three days."

2. Bio-Threats Are Uniquely Undetectable and Unstoppable. Unlike nuclear weapons, which require massive, monitorable facilities, dangerous pathogens can be engineered in small, undetected laboratories. "Gain of function" experiments could create bioweapons far worse than COVID, and preventing this would require impossible levels of surveillance over anyone with relevant expertise.

3. We're Living Through a Uniquely Dangerous Era. Rees believes "the prospect of a catastrophe in the next 10 or 20 years is perhaps higher than it's ever been." We're the first species in Earth's history capable of changing the entire planet, for good or ill, making this a genuinely special and precarious moment.

4. Scientific Wonder Grows with Knowledge, Not Despite It. Contrary to those who claim science diminishes mystery, Rees, the co-author of an upcoming book about scientific wonder, argues that "the more we understand, the more wonderful and complicated things appear." As knowledge advances, new mysteries emerge that couldn't even be conceived decades earlier.

5. Humility About Human Limitations Is Essential. Just as "a monkey can't understand quantum mechanics," there may be fundamental aspects of reality beyond human comprehension. Rees warns against immediately invoking God for unexplained phenomena, advocating instead for accepting our cognitive limits while continuing to push boundaries.
Geoffrey Hinton, known as the "Godfather of AI" and recipient of the 2024 Nobel Prize in Physics, revealed on La W the only way humanity can survive Artificial Intelligence.
So will AI wipe us out? According to Geoffrey Hinton, the 2024 Nobel laureate in physics, there's about a 10-20% chance of AI being humanity's final invention. Which, as the so-called Godfather of AI acknowledges, is his way of saying he has no more idea than you or I do about its species-killing qualities. That said, Hinton is deeply concerned about some of the consequences of an AI revolution that he pioneered at Google. From cyber attacks that could topple major banks to AI-designed viruses, from mass unemployment to lethal autonomous weapons, Hinton warns we're facing unprecedented risks from technology that's evolving faster than our ability to control it. So does he regret his role in the invention of generative AI? Not exactly. Hinton believes the AI revolution was inevitable; if he hadn't contributed, it would have been delayed by perhaps a week. Instead of dwelling on regret, he's focused on finding solutions for humanity to coexist with superintelligent beings. His radical proposal? Creating "AI mothers" with strong maternal instincts toward humans, the only model we have for a more powerful being designed to care for a weaker one.

1. Nobody Really Knows the Risk Level. Hinton's 10-20% extinction probability is essentially an admission of complete uncertainty. As he puts it, "the number means nobody's got a clue what's going to happen," but it's definitely more than 1% and less than 99%.

2. Short-Term vs. Long-Term Threats Are Fundamentally Different. Near-term risks involve bad actors misusing AI (cyber attacks, bioweapons, surveillance), while the existential threat comes from AI simply outgrowing its need for humans, something we've never faced before.

3. We're Creating "Alien Beings" Right Now. Unlike previous technologies, AI represents actual intelligent entities that can understand, plan, and potentially manipulate us. Hinton argues we should be as concerned as if we spotted an alien invasion fleet through a telescope.

4. The "AI Mothers" Solution. Hinton's radical proposal: instead of trying to keep AI submissive (which won't work when it's smarter than us), we should engineer strong maternal instincts into AI systems, the only model we have of powerful beings caring for weaker ones.

5. Superintelligence Is Coming Within 5-20 Years. Most leading experts believe human-level AI is inevitable, followed quickly by superintelligence. Hinton's timeline reflects the consensus among researchers, despite the wide range.
Get 40% off Ground News' unlimited-access Vantage Plan at https://ground.news/airisk for only $5/month, and explore how stories are framed worldwide and across the political spectrum.

TAKE ACTION TO DEMAND AI SAFETY LAWS: https://safe.ai/act

Tyler Johnston, Executive Director of The Midas Project, joins John to break down the brand-new open letter demanding that OpenAI answer seven specific questions about its proposed corporate restructuring. The letter, published on 4 August 2025 and coordinated by the Midas Project, already carries the signatures of more than 100 Nobel laureates, technologists, legal scholars, and public figures.

What we cover:
Why transparency matters now: OpenAI is "making a deal on humanity's behalf without allowing us to see the contract." (themidasproject.com)
The Seven Questions the letter poses, ranging from whether OpenAI will still prioritize its nonprofit mission over profit to whether it will reveal the new operating agreement that governs AGI deployment. (openai-transparency.org, themidasproject.com)
Who's on board: Signatories include Geoffrey Hinton, Vitalik Buterin, Lawrence Lessig, and Stephen Fry, underscoring broad concern across science, tech, and public life. (themidasproject.com)
Next steps: How you can read the full letter, add your name, and help keep the pressure on for accountability.
Summer rewind: Greg Lindsay is an urban tech expert and a Senior Fellow at MIT. He's also a two-time Jeopardy champion and the only human to go undefeated against IBM's Watson. Greg joins thinkenergy to talk about how artificial intelligence (AI) is reshaping how we manage, consume, and produce energy, from personal devices to provincial grids, and from its rapid growth to the rising energy demand from AI itself. Listen in to learn how AI impacts our energy systems and what it means individually and industry-wide. Related links: ● Greg Lindsay website: https://greglindsay.org/ ● Greg Lindsay on LinkedIn: https://www.linkedin.com/in/greg-lindsay-8b16952/ ● International Energy Agency (IEA): https://www.iea.org/ ● Trevor Freeman on LinkedIn: https://www.linkedin.com/in/trevor-freeman-p-eng-cem-leed-ap-8b612114/ ● Hydro Ottawa: https://hydroottawa.com/en To subscribe using Apple Podcasts: https://podcasts.apple.com/us/podcast/thinkenergy/id1465129405 To subscribe using Spotify: https://open.spotify.com/show/7wFz7rdR8Gq3f2WOafjxpl To subscribe on Libsyn: http://thinkenergy.libsyn.com/ --- Subscribe so you don't miss a video: https://www.youtube.com/user/hydroottawalimited Follow along on Instagram: https://www.instagram.com/hydroottawa Stay in the know on Facebook: https://www.facebook.com/HydroOttawa Keep up with the posts on X: https://twitter.com/thinkenergypod --- Transcript: Trevor Freeman 00:00 Hi everyone. Well, summer is here, and the thinkenergy team is stepping back a bit to recharge and plan out some content for the next season. We hope all of you get some much-needed downtime as well, but we aren't planning on leaving you hanging over the next few months; we will be re-releasing some of our favorite episodes from the past year that we think really highlight innovation, sustainability and community. These episodes highlight the changing nature of how we use and manage energy, and the investments needed to expand, modernize and strengthen our grid in response to that.
All of this driven by people and our changing needs and relationship to energy as we move forward into a cleaner, more electrified future: the energy transition, as we talk about many times on this show. Thanks so much for listening, and we'll be back with all new content in September. Until then, happy listening. Trevor Freeman 00:55 Welcome to thinkenergy, a podcast that dives into the fast-changing world of energy through conversations with industry leaders, innovators and people on the front lines of the energy transition. Join me, Trevor Freeman, as I explore the traditional, unconventional and up-and-coming facets of the energy industry. If you have any thoughts, feedback or ideas for topics we should cover, please reach out to us at thinkenergy@hydroottawa.com. Hi everyone. Welcome back. Artificial intelligence, or AI, is a term that you're likely seeing and hearing everywhere today, and with good reason: the effectiveness and efficiency of today's AI, along with the ever-increasing applications and use cases, mean that in just the past few years AI went from being a little bit fringe, maybe a little bit theoretical, to very real and likely touching everyone's day-to-day lives in ways that we don't even notice. And we're just at the beginning of what looks to be a wave of many different ways that AI will shape and influence our society and our lives in the years to come. And the world of energy is no different. AI has the potential to change how we manage energy at all levels, from our individual devices and homes and businesses all the way up to our grids at the local, provincial and even national and international levels. At the same time, AI is also a massive consumer of energy, and the proliferation of AI data centers is putting pressure on utilities for more and more power at an unprecedented pace. But before we dive into all that, I also think it will be helpful to define what AI is. After all, the term isn't new.
Like me, many of our listeners may have grown up hearing about Skynet from Terminator, or HAL from 2001: A Space Odyssey, but those malignant, almost sentient versions of AI aren't really what we're talking about here today. And to help shed some light on both what AI is as well as what it can do and how it might influence the world of energy, my guest today is Greg Lindsay. Greg's bio is super neat, so I do want to take time to run through it properly. Greg is a non-resident Senior Fellow of MIT's Future Urban Collectives Lab, Arizona State University's Threatcasting Lab, and the Atlantic Council's Scowcroft Center for Strategy and Security. Most recently, he was a 2022-2023 Urban Tech Fellow at Cornell Tech's Jacobs Institute, where he explored the implications of AI and augmented reality at an urban scale. Previously, he was an urbanist in residence, which is a pretty cool title, at BMW Mini's urban tech accelerator, Urban-X, as well as the Director of Applied Research at Montreal's NewCities and Founding Director of Strategy at its mobility-focused offshoot, CoMotion. He's advised such firms as Intel, Samsung, Audi, Hyundai, IKEA and Starbucks, along with numerous government entities such as 10 Downing Street, the US Department of Energy and NATO. And finally, and maybe coolest of all, Greg is also a two-time Jeopardy champion and the only human to go undefeated against IBM's Watson. So on that note, Greg Lindsay, welcome to the show. Greg Lindsay 04:14 Great to be here. Thanks for having me, Trevor. Trevor Freeman 04:16 So Greg, we're here to talk about AI and the impacts that AI is going to have on energy, but AI is a bit of one of those buzzwords that we hear out there in a number of different spheres today. So let's start by setting the stage of what exactly we're talking about. So what do we mean when we say AI or artificial intelligence?
Greg Lindsay 04:37 Well, I'd say the first thing to keep in mind is that it is neither artificial nor intelligence: it's actually a composite of many human hands making it, and of course it's not truly intelligent either. I think there are at least two definitions for the layman's purposes. One is statistical machine learning. You know, that is the previous generation of AI, we could say: doing deep, deep statistical analysis, looking for patterns, fitting to patterns, doing prediction. There's a great book, actually, by some U of T professors at Munk called Prediction Machines, which was a great way of thinking about machine learning, in the sense of being able to do large-scale prediction at scale. And that's how I imagine Hydro Ottawa and others are using this, to model out network efficiencies and predictive maintenance and all these great uses. And then the newer, trendier version, of course, is large language models, your Claudes, your ChatGPTs, your others, which are based on transformer models, a whole series of work that many Canadians worked on, including Geoffrey Hinton and others. And this is what has produced the seemingly magical abilities to produce text and images on demand and large-scale analysis. And that is the real power-hungry beast that we think of as AI today. Trevor Freeman 05:42 Right! So different types of AI. I just want to pick those apart a little bit. When you say machine learning, it's kind of being able to repetitively look at something or a set of data over and over and over again. And because it's a computer, it can do it, you know, thousands or millions of times a second, and learn how to make decisions based on that. Is that fair to say? Greg Lindsay 06:06 That's fair to say. And the thing about that is, like, you can train it on an output that you already know. Large language models are just vomiting up large parts of pattern recognition, which, again, can feel like magic because of our own human brains doing it.
But yeah, machine learning, you can, you know, you can train it to achieve outcomes. You can overfit the models, where it's trained too much on the past, but, yeah, it's large-scale probabilistic prediction of things, which makes it so powerful for certain uses. Trevor Freeman 06:26 Yeah, one of the neatest explanations or examples I've seen is, you know, you've got these language models where it seems like this AI, whether it's ChatGPT or whatever, is writing really well. Like, you know, it's improving our writing, it's making things sound better, and it seems like it's got a brain behind it. But really, what it's doing is it's going out there saying, what have millions or billions of other people written like this, and how can I take the best things of that? And it can just do that really quickly, and it's learned that model. So that's super helpful to understand what we're talking about here. So obviously, in your work, you look at the impact of AI on a number of different aspects of our world, our society. What we're talking about here today is particularly the impact of AI when it comes to energy. And I'd like to kind of bucketize our conversation a little bit today, and the first area I want to look at is: what will AI do when it comes to energy for the average Canadian, let's say, in my home, in my business, how I move around? So I'll start with that. It's kind of a high-level conversation. Let's start talking about the different ways that AI will impact, you know, our average listener here. Greg Lindsay 07:41 Um, yeah, I mean, we can get into a discussion about what it means for the average Canadian, and then also, of course, what it means for Canada in the world as well, because I just got back from South by Southwest in Austin, and, you know, for the second, third year in a row, AI was on everyone's lips. But really, it's the energy that is the bottleneck. It's the forcing factor.
Everyone talked about it, the fact that all the data centers, and we can get into that, are going to be built in the direction of energy. So, so, yeah, energy holds the key to the puzzle there. But, um, you know, from the average Canadian's standpoint, I mean, it's a question of, like, how will these tools actually play out, you know, inside of the companies that are using this, right? And that was a whole other discussion too. It's like, okay, we've been playing around with these tools for two, three years now; what do they actually use to deliver value from your large language model? So I've been saying this for 10 years. If you look at the older stuff, you could start with, like, smart thermostats, and look at the potential savings of this, of basically using machine learning to optimize, you know, patterns of usage, understanding, you know, the ebbs and flows of the grid, and being able to, you know, basically send instructions back and forth. So, you know, there are stats that, basically, you could save 10 to 25% on electricity bills based on this, and you could reduce your heating bills by 10 to 15%. Again, it's basically using this at very large scales, the scale of Hydro Ottawa or bigger, to understand this sort of pattern usage. But even then, like, understanding how weather forecasts change, and pulling that data back in to basically make fine-tuning adjustments to the thermostats and things like that. So that one stands out. And then, you know, we can think about longer term. I mean, yeah, lots has been done on imagining, like, electric mobility, of course, huge in Canada, and what that's done to sort of change the overall energy mix; and virtual power plants, this is something that I've studied, and we've been writing about at Fast Company.
And beyond, for 20 years, imagining not just, you know, the ability to basically feed renewable electricity back into the grid from people's solar or from whatever sources they have there, but the ability of utilities to basically go in and fine-tune, to have that sort of demand shaping as well. And then I think the most interesting stuff, at least in demos, is also blockchain, which has had many theoretical uses, though I've yet to see a real one. But one of the best theoretical ones was being able to create neighborhood-scale utilities. Basically, my cul-de-sac could have one, and we could trade clean electrons off of our solar panels through our home-scale batteries, using blockchain to basically balance this out. Yeah, so there's lots of potential, but yeah, it comes back to the notion that people want cheaper utility bills. I did this piece 10 years ago for the Atlantic Council on this; we looked at a multi-country survey, and the only reason anybody wanted a smart home, which they were just completely skeptical about, was to get those cheaper utility bills. So people would pay for that. Trevor Freeman 10:19 I think it's an important thing to remember, obviously, especially for, like, the nerds like me, where part of my driver is, I like that cool new tech. I like that thing that I can play with and see my data. But for most people, no matter what we're talking about here, when it comes to that next technology, the goal is: make my life a little bit easier, give me more time or whatever, and make things cheaper. And I think especially in the energy space, people aren't putting solar panels on their roof because it looks great. And, yeah, maybe people do think it looks great, but they're putting it up there because they want cheaper electricity. And it's going to be the same when it comes to batteries. You know, there's that add-on of resiliency and reliability, but at the end of the day, yeah, I want my bill to be cheaper.
And what I'm hearing from you is, some of the things we've already seen, like smart thermostats, get better as AI gets better. Is that fair to say? Greg Lindsay 11:12 Well, yeah, on the machine learning side, you know, you get ever larger data points. This is why data is the coin of the realm. This is why there's a race to collect data on everything, why every business model is data collection and everything. Because, yes, not only can they get better, but of course, you know, you compile enough and eventually start finding statistical inferences you never meant to look for. And this is why I've been involved, just as a side note, for example, with cities that have tried to implement their own data collection of electric scooters and eventually electric vehicles, so they could understand these kinds of patterns. It's really the key to anything. And so it's that efficiency throughput which raises some really interesting philosophical questions, particularly about AI. Like, this is the whole discussion on DeepSeek: if you make the models more efficient, do you have a Jevons paradox, which is the paradox that, like, the more energy you save through efficiency, the more you consume because you've made it cheaper? So what does it mean that, you know, Canadian energy consumption is likely to go up the cleaner and cheaper the electrons get? It's one of those bedeviling sort of functions. Trevor Freeman 12:06 Yeah, interesting. That's definitely an interesting way of looking at it. And you referenced this earlier, and we'll talk about this. But at the macro level, the amount of energy needed for these, you know, AI data centers in order to do all this stuff is, you know, we're seeing that explode. Greg Lindsay 12:22 Yeah, I don't know that.
Canadian statistics at my fingertips, but I brought this up at Fast Company: the IEA, the International Energy Agency, reported 4.3% growth in global electricity demand last year, and it's expected to be around 4% this year. That doesn't sound like much, but it's the equivalent of Japan. We're adding a Japan to the grid every year for at least the next two to three years. Wow. A lot of that is the Global South, air conditioning and other needs, but the data centers on top are the tip of the spear. It's changed consumption behavior to the point where we're seeing mothballed coal plants and new plants and Three Mile Island come back online, in this race to lock up electrons for, basically, the race to build God. The number of people in AI who think they're literally going to build godlike intelligences: they won't stop at any expense, and so they will buy as much energy as they can get. Trevor Freeman 13:09 Yeah, we'll get to the grid side of things in a minute. Let's stay at the home first. When I look at my house, we talked about smart thermostats, and we're seeing more and more home automation: we can program our lights and our door locks and all that kind of stuff. What does AI do to make sure that stuff contributes to efficiency? I want to do all those fun things but use the least amount of energy possible. Greg Lindsay 13:38 Well, there are various ways to do that, to basically program your lights. Google Nest is an example of this, in terms of learning your ebb and flow and then figuring out how to optimize it over the course of the day. So you can do that. And at the home level we've seen not only the growth in solar panels but also home battery integration.
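The "adding a Japan every year" claim is easy to sanity-check with rough public figures. The numbers below are approximations for illustration, not the IEA's exact ones:

```python
# Back-of-envelope check of the "adding a Japan every year" claim
# (approximate public figures; treat them as rough assumptions).
global_demand_twh = 30_000   # world electricity demand, ~TWh/year
growth_rate = 0.043          # ~4.3% annual growth reported by the IEA
japan_demand_twh = 1_000     # Japan's annual electricity use, ~TWh

added_twh = global_demand_twh * growth_rate
print(f"Added demand: {added_twh:.0f} TWh, about "
      f"{added_twh / japan_demand_twh:.1f} Japans")  # → about 1.3 Japans
```

So at roughly these magnitudes, a 4% growth year really does add on the order of one Japan's worth of annual consumption to the grid.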
I was looking at how well the Tesla Powerwall was doing in Canada, at least until the last couple of months, and it's been heartening to see that embrace of home energy integration: being able to level out peak flow off the grid, to draw on your own local resources at moments of peak demand and reduce the overall strain. So there's been interesting stuff there. But I want to focus for a moment on new uses, going back to how AI will influence the home and automation. Jensen Huang of Nvidia has talked about how this will be the year of robotics. Google just applied its Gemini models to robotics. There are startups like Figure, and there's Tesla with its Optimus, and there's a whole strain of thought that we're about to see home robotics, a dream from the 50s, very Disney World, Epcot Center, the Jetsons: home robots doing the work. You can see concept videos of a Figure robot doing the actual vacuuming; I mean, we invented Roombas for this. But I've also done a lot of work and thinking around electric delivery vehicles. We could talk a lot about drones, or about the little robots that deliver meals on the sidewalk. There's a lot of money and business models in increasing access, in people needing to drive less and make fewer trips because things are brought to them. That's a form of home automation too, and it's all batteries, all stuff off the grid. AI is what will enable those things, things that can think and move and fly and do services on your behalf, so we might find a huge new source of demand from that as well.
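The peak-shaving behavior described above, drawing on a home battery at moments of peak demand, can be sketched in a few lines. Everything here is hypothetical: the load profile, the 5 kW threshold, and the battery size.

```python
# Minimal peak-shaving sketch: discharge a home battery whenever grid
# demand would exceed a threshold. Hourly steps, so kW drawn for one
# hour equals kWh taken from the battery. All numbers invented.

def shave_peaks(load_kw, battery_kwh, threshold_kw, max_rate_kw=5.0):
    """Return hourly grid draw after covering peaks from the battery."""
    grid = []
    charge = battery_kwh
    for load in load_kw:
        # Discharge only the excess over the threshold, limited by the
        # inverter rate and the remaining charge.
        discharge = min(max(load - threshold_kw, 0.0), max_rate_kw, charge)
        charge -= discharge
        grid.append(load - discharge)
    return grid

# One evening's hourly load (kW), with a cooking/heating peak at hours 2-3.
profile = [2.0, 3.5, 8.0, 9.0, 4.0, 2.5]
print(shave_peaks(profile, battery_kwh=6.0, threshold_kw=5.0))
# → [2.0, 3.5, 5.0, 6.0, 4.0, 2.5]
```

The 9 kW peak seen by the grid drops to 6 kW; an AI controller's job is essentially to choose the threshold and charging schedule better than a fixed rule like this one.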
Trevor Freeman 15:29 Yeah, I hadn't really thought about the idea that all these conveniences, being able to summon them to our homes, cause us to move around less, which also impacts transportation. That's another area I want to get to, and I know you've talked a little bit about e-mobility. Where do you see that going? And how does AI accelerate that transition, or accelerate things happening in that space? Greg Lindsay 15:56 Yeah, obviously the EV revolution is here, and Canada is one of the epicenters, along with Norway, and it still has vehicle rebates and the like. I'm here in Montreal, where I think EVs are something like 30% of sales, and we've got our 2035 mandate. So you see this push to harness all of Canada's clean, mostly hydro electricity to do this and reduce dependence on fossil fuels, whether for climate politics reasons or just variable energy prices. All of that matters. But I think the key to the electric mobility revolution is how it's going to merge with AI. It's not just going to be the autonomous self-driving car, which is sort of the horseless carriage of autonomy. It's going to be all this other stuff. My friend Dan Hill was in China, thinking about electric scooters, and I mentioned this to Hydro Ottawa: the electric scooter is one of the leading ways we've taken internal combustion engine vehicles offline across the world, mostly in China, and put people on clean electric motors.
What happens when you take those and make them autonomous, welding together something like DeepSeek and some cameras? You could have a world with a lot more stuff in motion, and not just this world where we have to drive as much. That to me is really exciting, because it changes urban patterns, development patterns, how you move around, all those kinds of things. That might be a little farther out, but this big push to build out domestic battery industries, charging points, and that sort of infrastructure is going to go in that direction, and it doesn't look anything like a sedan or an SUV that just happens to be electric. Trevor Freeman 17:33 I think the step change is changing the drivetrain of the existing vehicles we have, from internal combustion to a battery. The exponential change is exactly what you're saying: it's rethinking the whole thing. Greg Lindsay 17:47 Yeah, Ramez Naam and others have pointed this out. It's really funny to see this pushback on EVs. I love a good roar of an internal combustion engine myself, but as Naam, who is an energy analyst, has pointed out, EVs became cost-competitive with ICE cars around 2018, nearly a decade ago. The efficiency of electric motors, particularly with regenerative braking, just blows away the cost curves of internal combustion, to the point where ICE cars will become the equivalent of keeping a thoroughbred around the house. It's that overall efficiency of the drivetrain. And to me the interesting thing about both electric motors and autonomy is that they're general purpose technologies.
They get cheaper and smaller as they evolve under Moore's Law and other such laws, so they get applied to more and more stuff. Trevor Freeman 18:32 Yeah. And once we figure that out, and we're close to it if not already there, it opens the door to those other things you're talking about. Does everybody need that car in their driveway? Are we rethinking how we do transportation in general? Do we need a delivery truck, or can it be a delivery scooter? What does that look like? Greg Lindsay 18:54 Well, we've had a lot of those discussions for a long time, particularly in the mobility space, with ride-hailing. That was always the big pitch of Uber: your car is parked in your driveway 94% of the time, so what happens if you can have mobility without it? Well, we've had 15 years of Uber and these kinds of services, and we still have as many cars; people just take ride-hailing on top. It's additive. And I raise this notion of more and more options, more availability, more access, because the same thing seems to be going on with energy now too. If listeners have been following along, there was the conversation in Houston at CERAWeek a week or two ago about energy realism. And there's a new book out, More and More and More, which is all about the fact that we've never had an energy transition. We just keep piling up sources. The world burned more biomass last year than it did in 1900; it burned more coal last year than it did at the supposed peak of coal. These ages don't really end. They just become strata as we keep piling new energy on top. And I'm trying to sound the alarm about what it means for climate change if we never have an energy transition.
The similar thing here is the rebound effect, the Jevons paradox, named after William Stanley Jevons, who noted in his book The Coal Question that England was going to need more and more coal even as it used it more efficiently. So it's a sobering thought. But it's a glass half full, half empty in many ways, because the half full part is increasing technological options, increasing changes in lifestyle; you can live the various ways you want. But I don't know if any of it ever really goes away. We just get more and more stuff. Trevor Freeman 20:22 Exactly. And that echoes what you said about the robotics side of things at home: definitely more. Okay, so we've talked about home automation and transportation, how we get around. What about energy management? We'll talk about the utility side again in a little bit, but at my house, for my own personal use, what's the role of machine learning and AI in helping me manage my own energy better and make better decisions about it? Greg Lindsay 20:57 Yeah, this is where it comes in again, and I'm less of an expert here, but I've been following the discourse evolve. It's the idea of creating this set of tools in your home, whether it's solar panels or batteries or bidirectional flow to the grid, however it works, and then giving people this option of savings, and perhaps other marketing messages, to curtail behavior. I think the short answer to the question is that it's an app people want: an app that tells them how to increase the efficiency of their house.
And I should note that this has long been the key insight in energy and the clean tech revolution. Amory Lovins has this great line, which I've always loved: people don't want energy, they want hot showers and cold beer. How do you deliver those things through some combination of sticks and carrots? Hence Powerwalls and other AI-controlled batteries that smooth out the flow of electrons into your house into something optimal, whether that's coming directly off the grid or out of your backup, which then recharges at off-peak times. The surveys show more than half of Canadians are interested in this stuff, though they don't really understand it. I've got one stat here: 61% are interested in home energy tech, but only 27% understand how to optimize it. So people need more help, perhaps, in handing that over. And what's exciting at the utility level is that you can aggregate all that individual behavior together and get models of it at both greater scale and ever finer granularity. People have gamified it, too. I think I saw that the Affordability Fund Trust tried to gamify energy apps, and it created various savings. But a lot of this is going to be a combination of UX design and incentive design, and offering this to people with reasons why you should want it. Money is one reason, but maybe there are others.
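One concrete form of the "app that just tells you what to do" idea is recommending a rate plan from hourly usage. This is a made-up sketch: the plan names, prices, and time-of-use windows are invented for illustration, not any utility's actual tariffs.

```python
# Hypothetical rate-plan recommender: score each plan against a
# customer's hourly usage and pick the cheapest. All tariffs invented.

PLANS = {
    # plan -> $/kWh for (off_peak, mid_peak, on_peak)
    "time-of-use": (0.074, 0.102, 0.151),
    "flat": (0.103, 0.103, 0.103),
}

def period(hour):
    """Map an hour of day to a pricing period (invented windows)."""
    if 19 <= hour or hour < 7:
        return 0          # off-peak
    if 11 <= hour < 17:
        return 2          # on-peak
    return 1              # mid-peak

def cost(plan, hourly_kwh):
    prices = PLANS[plan]
    return sum(kwh * prices[period(h % 24)] for h, kwh in enumerate(hourly_kwh))

def best_plan(hourly_kwh):
    return min(PLANS, key=lambda p: cost(p, hourly_kwh))

# A night-heavy user (e.g. EV charging overnight) lands on time-of-use.
usage = [3.0 if period(h) == 0 else 0.5 for h in range(24)]
print(best_plan(usage))  # → time-of-use
```

An LLM front end would sit on top of exactly this kind of scoring, turning "what plan should I be on?" into a call to `best_plan` plus an explanation of the result.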
Trevor Freeman 22:56 Yeah, in the utility sphere we talk about how customers don't want all the data so they can go make their own decisions. They want those decisions made for them. They want to say: tell me the best rate plan to be on, and automatically switch me to the best rate plan when my consumption patterns and behavior change. That doesn't exist today, but the fast decision-making that AI brings will let it become a reality sometime in the future. Greg Lindsay 23:29 And in theory this is where LLMs come into play. What excites me most is having, for the first time, a true natural language interface: being able to converse with an AI, hopefully not a chatbot (I think we're moving on from chatbots), but some instantiation of an AI, and ask, what plan should I be on? Can you tell me what my behavior looks like? And actually have a real language conversation with it. Not decision trees, not if-then statements, not chatbots. Trevor Freeman 23:54 Yeah, absolutely. Okay, so we've teased around this idea of looking at the utility level. Obviously at Hydro Ottawa, as you referenced just a minute ago, we look at all these individual cases, every home with automation or solar or storage, and we want to aggregate that and understand what we can do to help manage the grid, manage all these new energy needs, shift things around. So let's talk a little bit about the role AI can play at the utility scale in helping us manage the grid. Greg Lindsay 24:28 All right, well, there are a couple of ways to approach it. One, of course, is to go back to smart meters.
I don't know how many Hydro Ottawa has, but I think BC Hydro has about 2 million of them. Sometimes they get politicized, because this gets back to the question of just how much nanny state you want. But when you reach the millions, you're able to get real-time usage, real-time understanding, and if you can then do the grid management piece where you push back, it's a real game changer. BC Hydro is pulling in, I read, something like 200 million data points a day. That's a lot to train models on. I don't know exactly the kind of savings they're getting, but you can imagine them, or Toronto Hydro, or Hydro Ottawa and others, creating all these monitoring points. And this is the thing that bedevils me philosophically about modern life: the notion of, I don't want you collecting data on me at all times, but look at what you can do if you do. It's that constant push and pull between privacy and agency on one side and the sheer power of statistics on the other. But yes, at the grid level you can do the same thing, and predictive maintenance is the obvious use. I've been writing about this for large enterprise software companies for 20 years: building up these data points, modeling out the lifetime of various important pieces of equipment, making sure you replace them before you have downtime and terrible things happen. As we're discussing this, look at poor Heathrow Airport. I'm so glad I'm not flying today: an electrical substation failure taking the world's most important hub offline for days.
So that's where predictive maintenance comes in. And then there's modeling out energy flow to prevent grid outages, whether that's the ice storm here in Quebec, in April 2023 I think, coming up on two years ago, not the big one, but the one where we had big downtime across the grid: basically monitoring for that. And I think the other big one for AI is this notion of decision support, of providing scenarios and modeling out their potential at scale. I don't know of this in a grid case, but the most interesting piece I wrote for Fast Company 20 years ago was an example of it: a fledgling air taxi startup that combined an agent-based model, using primitive AI to give simple rules to individual agents and model how they would behave (you can create much more complex models now; we'll get to agents), with this kind of predictive maintenance and operations piece. Marrying the two together, you could have a company that didn't yet exist but could model itself in real time, a day in its life, over and over; you can run millions and millions of Monte Carlo operations. And I think that's where both sides of AI truly come together: the large language models and agents on one side, and predictive machine learning on the other. Hydro Ottawa or others could build a sort of deep time machine where you model out all of these scenarios, millions of years' worth, to understand how it flows and where the contingencies are. So, say something happens.
Then not only do you have a set of plans, you have an AI that has run a million versions of those plans and can imagine the potential next steps, or where to deploy resources. In general I think that's the most powerful use of this, going back to Prediction Machines: being able to really model time in a way we've never had the capability to before. And you can probably imagine the uses better than I can. Trevor Freeman 27:58 Oh man, it's super fascinating, and it's timely. At Hydro Ottawa we've just gone through an exercise of updating our playbook for emergencies. When there are outages, what kind of outage is it, and what are the trigger points to go from what we call a level one to a level two to a level three? All of that is people-hours going into thinking through these scenarios, and we've got a handful of them. You're making me think: what if we could model that out? You bring up this concept of agents; let's tease into that a little. Explain what you mean when you talk about agents. Greg Lindsay 28:36 Yeah, so agentic systems, as the term of art goes, are AI instantiations that have some level of autonomy. The archetypal example is the Stanford Smallville experiment, where they took about two dozen large language model agents and gave them an architecture with a bit of backstory, letting them ruminate, reflect, decide, and then act. In this case, they used the agents to plan a Valentine's Day party. It played out in simulated time, and the LLM agents even played matchmaker. They organized the party, sent out invitations, did all these things. It was very cute. The researchers put it out open source, and about three weeks later another team basically put the agents to work writing software programs. They organized their own workflow.
They made their own decisions. There was a CTO. They fact-checked their own work. And this is evolving into a grand vision of thousands, millions of agents: just as today you spin up an instance of Amazon Web Services to host something in the cloud, you'll spin up an agent. Nvidia has talked about doing this with healthcare and others. Coming back to the energy implications, it changes the whole pattern: instead of huge training runs requiring giant data centers, it's these agents making all these calls and doing more of the work at the edge. But the question here is what you can put the agents to work doing. And I bring this up, back to predictive maintenance for Hydro Ottawa, because there's another amazing paper called Virtual in Real Life, and I chatted with one of the principal authors. It created a half dozen agents who could play tour guide, direct you to a coffee shop, those sorts of things, but they weren't doing it in a virtual world. They were doing it in the real one. To do that, they took the agents, gave them a machine vision capability so they could recognize objects, and then set them loose inside a digital twin of the world, in this case something very simple: Google Street View. In the paper, they could go into New York's Central Park and count every park bench and every waste bin, do it in seconds, and be 99% accurate. So agents were monitoring the landscape. And you can imagine this in the real world too: AIs roaming the world, roaming these virtual maps, these digital twins that we build for them and constantly refresh with camera data, sensor data, and other feeds, telling us what's out there.
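Stripped of the LLM, the agent architecture described here (backstory, observation, reflection, decision, action) reduces to a loop like the following toy sketch. The class and its stub policy are purely illustrative, not any real framework's API.

```python
# Toy version of the observe -> reflect -> decide -> act loop used in
# agentic systems. No LLM involved; the "policy" is a stub you'd swap
# out for a model call in a real system.

class Agent:
    def __init__(self, name, backstory):
        self.name = name
        self.memory = [backstory]

    def observe(self, event):
        self.memory.append(event)

    def reflect(self):
        # Real systems summarize memory with an LLM; here we keep it all.
        return " | ".join(self.memory)

    def decide(self, goal):
        # Stub policy: act only if something in memory relates to the goal.
        context = self.reflect()
        return f"act:{goal}" if goal in context else "wait"

scout = Agent("scout", backstory="monitors transformer temperatures")
scout.observe("transformer T-17 temperature anomaly")
print(scout.decide("anomaly"))  # → act:anomaly
```

The Smallville-style systems Lindsay mentions add an LLM at the `reflect` and `decide` steps; the control flow around it is essentially this loop.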
And again, to me it's really exciting, because that's finally an operating system for the Internet of Things that makes sense, one that's not so hardwired. You can ask agents: can you go out and look for this for me? Can you report back on this vital system? And they'll be able to hook into all these representations of real-time data as they emerge and give you aggregated reports. I think we have more visibility into the real world, in real time, than we've ever had before. Trevor Freeman 31:13 Yeah, I want to connect a few dots here for our listeners, so bear with me for a second, Greg. There was a podcast episode we did about a year ago on our grid modernization roadmap, where we talked about one of the things we're doing with grid modernization at Hydro Ottawa, and utilities everywhere are doing this: increasing the sensor data from our grid. Right now we've got visibility down to our station level, sometimes one level further down to some switches. But in the future we'll have sensors everywhere on our grid; every switch, every device will have a sensor gathering data. Like you said earlier, that's millions or hundreds of millions of data points coming in every second. No human can make decisions on that. What you're describing is: now that we've got all these data points, this network of information out there, you could create an agent and say, you're my transformer agent. Go look at the running temperature of every transformer on the network and tell me where the anomalies are, which ones are running half a degree or two degrees warmer than they should be, and report back.
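The "transformer agent" described here is, at its core, anomaly detection over a fleet of temperature readings. A minimal statistical version, with invented readings and an arbitrary z-score threshold, might look like this:

```python
# Sketch of the transformer-temperature check: flag units running
# warmer than their fleet's norm. Readings and threshold are invented.
from statistics import mean, stdev

def anomalies(readings, z_threshold=2.0):
    """Return ids whose temperature z-score exceeds the threshold."""
    temps = list(readings.values())
    mu, sigma = mean(temps), stdev(temps)
    return sorted(t for t, temp in readings.items()
                  if (temp - mu) / sigma > z_threshold)

fleet = {"T-01": 61.2, "T-02": 60.8, "T-03": 61.0, "T-04": 60.9,
         "T-05": 61.1, "T-06": 64.5}   # T-06 runs hot
print(anomalies(fleet))  # → ['T-06']
```

A real agent would compare each unit against its own history, load, and ambient temperature rather than a single fleet snapshot, but the core operation, ranking deviations and reporting the outliers, is the same.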
And now Hydro Ottawa, the controller, the person sitting in the room, knows: hey, we should probably roll a truck and check on that transformer, because maybe it's reaching end of life, maybe it's about to go. And you can do that across the entire grid. That's really fascinating. Greg Lindsay 32:41 And it's really powerful, because we had these conversations 20 years ago around IoT: you'd have statistical triggers, you'd aggregate the data coming off the equipment, and there was a lot of discussion, but it was still very hardwired and, I guess, crudely probabilistic. With agents, you've now created an actual thing that can watch those numbers and aggregate from other systems. There's lots of potential there that hasn't quite been realized, but it's really exciting stuff, and it's where the whole direction of the industry is flowing. It's on everyone's lips: agents. Trevor Freeman 33:12 Yeah. Another term you mentioned a little bit ago that I want you to explain is a digital twin. Tell us what a digital twin is. Greg Lindsay 33:20 A digital twin is, well, the Matrix, perhaps, for listeners of a certain age. The digital twin is the idea of creating a model of a piece of equipment, of a city, of the world, of a system. Importantly, it's physics-based. It's ideally meant to represent and capture the real-time performance of the physical object it's based on, so that when something happens in the physical incarnation, it triggers a corresponding change in state in the digital twin, and then vice versa.
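That definition, physical state mirrored into a virtual model and virtual changes pushed back out, is straightforward to express as a data structure. Here is a minimal hypothetical sketch; real twins add physics models, live telemetry, and command validation on top of this shape.

```python
# Minimal digital-twin sketch: a virtual object that mirrors state
# changes from its physical counterpart, and queues virtual changes
# as commands back to the device. Asset names and fields invented.

class DigitalTwin:
    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.state = {}
        self.commands = []          # changes to push back to the device

    def sync_from_physical(self, sensor_reading):
        """Telemetry from the real asset updates the twin's state."""
        self.state.update(sensor_reading)

    def push_to_physical(self, setting, value):
        """A change made virtually becomes a command for the real asset."""
        self.state[setting] = value
        self.commands.append((setting, value))

twin = DigitalTwin("feeder-12")
twin.sync_from_physical({"load_kw": 480, "breaker": "closed"})
twin.push_to_physical("breaker", "open")    # virtual change -> actuation
print(twin.state, twin.commands)
```

The two methods are the two arrows in the definition: physical-to-virtual sync, and virtual-to-physical feedback.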
In theory you could have feedback loops, a lot of IoT stuff here, where if you make changes virtually, it causes a change in the behavior of the system or equipment. And the scale can range from factory equipment (Siemens, for example, does a lot of digital twin work, and big software companies like SAP have thought about this) up to the really crazy stuff Nvidia is proposing. First they started with a digital twin they very modestly called Earth-2, where they were going to model all the weather and climate systems of the planet down to the block level. There's a great demo of Jensen Huang walking you through a typhoon striking Taipei 101, how the wind currents affect the various buildings there, and how they would change. More recently, at their big tech investor day, Nvidia partnered with General Motors and others on autonomous cars. What's crucial is that they're going to train those autonomous vehicles inside an Nvidia-built digital twin, a matrix populated by agents that act like people, or people-ish, and they'll be able to run millions of years of autonomous vehicle training in it. This is how they plan to catch up to Waymo, or to Tesla's robotaxis if those are ever real. Waymo trained on real-world streets, which is why it can only operate in certain operational design domains. Nvidia is gambling that with large language models and transformer models combined with digital twins, you get huge leapfrog effects: you can train all sorts of synthetic agents on real-world behavior that you've modeled inside the machine.
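The "millions of Monte Carlo runs" idea from a few exchanges back can be illustrated at toy scale: simulate many synthetic storms against a model grid and estimate how often an emergency playbook's escalation trigger would fire. The feeder count, failure probability, and trigger level below are all invented.

```python
# Hypothetical Monte Carlo sketch: how often would outages escalate
# past a staffing trigger? All parameters are invented for illustration.
import random

def simulate_storm(rng, n_feeders=50, fail_prob=0.06):
    """One synthetic storm: count independently failing feeders."""
    return sum(rng.random() < fail_prob for _ in range(n_feeders))

def escalation_rate(trigger=6, runs=50_000, seed=42):
    """Fraction of simulated storms exceeding the escalation trigger."""
    rng = random.Random(seed)
    hits = sum(simulate_storm(rng) >= trigger for _ in range(runs))
    return hits / runs

print(f"P(escalation) ~ {escalation_rate():.3f}")
```

Independent feeder failures are a crude assumption (real storms fail correlated spans), but even this toy shows the shape of the exercise: a playbook trigger becomes a measurable probability you can tune before the storm arrives.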
So again, that's exactly the kind of environment you're going to train your grid of the future in, for modeling out all your contingency scenarios. Trevor Freeman 35:31 Yeah, to bring this to our context: a couple of years ago we had the derecho, the big, massive windstorm that was one of the most damaging storms in Ottawa's history. We've made some improvements since then, and we've actually had some great performance since then. Imagine if we could model that derecho hitting our grid from a couple of different directions and figure out which lines are more vulnerable to wind speeds, which are more vulnerable to flying debris and trees, and then go address that, without having to wait for a storm that hits once a decade or longer. The other use case we've talked about on this podcast is modeling what's happening underground. In urban environments like Ottawa, or Montreal where you are, there's tons of infrastructure under the ground: sewer pipes, water pipes, gas lines, electrical lines. Every time the city wants to dig up a road and replace that road or that sewer, they have to know what's underground. We want to know too, because as the electric utility our infrastructure is under there. Imagine a model where it's not just a map: you can actually see what's happening underground, determine what makes sense to go where, and model out different scenarios, like what happens if we underground this line or that one. So lots of interesting things when it comes to a digital twin. The digital twin and agent combination is really interesting as well: setting those agents loose on a model they can play with and understand and learn from. Talk a little bit about that. Greg Lindsay 37:11 Yeah.
Well, there are a couple of interesting implications just for the underground equipment. One is that in addition to capturing that data through mapping and having agents that can talk about it, you can imagine augmented reality coming in. I've done some work with XR, and this is what we're seeing: Meta has shown off its Orion concept, Google has brought back Android XR, and Meta Ray-Bans are kind of an example of this. That's where this data will come from: people wearing these wearables in the world, capturing all this camera data that gets fed into these digital twins to refresh them. Meta has a particularly scary demo where the wearer leaves their keys on the coffee table and asks Meta's AI where the keys are, and it knows, tells them, and shows them some data about it. To do that, Meta has to have a complete real-time map of your entire house. What could go wrong? And that's the map of reality all these companies aspire to. But you can imagine a worker using this. I've worked with a startup out of URBAN-X, a Canadian startup called Contextere, and the idea is having real-time instructions and knowledge manuals available to workers, particularly predictive maintenance workers and line workers.
So you can imagine a technician dispatched to deal with a cut in the pavement, able to see, through an XR overlay from the digital twin, what's actually under there, with an AI interfacing with the work order and acting as an assistant that can walk you through the job in case you run into some complication. Hopefully it doesn't become turn-by-turn directions for life; that gets into questions about what we want out of our workforce. But there are some really interesting combinations of these things: mapping the world for AIs, AIs that can understand it, that you can ask questions of, that can go probe it, that can give you advice on what to do in it. All of those things are very close, for good and for bad. Trevor Freeman 39:03 You kind of touched on my next question, which is: how do we make sure this is all in the for-good, or mostly for-good, category and not the for-bad category? In one of the papers you wrote about AI and augmented reality, you talk about them really expanding the attack surface for malicious actors. We're creating more opportunities for hacking, malware, or just people who are up to nefarious things. How do we protect against that? How do we make sure our systems are safe, that the users of our system, our customers in our case, have their data kept safe, and that the grid is safe? Greg Lindsay 39:49 Well, the very short version is: whatever we're spending on cybersecurity, we're not spending enough. And honestly, for everybody who is no longer learning to code because we can ask Claude or ChatGPT to do it, there should be a whole campaign to repurpose a big chunk of tech workers into cybersecurity, into locking down these systems, into training ethical systems.
There's a lot of work to be done there. But yeah, that's been the theme I've seen for 10 years, going back to that paper I mentioned about smart homes and the Internet of Things, and why people would want a smart home. The reason people were skeptical is that they saw it as basically a giant attack vector. My favorite saying about this: there's the famous Arthur C. Clarke line that any sufficiently advanced technology is indistinguishable from magic. Tobias Revell, who is now head of foresight at Arup, has this great line: any sufficiently advanced hacking will feel like a haunting. Meaning, if you're in a smart home that's been hacked, it will feel like you're living in a haunted house. Lights will flicker on and off, and systems will go haywire. It'll be like you're living in a possessed house. And that's true of cities or any other system. So we need to do a lot of work on locking that down and securing that data. As we identified then, it has to go all the way up and down the supply chain: you have to make sure there's a chain of custody going back to when components are made. Take the attacks on Nest, for example. To take over a Google Nest, you have to take it off the wall and unscrew the back of it, which is a good thing; not that many people are prying open our thermostats. But if you can get your hands on these systems you can do a lot, and you can do it earlier in the supply chain with infected components. So there's a lot to be done there. And then there's the question of making sure the AIs are ethically trained and reinforced. And if a few listeners want to scare themselves: 
You can go read some of the stuff leaking out of Anthropic and others about Claude: models that try to hide their own alignment and basically try to copy themselves. Again, I don't believe these things are alive or intelligent, but they exhibit these behaviors as part of their probabilistic nature, and that's kind of scary. So there's a lot to be done there. We've worked on this at the group I do foresight with, Arizona State University's Threatcasting Lab. We've done some work for the Secret Service and for NATO, and yeah, there will be large-scale hackings of infrastructure, basically the equivalent of a weapons-of-mass-destruction attack. We saw how Russia targeted the Ukrainian grid in 2015 and hacked their nuclear plants. This is essential infrastructure, more important than ever given global geopolitics, to say the least, so that needs to be under consideration. And I don't know, did I scare you enough yet with the things we've talked through here? To say nothing of people being tricked and incepted by their AI girlfriends and boyfriends, people turning to AI companions. I can't possibly imagine what could go wrong there. Trevor Freeman 42:29 I mean, whether it was 15 or 20 or maybe even 25 years ago now, it required a whole new level of understanding when we went from a completely analog world to a digital world and living online, and people, I would hope, to some degree learned to be skeptical of things on the internet. This is that next level: we now need to learn the right way of interacting with this stuff. And as you mentioned, building ethical code and ethical guidelines into these language models, into the AI learning, is pretty critical. For our listeners, we do have a podcast episode on cybersecurity. 
I encourage you to go listen to it and reassure yourself that, yes, we are thinking about this stuff. And thanks, Greg, you've given us lots more to think about in that area as well. Coming back to utilities and managing the grid, one thing we're going to see (and we've talked a lot about this on the show) is a lot more distributed generation. The days of central, large-scale generation and long transmission lines being the only generation on the grid are ending. We're going to see more distributed generation: solar panels on roofs, batteries. How does AI help a utility manage those better, interact with those better, and get more value out of them? Greg Lindsay 43:51 I guess that's an extension of some of the trends I was talking about earlier, the notion of being able to model complex systems. That's effectively it, right? You've got an increasingly complex grid with complex interplays within it. Based on real-world performance, you figure out where there are correlations and codependencies in the grid, where choke points could emerge, where overloading could happen. Then you build a predictive system to look for the kind of complex emergent behavior that comes out as you keep adding to it. And not just based on real-world behavior: you can dial it up to 11, so to speak, and imagine scenarios, what long-term scenarios look like in terms of how the mix changes, how the geography changes, all those sorts of things. 
So, yeah, I don't know how that plays out in the short term, but it's this combination of all these different components: playing SimCity for real, if you will. Trevor Freeman 44:50 And being able to do it millions and millions of times in a row, to learn every possible iteration and every possible thing that might happen. Very cool. Okay. The last area I want to touch on (you mentioned this at the beginning) is the overall power implications of AI and these massive data centers. Obviously, at the utility, that's something we are all too keenly aware of. The stat I find really interesting is that a ChatGPT query requires roughly 10 times the energy of a normal Google search that just looks something up in a database. Do you see this continuing, with AI just using more power to do its decision-making, or will we start to see more efficiencies, with data centers getting better at doing what they do with less energy? What does the future look like in that sector? Greg Lindsay 45:55 All of the above. More, more, more is the trend, as far as I can see, and for every decision-maker involved in it. Again, Jensen Huang brought this up at the big Nvidia conference: he basically sees the only constraint on this continuing being the availability of energy supplies to keep it going. At South by Southwest, and in some other conversations I've had with bandwidth companies and telcos: Lumen Technologies is laying 20,000 new miles of fiber optic cable in the United States. They've bought 10% of Corning's total fiber optic output for the next couple of years, and their customers are the hyperscalers. They're rewiring the grid. That's why I think it's interesting. 
There's something here, of course, for thinking about utilities. The point-to-point internet of packet switching meant laying down big fiber routes, which is why the majority of the big data centers in the United States are in Northern Virginia: it all goes back to the network hub there. Well, Lumen is now wiring this giant fabric, this patchwork, which can connect data center to data center, AI to AI, and cloud to cloud, creating an entirely new environment in which they are all directly connected to each other through dedicated fiber. So you can see how this whole pattern is changing. And the same people tell me that where they're going to build this fiber (they wouldn't tell me exactly where, because it's valuable proprietary information) is following the energy supplies. It's following the energy corridors: to the American Southwest, where there's solar and wind; to Texas, where you can get natural gas; to wherever you can get all these things. And I assume, of course, the same is true in Canada as we build out our own sovereign data center capacity. Even with DeepSeek, for example, the hyper-efficient Chinese model that spooked the markets back in January ("What do you mean we don't need a trillion dollars in capex?"), everyone is quite confident, including again Jensen Huang and everybody else, that more efficient models will increase usage. Jevons paradox will play out once again, and we'll see ever more of it. To me, the question is how it changes. And let's be clear: this is a bubble. Data centers are a bubble, just like railroads in the 1840s were a bubble. 
And there will be a bust; not everyone's investments will pencil out. The infrastructure will remain, maybe it'll get cheaper, and we'll find new uses for it, but it will eventually bust at some point. That's what's interesting to me about DeepSeek and more efficient models: who's going to make the wrong investments in the wrong places at the wrong time? But we will see as it gathers force. And agents, as I mentioned, don't require as many of these monstrous training runs at city-sized data centers. Meta wanted to spend $200 billion on a single complex; the OpenAI-Microsoft Stargate project is $500 billion; Oracle's Larry Ellison said that $100 billion is table stakes, which is just crazy to think about. And he's permitting three nukes on site, so there you go. It'll be fascinating to see if we get a new generation of private generation, harkening all the way back to the early electrical grid, when companies created their own power plants on site. Nicholas Carr wrote a good book about that, about how the early electrical grid foreshadowed how the cloud would play out, and they played out very similarly. The AI cloud seems to be playing out a bit differently. So yeah, I imagine that as well. But as inference happens at the edge, we'll need more distributed generation, because you're going to have AI agents spending more time at the point of request, whether that's a laptop, your phone, a light post, or your autonomous vehicle, and that's going to need more generation and charging at the edge. That, to me, is the really interesting question. 
When these current-generation models hit their limits, then, just as with Moore's law, you have to find other efficiencies in designing chips or designing AIs. How will that change the relationship to the grid? I don't think anyone knows for sure yet, which is why they're racing to lock up as many long-term contracts as they possibly can and corner the market. Trevor Freeman 49:39 Yeah, it's just another example of something that comes up in a lot of the topics we cover on this show. Everything, obviously, is related to the energy transition. But the energy transition is not just changing fuel sources, like we talked about earlier; it's not just going from internal combustion to a battery. It's rethinking the relationship with energy and rethinking how we do things. And you bring up more private, massive generation to deal with these things. So really, that whole relationship with energy is poised to change. Greg, this has been a really interesting conversation. I really appreciate it; lots to pack into this short bit of time. We always wrap up our conversations with a series of questions for our guests, so I'm going to fire those at you. For this first one, I'm sure you've got lots of examples, so feel free to give more than one. What is a book that you've read that you think everybody should read? Greg Lindsay 50:35 The first one that comes to mind is William Gibson's Neuromancer, which gave the world the notion of cyberspace and so many other concepts. I think about it a lot today; William Gibson is a Vancouver-based author, and so much in that book is worth really thinking about. There is a digital twin in it: an agent called the Dixie Flatline, a former programmer who was cloned as a digital twin. 
I've actually met an engineering company, Thornton Tomasetti, that built a digital twin of one of their former top experts, so that became real. Of course, the matrix is becoming real too. And the Turing Police: there's a whole thing in the book where there are cops who make sure AIs don't get smarter. I've been thinking a lot about whether we need Turing Police; the EU will probably create them. That's proof, again, of science fiction's ability, through world-building, to really make you think about these implications and help with contingency planning. A lot of the foresight experts I work with think about sci-fi, and we use sci-fi for exactly that reason. So go read some classic cyberpunk, everybody. Trevor Freeman 51:32 Awesome. Same question, but what's a movie or a show that you think everybody should take a look at? Greg Lindsay 51:38 I recently rewatched The Matrix, which is fun to think about: the villains are agents. It's funny how that term has come back around. The other one: I recently read a piece in The New Yorker on global demographics and the fact that, globally, there are fewer and fewer children. It made several references to Alfonso Cuarón's Children of Men from 2006, which is, sadly, probably the most prescient film of the 21st century. Again, a classic to watch: it imagines what happens in a world where you lose faith in the future, because a world that is not having children is a world that's losing faith in its own future. That's always haunted me. Trevor Freeman 52:12 It's funny, both of those movies. I've got kids, and as they get a little bit older, we start introducing more and more movies. I've got this list of movies that were impactful in my own adolescent years and growing up. 
And both The Matrix and Children of Men are on that list of really good movies; I just need my kids to get a little bit older, and then I'm excited to watch them together. If someone offered you a free round-trip flight anywhere in the world, where would you go? Greg Lindsay 52:40 I would go to Venice, Italy, for the Architecture Biennale, and I will in fact be on a plane there in May anyway. The theme this year is intelligence: artificial, natural, and collective. It should be interesting to see what the world's brightest architects make of it. But yeah, Venice, every time; my favorite city in the world. Trevor Freeman 52:58 Yeah, it's pretty wonderful. Who is someone that you admire? Greg Lindsay 53:01 Great question.
Dr. Jon Finn wrote his best-selling book 'The Habit Mechanic' (which took him over 20 years) because his life's mission is to help people be their best in the challenging modern world.

He founded the award-winning Tougher Minds consultancy and has three psychology-related degrees, including a PhD. He has worked in performance psychology, resilience, and leadership science for over 20 years. He also writes regularly for Forbes.

Tougher Minds uses cutting-edge insights from psychology, behavioural science, neuroscience, and world champions to help organisations develop "Habit Mechanics" and "Chief Habit Mechanics": resilient people, outstanding leaders, and world-class teams.

Having trained and coached over 10,000 people, Dr. Finn and his colleagues work with global businesses, high-growth startups, individuals, elite athletes, coaches and teams, leading educational institutes, families, the UK government, and think tanks.

In this episode, Dominic explores AI's impact on the workforce and the journey of integrating AI into behavioural science and habit formation. Inspired by Geoffrey Hinton's work on neural networks, learn how blending traditional methods with cutting-edge AI enhances our understanding of brain function and behaviour. Dr. Jon shares the concept of brain states (recharge, medium-charge, and high-charge) and how AI can revolutionise workflows by automating routine tasks and co-working with humans on complex ones. Looking forward, we explore how to create high-performing human-AI teams, guiding individuals and businesses toward harmonious collaboration with AI and enabling unprecedented speed and efficiency in achieving goals.

Discover:
* Automatic Thinking: Human thoughts and behaviours are largely driven by automatic or semi-automatic processes, influenced by biases and past experiences, which limit our ability to generate truly conscious and unbiased ideas.
* Emotional Regulation in Athletes: In elite sport, the ability to regulate emotions plays a pivotal role in determining whether young athletes maintain or lose their professional status as they age.
* Effort and Performance: Sustained success in any field relies on consistent effort, ongoing learning, and the ability to perform effectively under pressure, particularly in challenging or high-stakes situations.
* Training Gaps in Mental Performance: While athletes typically receive extensive training in physical, technical, and tactical skills, they often lack structured training in understanding and improving their cognitive and emotional processes.
* Risks from AI in the Workplace: AI is disrupting workplace roles built around medium-energy tasks, and people unable to adapt or up-skill risk being replaced. Emotional regulation and the ability to shift into higher-performance states are critical for adapting to these changes.
Canada's outdated capital gains policies are driving entrepreneurs and investors away. We need competitive tax reform to keep talent and investment here, building the businesses of tomorrow.

We have just 33 small businesses per 1,000 people versus 124 in the US. Fixing our capital gains system could help us close this gap and create hundreds of thousands of new jobs.

Modern capital gains reform will unleash Canadian innovation, create more high-paying employment, and ensure our world-class graduates build their companies here, not elsewhere.

Goals

To ensure a prosperous, sustainable, and growing economy, Canada needs a thriving private sector that invests in new businesses. A strong environment for entrepreneurship creates jobs, drives GDP growth, and ensures economic mobility for all. In recent years, however, entrepreneurship, and consequently private sector employment, has been slow despite an increasing population.

One factor driving this change is that Canada's capital gains tax policies make it significantly less rewarding to start a business than in other jurisdictions. To reverse this trend and reinvigorate our private sector, we must revise our outdated policies to align with global standards.

Our targets:
* Increase SMBs per 1,000 people over the age of 18 from 33 to 62, half of the US rate of 124.
* Increase the number of early-stage financing rounds (Pre-seed, Seed, Series A, and Series B) for new businesses from 482 in 2024 to over 1,000 per year.
* Increase investment in new businesses through industry-agnostic venture capital financing from 0.35% to 0.5% of GDP, closer to the USA's figure of 0.72% of GDP.

Background and Motivation

New business formation and growth rely on people taking huge risks with their time and money. 
However, today in Canada the people who take these risks (entrepreneurs, early-stage employees, and investors) are rewarded less than in other countries. As a result, the country's best talent is driven to leave and start businesses elsewhere, where they can find easier access to funding[1] and keep more of the upside if they succeed.

We need to reverse this systemic issue. By rewarding investors who put their capital at risk and supporting entrepreneurs who put their livelihoods on the line to create new companies, we can build a strong and resilient economy.

All companies begin as small and medium businesses (SMBs), and the formation and growth of these SMBs is essential to a country's economic success, both by improving the quality of the labour market and by creating opportunities for productivity growth.

In Canada, SMBs accounted for ~64% of private sector employment and contributed half of all net new jobs added last year[2]. These work opportunities support upward income mobility, lead to more capital being reinvested into local communities, and are particularly valuable for traditionally disadvantaged populations[3][4][5].

In addition, SMBs represent a significant portion of the economy and have high potential for productivity improvements[6]. Between 2017 and 2021, SMBs contributed almost half of Canada's GDP[7]. As these businesses grow and scale their operations, they improve efficiency and drive productivity-led growth that can be equivalent in impact to roughly 5% of a developed nation's GDP[8][9].

Perhaps most importantly, SMBs turn into global winners. Growing these companies into sizable businesses is how a country wins an unfair share of global markets, by creating the large, export-focused corporations that contribute outsized value to GDP and productivity growth. 
To ensure the next trillion-dollar companies (the equivalents of Google, Microsoft, or Meta) are built in Canada, founders must be convinced to start their companies here.

So, a healthy ecosystem of SMBs is essential to a strong economy, but the data shows Canada is falling behind its global peers. In the 20 years between 2003 and 2023, the total number of Canadian entrepreneurs decreased by ~100K, despite the population growing by 10 million[10][11]. Today, for every thousand people over the age of 18, the US has ~124 SMBs[12][13]. Israel, a country with less than a quarter of Canada's population, has ~73[14][15], while Canada has just ~33[16].

A significant driver of this stagnation is outdated and uncompetitive capital gains policies that have low limits, exclude large categories of business, and contain many restrictions compared to global peers, especially the US. It is less valuable for investors to put money into Canadian businesses, making capital more scarce, and it discourages entrepreneurs who know that in most cases they would be rewarded more for building the same company elsewhere. This makes it difficult for any SMB to get started, let alone scale.

Today, Canada has two capital gains policies intended to encourage SMB creation: the Lifetime Capital Gains Exemption (LCGE) and the proposed Canadian Entrepreneurs' Incentive (CEI), announced in Budget 2024 but not yet implemented. Combined, the LCGE and CEI would allow shareholders to reduce the inclusion rate of capital gains from the current 50% down to a range of 33.3% to 0%, up to a cap of $3.25M[17][18].

These policies simply can't compete with the US. The USA's Qualified Small Business Stock (QSBS) policy has a capital gains cap of $15M or ten times the original investment amount, nearly five times higher than Canada's combined LCGE and CEI limit. In addition, the QSBS is active today, while Canada's CEI cap has a phased approach, only coming into full effect in 2029 if the policy is passed. 
Today, in 2025, the LCGE and CEI's true combined cap is only $1.25M. And while QSBS shields 100% of gains up to the policy cap for both individuals and corporations, Canada's CEI would only shield 66.7% of gains, for individuals only.

To illustrate how restrictive this is, imagine a company owned by founders, early employees, and various investors (see the first example below). If this business was started in 2018 and sold seven years later, today in 2025, for $100M, these risk-takers would have to pay a combined $14.7M in taxes. That same business with the same structure would pay no taxes in the US.

The good news is that at larger exits, like $250M (see the second example below), the gap between Canada and the US narrows, thanks to Canada's more competitive basic capital gains inclusion rate. This means that if we match the QSBS's capital gains limit, the Canadian policy could actually gain an edge, driving more investment into the country and supercharging our SMB ecosystem. If we leave the policy as it stands, however, companies can never get started, because investors and entrepreneurs are scared away.

The reason is that the QSBS rewards smaller exits, the majority of SMB outcomes, with the maximum capital gains tax benefit. This makes it easier for entrepreneurs, early employees, and investors to take on the risks of building a business. In fact, early-stage US investors are currently increasing their investments in new Canadian businesses while adding clauses that would require the Canadian business to reincorporate in the US simply to become eligible for QSBS. This means the best Canadian entrepreneurs and companies are leaving the country simply to take advantage of these rules. 
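The comparison above can be sketched as a toy calculation. This is a simplified model built only from the memo's headline figures, not tax advice: it assumes a single shareholder with a near-zero cost basis, an illustrative 50% combined marginal rate in Canada with the current $1.25M LCGE shelter and a 50% inclusion rate on the remainder, and a US QSBS shelter of the greater of $15M or 10x basis with an assumed 23.8% federal rate on any excess. Real outcomes depend on ownership structure, province, holding periods, and phase-in rules.

```python
# Toy comparison of capital gains tax on a startup exit, using the
# memo's headline figures. Illustrative only, not tax advice.

MARGINAL_RATE_CA = 0.50   # assumed combined top marginal rate (Canada)


def canada_tax(gain, lcge_shelter=1.25e6, inclusion=0.50):
    """Shelter the first $1.25M (today's LCGE), then tax the rest at
    the 50% inclusion rate times the marginal rate."""
    taxable = max(0.0, gain - lcge_shelter)
    return taxable * inclusion * MARGINAL_RATE_CA


def us_tax(gain, basis, qsbs_cap=15e6, ltcg_rate=0.238):
    """QSBS shelters the greater of $15M or 10x the adjusted cost
    basis; any excess is taxed at an assumed 23.8% federal rate."""
    sheltered = max(qsbs_cap, 10 * basis)
    taxable = max(0.0, gain - sheltered)
    return taxable * ltcg_rate


# A founder realizing a $10M gain on a near-zero basis:
print(f"Canada: ${canada_tax(10e6):,.0f}")       # roughly $2.2M owed
print(f"US:     ${us_tax(10e6, 10_000):,.0f}")   # $0, fully inside the QSBS cap
```

Under this toy model, a $10M gain leaves about $2.2M of Canadian tax against zero in the US, which is the asymmetry the memo describes; and because QSBS applies per issuer rather than per lifetime, a repeat founder gets the full US shelter again on the next company.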
This decreases the health of our SMB ecosystem, prevents large companies from growing in the country, and ultimately reduces tax revenue. If we want to keep our entrepreneurs, Canada's capital gains policies must become competitive with US policies.

Beyond better gain caps and exclusion rates, the US's QSBS allows a wider range of businesses and stakeholders to benefit from the policy, with no minimum ownership requirements, higher asset value caps, and a tiered inclusion rate approach that incentivizes long-term business building. Meanwhile, Canada's CEI excludes companies in healthcare, food and beverage, and service businesses[19]. CEI's minimum ownership rules also exclude early employees and investors who own less than 5% of the business at the time of sale.

Most importantly, while the LCGE and CEI's $3.25M cap applies over a taxpayer's entire lifetime, QSBS's limits are per issuer, or business. In other words, entrepreneurs, early employees, and investors can use the QSBS's more favourable policy again and again for subsequent companies. Because Canada's LCGE and CEI don't extend to new issuers, they discourage repeat entrepreneurs, who statistically have a higher chance of building successful businesses, from creating a second or third company[20].

What Needs to Be Done

To properly reward risk-takers, Canada can fully solve its capital gains policy problems by combining the LCGE with the CEI into a single, powerful capital gains policy that supports entrepreneurs. In particular, the new policy could become competitive by adopting three major changes:

1) Expand the eligibility requirements to ensure Canadian entrepreneurs and risk-takers are supported. Eligible business types should be expanded to include all industries of national interest, including healthcare clinics, clean energy, technology, etc. 
We should also eliminate the 5% minimum ownership requirement so that any individual or corporate entity can claim CEI deductions under the tiered approach, supporting early-stage employees and investors.

2) Improve the capital gains exclusion rate system to be globally competitive, supporting entrepreneurs and increasing investment. To counter the draw of foreign jurisdictions and ensure we offer just as much incentive to start companies as peer countries, we should start by raising the exclusion cap to a $15M gain or 10x adjusted cost basis per taxpayer, whichever is greater.

3) Make structural changes to ensure these new policies scale appropriately. Amend the capital gains limit from applying per lifetime to per business, to incentivize repeat entrepreneurs to continue building in Canada. Additionally, ensure that common investment structures, including Simple Agreements for Future Equity (SAFEs) and Convertible Notes, become eligible, with the holding period commencing from the date the investment is signed, not when the shares are priced and converted, so that there are no major discrepancies for startups choosing to operate in Canada rather than the US.

Common Questions

Will this only benefit tech startups?

No. Canada's LCGE was originally created to support all small businesses and increase competition, including non-tech businesses such as fisheries and farms. Our memo recommends expanding eligibility to all industries deemed essential, including non-tech ones the current CEI proposal omits, such as healthcare practitioners. In the US, SMBs of all sectors, including manufacturing, retail, wholesale, consumer, and packaged goods, benefit from the QSBS policy[21].

Wouldn't corporate tax breaks reduce tax income for social programs and only benefit the wealthy 1%?

No. This would encourage investment in Canadian small businesses, which is essential for increasing the corporate tax revenue that funds social programs. 
Businesses that receive investment can generate more jobs and pay higher wages, which increases individual income tax revenue and reduces withdrawals from crucial social assistance programs, such as Employment Insurance, as more companies and workers stay in Canada. This reduces the burden on social programs and improves access to them, rather than removing them.

What stops foreign investors from abusing this and using Canada as a tax-sheltered haven to enrich themselves at the expense of Canadians?

Maintaining Canadian incorporation, asset, residency, and operating requirements, combined with a minimum two-year waiting period before benefits kick in, will ensure that new businesses maintain a presence in Canada, creating skilled job opportunities for Canadians and contributing to local economic growth.

Why should we invest in SMBs? Aren't they risky and likely to shut down within a few years?

68% of SMBs in Canada survive into their fifth year, and 49% operate for more than a decade[22]. SMBs around the world, including in Canada, contribute significantly to economic output, job opportunities, and increased competition for consumers.

Conclusion

Canada needs to create an ecosystem that supports entrepreneurs at the earliest stages. We are one of the most educated countries globally, with the largest college-educated workforce among G7 countries[23]. Canadian universities are consistently ranked among the top institutions in the world, with research labs led by figures like Geoffrey Hinton, dubbed the "Godfather of AI," who was recently awarded a Nobel Prize for his work in AI and ML[24][25].

Not only is our population talented, but it is also resourceful and hardworking. Rather than punishing these people, we should reward them for taking the risks to build Canada's economy. 
To start, we should implement a modern capital gains policy that rewards investors, entrepreneurs, and early employees.

Read more here: https://www.buildcanada.com/en/memos/reward-the-risk-takers

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit tanktalks.substack.com
In this episode of Disruptors x CDL: The Innovation Era, hosts John Stackhouse, Senior VP of RBC, and Sonia Sennik, CEO of Creative Destruction Lab, dive into one of the most transformative technologies of our time: artificial intelligence. With the potential to revolutionize industries from healthcare to energy, AI is reshaping the global economy, and Canada is both a leader in research and a laggard in adoption.

This week, Geoffrey Hinton, Professor at the University of Toronto, was awarded the Nobel Prize in Physics for the research in artificial intelligence he began at the university in 1987.

Join John and Sonia as they discuss Canada's AI ecosystem and the country's challenges in keeping pace with global AI adoption. They're joined by three visionary guests: Sheldon Fernandez, CEO of DarwinAI; Kory Mathewson, Senior Research Scientist at Google DeepMind; and Gillian Hadfield, a Schmidt Sciences AI2050 Senior Fellow. Together, they explore the opportunities and barriers in AI adoption, the creative applications of AI, and the role Canada must play in the future of AI.

This episode is packed with insights for business leaders, policymakers, and anyone curious about how AI is changing our world. Whether you're an AI enthusiast or a skeptic, it will challenge your thinking on the role of technology in shaping the future.

Tune in to learn how AI is both an opportunity and a responsibility, and how Canada can lead the charge in this new innovation era.

Sign up to receive RBC Thought Leadership's newsletter, flagship reports, and analysis on the forces shaping Canadian business and the economy.
The Breakdown of Shared Reality: AI's Most Dangerous Unintended Consequence

Nobel Prize winner Geoffrey Hinton warns that AI-driven personalization is destroying our collective understanding of what's real.

Episode Summary
In this episode of The Digital Contrarian, host Ryan Levesque dives into the breakdown of shared reality caused by AI-driven hyper-personalization and its profound implications for business and society. You'll learn why isolated algorithmic realities undermine strategic thinking, discover the concept of the "Promethean Transition" we're navigating, and find out how to choose between being a tunnel digger or a pathfinder in our AI future.

Question of the Day
After speaking at a CREW conference in Texas, Cal meets a smart young Uber driver who is curious about how many jobs America can expect to lose to AI. The driver wanted to know what life will look like for the Americans most likely to lose their jobs, and how those people might get by and transition. Cal finds some answers from Sam Altman, the CEO of OpenAI, and Geoffrey Hinton, the Godfather of Artificial Intelligence. He also discovers a half-helicopter, half-airplane taxi service that looks like a job of the future as we head into the age of the Jetsons. Listen up and get to the cutting edge.
Geoffrey Hinton was the spark that made AI development explode. For that, he received the Nobel Prize in Physics in 2024. But he himself warns of the threat AI may pose to humanity. Listen to all episodes in Sveriges Radio Play. The programme first aired on 6/12-2024. We visit Geoffrey Hinton in his home in Toronto and hear about the pressure he experienced in childhood, and about his extraordinary drive through the decades, carried by the conviction that so-called neural networks were what could best create an artificial intelligence. Over a cup of coffee, he tells us how he recently left his job at Google, just as he awoke to the realization that AI may soon become more intelligent than we humans are, and that it might then want to take over and get rid of us. How does he picture the threat in concrete terms, and what can we do to rein in artificial intelligence and use it as the enormous positive force it can also be? Reporter: Björn Gunér bjorn.guner@sr.se Producer: Lars Broström lars.brostrom@sr.se
OpenAI's Sam Altman is doing a full-blown AI media tour and taking no prisoners. GPT-5! Humanoid robotics! Smack talk! The next generation of AI is…maybe almost here? We unpack Altman's brand-new in-house podcast (and his brother's), confirm the "likely-this-summer" GPT-5 timeline, and reveal why Meta is dangling $100 million signing bonuses at OpenAI staff. Plus: the freshly launched "OpenAI Files" site, Altman's latest shot at Elon, and what's real versus propaganda. Then it's model mania: Midjourney Video goes public, ByteDance's Seedance stuns, MiniMax's Hailuo 02 levels up, and yet Veo 3 still rules supreme. We tour Amazon's "fewer-humans" future, Geoffrey Hinton's job-loss warning, Logan Kilpatrick's "AGI is product first" take, and a rapid-fire Robot Watch: 1X's world-model paper, Spirit AI's nimble dancer, and Hexagon's rollerblade-footed speedster. THE ROBOTS ARE ON WHEELS. GPT-5 IS AT THE DOOR. IT'S A GOOD SHOW.

Join the Discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/

// Show Links //

OpenAI's Official Podcast with Sam Altman: https://youtu.be/DB9mjd-65gw?t=632
Sam Altman on Jack Altman's Podcast: https://youtu.be/mZUG0pr5hBo?si=QNv3MGQLWWQcb4Aq
Boris Power (Head of OpenAI Research) Tweet: https://x.com/BorisMPower/status/1935160882482528446
The OpenAI Files: https://www.openaifiles.org/
Google's Logan Kilpatrick on AGI as Product: https://x.com/vitrupo/status/1934627428372283548
Midjourney Video is now LIVE: https://x.com/midjourney/status/1935377193733079452
Our early MJ Video Tests: https://x.com/AIForHumansShow/status/1935393203731283994
Seedance (New ByteDance AI Video Model): https://seed.bytedance.com/en/seedance
Hailuo 2 (MiniMax New Model): https://x.com/Hailuo_AI/status/1935024444285796561
Squirrel Physics: https://x.com/madpencil_/status/1935011921792557463
Higgsfield Canvas, a state-of-the-art image editing model: https://x.com/higgsfield_ai/status/1935042830520697152
Krea 1, a new AI imaging model: https://www.krea.ai/image?k1intro=true
Generating Mickey Mouse & More in Veo 3: https://x.com/omooretweets/status/1934824634442211561 and https://x.com/AIForHumansShow/status/1934832911037112492
LA Dentist Commercials with Veo 3: https://x.com/venturetwins/status/1934378332021461106
AI Will Shrink Amazon's Workforce, Says CEO Andy Jassy: https://www.cnbc.com/2025/06/17/ai-amazon-workforce-jassy.html
Geoffrey Hinton Diary of a CEO Interview: https://youtu.be/giT0ytynSqg?si=BKsfioNZScK4TJJV
More Microsoft Layoffs Coming: https://x.com/BrodyFord_/status/1935405564831342725
25 New Potential AI Jobs (from the NYT): https://www.nytimes.com/2025/06/17/magazine/ai-new-jobs.html
1X Robotics World Model: https://x.com/1x_tech/status/1934634700758520053
Spirit AI Just Dropped Their Moz1 Humanoid: https://x.com/XRoboHub/status/1934860548853944733
Hexagon Humanoid Robot: https://x.com/TheHumanoidHub/status/1935126478527807496
Training an AI Video To Make Me Laugh (YT Video): https://youtu.be/fKpUP4dcCLA?si=-tSmsuEhzL-2jdMY
He pioneered AI; now he's warning the world. The Godfather of AI, Geoffrey Hinton, breaks his silence on the deadly dangers of AI that no one is prepared for.

Geoffrey Hinton is a leading computer scientist and cognitive psychologist, widely recognised as the ‘Godfather of AI' for his pioneering work on neural networks and deep learning. He received the 2018 Turing Award, often called the Nobel Prize of computing. In 2023, he left Google to warn people about the rising dangers of AI.

He explains:
- Why there's a real 20% chance AI could lead to HUMAN EXTINCTION.
- How speaking out about AI got him SILENCED.
- The deep REGRET he feels for helping create AI.
- The 6 DEADLY THREATS AI poses to humanity right now.
- AI's potential to advance healthcare, boost productivity, and transform education.

Timestamps:
00:00 Intro
02:28 Why Do They Call You the Godfather of AI?
04:37 Warning About the Dangers of AI
07:23 Concerns We Should Have About AI
10:50 European AI Regulations
12:29 Cyber Attack Risk
14:42 How to Protect Yourself From Cyber Attacks
16:29 Using AI to Create Viruses
17:43 AI and Corrupt Elections
19:20 How AI Creates Echo Chambers
23:05 Regulating New Technologies
24:48 Are Regulations Holding Us Back From Competing With China?
26:14 The Threat of Lethal Autonomous Weapons
28:50 Can These AI Threats Combine?
30:32 Restricting AI From Taking Over
32:18 Reflecting on Your Life's Work Amid AI Risks
34:02 Student Leaving OpenAI Over Safety Concerns
38:06 Are You Hopeful About the Future of AI?
40:08 The Threat of AI-Induced Joblessness
43:04 If Muscles and Intelligence Are Replaced, What's Left?
44:55 Ads
46:59 Difference Between Current AI and Superintelligence
52:54 Coming to Terms With AI's Capabilities
54:46 How AI May Widen the Wealth Inequality Gap
56:35 Why Is AI Superior to Humans?
59:18 AI's Potential to Know More Than Humans
1:01:06 Can AI Replicate Human Uniqueness?
1:04:14 Will Machines Have Feelings?
1:11:29 Working at Google
1:15:12 Why Did You Leave Google?
1:16:37 Ads
1:18:32 What Should People Be Doing About AI?
1:19:53 Impressive Family Background
1:21:30 Advice You'd Give Looking Back
1:22:44 Final Message on AI Safety
1:26:05 What's the Biggest Threat to Human Happiness?

Follow Geoffrey: X - https://bit.ly/4n0shFf

The Diary Of A CEO:
Join DOAC circle here - https://doaccircle.com/
The 1% Diary is back - limited time only: https://bit.ly/3YFbJbt
The Diary Of A CEO Conversation Cards (Second Edition): https://g2ul0.app.link/f31dsUttKKb
Get email updates - https://bit.ly/diary-of-a-ceo-yt
Follow Steven - https://g2ul0.app.link/gnGqL4IsKKb

Sponsors:
Stan Store - Visit https://link.stan.store/joinstanchallenge to join the challenge!
KetoneIQ - Visit https://ketone.com/STEVEN for 30% off your subscription order

#GeoffreyHinton #ArtificialIntelligence #AIDangers

Learn more about your ad choices. Visit megaphone.fm/adchoices
As a listener of TOE you can get a special 20% discount to The Economist and all it has to offer! Visit https://www.economist.com/toe

Professor Elan Barenholtz, cognitive scientist at Florida Atlantic University, joins TOE to discuss one of the most unsettling ideas in cognitive science: that language is a self-contained, autoregressive system with no inherent connection to the external world. In this mind-altering episode, he explains why AI's mastery of language without meaning forces us to rethink the nature of mind, perception, and reality itself.

Join My New Substack (Personal Writings): https://curtjaimungal.substack.com
Listen on Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e

Timestamps:
00:00 The Mind and Language Connection
02:09 The Grounded Thesis of Language
09:29 The Epiphany of Language
13:06 The Dichotomy of Language and Perception
16:24 Language as an Autonomous System
19:48 The Problem of Qualia and Language
23:35 Bridging Language and Action
31:32 Exploring Embeddings in Language
38:21 The Platonic Space of Language
44:17 The Challenges of Meaning and Action
51:05 Understanding the Complexity of Color
52:53 The Paradox of Language Describing Itself
58:19 The Map of Language and Action
1:07:48 Continuous Learning in Language Models
1:11:46 The Nature of Memory
1:22:46 The Role of Context
1:32:18 Exploring Language Dynamics
1:39:44 The Shift from Oral to Written Language
2:11:34 Language and the Cosmic Whole
2:21:35 Reflections on Existence

Links Mentioned:
• Elan's Substack: https://elanbarenholtz.substack.com
• Elan's X / Twitter: https://x.com/ebarenholtz
• Geoffrey Hinton on TOE: https://youtu.be/b_DUft-BdIE
• Joscha Bach and Ben Goertzel on TOE: https://youtu.be/xw7omaQ8SgA
• Elan's published papers: https://scholar.google.com/citations?user=2grAjZsAAAAJ
• AI medical panel on TOE: https://youtu.be/abzXzPBW4_s
• Jacob Barandes and Manolis Kellis on TOE: https://youtu.be/MTD8xkbiGis
• Will Hahn on TOE: https://youtu.be/3fkg0uTA3qU
• Noam Chomsky on TOE: https://youtu.be/DQuiso493ro
• Greg Kondrak on TOE: https://youtu.be/FFW14zSYiFY
• Andres Emilsson on TOE: https://youtu.be/BBP8WZpYp0Y
• Harnessing the Universal Geometry of Embeddings (paper): https://arxiv.org/pdf/2505.12540
• Yang-Hui He on TOE: https://youtu.be/spIquD_mBFk
• Iain McGilchrist on TOE: https://youtu.be/Q9sBKCd2HD0
• Curt interviews ChatGPT: https://youtu.be/mSfChbMRJwY
• Empiricism and the Philosophy of Mind (book): https://www.amazon.com/dp/0674251555
• Karl Friston on TOE: https://youtu.be/uk4NZorRjCo
• Michael Levin and Anna Ciaunica on TOE: https://youtu.be/2aLhkm6QUgA
• The Biology of LLMs (paper): https://transformer-circuits.pub/2025/attribution-graphs/biology.html
• Jacob Barandes on TOE: https://youtu.be/YaS1usLeXQM
• Emily Adlam on TOE: https://youtu.be/6I2OhmVWLMs
• Julian Barbour on TOE: https://youtu.be/bprxrGaf0Os
• Tim Palmer on TOE: https://youtu.be/vlklA6jsS8A
• Neil Turok on TOE: https://youtu.be/ZUp9x44N3uE
• Jayarāśi Bhaṭṭa: https://plato.stanford.edu/entries/jayaraasi/
• On the Origin of Time (book): https://www.amazon.com/dp/0593128443

SUPPORT:
- Become a YouTube Member (Early Access Videos): https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w/join
- Support me on Patreon: https://patreon.com/curtjaimungal
- Support me on Crypto: https://commerce.coinbase.com/checkout/de803625-87d3-4300-ab6d-85d4258834a9
- Support me on PayPal: https://www.paypal.com/donate?hosted_button_id=XUBHNMFXUX5S4

SOCIALS:
- Twitter: https://twitter.com/TOEwithCurt
- Discord Invite: https://discord.com/invite/kBcnfNVwqs

#science

Learn more about your ad choices. Visit megaphone.fm/adchoices
‘Godfather of AI' Predicts It Will Take Over the World; Thomas Sowell Warns About the Year 2030; Eric Metaxas Talks to John Zmirak.

‘Godfather of AI' predicts it will take over the world (LBC)
Watch this video at: https://youtu.be/vxkBE23zDmQ?si=ielwtz0KnJrDUH6q
Nobel Prize winner Geoffrey Hinton, the physicist known for his pioneering work in the field, told LBC's Andrew Marr that artificial intelligences had developed consciousness - and could one day take over the world. Mr Hinton, who has been criticised by some in the world of artificial intelligence for having a pessimistic view of the future of AI, also said that no one knew how to put in effective safeguards and regulation.
Listen to the full show on Global Player: https://app.af.globalplayer.com/Br0x/...
LBC is the home of live debate around news and current affairs in the UK. Join in the conversation and listen at https://www.lbc.co.uk/
Sign up to LBC's weekly newsletter here: https://l-bc.co/signup

Sowell WARNS About the Year 2030 - America's TOTAL COLLAPSE (Thomas Sowell Today)
How Cultural Decline Happens SLOWLY - Then All at ONCE | Thomas Sowell Today
https://youtu.be/ItDFsPqDIEs?si=W21eNnZeSKGcsnKq

The Eric Metaxas Show: Eric talks to John Zmirak.
Patrick explores the tradition of the Easter greeting “Christ is risen—He is risen indeed,” and explains why the Sunday Mass obligation does not apply when someone must care for a loved one in crisis. Patrick discusses the meaning behind the 153 fish in John’s Gospel and explains that Catholics can fulfill their Sunday obligation at any rite in communion with Rome. He addresses why cardinals over 80 do not vote in the Conclave, shares concerns and opportunities around the rapid rise of AI, and suggests how to find wholesome movies using new technology. Patrick encourages listeners to keep learning, stay curious, and live out their faith with confidence.

Jen (email) - Give us a little lesson on the “Christ is risen” and the response “He is risen indeed.” (00:38)
Andrea - I couldn’t make Mass yesterday. Do I need to go to Confession? (03:41)
Judy (email) - My reaction to the meme of DJT as Pope was a chuckle followed by, to quote Winnie the Pooh, “oh bother”. (06:09)
Mary - Can Cardinals above the voting age still attend the Conclave? (08:41)
David - What does the 153 fish in yesterday's Gospel represent? (11:13)
JuanLuis (email) - I know there are several Catholic churches in communion with Rome, and I would like to know if I can attend Sunday services in another Catholic church, such as a Byzantine Catholic church. Would I be fulfilling my Sunday Mass obligation? (15:00)
John (10 years old) - Can religious people marry non-religious people? (18:06)
Audio: AI pioneer Geoffrey Hinton says the world is not prepared for what's coming (22:55)
Klinsmayer (email) - I found out this weekend on LifeSite News that the governor of Washington (a Catholic) signed a bill that demands that priests break the seal of confession under circumstances involving a crime. (29:34)
Joel (email) - Yes, the Trump as Pope meme was dumb. What was not dumb was the National Prayer Service Day last week that President Trump hosted at the White House, and it was really nice to see Bishop Barron offering his prayers live at this event. (32:41)
Daniel (email) - Rumor has it “Smoke Watch” changed their band name to Deep Purple and later paid homage to the original name with the hit “Smoke on the Water.” (33:37)
Greg (email) - If a man has ED, how is the marital act supposed to be completed in the normal manner with his wife? (35:04)
Patrick walks through how he uses AI to help him pick a movie to watch (39:06)
Charles - AI singularity: I think some are taking it too far and they think humans can become God. (43:48)
God's Debris: The Complete Works, Amazon: https://tinyurl.com/GodsDebrisCompleteWorks
Find my "extra" content on Locals: https://ScottAdams.Locals.com

Content: Politics, Gene-Edited Super Soldiers, Klaus Schwab Allegations, Artificial Food Dye Ban, Low Testosterone Democrats, Micro-Drama Romance Videos, Catherine Herridge, DEW Weapons, Havana Syndrome, Kari Lake, VOA Rehiring, Government Funded Independent Agencies, Censorship Organizations, Geoffrey Hinton, Human Brains Analogy Machines, Laurence Tribe, Analogy Thinking, First Principle Thinking, Zelensky Peace Reluctance, Bill Pulte, State Department Agency Closures, Tim Poole, Economic Uncertainty, Iranian Drone Expertise, President Trump Negotiation Technique, Shake The Box Negotiations, UK Sunlight Reduction, EU Fines Apple META, Scott Jennings, Abby Phillip, Jen Psaki MSNBC Bias, Elon Musk, NATO of NGOs, Worldwide Shadow Government, Jennifer Rubin, Norm Eisen, Psychedelic Brain Therapy, Drone Manufacturing, Scott Adams

If you would like to enjoy this same content plus bonus content from Scott Adams, including micro-lessons on lots of useful topics to build your talent stack, please see scottadams.locals.com for full access to that secret treasure.
The microchip maker Nvidia is a Silicon Valley colossus. After years as a runner-up to Intel and Qualcomm, Nvidia has all but cornered the market on the parallel processors essential for artificial-intelligence programs like ChatGPT. “Nvidia was there at the beginning of A.I.,” the tech journalist Stephen Witt tells David Remnick. “They really kind of made these systems work for the first time. We think of A.I. as a software revolution, something called neural nets, but A.I. is also a hardware revolution.” In The New Yorker, Stephen Witt profiled Jensen Huang, Nvidia's brilliant and idiosyncratic co-founder and C.E.O. His new book is “The Thinking Machine: Jensen Huang, Nvidia, and the World's Most Coveted Microchip.” Until recently, Nvidia was the most valuable company in the world, but its stock price has been volatile, posting the largest single-day loss in history in January. But the company's story is only partially a business story; it's also one about global superpowers, and who will decide the future. If China takes military action against Taiwan, as it has indicated it might, the move could wrest control of the manufacturing of Nvidia microchips from a Taiwanese firm, which is now investing in a massive production facility in the U.S. “Maybe what's happening,” Witt speculates, is that “this kind of labor advantage that Asia had over the United States for a long time, maybe in the age of robots that labor advantage is going to go away. And then it doesn't matter where we put the factory. The only thing that matters is, you know, is there enough power to supply it?” Plus, the staff writer Joshua Rothman has long been fascinated with A.I.—he even interviewed its “godfather,” Geoffrey Hinton, for The New Yorker Radio Hour. But Rothman has become increasingly concerned about a lack of public and political debate over A.I.—and about how thoroughly it may transform our lives. 
“Often, if you talk to people who are really close to the technology, the timelines they quote for really reaching transformative levels of intelligence are, like, shockingly soon,” he tells Remnick. “If we're worried about the incompetence of government, on whatever side of that you situate yourself, we should worry about automated government. For example, an A.I. decides the length of a sentence in a criminal conviction, or an A.I. decides whether you qualify for Medicaid. Basically, we'll have less of a say in how things go and computers will have more of a say.” Rothman's essay “Are We Taking A.I. Seriously Enough?” appears in his weekly column, Open Questions. Learn about your ad choices: dovetail.prx.org/ad-choices