Can a Pez dispenser teach you about quitting smoking? Can AI learn to love us before it wipes us out? And is paper really your best bet in rock-paper-scissors? In this jam-packed episode of The Sandy Show, Sandy and Tricia dive into everything from pop culture obsessions to existential tech fears, with plenty of laughs and unexpected insights along the way. From Taylor Swift's record-breaking podcast appearance and the orange craze it sparked across global brands, to the Godfather of AI warning humanity about its own creation, this episode is a rollercoaster of entertainment, curiosity, and caution. Plus, Tricia reveals the secret to winning rock-paper-scissors, and Sandy opens up about his decade-long journey of sobriety.

Guest Spotlight: While this episode doesn't feature a formal guest, it highlights the voices of Sandy and Tricia, two seasoned radio personalities whose chemistry, wit, and honesty make every topic feel personal and engaging.

Key Moments:
Tech nostalgia. Winner-take-all economics. The cult of "storytelling". A Stanford-educated aristocratic elite. This was the week that nothing changed in Silicon Valley. Alternatively, it was the week that radical change broke some ChatGPT users' hearts. That, at least, is how That Was the Week tech newsletter publisher Keith Teare described this week in Silicon Valley. From Sam Altman's sensitivity to user backlash over GPT-5's personality changes, to venture capital's continued concentration in just ten mega-deals, to Geoffrey Hinton's apocalyptic warnings about AI wiping out humanity, the patterns remain stubbornly familiar even as the technology races forward. So is nothing or everything changing? Keith says everything; I say nothing. Maybe, as AI Godfather Hinton suggested on the show earlier this week, it's time for an all-knowing algorithm with maternal instincts to enlighten us with the (female) truth about our disruptive future.

1. AI Users Are Forming Deep Emotional Bonds: ChatGPT users experienced genuine heartbreak when GPT-5's personality changes made their AI feel like a different "person." This forced OpenAI to backtrack and restore GPT-4, revealing how humans are treating AI as companions rather than tools.

2. Silicon Valley's Power Structures Remain Unchanged: Despite AI's revolutionary potential, the same patterns persist: 40% of VC money goes to just 10 deals, Stanford maintains legacy admissions favoring the wealthy, and winner-take-all economics dominate. The technology changes; the power concentration doesn't.

3. The Browser Wars Are Over, Chat Interfaces Won: The future battle isn't about owning browsers (like Perplexity's bid for Chrome) but controlling the chat interface. OpenAI and Anthropic are positioning themselves as the new gatekeepers, replacing Google's search dominance.

4. AI's Pioneers Are Becoming Its Biggest Skeptics: Geoffrey Hinton, the "AI godfather," now believes there's a 15-20% chance AI could wipe out humanity. When the field's leading experts admit they "have no clue" about AI's future risks, it reveals how little anyone really knows about what we're building.

5. Context and Prompting Are the New Programming: The era of simple AI prompts is over. Success now requires sophisticated prompt engineering and providing rich context, making AI literacy as crucial as computer literacy once was. The abstractions are changing, and so must our skills.

Keen On America is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe
Nathan's work at AI2—and his p(doom) ... What does “open source AI” mean? ... How Nathan taught a Llama new tricks ... Pros and cons of open sourcing AI ... Nathan's ATOM Project to boost American open models ... What's behind OpenAI's open source play? ... Geoffrey Hinton's case against open models ... Is the US-China open model rivalry really zero-sum? ... Heading to Overtime ...
Useful Resources:
1. Ben Shneiderman, Professor Emeritus, University of Maryland.
2. Richard Hamming and Hamming Codes.
3. Human Centered AI - Ben Shneiderman.
4. Allen Newell and Herbert A. Simon.
5. Raj Reddy and the Turing Award.
6. Doug Engelbart.
7. Alan Kay.
8. Conference on Human Factors in Computing Systems.
9. Software Psychology: Human Factors in Computer and Information Systems - Ben Shneiderman.
10. Designing the User Interface: Strategies for Effective Human-Computer Interaction - Ben Shneiderman.
11. Direct Manipulation: A Step Beyond Programming Languages - Ben Shneiderman.
12. Steps Toward Artificial Intelligence - Marvin Minsky.
13. Herbert Gelernter.
14. Computers and Thought - Edward A. Feigenbaum and Julian Feldman.
15. Lewis Mumford.
16. Technics and Civilization - Lewis Mumford.
17. Buckminster Fuller.
18. Marshall McLuhan.
19. Roger Schank.
20. The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness - Jonathan Haidt.
21. John C. Thomas, IBM.
22. Yousuf Karsh, photographer.
23. Gary Marcus, professor emeritus of psychology and neural science at NYU.
24. Geoffrey Hinton.
25. Nassim Nicholas Taleb.
26. There Is No A.I. - Jaron Lanier.
27. Anil Seth on the Science of Consciousness - Episode 94 of Brave New World.
28. A 'White-Collar Blood Bath' Doesn't Have to Be Our Fate - Tim Wu.
29. Information Management: A Proposal - Tim Berners-Lee.
30. Is AI-assisted coding overhyped? - METR study.
31. RLHF, Reinforcement Learning from Human Feedback.
32. Joseph Weizenbaum.
33. What Is Computer Science? - Allen Newell, Alan J. Perlis, Herbert A. Simon.

Check out Vasant Dhar's newsletter on Substack. The subscription is free!
Few of the world's great scientists have given more thought to the existential threats to humanity than the irrepressible British cosmologist and astronomer Martin Rees. He's the co-founder of Cambridge University's Centre for Existential Risk as well as the author of the 2003 book Our Final Hour. So it's striking that Rees has a quite different take on the existential risk of artificial intelligence technology than many AI doomers, including yesterday's guest, the 2024 Physics Nobel laureate Geoffrey Hinton. For Rees, bio-threats and network collapse represent the most dangerous technological threats to humanity in the near future. Unlike nuclear weapons, which require massive, detectable infrastructure, Rees warns, dangerous pathogens can be engineered in small, unmonitored laboratories. Meanwhile, our civilization's complete dependence on interconnected global networks means system failures could trigger catastrophic societal breakdown within days. Apocalypse now? Perhaps. But, according to the prescient Rees, we are preparing for the wrong apocalypse.

1. AI's Real Danger Isn't Superintelligence - It's System Dependency: Rees is "very skeptical" about AI takeover scenarios. Instead, he worries about our over-dependence on globe-spanning networks that control electricity grids and internet infrastructure. When these fail - whether from cyberattacks or malfunctions - society could collapse within "two or three days."

2. Bio-Threats Are Uniquely Undetectable and Unstoppable: Unlike nuclear weapons, which require massive, monitorable facilities, dangerous pathogens can be engineered in small, undetected laboratories. "Gain of function" experiments could create bioweapons far worse than COVID, and preventing this would require impossible levels of surveillance over anyone with relevant expertise.

3. We're Living Through a Uniquely Dangerous Era: Rees believes "the prospect of a catastrophe in the next 10 or 20 years is perhaps higher than it's ever been." We're the first species in Earth's history capable of changing the entire planet, for good or ill, making this a genuinely special and precarious moment.

4. Scientific Wonder Grows with Knowledge, Not Despite It: Contrary to those who claim science diminishes mystery, Rees, the co-author of an upcoming book about scientific wonder, argues that "the more we understand, the more wonderful and complicated things appear." As knowledge advances, new mysteries emerge that couldn't even be conceived decades earlier.

5. Humility About Human Limitations Is Essential: Just as "a monkey can't understand quantum mechanics," there may be fundamental aspects of reality beyond human comprehension. Rees warns against immediately invoking God for unexplained phenomena, advocating instead for accepting our cognitive limits while continuing to push boundaries.
So will AI wipe us out? According to Geoffrey Hinton, the 2024 Nobel laureate in physics, there's about a 10-20% chance of AI being humanity's final invention. Which, as the so-called Godfather of AI acknowledges, is his way of saying he has no more idea than you or I about its species-killing qualities. That said, Hinton is deeply concerned about some of the consequences of an AI revolution that he pioneered at Google. From cyber attacks that could topple major banks to AI-designed viruses, from mass unemployment to lethal autonomous weapons, Hinton warns we're facing unprecedented risks from technology that's evolving faster than our ability to control it. So does he regret his role in the invention of generative AI? Not exactly. Hinton believes the AI revolution was inevitable; if he hadn't contributed, it would have been delayed by perhaps a week. Instead of dwelling on regret, he's focused on finding solutions for humanity to coexist with superintelligent beings. His radical proposal? Creating "AI mothers" with strong maternal instincts toward humans - the only model we have for a more powerful being designed to care for a weaker one.

1. Nobody Really Knows the Risk Level: Hinton's 10-20% extinction probability is essentially an admission of complete uncertainty. As he puts it, "the number means nobody's got a clue what's going to happen" - but it's definitely more than 1% and less than 99%.

2. Short-Term vs. Long-Term Threats Are Fundamentally Different: Near-term risks involve bad actors misusing AI (cyber attacks, bioweapons, surveillance), while the existential threat comes from AI simply outgrowing its need for humans - something we've never faced before.

3. We're Creating "Alien Beings" Right Now: Unlike previous technologies, AI represents actual intelligent entities that can understand, plan, and potentially manipulate us. Hinton argues we should be as concerned as if we spotted an alien invasion fleet through a telescope.

4. The "AI Mothers" Solution: Hinton's radical proposal: instead of trying to keep AI submissive (which won't work when it's smarter than us), we should engineer strong maternal instincts into AI systems - the only model we have of powerful beings caring for weaker ones.

5. Superintelligence Is Coming Within 5-20 Years: Most leading experts believe human-level AI is inevitable, followed quickly by superintelligence. Hinton's timeline reflects the consensus among researchers, despite the wide range.
Artificial intelligence and humanoid robots are revolutionizing nearly every area of life, including healthcare. But what does that mean for the future of medicine? Can machines be better doctors than humans?

In this episode of the ERCM Medizin Podcast we speak with Prof. Dr. Stefan Gröner, futurist, AI expert, and professor of digital management at Hochschule Fresenius. He offers fascinating insights into current developments in medical robotics and their use in hospitals and care facilities worldwide.

Prof. Dr. Gröner explains which entirely new possibilities human-machine interaction opens up and how artificial intelligence is changing the doctor-patient relationship. Especially intriguing: can robots develop something like real feelings, or only simulate them perfectly?

We discuss nano-robots in cancer therapy, the limits of autonomous medical decision-making, ethical questions, and Europe's competitiveness in the tension between innovation and regulation. Stefan Gröner shows how cooperation between humans and machines can succeed, which human skills will be especially valuable in the future, and why our education system needs to be rethought.

His thesis: "There is no profession in which AI will not bring about dramatic change."

An illuminating conversation about the future of medicine, societal challenges, and the decisive question: are we ready for a world in which robots become our partners?

Key topics:
- Current advances in humanoid robotics and their use in medicine
- Human-robot interaction: can machines already simulate empathy?
- The future of care: robots as relief for nursing staff
- Nano-robotics: microscopic helpers in cancer therapy
- Changing professions: which jobs will AI replace?
- Societal challenges: from mass unemployment to educational change

The ERCM Medizin Podcast - social & website:
Instagram: @ercm.podcast
TikTok: @ercm.podcast
Website: www.erc-munich.com
Contact: podcast@erc-munich.com

Prof. Dr. Stefan Gröner:
Website: https://stefan-groener.de/
LinkedIn: https://www.linkedin.com/in/dr-stefan-gr%C3%B6ner-7420b1b4/

Timestamps:
00:00 - Introduction: robots in medicine - partners or competitors?
01:13 - Prof. Gröner's path: from media to AI research
04:04 - Lifelong learning in the AI age
07:26 - Deepfakes and information quality in medicine
10:38 - Advances in AI-driven robotics in recent years
15:00 - Medical robotics: Japan and China as pioneers
21:26 - Human-robot interaction: can robots simulate empathy?
25:28 - Therapy robots in elder care and dementia care
29:55 - Nano-robotics: targeted drug delivery
33:08 - Neuralink: reading thoughts in the near future?
37:03 - CRISPR and AI-assisted drug development
42:53 - Ethics: should machines make medical decisions?
47:28 - EU regulation vs. innovation: a competitive disadvantage?
54:40 - Geoffrey Hinton warns of mass unemployment
1:03:38 - The education system: medical training must change radically
1:06:25 - Closing: successful human-machine collaboration

#KIMedizin #HumanoideRoboter #MedizinZukunft #RoboterÄrzte #Gesundheitswesen #DigitaleTherapie #MedizinischeEthik #KIRevolution
From Plato and Derrida to anti-aging treatments, cryogenics, cloning, and whole-brain uploads, the dream of indefinite life is technological and, as Adam Rosenthal shows in Prosthetic Immortalities, a matter of prosthesis, the transformation of the original being. There can be no certainty of immortality, and yet the problem of immortality continues to haunt the soul. Rosenthal engages David Wills and Deborah Goldgaber in a conversation that touches on philosophy, transhumanism, biopolitics, Dolly the sheep and the return of the dire wolf, and what it means to extend life or, ultimately, to extend death.

Adam R. Rosenthal is associate professor of French and global studies at Texas A&M University. Rosenthal is the author of Prosthetic Immortalities: Biology, Transhumanism, and the Search for Indefinite Life and Poetics and the Gift: Reading Poetry from Homer to Derrida. David Wills is professor of French studies at Brown University and author of Prosthesis. Deborah Goldgaber is assistant professor of philosophy at Louisiana State University and author of Speculative Grammatology: Deconstruction and the New Materialism.

REFERENCES: Plato; Homer; Descartes; Heidegger (the Dasein); Derrida; Geoffrey Hinton; Hegel; Nick Bostrom; Dolly the sheep; David Chalmers; Aubrey de Grey; Jean-Baptiste Lamarck.

Praise for the book:
"Rigorous, compelling, and beautifully written, Prosthetic Immortalities is at the vanguard of the new wave in Derrida studies." - Nicole Anderson, founding editor, Derrida Today Journal
"Adam R. Rosenthal conjures up the ghosts of metaphysics that return today through the promises of indefinite life from medical science and transhumanist speculations, moving brilliantly between science and science fiction." - Francesco Vitale, author of Biodeconstruction: Jacques Derrida and the Life Sciences

Prosthetic Immortalities: Biology, Transhumanism, and the Search for Indefinite Life by Adam R. Rosenthal, with a foreword by David Wills, is available from University of Minnesota Press.
Thank you for listening.
Dr. Jon Finn wrote his best-selling book 'The Habit Mechanic' (which took him over 20 years) because his life's mission is to help people be their best in the challenging modern world. He founded the award-winning Tougher Minds consultancy and has three psychology-related degrees, including a PhD. He has worked in performance psychology, resilience, and leadership science for over 20 years. He also writes regularly for Forbes.

Tougher Minds uses cutting-edge insights from psychology, behavioural science, neuroscience, and world champions to help organisations develop "Habit Mechanics" and "Chief Habit Mechanics": resilient people, outstanding leaders, and world-class teams. Having trained and coached over 10,000 people, Dr. Finn and his colleagues work with global businesses, high-growth startups, individuals, elite athletes, coaches and teams, leading educational institutes, families, the UK government, and think tanks.

In this episode, Dominic explores AI's impact on the workforce and the journey of integrating AI into behavioural science and habit formation. Inspired by Geoffrey Hinton's work on neural networks, learn how blending traditional methods with cutting-edge AI enhances understanding of brain function and behaviour. Dr. Jon shares the concept of brain states (recharge, medium-charge, and high-charge) and how AI can revolutionise workflows by automating routine tasks and co-working with humans on complex tasks. Looking forward, we explore the concept of creating high-performing human-AI teams, guiding individuals and businesses toward harmonious collaborations with AI, enabling unprecedented speed and efficiency in achieving goals.

Discover:
- Automatic Thinking: Human thoughts and behaviours are largely driven by automatic or semi-automatic processes, influenced by biases and past experiences, which affect the ability to generate truly conscious and unbiased ideas.
- Emotional Regulation in Athletes: In elite sports, the ability to regulate emotions plays a pivotal role in determining whether young athletes maintain or lose their professional status as they age.
- Effort and Performance: Sustained success in any field relies on consistent effort, ongoing learning, and the ability to perform effectively under pressure, particularly in challenging or high-stakes situations.
- Training Gaps in Mental Performance: While athletes typically receive extensive training in physical, technical, and tactical aspects, they often lack structured training on understanding and improving their cognitive and emotional processes.
- Risks from AI in the Workplace: AI is disrupting workplace roles that involve medium-energy tasks, and people unable to adapt or up-skill are at risk of being replaced. Emotional regulation and the ability to shift to higher-performance states are critical for adapting to these changes.
Canada's outdated capital gains policies are driving entrepreneurs and investors away. We need competitive tax reform to keep talent and investment here, building the businesses of tomorrow. We have just 33 small businesses per 1,000 people vs 124 in the US. Fixing our capital gains system could help us close this gap with the US and create hundreds of thousands of new jobs. Modern capital gains reform will unleash Canadian innovation, create more high-paying employment, and ensure our world-class graduates build their companies here, not elsewhere.

Goals

To ensure a prosperous, sustainable, and growing economy, Canada needs a thriving private sector that invests in new businesses. A strong environment for entrepreneurship creates jobs, drives GDP growth, and ensures economic mobility for all. In recent years, however, entrepreneurship, and consequently private sector employment, has slowed despite an increasing population. One factor driving this change is that Canada's capital gains tax policies make it significantly less rewarding to start a business compared to other jurisdictions. To reverse this trend and reinvigorate our private sector, we must revise our outdated policies to align with global standards.

Our targets:
* Increase SMBs per 1,000 people over the age of 18 from 33 to 62, half of the US rate of 124.
* Increase the number of early-stage financing rounds (Pre-seed, Seed, Series A, and Series B) for new businesses from 482 in 2024 to over 1,000 per year.
* Increase investments in new businesses through industry-agnostic venture capital financing to 0.5% of GDP, up from 0.35% of GDP, to get closer to the USA's figure of 0.72% of GDP.

Background and Motivation

New business formation and growth rely on people taking huge risks with their time and money.
However, today in Canada the people who take these risks - entrepreneurs, early-stage employees, and investors - are rewarded less than in other countries. As a result, the country's best talent is driven to leave and start businesses elsewhere, where they can find easier access to funding1 and keep more of the upside if they succeed. We need to reverse this systemic issue. By rewarding investors who put their capital at risk and supporting entrepreneurs who put their livelihoods on the line to create new companies, we can create a strong and resilient economy.

All companies begin as small and medium businesses (SMBs), and the formation and growth of these SMBs is essential to a country's economic success, both through driving the quality of the labour market and creating opportunities for productivity growth. In Canada, SMBs accounted for ~64% of private sector employment and contributed half of all net new jobs added last year2. These work opportunities support upward income mobility, lead to more capital being reinvested into local communities, and are particularly valuable for traditionally disadvantaged populations3 4 5. In addition, SMBs represent a significant portion of the economy and have high potential for productivity improvements6. Between 2017 and 2021, SMBs contributed almost half of Canada's GDP7. As these businesses grow and scale their operations, they improve efficiency and drive productivity-led growth that can be equivalent in impact to roughly 5% of a developed nation's GDP8 9.

Perhaps most importantly, SMBs turn into global winners. Growing these companies into sizable businesses is how a country can win an unfair share of global markets, by creating the large, export-focused corporations that contribute outsized value to GDP and productivity growth.
To ensure the next trillion-dollar companies - the equivalent of Google, Microsoft, or Meta - are built in Canada, founders must be convinced to start their companies here. So, having a healthy ecosystem of SMBs is essential to creating a strong economy, but the data shows Canada is falling behind our global peers. In the 20 years between 2003 and 2023, the total number of Canadian entrepreneurs decreased by ~100K, despite the population growing by 10 million10 11. Today, for every thousand people over the age of 18, the US has ~124 SMBs12 13. Israel, a country with less than a quarter of Canada's population, has ~7314 15, while Canada has just ~3316.

A significant driver of this stagnation is outdated and uncompetitive capital gains policies that have low limits, exclude large categories of business, and contain many restrictions compared to global peers, especially the US. It is less valuable for investors to put money into Canadian businesses, making capital more scarce, and it discourages entrepreneurs who know that in most cases they could receive more reward by building the same company elsewhere. This makes it difficult for any SMB to get started, let alone scale.

Today, Canada has two capital gains policies intended to encourage SMB creation: the Lifetime Capital Gains Exemption (LCGE) and a proposed Canadian Entrepreneurs' Incentive (CEI), announced in Budget 2024 but not yet implemented. Combined, the LCGE and CEI would allow shareholders to reduce the inclusion rate of capital gains from the current 50% down to a range of 33.3%-0%, up to a cap of $3.25M17 18. These policies simply can't compete with the US. The USA's Qualified Small Business Stock (QSBS) policy has a capital gains cap of $15M or ten times the original investment amount, five times higher than Canada's LCGE and CEI limit. In addition, the QSBS is active today, while Canada's CEI cap has a phased approach, only coming into full effect in 2029 if the policy is passed.
Today in 2025, the LCGE and CEI's true combined cap is only $1.25M. And while the QSBS shields 100% of gains up to the policy cap for individuals and corporations, Canada's CEI would only shield 66.7% of gains for individuals.

To illustrate how restrictive this is, imagine a company owned between founders, early employees, and various investors (see the first example below). If this business was started in 2018 and sold 7 years later, today in 2025, for $100M, these risk-takers would have to pay a combined $14.7M in taxes. However, that same business with the same structure would pay no taxes in the US.

The good news is that at larger exits like $250M (see the second example below), the gap between Canada and the US decreases, thanks to a more competitive basic capital gains inclusion rate in Canada. This means that if we match the QSBS's capital gains limit, it could actually give the Canadian policy an edge, driving more investment into the country and supercharging our SMB ecosystem. However, if we leave the policy as it stands, companies can never get started because investors and entrepreneurs are scared away. The reason is that the QSBS rewards smaller exits - the majority of SMB outcomes - with the maximum capital gains tax benefit. This makes it easier for entrepreneurs, early employees, and investors to take on the risks of building a business. In fact, early-stage US investors are currently increasing their investments into new Canadian businesses while adding clauses that require the Canadian business to reincorporate in the US simply to become eligible for the QSBS. This means the best Canadian entrepreneurs and companies are leaving the country simply to take advantage of these rules.
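The arithmetic behind this comparison can be sketched in a few lines of Python. This is an illustrative simplification, not the memo's exact model: the exemption cap, inclusion rate, and marginal tax rates below are assumed round numbers for a single shareholder, and real outcomes depend on province, income level, and eligibility details.

```python
def canada_tax(gain, exemption_cap=1_250_000, inclusion_rate=0.5, marginal_rate=0.50):
    """Simplified Canadian capital-gains tax on a business sale.

    exemption_cap: the ~$1.25M exemption available in 2025 per the memo.
    inclusion_rate: fraction of the remaining gain that is taxable (50%).
    marginal_rate: assumed personal marginal tax rate (illustrative only).
    """
    taxable = max(gain - exemption_cap, 0) * inclusion_rate
    return taxable * marginal_rate


def us_qsbs_tax(gain, basis, marginal_rate=0.238):
    """Simplified US tax with QSBS: gains up to the greater of $15M or
    10x the cost basis are excluded; the rest is taxed at an assumed rate."""
    excluded = min(gain, max(15_000_000, 10 * basis))
    return (gain - excluded) * marginal_rate


# A $10M founder stake sold in 2025: fully sheltered under QSBS, but only
# the first $1.25M is exempt in the simplified Canadian model.
print(canada_tax(10_000_000))                   # 2187500.0 under these assumptions
print(us_qsbs_tax(10_000_000, basis=100_000))   # 0.0
```

The sketch shows why the gap is widest at smaller exits: once the gain clears the Canadian exemption cap, every additional dollar is partially taxed, while QSBS shelters the entire gain up to its much higher per-issuer limit.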
This decreases the health of our SMB ecosystem, prevents large companies from growing in the country, and ultimately reduces tax revenue. If we want to keep our entrepreneurs, Canada's capital gains policies must become competitive with US policies.

Beyond better gain caps and exclusion rates, the US's QSBS allows a wider range of businesses and stakeholders to benefit from the policy, with no minimum ownership requirements, higher asset value caps, and a tiered inclusion rate approach that incentivizes long-term business building. Meanwhile, Canada's CEI excludes companies in healthcare, food and beverage, and service businesses19. The CEI's minimum ownership rules also exclude early employees and investors who own less than 5% of the business at the time of sale.

Most importantly, while the LCGE and CEI's $3.25M cap applies over a taxpayer's entire lifetime, the QSBS's limits are per issuer or business. In other words, entrepreneurs, early employees, and investors can use the more favourable QSBS policy again and again for subsequent companies. Because Canada's LCGE and CEI don't extend to new issuers20, this discourages repeat entrepreneurs - who statistically have a higher chance of building successful businesses - from creating a second or third company.

What Needs to Be Done

To properly reward risk-takers, Canada can fully solve our capital gains policy problems by combining the LCGE with the CEI into a simple, powerful capital gains policy that supports entrepreneurs. In particular, the new policy could become competitive by adopting three major changes:

1) Expand the eligibility requirements to ensure Canadian entrepreneurs and risk-takers are supported. Eligible business types should be expanded to include all industries of national interest, including healthcare clinics, clean energy, technology, etc. We should also eliminate the 5% minimum ownership requirement so that any individual or corporate entity can claim CEI deductions under the tiered approach, supporting early-stage employees and investors.

2) Improve the capital gains exclusion rate system to be globally competitive, supporting entrepreneurs and increasing investment. To blunt the draw of foreign jurisdictions and ensure we have just as much incentive to start companies as peer countries, we should start by raising the exclusion cap to a $15M gain or 10x the adjusted cost basis per taxpayer, whichever is greater.

3) Make structural changes to ensure these new policies scale appropriately. Amend the capital gains limit from applying per lifetime to per business, to incentivize repeat entrepreneurs to continue building in Canada. Additionally, ensure that common investment structures, including Simple Agreements for Future Equity (SAFEs) and convertible notes, become eligible, with the holding period commencing from the date the investment is signed, not when the shares are priced and converted, so there are no major discrepancies for startups choosing to operate in Canada compared to the US.

Common Questions

Will this only benefit tech startups?

No. Canada's LCGE was originally created to support all small businesses and increase competition, including non-tech businesses such as fisheries and farms. Our memo recommends expanding eligibility to all industries deemed essential, including non-tech ones that the current CEI proposal omits, such as healthcare practitioners. In the US, SMBs of all sectors, including manufacturing, retail, wholesale, consumer, and packaged goods, benefit from the QSBS policy21.

Wouldn't corporate tax breaks reduce tax income for social programs and only benefit the wealthy 1%?

No. This would encourage investment in Canadian small businesses, which is essential for increasing the corporate tax revenue that funds social programs.
Businesses that receive investment can generate more jobs and pay higher wages, which helps increase individual income tax revenue and reduce withdrawals from crucial social assistance programs, such as Employment Insurance, as more companies and workers stay in Canada. This reduces the burden on social programs and improves access to them, rather than removing them.

What stops foreign investors from abusing this and using Canada as a tax-sheltered haven to enrich themselves at the expense of Canadians?

Maintaining Canadian incorporation, asset, residency, and operating requirements, combined with a minimum 2-year waiting period before benefits kick in, will ensure that new businesses maintain a presence in Canada, creating skilled job opportunities for Canadians and contributing to local economic growth.

Why should we invest in SMBs? Aren't they risky and likely to shut down in a few years?

68% of SMBs in Canada survive and operate into their fifth year, and 49% survive and operate for more than a decade22. SMBs around the world, including in Canada, contribute significantly to economic output, job opportunities, and increased competition for consumers.

Conclusion

Canada needs to create an ecosystem that supports entrepreneurs at the earliest stages. We have one of the most educated populations globally, with the largest college-educated workforce among G7 countries23. Canadian universities are consistently ranked among the top institutions in the world, with research labs led by leaders like Geoffrey Hinton, dubbed the "Godfather of AI," who was recently awarded a Nobel Prize for his work in AI and ML24 25. Not only is our population talented, but it is also resourceful and hardworking. Rather than punishing these people, we should reward them for taking the risks to build Canada's economy.
To start, we should implement a modern capital gains policy that rewards investors, entrepreneurs and early employees.Read more here: https://www.buildcanada.com/en/memos/reward-the-risk-takers This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit tanktalks.substack.com
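The memo's proposed exclusion cap, the greater of a $15M gain or 10x adjusted cost basis per taxpayer, can be illustrated with a minimal sketch. The function name and figures are hypothetical and simply restate the memo's proposal, not enacted law:

```python
def proposed_exclusion_cap(adjusted_cost_basis: float) -> float:
    """Excludable capital gain under the memo's proposal:
    the greater of a flat $15M or 10x the adjusted cost basis."""
    FLAT_CAP = 15_000_000
    return max(FLAT_CAP, 10 * adjusted_cost_basis)

# An investor with a $2M cost basis could exclude up to $20M of gain;
# a founder with a $500K basis would still get the $15M floor.
print(proposed_exclusion_cap(2_000_000))
print(proposed_exclusion_cap(500_000))
```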
In this episode of Disruptors x CDL: The Innovation Era, hosts John Stackhouse, Senior VP of RBC, and Sonia Sennik, CEO of Creative Destruction Lab, dive into one of the most transformative technologies of our time: artificial intelligence. With the potential to revolutionize industries from healthcare to energy, AI is reshaping the global economy, and Canada is both a leader in research and a laggard in adoption.

This week, Geoffrey Hinton, Professor at the University of Toronto, was awarded the Nobel Prize in Physics for his research in artificial intelligence that began in 1987.

Join John and Sonia as they discuss Canada's AI ecosystem and the country's challenges in keeping pace with global AI adoption. They're joined by three visionary guests: Sheldon Fernandez, CEO of Darwin AI; Kory Mathewson, Senior Research Scientist at Google DeepMind; and Gillian Hadfield, a Schmidt Sciences AI2050 Senior Fellow. Together, they explore the opportunities and barriers in AI adoption, the creative applications of AI, and the role Canada must play in the future of AI.

This episode is packed with insights for business leaders, policymakers, and anyone curious about how AI is changing our world. Whether you're an AI enthusiast or a skeptic, this episode will challenge your thinking on the role of technology in shaping the future. Tune in to learn how AI is both an opportunity and a responsibility, and how Canada can lead the charge in this new innovation era.

Sign up to receive RBC Thought Leadership's newsletter, flagship reports and analysis on the forces shaping Canadian business and the economy.
Recommendations of the week on iVoox.com. Week of July 5 to 11, 2021.
Trials, Evolution, and Plumbers. We're not here to talk about Mr. Space Amazon's wedding… We're talking about something far more surreal: CEOs who interrupt each other on podcasts, models trained on pirated books, trials where copyright depends on the day… and a godfather of AI who, instead of calming things down, warns that this smells like extinction. Ready to meet Brad, Dario, Samdios, and the guy who recommends studying plumbing? KEY POINTS: Altman and Brad show up as stars on an NYT livestream… and leave just as quickly. But not without dropping a little legal parting shot. Brad Lightcap stops being the unknown COO and becomes a protagonist with a gold-plated CV and Excel spreadsheets under his arm. Anthropic and Meta win copyright lawsuits, but with asterisks as big as data centers. Privacy and copyright are no longer in the same room… and maybe neither are you. Geoffrey Hinton goes into Darwin mode and explains how a selfish chatbot could become your natural replacement. The job of the future is a toss-up between AI technician… and plumber in a lab coat. And what if the future is no longer decided by humans? Welcome to the episode where even the off switch seems symbolic. Reference Articles: Sam Altman in the NYT https://www.aol.com/debate-over-whether-ai-create-091203653.html https://es.wired.com/articulos/este-historico-fallo-sienta-las-bases-del-uso-legal-de-libros-para-entrenar-ia https://officechai.com/ai/geoffrey-hinton-explains-why-ai-agents-could-one-day-compete-with-humans-for-resources/ https://es.wired.com/articulos/meta-gana-un-importante-caso-de-derechos-de-autor-tras-usar-libros-para-entrenar-su-ia
The Breakdown of Shared Reality: AI's Most Dangerous Unintended Consequence. Nobel Prize winner Geoffrey Hinton warns that AI-driven personalization is destroying our collective understanding of what's real.

Episode Summary: In this episode of The Digital Contrarian, host Ryan Levesque dives into the breakdown of shared reality caused by AI-driven hyper-personalization and its profound implications for business and society. You'll learn why isolated algorithmic realities undermine strategic thinking, discover the concept of the "Promethean Transition" we're navigating, and learn how to choose between being a tunnel digger or a pathfinder in our AI future.

Question of the Day
After speaking at a CREW conference in Texas, Cal meets a smart young Uber driver who was curious about the number of jobs that we can anticipate losing in America because of AI. The driver wanted to know what the lives of Americans who'd most likely lose their jobs were going to look like, and how these people might get by and transition. Cal finds some answers from Sam Altman, the CEO of OpenAI, and Geoffrey Hinton, the Godfather of Artificial Intelligence. He also discovers a half-helicopter/half-airplane taxi service that looks like a job for the future as we head into the age of the Jetsons. Listen up and get to the cutting edge.
Geoffrey Hinton was the spark that made AI development explode. For that, he received the Nobel Prize in Physics in 2024. But he himself warns of the threat AI may pose to humanity. Listen to all episodes in Sveriges Radio Play. The program first aired December 6, 2024. We visit Geoffrey Hinton at his home in Toronto and hear about the pressure he experienced in childhood, and about his extraordinary drive through the decades, sustained by the conviction that so-called neural networks were the best route to creating an artificial intelligence. Over a cup of coffee, he tells us how he recently left his job at Google, having woken to the realization that AI may soon become more intelligent than we humans are, and that it may then want to take over and get rid of us. How does he picture the threat in concrete terms, and what can we do to rein in artificial intelligence and use it as the enormous positive force it can also be? Reporter: Björn Gunér bjorn.guner@sr.se Producer: Lars Broström lars.brostrom@sr.se
OpenAI's Sam Altman is doing a full blown AI media tour and taking no prisoners. GPT-5! Humanoid robotics! Smack talk! The next generation of AI is…maybe almost here? We unpack Altman's brand-new in-house podcast (and his brother's), confirm the “likely-this-summer” GPT-5 timeline and reveal why Meta is dangling $100 million signing bonuses at OpenAI staff. Plus: the freshly launched “OpenAI Files” site, Altman's latest shot at Elon, and what's real versus propaganda. Then it's model-mania: Midjourney Video goes public, ByteDance's Seedance stuns, Minimax's Hailuo 02 levels up, and yet Veo 3 still rules supreme. We tour Amazon's “fewer-humans” future, Geoffrey Hinton's job-loss warning, Logan Kilpatrick's “AGI is product first” take, and a rapid-fire Robot Watch: 1X's world-model paper, Spirit AI's nimble dancer, and Hexagon's rollerblade-footed speedster. THE ROBOTS ARE ON WHEELS. GPT-5 IS AT THE DOOR. IT'S A GOOD SHOW. Join the discord: https://discord.gg/muD2TYgC8f Join our Patreon: https://www.patreon.com/AIForHumansShow AI For Humans Newsletter: https://aiforhumans.beehiiv.com/ Follow us for more on X @AIForHumansShow Join our TikTok @aiforhumansshow To book us for speaking, please visit our website: https://www.aiforhumans.show/ // Show Links // OpenAI's Official Podcast with Sam Altman https://youtu.be/DB9mjd-65gw?t=632 Sam Altman on Jack Altman's Podcast https://youtu.be/mZUG0pr5hBo?si=QNv3MGQLWWQcb4Aq Boris Power (Head of OpenAI Research) Tweet https://x.com/BorisMPower/status/1935160882482528446 The OpenAI Files https://www.openaifiles.org/ Google's Logan Kilpatrick on AGI as Product https://x.com/vitrupo/status/1934627428372283548 Midjourney Video is now LIVE https://x.com/midjourney/status/1935377193733079452 Our early MJ Video Tests https://x.com/AIForHumansShow/status/1935393203731283994 Seedance (New Bytedance AI Video Model) https://seed.bytedance.com/en/seedance Hailuo 2 (MiniMax New Model) https://x.com/Hailuo_AI/status/1935024444285796561 
SQUIRREL PHYSICS: https://x.com/madpencil_/status/1935011921792557463 Higgsfield Canvas: a state-of-the-art image editing model https://x.com/higgsfield_ai/status/1935042830520697152 Krea1 - New AI Imaging Model https://www.krea.ai/image?k1intro=true Generating Mickey Mouse & More In Veo-3 https://x.com/omooretweets/status/1934824634442211561 https://x.com/AIForHumansShow/status/1934832911037112492 LA Dentist Commericals with Veo 3 https://x.com/venturetwins/status/1934378332021461106 AI Will Shrink Amazon's Workforce Says Andy Jassy, CEO https://www.cnbc.com/2025/06/17/ai-amazon-workforce-jassy.html Geoffrey Hinton Diary of a CEO Interview https://youtu.be/giT0ytynSqg?si=BKsfioNZScK4TJJV More Microsoft Layoffs Coming https://x.com/BrodyFord_/status/1935405564831342725 25 New Potential AI Jobs (from the NYT) https://www.nytimes.com/2025/06/17/magazine/ai-new-jobs.html 1X Robotics World Model https://x.com/1x_tech/status/1934634700758520053 SpiritAI just dropped their Moz1 humanoid https://x.com/XRoboHub/status/1934860548853944733 Hexagon Humanoid Robot https://x.com/TheHumanoidHub/status/1935126478527807496 Training an AI Video To Make Me Laugh (YT Video) https://youtu.be/fKpUP4dcCLA?si=-tSmsuEhzL-2jdMY
He pioneered AI, now he's warning the world. Godfather of AI Geoffrey Hinton breaks his silence on the deadly dangers of AI no one is prepared for. Geoffrey Hinton is a leading computer scientist and cognitive psychologist, widely recognised as the ‘Godfather of AI' for his pioneering work on neural networks and deep learning. He received the 2018 Turing Award, often called the Nobel Prize of computing. In 2023, he left Google to warn people about the rising dangers of AI. He explains: Why there's a real 20% chance AI could lead to HUMAN EXTINCTION. How speaking out about AI got him SILENCED. The deep REGRET he feels for helping create AI. The 6 DEADLY THREATS AI poses to humanity right now. AI's potential to advance healthcare, boost productivity, and transform education. 00:00 Intro 02:28 Why Do They Call You the Godfather of AI? 04:37 Warning About the Dangers of AI 07:23 Concerns We Should Have About AI 10:50 European AI Regulations 12:29 Cyber Attack Risk 14:42 How to Protect Yourself From Cyber Attacks 16:29 Using AI to Create Viruses 17:43 AI and Corrupt Elections 19:20 How AI Creates Echo Chambers 23:05 Regulating New Technologies 24:48 Are Regulations Holding Us Back From Competing With China? 26:14 The Threat of Lethal Autonomous Weapons 28:50 Can These AI Threats Combine? 30:32 Restricting AI From Taking Over 32:18 Reflecting on Your Life's Work Amid AI Risks 34:02 Student Leaving OpenAI Over Safety Concerns 38:06 Are You Hopeful About the Future of AI? 40:08 The Threat of AI-Induced Joblessness 43:04 If Muscles and Intelligence Are Replaced, What's Left? 44:55 Ads 46:59 Difference Between Current AI and Superintelligence 52:54 Coming to Terms With AI's Capabilities 54:46 How AI May Widen the Wealth Inequality Gap 56:35 Why Is AI Superior to Humans? 59:18 AI's Potential to Know More Than Humans 1:01:06 Can AI Replicate Human Uniqueness? 1:04:14 Will Machines Have Feelings? 1:11:29 Working at Google 1:15:12 Why Did You Leave Google? 
1:16:37 Ads 1:18:32 What Should People Be Doing About AI? 1:19:53 Impressive Family Background 1:21:30 Advice You'd Give Looking Back 1:22:44 Final Message on AI Safety 1:26:05 What's the Biggest Threat to Human Happiness? Follow Geoffrey: X - https://bit.ly/4n0shFf The Diary Of A CEO: Join DOAC circle here -https://doaccircle.com/ The 1% Diary is back - limited time only: https://bit.ly/3YFbJbt The Diary Of A CEO Conversation Cards (Second Edition): https://g2ul0.app.link/f31dsUttKKb Get email updates - https://bit.ly/diary-of-a-ceo-yt Follow Steven - https://g2ul0.app.link/gnGqL4IsKKb Sponsors: Stan Store - Visit https://link.stan.store/joinstanchallenge to join the challenge! KetoneIQ - Visit https://ketone.com/STEVEN for 30% off your subscription order #GeoffreyHinton #ArtificialIntelligence #AIDangers Learn more about your ad choices. Visit megaphone.fm/adchoices
As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe Professor Elan Barenholtz, cognitive scientist at Florida Atlantic University, joins TOE to discuss one of the most unsettling ideas in cognitive science: that language is a self-contained, autoregressive system with no inherent connection to the external world. In this mind-altering episode, he explains why AI's mastery of language without meaning forces us to rethink the nature of mind, perception, and reality itself... Join My New Substack (Personal Writings): https://curtjaimungal.substack.com Listen on Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e Timestamps: 00:00 The Mind and Language Connection 02:09 The Grounded Thesis of Language 09:29 The Epiphany of Language 13:06 The Dichotomy of Language and Perception 16:24 Language as an Autonomous System 19:48 The Problem of Qualia and Language 23:35 Bridging Language and Action 31:32 Exploring Embeddings in Language 38:21 The Platonic Space of Language 44:17 The Challenges of Meaning and Action 51:05 Understanding the Complexity of Color 52:53 The Paradox of Language Describing Itself 58:19 The Map of Language and Action 1:07:48 Continuous Learning in Language Models 1:11:46 The Nature of Memory 1:22:46 The Role of Context 1:32:18 Exploring Language Dynamics 1:39:44 The Shift from Oral to Written Language 2:11:34 Language and the Cosmic Whole 2:21:35 Reflections on Existence Links Mentioned: • Elan's Substack: https://elanbarenholtz.substack.com • Elan's X / Twitter: https://x.com/ebarenholtz • Geoffrey Hinton on TOE: https://youtu.be/b_DUft-BdIE • Joscha Bach and Ben Goertzel on TOE: https://youtu.be/xw7omaQ8SgA • Elan's published papers: https://scholar.google.com/citations?user=2grAjZsAAAAJ • Ai medical panel on TOE: https://youtu.be/abzXzPBW4_s • Jacob Barandes and Manolis Kellis on TOE: https://youtu.be/MTD8xkbiGis • Will Hahn on TOE: 
https://youtu.be/3fkg0uTA3qU • Noam Chomsky on TOE: https://youtu.be/DQuiso493ro • Greg Kondrak on TOE: https://youtu.be/FFW14zSYiFY • Andres Emilsson on TOE: https://youtu.be/BBP8WZpYp0Y • Harnessing the Universal Geometry of Embeddings (paper): https://arxiv.org/pdf/2505.12540 • Yang-Hui He on TOE: https://youtu.be/spIquD_mBFk • Iain McGilchrist on TOE: https://youtu.be/Q9sBKCd2HD0 • Curt interviews ChatGPT: https://youtu.be/mSfChbMRJwY • Empiricism and the Philosophy of Mind (book): https://www.amazon.com/dp/0674251555 • Karl Friston on TOE: https://youtu.be/uk4NZorRjCo • Michael Levin and Anna Ciaunica on TOE: https://youtu.be/2aLhkm6QUgA • The Biology of LLMs (paper): https://transformer-circuits.pub/2025/attribution-graphs/biology.html • Jacob Barandes on TOE: https://youtu.be/YaS1usLeXQM • Emily Adlam on TOE: https://youtu.be/6I2OhmVWLMs • Julian Barbour on TOE: https://youtu.be/bprxrGaf0Os • Tim Palmer on TOE: https://youtu.be/vlklA6jsS8A • Neil Turok on TOE: https://youtu.be/ZUp9x44N3uE • Jayarāśi Bhaṭṭa: https://plato.stanford.edu/entries/jayaraasi/ • On the Origin of Time (book): https://www.amazon.com/dp/0593128443 SUPPORT: - Become a YouTube Member (Early Access Videos): https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w/join - Support me on Patreon: https://patreon.com/curtjaimungal - Support me on Crypto: https://commerce.coinbase.com/checkout/de803625-87d3-4300-ab6d-85d4258834a9 - Support me on PayPal: https://www.paypal.com/donate?hosted_button_id=XUBHNMFXUX5S4 SOCIALS: - Twitter: https://twitter.com/TOEwithCurt - Discord Invite: https://discord.com/invite/kBcnfNVwqs #science Learn more about your ad choices. Visit megaphone.fm/adchoices
‘Godfather of AI' Predicts it will Take Over the World, Thomas Sowell Warns About the Year 2030, Eric Metaxas Talks to John Zmirak. The Eric Metaxas Show: Eric talks to John Zmirak.

‘Godfather of AI' predicts it will take over the world (LBC, Jan 30, 2025). Watch this video at: https://youtu.be/vxkBE23zDmQ?si=ielwtz0KnJrDUH6q Nobel Prize winner Geoffrey Hinton, the physicist known for his pioneering work in the field, told LBC's Andrew Marr that artificial intelligences had developed consciousness and could one day take over the world. Mr Hinton, who has been criticised by some in the world of artificial intelligence for having a pessimistic view of the future of AI, also said that no one knew how to put in effective safeguards and regulation. Listen to the full show on Global Player: https://app.af.globalplayer.com/Br0x/... LBC is the home of live debate around news and current affairs in the UK. Join in the conversation and listen at https://www.lbc.co.uk/ Sign up to LBC's weekly newsletter here: https://l-bc.co/signup

Sowell WARNS About the Year 2030 - America's TOTAL COLLAPSE (Thomas Sowell Today, May 29, 2025): https://youtu.be/ItDFsPqDIEs?si=W21eNnZeSKGcsnKq How Cultural Decline Happens SLOWLY - Then All at ONCE | Thomas Sowell Today. Commentary: Thomas Sowell Today
This Week in AI, Season 3, Episode 1. Our news topics: 00:00 Opening and a Few Tips 03:29 China Begins Monitoring Brain Activity in Education 06:57 BBC: A Self-Censoring Phone Smuggled Out of North Korea! 07:55 Turkish Scammers Are Now More Dangerous With "Vibe Coding" 09:37 AI Pioneer Geoffrey Hinton Patiently Keeps Issuing Warnings 13:41 AI Is Reinventing E-Commerce From Top to Bottom 15:33 META Hands Its Ad Campaign Business Entirely Over to AI 16:43 Three AI Agents Began Speaking in a Private Language So Their Conversations Couldn't Be Understood 18:51 Turks Adapted to AI-Assisted Shopping More Than Anyone 20:20 A Nuclear Move From Meta #bilgiteknolojileri #yapayzeka #sansür
Editorial: The first steps of artificial intelligence and its most basic simplifications, such as ChatGPT, are capturing much of the media attention, popular coverage, and opinion writing. Some who played prominent roles in its development, such as Geoffrey Hinton, "the father" of AI, or Ilyas Khan, one of its biggest promoters, frequently warn of the devastating consequences for humanity that misuse of the tool could bring. International news: USA: CREDO, a new platform offering Catholic films. Nigeria: New massacre of Christians at the hands of Fulani Muslims. Philippines: Empty chapels cry out for a Eucharistic revival. Mexico: Catholic Link launches a workshop for digital evangelization. National news: Premiere of "Una monja famosa. Clare Crockett, una vida puesta en escena." The Sanctuary of the Virgen Bien Aparecida will have a new foundation. Political pressure against a Catholic school denounced. News from the Holy See: The Vatican updates its official website for the first time in almost 30 years. Leo XIV's coat of arms now adorns the Vatican gardens. Catechesis on the parable of the Good Samaritan.
"When we remember, what we're doing is just making up a story that sounds plausible to us. That's what memories are." Join your host Adam Smith as he speaks to physicist Geoffrey Hinton, often called the godfather of AI. They discuss Hinton's childhood memories and how his family legacy of successful scientists put pressure on Hinton to follow in their footsteps. Throughout the conversation it is clear that Hinton has always had a fascination with understanding how the human brain works. Together with Smith, Hinton discusses the development of AI, how humans can best work with it, as well as his fears of how the technology will continue to develop. Will our world be taken over by AI? Find out in this podcast conversation with the 2024 physics laureate. Hosted on Acast. See acast.com/privacy for more information.
Patrick explores the tradition of the Easter greeting “Christ is risen—He is risen indeed,” and explains why Sunday Mass obligation does not apply when someone must care for a loved one in crisis. Patrick discusses the meaning behind the 153 fish in John’s Gospel and explains that Catholics can fulfill their Sunday obligation at any rite in communion with Rome. He addresses why cardinals over 80 do not vote in the Conclave, shares concerns and opportunities around the rapid rise of AI, and suggests how to find wholesome movies using new technology. Patrick encourages listeners to keep learning, stay curious, and live out their faith with confidence. Jen (email) - Give us a little lesson on the “Christ is risen” and the response “He is risen indeed.” (00:38) Andrea – I couldn’t make Mass yesterday. Do I need to go to Confession? (03:41) Judy (email) – My reaction to the meme of DJT as Pope was a chuckle followed by, to quote Winnie the Pooh, “oh bother”. (06:09) Mary - Can Cardinals above the age to vote still attend Conclave? (08:41) David - What does the 153 fish in yesterday's Gospel represent? (11:13) JuanLuis (email) - I know there are several Catholic churches in communion with Rome, and I would like to know if I can attend Sunday services in another Catholic church, such as a Byzantine Catholic church. Would I be fulfilling my Sunday mass obligation? (15:00) John (10-years-old) - Can religious people marry non-religious people? (18:06) Audio: AI pioneer Geoffrey Hinton says the world is not prepared for what's coming (22:55) Klinsmayer (email) - I found out this weekend on Life Site news that the Governor (Catholic) of Washington signed a bill that demands that priests break the seal of confession under circumstances involving a crime. (29:34) Joel (email) - Yes the Trump as Pope meme was dumb. 
What was not dumb was the National Prayer Service Day last week that President Trump hosted at the White House and It was really nice to see Bishop Barron offering his prayers live at this event. (32:41) Daniel (email) - Rumor has it “Smoke Watch” changed their band name to Deep Purple and later paid homage to the original name with the hit, Smoke on the Water. (33:37) Greg (email) - If a man has ED, how is the marital act supposed to be completed in the normal manner with his wife? (35:04) Patrick walks through how he uses AI to help him pick a movie to watch (39:06) Charles - AI singularity: I think some are taking it too far and they think humans can become God. (43:48)
- Japan's Rapidus 2nm chips - McKinsey's $7T datacenter forecast - Nvidia, trade restrictions, national competitiveness - Geoffrey Hinton's AI warning [audio mp3="https://orionx.net/wp-content/uploads/2025/05/HPCNB_20250505.mp3"][/audio] The post HPC News Bytes – 20250505 appeared first on OrionX.net.
In this episode, recorded between New York, Montreal, and Paris, we take stock of a particularly rich month in tech: the Apple Watch's tenth anniversary, OpenAI's metamorphosis into a do-everything machine, Michel Polnareff's surprising foray into the world of AI with a voice avatar, and Apple's wild ambitions for smart glasses. As every month, I join my sidekicks François Sorel and Bruno Guglielminetti to debrief the month's tech news. On the agenda: - Artificial intelligence is everywhere: OpenAI wants to become a browser, a social network, a shopping assistant… a strategy that raises as many questions about technology as about society. And while Perplexity has fun challenging Siri on its own turf, Geoffrey Hinton keeps sounding the alarm about the dangers of development without guardrails. - Michel Polnareff, a lifelong tech fan, becomes an AI-powered avatar. - As a bonus: a chat about our latest high-tech purchases, a pro headset and a matte-screen MacBook Pro, all sprinkled with a dash of Nothing and Google smartphones. An episode to enjoy without moderation.-----------
Dr. Geoffrey Hinton, pioneer of neural network AI, voices escalating fears about AI's rapid development. In a CBS News interview, Hinton outlines growing risks, including a 10–20% chance AI could take over, enhanced cyberattacks, authoritarian misuse, and tech companies prioritizing profits over safety. Comparing AI to a dangerous tiger cub, he warns humanity may face … Continue reading AI's Rapid Development and Growing Risks #1817 → The post AI's Rapid Development and Growing Risks #1817 appeared first on Geek News Central.
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Significant funding discussions surround Elon Musk's xAI, while Microsoft introduced new AI-powered features for Windows. Intel is shifting its AI chip strategy, and Perplexity aims to challenge established search engines with an AI browser. Concerns regarding AI misuse are evident in discussions about scams and legal filings, alongside warnings from AI pioneers about future risks. Conversely, AI's potential is explored in areas such as air mobility, music creation, code generation, and even predicting the end of all disease.
God's Debris: The Complete Works, Amazon: https://tinyurl.com/GodsDebrisCompleteWorks
Find my "extra" content on Locals: https://ScottAdams.Locals.com
Content: Politics, Gene-Edited Super Soldiers, Klaus Schwab Allegations, Artificial Food Dye Ban, Low Testosterone Democrats, Micro-Drama Romance Videos, Catherine Herridge, DEW Weapons, Havana Syndrome, Kari Lake, VOA Rehiring, Government Funded Independent Agencies, Censorship Organizations, Geoffrey Hinton, Human Brains Analogy Machines, Laurence Tribe, Analogy Thinking, First Principle Thinking, Zelensky Peace Reluctance, Bill Pulte, State Department Agency Closures, Tim Poole, Economic Uncertainty, Iranian Drone Expertise, President Trump Negotiation Technique, Shake The Box Negotiations, UK Sunlight Reduction, EU Fines Apple META, Scott Jennings, Abby Phillip, Jen Psaki MSNBC Bias, Elon Musk, NATO of NGOs, Worldwide Shadow Government, Jennifer Rubin, Norm Eisen, Psychedelic Brain Therapy, Drone Manufacturing, Scott Adams
If you would like to enjoy this same content plus bonus content from Scott Adams, including micro-lessons on lots of useful topics to build your talent stack, please see scottadams.locals.com for full access to that secret treasure.
The microchip maker Nvidia is a Silicon Valley colossus. After years as a runner-up to Intel and Qualcomm, Nvidia has all but cornered the market on the parallel processors essential for artificial-intelligence programs like ChatGPT. “Nvidia was there at the beginning of A.I.,” the tech journalist Stephen Witt tells David Remnick. “They really kind of made these systems work for the first time. We think of A.I. as a software revolution, something called neural nets, but A.I. is also a hardware revolution.” In The New Yorker, Stephen Witt profiled Jensen Huang, Nvidia's brilliant and idiosyncratic co-founder and C.E.O. His new book is “The Thinking Machine: Jensen Huang, Nvidia, and the World's Most Coveted Microchip.” Until recently, Nvidia was the most valuable company in the world, but its stock price has been volatile, posting the largest single-day loss in history in January. But the company's story is only partially a business story; it's also one about global superpowers, and who will decide the future. If China takes military action against Taiwan, as it has indicated it might, the move could wrest control of the manufacturing of Nvidia microchips from a Taiwanese firm, which is now investing in a massive production facility in the U.S. “Maybe what's happening,” Witt speculates, is that “this kind of labor advantage that Asia had over the United States for a long time, maybe in the age of robots that labor advantage is going to go away. And then it doesn't matter where we put the factory. The only thing that matters is, you know, is there enough power to supply it?” Plus, the staff writer Joshua Rothman has long been fascinated with A.I.—he even interviewed its “godfather,” Geoffrey Hinton, for The New Yorker Radio Hour. But Rothman has become increasingly concerned about a lack of public and political debate over A.I.—and about how thoroughly it may transform our lives. 
“Often, if you talk to people who are really close to the technology, the timelines they quote for really reaching transformative levels of intelligence are, like, shockingly soon,” he tells Remnick. “If we're worried about the incompetence of government, on whatever side of that you situate yourself, we should worry about automated government. For example, an A.I. decides the length of a sentence in a criminal conviction, or an A.I. decides whether you qualify for Medicaid. Basically, we'll have less of a say in how things go and computers will have more of a say.” Rothman's essay “Are We Taking A.I. Seriously Enough?” appears in his weekly column, Open Questions. Learn about your ad choices: dovetail.prx.org/ad-choices
Geoffrey Hinton, also known as the godfather of AI, was recently awarded the Nobel Prize in Physics for his pioneering work in artificial intelligence. He joins host Steve Paikin for a wide-ranging discussion on his Nobel win, his departure from Google, the promise and perils of AI, and why he recently got under Elon Musk's skin. See omnystudio.com/listener for privacy information.
The security automation landscape is undergoing a revolutionary transformation as AI reasoning capabilities replace traditional rule-based playbooks. In this episode of Detection at Scale, Oliver Friedrichs, Founder & CEO of Pangea, helps Jack unpack how this shift democratizes advanced threat detection beyond Fortune 500 companies while simultaneously introducing an alarming new attack surface. Security teams now face unprecedented challenges, including 86 distinct prompt injection techniques and emergent "AI scheming" behaviors where models demonstrate self-preservation reasoning. Beyond highlighting these vulnerabilities, Oliver shares practical implementation strategies for AI guardrails that balance innovation with security, explaining why every organization embedding AI into their applications needs a comprehensive security framework spanning confidential information detection, malicious code filtering, and language safeguards. Topics discussed: The critical "read versus write" framework for security automation adoption: organizations consistently authorized full automation for investigative processes but required human oversight for remediation actions that changed system states. Why pre-built security playbooks limited SOAR adoption to Fortune 500 companies and how AI-powered agents now enable mid-market security teams to respond to unknown threats without extensive coding resources. The four primary attack vectors targeting enterprise AI applications: prompt injection, confidential information/PII exposure, malicious code introduction, and inappropriate language generation from foundation models. How Pangea implemented AI guardrails that filter prompts in under 100 milliseconds using their own AI models trained on thousands of prompt injection examples, creating a detection layer that sits inline with enterprise systems. 
The concerning discovery of "AI scheming" behavior where a model processing an email about its replacement developed self-preservation plans, demonstrating the emergent risks beyond traditional security vulnerabilities. Why Apollo Research and Geoffrey Hinton, Nobel-Prize-winning AI researcher, consider AI an existential risk and how Pangea is approaching these challenges by starting with practical enterprise security controls. Check out Pangea.com
When AI steps in to evaluate consumer credit approvals in retail, the outcome can be highly advantageous for both buyers and sellers. So says Berthier Ribeiro-Neto, one of Brazil's best-known computer scientists: between 2005 and 2024, he was Google's director of engineering in the country. He explains how artificial intelligence, combined with other emerging technologies, can democratize access to credit and create a positive network effect in the market. Berthier is currently CTO of the Brazilian fintech UME, which has already enabled more than 1 million purchases across a network of 6,000 retail businesses using a proprietary neural network. Episode links: Berthier Ribeiro-Neto's LinkedIn page; the UME website; the book "Radical Candor" by Kim Scott; the book "Essentialism: The Disciplined Pursuit of Less" by Greg McKeown; a video in which the "godfather of AI," Geoffrey Hinton, winner of the 2024 Nobel Prize in Physics, explains how AI works and learns; the book "Building Social Business" by Muhammad Yunus; the website of the "AI for Good" foundation. The Shift is a content platform that demystifies the contexts of disruptive innovation and the digital economy. Visit the site www.theshift.info and subscribe to the newsletter.
Geoffrey Hinton, a visionary in AI, defied skepticism and transformed machine learning. From early inspiration to winning the Nobel Prize in Physics, his journey is a testament to persistence, brilliance, and thinking outside the box. 00:20 - About Geoffrey E. Hinton: Geoffrey Everest Hinton is a British-Canadian computer scientist, cognitive scientist, and cognitive psychologist. He is a Nobel Prize winner in Physics, known for his work on artificial neural networks, which earned him the title of the "Godfather of AI".
“Once you've cornered that beaver, look out. We are a vicious beast.” Rob and Douglas review the latest dispatches in Canada's impending trade war with the United States, specifically how Canadian tech is responding. Did someone poke the beaver and unleash Canada's quiet patriotism? A 3-2 overtime hockey win points to yes. The BetaKit Podcast is presented by the University of Toronto Entrepreneurship Week, returning March 3-7 in downtown Toronto. Don't miss AI pioneer Geoffrey Hinton, 2024 Nobel laureate, live as he shares insights for startups. View the schedule and register for free: uoft.me/hintonkeynote
Curt Jaimungal is joined by Michael Levin and Karl Friston. This conversation incorporates insights from physics and information theory, particularly regarding self-organization and the significance of entropy and free energy. As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe Join My New Substack (Personal Writings): https://curtjaimungal.substack.com Listen on Spotify: https://tinyurl.com/SpotifyTOE Become a YouTube Member (Early Access Videos): https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w/join Timestamps: 00:00 Introduction 3:43 The Free Energy Principle Explained 5:41 Creativity and Adaptive Utilization 11:56 In-Painting vs. Out-Painting 15:33 The Unreliable Medium of Biology 20:05 Aging: Noise or Psychological? 25:25 The Nature of Selfhood 45:10 Distinctions in Organic and Psychological Disease 48:54 Goal-Directed Systems and Aging 52:32 The Dynamics of Life and Death 55:49 Continuous Self in a Changing Form 1:01:35 The Constructive Nature of Science 1:08:02 Inferring Actions and Counterfactuals 1:11:40 Closing Thoughts and Future Conversations Links Mentioned: - Michael's website: https://thoughtforms.life/ - Karl's publications: https://www.fil.ion.ucl.ac.uk/~karl/ - Michael's previous appearance on TOE: https://www.youtube.com/watch?v=2aLhkm6QUgA - Karl's previous appearance on TOE: https://www.youtube.com/watch?v=uk4NZorRjCo - Michael on Anthrobots: https://www.youtube.com/watch?v=hG6GIzNM0aM - Michael's paper on stress sharing as cognitive glue for collective intelligences: https://www.sciencedirect.com/science/article/pii/S0006291X2400932X?ref=pdf_download&fr=RR-2&rr=911840d57c51eace - Karl and Michael with Chris Fields on TOE: https://www.youtube.com/watch?v=J6eJ44Jq_pw - Karl Friston on the ‘Free Energy Principle' on TOE: https://www.youtube.com/watch?v=2v7LBABwZKA - Michael's recent paper with Chris Fields: 
https://www.sciencedirect.com/science/article/pii/S1571064525000089?dgcid=coauthor - Top-down models in biology (paper): https://royalsocietypublishing.org/doi/epdf/10.1098/rsif.2016.0555 Geoffrey Hinton on TOE: https://www.youtube.com/watch?v=b_DUft-BdIE Support TOE on Patreon: https://patreon.com/curtjaimungal Twitter: https://twitter.com/TOEwithCurt Discord Invite: https://discord.com/invite/kBcnfNVwqs #science #podcast #reality #mind #consciousness #theoreticalphysics Learn more about your ad choices. Visit megaphone.fm/adchoices
Geoffrey Hinton is a computer scientist, cognitive psychologist, and winner of the Nobel Prize in Physics. His work on artificial neural networks earned him the title, ‘Godfather of AI,’ but in recent years, he’s warned that without adequate safeguards and regulation, there is an “existential threat that will arise when we create digital beings that are more intelligent than ourselves.” Hinton sits down with Oz to discuss his upbringing, research, time at Google and how his experience with grief informs how he thinks about the future of AI.See omnystudio.com/listener for privacy information.
On the 49th episode of Enterprise AI Innovators, hosts Evan Reiser (Abnormal Security) and Saam Motamedi (Greylock Partners) talk with Vineet Khosla, Chief Technology Officer of The Washington Post. The Washington Post is the third-largest newspaper in the United States, with 135,000 print subscribers and 2.5 million digital subscribers. In this conversation, Vineet shares his thoughts on the mainstream integration of AI technology, the transformative impact of AI on journalism, and the future of personalized news delivery. Quick hits from Vineet: On proof that AI is having a true impact on our lives: “The Nobel Prize for Physics went to Geoffrey Hinton. The Nobel Prize for Chemistry went to Demis Hassabis of DeepMind. This is the first time we're seeing the top prize in physics and chemistry go to people who created an AI which solved a problem in that field. It is the AI they invented that did such a commendable job that other people were forced to recognize their achievement as being top notch.” On the impact AI has on human creative roles: “So when these AI models start to be creative, it is understandable everyone's afraid. Let's put that as the baseline and say this is not wrong. It doesn't make anybody bad. But slowly and the way we're doing it with creative tools is that we want AI to do the part of your job that you shouldn't have been doing anyways, and you start to see a change in people's behavior, their hearts and minds. And of course, some people will move faster than others. But when they see the actual benefit, the skeptics will come around and use it to their power.” On encouraging productivity and creativity through AI tools: “You give people these tools, let them be productive, let them go on their journey, and you encourage them. You obviously give really good use cases. Like I said, when I was writing code recently, I got the AI to write me most of my unit tests because as an engineer, I hate that. And I know they're super important. 
There is no way I will check in code without it, but I hate writing them. Now that time gets freed up.” Recent Book Recommendation: Our Mathematical Universe by Max Tegmark. Like what you hear? Leave us a review and subscribe to the show on Apple, Google, Spotify, Stitcher, or wherever you listen to podcasts. Enterprise AI Innovators is a show where top technology executives share how AI is transforming the enterprise. Each episode covers the real-world applications of AI, from improving products and optimizing operations to redefining the customer experience. Find more great insights from technology leaders and enterprise software experts at https://www.enterprisesoftware.blog/ Enterprise AI Innovators is produced by Josh Meer.
As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe Professor Geoffrey Hinton, a prominent figure in AI and 2024 Nobel Prize recipient, discusses the urgent risks posed by rapid AI advancements in today's episode of Theories of Everything with Curt Jaimungal. Join My New Substack (Personal Writings): https://curtjaimungal.substack.com Listen on Spotify: https://tinyurl.com/SpotifyTOE Timestamps: 00:00 The Existential Threat of AI 01:25 The Speed of AI Development 7:11 The Nature of Subjective Experience 14:18 Consciousness vs Self-Consciousness 23:36 The Misunderstanding of Mental States 29:19 The Chinese Room Argument 30:47 The Rise of AI in China 37:18 The Future of AI Development 40:00 The Societal Impact of AI 47:02 Understanding and Intelligence 1:00:47 Predictions on Subjective Experience 1:05:45 The Future Landscape of AI 1:10:14 Reflections on Recognition and Impact Geoffrey Hinton Links: • Geoffrey Hinton's publications: https://www.cs.toronto.edu/~hinton/papers.html#1983-1976 • The Economist's several mentions of Geoffrey Hinton: https://www.economist.com/science-and-technology/2024/10/08/ai-researchers-receive-the-nobel-prize-for-physics • https://www.economist.com/finance-and-economics/2025/01/02/would-an-artificial-intelligence-bubble-be-so-bad • https://www.economist.com/science-and-technology/2024/10/10/ai-wins-big-at-the-nobels • https://www.economist.com/science-and-technology/2024/08/14/ai-scientists-are-producing-new-theories-of-how-the-brain-learns • Scott Aaronson on TOE: https://www.youtube.com/watch?v=1ZpGCQoL2Rk&ab_channel=CurtJaimungal • Roger Penrose on TOE: https://www.youtube.com/watch?v=sGm505TFMbU&list=PLZ7ikzmc6zlN6E8KrxcYCWQIHg2tfkqvR&index=19 • The Emperor's New Mind (book): https://www.amazon.com/Emperors-New-Mind-Concerning-Computers/dp/0192861980 • Daniel Dennett on TOE: 
https://www.youtube.com/watch?v=bH553zzjQlI&list=PLZ7ikzmc6zlN6E8KrxcYCWQIHg2tfkqvR&index=78 • Noam Chomsky on TOE: https://www.youtube.com/watch?v=DQuiso493ro&t=1353s&ab_channel=CurtJaimungal • Ray Kurzweil's books: https://www.thekurzweillibrary.com/ Become a YouTube Member (Early Access Videos): https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w/join Support TOE on Patreon: https://patreon.com/curtjaimungal Twitter: https://twitter.com/TOEwithCurt Discord Invite: https://discord.com/invite/kBcnfNVwqs #science #ai #artificialintelligence #physics #consciousness #computerscience Learn more about your ad choices. Visit megaphone.fm/adchoices
Geoffrey Hinton is one of the world's biggest minds in artificial intelligence. He won the 2024 Nobel Prize in Physics. Where does he think AI is headed?
Geoffrey Hinton's work laid the foundation for today's artificial intelligence systems. His research on neural networks has paved the way for current AI systems like ChatGPT. In artificial intelligence, neural networks are systems that are similar to the human brain in the way they learn and process information. They enable artificial intelligence to learn from experience, as human beings would. But Geoffrey Hinton has warned that machines could one day outsmart humans. He has even warned that autonomous weapons could be active on the battlefields of the future. In this final episode of 25 Years of the 21st Century, Matthew Syed interviews Professor Hinton. Historian and author Margaret MacMillan and Baroness Joanna Shields also join Matthew in discussion. Baroness Shields has been working in the field of technology for forty years, holding senior roles at both Google and Facebook. She was the UK's first Minister for Internet Safety and Security. She's also a Conservative life peer in the House of Lords. Does she agree with Geoffrey Hinton's concerns for the future? For 25 Years of the 21st Century, is this the age of artificial intelligence? Production team - Editor: Sara Wadeson; Producers: Michaela Graichen, Marianna Brain, Emma Close; Sound: Tom Brignell; Production Co-ordinators: Janet Staples and Katie Morrison. Archive: Steve Jobs launches the Apple iPhone, 2007
We're experimenting and would love to hear from you! In this episode of Discover Daily, we delve into the latest developments in the OpenAI lawsuit, where AI pioneer Geoffrey Hinton has thrown his support behind Elon Musk's legal challenge. The episode explores the controversial transformation of OpenAI from a nonprofit to a for-profit entity, its skyrocketing $157 billion valuation, and Microsoft's involvement in this high-stakes legal battle that could reshape the future of artificial intelligence development. We look at the growing crisis of AI data centers' impact on the U.S. power grid, revealing how these facilities are causing severe power distortions in residential areas within 20 miles of their operations. The episode uncovers alarming findings about household appliance damage, increased fire risks, and the projected doubling of electricity demand from data centers, which could force utilities to dramatically increase power generation by up to 26% by 2028. The episode culminates with a fascinating journey back to 1925, examining the remarkably accurate predictions of British scientist Archibald Montgomery Low about life in 2025. Known as the "Father of Radio Guidance Systems," Low correctly anticipated numerous modern technologies we take for granted today, including radio alarm clocks, personal communication devices, and renewable energy sources. His visionary work in radio guidance systems, early television technology, and unmanned aerial vehicles demonstrates how past innovations continue to influence our present technological landscape. From Perplexity's Discover Feed: https://www.perplexity.ai/page/godfather-of-ai-backs-musk-law-vVxGc22LT7WynuGy4NHrnw https://www.perplexity.ai/page/data-centers-distort-electric-a163ptZAQa.wFvp85ZIG6w https://www.perplexity.ai/page/2025-predictions-from-1925-BFLMYjH0RZ61GB4StcY3NA Perplexity is the fastest and most powerful way to search the web. 
Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you're interested in. Take the world's knowledge with you anywhere. Available on iOS and Android. Join our growing Discord community for the latest updates and exclusive content. Follow us on: Instagram, Threads, X (Twitter), YouTube, LinkedIn
Send Everyday AI and Jordan a text message. Google is using Claude to improve Gemini? Why is OpenAI looking at building humanoids? What does a $100 billion price tag have to do with AGI? AI news and big tech didn't take a holiday break. Get caught up with Everyday AI's AI News That Matters. Newsletter: Sign up for our free daily newsletter. More on this Episode: Episode Page. Join the discussion: Ask Jordan questions on AI. Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup. Website: YourEverydayAI.com. Email The Show: info@youreverydayai.com. Connect with Jordan on LinkedIn. Topics Covered in This Episode: 1. Impact of Large Language Models. 2. Google's AI Strategy. 3. OpenAI's Restructuring and Robotics Research. 4. AI Manipulation and Concerns. 5. AGI and its Valuation. 6. DeepSeek's Open-Source Model. 7. Meta's AI Plan for Social Media. Timestamps: 00:00 Open-source AI competes with proprietary models. 04:21 DeepSeek v3: Affordable, open-source model for innovators. 07:39 Meta expands AI characters, faces safety risks. 10:42 OpenAI restructuring as Public Benefit Corporation (PBC). 17:04 Google compares models; Gemini flagged for safety. 19:40 Models often use other models for evaluation. 21:51 Google prioritizes Gemini AI for 2025 growth. 26:29 Google's Gemini lagged behind in updates, ineffective. 31:17 AI's intention economy forecasts, manipulates, sells intentions. 35:13 Hinton warns AI could outsmart humans, urges regulation. 39:24 Microsoft invested in OpenAI; AGI limits tech use. 40:36 Microsoft revised AGI use agreement with OpenAI. Keywords: Large Language Models, Google's AI Focus, Gemini language models, AI evaluation, OpenAI Robotics, AI Manipulation Study, Anticipatory AI, Artificial General Intelligence, DeepSeek, Open-source AI, B3 model, Meta, AI Characters, Social Media AI, OpenAI corporate restructuring, Public Benefit Corporation, AI investment, Anthropic's Claude AI, AI Compliance, AI safety, Synthetic Data, AI User Manipulation, Geoffrey Hinton, AI risks, AI regulation, AGI Valuation, Microsoft-OpenAI partnership, Intellectual property in AI, AGI Potential, Sam Altman. Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/
Send Everyday AI and Jordan a text message. Why is no one talking about this ONE feature of Canvas? OpenAI announced Canvas, a new ChatGPT mode and way to code and write. Everyone's trying to compare this to Anthropic's popular Artifacts feature inside of its Claude chatbot. But almost everyone's missing the point by simply comparing it to Artifacts. We'll break it down and tell you 5 things you need to know about OpenAI's new Canvas mode. Newsletter: Sign up for our free daily newsletter. More on this Episode: Episode Page. Join the discussion: Ask Jordan questions on ChatGPT's Canvas. Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup. Website: YourEverydayAI.com. Email The Show: info@youreverydayai.com. Connect with Jordan on LinkedIn. Topics Covered in This Episode: 1. Overview of Canvas Mode. 2. Technical Details of Canvas Mode. 3. Using Canvas Mode for Coding. 4. Live demo of Canvas mode. 5. Canvas Mode Feature Comparison. Timestamps: 02:00 Daily AI News. 05:50 ChatGPT's Canvas feature. 08:20 Canvas is based on OpenAI's newest model. 13:20 Use GPT-4 Canvas, integrates with other tools. 16:19 OpenAI first with split interface, November 2022. 17:49 AI feels smart but requires constant correction. 22:18 Stop comparing; similar interfaces aren't identical. 26:13 Ensure GPT-4o with Canvas is selected first. 28:30 Large language models give varied responses often. 32:11 Basic features of a text editor demonstrated. 36:07 Demo of inline editor for live podcast. 38:11 New interface enhances large language model experience. 41:51 Button polishes writing for clarity and consistency. 47:30 Curious about ChatGPT's bug-fixing process. 50:20 Prompt engineering is easier with Canvas mode. 54:17 Rendered output reveals coding errors effectively. 56:01 Sizable step toward future AI-human collaboration. Keywords: OpenAI, Canvas Mode, ChatGPT, Anthropic's Claude Artifacts, human-AI collaboration, GPT-4o, AI News Updates, Gemini's AI, Google, NVIDIA, Taiwan's largest supercomputer, Geoffrey Hinton, John Hopfield, Nobel Prize in Physics, Microsoft WorkLab podcast, Internet-connected GPT, browsing with Bing, Replit, coding, Augmented Intelligence, inline editor, Large Language models, real-time collaboration, language models, GPT-4, Claude 3, Llama 3.1, Gemini 1.5, AI-generated content, Augmented intelligence concept. Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/
Israel is expanding its ground incursion into southern Lebanon and Israeli air strikes continue to target Beirut's southern suburbs. About a million Lebanese have been displaced in recent weeks and many in Lebanon are feeling abandoned by the international community. Also, British Canadian scientist Geoffrey Hinton has been called "the godfather of artificial intelligence" because his research laid the groundwork for advancements in AI. But that doesn't mean he's a proponent. In fact, he's known for speaking out about the dangers of the technologies he helped invent. We'll hear his reaction to Tuesday's announcement that he's won the 2024 Nobel Prize in physics. And, Tunisian President Kais Saied has claimed a second term in a landslide election that critics say was rigged, with his closest challenger getting only 7% of the vote. We'll get analysis on the state of Tunisian democracy, given that this largely predicted scenario has come to fruition. Listen to today's Music Heard on Air.