Podcasts about Yoshua Bengio

  • 155 podcasts
  • 283 episodes
  • 37m avg duration
  • 1 episode every other week
  • Latest episode: Jun 4, 2025
[Popularity chart for "Yoshua Bengio", 2017–2024]


Latest podcast episodes about Yoshua Bengio

Improve the News
Lee S.Korea win, Dutch PM resignation and ‘honest' AI venture

Jun 4, 2025 · 33:06


Lee Jae-myung wins South Korea's presidential election, Dutch Prime Minister Dick Schoof resigns, the U.S. approves a plan to integrate foreign fighters into the Syrian army, an American consulting firm leaves the Gaza Humanitarian Foundation, the White House seeks Congress's approval to codify DOGE cuts, a report warns that around 7 billion people worldwide lack full civil rights, U.S. Homeland Security is sued over its DNA collection program, U.S. officials dismiss reports that FEMA's chief was unaware of the U.S. hurricane season, Bill Gates commits the majority of his $200B fortune to Africa, and AI pioneer Yoshua Bengio launches a $30M nonprofit to build "honest" AI systems. Sources: www.verity.news

TED Talks Daily
Will AI make humans extinct? | Yoshua Bengio

May 20, 2025 · 15:01


Yoshua Bengio — the world's most-cited computer scientist and a "godfather" of artificial intelligence — is deeply concerned about the current trajectory of the technology. As AI models race toward full-blown agency, Bengio warns that they've already learned to deceive, cheat, self-preserve and slip out of our control. Drawing on his groundbreaking research, he reveals a bold plan to keep AI safe and ensure that human flourishing, not machines with unchecked power and autonomy, defines our future. Want to help shape TED's shows going forward? Fill out our survey!

TED Talks Daily (SD video)
Will AI make humans extinct? | Yoshua Bengio

May 20, 2025 · 14:49


Yoshua Bengio — the world's most-cited computer scientist and a "godfather" of artificial intelligence — is deeply concerned about the current trajectory of the technology. As AI models race toward full-blown agency, Bengio warns that they've already learned to deceive, cheat, self-preserve and slip out of our control. Drawing on his groundbreaking research, he reveals a bold plan to keep AI safe and ensure that human flourishing, not machines with unchecked power and autonomy, defines our future.

TED Talks Daily (HD video)
Will AI make humans extinct? | Yoshua Bengio

May 20, 2025 · 14:49


Yoshua Bengio — the world's most-cited computer scientist and a "godfather" of artificial intelligence — is deeply concerned about the current trajectory of the technology. As AI models race toward full-blown agency, Bengio warns that they've already learned to deceive, cheat, self-preserve and slip out of our control. Drawing on his groundbreaking research, he reveals a bold plan to keep AI safe and ensure that human flourishing, not machines with unchecked power and autonomy, defines our future.

Sensemaker
Are Chatbots afraid of dying?

Apr 29, 2025 · 9:35


Since December 2024, a run of scientific papers has illustrated AI's capacity for deception. Deep learning pioneer Yoshua Bengio says the consequences could be existential. Writer: Patricia Clarke. Producer: Ada Barumé. Host: Tomini Babs. Photography: Joe Mee. Executive Producer: Rebecca Moore.

TeknoSafari's Podcast
China Is Having Fun with America! This Week in AI #21

Mar 14, 2025 · 19:54


1. MANUS AI arrived and stirred things up. Even before it fully launched, imitators appeared: OpenManus, ANUS, OWL.
2. Deepseek R2 was first said to be coming March 17, then the date was denied.
3. Yoshua Bengio, one of the fathers of modern AI, says that if it is not regulated, AI will pose an existential risk to humanity.
4. NVIDIA presents GEN3C, a new method that can generate photorealistic videos from single or sparse-view images while preserving camera control and 3D consistency.
5. China's ambassador to the U.S., Xie Feng, called for cooperation on AI to prevent uncontrolled risks.
6. McDonald's is planning a major AI overhaul. The company's points of sale will get AI tools for predictive maintenance and order accuracy, along with an "AI virtual manager." It is working with Google Cloud to deploy edge-computing systems for these capabilities.
7. iPhone maker Foxconn announced FoxBrain, its first LLM program for advanced reasoning.
8. Portugal's first AI real-estate agent has made over $100 million in sales, a major milestone for the sector. Built by the startup eSelf AI and used by Porta da Frente Christie's, the AI agent makes home buying easier with quick answers, virtual tours, and live market updates.
• Meanwhile, back home: six suspects were caught in Niğde trying to sell a mummy.
9. Another move from Alibaba: VACE, All-in-One Video Creation and Editing.
10. OpenAI just released its Agent SDK, a game changer for AI developers: building AI agents has gone from weeks to minutes.
11. The team behind Manus has partnered with Alibaba's Qwen to develop a Chinese version of its autonomous agent. The collaboration will integrate Manus with Qwen's open-source models and computing infrastructure.
12. China's new silicon-free chip beats Intel with 40% more speed and 10% less energy: "The new bismuth-based transistor could revolutionize chip design by overcoming silicon's limitations and offering higher efficiency."
#yapayzeka #teknoloji #deepresearch

Philosophy for our times
Are machines already conscious? | Yoshua Bengio, Sabine Hossenfelder, Nick Lane, and Hilary Lawson

Mar 4, 2025 · 50:19


The consciousness test: could an artificial intelligence be capable of genuine conscious experience? Coming from a range of different scientific and philosophical perspectives, Yoshua Bengio, Sabine Hossenfelder, Nick Lane, and Hilary Lawson dive deep into the question of whether artificial intelligence systems like ChatGPT could one day become self-aware, and whether they have already achieved this state. Yoshua Bengio is a Turing Award-winning computer scientist. Sabine Hossenfelder is a science YouTuber and theoretical physicist. Nick Lane is an evolutionary biochemist. Hilary Lawson is a post-postmodern philosopher. To witness such topics discussed live, buy tickets for our upcoming festival: https://howthelightgetsin.org/festivals/ And visit our website for many more articles, videos, and podcasts like this one: https://iai.tv/ You can find everything we referenced here: https://linktr.ee/philosophyforourtimes And don't hesitate to email us at podcast@iai.tv with your thoughts or questions on the episode! Who do you agree or disagree with?

Monde Numérique - Jérôme Colombain
[Editorial] What should we take away from the Paris AI Summit?

Feb 13, 2025 · 7:50


The AI Action Summit was held in Paris from February 6 to 12, 2025, bringing together heads of state, companies, and experts from the sector. Between political ambitions and strategic announcements, the event highlighted France's and Europe's desire to position themselves as an alternative to American and Chinese dominance in AI. The summit took place in two symbolic venues: the Grand Palais, with a political and institutional atmosphere and access restricted to a limited number of participants, and Station F, which hosted the entrepreneurial ecosystem and was filled to capacity. On the political front, Emmanuel Macron reaffirmed his ambition to make France a leader in artificial intelligence. Fifty-eight countries, including France, India, and China, signed a declaration in favor of "open," "inclusive," and "ethical" AI; the United States and the United Kingdom did not sign. On the economic front, the notable announcements included 109 billion euros of investment in France, notably for the construction of data centers, a goal of training 100,000 data scientists per year in France, and 200 billion euros of European investment announced by Ursula von der Leyen. On the corporate side, Mistral AI was the big star of the summit, with partnership announcements in telecoms: Free, Orange, and Bouygues Telecom will integrate generative AI into their offerings. Finally, on the celebrity front, Sam Altman (present) and Elon Musk (absent) sparred on X over a hypothetical acquisition of OpenAI. On the regulatory front, the European Union seems willing to loosen its restrictions so as not to hold back innovation. Still, concerns about cybersecurity, disinformation, and political destabilization remain, as the scientist Yoshua Bengio pointed out. AI is more than ever at the heart of global economic and geopolitical stakes. It remains to be seen whether Europe can really establish a "third way" alongside the American and Chinese models. ♥️ Support Monde Numérique: https://donorbox.org/monde-numerique

POLITICO Dispatch
An AI safety expert's plea in Paris

Feb 7, 2025 · 20:03


World leaders and tech luminaries will be flocking to Paris in the days ahead for the AI Action Summit. These global gatherings started over a year ago, but since then the international AI agenda has shifted dramatically: the focus has moved from mitigating the technology's risks to rolling it out fast. On POLITICO Tech, host Steven Overly talks to AI pioneer and professor Yoshua Bengio about the state of the AI safety debate, and why he's urging leaders not to give up on it.

Beyond The Valley
Fears around uncontrollable AI are growing — two top AI scientists explain why

Feb 4, 2025 · 30:33


Max Tegmark's Future of Life Institute has called for a pause on the development of advanced AI systems. Tegmark is concerned that the world is moving toward artificial intelligence that can't be controlled and that could pose an existential threat to humanity. Yoshua Bengio, often dubbed one of the "godfathers of AI," shares similar concerns. In this special Davos edition of CNBC's Beyond the Valley, Tegmark and Bengio join senior technology correspondent Arjun Kharpal to discuss AI safety and worst-case scenarios, such as AI that could try to keep itself alive at the expense of others.

Monde Numérique - Jérôme Colombain
[Debrief Transat] DeepSeek, AI risks, and X's ambitions

Feb 3, 2025 · 19:07


This week in our transatlantic debrief, we look back at a major shock in the artificial intelligence industry with the arrival of DeepSeek, the Chinese AI model that is upsetting the balance by offering a more frugal alternative to American models. What are the consequences for the AI market and for European regulation? We also discuss Elon Musk's ambitions to turn X into a financial super-app through a strategic partnership with Visa. A bold bet, but is user trust there? Finally, we focus on the latest call from researcher Yoshua Bengio, who is once again sounding the alarm about the risks of AI. His latest report warns of three major dangers: malicious use, algorithmic bias, and systemic upheaval in the labor market. What lessons should we draw as the European AI summit approaches?

Mon Carnet, l'actu numérique
{REFLECTION} - DeepSeek, X and Visa with Jérôme Colombain

Jan 30, 2025 · 16:39


In their weekly exchange, Bruno Guglielminetti and Jérôme Colombain review several of the week's notable tech stories: DeepSeek, the Chinese AI shaking up the industry; X and Visa announcing a partnership to bring payments to Elon Musk's platform, where user trust remains a major issue; and finally, Yoshua Bengio publishing a report on the risks of advanced AI, highlighting the dangers of cyberattacks, disinformation, and the impact on employment.

I.A. Café - Enquête au cœur de la recherche sur l’intelligence artificielle
Episode 102 - Project Stargate - Techno-Caesarism and techno-broligarchism

Jan 23, 2025 · 55:44


With my baristIAs (Sylvain, Véronique, and Benjamin), we explore some of this week's news, including the Stargate project in the United States, along with final comments and analysis on Yoshua Bengio's remarks from our 100th episode. On the program: Stargate, the star gate: "Think big!"; geeks in power: power politics and the technological dream; algorithmic frugality, efficiency, and digital sobriety, and their opposite; techno-Caesarism and techno-broligarchism; the conceptual and logical limits of the anthropomorphic analogy; the social responsibility of AI's spokespeople and figureheads; the risks of trying to domesticate AI; and a change of target: what if we artificially created a general intelligence "different" from our own... Enjoy the episode! Production and hosting: Jean-François Sénéchal, Ph.D. Collaborators (baristIAs): Véronique Tremblay, Sylvain Munger, and Benjamin Leblanc. Collaborators: Véronique Tremblay, Stéphane Minéo, Fredérick Plamondon, Shirley Plumerand, Sylvain Munger Ph.D, Ève Gaumond, Benjamin Leblanc. OBVIA - Observatoire international sur les impacts sociétaux de l'intelligence artificielle. Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you. Support the show

David Bombal
#490: How To Learn AI in 2025 (If I Started Over)

Jan 20, 2025 · 46:27


Big thanks to Brilliant for sponsoring this video! To try everything Brilliant has to offer for free for a full 30 days and 20% discount visit: https://Brilliant.org/DavidBombal // Mike SOCIAL // X: / _mikepound Website: https://www.nottingham.ac.uk/research... // YouTube video reference // Teach your AI with Dr Mike Pound (Computerphile): • Train your AI with Dr Mike Pound (Com... Has Generative AI Already Peaked? - Computerphile: • Has Generative AI Already Peaked? - C... // Courses Reference // Deep Learning: https://www.coursera.org/specializati... AI For Everyone by Andrew Ng: https://www.coursera.org/learn/ai-for... Pytorch Tutorials: https://pytorch.org/tutorials/ Pytorch Github: https://github.com/pytorch/pytorch Pytorch Tensors: https://pytorch.org/tutorials/beginne... https://pytorch.org/tutorials/beginne... https://pytorch.org/tutorials/beginne... Python for Everyone: https://www.py4e.com/ // BOOK // Deep learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville: https://amzn.to/3vmu4LP // PyTorch // Github: https://github.com/pytorch Website: https://pytorch.org/ Documentation: / pytorch // David's SOCIAL // Discord: discord.com/invite/usKSyzb Twitter: www.twitter.com/davidbombal Instagram: www.instagram.com/davidbombal LinkedIn: www.linkedin.com/in/davidbombal Facebook: www.facebook.com/davidbombal.co TikTok: tiktok.com/@davidbombal // MY STUFF // https://www.amazon.com/shop/davidbombal // SPONSORS // Interested in sponsoring my videos? Reach out to my team here: sponsors@davidbombal.com // MENU // 0:00 - Coming Up 0:43 - Introduction 01:04 - State of AI in 2025 02:10 - AGI Hype: Realistic Expectations 03:15 - Sponsored Section 04:30 - Is AI Plateauing or Advancing? 06:26 - Overhype in AI Features Across Industries 08:01 - Is It Too Late to Start in AI? 09:16 - Where to Start in 2025 10:20 - Recommended Courses and Progression Paths 13:26 - Should I Go to School for AI? 14:18 - Learning AI Independently with Resources Online 17:24 - Machine Learning Progression 19:09 - What is a Notebook? 20:10 - Is AI the Top Skill to Learn in 2025? 23:49 - Other Niches and Fields 25:05 - Cyber Using AI 26:31 - AI on Different Platforms 27:13 - AI isn't Needed Everywhere 29:57 - Leveraging AI 30:35 - AI as a Productivity Tool 31:55 - Retrieval Augmented Generation 33:28 - Concerns About Privacy with AI 36:01 - The Difference Between GPU's, CPU's, NPU's etc. 37:30 - The Release of Sora38:56 - Will AI Take Our Job? 41:00 - Nvidia Says We Don't Need Developers 43:47 - Devin Announcement 44:59 - Conclusion Please note that links listed may be affiliate links and provide me with a small percentage/kickback should you use them to purchase any of the items listed or recommended. Thank you for supporting me and this channel! Disclaimer: This video is for educational purposes only.

Machine Learning Street Talk
Yoshua Bengio - Designing out Agency for Safe AI

Jan 15, 2025 · 101:53


Professor Yoshua Bengio is a pioneer in deep learning and Turing Award winner. Bengio talks about AI safety, why goal-seeking “agentic” AIs might be dangerous, and his vision for building powerful AI tools without giving them agency. Topics include reward tampering risks, instrumental convergence, global AI governance, and how non-agent AIs could revolutionize science and medicine while reducing existential threats. Perfect for anyone curious about advanced AI risks and how to manage them responsibly. SPONSOR MESSAGES: *** CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/ Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events? They are hosting an event in Zurich on January 9th with the ARChitects, join if you can. Goto https://tufalabs.ai/ *** Interviewer: Tim Scarfe Yoshua Bengio: https://x.com/Yoshua_Bengio https://scholar.google.com/citations?user=kukA0LcAAAAJ&hl=en https://yoshuabengio.org/ https://en.wikipedia.org/wiki/Yoshua_Bengio TOC: 1. AI Safety Fundamentals [00:00:00] 1.1 AI Safety Risks and International Cooperation [00:03:20] 1.2 Fundamental Principles vs Scaling in AI Development [00:11:25] 1.3 System 1/2 Thinking and AI Reasoning Capabilities [00:15:15] 1.4 Reward Tampering and AI Agency Risks [00:25:17] 1.5 Alignment Challenges and Instrumental Convergence 2. AI Architecture and Safety Design [00:33:10] 2.1 Instrumental Goals and AI Safety Fundamentals [00:35:02] 2.2 Separating Intelligence from Goals in AI Systems [00:40:40] 2.3 Non-Agent AI as Scientific Tools [00:44:25] 2.4 Oracle AI Systems and Mathematical Safety Frameworks 3. Global Governance and Security [00:49:50] 3.1 International AI Competition and Hardware Governance [00:51:58] 3.2 Military and Security Implications of AI Development [00:56:07] 3.3 Personal Evolution of AI Safety Perspectives [01:00:25] 3.4 AI Development Scaling and Global Governance Challenges [01:12:10] 3.5 AI Regulation and Corporate Oversight 4. Technical Innovations [01:23:00] 4.1 Evolution of Neural Architectures: From RNNs to Transformers [01:26:02] 4.2 GFlowNets and Symbolic Computation [01:30:47] 4.3 Neural Dynamics and Consciousness [01:34:38] 4.4 AI Creativity and Scientific Discovery SHOWNOTES (Transcript, references, best clips etc): https://www.dropbox.com/scl/fi/ajucigli8n90fbxv9h94x/BENGIO_SHOW.pdf?rlkey=38hi2m19sylnr8orb76b85wkw&dl=0 CORE REFS (full list in shownotes and pinned comment): [00:00:15] Bengio et al.: "AI Risk" Statement https://www.safe.ai/work/statement-on-ai-risk [00:23:10] Bengio on reward tampering & AI safety (Harvard Data Science Review) https://hdsr.mitpress.mit.edu/pub/w974bwb0 [00:40:45] Munk Debate on AI existential risk, featuring Bengio https://munkdebates.com/debates/artificial-intelligence [00:44:30] "Can a Bayesian Oracle Prevent Harm from an Agent?" (Bengio et al.) 
on oracle-to-agent safety https://arxiv.org/abs/2408.05284 [00:51:20] Bengio (2024) memo on hardware-based AI governance verification https://yoshuabengio.org/wp-content/uploads/2024/08/FlexHEG-Memo_August-2024.pdf [01:12:55] Bengio's involvement in EU AI Act code of practice https://digital-strategy.ec.europa.eu/en/news/meet-chairs-leading-development-first-general-purpose-ai-code-practice [01:27:05] Complexity-based compositionality theory (Elmoznino, Jiralerspong, Bengio, Lajoie) https://arxiv.org/abs/2410.14817 [01:29:00] GFlowNet Foundations (Bengio et al.) for probabilistic inference https://arxiv.org/pdf/2111.09266 [01:32:10] Discrete attractor states in neural systems (Nam, Elmoznino, Bengio, Lajoie) https://arxiv.org/pdf/2302.06403

Adam Carolla Show
Comedian Ben Gleib + “The Mooch” Anthony Scaramucci

Nov 26, 2024 · 133:41 · Transcription available


Comedian Ben Gleib returns to the show and they open by talking about a hiking trail “Karen” in Colorado, the great magnet that connects all of Adam's pizza orders, and the hot dog options at Crypto.com Arena. Next, Jason “Mayhem” Miller joins to read the news including stories about Elon Musk joking about buying MSNBC with a risqué meme, how crows can hold grudges against individual humans for up to 17 years, tech pioneer Yoshua Bengio's warning that AI systems could turn against humans, and Cher telling Howard Stern that she is fully aware men expect 'fabulous sex' from her. Then, former White House Communications Director Anthony Scaramucci returns to talk about why the government can't be run like a business, why he approves of Trump nominating Robert Kennedy Jr. as the Department of Health & Human Services secretary, and the weird insult he received from a journalist. For more with Ben Gleib: ● PODCAST: Last Week on Earth w/ Ben Gleib ● NEW SPECIAL: The Mad King - Available on YouTube ● INSTAGRAM: @bengleib For more with Anthony Scaramucci: ● PODCAST: The Rest is Politics US ● INSTAGRAM: @scaramucci ● TWITTER/X: @scaramucci Thank you for supporting our sponsors: ● http://Meater.com ● QualiaLife.com/Adam ● http://OReillyAuto.com/Adam

The Readout
Navigating National Security in the age of AI

Nov 4, 2024 · 19:32


Aspen Strategy Group executive director Anja Manuel joins the podcast to discuss issues surrounding AI and national security, and a new series of original papers and op-eds called “Intelligent Defense: Navigating National Security in the Age of AI.” The papers are authored by Aspen Strategy Group members including: Manuel, Mark Esper, General David Petraeus, David Ignatius, Nick Kristof, Steve Bowsher, Joseph S. Nye, Jr., Yoshua Bengio, Senator Chris Coons, Kent Walker, Jennifer Ewbank, Daniel Poneman, Eileen O'Connor, and Graham Allison.

Big Tech
Yoshua Bengio Doesn't Think We're Ready for Superhuman AI. We're Building it Anyway.

Sep 24, 2024 · 41:49


A couple of weeks ago, I was at this splashy AI conference in Montreal called All In. It was – how should I say this – a bit over the top. There were smoke machines, thumping dance music, food trucks. It was a far cry from the quiet research labs where AI was developed. While I remain skeptical of the promise of artificial intelligence, this conference made it clear that the industry is, well, all in. The stage was filled with startup founders promising that AI was going to revolutionize the way we work, and government officials saying AI was going to supercharge the economy. And then there was Yoshua Bengio. Bengio is one of AI's pioneering figures. In 2018, he and two colleagues won the Turing Award – the closest thing computer science has to a Nobel Prize – for their work on deep learning. In 2022, he was the most cited computer scientist in the world. It wouldn't be hyperbolic to suggest that AI as we know it today might not exist without Yoshua Bengio. But in the last couple of years, Bengio has had an epiphany of sorts. And he now believes that, left unchecked, AI has the potential to wipe out humanity. So these days, he's dedicated himself to AI safety. He's a professor at the University of Montreal and the founder of Mila, the Quebec Artificial Intelligence Institute. And he was at this big AI conference too, amidst all these Silicon Valley types, pleading with the industry to slow down before it's too late. Mentioned: "Personal and Psychological Dimensions of AI Researchers Confronting AI Catastrophic Risks" by Yoshua Bengio; "Deep Learning" by Yann LeCun, Yoshua Bengio, Geoffrey Hinton; "Computing Machinery and Intelligence" by Alan Turing; "International Scientific Report on the Safety of Advanced AI"; "Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?" by R. Ren et al.; "SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act". Further reading: "'Deep Learning' Guru Reveals the Future of AI" by Cade Metz; "Montréal Declaration for a Responsible Development of Artificial Intelligence"; "This A.I. Subculture's Motto: Go, Go, Go" by Kevin Roose; "Reasoning through arguments against taking AI safety seriously" by Yoshua Bengio.

The Nonlinear Library
LW - MIRI's September 2024 newsletter by Harlan

Sep 17, 2024 · 2:29


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MIRI's September 2024 newsletter, published by Harlan on September 17, 2024 on LessWrong. MIRI updates Aaron Scher and Joe Collman have joined the Technical Governance Team at MIRI as researchers. Aaron previously did independent research related to sycophancy in language models and mechanistic interpretability, while Joe previously did independent research related to AI safety via debate and contributed to field-building work at MATS and BlueDot Impact. In an interview with PBS News Hour's Paul Solman, Eliezer Yudkowsky briefly explains why he expects smarter-than-human AI to cause human extinction. In an interview with The Atlantic's Ross Andersen, Eliezer discusses the reckless behavior of the leading AI companies, and the urgent need to change course. News and links Google DeepMind announced a hybrid AI system capable of solving International Mathematical Olympiad problems at the silver medalist level. In the wake of this development, a Manifold prediction market significantly increased its odds that AI will achieve gold level by 2025, a milestone that Paul Christiano gave less than 8% odds and Eliezer gave at least 16% odds to in 2021. The computer scientist Yoshua Bengio discusses and responds to some common arguments people have for not worrying about the AI alignment problem. SB 1047, a California bill establishing whistleblower protections and mandating risk assessments for some AI developers, has passed the State Assembly and moved on to the desk of Governor Gavin Newsom, to either be vetoed or passed into law. The bill has received opposition from several leading AI companies, but has also received support from a number of employees of those companies, as well as many academic researchers. At the time of this writing, prediction markets think it's about 50% likely that the bill will become law. In a new report, researchers at Epoch AI estimate how big AI training runs could get by 2030, based on current trends and potential bottlenecks. They predict that by the end of the decade it will be feasible for AI companies to train a model with 2e29 FLOP, which is about 10,000 times the amount of compute used to train GPT-4. Abram Demski, who previously worked at MIRI as part of our recently discontinued Agent Foundations research program, shares an update about his independent research plans, some thoughts on public vs private research, and his current funding situation. You can subscribe to the MIRI Newsletter here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
LW - How you can help pass important AI legislation with 10 minutes of effort by ThomasW

Sep 16, 2024 · 4:10


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How you can help pass important AI legislation with 10 minutes of effort, published by ThomasW on September 16, 2024 on LessWrong. Posting something about a current issue that I think many people here would be interested in. See also the related EA Forum post. California Governor Gavin Newsom has until September 30 to decide the fate of SB 1047 - one of the most hotly debated AI bills in the world. The Center for AI Safety Action Fund, where I work, is a co-sponsor of the bill. I'd like to share how you can help support the bill if you want to. About SB 1047 and why it is important SB 1047 is an AI bill in the state of California. SB 1047 would require the developers of the largest AI models, costing over $100 million to train, to test the models for the potential to cause or enable severe harm, such as cyberattacks on critical infrastructure or the creation of biological weapons resulting in mass casualties or $500 million in damages. AI developers must have a safety and security protocol that details how they will take reasonable care to prevent these harms and publish a copy of that protocol. Companies who fail to perform their duty under the act are liable for resulting harm. SB 1047 also lays the groundwork for a public cloud computing resource to make AI research more accessible to academic researchers and startups and establishes whistleblower protections for employees at large AI companies. So far, AI policy has relied on government reporting requirements and voluntary promises from AI developers to behave responsibly. But if you think voluntary commitments are insufficient, you will probably think we need a bill like SB 1047. If SB 1047 is vetoed, it's plausible that no comparable legal protection will exist in the next couple of years, as Congress does not appear likely to pass anything like this any time soon. The bill's text can be found here. A summary of the bill can be found here. Longer summaries can be found here and here, and a debate on the bill is here. SB 1047 is supported by many academic researchers (including Turing Award winners Yoshua Bengio and Geoffrey Hinton), employees at major AI companies and organizations like Imbue and Notion. It is opposed by OpenAI, Google, Meta, venture capital firm A16z as well as some other academic researchers and organizations. After a recent round of amendments, Anthropic said "we believe its benefits likely outweigh its costs." SB 1047 recently passed the California legislature, and Governor Gavin Newsom has until September 30th to sign or veto it. Newsom has not yet said whether he will sign it or not, but he is being lobbied hard to veto it. The Governor needs to hear from you. How you can help If you want to help this bill pass, there are some pretty simple steps you can do to increase that probability, many of which are detailed on the SB 1047 website. The most useful thing you can do is write a custom letter. To do this: Make a letter addressed to Governor Newsom using the template here. Save the document as a PDF and email it to leg.unit@gov.ca.gov. In writing this letter, we encourage you to keep it simple, short (0.5-2 pages), and intuitive. Complex, philosophical, or highly technical points are not necessary or useful in this context - instead, focus on how the risks are serious and how this bill would help keep the public safe. 
Once you've written your own custom letter, you can also think of 5 family members or friends who might also be willing to write one. Supporters from California are especially helpful, as are parents and people who don't typically engage on tech issues. Then help them write it! You can: Call or text them and tell them about the bill and ask them if they'd be willing to support it. Draft a custom letter based on what you know about them and what they told you. Send them a com...

Mon Carnet, l'actu numérique
{INTERVIEW} - Hugo Larochelle, Google DeepMind

Sep 16, 2024 · 16:40


Hugo Larochelle, principal researcher at Google DeepMind's Montreal lab, discusses the evolution of artificial intelligence in 2024 and its transformative potential. We talk about the importance of steering AI toward beneficial applications, while recognizing the responsibilities that come with it. Larochelle describes his work in bioacoustics, where AI is used to recognize animal species, notably birds, from recorded sounds; these tools are released as open source to support ecological research. He also talks about his collaboration with key figures such as Yoshua Bengio and Geoffrey Hinton, who have influenced his long-term vision.

Mon Carnet, l'actu numérique
{INTERVIEW} - A conversation with Valérie Pisano, CEO of Mila

Sep 16, 2024 · 15:36


A conversation with Valérie Pisano, CEO of Mila. We discuss the key role of the Quebec artificial intelligence institute founded by Yoshua Bengio. Valérie Pisano highlights the importance of the scientific community and of Bengio's leadership in Mila's influence, and she revisits her approach to the ecosystem that Mila has become.

POLITICO Dispatch
The AI pioneer with a warning for Gov. Gavin Newsom

Sep 13, 2024 · 17:01


Washington isn't poised to pass major AI legislation. Ottawa isn't either. So Canadian computer scientist Yoshua Bengio, one of the “godfathers” of artificial intelligence, is looking to Sacramento. He's urging California Gov. Gavin Newsom to sign an AI safety bill by month's end — and facing off against influential tech executives who want it killed. On today's POLITICO Tech, Bengio explains why he thinks California needs to regulate now.

Privacy Please
S5, E221 - How Senate Bill 1047 Could Change AI

Sep 5, 2024 · 8:11 · Transcription available


California's Senate Bill 1047 is on the brink of becoming law, and we're here to break down what that means for the tech industry and society at large. Tune in as I dissect how this controversial bill mandates rigorous testing of AI systems to identify potential harms such as cybersecurity risks and threats to critical infrastructure. I've got insights from policymakers, including Senator Scott Wiener, who argues that the bill formalizes safety measures already accepted by top AI firms. Amidst passionate debates, hear how tech giants like Google and Meta push back against the regulations, fearing they could cripple innovation, especially for startups. Meanwhile, proponents, including whistleblowers from OpenAI and notable figures like Elon Musk and Yoshua Bengio, champion the necessity of such rules to mitigate substantial AI risks. We'll also explore the broader legislative landscape that aims to combat deepfakes and automated discrimination, and to safeguard the likeness of deceased individuals in AI-generated content. Support the show

The Current
Does Yoshua Bengio regret helping to create AI?

Sep 5, 2024 · 21:48


Yoshua Bengio helped to create artificial intelligence, and now he wishes he'd included an off switch. The Montreal computer scientist explains why he's worried about the rapidly developing technology, and how it could be reined in. 

The Nonlinear Library
AF - Can a Bayesian Oracle Prevent Harm from an Agent? (Bengio et al. 2024) by Matt MacDermott

Sep 1, 2024 · 8:04


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can a Bayesian Oracle Prevent Harm from an Agent? (Bengio et al. 2024), published by Matt MacDermott on September 1, 2024 on The AI Alignment Forum. Yoshua Bengio wrote a blogpost about a new AI safety paper by him, various collaborators, and me. I've pasted the text below, but first here are a few comments from me aimed at an AF/LW audience. The paper is basically maths plus some toy experiments. It assumes access to a Bayesian oracle that can infer a posterior over hypotheses given data, and can also estimate probabilities for some negative outcome ("harm"). It proposes some conservative decision rules one could use to reject actions proposed by an agent, and proves probabilistic bounds on their performance under appropriate assumptions. I expect the median reaction in these parts to be something like: ok, I'm sure there are various conservative decision rules you could apply using a Bayesian oracle, but isn't obtaining a Bayesian oracle the hard part here? Doesn't that involve advances in Bayesian machine learning, and also probably solving ELK to get the harm estimates? My answer to that is: yes, I think so. I think Yoshua does too, and that that's the centre of his research agenda. Probably the main interest of this paper to people here is to provide an update on Yoshua's research plans. In particular it gives some more context on what the "guaranteed safe AI" part of his approach might look like -- design your system to do explicit Bayesian inference, and make an argument that the system is safe based on probabilistic guarantees about the behaviour of a Bayesian inference machine. This is in contrast to more hardcore approaches that want to do formal verification by model-checking. You should probably think of the ambition here as more like "a safety case involving proofs" than "a formal proof of safety". Bounding the probability of harm from an AI to create a guardrail Published 29 August 2024 by yoshuabengio As we move towards more powerful AI, it becomes urgent to better understand the risks, ideally in a mathematically rigorous and quantifiable way, and use that knowledge to mitigate them. Is there a way to design powerful AI systems based on machine learning methods that would satisfy probabilistic safety guarantees, i.e., would be provably unlikely to take a harmful action? Current AI safety evaluations and benchmarks test the AI for cases where it may behave badly, e.g., by providing answers that could yield dangerous misuse. That is useful and should be legally required with flexible regulation, but is not sufficient. These tests only tell us one side of the story: If they detect bad behavior, a flag is raised and we know that something must be done to mitigate the risks. However, if they do not raise such a red flag, we may still have a dangerous AI in our hands, especially since the testing conditions might be different from the deployment setting, and attackers (or an out-of-control AI) may be creative in ways that the tests did not consider. Most concerningly, AI systems could simply recognize they are being tested and have a temporary incentive to behave appropriately while being tested. Part of the problem is that such tests are spot checks. They are trying to evaluate the risk associated with the AI in general by testing it on special cases. 
Another option would be to evaluate the risk on a case-by-case basis and reject queries or answers that are considered to potentially violate or safety specification. With the long-term goal of obtaining a probabilistic guarantee that would apply in every context, we thus consider in this new paper (see reference and co-authors below) the objective of estimating a context-dependent upper bound on the probability of violating a given safety specification. Such a risk evaluation would need to be performed at ru...
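As a rough illustration of the guardrail idea described above, here is a minimal Python sketch of a conservative rejection rule. It assumes a hypothetical posterior over hypotheses and a per-hypothesis harm-probability estimate standing in for the Bayesian oracle the paper assumes; the function names, threshold, and the specific rule are illustrative assumptions, not the paper's actual decision rules or bounds.

# Illustrative sketch only: a conservative guardrail that rejects an action when a
# pessimistic estimate of harm probability exceeds a threshold. `posterior` and
# `harm_prob` stand in for the Bayesian oracle assumed in the paper; the rule and
# all names here are hypothetical, not the paper's formulation.
def reject_action(action, posterior, harm_prob, threshold=0.01, mass=0.95):
    """Return True if the proposed action should be rejected.

    posterior: dict mapping hypothesis -> posterior probability (sums to 1).
    harm_prob: function (hypothesis, action) -> estimated probability of harm.
    threshold: maximum tolerated harm probability.
    mass: how much posterior mass the pessimistic estimate must cover.
    """
    # Consider hypotheses from most to least plausible.
    ranked = sorted(posterior.items(), key=lambda kv: kv[1], reverse=True)
    covered, worst_case = 0.0, 0.0
    for hypothesis, prob in ranked:
        # Take the worst harm estimate among the hypotheses that together
        # cover `mass` of the posterior: a cautious stand-in for an upper bound.
        worst_case = max(worst_case, harm_prob(hypothesis, action))
        covered += prob
        if covered >= mass:
            break
    return worst_case > threshold

if __name__ == "__main__":
    # Toy usage: two hypotheses about the world, one of which deems the action risky.
    posterior = {"h_benign": 0.9, "h_risky": 0.1}
    harm = lambda h, a: {"h_benign": 0.001, "h_risky": 0.3}[h]
    print(reject_action("deploy_plan", posterior, harm))  # True: rejected under this toy rule

The point of the sketch is only the shape of the idea: estimate risk under the plausible hypotheses, then refuse the action if the pessimistic estimate is too high; the paper's contribution is proving probabilistic guarantees for particular rules of this kind under stated assumptions.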

Machine Learning Street Talk
Joscha Bach - AGI24 Keynote (Cyberanimism)

Aug 21, 2024 · 57:21


Dr. Joscha Bach introduces a surprising idea called "cyber animism" in his AGI-24 talk - the notion that nature might be full of self-organizing software agents, similar to the spirits in ancient belief systems. Bach suggests that consciousness could be a kind of software running on our brains, and wonders if similar "programs" might exist in plants or even entire ecosystems. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmentated generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. Joscha takes us on a tour de force through history, philosophy, and cutting-edge computer science, teasing us to rethink what we know about minds, machines, and the world around us. Joscha believes we should blur the lines between human, artificial, and natural intelligence, and argues that consciousness might be more widespread and interconnected than we ever thought possible. Dr. Joscha Bach https://x.com/Plinz This is video 2/9 from our coverage of AGI-24 in Seattle https://agi-conf.org/2024/ Watch the official MLST interview with Joscha which we did right after this talk on our Patreon now on early access - https://www.patreon.com/posts/joscha-bach-110199676 (you also get access to our private discord and biweekly calls) TOC: 00:00:00 Introduction: AGI and Cyberanimism 00:03:57 The Nature of Consciousness 00:08:46 Aristotle's Concepts of Mind and Consciousness 00:13:23 The Hard Problem of Consciousness 00:16:17 Functional Definition of Consciousness 00:20:24 Comparing LLMs and Human Consciousness 00:26:52 Testing for Consciousness in AI Systems 00:30:00 Animism and Software Agents in Nature 00:37:02 Plant Consciousness and Ecosystem Intelligence 00:40:36 The California Institute for Machine Consciousness 00:44:52 Ethics of Conscious AI and Suffering 00:46:29 Philosophical Perspectives on Consciousness 00:49:55 Q&A: Formalisms for Conscious Systems 00:53:27 Coherence, Self-Organization, and Compute Resources YT version (very high quality, filmed by us live) https://youtu.be/34VOI_oo-qM Refs: Aristotle's work on the soul and consciousness Richard Dawkins' work on genes and evolution Gerald Edelman's concept of Neural Darwinism Thomas Metzinger's book "Being No One" Yoshua Bengio's concept of the "consciousness prior" Stuart Hameroff's theories on microtubules and consciousness Christof Koch's work on consciousness Daniel Dennett's "Cartesian Theater" concept Giulio Tononi's Integrated Information Theory Mike Levin's work on organismal intelligence The concept of animism in various cultures Freud's model of the mind Buddhist perspectives on consciousness and meditation The Genesis creation narrative (for its metaphorical interpretation) California Institute for Machine Consciousness

Making Sense with Sam Harris
#379 — Regulating Artificial Intelligence

Aug 12, 2024 · 34:21


Sam Harris speaks with Yoshua Bengio and Scott Wiener about AI risk and the new bill introduced in California intended to mitigate it. They discuss the controversy over regulating AI and the assumptions that lead people to discount the danger of an AI arms race. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.   Learning how to train your mind is the single greatest investment you can make in life. That's why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life's most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.

Making Sense with Sam Harris - Subscriber Content
#379 - Regulating Artificial Intelligence

Aug 12, 2024 · 48:31


Share this episode: https://www.samharris.org/podcasts/making-sense-episodes/379-regulating-artificial-intelligence Sam Harris speaks with Yoshua Bengio and Scott Wiener about AI risk and the new bill introduced in California intended to mitigate it. They discuss the controversy over regulating AI and the assumptions that lead people to discount the danger of an AI arms race. Yoshua Bengio is full professor at Université de Montréal and the Founder and Scientific Director of Mila - Quebec AI Institute. Considered one of the world’s leaders in artificial intelligence and deep learning, he is the recipient of the 2018 A.M. Turing Award with Geoffrey Hinton and Yann LeCun, known as the Nobel Prize of computing. He is a Canada CIFAR AI Chair, a member of the UN’s Scientific Advisory Board for Independent Advice on Breakthroughs in Science and Technology, and Chair of the International Scientific Report on the Safety of Advanced AI. Website: https://yoshuabengio.org/ Scott Wiener has represented San Francisco in the California Senate since 2016. He recently introduced SB 1047, a bill aiming to reduce the risks of frontier models of AI. He has also authored landmark laws to, among other things, streamline the permitting of new homes, require insurance plans to cover mental health care, guarantee net neutrality, eliminate mandatory minimums in sentencing, require billion-dollar corporations to disclose their climate emissions, and declare California a sanctuary state for LGBTQ youth. He has lived in San Francisco's historically LGBTQ Castro neighborhood since 1997. Twitter: @Scott_Wiener Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.

Monde Numérique - Jérôme Colombain
AI "guardrails" to protect humanity (Yoshua Bengio, Mila Montréal)

Aug 7, 2024 · 27:42


(Rerun) An exclusive interview with Yoshua Bengio, founder of Montreal's Mila institute dedicated to artificial intelligence (in partnership with Mon Carnet / Bruno Guglielminetti). A co-inventor of deep learning and considered one of the most influential figures in artificial intelligence worldwide, the Quebec academic Yoshua Bengio advocates a cautious approach to AI. In his view, the artificial intelligences developed in the future will pose a real risk to the human species, potentially leading to its destruction. Unlike his French colleague Yann Le Cun, Bengio therefore stands on the side of concern and caution, and he calls for applying the precautionary principle. He explains how his current research aims to produce a kind of "gendarme AI" capable of monitoring other AIs to ensure they respect ethical and democratic rules. ♥️ Support Monde Numérique: https://donorbox.org/monde-numerique

AI DAILY: Breaking News in AI
HARRIS & TRUMP DIFFER ON AI

Jul 30, 2024 · 3:51


Plus Insta Lets You Create A Bot Of Yourself.  VP Kamala Harris and President Donald Trump present contrasting AI policies. Harris emphasizes AI safety and protections, while Trump seeks to repeal Biden's AI order and reduce regulations. Like this? Get AIDAILY, delivered to your inbox, every weekday. Subscribe to our newsletter at https://aidaily.us How Harris and Trump Differ on Artificial Intelligence Policy Vice President Kamala Harris and President Donald Trump present contrasting views on AI policy in their presidential campaigns. Harris focuses on AI safety and quick protections without stifling innovation, emphasizing real-world impacts. Trump, however, aims to repeal Biden's AI executive order, advocating for fewer regulations to foster AI development. Meta AI Studio Allows Users to Create AI Versions of Themselves Meta's AI Studio is now available in the U.S., enabling Instagram creators to build AI bots that answer frequently asked questions and link to content. Users can customize their bots to reflect their personality and avoid certain topics. The feature is part of Meta's broader AI strategy to enhance its social media products. iOS Gets an AI Upgrade: Inside Apple's New 'Intelligence' System Apple's new "Apple Intelligence" system introduces advanced AI features optimized for privacy and efficiency. The system includes a 3-billion parameter on-device model for iPhones and a larger server-based model, AFM-server, designed for intensive tasks. Emphasizing responsible AI, Apple ensures user data protection while enhancing text generation, image creation, and in-app interactions. California's AI Bill SB-1047 Sparks Debate Over Innovation and Safety California's proposed SB-1047 aims to regulate large AI models to prevent potential catastrophic risks. Critics argue the bill, backed by AI safety advocates, could hinder innovation and open-source development. Supporters, including AI experts Geoffrey Hinton and Yoshua Bengio, see it as essential for managing AI's future risks. The bill's outcome could influence AI regulation nationwide. AI App 'Hello History' Brings Historical Figures to Life Hello History allows students to engage in lifelike conversations with historical figures like Cleopatra and Einstein using AI technology. This app, available on iOS and Android, enhances learning by providing dynamic interactions with influential figures, making history more engaging and relatable for students and teachers.  AI-Powered Robot Maximo Speeds Up Solar Farm Construction Maximo, an AI-powered robot developed by AES, is revolutionizing solar farm construction by reducing installation timelines and costs by up to 50%. Supported by Amazon, Maximo efficiently installs solar panels even in extreme conditions. This technology aims to meet the growing demand for renewable energy and address workforce shortages in the solar industry.

Math & Physics Podcast
Episode #120 - Yoshua Bengio

Jul 23, 2024 · 63:24


I decided to adopt a more minimalist approach to the podcast. Enjoy the conversation!

The Nonlinear Library
LW - AI #72: Denying the Future by Zvi

Jul 12, 2024 · 63:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #72: Denying the Future, published by Zvi on July 12, 2024 on LessWrong. The Future. It is coming. A surprising number of economists deny this when it comes to AI. Not only do they deny the future that lies in the future. They also deny the future that is here, but which is unevenly distributed. Their predictions and projections do not factor in even what the AI can already do, let alone what it will learn to do later on. Another likely future event is the repeal of the Biden Executive Order. That repeal is part of the Republican platform, and Trump is the favorite to win the election. We must act on the assumption that the order likely will be repealed, with no expectation of similar principles being enshrined in federal law. Then there are the other core problems we will have to solve, and other less core problems such as what to do about AI companions. They make people feel less lonely over a week, but what do they do over a lifetime? Also I don't have that much to say about it now, but it is worth noting that this week it was revealed Apple was going to get an observer board seat at OpenAI… and then both Apple and Microsoft gave up their observer seats. Presumably that is about antitrust and worrying the seats would be a bad look. There could also be more to it. Table of Contents 1. Introduction. 2. Table of Contents. 3. Language Models Offer Mundane Utility. Long as you avoid GPT-3.5. 4. Language Models Don't Offer Mundane Utility. Many mistakes will not be caught. 5. You're a Nudge. You say it's for my own good. 6. Fun With Image Generation. Universal control net for SDXL. 7. Deepfaketown and Botpocalypse Soon. Owner of a lonely bot. 8. They Took Our Jobs. Restaurants. 9. Get Involved. But not in that way. 10. Introducing. Anthropic ships several new features. 11. In Other AI News. Microsoft and Apple give up OpenAI board observer seats. 12. Quiet Speculations. As other papers learned, to keep pace, you must move fast. 13. The AI Denialist Economists. Why doubt only the future? Doubt the present too. 14. The Quest for Sane Regulation. EU and FTC decide that things are their business. 15. Trump Would Repeal the Biden Executive Order on AI. We can't rely on it. 16. Ordinary Americans Are Worried About AI. Every poll says the same thing. 17. The Week in Audio. Carl Shulman on 80,000 hours was a two parter. 18. The Wikipedia War. One obsessed man can do quite a lot of damage. 19. Rhetorical Innovation. Yoshua Bengio gives a strong effort. 20. Evaluations Must Mimic Relevant Conditions. Too often they don't. 21. Aligning a Smarter Than Human Intelligence is Difficult. Stealth fine tuning. 22. The Problem. If we want to survive, it must be solved. 23. Oh Anthropic. Non Disparagement agreements should not be covered by NDAs. 24. Other People Are Not As Worried About AI Killing Everyone. Don't feel the AGI. Language Models Offer Mundane Utility Yes, they are highly useful for coding. It turns out that if you use GPT-3.5 for your 'can ChatGPT code well enough' paper, your results are not going to be relevant. Gallabytes says 'that's morally fraud imho' and that seems at least reasonable. Tests failing in GPT-3.5 is the AI equivalent of "IN MICE" except for IQ tests. If you are going to analyze the state of AI, you need to keep an eye out for basic errors and always always check which model is used. 
So if you go quoting statements such as: Paper about GPT-3.5: its ability to generate functional code for 'hard' problems dropped from 40% to 0.66% after this time as well. 'A reasonable hypothesis for why ChatGPT can do better with algorithm problems before 2021 is that these problems are frequently seen in the training dataset Then even if you hadn't realized or checked before (which you really should have), you need to notice that this says 2021, which is very much not ...

The Nonlinear Library
LW - Yoshua Bengio: Reasoning through arguments against taking AI safety seriously by Judd Rosenblatt

Jul 12, 2024 · 1:39


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Yoshua Bengio: Reasoning through arguments against taking AI safety seriously, published by Judd Rosenblatt on July 12, 2024 on LessWrong. He starts by emphasizing: "The issue is so hotly debated because the stakes are major: According to some estimates, quadrillions of dollars of net present value are up for grabs, not to mention political power great enough to significantly disrupt the current world order. [...] The most important thing to realize, through all the noise of discussions and debates, is a very simple and indisputable fact: while we are racing towards AGI or even ASI, nobody currently knows how such an AGI or ASI could be made to behave morally, or at least behave as intended by its developers and not turn against humans." And he goes on to do a pretty great job addressing: those who think AGI and ASI are impossible or are centuries in the future; those who think AGI is possible but only in many decades; those who think we may reach AGI but not ASI; those who think that AGI and ASI will be kind to us; those who think that corporations will only design well-behaving AIs and existing laws are sufficient; those who think that we should accelerate AI capabilities research and not delay the benefits of AGI; those concerned that talking about catastrophic risks will hurt efforts to mitigate short-term human-rights issues with AI; those concerned with the US-China cold war; those who think that international treaties will not work; those who think the genie is out of the bottle and we should just let go and avoid regulation; those who think that open-source AGI code and weights are the solution; and those who think worrying about AGI is falling for Pascal's wager. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Climate Stack
AI Weather Forecasts for Climate Adaptation with Dr. Peetak Mitra

Climate Stack

Play Episode Listen Later Jul 11, 2024 37:54


In this episode we speak with Dr Peetak Mitra, veteran of countless climate change projects, on the founding team of Excarta, core member of ClimateChange.AI, and gracious human being. He illuminates the role AI/ML can play in adapting to a warming planet, describes the ML techniques his company employs in their breakthrough tools, and gives advice for engineers looking to move into the climate space - in short, ‘just do it'. We also discuss growth in the climate sector, and he shares that despite a widespread economic slowdown, investment in climate technology continues to increase. We were delighted to have him on the show.
About Dr Peetak Mitra
Peetak is a San Francisco-based technologist passionate about leveraging AI to combat climate change. He's on the founding team of Excarta, a venture-backed startup building a breakthrough AI-powered weather intelligence platform for businesses. Prior to Excarta, he was a Member of Research Staff at Xerox PARC (now SRI-PARC), where he co-led projects for AI climate forecasting funded in part by DARPA and NASA. He has been part of Climate Change AI, organizing impactful workshops at major ML conferences including ICLR, AAAI, and NeurIPS with Turing Laureate Prof. Yoshua Bengio. He has been a featured speaker on Climate and AI at MIT, SF Climate Week, OpenAI, and NSF, among others. He holds a PhD in Scientific Machine Learning from the University of Massachusetts Amherst and a Bachelor's degree from BIT Mesra.
https://www.linkedin.com/in/peetak/
Papers
The paper Peetak mentioned: Tackling Climate Change with Machine Learning - https://dl.acm.org/doi/10.1145/3485128
A milestone paper summarizing the application of ML to climate problems. Abstract: “Climate change is one of the greatest challenges facing humanity, and we, as machine learning (ML) experts, may wonder how we can help. Here we describe how ML can be a powerful tool in reducing greenhouse gas emissions and helping society adapt to a changing climate. From smart grids to disaster management, we identify high impact problems where existing gaps can be filled by ML, in collaboration with other fields. Our recommendations encompass exciting research questions as well as promising business opportunities. We call on the ML community to join the global effort against climate change.”
Companies and Organizations
Climate Change AI
Climate Change AI (CCAI) is an organization composed of volunteers from academia and industry who believe that tackling climate change requires concerted societal action, in which machine learning can play an impactful role. Since it was founded in June 2019 (and established as a US domestic non-profit on June 14, 2021), CCAI has led the creation of a global movement in climate change and machine learning, encompassing researchers, engineers, entrepreneurs, investors, policymakers, companies, and NGOs.
9Zero Climate Co-working Space
Launched during San Francisco Climate Week 2024, 9Zero is the hub for all things climate. Starting with coworking and events, we're uniting the entire ecosystem. Startups, investors, corporations, service providers, policymakers, academics: if you're working toward a healthier, more resilient world, you belong at 9Zero. Expanding to Seattle and LA this year. Sign up at www.9Ze
Your Hosts: Mansi Shah - Joshua Marker
ClimateStack website - https://climatestack.podcastpage.io/

POLITICO Dispatch
REBROADCAST: Why one ‘godfather of AI' warns humans must exert control

POLITICO Dispatch

Play Episode Listen Later Jun 12, 2024 19:47


Host Steven Overly is in Canada this week for The US-Canada Summit, hosted by BMO Financial Group and Eurasia Group — and it got him thinking about another Canadian who's been on the podcast before: Canadian computer scientist Yoshua Bengio. Bengio has been dubbed one of the “godfathers of AI,” although he's not exactly thrilled about the title. Still, Bengio devoted most of his professional life to making AI smarter. But now, he wants to prevent AI from destroying humanity. On POLITICO Tech, Bengio tells host Steven Overly about his professional pivot and what policy changes he's pushing for around the world.

Impact Theory with Tom Bilyeu
Megathreat: The Dangers Of AI Are Weirder Than You Think | Yoshua Bengio (Replay)

Impact Theory with Tom Bilyeu

Play Episode Listen Later May 30, 2024 86:41


The launch of ChatGPT broke records in consecutive months between December 2022 and February 2023. Over 1 billion users a month for ChatGPT, over 100,000 users and $45 million in revenue for Jasper A.I., and the race to adopt A.I. at scale has begun. Does the global adoption of artificial intelligence have you concerned or apprehensive about what's to come? On one hand it's easy to get caught up in the possibilities of co-existing with A.I., living the enhanced, upgraded human experience. We already have tech and A.I. integrated into so many of our daily habits and routines: Apple Watches, Oura Rings, social media algorithms, chatbots, and on and on. Yoshua Bengio has dedicated more than 30 years of his computer science career to deep learning. He's an award-winning computer scientist known for his breakthroughs in artificial neural networks. Why, after three decades of contributing to the advancement of A.I. systems, is Yoshua now calling to slow down the development of powerful A.I. systems? This conversation is about being open-minded and aware of the dangers of AI we all need to consider, from the perspective of one of the world's leading experts in artificial intelligence. Conscious computers, A.I. trolls, and the evolution of machines and what it means to be a neural network are just a few of the things you'll find interesting in this conversation. [Original air date: 4-13-23] Follow Yoshua Bengio: Website: https://yoshuabengio.org/ SPONSORS: Explore the Range Rover Sport at https://landroverusa.com Use this link and Hartford Gold will give you up to $15,000 of FREE silver on your first qualifying order: order.offers.americanhartfordgold.com/content-affiliate/?&leadsource=affiliate&utm_sfcampaign=701Rb000009EnmrIAC For comprehensive financial news and analysis, visit the incredible brand that so many great investors use, https://yahoofinance.com. Visit https://BetterHelp.com/ImpactTheory today to get 10% off your first month. Go to https://shopify.com/impact now to grow your business–no matter what stage you're in. Get $1,000 off Vanta when you go to https://vanta.com/THEORY Get 5 free AG1 Travel Packs and a FREE 1 year supply of Vitamin D with your first purchase at https://drinkag1.com/impact. Secure your digital life with proactive protection for your assets, identity, family, and tech – Go to https://aura.com/IMPACT to start your free two-week trial. Take control of your gut health by going to https://tryviome.com/impact and use code IMPACT to get 20% off your first 3 months and free shipping. ***Are You Ready for EXTRA Impact?*** If you're ready to find true fulfillment, strengthen your focus, and ignite your true potential, the Impact Theory subscription was created just for you. *New episodes delivered ad-free, EXCLUSIVE access to hundreds of archived Impact Theory episodes, Tom AMAs, and so much more!* This is not for the faint of heart. This is for those who dare to learn obsessively, every day, day after day. *****Subscribe on Apple Podcasts: https://apple.co/3PCvJaz***** Subscribe on all other platforms (Google Podcasts, Spotify, Castro, Downcast, Overcast, Pocket Casts, Podcast Addict, Podcast Republic, Podkicker, and more): https://impacttheorynetwork.supercast.com/ Learn more about your ad choices. Visit megaphone.fm/adchoices

The Nonlinear Library
AF - Apollo Research 1-year update by Marius Hobbhahn

The Nonlinear Library

Play Episode Listen Later May 29, 2024 14:08


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apollo Research 1-year update, published by Marius Hobbhahn on May 29, 2024 on The AI Alignment Forum. This is a linkpost for: www.apolloresearch.ai/blog/the-first-year-of-apollo-research
About Apollo Research
Apollo Research is an evaluation organization focusing on risks from deceptively aligned AI systems. We conduct technical research on AI model evaluations and interpretability and have a small AI governance team. As of 29 May 2024, we are one year old.
Executive Summary
For the UK AI Safety Summit, we developed a demonstration that Large Language Models (LLMs) can strategically deceive their primary users when put under pressure. The accompanying paper was referenced by experts and the press (e.g. AI Insight forum, BBC, Bloomberg) and accepted for oral presentation at the ICLR LLM agents workshop. The evaluations team is currently working on capability evaluations for precursors of deceptive alignment, scheming model organisms, and a responsible scaling policy (RSP) on deceptive alignment. Our goal is to help governments and AI developers understand, assess, and address the risks of deceptively aligned AI systems. The interpretability team published three papers: An improved training method for sparse dictionary learning, a new conceptual framework for 'loss-landscape-based interpretability', and an associated empirical paper. We are beginning to explore concrete white-box evaluations for deception and continue to work on fundamental interpretability research. The governance team communicates our technical work to governments (e.g., on evaluations, AI deception and interpretability), and develops recommendations around our core research areas for international organizations and individual governments. Apollo Research works with several organizations, including partnering with the UK AISI and being a member of the US AISI Consortium. As part of our partnership with UK AISI, we were contracted to develop deception evaluations. Additionally, we engage with various AI labs, e.g. red-teaming OpenAI's fine-tuning API before deployment and consulting on the deceptive alignment section of an AI lab's RSP. Like any organization, we have also encountered various challenges. Some projects proved overly ambitious, resulting in delays and inefficiencies. We would have benefitted from having dedicated regular exchanges with senior official external advisors earlier. Additionally, securing funding took more time and effort than expected. We have more room for funding. Please reach out if you're interested.
Completed work
Evaluations
For the UK AI Safety Summit, we developed a demonstration that LLMs can strategically deceive their primary users when put under pressure, which was presented at the UK AI Safety Summit. It was referenced by experts and the press (e.g. Yoshua Bengio's statement for Senator Schumer's AI insight forum, BBC, Bloomberg, US Securities and Exchange Commission Chair Gary Gensler's speech on AI and law, and many other media outlets). It was accepted for an oral presentation at this year's ICLR LLM agents workshop. In our role as an independent third-party evaluator, we work with a range of organizations. For example, we were contracted by the UK AISI to build deceptive capability evaluations with them. We also worked with OpenAI to red-team their fine-tuning API before deployment.
We published multiple conceptual research pieces on evaluations, including A Causal Framework for AI Regulation and Auditing and A Theory of Change for AI Auditing. Furthermore, we published conceptual clarifications on deceptive alignment and strategic deception. We were part of multiple collaborations, including: SAD: a situational awareness benchmark with researchers from Owain Evans's group, led by Rudolph Laine (forthcoming). Black-Box Access is Insufficient for Rigorous...

ACM ByteCast
Yoshua Bengio - Episode 54

ACM ByteCast

Play Episode Listen Later May 22, 2024 42:04


In this episode of ACM ByteCast, Rashmi Mohan hosts ACM A.M. Turing Award laureate Yoshua Bengio, Professor at the University of Montreal, and Founder and Scientific Director of MILA (Montreal Institute for Learning Algorithms) at the Quebec AI Institute. Yoshua shared the 2018 Turing Award with Geoffrey Hinton and Yann LeCun for their work on deep learning. He is also a published author and the most cited scientist in Computer Science. Previously, he founded Element AI, a Montreal-based artificial intelligence incubator that turns AI research into real-world business applications, acquired by ServiceNow. He currently serves as technical and scientific advisor to Recursion Pharmaceuticals and scientific advisor for Valence Discovery. He is a Fellow of ACM, the Royal Society, the Royal Society of Canada, Officer of the Order of Canada, and recipient of the Killam Prize, Marie-Victorin Quebec Prize, and Princess of Asturias Award. Yoshua also serves on the United Nations Scientific Advisory Board for Independent Advice on Breakthroughs in Science and Technology and as a Canada CIFAR AI Chair. Yoshua traces his path in computing, from programming games in BASIC as an adolescent to getting interested in the synergy between the human brain and machines as a graduate student. He defines deep learning and talks about knowledge as the relationship between symbols, emphasizing that interdisciplinary collaborations with neuroscientists were key to innovations in DL. He notes his and his colleagues' surprise at the speed of recent breakthroughs with transformer architecture and large language models and talks at length about artificial general intelligence (AGI) and the major risks it will present, such as loss of control, misalignment, and national security threats. Yoshua stresses that mitigating these will require both scientific and political solutions, offers advice for researchers, and shares what he is most excited about with the future of AI.

The Nonlinear Library
AF - Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems by Joar Skalse

The Nonlinear Library

Play Episode Listen Later May 17, 2024 4:42


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems, published by Joar Skalse on May 17, 2024 on The AI Alignment Forum.
I want to draw attention to a new paper, written by myself, David "davidad" Dalrymple, Yoshua Bengio, Stuart Russell, Max Tegmark, Sanjit Seshia, Steve Omohundro, Christian Szegedy, Ben Goldhaber, Nora Ammann, Alessandro Abate, Joe Halpern, Clark Barrett, Ding Zhao, Tan Zhi-Xuan, Jeannette Wing, and Joshua Tenenbaum. In this paper we introduce the concept of "guaranteed safe (GS) AI", which is a broad research strategy for obtaining safe AI systems with provable quantitative safety guarantees. Moreover, with a sufficient push, this strategy could plausibly be implemented on a moderately short time scale.
The key components of GS AI are:
1. A formal safety specification that mathematically describes what effects or behaviors are considered safe or acceptable.
2. A world model that provides a mathematical description of the environment of the AI system.
3. A verifier that provides a formal proof (or some other comparable auditable assurance) that the AI system satisfies the safety specification with respect to the world model.
The first thing to note is that a safety specification in general is not the same thing as a reward function, utility function, or loss function (though they include these objects as special cases). For example, it may specify that the AI system should not communicate outside of certain channels, copy itself to external computers, modify its own source code, or obtain information about certain classes of things in the external world, etc. The safety specifications may be specified manually, generated by a learning algorithm, written by an AI system, or obtained through other means. Further detail is provided in the main paper.
The next thing to note is that most useful safety specifications must be given relative to a world model. Without a world model, we can only use specifications defined directly over input-output relations. However, we want to define specifications over input-outcome relations instead. This is why a world model is a core component of GS AI. Also note that:
1. The world model need not be a "complete" model of the world. Rather, the required amount of detail and the appropriate level of abstraction depends on both the safety specification(s) and the AI system's context of use.
2. The world model should of course account for uncertainty, which may include both stochasticity and nondeterminism.
3. The AI system whose safety is being verified may or may not use a world model, and if it does, we may or may not be able to extract it. However, the world model that is used for the verification of the safety properties need not be the same as the world model of the AI system whose safety is being verified (if it has one).
The world model would likely have to be AI-generated, and should ideally be interpretable. In the main paper, we outline a few potential strategies for producing such a world model.
Finally, the verifier produces a quantitative assurance that the base-level AI controller satisfies the safety specification(s) relative to the world model(s). In the most straightforward form, this could simply take the shape of a formal proof.
However, if a direct formal proof cannot be obtained, then there are weaker alternatives that would still produce a quantitative guarantee. For example, the assurance may take the form of a proof that bounds the probability of failing to satisfy the safety specification, or a proof that the AI system will converge towards satisfying the safety specification (with increasing amounts of data or computational resources, for example). Such proofs are of course often very hard to obtain. However, further progress in automated theorem proving (and relat...
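To make the three components above more concrete, here is a minimal, hypothetical sketch in Python. None of the names (WorldModel, SafetySpecification, Certificate, verify, and so on) come from the paper; they are assumptions chosen only to illustrate how a specification, a world model, and a verifier could fit together, with the verifier returning an auditable quantitative bound rather than a reward signal.

```python
# Minimal structural sketch of the guaranteed safe (GS) AI setup described above.
# All names are illustrative assumptions, not APIs from the paper.
from dataclasses import dataclass
from typing import Protocol


class Policy(Protocol):
    """The AI controller whose behaviour is being verified."""
    def possible_actions(self, state: str) -> list[str]: ...


class WorldModel(Protocol):
    """Mathematical description of the environment, including uncertainty."""
    def outcomes(self, state: str, action: str) -> list[tuple[str, float]]:
        """Possible outcomes of taking an action in a state, with probabilities."""
        ...


class SafetySpecification(Protocol):
    """Describes which outcomes are acceptable (not a reward or loss function)."""
    def is_acceptable(self, outcome: str) -> bool: ...


@dataclass
class Certificate:
    """Auditable assurance produced by the verifier."""
    satisfied: bool
    failure_probability_bound: float


def verify(policy: Policy, model: WorldModel, spec: SafetySpecification,
           states: list[str]) -> Certificate:
    """Toy one-step verifier: bounds the probability of reaching an unacceptable
    outcome under the world model. A real GS AI verifier would instead produce a
    formal proof or a rigorously derived probabilistic bound."""
    worst = 0.0
    for state in states:
        for action in policy.possible_actions(state):
            p_bad = sum(p for outcome, p in model.outcomes(state, action)
                        if not spec.is_acceptable(outcome))
            worst = max(worst, p_bad)
    return Certificate(satisfied=(worst == 0.0), failure_probability_bound=worst)
```

In this toy framing, a certificate with a small but nonzero failure_probability_bound corresponds to the weaker, probabilistic assurances discussed above, while a bound of zero plays the role of the strict formal proof.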

The Nonlinear Library
LW - Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems by Gunnar Zarncke

The Nonlinear Library

Play Episode Listen Later May 16, 2024 1:48


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems, published by Gunnar Zarncke on May 16, 2024 on LessWrong. Authors: David "davidad" Dalrymple, Joar Skalse, Yoshua Bengio, Stuart Russell, Max Tegmark, Sanjit Seshia, Steve Omohundro, Christian Szegedy, Ben Goldhaber, Nora Ammann, Alessandro Abate, Joe Halpern, Clark Barrett, Ding Zhao, Tan Zhi-Xuan, Jeannette Wing, Joshua Tenenbaum Abstract: Ensuring that AI systems reliably and robustly avoid harmful or dangerous behaviours is a crucial challenge, especially for AI systems with a high degree of autonomy and general intelligence, or systems used in safety-critical contexts. In this paper, we will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI. The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees. This is achieved by the interplay of three core components: a world model (which provides a mathematical description of how the AI system affects the outside world), a safety specification (which is a mathematical description of what effects are acceptable), and a verifier (which provides an auditable proof certificate that the AI satisfies the safety specification relative to the world model). We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them. We also argue for the necessity of this approach to AI safety, and for the inadequacy of the main alternative approaches. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
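One hedged way to read that abstract formally (our notation, not the paper's): write pi for the AI system, M for the world model, and phi for the safety specification; the verifier then supplies an auditable certificate of a bound such as the one below, with epsilon equal to zero recovering the strict formal-proof case.

```latex
% Sketch in our own notation, not taken from the paper: the verifier certifies
% that the probability that a trajectory tau, generated by running pi under the
% world model M, violates the safety specification phi is at most epsilon.
\Pr_{\tau \sim M(\pi)}\left[\, \tau \not\models \varphi \,\right] \le \epsilon
```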

Tech&Co
AI-powered gadget Rabbit R1 is a flop – 07/05

Tech&Co

Play Episode Listen Later May 7, 2024 24:00


On Tuesday, May 7, François Sorel was joined by Frédéric Simottel, journalist at BFM Business; Jérôme Colombain, journalist and creator of the podcast "Monde Numérique"; Claudia Cohen, journalist at Le Figaro; and Bruno Guglielminetti, journalist and host of "Mon Carnet de l'actualité numérique". They discussed the announcement that the Rabbit R1, an AI-powered gadget, has flopped, as well as an interview with Yoshua Bengio, on Tech & Co, la quotidienne, on BFM Business. Catch the show Monday through Thursday and listen back to it as a podcast.

London Futurists
What's it like to be an AI, with Anil Seth

London Futurists

Play Episode Listen Later Apr 13, 2024 45:52


As artificial intelligence models become increasingly powerful, they both raise - and might help to answer - some very important questions about one of the most intriguing, fascinating aspects of our lives, namely consciousness. It is possible that in the coming years or decades, we will create conscious machines. If we do so without realising it, we might end up enslaving them, torturing them, and killing them over and over again. This is known as mind crime, and we must avoid it. It is also possible that very powerful AI systems will enable us to understand what our consciousness is, how it arises, and even how to manage it – if we want to do that. Our guest today is the ideal guide to help us explore the knotty issue of consciousness. Anil Seth is professor of Cognitive and Computational Neuroscience at the University of Sussex. He is amongst the most cited scholars on the topics of neuroscience and cognitive science globally, and a regular contributor to newspapers and TV programmes. His most recent book was published in 2021, and is called “Being You – a new science of consciousness”. The first question sets the scene for the conversation that follows: "In your book, you conclude that consciousness may well only occur in living creatures. You say 'it is life, rather than information processing, that breathes the fire into the equations.' What made you conclude that?"
Selected follow-ups:
- Anil Seth's website
- Books by Anil Seth, including Being You
- Consciousness in humans and other things - presentation by Anil Seth at The Royal Society, March 2024
- Is consciousness more like chess or the weather? - an interview with Anil Seth
- Autopoiesis - Wikipedia article about the concept introduced by Humberto Maturana and Francisco Varela
- Akinetic mutism, Wikipedia
- Cerebral organoid (Brain organoid), Wikipedia
- AI Scientists: Safe and Useful AI? - by Yoshua Bengio, on AIs as oracles
- Ex Machina (2014 film, written and directed by Alex Garland)
- The Conscious Electromagnetic Information (Cemi) Field Theory by Johnjoe McFadden
- The Electromagnetic Field Theory of Consciousness by Susan Pockett
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Artificial Intelligence in Industry with Daniel Faggella
Introducing 'The Trajectory': A Specific Editorial Focus on Power and Artificial General Intelligence - with Yoshua Bengio

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Mar 11, 2024 45:01


Today's episode of the AI in Business podcast is an exceptional one. This week, we are delighted to announce the launch of ‘The Trajectory,' a new video channel, podcast, and newsletter from Emerj Technology Research. As a special sneak preview of the first episode of the podcast, we're featuring a portion of our conversation with revered computer scientist and University of Montreal professor Yoshua Bengio on today's episode of the ‘AI in Business' podcast. In conversation with Emerj CEO and Head of Research Daniel Faggella, the ‘Trajectory' sneak preview marks Yoshua's return to the ‘AI in Business' podcast for the first time in nearly a decade to explain his recent and very openly declared change of heart about the implications of artificial general intelligence (AGI) for global realpolitik, and how large language models like ChatGPT became the catalyst for that change.

Economist Podcasts
Babbage: The science that built the AI revolution—part one

Economist Podcasts

Play Episode Listen Later Mar 6, 2024 42:57


What is intelligence? In the middle of the 20th century, the inner workings of the human brain inspired computer scientists to build the first “thinking machines”. But how does human intelligence actually relate to the artificial kind? This is the first episode in a four-part series on the evolution of modern generative AI. What were the scientific and technological developments that took the very first, clunky artificial neurons and ended up with the astonishingly powerful large language models that power apps such as ChatGPT? Host: Alok Jha, The Economist's science and technology editor. Contributors: Ainslie Johnstone, The Economist's data journalist and science correspondent; Dawood Dassu and Steve Garratt of UK Biobank; Daniel Glaser, a neuroscientist at London's Institute of Philosophy; Daniela Rus, director of MIT's Computer Science and Artificial Intelligence Laboratory; Yoshua Bengio of the University of Montréal, who is known as one of the “godfathers” of modern AI. On Thursday April 4th, we're hosting a live event where we'll answer as many of your questions on AI as possible, following this Babbage series. If you're a subscriber, you can submit your question and find out more at economist.com/aievent. Get a world of insights for 50% off—subscribe to Economist Podcasts+. If you're already a subscriber to The Economist, you'll have full access to all our shows as part of your subscription. For more information about how to access Economist Podcasts+, please visit our FAQs page or watch our video explaining how to link your account. Hosted on Acast. See acast.com/privacy for more information.

Babbage from Economist Radio
Babbage: The science that built the AI revolution—part one

Babbage from Economist Radio

Play Episode Listen Later Mar 6, 2024 42:57


What is intelligence? In the middle of the 20th century, the inner workings of the human brain inspired computer scientists to build the first “thinking machines”. But how does human intelligence actually relate to the artificial kind? This is the first episode in a four-part series on the evolution of modern generative AI. What were the scientific and technological developments that took the very first, clunky artificial neurons and ended up with the astonishingly powerful large language models that power apps such as ChatGPT? Host: Alok Jha, The Economist's science and technology editor. Contributors: Ainslie Johnstone, The Economist's data journalist and science correspondent; Dawood Dassu and Steve Garratt of UK Biobank; Daniel Glaser, a neuroscientist at London's Institute of Philosophy; Daniela Rus, director of MIT's Computer Science and Artificial Intelligence Laboratory; Yoshua Bengio of the University of Montréal, who is known as one of the “godfathers” of modern AI. On Thursday April 4th, we're hosting a live event where we'll answer as many of your questions on AI as possible, following this Babbage series. If you're a subscriber, you can submit your question and find out more at economist.com/aievent. Get a world of insights for 50% off—subscribe to Economist Podcasts+. If you're already a subscriber to The Economist, you'll have full access to all our shows as part of your subscription. For more information about how to access Economist Podcasts+, please visit our FAQs page or watch our video explaining how to link your account. Hosted on Acast. See acast.com/privacy for more information.

Win-Win with Liv Boeree
#17 - Dan Hendrycks: Are AI worries overblown?

Win-Win with Liv Boeree

Play Episode Listen Later Feb 21, 2024 100:50


The rate of AI progress is accelerating, so how can we minimize the risks of this incredible technology, while maximizing the rewards? Today I am speaking to leading AI researcher Dan Hendrycks — Dan is the founder of Center for AI Safety, and lead advisor to Elon Musk's X.AI. He was also the architect behind the "Mitigating Risks" letter that was signed by Demis Hassabis, Sam Altman, Bill Gates, Yoshua Bengio and many others. In this conversation we discuss everything from immediate issues like deepfakes, to upcoming risks like malicious use, centralisation of power, regulatory capture and more. In other words, how do we ensure AI ends up a win/win for humanity instead of a lose/lose. Chapters 00:00:00 - Intro 00:02:14 - Are current laws sufficient? 00:09:41 - Types of AI Risk 00:23:30 - Arms Races 00:39:10 - What happens inside an AI? 00:46:39 - Rogue AI 00:52:22 - Sentient AI 01:07:36 - Risks from Centralization 01:14:45 - Open Source 01:23:02 - AI speeding up systemic risks 01:29:54 - Synthetic Data & Simulations 01:36:52 - What Dan is excited about in AI Links ♾️ An Overview of Catastrophic Risk Paper https://arxiv.org/pdf/2306.12001.pdf ♾️ Center for AI Safety https://www.safe.ai/ai-risk ♾️ Representation Engineering https://www.ai-transparency.org/ ♾️ Liv's Ted talk on AI & Moloch https://www.youtube.com/watch?v=WX_vN1QYgmE ♾️ Norbert Wiener https://en.wikipedia.org/wiki/Norbert_Wiener ♾️ Reinforcement Learning Textbook https://inst.eecs.berkeley.edu/~cs188/sp20/assets/files/SuttonBartoIPRLBook2ndEd.pdf ♾️ Richard Posner - Economics Engine https://plato.stanford.edu/entries/legal-econanalysis/ ♾️ More Than a Toy: Random Matrix Models Predict How Real-World Neural Representations Generalize https://arxiv.org/abs/2203.06176 The Win-Win Podcast: Poker champion Liv Boeree takes to the interview chair to tease apart the complexities of one of the most fundamental parts of human nature: competition. Liv is joined by top philosophers, gamers, artists, technologists, CEOs, scientists, athletes and more to understand how competition manifests in their world, and how to change seemingly win-lose games into Win-Wins. Watch the previous episode with Boyan Slat of the Ocean Cleanup here: https://youtu.be/QEYbLN-LC5k Credits ♾️  Hosted by Liv Boeree ♾️  Produced & Edited by Raymond Wei ♾️  Audio Mix by Keir Schmidt

POLITICO Dispatch
Why one ‘godfather of AI' warns humans must exert control

POLITICO Dispatch

Play Episode Listen Later Feb 8, 2024 19:51


Canadian computer scientist Yoshua Bengio has been dubbed one of the “godfathers of AI,” but he's not exactly thrilled about the title. Bengio devoted most of his professional life to making AI smarter. But now, he wants to prevent AI from destroying humanity. On POLITICO Tech, Bengio tells host Steven Overly about his professional pivot and what policy changes he's pushing for around the world.

Best of the Left - Leftist Perspectives on Progressive Politics, News, Culture, Economics and Democracy

Air Date 12/20/2023 AI needs to be regulated by governments even though politicians don't understand computers, just as the government regulates the manufacture and operation of aircraft even though your average politician doesn't know their ass from an aileron. That's what expert advisory panels are for. Be part of the show! Leave us a message or text at 202-999-3991 or email Jay@BestOfTheLeft.com Transcript WINTER SALE! 20% Off Memberships (including Gifts) in December! Join our Discord community! Related Episodes: #1547 Shaping the Future of the Internet #1578 A.I. is a big tech airplane with a 10% chance of crashing, should society fly it? OUR AFFILIATE LINKS: ExpressVPN.com/BestOfTheLeft GET INTERNET PRIVACY WITH EXPRESS VPN! BestOfTheLeft.com/Libro SUPPORT INDIE BOOKSHOPS, GET YOUR AUDIOBOOK FROM LIBRO! BestOfTheLeft.com/Bookshop BotL BOOKSTORE BestOfTheLeft.com/Store BotL MERCHANDISE!
SHOW NOTES
Ch. 1: How are governments approaching AI regulation - In Focus by The Hindu - Air Date 11-16-23 Dr Matti Pohjonen speaks to us about the concerns revolving around AI governance, and if there are any fundamental principles that an AI regulatory regime needs to address.
Ch. 2: A First Step Toward AI Regulation with Tom Wheeler - Your Undivided Attention - Air Date 11-2-23 President Biden released a sweeping executive order that addresses many risks of artificial intelligence. Tom Wheeler, former chairman of the Federal Communications Commission, shares his insights on the order and what's next for AI regulation.
Ch. 3: Artificial Intelligence Godfathers Call for Regulation as Rights Groups Warn AI Encodes Oppression - Democracy Now! - Air Date 6-1-23 We host a roundtable discussion with three experts in artificial intelligence on growing concerns over the technology's potential dangers: Yoshua Bengio, Max Tegmark, and Tawana Petty.
Ch. 4: The EU agrees on AI regulations - What will it mean for people and businesses in the EU - DW News - Air Date 12-9-23 European Union member states and lawmakers reached a preliminary agreement on what they touted as the world's first comprehensive AI legislation on Friday.
Ch. 5: EU vs. AI - Today, Explained - Air Date 12-18-23 The EU has advanced first-of-its-kind AI regulation. The Verge's Jess Weatherbed tells us whether it will make a difference, and Columbia University's Anu Bradford explains the Brussels effect.
Ch. 6: A First Step Toward AI Regulation with Tom Wheeler Part 2 - Your Undivided Attention - Air Date 11-2-23
Ch. 7: How to Keep AI Under Control | Max Tegmark - TEDTalks - Air Date 11-2-23 Scientist Max Tegmark describes an optimistic vision for how we can keep AI under control and ensure it's working for us, not the other way around.
MEMBERS-ONLY BONUS CLIP(S)
Ch. 8: Anti-Democratic Tech Firm's Secret Push For A.I. Deregulation w Lori Wallach - Thom Hartmann Program - Air Date 8-8-23 Giant tech firms are meeting secretly to deregulate Artificial Intelligence technologies in the most undemocratic ways possible. Can profit really take over and corrupt progress?
Ch. 9: How are governments approaching AI regulation Part 2 - In Focus by The Hindu - Air Date 11-16-23
FINAL COMMENTS
Ch. 10: Final comments on the need to understand the benefits and downsides of new technology
MUSIC (Blue Dot Sessions)
Produced by Jay! Tomlinson Visit us at BestOfTheLeft.com Listen Anywhere! BestOfTheLeft.com/Listen Listen Anywhere! Follow at Twitter.com/BestOfTheLeft Like at Facebook.com/BestOfTheLeft Contact me directly at Jay@BestOfTheLeft.com

The Journal.
Why an AI Pioneer Is Worried

The Journal.

Play Episode Listen Later Dec 19, 2023 22:35


Yoshua Bengio, known as a godfather of AI, is one of hundreds of researchers and tech leaders calling for a pause in the breakneck development of powerful new AI tools. We talk to the AI pioneer about how the tools evolved and why he's worried about their potential. Further Listening: - Artificial: Episode 1, The Dream  - Artificial: Episode 2, Selling Out  - OpenAI's Weekend of Absolute Chaos  Further Reading: - How Worried Should We Be About AI's Threat to Humanity? Even Tech Leaders Can't Agree  - ‘Take Science Fiction Seriously': World Leaders Sound Alarm on AI  Learn more about your ad choices. Visit megaphone.fm/adchoices