In this episode of Campus Technology Insider Podcast Shorts, host Rhea Kelly discusses the latest stories in education technology. Highlights include the launch of LawZero by Yoshua Bengio to develop transparent 'scientist AI' systems, a new Cloud Security Alliance guide on red teaming for agentic AI, and OpenAI's report on the malicious use of AI in cybercrime. For more detailed coverage, visit campustechnology.com.
00:00 Introduction and Host Welcome
00:15 LawZero: Ensuring Safe AI Development
00:52 Cloud Security Alliance's New Guide
01:27 OpenAI Report on AI in Cybercrime
02:06 Conclusion and Further Resources
Source links:
New Nonprofit to Work Toward Safer, Truthful AI
Cloud Security Alliance Offers Playbook for Red Teaming Agentic AI Systems
OpenAI Report Identifies Malicious Use of AI in Cloud-Based Cyber Threats
Campus Technology Insider Podcast Shorts are curated by humans and narrated by AI.
James Copnall, presenter of the BBC's Newsday, speaks to Yoshua Bengio, the world-renowned computer scientist often described as one of the godfathers of artificial intelligence, or AI. Bengio is a professor at the University of Montreal in Canada, founder of the Quebec Artificial Intelligence Institute, and recipient of an A.M. Turing Award, “the Nobel Prize of Computing”. AI allows computers to operate in a way that can seem human, using programmes that learn from vast amounts of data and follow complex instructions. Big tech firms and governments have invested billions of dollars in the development of artificial intelligence, thanks to its potential to increase efficiency, cut costs and support innovation. Bengio believes there are risks in AI models that attempt to mimic human behaviour with all its flaws. For example, recent experiments have shown how some AI models are developing the capacity to deceive and even blackmail humans in a quest for self-preservation. Instead, he says, AI must be safe, scientific and able to understand humans without copying them. The Interview brings you conversations with people shaping our world, from all over the world. The best interviews from the BBC. You can listen on the BBC World Service, Mondays and Wednesdays at 0700 GMT. Or you can listen to The Interview as a podcast, out twice a week on BBC Sounds, Apple, Spotify or wherever you get your podcasts.
Presenter: James Copnall
Producers: Lucy Sheppard, Ben Cooper
Editor: Nick Holland
Get in touch with us by email at TheInterview@bbc.co.uk and use the hashtag #TheInterviewBBC on social media.
(Image: Yoshua Bengio. Credit: Craig Barritt/Getty)
I.A. Café - An investigation at the heart of artificial intelligence research
In this episode: first reflections and critical analyses of the LoiZéro (LawZero) project from Yoshua Bengio and his team!
On the program:
LoiZéro - an ambitious project.
On the force of commercial imperatives.
Controlling the bear - or controlling the industry that creates the bears?
An irrefutable prophecy.
Serving the good of humanity... joyfully!
The Zeroth Law in Asimov.
A conceptual difficulty: protecting a human being vs. protecting humanity.
Test bench:
ChatGPT as a psychologist.
NotebookLM - "Deep Dive"... in French!
Happy listening!
Production and hosting: Jean-François Sénéchal, Ph.D
Collaborators (BaristIAs): Frédérick Plamondon and Stéphane Mineo.
Collaborators: Véronique Tremblay, Stéphane Mineo, Frédérick Plamondon, Shirley Plumerand, Sylvain Munger Ph.D, Ève Gaumond, Benjamin Leblanc.
Texts and sources mentioned:
Isaac Asimov, Le commencement – Prélude à fondation (Prelude to Foundation), translated from the American by Jean Bonnefoy, Éditions Libre Expression, 1989.
Yoshua Bengio launches LoiZéro: a new nonprofit organization aiming to design safe AI systems
Support the show
Vassy Kapelos is joined by Yoshua Bengio, President & Scientific Director of LawZero, Full Professor at Université de Montréal, and Founder and Scientific Advisor of Mila – Quebec AI Institute, to discuss this new initiative and its significance for AI. On today's show: Karen Hogan, Auditor General, on the details of her latest reports. John Bolton, Former National Security Adviser to US President Donald Trump and Ambassador to the United Nations under President George W. Bush, on the latest developments in the LA protests and the federal response. The Explainer: Ryan Manucha, Research Fellow, C.D. Howe Institute and Author, Booze, Cigarettes, and Constitutional Dust-Ups: Canada's Quest for Interprovincial Free Trade, answers this question — what barriers to interprovincial trade exist right now? How much will free internal trade really impact the economy? The Daily Debrief Panel with Supriya Dwivedi, Former Senior Adviser to Prime Minister Justin Trudeau; Jeff Rutledge, Vice President, McMillan Vantage; and Stephanie Levitz, senior reporter in The Globe and Mail's Ottawa bureau.
This week on the Debrief Transat, the focus is on two hot topics on either side of the Atlantic: in France, the government is cracking down on the online porn giants, while in Canada, Yoshua Bengio is realizing his vision of ethical artificial intelligence with the official launch of his organization LawZero.
Jérôme Colombain and Bruno Guglielminetti discuss the decision by several pornographic sites, including Pornhub, to cut off access in France in response to the government's new age-verification requirements. Meanwhile in Montreal, Yoshua Bengio announces the creation of LoiZéro (LawZero), an organization that will develop concrete tools to govern artificial intelligence responsibly.
This week, we talk about French humanoid robots, the Switch 2 console, porn sites going dark, AI regulation, the VivaTech show in Paris, and we question the meaning of AI with Luc Julia. Discover and test Frogans at VivaTech 2025 [Partnership]
-------------
THE WEEK'S NEWS
- Wandercraft and Renault: the French robotics company signs a partnership with the automaker and launches its first humanoid robot, Calvin-40.
- Amazon is testing new humanoid robots for home delivery.
- Nintendo Switch 2: the launch of Nintendo's new console is generating considerable excitement, while its pricing and games lineup will be crucial to its success.
- French-style AI agents: startups Mistral and H launch AI agents, including some free ones aimed at the general public.
- When AI rebels: LLMs have reportedly refused to shut themselves down. Fact or fiction?
TRANSATLANTIC DEBRIEF
- With Bruno Guglielminetti, we look back at the shutdown of several pornographic sites in France amid a standoff with the French government.
- We also discuss scientist Yoshua Bengio's "LawZero" initiative for safer AI, which aims to frame the responsible development of these technologies.
THE WEEK'S INTERVIEWS
- Jean-Louis Constanza of Wandercraft presents his humanoid robot Calvin, comments on the partnership with Renault, and lays out the company's ambitions in humanoid robotics.
- Florian Roulier of Niji reviews the 2025 trends expected at VivaTech, with artificial intelligence and cybersecurity in the spotlight.
- Luc Julia, co-inventor of Siri, shares his expertise on generative AI and the distinctions between human creativity and machine capabilities, on the occasion of his latest book, "IA génératives, pas créatives" (Cherche Midi).
-----------
Yoshua Bengio, one of the so-called godfathers of AI, wants it to be less human. Plus, a federal judge temporarily blocked a law in Florida that would ban kids under 14 from getting social media accounts.But first, Meta announced an energy deal with one of the country's biggest operators of nuclear reactors. Marketplace's Nova Safo is joined by Jewel Burks Solomon, managing partner at the venture capital firm Collab Capital, to break down these tech stories from the week.
In this episode, Alex Oliveira breaks down the latest trends shaking up marketing, AI, and entrepreneurship, from Meta's push to automate ad campaigns by 2026 to why consulting giants like EY are scrambling to stay relevant in an AI-first world. Plus, OpenAI's new meeting tools, ChatGPT app integrations, and Yoshua Bengio's $30M AI safety lab. Alex also shares the launch of Pushbio's Future Creator campaign, highlights from new podcast episodes, tools like Apify and Duck.ai, and upcoming events with JMI and Special Compass. Tune in for practical insights, free tools, and stories that matter. Learn more about LawZero
Lee Jae-myung wins South Korea's presidential election, Dutch Prime Minister Dick Schoof resigns, the U.S. approves a plan to integrate foreign fighters into the Syrian Army, an American consulting firm leaves the Gaza Humanitarian Foundation, the White House seeks Congress' approval to codify DOGE cuts, a report warns that around 7 billion people worldwide lack full civil rights, U.S. Homeland Security is sued over its DNA collection program, U.S. officials dismiss reports that FEMA's chief was unaware of the U.S. hurricane season, Bill Gates commits the majority of his $200B fortune to Africa, AI pioneer Yoshua Bengio launches a $30M nonprofit to build "honest" AI systems. Sources: www.verity.news
Former Apple designer Jony Ive joins OpenAI: the company behind ChatGPT is reportedly preparing screenless devices that will have us interacting with generative AI in a whole new way. Also on the menu:
PlanetHoster promo: worried about the sovereignty of your data? Here's a solution. Promo code: PHA-UTDT
The World N0C - Shared hosting - https://bit.ly/phutdtm
HybridCloud N0C - Dedicated hosting - https://bit.ly/phutdt
Tested: the XREAL One augmented-reality glasses will make you see the future. The Wooting 80HE keyboard will win over gamers... and plenty of other PC users besides.
In brief:
Yoshua Bengio launches LawZero for responsible AI
Deepfakes that are getting ever harder to spot
Microsoft wants USB-C cables that actually work
Demeo x Dungeons & Dragons: Battlemarked... the game that will finally launch virtual reality?
EB Games opens late for the Nintendo Switch 2
The end of the Pocket app: Wallabag.it as a replacement?
Apple WWDC25: toward a harmonization of all the OSes?
Android 16 in "desktop mode": the end of Chrome OS?
And more!
See https://www.cogecomedia.com/vie-privee for our privacy policy
Yoshua Bengio — the world's most-cited computer scientist and a "godfather" of artificial intelligence — is deeply concerned about the current trajectory of the technology. As AI models race toward full-blown agency, Bengio warns that they've already learned to deceive, cheat, self-preserve and slip out of our control. Drawing on his groundbreaking research, he reveals a bold plan to keep AI safe and ensure that human flourishing, not machines with unchecked power and autonomy, defines our future. Want to help shape TED's shows going forward? Fill out our survey!
Since December 2024, a run of scientific papers has illustrated AI's capacity for deception. Deep learning pioneer Yoshua Bengio says the consequences could be existential.
Writer: Patricia Clarke
Producer: Ada Barumé
Host: Tomini Babs
Photography: Joe Mee
Executive Producer: Rebecca Moore
1. MANUS AI arrived and caused a stir. Even before its own release, imitators appeared: openmanus, ANUS, OWL
2. DeepSeek R2 was first rumored for March 17, then the date was denied
3. Yoshua Bengio, one of the fathers of modern AI, says that unregulated AI poses an existential risk to humanity!
4. NVIDIA presents GEN3C, a new method that can generate photorealistic videos from single or sparse-view images while preserving camera control and 3D consistency
5. China's ambassador to the US, Xie Feng, called for cooperation on AI to prevent uncontrolled risks.
6. McDonald's is planning a major AI overhaul. The company's points of sale will get AI tools for predictive maintenance and order accuracy, along with an "AI virtual manager". It is working with Google Cloud to deploy edge-computing systems for these capabilities
7. iPhone maker Foxconn announced FoxBrain, its first LLM program for advanced reasoning
8. Portugal's first AI real-estate agent has made over $100 million in sales, a major milestone for the industry. Built by startup eSelf AI and used by Porta da Frente Christie's, this AI agent makes home buying easier with fast answers, virtual tours and live market updates.
• Meanwhile back home: 6 suspects were caught in Niğde trying to sell a mummy.
9. Another move from Alibaba: VACE All-in-One Video Creation and Editing
10. OpenAI just released its Agent SDK, and it's a game changer for AI developers. Building AI agents has gone from weeks to minutes.
11. The team behind Manus has partnered with Alibaba's Qwen to develop a Chinese version of its autonomous agent. The collaboration will integrate Manus with Qwen's open-source models and computing infrastructure
12. China's new silicon-free chip outpaces Intel with 40% more speed and 10% less energy. "The new bismuth-based transistor could revolutionize chip design by overcoming silicon's limitations and offering higher efficiency."
#yapayzeka #teknoloji #deepresearch
The consciousness test: Could an artificial intelligence be capable of genuine conscious experience?
Coming from a range of different scientific and philosophical perspectives, Yoshua Bengio, Sabine Hossenfelder, Nick Lane, and Hilary Lawson dive deep into the question of whether artificial intelligence systems like ChatGPT could one day become self-aware, and whether they have already achieved this state.
Yoshua Bengio is a Turing Award-winning computer scientist. Sabine Hossenfelder is a science YouTuber and theoretical physicist. Nick Lane is an evolutionary biochemist. Hilary Lawson is a post-postmodern philosopher.
To witness such topics discussed live, buy tickets for our upcoming festival: https://howthelightgetsin.org/festivals/
And visit our website for many more articles, videos, and podcasts like this one: https://iai.tv/
You can find everything we referenced here: https://linktr.ee/philosophyforourtimes
And don't hesitate to email us at podcast@iai.tv with your thoughts or questions on the episode! Who do you agree or disagree with?
The AI Action Summit was held in Paris from February 6 to 12, 2025, bringing together heads of state, companies and industry experts. Between political ambitions and strategic announcements, the event highlighted France's and Europe's desire to position themselves as an alternative to American and Chinese dominance in AI.
The summit took place in two symbolic venues: the Grand Palais, with its political and institutional atmosphere and access reserved for a limited number of participants, and Station F, which hosted the entrepreneurial ecosystem and was fully booked. On the political front, Emmanuel Macron reaffirmed his determination to make France a leader in artificial intelligence. 58 countries, including France, India and China, signed a declaration for "open", "inclusive" and "ethical" AI. But the United States and Great Britain did not sign. On the economic front, notable announcements included 109 billion euros to be invested in France, notably for the construction of data centers, a goal of training 100,000 data scientists per year in France, and 200 billion euros of European investment announced by Ursula von der Leyen.
On the corporate side, Mistral AI was the summit's big star, with announcements of partnerships with telecom operators: Free, Orange and Bouygues Telecom will integrate generative AI into their offerings. Finally, on the people side, Sam Altman (present) and Elon Musk (absent) sparred on X over a hypothetical acquisition of OpenAI.
On the regulatory front, the European Union seems willing to loosen its restrictions so as not to stifle innovation. Still, the challenges of cybersecurity, disinformation and political destabilization remain worrying, as the scientist Yoshua Bengio reminded the summit.
AI is more than ever at the heart of global economic and geopolitical stakes. It remains to be seen whether Europe can really impose a "third way" between the American and Chinese models.
-----------
♥️ Support Monde Numérique: https://donorbox.org/monde-numerique
World leaders and tech luminaries will be flocking to Paris in the days ahead for the AI Action Summit. These global gatherings started over a year ago, but since then, the international AI agenda has shifted dramatically: the focus has moved from mitigating the technology's risks to rolling it out fast. On POLITICO Tech, host Steven Overly talks to AI pioneer and professor Yoshua Bengio about the state of the AI safety debate, and why he's urging leaders not to give up on it.
Max Tegmark's Future of Life Institute has called for a pause on the development of advanced AI systems. Tegmark is concerned that the world is moving toward artificial intelligence that can't be controlled, one that could pose an existential threat to humanity. Yoshua Bengio, often dubbed one of the "godfathers of AI", shares similar concerns. In this special Davos edition of CNBC's Beyond the Valley, Tegmark and Bengio join senior technology correspondent Arjun Kharpal to discuss AI safety and worst-case scenarios, such as AI that could try to keep itself alive at the expense of others.
This week, in our transatlantic debrief, we look back at a major shock to the artificial intelligence industry with the arrival of DeepSeek, the Chinese AI model shaking up the balance of power by offering a more frugal alternative to American models. What are the consequences for the AI market and for European regulations?
We also discuss Elon Musk's ambitions to turn X into a financial super-app through a strategic partnership with Visa. A bold bet, but do users trust the platform?
Finally, a look at the latest warning from researcher Yoshua Bengio, who is once again sounding the alarm about the risks of AI. His latest report warns of three major dangers: malicious use, algorithmic bias, and systemic upheaval in the labor market. What lessons should we draw from it, with the European AI summit approaching?
-----------
In their weekly exchange, Bruno Guglielminetti and Jérôme Colombain review several of the week's notable technology stories. DeepSeek, the Chinese AI shaking up the industry. X and Visa announce a partnership to bring payments to Elon Musk's platform, though user trust remains a major challenge. Finally, Yoshua Bengio publishes a report on the risks of advanced AI, highlighting the dangers of cyberattacks and disinformation, and the impact on employment.
Big thanks to Brilliant for sponsoring this video! To try everything Brilliant has to offer free for a full 30 days and get a 20% discount, visit: https://Brilliant.org/DavidBombal
// Mike SOCIAL //
X: / _mikepound
Website: https://www.nottingham.ac.uk/research...
// YouTube video reference //
Teach your AI with Dr Mike Pound (Computerphile): • Train your AI with Dr Mike Pound (Com...
Has Generative AI Already Peaked? - Computerphile: • Has Generative AI Already Peaked? - C...
// Courses Reference //
Deep Learning: https://www.coursera.org/specializati...
AI For Everyone by Andrew Ng: https://www.coursera.org/learn/ai-for...
Pytorch Tutorials: https://pytorch.org/tutorials/
Pytorch Github: https://github.com/pytorch/pytorch
Pytorch Tensors: https://pytorch.org/tutorials/beginne... https://pytorch.org/tutorials/beginne... https://pytorch.org/tutorials/beginne...
Python for Everyone: https://www.py4e.com/
// BOOK //
Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville: https://amzn.to/3vmu4LP
// PyTorch //
Github: https://github.com/pytorch
Website: https://pytorch.org/
Documentation: / pytorch
// David's SOCIAL //
Discord: discord.com/invite/usKSyzb
Twitter: www.twitter.com/davidbombal
Instagram: www.instagram.com/davidbombal
LinkedIn: www.linkedin.com/in/davidbombal
Facebook: www.facebook.com/davidbombal.co
TikTok: tiktok.com/@davidbombal
// MY STUFF //
https://www.amazon.com/shop/davidbombal
// SPONSORS //
Interested in sponsoring my videos? Reach out to my team here: sponsors@davidbombal.com
// MENU //
0:00 - Coming Up
0:43 - Introduction
01:04 - State of AI in 2025
02:10 - AGI Hype: Realistic Expectations
03:15 - Sponsored Section
04:30 - Is AI Plateauing or Advancing?
06:26 - Overhype in AI Features Across Industries
08:01 - Is It Too Late to Start in AI?
09:16 - Where to Start in 2025
10:20 - Recommended Courses and Progression Paths
13:26 - Should I Go to School for AI?
14:18 - Learning AI Independently with Resources Online
17:24 - Machine Learning Progression
19:09 - What is a Notebook?
20:10 - Is AI the Top Skill to Learn in 2025?
23:49 - Other Niches and Fields
25:05 - Cyber Using AI
26:31 - AI on Different Platforms
27:13 - AI isn't Needed Everywhere
29:57 - Leveraging AI
30:35 - AI as a Productivity Tool
31:55 - Retrieval Augmented Generation
33:28 - Concerns About Privacy with AI
36:01 - The Difference Between GPUs, CPUs, NPUs etc.
37:30 - The Release of Sora
38:56 - Will AI Take Our Job?
41:00 - Nvidia Says We Don't Need Developers
43:47 - Devin Announcement
44:59 - Conclusion
Please note that links listed may be affiliate links and provide me with a small percentage/kickback should you use them to purchase any of the items listed or recommended. Thank you for supporting me and this channel!
Disclaimer: This video is for educational purposes only.
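Since the resource list above points beginners at the PyTorch tutorials, here is a minimal sketch of the kind of first training loop those tutorials walk through. Everything in it (the fake data, shapes, and two-layer model) is invented for illustration and is not taken from the video.

```python
import torch
import torch.nn as nn

# Fake data: a batch of 32 examples with 10 features each, plus binary labels.
x = torch.randn(32, 10)
y = torch.randint(0, 2, (32,))

# A tiny two-layer classifier, the "hello world" of the linked tutorials.
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    optimizer.zero_grad()          # clear gradients from the previous step
    loss = loss_fn(model(x), y)    # forward pass and loss computation
    loss.backward()                # backpropagation
    optimizer.step()               # one gradient-descent update

print(f"final training loss: {loss.item():.4f}")
```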
Professor Yoshua Bengio is a pioneer in deep learning and Turing Award winner. Bengio talks about AI safety, why goal-seeking "agentic" AIs might be dangerous, and his vision for building powerful AI tools without giving them agency. Topics include reward tampering risks, instrumental convergence, global AI governance, and how non-agent AIs could revolutionize science and medicine while reducing existential threats. Perfect for anyone curious about advanced AI risks and how to manage them responsibly.
SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events? They are hosting an event in Zurich on January 9th with the ARChitects, join if you can. Go to https://tufalabs.ai/
***
Interviewer: Tim Scarfe
Yoshua Bengio:
https://x.com/Yoshua_Bengio
https://scholar.google.com/citations?user=kukA0LcAAAAJ&hl=en
https://yoshuabengio.org/
https://en.wikipedia.org/wiki/Yoshua_Bengio
TOC:
1. AI Safety Fundamentals
[00:00:00] 1.1 AI Safety Risks and International Cooperation
[00:03:20] 1.2 Fundamental Principles vs Scaling in AI Development
[00:11:25] 1.3 System 1/2 Thinking and AI Reasoning Capabilities
[00:15:15] 1.4 Reward Tampering and AI Agency Risks
[00:25:17] 1.5 Alignment Challenges and Instrumental Convergence
2. AI Architecture and Safety Design
[00:33:10] 2.1 Instrumental Goals and AI Safety Fundamentals
[00:35:02] 2.2 Separating Intelligence from Goals in AI Systems
[00:40:40] 2.3 Non-Agent AI as Scientific Tools
[00:44:25] 2.4 Oracle AI Systems and Mathematical Safety Frameworks
3. Global Governance and Security
[00:49:50] 3.1 International AI Competition and Hardware Governance
[00:51:58] 3.2 Military and Security Implications of AI Development
[00:56:07] 3.3 Personal Evolution of AI Safety Perspectives
[01:00:25] 3.4 AI Development Scaling and Global Governance Challenges
[01:12:10] 3.5 AI Regulation and Corporate Oversight
4. Technical Innovations
[01:23:00] 4.1 Evolution of Neural Architectures: From RNNs to Transformers
[01:26:02] 4.2 GFlowNets and Symbolic Computation
[01:30:47] 4.3 Neural Dynamics and Consciousness
[01:34:38] 4.4 AI Creativity and Scientific Discovery
SHOWNOTES (Transcript, references, best clips etc): https://www.dropbox.com/scl/fi/ajucigli8n90fbxv9h94x/BENGIO_SHOW.pdf?rlkey=38hi2m19sylnr8orb76b85wkw&dl=0
CORE REFS (full list in shownotes and pinned comment):
[00:00:15] Bengio et al.: "AI Risk" Statement - https://www.safe.ai/work/statement-on-ai-risk
[00:23:10] Bengio on reward tampering & AI safety (Harvard Data Science Review) - https://hdsr.mitpress.mit.edu/pub/w974bwb0
[00:40:45] Munk Debate on AI existential risk, featuring Bengio - https://munkdebates.com/debates/artificial-intelligence
[00:44:30] "Can a Bayesian Oracle Prevent Harm from an Agent?" (Bengio et al.) on oracle-to-agent safety - https://arxiv.org/abs/2408.05284
[00:51:20] Bengio (2024) memo on hardware-based AI governance verification - https://yoshuabengio.org/wp-content/uploads/2024/08/FlexHEG-Memo_August-2024.pdf
[01:12:55] Bengio's involvement in EU AI Act code of practice - https://digital-strategy.ec.europa.eu/en/news/meet-chairs-leading-development-first-general-purpose-ai-code-practice
[01:27:05] Complexity-based compositionality theory (Elmoznino, Jiralerspong, Bengio, Lajoie) - https://arxiv.org/abs/2410.14817
[01:29:00] GFlowNet Foundations (Bengio et al.) for probabilistic inference - https://arxiv.org/pdf/2111.09266
[01:32:10] Discrete attractor states in neural systems (Nam, Elmoznino, Bengio, Lajoie) - https://arxiv.org/pdf/2302.06403
Comedian Ben Gleib returns to the show and they open by talking about a hiking trail “Karen” in Colorado, the great magnet that connects all of Adam's pizza orders, and the hot dog options at Crypto.com Arena. Next, Jason “Mayhem” Miller joins to read the news including stories about Elon Musk joking about buying MSNBC with a risqué meme, how crows can hold grudges against individual humans for up to 17 years, tech pioneer Yoshua Bengio's warning that AI systems could turn against humans, and Cher telling Howard Stern that she is fully aware men expect 'fabulous sex' from her. Then, former White House Communications Director Anthony Scaramucci returns to talk about why the government can't be run like a business, why he approves of Trump nominating Robert Kennedy Jr. as the Department of Health & Human Services secretary, and the weird insult he received from a journalist. For more with Ben Gleib: ● PODCAST: Last Week on Earth w/ Ben Gleib ● NEW SPECIAL: The Mad King - Available on YouTube ● INSTAGRAM: @bengleib For more with Anthony Scaramucci: ● PODCAST: The Rest is Politics US ● INSTAGRAM: @scaramucci ● TWITTER/X: @scaramucci Thank you for supporting our sponsors: ● http://Meater.com ● QualiaLife.com/Adam ● http://OReillyAuto.com/Adam
Aspen Strategy Group executive director Anja Manuel joins the podcast to discuss issues surrounding AI and national security, and a new series of original papers and op-eds called “Intelligent Defense: Navigating National Security in the Age of AI.” The papers are authored by Aspen Strategy Group members including: Manuel, Mark Esper, General David Petraeus, David Ignatius, Nick Kristof, Steve Bowsher, Joseph S. Nye, Jr., Yoshua Bengio, Senator Chris Coons, Kent Walker, Jennifer Ewbank, Daniel Poneman, Eileen O'Connor, and Graham Allison.
A couple of weeks ago, I was at this splashy AI conference in Montreal called All In. It was – how should I say this – a bit over the top. There were smoke machines, thumping dance music, food trucks. It was a far cry from the quiet research labs where AI was developed. While I remain skeptical of the promise of artificial intelligence, this conference made it clear that the industry is, well, all in. The stage was filled with startup founders promising that AI was going to revolutionize the way we work, and government officials saying AI was going to supercharge the economy. And then there was Yoshua Bengio. Bengio is one of AI's pioneering figures. In 2018, he and two colleagues won the Turing Award – the closest thing computer science has to a Nobel Prize – for their work on deep learning. In 2022, he was the most cited computer scientist in the world. It wouldn't be hyperbolic to suggest that AI as we know it today might not exist without Yoshua Bengio. But in the last couple of years, Bengio has had an epiphany of sorts. And he now believes that, left unchecked, AI has the potential to wipe out humanity. So these days, he's dedicated himself to AI safety. He's a professor at the University of Montreal and the founder of MILA - the Quebec Artificial Intelligence Institute. And he was at this big AI conference too, amidst all these Silicon Valley types, pleading with the industry to slow down before it's too late.
Mentioned:
“Personal and Psychological Dimensions of AI Researchers Confronting AI Catastrophic Risks” by Yoshua Bengio
“Deep Learning” by Yann LeCun, Yoshua Bengio, Geoffrey Hinton
“Computing Machinery and Intelligence” by Alan Turing
“International Scientific Report on the Safety of Advanced AI”
“Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?” by R. Ren et al.
“SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act”
Further reading:
“‘Deep Learning’ Guru Reveals the Future of AI” by Cade Metz
“Montréal Declaration for a Responsible Development of Artificial Intelligence”
“This A.I. Subculture’s Motto: Go, Go, Go” by Kevin Roose
“Reasoning through arguments against taking AI safety seriously” by Yoshua Bengio
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MIRI's September 2024 newsletter, published by Harlan on September 17, 2024 on LessWrong.
MIRI updates
Aaron Scher and Joe Collman have joined the Technical Governance Team at MIRI as researchers. Aaron previously did independent research related to sycophancy in language models and mechanistic interpretability, while Joe previously did independent research related to AI safety via debate and contributed to field-building work at MATS and BlueDot Impact.
In an interview with PBS News Hour's Paul Solman, Eliezer Yudkowsky briefly explains why he expects smarter-than-human AI to cause human extinction.
In an interview with The Atlantic's Ross Andersen, Eliezer discusses the reckless behavior of the leading AI companies, and the urgent need to change course.
News and links
Google DeepMind announced a hybrid AI system capable of solving International Mathematical Olympiad problems at the silver medalist level. In the wake of this development, a Manifold prediction market significantly increased its odds that AI will achieve gold level by 2025, a milestone that Paul Christiano gave less than 8% odds and Eliezer gave at least 16% odds to in 2021.
The computer scientist Yoshua Bengio discusses and responds to some common arguments people have for not worrying about the AI alignment problem.
SB 1047, a California bill establishing whistleblower protections and mandating risk assessments for some AI developers, has passed the State Assembly and moved on to the desk of Governor Gavin Newsom, to either be vetoed or passed into law. The bill has received opposition from several leading AI companies, but has also received support from a number of employees of those companies, as well as many academic researchers. At the time of this writing, prediction markets think it's about 50% likely that the bill will become law.
In a new report, researchers at Epoch AI estimate how big AI training runs could get by 2030, based on current trends and potential bottlenecks. They predict that by the end of the decade it will be feasible for AI companies to train a model with 2e29 FLOP, which is about 10,000 times the amount of compute used to train GPT-4.
Abram Demski, who previously worked at MIRI as part of our recently discontinued Agent Foundations research program, shares an update about his independent research plans, some thoughts on public vs private research, and his current funding situation.
You can subscribe to the MIRI Newsletter here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How you can help pass important AI legislation with 10 minutes of effort, published by ThomasW on September 16, 2024 on LessWrong.
Posting something about a current issue that I think many people here would be interested in. See also the related EA Forum post.
California Governor Gavin Newsom has until September 30 to decide the fate of SB 1047 - one of the most hotly debated AI bills in the world. The Center for AI Safety Action Fund, where I work, is a co-sponsor of the bill. I'd like to share how you can help support the bill if you want to.
About SB 1047 and why it is important
SB 1047 is an AI bill in the state of California. SB 1047 would require the developers of the largest AI models, costing over $100 million to train, to test the models for the potential to cause or enable severe harm, such as cyberattacks on critical infrastructure or the creation of biological weapons resulting in mass casualties or $500 million in damages. AI developers must have a safety and security protocol that details how they will take reasonable care to prevent these harms and publish a copy of that protocol. Companies who fail to perform their duty under the act are liable for resulting harm.
SB 1047 also lays the groundwork for a public cloud computing resource to make AI research more accessible to academic researchers and startups and establishes whistleblower protections for employees at large AI companies.
So far, AI policy has relied on government reporting requirements and voluntary promises from AI developers to behave responsibly. But if you think voluntary commitments are insufficient, you will probably think we need a bill like SB 1047. If SB 1047 is vetoed, it's plausible that no comparable legal protection will exist in the next couple of years, as Congress does not appear likely to pass anything like this any time soon.
The bill's text can be found here. A summary of the bill can be found here. Longer summaries can be found here and here, and a debate on the bill is here.
SB 1047 is supported by many academic researchers (including Turing Award winners Yoshua Bengio and Geoffrey Hinton), employees at major AI companies and organizations like Imbue and Notion. It is opposed by OpenAI, Google, Meta, venture capital firm A16z as well as some other academic researchers and organizations. After a recent round of amendments, Anthropic said "we believe its benefits likely outweigh its costs."
SB 1047 recently passed the California legislature, and Governor Gavin Newsom has until September 30th to sign or veto it. Newsom has not yet said whether he will sign it or not, but he is being lobbied hard to veto it. The Governor needs to hear from you.
How you can help
If you want to help this bill pass, there are some pretty simple steps you can do to increase that probability, many of which are detailed on the SB 1047 website.
The most useful thing you can do is write a custom letter. To do this:
Make a letter addressed to Governor Newsom using the template here.
Save the document as a PDF and email it to leg.unit@gov.ca.gov.
In writing this letter, we encourage you to keep it simple, short (0.5-2 pages), and intuitive. Complex, philosophical, or highly technical points are not necessary or useful in this context - instead, focus on how the risks are serious and how this bill would help keep the public safe.
Once you've written your own custom letter, you can also think of 5 family members or friends who might also be willing to write one. Supporters from California are especially helpful, as are parents and people who don't typically engage on tech issues. Then help them write it! You can:
Call or text them and tell them about the bill and ask them if they'd be willing to support it.
Draft a custom letter based on what you know about them and what they told you.
Send them a com...
Hugo Larochelle, principal researcher at Google DeepMind's Montreal lab, discusses the evolution of artificial intelligence in 2024 and its transformative potential. We talk about the importance of steering AI toward beneficial applications, while acknowledging the responsibilities that come with it. Larochelle describes his work in bioacoustics, where AI is used to recognize animal species from recorded sounds, notably birds. These tools are shared as open source to support research in ecology. He also talks about his collaboration with key figures such as Yoshua Bengio and Geoffrey Hinton, who have shaped his long-term vision.
A conversation with Valérie Pisano, CEO of Mila. We discuss the key role of the Quebec artificial intelligence institute, founded by Yoshua Bengio. Valérie Pisano highlights the importance of the scientific community and of Bengio's leadership in Mila's influence, and she walks us through her approach to the ecosystem that Mila has become.
Washington isn't poised to pass major AI legislation. Ottawa isn't either. So Canadian computer scientist Yoshua Bengio, one of the “godfathers” of artificial intelligence, is looking to Sacramento. He's urging California Gov. Gavin Newsom to sign an AI safety bill by month's end — and facing off against influential tech executives who want it killed. On today's POLITICO Tech, Bengio explains why he thinks California needs to regulate now.
California's Senate Bill 1047 is on the brink of becoming a law, and we're here to break down what that means for the tech industry and society at large. Tune in as I dissect how this controversial bill mandates rigorous testing of AI systems to identify potential harms such as cybersecurity risks and threats to critical infrastructure. I've got insights from policymakers, including Senator Scott Wiener, who argues that the bill formalizes safety measures already accepted by top AI firms. Amidst passionate debates, hear how tech giants like Google and Meta push back against the regulations, fearing they could cripple innovation, especially for startups. Meanwhile, proponents, including whistleblowers from OpenAI and notable figures like Elon Musk and Yoshua Bengio, champion the necessity of such rules to mitigate substantial AI risks. We'll also explore the broader legislative landscape that aims to combat deepfakes and automated discrimination, and to safeguard the likeness of deceased individuals in AI-generated content. Support the show
Yoshua Bengio helped to create artificial intelligence, and now he wishes he'd included an off switch. The Montreal computer scientist explains why he's worried about the rapidly developing technology, and how it could be reined in.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can a Bayesian Oracle Prevent Harm from an Agent? (Bengio et al. 2024), published by Matt MacDermott on September 1, 2024 on The AI Alignment Forum.
Yoshua Bengio wrote a blogpost about a new AI safety paper by him, various collaborators, and me. I've pasted the text below, but first here are a few comments from me aimed at an AF/LW audience.
The paper is basically maths plus some toy experiments. It assumes access to a Bayesian oracle that can infer a posterior over hypotheses given data, and can also estimate probabilities for some negative outcome ("harm"). It proposes some conservative decision rules one could use to reject actions proposed by an agent, and proves probabilistic bounds on their performance under appropriate assumptions.
I expect the median reaction in these parts to be something like: ok, I'm sure there are various conservative decision rules you could apply using a Bayesian oracle, but isn't obtaining a Bayesian oracle the hard part here? Doesn't that involve advances in Bayesian machine learning, and also probably solving ELK to get the harm estimates?
My answer to that is: yes, I think so. I think Yoshua does too, and that that's the centre of his research agenda. Probably the main interest of this paper to people here is to provide an update on Yoshua's research plans. In particular it gives some more context on what the "guaranteed safe AI" part of his approach might look like -- design your system to do explicit Bayesian inference, and make an argument that the system is safe based on probabilistic guarantees about the behaviour of a Bayesian inference machine. This is in contrast to more hardcore approaches that want to do formal verification by model-checking. You should probably think of the ambition here as more like "a safety case involving proofs" than "a formal proof of safety".
Bounding the probability of harm from an AI to create a guardrail
Published 29 August 2024 by yoshuabengio
As we move towards more powerful AI, it becomes urgent to better understand the risks, ideally in a mathematically rigorous and quantifiable way, and use that knowledge to mitigate them. Is there a way to design powerful AI systems based on machine learning methods that would satisfy probabilistic safety guarantees, i.e., would be provably unlikely to take a harmful action?
Current AI safety evaluations and benchmarks test the AI for cases where it may behave badly, e.g., by providing answers that could yield dangerous misuse. That is useful and should be legally required with flexible regulation, but is not sufficient. These tests only tell us one side of the story: If they detect bad behavior, a flag is raised and we know that something must be done to mitigate the risks. However, if they do not raise such a red flag, we may still have a dangerous AI in our hands, especially since the testing conditions might be different from the deployment setting, and attackers (or an out-of-control AI) may be creative in ways that the tests did not consider. Most concerningly, AI systems could simply recognize they are being tested and have a temporary incentive to behave appropriately while being tested. Part of the problem is that such tests are spot checks. They are trying to evaluate the risk associated with the AI in general by testing it on special cases.
Another option would be to evaluate the risk on a case-by-case basis and reject queries or answers that are considered to potentially violate a safety specification. With the long-term goal of obtaining a probabilistic guarantee that would apply in every context, we thus consider in this new paper (see reference and co-authors below) the objective of estimating a context-dependent upper bound on the probability of violating a given safety specification. Such a risk evaluation would need to be performed at ru...
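To make the flavor of such a guardrail concrete, here is a minimal sketch in Python. It is an illustration under stated assumptions, not the paper's actual algorithm or bounds: it assumes the Bayesian oracle supplies a posterior over hypotheses plus each hypothesis's estimate of the harm probability of a proposed action, and it rejects the action when the most pessimistic plausible hypothesis puts that probability above a threshold. The function name, the `alpha` plausibility cutoff, and all numbers are hypothetical.

```python
from typing import Sequence

def reject_action(
    posterior: Sequence[float],      # oracle's posterior P(hypothesis_i | data)
    harm_given_h: Sequence[float],   # P(harm | action, hypothesis_i), per hypothesis
    risk_threshold: float = 0.01,    # maximum tolerated probability of harm
    alpha: float = 10.0,             # "plausible" means posterior within a factor alpha of the maximum
) -> bool:
    """Conservative rule: reject when a pessimistic bound on P(harm) is too high."""
    map_prob = max(posterior)
    plausible_harms = [
        harm for p, harm in zip(posterior, harm_given_h) if p >= map_prob / alpha
    ]
    pessimistic_harm = max(plausible_harms)  # worst case over plausible hypotheses
    return pessimistic_harm > risk_threshold

# Two live theories disagree about an action's safety; the rule errs on the
# side of caution and rejects the action.
print(reject_action(posterior=[0.6, 0.4], harm_given_h=[0.001, 0.2]))  # True
```

The design choice doing the work here is pessimism: rather than averaging harm estimates under the posterior, the rule lets any sufficiently probable hypothesis veto the action, which is the spirit of the cautious, bound-based rejection the post describes.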
Dr. Joscha Bach introduces a surprising idea called "cyber animism" in his AGI-24 talk - the notion that nature might be full of self-organizing software agents, similar to the spirits in ancient belief systems. Bach suggests that consciousness could be a kind of software running on our brains, and wonders if similar "programs" might exist in plants or even entire ecosystems.
MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api.
Joscha takes us on a tour de force through history, philosophy, and cutting-edge computer science, daring us to rethink what we know about minds, machines, and the world around us. Joscha believes we should blur the lines between human, artificial, and natural intelligence, and argues that consciousness might be more widespread and interconnected than we ever thought possible.
Dr. Joscha Bach: https://x.com/Plinz
This is video 2/9 from our coverage of AGI-24 in Seattle: https://agi-conf.org/2024/
Watch the official MLST interview with Joscha, which we did right after this talk, now on early access on our Patreon - https://www.patreon.com/posts/joscha-bach-110199676 (you also get access to our private discord and biweekly calls)
TOC:
00:00:00 Introduction: AGI and Cyberanimism
00:03:57 The Nature of Consciousness
00:08:46 Aristotle's Concepts of Mind and Consciousness
00:13:23 The Hard Problem of Consciousness
00:16:17 Functional Definition of Consciousness
00:20:24 Comparing LLMs and Human Consciousness
00:26:52 Testing for Consciousness in AI Systems
00:30:00 Animism and Software Agents in Nature
00:37:02 Plant Consciousness and Ecosystem Intelligence
00:40:36 The California Institute for Machine Consciousness
00:44:52 Ethics of Conscious AI and Suffering
00:46:29 Philosophical Perspectives on Consciousness
00:49:55 Q&A: Formalisms for Conscious Systems
00:53:27 Coherence, Self-Organization, and Compute Resources
YT version (very high quality, filmed by us live): https://youtu.be/34VOI_oo-qM
Refs:
Aristotle's work on the soul and consciousness
Richard Dawkins' work on genes and evolution
Gerald Edelman's concept of Neural Darwinism
Thomas Metzinger's book "Being No One"
Yoshua Bengio's concept of the "consciousness prior"
Stuart Hameroff's theories on microtubules and consciousness
Christof Koch's work on consciousness
Daniel Dennett's "Cartesian Theater" concept
Giulio Tononi's Integrated Information Theory
Mike Levin's work on organismal intelligence
The concept of animism in various cultures
Freud's model of the mind
Buddhist perspectives on consciousness and meditation
The Genesis creation narrative (for its metaphorical interpretation)
California Institute for Machine Consciousness
Sam Harris speaks with Yoshua Bengio and Scott Wiener about AI risk and the new bill introduced in California intended to mitigate it. They discuss the controversy over regulating AI and the assumptions that lead people to discount the danger of an AI arms race. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe. Learning how to train your mind is the single greatest investment you can make in life. That's why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life's most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.
Share this episode: https://www.samharris.org/podcasts/making-sense-episodes/379-regulating-artificial-intelligence Sam Harris speaks with Yoshua Bengio and Scott Wiener about AI risk and the new bill introduced in California intended to mitigate it. They discuss the controversy over regulating AI and the assumptions that lead people to discount the danger of an AI arms race. Yoshua Bengio is a full professor at Université de Montréal and the Founder and Scientific Director of Mila - Quebec AI Institute. Considered one of the world’s leaders in artificial intelligence and deep learning, he is the recipient of the 2018 A.M. Turing Award with Geoffrey Hinton and Yann LeCun, known as the Nobel Prize of computing. He is a Canada CIFAR AI Chair, a member of the UN’s Scientific Advisory Board for Independent Advice on Breakthroughs in Science and Technology, and Chair of the International Scientific Report on the Safety of Advanced AI. Website: https://yoshuabengio.org/ Scott Wiener has represented San Francisco in the California Senate since 2016. He recently introduced SB 1047, a bill aiming to reduce the risks of frontier models of AI. He has also authored landmark laws to, among other things, streamline the permitting of new homes, require insurance plans to cover mental health care, guarantee net neutrality, eliminate mandatory minimums in sentencing, require billion-dollar corporations to disclose their climate emissions, and declare California a sanctuary state for LGBTQ youth. He has lived in San Francisco's historically LGBTQ Castro neighborhood since 1997. Twitter: @Scott_Wiener Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.
(Rebroadcast) An exclusive interview with Yoshua Bengio, founder of Montreal's Mila institute dedicated to artificial intelligence (in partnership with Mon Carnet / Bruno Guglielminetti). A co-inventor of deep learning and considered one of the most influential figures in artificial intelligence worldwide, the Quebec academic Yoshua Bengio advocates a cautious approach to AI. In his view, the artificial intelligences developed in the future will pose a genuine risk to the human species, one that could lead to its destruction. Unlike his French colleague Yann Le Cun, Bengio thus stands on the side of worry and caution, and calls for applying the precautionary principle. He explains how his current research aims to create a kind of "police AI" capable of monitoring other AIs to make sure they respect ethical and democratic rules.
-----------
♥️ Support Monde Numérique: https://donorbox.org/monde-numerique
I decided to adopt a more minimalist approach to the podcast. Enjoy the conversation!
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #72: Denying the Future, published by Zvi on July 12, 2024 on LessWrong.
The Future. It is coming. A surprising number of economists deny this when it comes to AI. Not only do they deny the future that lies in the future. They also deny the future that is here, but which is unevenly distributed. Their predictions and projections do not factor in even what the AI can already do, let alone what it will learn to do later on.
Another likely future event is the repeal of the Biden Executive Order. That repeal is part of the Republican platform, and Trump is the favorite to win the election. We must act on the assumption that the order likely will be repealed, with no expectation of similar principles being enshrined in federal law.
Then there are the other core problems we will have to solve, and other less core problems such as what to do about AI companions. They make people feel less lonely over a week, but what do they do over a lifetime?
Also I don't have that much to say about it now, but it is worth noting that this week it was revealed Apple was going to get an observer board seat at OpenAI… and then both Apple and Microsoft gave up their observer seats. Presumably that is about antitrust and worrying the seats would be a bad look. There could also be more to it.
Table of Contents
1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. Long as you avoid GPT-3.5.
4. Language Models Don't Offer Mundane Utility. Many mistakes will not be caught.
5. You're a Nudge. You say it's for my own good.
6. Fun With Image Generation. Universal control net for SDXL.
7. Deepfaketown and Botpocalypse Soon. Owner of a lonely bot.
8. They Took Our Jobs. Restaurants.
9. Get Involved. But not in that way.
10. Introducing. Anthropic ships several new features.
11. In Other AI News. Microsoft and Apple give up OpenAI board observer seats.
12. Quiet Speculations. As other papers learned, to keep pace, you must move fast.
13. The AI Denialist Economists. Why doubt only the future? Doubt the present too.
14. The Quest for Sane Regulation. EU and FTC decide that things are their business.
15. Trump Would Repeal the Biden Executive Order on AI. We can't rely on it.
16. Ordinary Americans Are Worried About AI. Every poll says the same thing.
17. The Week in Audio. Carl Shulman on 80,000 hours was a two parter.
18. The Wikipedia War. One obsessed man can do quite a lot of damage.
19. Rhetorical Innovation. Yoshua Bengio gives a strong effort.
20. Evaluations Must Mimic Relevant Conditions. Too often they don't.
21. Aligning a Smarter Than Human Intelligence is Difficult. Stealth fine tuning.
22. The Problem. If we want to survive, it must be solved.
23. Oh Anthropic. Non Disparagement agreements should not be covered by NDAs.
24. Other People Are Not As Worried About AI Killing Everyone. Don't feel the AGI.
Language Models Offer Mundane Utility
Yes, they are highly useful for coding. It turns out that if you use GPT-3.5 for your 'can ChatGPT code well enough' paper, your results are not going to be relevant. Gallabytes says 'that's morally fraud imho' and that seems at least reasonable. Tests failing in GPT-3.5 is the AI equivalent of "IN MICE" except for IQ tests. If you are going to analyze the state of AI, you need to keep an eye out for basic errors and always always check which model is used.
So if you go quoting statements such as:
Paper about GPT-3.5: its ability to generate functional code for 'hard' problems dropped from 40% to 0.66% after this time as well. 'A reasonable hypothesis for why ChatGPT can do better with algorithm problems before 2021 is that these problems are frequently seen in the training dataset'
Then even if you hadn't realized or checked before (which you really should have), you need to notice that this says 2021, which is very much not ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Yoshua Bengio: Reasoning through arguments against taking AI safety seriously, published by Judd Rosenblatt on July 12, 2024 on LessWrong. He starts by emphasizing The issue is so hotly debated because the stakes are major: According to some estimates, quadrillions of dollars of net present value are up for grabs, not to mention political power great enough to significantly disrupt the current world order. [...] The most important thing to realize, through all the noise of discussions and debates, is a very simple and indisputable fact: while we are racing towards AGI or even ASI, nobody currently knows how such an AGI or ASI could be made to behave morally, or at least behave as intended by its developers and not turn against humans. And goes on to do a pretty great job addressing:
those who think AGI and ASI are impossible or are centuries in the future
those who think AGI is possible but only in many decades
those who think that we may reach AGI but not ASI
those who think that AGI and ASI will be kind to us
those who think that corporations will only design well-behaving AIs and existing laws are sufficient
those who think that we should accelerate AI capabilities research and not delay benefits of AGI
those who think that talking about catastrophic risks will hurt efforts to mitigate short-term human-rights issues with AI
those concerned with the US-China cold war
those who think that international treaties will not work
those who think the genie is out of the bottle and we should just let go and avoid regulation
those who think that open-source AGI code and weights are the solution
those who think worrying about AGI is falling for Pascal's wager
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
In this episode we speak with Dr Peetak Mitra, veteran of countless climate change projects, member of the founding team of Excarta, core member of ClimateChange.AI, and gracious human being. He illuminates the role AI/ML can play in adapting to a warming planet, describes the ML techniques his company employs in their breakthrough tools, and gives advice for engineers looking to move into the climate space - in short, 'just do it'. We also discuss growth in the climate sector, and he shares that despite a widespread economic slowdown, investment in climate technology continues to increase. We were delighted to have him on the show.
About Dr Peetak Mitra
Peetak is a San Francisco-based technologist passionate about leveraging AI to combat climate change. He's on the founding team of Excarta, a venture-backed startup building a breakthrough AI-powered weather intelligence platform for businesses. Prior to Excarta, he was a Member of Research Staff at Xerox PARC (now SRI-PARC), where he co-led projects in AI climate forecasting funded in part by DARPA and NASA. He has been part of Climate Change AI, organizing impactful workshops at major ML conferences including ICLR, AAAI, and NeurIPS with Turing Laureate Prof. Yoshua Bengio. He has been a featured speaker on climate and AI at MIT, SF Climate Week, OpenAI, and NSF, among others. He holds a PhD in Scientific Machine Learning from the University of Massachusetts Amherst and a Bachelor's degree from BIT Mesra. https://www.linkedin.com/in/peetak/
Papers
The paper Peetak mentioned: Tackling Climate Change with Machine Learning - https://dl.acm.org/doi/10.1145/3485128 - a milestone paper summarizing the application of ML to climate problems. Abstract: "Climate change is one of the greatest challenges facing humanity, and we, as machine learning (ML) experts, may wonder how we can help. Here we describe how ML can be a powerful tool in reducing greenhouse gas emissions and helping society adapt to a changing climate. From smart grids to disaster management, we identify high impact problems where existing gaps can be filled by ML, in collaboration with other fields. Our recommendations encompass exciting research questions as well as promising business opportunities. We call on the ML community to join the global effort against climate change."
Companies and Organizations
Climate Change AI: Climate Change AI (CCAI) is an organization composed of volunteers from academia and industry who believe that tackling climate change requires concerted societal action, in which machine learning can play an impactful role. Since it was founded in June 2019 (and established as a US domestic non-profit on June 14, 2021), CCAI has led the creation of a global movement in climate change and machine learning, encompassing researchers, engineers, entrepreneurs, investors, policymakers, companies, and NGOs.
9Zero Climate Co-working Space: Launched during San Francisco Climate Week 2024, 9Zero is the hub for all things climate. Starting with coworking and events, we're uniting the entire ecosystem. Startups, investors, corporations, service providers, policymakers, academics: if you're working toward a healthier, more resilient world, you belong at 9Zero. Expanding to Seattle and LA this year. Sign up at www.9Ze
Your Hosts: Mansi Shah - Joshua Marker
ClimateStack website - https://climatestack.podcastpage.io/
Host Steven Overly is in Canada this week for The US-Canada Summit, hosted by BMO Financial Group and Eurasia Group — and it got him thinking about another Canadian who's been on the podcast before: computer scientist Yoshua Bengio. Bengio has been dubbed one of the "godfathers of AI," although he's not exactly thrilled about the title. Still, he has devoted most of his professional life to making AI smarter. Now, though, he wants to prevent AI from destroying humanity. On POLITICO Tech, Bengio tells host Steven Overly about his professional pivot and what policy changes he's pushing for around the world.
The launch of ChatGPT broke records in consecutive months between December 2022 and February 2023. Over 1 billion visits a month to ChatGPT, over 100,000 users and $45 million in revenue for Jasper A.I., and the race to adopt A.I. at scale has begun. Does the global adoption of artificial intelligence have you concerned or apprehensive about what's to come? On one hand, it's easy to get caught up in the possibilities of co-existing with A.I. and living an enhanced, upgraded human experience. We already have tech and A.I. integrated into so many of our daily habits and routines: Apple Watches, Oura rings, social media algorithms, chatbots, and on and on. Yoshua Bengio has dedicated more than 30 years of his computer science career to deep learning. He's an award-winning computer scientist known for his breakthroughs in artificial neural networks. Why, after three decades of contributing to the advancement of A.I. systems, is Yoshua now calling to slow down the development of powerful A.I. systems? This conversation is about being open-minded and aware of the dangers of A.I. that we all need to consider, from the perspective of one of the world's leading experts in artificial intelligence. Conscious computers, A.I. trolls, the evolution of machines, and what it means to be a neural network are just a few of the things you'll find interesting in this conversation. [Original air date: 4-13-23] Follow Yoshua Bengio: Website: https://yoshuabengio.org/ SPONSORS: Explore the Range Rover Sport at https://landroverusa.com Use this link and Hartford Gold will give you up to $15,000 dollars of FREE silver on your first qualifying order: order.offers.americanhartfordgold.com/content-affiliate/?&leadsource=affiliate&utm_sfcampaign=701Rb000009EnmrIAC For comprehensive financial news and analysis, visit the incredible brand that so many great investors use, https://yahoofinance.com. Visit https://BetterHelp.com/ImpactTheory today to get 10% off your first month. Go to https://shopify.com/impact now to grow your business–no matter what stage you're in. Get $1,000 off Vanta when you go to https://vanta.com/THEORY Get 5 free AG1 Travel Packs and a FREE 1 year supply of Vitamin D with your first purchase at https://drinkag1.com/impact. Secure your digital life with proactive protection for your assets, identity, family, and tech – Go to https://aura.com/IMPACT to start your free two-week trial. Take control of your gut health by going to https://tryviome.com/impact and use code IMPACT to get 20% off your first 3 months and free shipping. ***Are You Ready for EXTRA Impact?*** If you're ready to find true fulfillment, strengthen your focus, and ignite your true potential, the Impact Theory subscription was created just for you. *New episodes delivered ad-free, EXCLUSIVE access to hundreds of archived Impact Theory episodes, Tom AMAs, and so much more!* This is not for the faint of heart. This is for those who dare to learn obsessively, every day, day after day. *****Subscribe on Apple Podcasts: https://apple.co/3PCvJaz***** Subscribe on all other platforms (Google Podcasts, Spotify, Castro, Downcast, Overcast, Pocket Casts, Podcast Addict, Podcast Republic, Podkicker, and more): https://impacttheorynetwork.supercast.com/ Learn more about your ad choices. Visit megaphone.fm/adchoices
What is intelligence? In the middle of the 20th century, the inner workings of the human brain inspired computer scientists to build the first "thinking machines". But how does human intelligence actually relate to the artificial kind? This is the first episode in a four-part series on the evolution of modern generative AI. What were the scientific and technological developments that took the very first, clunky artificial neurons and ended up with the astonishingly powerful large language models that power apps such as ChatGPT? Host: Alok Jha, The Economist's science and technology editor. Contributors: Ainslie Johnstone, The Economist's data journalist and science correspondent; Dawood Dassu and Steve Garratt of UK Biobank; Daniel Glaser, a neuroscientist at London's Institute of Philosophy; Daniela Rus, director of MIT's Computer Science and Artificial Intelligence Laboratory; Yoshua Bengio of the University of Montréal, who is known as one of the "godfathers" of modern AI. On Thursday April 4th, we're hosting a live event where we'll answer as many of your questions on AI as possible, following this Babbage series. If you're a subscriber, you can submit your question and find out more at economist.com/aievent. Get a world of insights for 50% off—subscribe to Economist Podcasts+. If you're already a subscriber to The Economist, you'll have full access to all our shows as part of your subscription. For more information about how to access Economist Podcasts+, please visit our FAQs page or watch our video explaining how to link your account. Hosted on Acast. See acast.com/privacy for more information.
Air Date 12/20/2023
AI needs to be regulated by governments even though politicians don't understand computers, just as the government regulates the manufacture and operation of aircraft even though your average politician doesn't know their ass from an aileron. That's what expert advisory panels are for. Be part of the show! Leave us a message or text at 202-999-3991 or email Jay@BestOfTheLeft.com
Transcript
WINTER SALE! 20% Off Memberships (including Gifts) in December! Join our Discord community!
Related Episodes: #1547 Shaping the Future of the Internet; #1578 A.I. is a big tech airplane with a 10% chance of crashing, should society fly it?
OUR AFFILIATE LINKS: ExpressVPN.com/BestOfTheLeft GET INTERNET PRIVACY WITH EXPRESS VPN! BestOfTheLeft.com/Libro SUPPORT INDIE BOOKSHOPS, GET YOUR AUDIOBOOK FROM LIBRO! BestOfTheLeft.com/Bookshop BotL BOOKSTORE BestOfTheLeft.com/Store BotL MERCHANDISE!
SHOW NOTES
Ch. 1: How are governments approaching AI regulation - In Focus by The Hindu - Air Date 11-16-23. Dr Matti Pohjonen speaks to us about the concerns revolving around AI governance, and whether there are any fundamental principles that an AI regulatory regime needs to address.
Ch. 2: A First Step Toward AI Regulation with Tom Wheeler - Your Undivided Attention - Air Date 11-2-23. President Biden released a sweeping executive order that addresses many risks of artificial intelligence. Tom Wheeler, former chairman of the Federal Communications Commission, shares his insights on the order and what's next for AI regulation.
Ch. 3: Artificial Intelligence Godfathers Call for Regulation as Rights Groups Warn AI Encodes Oppression - Democracy Now! - Air Date 6-1-23. We host a roundtable discussion with three experts in artificial intelligence on growing concerns over the technology's potential dangers: Yoshua Bengio, Max Tegmark, and Tawana Petty.
Ch. 4: The EU agrees on AI regulations - What will it mean for people and businesses in the EU? - DW News - Air Date 12-9-23. European Union member states and lawmakers reached a preliminary agreement on Friday on what they touted as the world's first comprehensive AI legislation.
Ch. 5: EU vs. AI - Today, Explained - Air Date 12-18-23. The EU has advanced first-of-its-kind AI regulation. The Verge's Jess Weatherbed tells us whether it will make a difference, and Columbia University's Anu Bradford explains the Brussels effect.
Ch. 6: A First Step Toward AI Regulation with Tom Wheeler Part 2 - Your Undivided Attention - Air Date 11-2-23
Ch. 7: How to Keep AI Under Control | Max Tegmark - TEDTalks - Air Date 11-2-23. Scientist Max Tegmark describes an optimistic vision for how we can keep AI under control and ensure it's working for us, not the other way around.
MEMBERS-ONLY BONUS CLIP(S)
Ch. 8: Anti-Democratic Tech Firm's Secret Push For A.I. Deregulation w/ Lori Wallach - Thom Hartmann Program - Air Date 8-8-23. Giant tech firms are meeting secretly to deregulate artificial intelligence technologies in the most undemocratic ways possible. Can profit really take over and corrupt progress?
Ch. 9: How are governments approaching AI regulation Part 2 - In Focus by The Hindu - Air Date 11-16-23
FINAL COMMENTS
Ch. 10: Final comments on the need to understand the benefits and downsides of new technology
MUSIC (Blue Dot Sessions)
Produced by Jay! Tomlinson
Visit us at BestOfTheLeft.com Listen Anywhere! BestOfTheLeft.com/Listen
Follow at Twitter.com/BestOfTheLeft Like at Facebook.com/BestOfTheLeft Contact me directly at Jay@BestOfTheLeft.com
Yoshua Bengio, known as a godfather of AI, is one of hundreds of researchers and tech leaders calling for a pause in the breakneck development of powerful new AI tools. We talk to the AI pioneer about how the tools evolved and why he's worried about their potential. Further Listening: - Artificial: Episode 1, The Dream - Artificial: Episode 2, Selling Out - OpenAI's Weekend of Absolute Chaos Further Reading: - How Worried Should We Be About AI's Threat to Humanity? Even Tech Leaders Can't Agree - ‘Take Science Fiction Seriously': World Leaders Sound Alarm on AI Learn more about your ad choices. Visit megaphone.fm/adchoices