POPULARITY
It all started with a question from a subscriber: "What is your take on Meta's AI strategy?" So PPC dug in. He wanted to go further. Not just the LLaMA models, but the overall logic behind their open-source posture. He worked back through article after article, blog after blog... dissecting the announcements, the interviews with Mark Zuckerberg, and with Yann LeCun, Meta's Mr. AI...
Recent studies reveal worsening hallucination rates in AI chatbots. What are the consequences? How do we deal with this phenomenon? Generative artificial intelligence is everywhere… but it remains deeply unreliable. The phenomenon is increasingly worrying: AI hallucinations, those moments when models invent facts, quotes, even entire events. Worse still, these errors seem to increase as models become more powerful. Why do these hallucinations occur? What are the structural limits of today's AIs, such as those behind ChatGPT or Gemini? And above all, what should we do about a technology that is spreading everywhere without our being able to fully trust it? Media, justice, medicine: no field is safe from the consequences of an erroneous AI-generated answer. While the industry's giants promise solutions (model alignment, reinforcement learning, warnings…), some experts, like Yann LeCun, doubt the problem can ever be completely solved.
Yann Lecun is one of the fathers of the field of artificial intelligence. He invented the convolutional networks that are so widely used for vision, and he is currently one of the top brass at Meta. In today's tertulia we discuss a provocative recent talk of his in which he spares no one: Yann suggests that reinforcement learning and language models should be abandoned. Participating in the tertulia: Paco Zamora, Josu Gorostegui, Imanol Solano and Guillermo Barbadillo. Remember that you can send us questions, comments and suggestions at: https://twitter.com/TERTUL_ia More info at: https://ironbar.github.io/tertulia_inteligencia_artificial/
From early inspirations to groundbreaking AI achievements, Yann's journey chronicles the rise of deep learning, the struggles for recognition, and the revolution that changed computing forever.
00:09 - About Yann LeCun: Yann is the Chief AI Scientist for Facebook AI Research (FAIR). He is also a Silver Professor at New York University on a part-time basis, mainly affiliated with the NYU Center for Data Science and the Courant Institute of Mathematical Sciences.
Yann LeCun, Meta's chief AI scientist and Turing Award winner, joins us to discuss the limits of today's LLMs, why generative AI may be hitting a wall, what's missing for true human-level intelligence, the real meaning of AGI, Meta's open-source strategy with Llama, the future of AI assistants in smart glasses, why diversity in AI models matters, and how open models could shape the next era of innovation. Support the show on Patreon! http://patreon.com/aiinsideshow Subscribe to the YouTube channel! http://www.youtube.com/@aiinsideshow Note: Time codes subject to change depending on dynamic ad insertion by the distributor. CHAPTERS: 0:00:00 - Podcast begins 0:01:40 - Introduction to Yann LeCun, Chief AI Scientist at Meta 0:02:11 - The limitations and hype cycles of LLMs, and historical patterns of overestimating new AI paradigms. 0:05:45 - The future of AI research, and the need for machines that understand the physical world, can reason and plan, and are driven by human-defined objectives 0:14:47 - AGI Timeline, human-level AI within a decade, with deep learning as the foundation for advanced machine intelligence 0:21:35 - Why true AI intelligence requires abstract reasoning and hierarchical planning beyond language capabilities, unlike today's neural networks that rely on computational tricks 0:30:24 - Meta's open-source LLAMA strategy, empowering academia and startups, and commercial benefits 0:36:10 - The future of AI assistants, wearable tech, cultural diversity, and open-source models 0:42:52 - The impact of immigration policies on US technological leadership and STEM education 0:44:26 - Does Yann have a cat? 0:45:19 - Thank you to Yann LeCun for joining the AI Inside podcast Learn more about your ad choices. Visit megaphone.fm/adchoices
Yann LeCun is the chief AI scientist at Meta. He joins Big Technology Podcast to discuss the strengths and limitations of current AI models, weighing in on why they've been unable to invent new things despite possessing almost all the world's written knowledge. LeCun digs deep into AI science, explaining why AI systems must build an abstract knowledge of the way the world operates to truly advance. We also cover whether AI research will hit a wall, whether investors in AI will be disappointed, and the value of open source after DeepSeek. Tune in for a fascinating conversation with one of the world's leading AI pioneers. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here's 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
OpenAI's Isa Fulford and Josh Tobin discuss how the company's newest agent, Deep Research, represents a breakthrough in AI research capabilities by training models end-to-end rather than using hand-coded operational graphs. The product leads explain how high-quality training data and the o3 model's reasoning abilities enable adaptable research strategies, and why OpenAI thinks Deep Research will capture a meaningful percentage of knowledge work. Key product decisions that build transparency and trust include citations and clarification flows. By compressing hours of work into minutes, Deep Research transforms what's possible for many business and consumer use cases. Hosted by: Sonya Huang and Lauren Reeder, Sequoia Capital Mentioned in this episode: Yann Lecun's Cake: An analogy Meta AI's leader shared in his 2016 NIPS keynote
In this episode, we dive into a question that often sparks debate: is artificial intelligence merely an abstraction or a technology truly at work? AI is ubiquitous in our lives, from movie recommendations to bank fraud detection, yet how it actually works remains largely unknown. We explore what AI really is today, how it differs from science-fiction clichés, and what opportunities and challenges it poses for our society. There is no shortage of figures: 60% of companies already use AI-based solutions in their operations, while AI-based medical tools reduced false negatives in breast cancer screening by 20% in 2023 (source). Yet many obstacles remain, notably the risk of algorithmic bias (source), the lack of transparency in automated decisions, and the ethical question of responsibility. By shedding light on these points, we try to demystify AI and reposition it not as a myth or a mere buzzword, but as a powerful, though far from perfect, tool. To go further, find our sources and resources: Annual Survey of the Artificial Intelligence Conference, 2023; Journal of Medical AI Research, 2023; Study on bias in facial recognition, MIT Media Lab; PWC 2024 report: "Bias in enterprise AI"; Yann LeCun, deep learning pioneer; Andrew Ng, machine learning expert. ---------------------------------- DSI et des Hommes is a podcast hosted by Nicolas BARD, exploring how digital technology can be put at the service of humans, and not the other way around. With the mission of making digital accessible to everyone, each episode dives into the experiences of leaders, entrepreneurs, and experts to understand how digital transformation impacts the way we lead, collaborate, and evolve. Subscribe to discover inspiring discussions and practical advice for navigating an ever more digital world. Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.
In the fourth episode of AI in Finance, Sascha and Maik discuss the upcoming release of GPT‑4.5, the EU initiatives to promote AI research, Yann LeCun's critical views on large AI models, and the security risks of AI agents.
Shiyan Koh, Managing Partner of Hustle Fund, and Jeremy Au discussed: 1. Trump's Economic Policies, Tariffs & Crypto Initiatives: They examined the economic impact of Trump's 2025 return, including a 10% tariff hike on Chinese imports and new tariffs on Canada and Mexico. The administration also ended the de minimis exemption for low-cost e-commerce imports, affecting platforms like Temu and Shein, which had relied on duty-free shipping to US consumers. These changes disproportionately impacted lower-income Americans, despite cost-of-living concerns being a key election issue. Canada and Mexico secured a 30-day tariff delay, while Trump also launched Trump Coin and proposed a US Bitcoin reserve, signaling a pro-crypto stance that could draw US crypto firms back onshore. 2. DeepSeek & US-China AI Dynamics: They discussed the launch of DeepSeek-V3, a Chinese AI model matching GPT-4 but with lower training costs, which Jeremy called a “Sputnik moment”. The model's success exposed the limits of US chip export bans, as Chinese engineers developed efficient AI training methods despite NVIDIA H100 restrictions. DeepSeek's open-source availability via Hugging Face complicated regulatory enforcement, leading US Senator Josh Hawley to propose severe penalties, including 20-year prison terms for users and $100M fines for corporations. Meta's AI lead, Yann LeCun, framed the issue as a debate between open-source and closed-source AI rather than a purely U.S.-China rivalry. 3. Grab-GoTo Potential Merger: They revisited ongoing Grab-GoTo (Gojek) merger talks, noting that Grab's stronger financial position made it the likely acquirer. While Singapore's regulators were expected to approve the deal, Indonesian authorities might impose conditions such as fare caps or job guarantees to prevent monopolistic practices. Reduced competition could push ride-hailing fares higher, with some Singaporeans already shifting back to public transport as Grab's peak-hour prices reached $40. SoftBank, a major investor in both companies, had long pushed for consolidation, and with Gojek's founding team no longer involved, negotiations had become more financially driven. Jeremy and Shiyan also discussed Waymo's self-driving taxis and their potential impact in Southeast Asia, Singapore's emphasis on “future-proofing” careers versus the US culture of embracing disruption, and how US trade and AI restrictions are accelerating Chinese firms' shift towards Southeast Asia and the EU. Watch, listen or read the full insight at https://www.bravesea.com/blog/deepseek-and-us-china-ai-race Get transcripts, startup resources & community discussions at www.bravesea.com WhatsApp: https://whatsapp.com/channel/0029VakR55X6BIElUEvkN02e TikTok: https://www.tiktok.com/@jeremyau Instagram: https://www.instagram.com/jeremyauz Twitter: https://twitter.com/jeremyau LinkedIn: https://www.linkedin.com/company/bravesea English: Spotify | YouTube | Apple Podcasts Bahasa Indonesia: Spotify | YouTube | Apple Podcasts Chinese: Spotify | YouTube | Apple Podcasts Vietnamese: Spotify | YouTube | Apple Podcasts
As the world's attention turned to Paris for the AI Action Summit, host Tammy Haddad takes the Washington AI Network global with an in-depth look at the most influential players and consequential conversations happening in Paris. Ina Fried of Axios joins Haddad to offer an in-the-room perspective on the world leaders, company executives, and policymakers' discourse surrounding AI's risks and promises. This episode also features an exclusive interview with Milena Harito from the European Network for Women in Leadership, plus hear from OpenAI's Sam Altman and Meta's Yann LeCun from the summit stage.
The global AI Action Summit opens in Paris. France wants to position itself as a key player in the sector despite several paradoxes. Over several days, major figures such as Sam Altman (OpenAI), Sundar Pichai (Google) and Brad Smith (Microsoft) will discuss the opportunities and challenges of AI. On the French side, Yann LeCun and Arthur Mensch (Mistral AI) will be present, alongside European and international political leaders. Behind the ambition to structure international AI governance and to showcase France, however, lie three major paradoxes. First, while France excels in AI research, it still struggles to turn its advances into commercial successes, for lack of capital and large-scale companies. Second, Europe, despite its economic weight, sees its market held back by strict regulations, raising concerns among the sector's giants. Finally, trust in AI is at a low, with 79% of French people saying they are worried, fueled by an often alarmist discourse. While France is betting on open source and an "ethical and frugal" AI, the summit aims to lay the groundwork for concrete action. It remains to be seen whether this momentum will be enough to strengthen competitiveness and trust in AI. Keywords: artificial intelligence, AI summit, France, open source, AI regulation, Europe, Yann LeCun, Arthur Mensch, Sam Altman, Sundar Pichai, Brad Smith, technological innovation
France's digital minister has announced that 35 sites are ready to host data centres in the country, as the global AI summit opens its doors in Paris. It's an opportunity for President Emmanuel Macron to show that France is a key contender on the world stage when it comes to AI. Speaking to FRANCE 24, Meta's chief AI scientist Yann Le Cun said the success of Chinese AI company DeepSeek was a warning to OpenAI that "they aren't as ahead as they think they are, or at least not for very long".
Marc Andreessen is an entrepreneur, investor, co-creator of Mosaic, co-founder of Netscape, and co-founder of the venture capital firm Andreessen Horowitz. Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep458-sc See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc. Transcript: https://lexfridman.com/marc-andreessen-2-transcript CONTACT LEX: Feedback - give feedback to Lex: https://lexfridman.com/survey AMA - submit questions, videos or call-in: https://lexfridman.com/ama Hiring - join our team: https://lexfridman.com/hiring Other - other ways to get in touch: https://lexfridman.com/contact EPISODE LINKS: Marc's X: https://x.com/pmarca Marc's Substack: https://pmarca.substack.com Marc's YouTube: https://www.youtube.com/@a16z Andreessen Horowitz: https://a16z.com SPONSORS: To support this podcast, check out our sponsors & get discounts: Encord: AI tooling for annotation & data management. Go to https://encord.com/lex GitHub: Developer platform and AI code editor. Go to https://gh.io/copilot Notion: Note-taking and team collaboration. Go to https://notion.com/lex Shopify: Sell stuff online. Go to https://shopify.com/lex LMNT: Zero-sugar electrolyte drink mix. Go to https://drinkLMNT.com/lex OUTLINE: (00:00) - Introduction (12:46) - Best possible future (22:09) - History of Western Civilization (31:28) - Trump in 2025 (39:09) - TDS in tech (51:56) - Preference falsification (1:07:52) - Self-censorship (1:22:55) - Censorship (1:31:34) - Jon Stewart (1:34:20) - Mark Zuckerberg on Joe Rogan (1:43:09) - Government pressure (1:53:57) - Nature of power (2:06:45) - Journalism (2:12:20) - Bill Ackman (2:17:17) - Trump administration (2:24:56) - DOGE (2:38:48) - H1B and immigration (3:16:42) - Little tech (3:29:02) - AI race (3:37:52) - X (3:41:24) - Yann LeCun (3:44:59) - Andrew Huberman (3:46:30) - Success (3:49:26) - God and humanity PODCAST LINKS: - Podcast Website: https://lexfridman.com/podcast - Apple Podcasts: https://apple.co/2lwqZIr - Spotify: https://spoti.fi/2nEwCF8 - RSS: https://lexfridman.com/feed/podcast/ - Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4 - Clips Channel: https://www.youtube.com/lexclips
How important is open source to the future of AI, and are we at human-level intelligence yet? Patrick Moorhead and Daniel Newman are joined by Meta's Yann LeCun, VP & Chief AI Scientist, for a conversation on the latest AI developments and insights from WEF25 in this segment of The View From Davos. Get their take on: - The importance of open source for accelerating AI development - Going beyond LLMs: LeCun imagines future AI systems that will understand the physical world, reason, plan, and have persistent memory - The role of AI in addressing global challenges - Insights into future AI projects at Meta - Yann LeCun's perspective on ethical AI and its governance
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
AI Weekly Rundown December 22nd to December 29th 2024:
Please join my mailing list here
We're bringing you a special episode of On With Kara Swisher! Kara sits down for a live interview with Meta's Yann LeCun, an “early AI prophet” and the brains behind the largest open-source large language model in the world. The two discuss the potential dangers that come with open-source models, the massive amounts of money pouring into AI research, and the pros and cons of AI regulation. They also dive into LeCun's surprisingly spicy social media feeds — unlike a lot of tech employees who toe the HR line, LeCun isn't afraid to say what he thinks of Elon Musk or President-elect Donald Trump. This interview was recorded live at the Johns Hopkins University Bloomberg Center in Washington, DC as part of their Discovery Series. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Kara sits down for a live interview with Yann LeCun, an “early AI prophet” and the brains behind the largest open-source large language model in the world. The two discuss the potential dangers that come with open-source models, the massive amounts of money pouring into AI research, and the pros and cons of AI regulation. They also dive into LeCun's surprisingly spicy social media feeds — unlike a lot of tech employees who toe the HR line, Yann isn't afraid to say what he thinks of Elon Musk or President-elect Donald Trump. This interview was recorded live at the Johns Hopkins University Bloomberg Center in Washington, DC as part of their Discovery Series. Questions? Comments? Email us at on@voxmedia.com or find us on Instagram and TikTok @onwithkaraswisher Learn more about your ad choices. Visit podcastchoices.com/adchoices
This is the KI-Update from December 3, 2024, with these topics among others: Who is David Mayer? ChatGPT can't answer that; the new Adobe model "MultiFoley" adds sound to videos; why Europe has no trillion-dollar company; and German companies are poorly prepared for generative AI. Links to all of today's topics can be found here: https://heise.de/-10186126 https://www.heise.de/thema/KI-Update https://pro.heise.de/ki/ https://www.heise.de/newsletter/anmeldung.html?id=ki-update https://www.heise.de/thema/Kuenstliche-Intelligenz https://the-decoder.de/ https://www.heiseplus.de/podcast https://www.ct.de/ki https://www.ki-adventskalender.de/
Google Preps AI That Takes Over Computers
Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said
Jeff: Why Are Liberals Infuriated with the Media?
Apples Have Never Tasted So Delicious. Here's Why
Instagram saves the best video quality for the most popular content
Video game preservationists have lost a legal fight to study games remotely
Internet Archive: Vanishing Culture: A Report on Our Fragile Cultural Record
Alphabet posts big revenue and profit growth
More than a quarter of new code at Google is generated by AI
Open-source AI must reveal its training data, per new OSI definition
McDonald's Finds an Unlikely Savior to Finally Fix Its McFlurry Machines
RIP Foursquare
Craig gives CR $5 million for cybersecurity
WordPress co-founder Matt Mullenweg says a fork would be 'fantastic'
LeCun blasts Musk as the biggest threat to democracy today
Workers Say They Were Tricked and Threatened as Part of Elon Musk's Get-Out-the-Vote Effort
Trump's Truth Social valued at more than Musk's X after extraordinary rally
Masnick on Elon Musk Events
TikTok founder becomes China's richest man
The Age of Cage
Russian court fines Google $20,000,000,000,000,000,000,000,000,000,000,000
McKinsey's 18 next big arenas of competition
Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Download or subscribe to this show at https://twit.tv/shows/this-week-in-google. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit
Sponsors: veeam.com uscloud.com INFO.ACILEARNING.COM/TWIT - code TWIT100 cachefly.com/twit
Yann LeCun, a professor at New York University and senior researcher at Meta, is one of the godfathers of artificial intelligence but unlike other leaders in the field he doesn't think today's AI tech presents an existential peril to humanity. WSJ tech columnist Christopher Mims joins host Zoe Thomas to discuss LeCun's position and why he says today's AI is dumber than a cat. Sign up for the WSJ's free Technology newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices
Duration: 00:05:55 - La tech la première - Artificial intelligence honored twice at the Nobel Prizes this week: the physics Nobel on Tuesday and the chemistry Nobel on Wednesday, for research directly linked to neural networks and the famous "deep learning" on which all of today's AIs rest. Yann Le Cun reacts exclusively.
This episode is sponsored by Bloomreach. Bloomreach is a cloud-based e-commerce experience platform and B2B service specializing in marketing automation, product discovery, and content management systems. Check out Bloomreach: https://www.bloomreach.com Explore Loomi AI: https://www.bloomreach.com/en/products/loomi Other Bloomreach products: https://www.bloomreach.com/en/products In this episode of the Eye on AI podcast, we sit down with Pedro Domingos, professor of computer science and author of The Master Algorithm and 2040, to dive deep into the future of artificial intelligence, machine learning, and AI governance. Pedro shares his expertise in AI, offering a unique perspective on the real dangers and potential of AI, far from the apocalyptic fears of superintelligence taking over. We explore his satirical novel, 2040, where an AI candidate for president—Prezibot—raises questions about control, democracy, and the flaws in both AI systems and human decision-makers. Throughout the episode, Pedro sheds light on Silicon Valley's utopian dreams clashing with its dystopian realities, highlighting the contrast between tech innovation and societal challenges like homelessness. He discusses how AI has already integrated into our daily lives, from recommendation systems to decision-making tools, and what this means for the future. We also unpack the ongoing debate around AI safety, the limits of current AI models like ChatGPT, and why he believes AI is more of a tool to amplify human intelligence rather than an existential threat. Pedro offers his insights into the future of AI development, focusing on how symbolic AI and neural networks could pave the way for more reliable and intelligent systems. Don't forget to like, subscribe, and hit the notification bell to stay updated on the latest insights into AI, machine learning, and tech culture. Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI (00:00) Preview and Introduction (01:06) Pedro's Background and Contributions to AI (03:36) The Satirical Take on AI in '2040' (05:42) AI Safety Debate: Geoffrey Hinton vs. Yann LeCun (08:06) Debunking AI's Real Risks (12:45) Satirical Elements in '2040': HappyNet and Prezibot (17:57) AI as a Decision-Making Tool: Potential and Risks (22:55) The Limits of AI as an Arbiter of Truth (27:35) Crowdsourced AI: PreziBot 2.0 and Real-Time Decision Making (29:54) AI Governance and the Kill Switch Debate (37:42) Integrating AI into Society: Challenges and Optimism (47:11) Pedro's Current Research and Future of AI (55:17) Scaling AI and the Future of Reinforcement Learning
A couple of weeks ago, I was at this splashy AI conference in Montreal called All In. It was – how should I say this – a bit over the top. There were smoke machines, thumping dance music, food trucks. It was a far cry from the quiet research labs where AI was developed. While I remain skeptical of the promise of artificial intelligence, this conference made it clear that the industry is, well, all in. The stage was filled with startup founders promising that AI was going to revolutionize the way we work, and government officials saying AI was going to supercharge the economy. And then there was Yoshua Bengio. Bengio is one of AI's pioneering figures. In 2018, he and two colleagues won the Turing Award – the closest thing computer science has to a Nobel Prize – for their work on deep learning. In 2022, he was the most cited computer scientist in the world. It wouldn't be hyperbolic to suggest that AI as we know it today might not exist without Yoshua Bengio. But in the last couple of years, Bengio has had an epiphany of sorts. And he now believes that, left unchecked, AI has the potential to wipe out humanity. So these days, he's dedicated himself to AI safety. He's a professor at the University of Montreal and the founder of MILA - the Quebec Artificial Intelligence Institute. And he was at this big AI conference too, amidst all these Silicon Valley types, pleading with the industry to slow down before it's too late.
Mentioned:
“Personal and Psychological Dimensions of AI Researchers Confronting AI Catastrophic Risks” by Yoshua Bengio
“Deep Learning” by Yann LeCun, Yoshua Bengio, Geoffrey Hinton
“Computing Machinery and Intelligence” by Alan Turing
“International Scientific Report on the Safety of Advanced AI”
“Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?” by R. Ren et al.
“SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act”
Further reading:
“‘Deep Learning' Guru Reveals the Future of AI” by Cade Metz
“Montréal Declaration for a Responsible Development of Artificial Intelligence”
“This A.I. Subculture's Motto: Go, Go, Go” by Kevin Roose
“Reasoning through arguments against taking AI safety seriously” by Yoshua Bengio
Artificial intelligence is supposed to save us time. And it's true that it saves quite a lot for those who, like me, use it fairly regularly. So one of the questions business leaders will soon face is: what to do with the time employees save? Politicians will say it's the chance to finally realize the famous dream of the four-day week at unchanged pay. And some bosses may follow that reasoning, telling themselves, well yes, why not hand this productivity gain back to employees to make their working hours more flexible and allow a better balance between private and professional life. It's the kind of argument that strongly appeals to younger generations who, they say, don't want to lose their lives earning a living the way their elders (that is, their parents) did. But the question to ask is: is reducing working time the only answer to this time gained by employees? No, not necessarily: we also know that employees look for meaning in their jobs, a raison d'être as we say today. So the real question is: shouldn't we look for a solution that benefits employees, but also the company and, why not, society as a whole? One could imagine the time gained thanks to artificial intelligence being devoted to tasks or projects with higher added value… Keywords: case, service, human resources, department, procedures, automated, time savings, candidate, position, manager, Les Echos, train, young people, mentoring, imagine, realize, societal and environmental missions, heart, community, question, theoretical, world, promises, magic, study, overestimated, speed, adoption, technology, ahead, exponential speed, organization, human capital, malleable, results, technological, American, colossal investments, financial, Yann Le Cun, specialist, world level, success, spending, current, electricity, companies, cash machine, OpenAI, parent company, ChatGPT, billions of dollars, year, perseverance, marble. --- Amid Faljaoui's economics column, every day at 8:30 am and 5:30 pm. Thank you for listening. To listen to Classic 21 at any time: www.rtbf.be/classic21 Find all episodes of La chronique économique on our platform Auvio.be: https://auvio.rtbf.be/emission/802 And if you enjoyed this podcast, feel free to give us stars or comments; it helps us reach a wider audience.
Let's riff on truthfulness, intelligence, and humor. How can we verify AI's answers? How intelligent are they really? Is the Turing Test meaningful, and are there other metrics? Is the future of this field large language models?
------- Podbee Presents -------
This podcast contains advertising for Hiwell. Download Hiwell via the link to book free introductory sessions with Hiwell's clinical psychologists and get a 10% discount on your therapy sessions with the code pod10.
This podcast contains advertising for ON Dijital Bankacılık. With ON Dijital Bankacılık, advantageous interest rates and many other perks always await you! Click now, enter the code "ONBEE" in the invite-code field to join ON and discover the perk-filled world of easy banking!
Topics: (00:04) Laplace's Demon (04:45) The expectation of truthfulness (06:43) Lies and harassment (08:20) Hallucination (09:40) Court records (10:40) Dunning-Kruger (11:55) Teyit AI (12:37) The Turing Test (15:33) ARC (17:30) Emergence (20:32) Hinton (21:47) Humor (25:10) Yann Lecun (27:00) Patreon thanks
Sources: Article: GPT-4 has passed the Turing test, researchers claim; Paper: Do large language models solve ARC visual analogies like people do?; Article: What Really Made Geoffrey Hinton Into an AI Doomer; Article: LLMs develop their own understanding of reality; Book: Machines like Us: Toward AI with Common Sense; Video: Geoffrey Hinton Warns of the "Existential Threat" of AI; Article: Large Language Models' Emergent Abilities Are a Mirage; Video: LLM? More Like "Limited" Language Model with Emily M. Bender; Video: Yann Lecun: Meta AI, Open Source, Limits of LLMs. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Share this episode: https://www.samharris.org/podcasts/making-sense-episodes/379-regulating-artificial-intelligence Sam Harris speaks with Yoshua Bengio and Scott Wiener about AI risk and the new bill introduced in California intended to mitigate it. They discuss the controversy over regulating AI and the assumptions that lead people to discount the danger of an AI arms race. Yoshua Bengio is full professor at Université de Montréal and the Founder and Scientific Director of Mila - Quebec AI Institute. Considered one of the world’s leaders in artificial intelligence and deep learning, he is the recipient of the 2018 A.M. Turing Award with Geoffrey Hinton and Yann LeCun, known as the Nobel Prize of computing. He is a Canada CIFAR AI Chair, a member of the UN’s Scientific Advisory Board for Independent Advice on Breakthroughs in Science and Technology, and Chair of the International Scientific Report on the Safety of Advanced AI. Website: https://yoshuabengio.org/ Scott Wiener has represented San Francisco in the California Senate since 2016. He recently introduced SB 1047, a bill aiming to reduce the risks of frontier models of AI. He has also authored landmark laws to, among other things, streamline the permitting of new homes, require insurance plans to cover mental health care, guarantee net neutrality, eliminate mandatory minimums in sentencing, require billion-dollar corporations to disclose their climate emissions, and declare California a sanctuary state for LGBTQ youth. He has lived in San Francisco's historically LGBTQ Castro neighborhood since 1997. Twitter: @Scott_Wiener Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.
(Rebroadcast) Exclusive interview with Yoshua Bengio, founder of the Mila institute in Montreal dedicated to artificial intelligence (in partnership with Mon Carnet / Bruno Guglielminetti). Co-inventor of deep learning and considered one of the world's most influential figures in artificial intelligence, the Quebec academic Yoshua Bengio advocates a cautious approach to AI. In his view, the artificial intelligences developed in the future will pose a genuine risk to the human species, potentially leading to its destruction. Unlike his French colleague Yann Le Cun, Bengio thus stands on a line of worry and caution. He calls for applying the precautionary principle. He explains how his current research aims to give rise to a kind of "AI gendarme" capable of policing other AIs to ensure they respect ethical and democratic rules. ----------- ♥️ Support Monde Numérique: https://donorbox.org/monde-numerique
(Rebroadcast) The "pope" of artificial intelligence, Yann Le Cun, head of AI research at Meta, explains his vision of the artificial intelligence of the future. Yann Le Cun promises it: thanks to developments in artificial intelligence, we will all one day have little virtual assistants able to help us in many areas. However, researchers still have a long road ahead before reaching that result. Meta's chief researcher believes that today's text-only models (LLMs) are too limited and will never manage to grasp the complexity of the world. Meta has chosen a different path with the JEPA algorithms, which favor a global approach, beyond text or images, guided by the objective to be reached. Le Cun insists on the need for new architectures for machines to understand the physical world. In his view, it is unlikely that human-level AI will emerge in the short term. On the other hand, he is certain that we will one day maintain emotional relationships with artificial intelligences. ----------- ♥️ Support Monde Numérique: https://donorbox.org/monde-numerique
Could working become optional? That's what Elon Musk thinks, having made yet another blockbuster statement: "with the arrival of AI, we will no longer have jobs unless you want to make one a hobby." In short, is work, our work, your work, about to become a hobby? Put that way, the question seems completely preposterous, but not so fast. A Chinese sage once said that if you don't want to work, well, choose a job you enjoy. Still, saying that work will become a hobby is quite a stretch. Except for Elon Musk, who proclaims loud and clear that with artificial intelligence, employment is finished. That's what he told the American business magazine Fortune this week. In reality, this statement from Elon Musk isn't really new. He said exactly the same thing in Paris last week at the VivaTech show, the world's largest gathering of technology-sector leaders. Speaking at the show by webcam, Elon Musk said several things. First, that with artificial intelligence and humanoid robots, and I quote, "it is likely that none of us will have a nice job." Then he described a future in which, I quote again, "Jobs will be optional. If you want to do a job that is a bit like a hobby, you can do so; otherwise, AI and robots will provide all the goods and services you want." It's because of (or thanks to) this blockbuster statement that you will soon see plenty of articles asking whether we will still have a job tomorrow. And above all, what to do with ourselves if we no longer have work? Keywords: navel, mojito, beach, money, question, existential, bosses, Silicon Valley, benefit, income, universal, jobs, revolt, machine, panic, researchers, artificial intelligence, intellectual qualities, subject, specialist, experts, be wrong, timing, threat, real, politicians, new, representative, world, level, Belgium, person, question, worst, govern, foresee, sheets, French, Yann Lecun, boss, Meta, name, Facebook, specialist, paradox, Moravec, robots, humanoids, complex, difficulties, execute, simple, window, OpenAI, house, ChatGPT, list, trades, manual, hairdresser, roofer, mechanic, white-collar workers, executives, doomed, door openers. --- Amid Faljaoui's economics column, every day at 8:30 am and 5:30 pm. Thank you for listening. To listen to Classic 21 at any time: www.rtbf.be/classic21 Find all episodes of La chronique économique on our platform Auvio.be: https://auvio.rtbf.be/emission/802 And if you enjoyed this podcast, feel free to give us stars or comments; it helps us reach a wider audience.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Llama Llama-3-405B?, published by Zvi on July 25, 2024 on LessWrong.
It's here. The horse has left the barn. Llama-3.1-405B, and also Llama-3.1-70B and Llama-3.1-8B, have been released, and are now open weights. Early indications are that these are very good models. They were likely the best open weight models of their respective sizes at time of release.
Zuckerberg claims that open weights models are now competitive with closed models. Yann LeCun says 'performance is on par with the best closed models.' This is closer to true than in the past, and as corporate hype I will essentially allow it, but it looks like this is not yet fully true. Llama-3.1-405B is not as good as GPT-4o or Claude Sonnet. Certainly Llama-3.1-70B is not as good as the similarly sized Claude Sonnet. If you are going to straight up use an API or chat interface, there seems to be little reason to use Llama.
That is a preliminary result. It is still early, and there has been relatively little feedback. But what feedback I have seen is consistent on this.
Prediction markets are modestly more optimistic. This market still has it 29% to be the #1 model on Arena, which seems unlikely given Meta's own results. Another market has it 74% to beat GPT-4-Turbo-2024-04-09, which currently is in 5th position. That is a big chance for it to land in a narrow window between 1257 and 1287. This market affirms that directly on tiny volume.
Open models like Llama-3.1-405B are of course still useful even if a chatbot user would have better options. There are cost advantages, privacy advantages and freedom of action advantages to not going through OpenAI or Anthropic or Google. In particular, if you want to distill or fine-tune a new model, and especially if you want to fully own the results, Llama-3-405B is here to help you, and Llama-3-70B and 8B are here as potential jumping off points. I expect this to be the main practical effect this time around.
If you want to do other things that you can't do with the closed options? Well, technically you can't do most of them under Meta's conditions either, but there is no reason to expect that will stop people, especially those overseas including in China. For some of these uses that's a good thing. Others, not as good.
Zuckerberg also used the moment to offer a standard issue open source manifesto, in which he abandons any sense of balance and goes all-in, which he affirmed in a softball interview with Rowan Cheung.
On the safety front, while I do not think they did their safety testing in a way that would have caught issues if there had been issues, my assumption is there was nothing to catch. The capabilities are not that dangerous at this time. Thus I do not predict anything especially bad will happen here. I expect the direct impact of Llama-3.1-405B to be positive, with the downsides remaining mundane and relatively minor. The only exception would be the extent to which this enables the development of future models. I worry that this differentially accelerates and enables our rivals and enemies and hurts our national security, and indeed that this will be its largest impact.
And I worry more that this kind of action and rhetoric will lead us down the path where if things get dangerous in the future, it will become increasingly hard not to get ourselves into deep trouble, both in terms of models being irrevocably opened up when they shouldn't be and increasing pressure on everyone else to proceed even when things are not safe, up to and including loss of control and other existential risks. If Zuckerberg had affirmed a reasonable policy going forward but thought the line could be drawn farther down the line, I would have said this was all net good. Instead, I am dismayed. I do get into the arguments about open weights at the end of this post, because it felt obligato...
If you see this in time, join our emergency LLM paper club on the Llama 3 paper! For everyone else, join our special AI in Action club on the Latent Space Discord for a special feature with the Cursor cofounders on Composer, their newest coding agent!
Today, Meta is officially releasing the largest and most capable open model to date, Llama3-405B, a dense transformer trained on 15T tokens that beats GPT-4 on all major benchmarks. The 8B and 70B models from the April Llama 3 release have also received serious spec bumps, warranting the new label of Llama 3.1.
If you are curious about the infra / hardware side, go check out our episode with Soumith Chintala, one of the AI infra leads at Meta. Today we have Thomas Scialom, who led Llama2 and now Llama3 post-training, so we spent most of our time on pre-training (synthetic data, data pipelines, scaling laws, etc) and post-training (RLHF vs instruction tuning, evals, tool calling).
Synthetic data is all you need
Llama3 was trained on 15T tokens, 7x more than Llama2 and with 4 times as much code and 30 different languages represented. But as Thomas beautifully put it:
“My intuition is that the web is full of s**t in terms of text, and training on those tokens is a waste of compute.”
“Llama 3 post-training doesn't have any human written answers there basically… It's just leveraging pure synthetic data from Llama 2.”
While it is well speculated that the 8B and 70B were "offline distillations" of the 405B, there are a good deal more synthetic data elements to Llama 3.1 than expected. The paper explicitly calls out:
* SFT for Code: 3 approaches for synthetic data for the 405B bootstrapping itself with code execution feedback, programming language translation, and docs backtranslation.
* SFT for Math: The Llama 3 paper credits the Let's Verify Step By Step authors, who we interviewed at ICLR.
* SFT for Multilinguality: "To collect higher quality human annotations in non-English languages, we train a multilingual expert by branching off the pre-training run and continuing to pre-train on a data mix that consists of 90% multilingual tokens."
* SFT for Long Context: "It is largely impractical to get humans to annotate such examples due to the tedious and time-consuming nature of reading lengthy contexts, so we predominantly rely on synthetic data to fill this gap. We use earlier versions of Llama 3 to generate synthetic data based on the key long-context use-cases: (possibly multi-turn) question-answering, summarization for long documents, and reasoning over code repositories, and describe them in greater detail below."
* SFT for Tool Use: trained for Brave Search, Wolfram Alpha, and a Python Interpreter (a special new ipython role) for single, nested, parallel, and multiturn function calling.
* RLHF: DPO preference data was used extensively on Llama 2 generations. This is something we partially covered in RLHF 201: humans are often better at judging between two options (i.e. which of two poems they prefer) than creating one from scratch. Similarly, models might not be great at creating text but they can be good at classifying its quality.
Last but not least, Llama 3.1 received a license update explicitly allowing its use for synthetic data generation.
Llama2 was also used as a classifier for all pre-training data that went into the model. It labelled data by quality, so that bad tokens were removed, and by type (i.e. science, law, politics) to achieve a balanced data mix.
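To make that classifier step concrete, here is a minimal sketch of quality-filtering and topic-balancing a corpus with an LLM judge. This illustrates the technique described above, not Meta's actual pipeline: the prompt wording, the `llm_complete` helper, and the thresholds are all hypothetical stand-ins.

```python
# Minimal sketch: LLM-as-judge for pre-training data curation.
# `llm_complete` is a hypothetical stand-in for whatever inference
# endpoint you have (e.g. a hosted Llama); the rest is plain Python.
import json
import random
from collections import defaultdict

PROMPT = """Rate the following document for training quality (0-5) and give \
one topic label (e.g. science, law, politics). Reply as JSON: \
{{"quality": <int>, "topic": "<label>"}}

Document:
{doc}
"""

def llm_complete(prompt: str) -> str:
    # Stand-in for a real model call; swap in your own inference client.
    return '{"quality": 3, "topic": "science"}'

def label_document(doc: str) -> dict:
    # Ask the judge model for a quality score and a topic tag.
    return json.loads(llm_complete(PROMPT.format(doc=doc[:4000])))

def filter_and_balance(docs, min_quality=3, per_topic_cap=10_000):
    # Drop low-quality docs, then cap each topic to flatten the data mix.
    buckets = defaultdict(list)
    for doc in docs:
        label = label_document(doc)
        if label["quality"] >= min_quality:
            buckets[label["topic"]].append(doc)
    mix = []
    for topic_docs in buckets.values():
        random.shuffle(topic_docs)
        mix.extend(topic_docs[:per_topic_cap])
    return mix
```

A real judge model returns different labels per document, so the per-topic cap actually bites; the fixed stand-in above just keeps the sketch runnable end to end.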
Tokenizer size matters
The token vocab of a model is the collection of all tokens that the model uses. Llama2 had a 32,000-token vocab, GPT-4 has 100,000, and 4o went up to 200,000. Llama3 went up 4x to 128,000 tokens. You can find the GPT-4 vocab list on Github. This is something that people gloss over, but there are many reasons why a large vocab matters:
* More tokens allow it to represent more concepts, and then be better at understanding the nuances.
* The larger the tokenizer, the fewer tokens you need for the same amount of text, extending the perceived context size. In Llama3's case, that's ~30% more text due to the tokenizer upgrade (see the token-counting sketch at the end of this entry).
* With the same amount of compute you can train more knowledge into the model as you need fewer steps.
The smaller the model, the larger the impact that the tokenizer size will have on it. You can listen at 55:24 for a deeper explanation.
Dense models = 1-expert MoEs
Many people on X asked "why not MoE?", and Thomas' answer was pretty clever: dense models are just MoEs with 1 expert :)
[00:28:06]: I heard that question a lot, different aspects there. Why not MoE in the future? The other thing is, I think a dense model is just one specific variation of the model for a hyperparameter for an MoE with basically one expert. So it's just a hyperparameter we haven't optimized a lot yet, but we have some stuff ongoing and that's a hyperparameter we'll explore in the future.
Basically… wait and see!
Llama4
Meta already started training Llama4 in June, and it sounds like one of the big focuses will be around agents. Thomas was one of the authors behind GAIA (listen to our interview with Thomas in our ICLR recap) and has been working on agent tooling for a while with things like Toolformer. Current models have "a gap of intelligence" when it comes to agentic workflows, as they are unable to plan without the user relying on prompting techniques and loops like ReAct, Chain of Thought, or frameworks like Autogen and Crew. That may be fixed soon?
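Here is the token-counting sketch promised in the tokenizer section above: a quick way to see the vocab-size effect on your own text. The Llama tokenizers are gated on Hugging Face, so this sketch assumes tiktoken's public GPT-4 (~100k) and GPT-4o (~200k) encodings as stand-ins for a smaller-vs-larger vocab comparison.

```python
# Bigger vocab => fewer tokens for the same text => more effective context.
# Requires: pip install tiktoken
import tiktoken

text = ("Large vocabularies compress the same text into fewer tokens, "
        "which stretches the effective context window. ") * 200

for name in ("cl100k_base", "o200k_base"):  # GPT-4 vs GPT-4o encodings
    enc = tiktoken.get_encoding(name)
    n_tokens = len(enc.encode(text))
    print(f"{name}: {n_tokens} tokens ({len(text) / n_tokens:.2f} chars/token)")
```

The exact savings depend heavily on the language and content mix; the ~30% figure quoted above is specific to Llama3's upgrade.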
We look back at the June news from Apple, Google, OpenAI and Anthropic.
IN PARTNERSHIP WITH FREE PRO, THE BEST OF FREE FOR BUSINESSES
AI
June was marked by major announcements, mostly in AI. Apple presented new features branded "Apple Intelligence" for iOS 18, whose rollout in Europe will however be delayed "because of the Digital Markets Act (DMA)", according to the company. OpenAI, for its part, unveiled GPT-4o, while Google also presented various promising new features, not to mention Anthropic, which impressed with version 3.5 of its chatbot Claude.
AI and copyright
The RIAA, the US music industry association, has begun challenging generative music AIs, recalling the legal battles against Napster.
Elon Musk and Twitter
Elon Musk continued to make headlines with his posts on X (formerly Twitter) and changes such as removing likes for some users; his standoff with Yann Le Cun and the tensions with Linda Yaccarino, the CEO of Twitter.
Elections and electronic voting
In this election period, we could not skip the perennial question of electronic voting, a reminder of the security and confidentiality challenges as well as the experiments conducted here and there.
This episode was recorded in the "metaverse" with Meta's Horizon Workrooms. Using 3D avatars offered an immersive experience, but also revealed some challenges, notably the comfort of virtual reality headsets.
At the mic:
Hat Tip to this week's creators: @leopoldasch, @JoeSlater87, @GaryMarcus, @ulonnaya, @alex, @ttunguz, @mmasnick, @dannyrimer, @imdavidpierce, @asafitch, @ylecun, @nxthompson, @kaifulee, @DaphneKoller, @AndrewYNg, @aidangomez, @Kyle_L_Wiggers, @waynema, @QianerLiu, @nicnewman, @nmasc_, @steph_palazzolo, @nofilmschool

Contents

* Editorial
* Essays of the Week
  * Situational Awareness: The Decade Ahead
  * ChatGPT is b******t
  * AGI by 2027?
  * Ilya Sutskever, OpenAI's former chief scientist, launches new AI company
  * The Series A Crunch Is No Joke
  * The Series A Crunch or the Seedpocalypse of 2024
  * The Surgeon General Is Wrong. Social Media Doesn't Need Warning Labels
* Video of the Week
  * Danny Rimer on 20VC (Must See)
* AI of the Week
  * Anthropic has a fast new AI model — and a clever new way to interact with chatbots
  * Nvidia's Ascent to Most Valuable Company Has Echoes of Dot-Com Boom
  * The Expanding Universe of Generative Models
  * DeepMind's new AI generates soundtracks and dialogue for videos
* News Of the Week
  * Apple Suspends Work on Next Vision Pro, Focused on Releasing Cheaper Model in Late 2025
  * Is the news industry ready for another pivot to video?
  * Cerebras, an Nvidia Challenger, Files for IPO Confidentially
* Startup of the Week
  * Final Cut Camera and iPad Multicam are Truly Revolutionary
* X of the Week
  * Leopold Aschenbrenner

Editorial

I had not heard of Leopold Aschenbrenner until yesterday. I was meeting with Faraj Aalaei (a SignalRank board member) and my colleague Rob Hodgkinson when they began to talk about "Situational Awareness," his essay on the future of AGI and its likely speed of emergence.

So I had to read it, and it is this week's essay of the week. He starts his 165-page epic with:

Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them.

So, Leopold is not humble. He finds himself "among" the few people with situational awareness.

As a person prone to bigging up myself, I am not one to prematurely judge somebody's view of self. So, I read all 165 pages.

He makes one point: the growth of AI capability is accelerating. More is being done at a lower cost, and the trend will continue until super-intelligence arrives by 2027. At that point, billions of skilled bots will solve problems at a rate we cannot imagine. And they will work together, with little human input, to do so.

His case is developed using linear progression from current developments. According to Leopold, all you have to believe in is straight lines.

He also has a secondary narrative related to safety, particularly the safety of models and their weights (how they achieve their results). By safety, he does not mean the models will do bad things. He means that third parties, namely China, could steal the weights and reproduce the results. He focuses on the poor security surrounding models as the problem, and he deems governments unaware of the dangers.

Although German-born, he argues in favor of a US-led effort to treat AGI as a weapon to defeat China, and he warns of dire consequences if the US does not. He sees the "free world" as in danger unless it stops others from gaining the sophistication he predicts, in the time frame he predicts.

At that point, I felt I was reading a manifesto for World War Three.

But as I see it, the smartest people in the space have converged on a different perspective, a third way, one I will dub AGI Realism.
The core tenets are simple:

* Superintelligence is a matter of national security. We are rapidly building machines smarter than the smartest humans. This is not another cool Silicon Valley boom; this isn't some random community of coders writing an innocent open source software package; this isn't fun and games. Superintelligence is going to be wild; it will be the most powerful weapon mankind has ever built. And for any of us involved, it'll be the most important thing we ever do.
* America must lead. The torch of liberty will not survive Xi getting AGI first. (And, realistically, American leadership is the only path to safe AGI, too.) That means we can't simply "pause"; it means we need to rapidly scale up US power production to build the AGI clusters in the US. But it also means amateur startup security delivering the nuclear secrets to the CCP won't cut it anymore, and it means the core AGI infrastructure must be controlled by America, not some dictator in the Middle East. American AI labs must put the national interest first.
* We need to not screw it up. Recognizing the power of superintelligence also means recognizing its peril. There are very real safety risks; very real risks this all goes awry—whether it be because mankind uses the destructive power brought forth for our mutual annihilation, or because, yes, the alien species we're summoning is one we cannot yet fully control. These are manageable—but improvising won't cut it. Navigating these perils will require good people bringing a level of seriousness to the table that has not yet been offered.

As the acceleration intensifies, I only expect the discourse to get more shrill. But my greatest hope is that there will be those who feel the weight of what is coming, and take it as a solemn call to duty.

I persisted in reading it, and I think you should, too—not for the war-mongering element but for the core acceleration thesis.

My two cents: Leopold underestimates AI's impact in the long run and overestimates it in the short term, but he is directionally correct.

Anthropic released v3.5 of Claude.ai today. It is far faster than the impressive 3.0 version (released a few months ago), costs a fraction to train and run, and is also more capable. It accepts text and images and has a new feature called 'Artifacts' that allows it to run code, edit documents, and preview designs. Claude 3.5 Opus is probably not far away.

Situational Awareness projects trends like this into the near future, and Leopold's views are extrapolated from that perspective.

Contrast that paper with "ChatGPT is B******t," a paper coming out of Glasgow University in the UK. The three authors contest the accusation that ChatGPT hallucinates or lies. They claim that because it is a probabilistic word finder, it spouts b******t. It can be right, and it can be wrong, but it does not know the difference. It's a bullshitter.

Hilariously, they define three types of BS:

* B******t (general): Any utterance produced where a speaker has indifference towards the truth of the utterance.
* Hard b******t: B******t produced with the intention to mislead the audience about the utterer's agenda.
* Soft b******t: B******t produced without the intention to mislead the hearer regarding the utterer's agenda.

They then conclude:

With this distinction in hand, we're now in a position to consider a worry of the following sort: Is ChatGPT hard b**********g, soft b**********g, or neither? We will argue, first, that ChatGPT, and other LLMs, are clearly soft b**********g.
However, the question of whether these chatbots are hard b**********g is a trickier one, and depends on a number of complex questions concerning whether ChatGPT can be ascribed intentions.

This is closer to Gary Marcus's point of view in his 'AGI by 2027?' response to Leopold. It is also below. I think the reality is somewhere between Leopold and Marcus. AI is capable of surprising things, given that it is only a probabilistic word-finder, and its ability to do so is becoming cheaper and faster. For me, the number of times it is useful easily outweighs the times it is not. Most importantly, AI agents will work together to improve each other and learn faster.

However, Gary Marcus is right that reasoning and other essential decision-making characteristics are not logically derived from an LLM approach to knowledge. So, without additional or perhaps different elements, there will be limits to where it can go. Gary probably underestimates what CAN be achieved with LLMs (indeed, who would have thought they could do what they already do), and Leopold probably overestimates how far they can go without hitting a ceiling, and how fast that will happen.

It will be fascinating to watch. I, for one, have no idea what to expect except the unexpected.

OpenAI co-founder Ilya Sutskever weighed in, too, with a new AI startup called Safe Superintelligence Inc. (SSI). The most important word here is superintelligence, the same word Leopold used. The next phase is focused on higher-than-human intelligence, which can be reproduced billions of times to create scaled superintelligence.

The Expanding Universe of Generative Models piece below places smart people in the room to discuss these developments: Yann LeCun, Nicholas Thompson, Kai-Fu Lee, Daphne Koller, Andrew Ng, and Aidan Gomez are participants. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.thatwastheweek.com/subscribe
Aravind Srinivas is CEO of Perplexity, a company that aims to revolutionize how we humans find answers to questions on the Internet. Please support this podcast by checking out our sponsors: - Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off - ShipStation: https://shipstation.com/lex and use code LEX to get 60-day free trial - NetSuite: http://netsuite.com/lex to get free product tour - LMNT: https://drinkLMNT.com/lex to get free sample pack - Shopify: https://shopify.com/lex to get $1 per month trial - BetterHelp: https://betterhelp.com/lex to get 10% off EPISODE LINKS: Aravind's X: https://x.com/AravSrinivas Perplexity: https://perplexity.ai/ Perplexity's X: https://x.com/perplexity_ai PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:52) - How Perplexity works (18:48) - How Google works (41:16) - Larry Page and Sergey Brin (55:50) - Jeff Bezos (59:18) - Elon Musk (1:01:36) - Jensen Huang (1:04:53) - Mark Zuckerberg (1:06:21) - Yann LeCun (1:13:07) - Breakthroughs in AI (1:29:05) - Curiosity (1:35:22) - $1 trillion dollar question (1:50:13) - Perplexity origin story (2:05:25) - RAG (2:27:43) - 1 million H100 GPUs (2:30:15) - Advice for startups (2:42:52) - Future of search (3:00:29) - Future of AI
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summary of Situational Awareness - The Decade Ahead, published by OscarD on June 8, 2024 on The Effective Altruism Forum. Original by Leopold Aschenbrenner; this summary is not commissioned or endorsed by him.

Short Summary

* Extrapolating existing trends in compute, spending, algorithmic progress, and energy needs implies AGI (remote jobs being completely automatable) by ~2027.
* AGI will greatly accelerate AI research itself, leading to vastly superhuman intelligences being created ~1 year after AGI.
* Superintelligence will confer a decisive strategic advantage militarily by massively accelerating all spheres of science and technology.
* Electricity use will be a bigger bottleneck on scaling datacentres than investment, but is still doable domestically in the US by using natural gas.
* AI safety efforts in the US will be mostly irrelevant if other actors steal the model weights of an AGI. US AGI research must employ vastly better cybersecurity, to protect both model weights and algorithmic secrets.
* Aligning superhuman AI systems is a difficult technical challenge, but probably doable, and we must devote lots of resources towards this.
* China is still competitive in the AGI race, and China being first to superintelligence would be very bad because it may enable a stable totalitarian world regime. So the US must win to preserve a liberal world order.
* Within a few years both the CCP and USG will likely 'wake up' to the enormous potential and nearness of superintelligence, and devote massive resources to 'winning'. USG will nationalise AGI R&D to improve security and avoid secrets being stolen, and to prevent unconstrained private actors from becoming the most powerful players in the world.
* This means much of existing AI governance work focused on AI company regulations is missing the point, as AGI will soon be nationalised.
* This is just one story of how things could play out, but a very plausible and scarily soon and dangerous one.

I. From GPT-4 to AGI: Counting the OOMs

Past AI progress

Increases in 'effective compute' have led to consistent increases in model performance over several years and many orders of magnitude (OOMs). GPT-2 was akin to roughly a preschooler level of intelligence (able to piece together basic sentences sometimes), GPT-3 at the level of an elementary schooler (able to do some simple tasks with clear instructions), and GPT-4 similar to a smart high-schooler (able to write complicated functional code, long coherent essays, and answer somewhat challenging maths questions).

Superforecasters and experts have consistently underestimated future improvements in model performance, for instance:

* The creators of the MATH benchmark expected that "to have more traction on mathematical problem solving we will likely need new algorithmic advancements from the broader research community". But within a year of the benchmark's release, state-of-the-art (SOTA) models went from 5% to 50% accuracy, and are now above 90%.
* Professional forecasts made in August 2021 expected the MATH benchmark score of SOTA models to be 12.7% in June 2022, but the actual score was 50%.
* Experts like Yann LeCun and Gary Marcus have incorrectly predicted that deep learning would plateau.
* Bryan Caplan is on track to lose a public bet for the first time ever after GPT-4 got an A on his economics exam just two months after he bet no AI could do this by 2029.
We can decompose recent progress into three main categories:

Compute: GPT-2 was trained in 2019 with an estimated 4e21 FLOP, and GPT-4 was trained in 2023 with an estimated 8e24 to 4e25 FLOP.[1] This is both because of hardware improvements (Moore's Law) and increases in compute budgets for training runs. Adding more compute at test-time, e.g. by running many copies of an AI to allow for debate and delegation between each instance, could further boost...
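As a quick sanity check on the "counting the OOMs" framing, here is a back-of-the-envelope sketch using only the two estimates quoted above:

```python
# Orders of magnitude (OOMs) of training compute between GPT-2 (~4e21 FLOP)
# and GPT-4 (estimated 8e24 to 4e25 FLOP), per the figures cited above.
import math

gpt2_flop = 4e21
for gpt4_flop in (8e24, 4e25):
    ooms = math.log10(gpt4_flop / gpt2_flop)
    print(f"{ooms:.1f} OOMs")  # prints ~3.3 and 4.0
```

That is roughly three to four orders of magnitude of raw training compute in four years, and it covers only the compute category; the summary's remaining categories are cut off in this excerpt.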
Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors: - Yahoo Finance: https://yahoofinance.com - MasterClass: https://masterclass.com/lexpod to get 15% off - NetSuite: http://netsuite.com/lex to get free product tour - LMNT: https://drinkLMNT.com/lex to get free sample pack - Eight Sleep: https://eightsleep.com/lex to get $350 off EPISODE LINKS: Roman's X: https://twitter.com/romanyam Roman's Website: http://cecs.louisville.edu/ry Roman's AI book: https://amzn.to/4aFZuPb PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (09:12) - Existential risk of AGI (15:25) - Ikigai risk (23:37) - Suffering risk (27:12) - Timeline to AGI (31:44) - AGI Turing test (37:06) - Yann LeCun and open source AI (49:58) - AI control (52:26) - Social engineering (54:59) - Fearmongering (1:04:49) - AI deception (1:11:23) - Verification (1:18:22) - Self-improving AI (1:30:34) - Pausing AI development (1:36:51) - AI Safety (1:46:35) - Current AI (1:51:58) - Simulation (1:59:16) - Aliens (2:00:50) - Human mind (2:07:10) - Neuralink (2:16:15) - Hope for the future (2:20:11) - Meaning of life
To listen to the full episode, search for "#397 - Yann Le Cun - Chief AI Scientist chez Meta - L'Intelligence Artificielle Générale ne viendra pas de Chat GPT" on your listening platform.
Chatbot AI (LLMs) is already the past. Yann LeCun, chief scientist and founder of the AI division at Meta, is one of the deities of AI, on a par with Sam Altman, Laurent Alexandre and company. One question drives him: what is the mystery of intelligence? Considered the inventor of Deep Learning, Yann is one of the founding fathers of AI and of neural networks. He has published more than 180 scientific papers and has worked for Meta since 2013. Yann goes back over the foundations of the flagship technology of the 21st century. He reveals what the next waves will be, offering a window onto the future and an exploration of the challenges occupying the greatest scientists of our time:

* Why were LLMs child's play compared to multimodal and sensory models?
* Should we prepare for an amplified Age of Enlightenment, or for a Big Brother society?
* Why is "the history of AI littered with corpses"?
* Why is a 4-year-old child "more intelligent" than ChatGPT?
* What is Meta's short-to-medium-term strategy?
* How to work in the AI sector? Training paths, upcoming innovations, market shifts...
* Why is the Apple Vision Pro headset a miss, and what will the glasses of the future look like?
* What is the real danger of AI?

TIMELINE:
00:00:00 : A short 21st-century vocabulary: LLM, Deep Learning
00:09:09 : How to train an AI: the different methods
00:15:04 : Generation by LLM: a fake intelligence?
00:19:18 : The cabbage, the goat, the wolf, and the limits of language
00:27:38 : Artificial general intelligence (AGI): where do we stand?
00:37:02 : LLMs are already the past
00:43:56 : Llama, the open source AI model
00:47:57 : Augmented reality is coming: Meta's "wearables"
00:52:01 : How to get trained in AI and benefit from it?
00:56:26 : What LLMs don't understand
01:10:30 : How research is evolving: philosophy, sensory inputs, logical reasoning...
01:19:15 : Self-driving cars: "Oh my God, I'm going to knock over that bike"
01:24:13 : An amplified Age of Enlightenment, or the era of Big Brother?
01:36:41 : The winning couple: the internet and AI

Previous GDIY episodes mentioned:
#219 - Bob Sinclar - DJ - Mélanger des sons pour faire danser les gens
#327 - Laurent Alexandre - Auteur - ChatGPT & IA : "Dans 6 mois, il sera trop tard pour s'y intéresser"
#381 - Marjolaine Grondin - Jam - Travailler mieux et devenir libre grâce à l'IA
#396 - Gérard Saillant - Institut du Cerveau - Le chirurgien de Ronaldo, Schumacher, du PSG et de la FIA
#238 - Clément Delangue - Hugging Face - Démocratiser le machine learning pour impacter des milliards d'individus
#353 - Stanislas Polu - Dust - La vérité sur ce que l'IA nous réserve
#321 - Georges-Olivier Reymond - Pasqal - Et si le leader mondial du Quantum Computing était Français ?
With Yann, we talked about:
LLM = Large Language Model
Lex Fridman with Yann, round 1
Lex Fridman with Yann, round 2
Lex Fridman with Yann, round 3
Ray-Ban Meta
Galactica, Meta's scientific generative AI
Seamless Communication
Goldbach's Conjecture ("Every even number greater than 2 is the sum of two prime numbers.")
Facebook Artificial Intelligence Research (FAIR)
Mistral AI
Yann's Deep Learning course at NYU (French subtitles available)
PyTorch
Meta Llama 3
Hugging Face
JEPA: Joint Embedding Predictive Architecture
General Problem Solver: a computer program by Herbert Simon, Cliff Shaw and Allen Newell
DINO

Reading recommendations:
QED: The Strange Theory of Light and Matter (en) / Lumière et matière - Une étrange histoire (fr)
La Plus Belle Histoire de l'intelligence: Des origines aux neurones artificiels : vers une nouvelle étape de l'évolution
Quand la machine apprend: La révolution des neurones artificiels et de l'apprentissage profond

You can reach Yann on LinkedIn, Instagram, Threads, Facebook, X. Like the theme music? I owe it to Morgan Prudhomme! Contact him at: https://studio-module.com. Want to sponsor Génération Do It Yourself or propose a partnership? Contact my label Orso Media via this form.
Hipsters: Fora de Controle is Alura's podcast with news about applied Artificial Intelligence and this whole new world we are just beginning to crawl into, and which you can explore with us! In this episode we talk with Pedro Serafim, Chief Product Officer at Doris, a company that has been using generative AI to change the e-commerce experience in the fashion world. We also discuss the main news of the week, including the exchange of barbs between Elon Musk and Yann LeCun, the rumor about an AI-powered Siri in iOS 18, and OpenAI's deals with Vox Media and The Atlantic. Here's who joined the conversation: Marcus Mendes, host of Fora de Controle; Fabrício Carraro, Program Manager at Alura, AI book author, and host of the podcast Dev Sem Fronteiras; Pedro Serafim, Chief Product Officer at Doris.
In this episode of ACM ByteCast, Rashmi Mohan hosts ACM A.M. Turing Award laureate Yoshua Bengio, Professor at the University of Montreal, and Founder and Scientific Director of Mila (Montreal Institute for Learning Algorithms), the Quebec AI Institute. Yoshua shared the 2018 Turing Award with Geoffrey Hinton and Yann LeCun for their work on deep learning. He is also a published author and the most cited scientist in Computer Science. Previously, he founded Element AI, a Montreal-based artificial intelligence incubator that turns AI research into real-world business applications, acquired by ServiceNow. He currently serves as technical and scientific advisor to Recursion Pharmaceuticals and scientific advisor to Valence Discovery. He is a Fellow of ACM, the Royal Society, and the Royal Society of Canada, an Officer of the Order of Canada, and a recipient of the Killam Prize, the Marie-Victorin Quebec Prize, and the Princess of Asturias Award. Yoshua also serves on the United Nations Scientific Advisory Board for Independent Advice on Breakthroughs in Science and Technology and as a Canada CIFAR AI Chair. Yoshua traces his path in computing, from programming games in BASIC as an adolescent to getting interested in the synergy between the human brain and machines as a graduate student. He defines deep learning and talks about knowledge as the relationship between symbols, emphasizing that interdisciplinary collaborations with neuroscientists were key to innovations in DL. He notes his and his colleagues' surprise at the speed of recent breakthroughs with transformer architectures and large language models, and talks at length about artificial general intelligence (AGI) and the major risks it will present, such as loss of control, misalignment, and national security threats. Yoshua stresses that mitigating these will require both scientific and political solutions, offers advice for researchers, and shares what he is most excited about in the future of AI.
Episode 6: Will AI change our economic systems forever? Join hosts Matt Wolfe (https://twitter.com/mreflow) and Nathan Lands (https://twitter.com/NathanLands) as they delve into these pressing questions. In this wide-ranging episode, Matt and Nathan answer your thought-provoking questions, preview new AI video tools, and explore the revolutionary impact of AI on societal structures, healthcare advancements, and economic systems. They discuss the potential for AI to streamline government efficiency, uncover cures for diseases, and even tackle global challenges such as climate change and hunger. However, the conversation also navigates the complexities of government regulations, the technological arms race among big corporations, and the societal implications of widespread AI adoption. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd — Show Notes: (00:00) AI podcast hosts discuss audience questions, insights. (04:38) Adobe premiere integrating Sora for video improvements. (09:01) Voice chat limits bots and negativity online. (10:55) AI's impact on economy and work uncertainty. (15:38) Public, not governments or corporations, should adapt. (19:19) Open source AI catching up, corporate control. (20:10) Big companies' financial support crucial for open source. (25:33) Future tech: UI, voice, cloud, startups role. (29:03) Greg Isenberg on how small companies compete. (31:05) AI will enhance human connection in startups. (34:05) Humans may not need to continue training large language models, as AI could self-improve through reinforcement learning. (37:30) Yann LeCun doubts large language models' potential. (42:02) IAC's extreme approach may bring regulation. (43:44) Encouraging engagement and feedback for future episodes. — Mentions: Adobe Premiere: https://www.adobe.com/products/premiere.html OpenAI: https://www.openai.com/ Chat GPT: https://chat.openai.com/ Sam Altman: https://blog.samaltman.com/ Yann LeCun: http://yann.lecun.com/ — Check Out Matt's Stuff: • Future Tools - https://futuretools.beehiiv.com/ • Blog - https://www.mattwolfe.com/ • YouTube- https://www.youtube.com/@mreflow — Check Out Nathan's Stuff: Newsletter: https://news.lore.com/ Blog - https://lore.com/ The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we're joined by Mido Assran, a research scientist at Meta's Fundamental AI Research (FAIR). In this conversation, we discuss V-JEPA, a new model being billed as “the next step in Yann LeCun's vision” for true artificial reasoning. V-JEPA, the video version of Meta's Joint Embedding Predictive Architecture, aims to bridge the gap between human and machine intelligence by training models to learn abstract concepts in a more efficient predictive manner than generative models. V-JEPA uses a novel self-supervised training approach that allows it to learn from unlabeled video data without being distracted by pixel-level detail. Mido walks us through the process of developing the architecture and explains why it has the potential to revolutionize AI. The complete show notes for this episode can be found at twimlai.com/go/677.
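For readers who want the shape of the idea in code, here is a toy, illustrative sketch of a JEPA-style objective (our own construction, not Meta's V-JEPA implementation; the module sizes and names are invented): a predictor tries to match the embedding of a hidden target produced by a slowly updated target encoder, so the loss lives in latent space rather than pixel space.

```python
# Toy JEPA-style objective (illustrative only, not Meta's V-JEPA code):
# predict the *embedding* of a masked target from the context embedding,
# so no pixels are ever reconstructed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyJEPA(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        make = lambda: nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.context_encoder = make()
        self.target_encoder = make()   # updated by EMA, not by gradients
        self.predictor = nn.Linear(dim, dim)
        for p in self.target_encoder.parameters():
            p.requires_grad = False

    @torch.no_grad()
    def ema_update(self, momentum: float = 0.99):
        # Slowly track the context encoder to get stable prediction targets.
        for pc, pt in zip(self.context_encoder.parameters(),
                          self.target_encoder.parameters()):
            pt.mul_(momentum).add_(pc.detach(), alpha=1.0 - momentum)

    def forward(self, context: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        pred = self.predictor(self.context_encoder(context))
        with torch.no_grad():
            tgt = self.target_encoder(target)
        return F.mse_loss(pred, tgt)  # loss in latent space, not pixel space

# Usage sketch: `context` and `target` stand in for features of visible
# and masked video patches respectively.
model = TinyJEPA()
loss = model(torch.randn(8, 128), torch.randn(8, 128))
loss.backward()
model.ema_update()
```

In V-JEPA itself the encoders are transformer-based and operate on masked spatio-temporal patches of video; this sketch only shows where the predictive, non-generative loss sits.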
Yann LeCun is the Chief AI Scientist at Meta, professor at NYU, Turing Award winner, and one of the most influential researchers in the history of AI. Please support this podcast by checking out our sponsors: - HiddenLayer: https://hiddenlayer.com/lex - LMNT: https://drinkLMNT.com/lex to get free sample pack - Shopify: https://shopify.com/lex to get $1 per month trial - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil EPISODE LINKS: Yann's Twitter: https://twitter.com/ylecun Yann's Facebook: https://facebook.com/yann.lecun Meta AI: https://ai.meta.com/ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (09:10) - Limits of LLMs (20:47) - Bilingualism and thinking (24:39) - Video prediction (31:59) - JEPA (Joint-Embedding Predictive Architecture) (35:08) - JEPA vs LLMs (44:24) - DINO and I-JEPA (45:44) - V-JEPA (51:15) - Hierarchical planning (57:33) - Autoregressive LLMs (1:12:59) - AI hallucination (1:18:23) - Reasoning in AI (1:35:55) - Reinforcement learning (1:41:02) - Woke AI (1:50:41) - Open source (1:54:19) - AI and ideology (1:56:50) - Marc Andreessen (2:04:49) - Llama 3 (2:11:13) - AGI (2:15:41) - AI doomers (2:31:31) - Joscha Bach (2:35:44) - Humanoid robots (2:44:52) - Hope for the future