The story of how Geoffrey Hinton became “the godfather of AI” has reached mythic status in the tech world. While he was at the University of Toronto, Hinton pioneered the neural network research that would become the backbone of modern AI. (One of his students, Ilya Sutskever, went on to become one of OpenAI's most influential scientific minds.) In 2013, Hinton left academia to work for Google, eventually winning both a Turing Award and a Nobel Prize. I think it's fair to say that artificial intelligence as we know it may not exist without Geoffrey Hinton.

But Hinton may be even more famous for what he did next. In 2023, he left Google and began a campaign to convince governments, corporations and citizens that his life's work – this thing he helped build – might lead to our collective extinction. And that moment may be closer than we think, because Hinton believes AI may already be conscious. But even though his warnings are getting more dire by the day, the AI industry is only getting bigger, and most governments, including Canada's, seem reluctant to get in the way. So I wanted to ask Hinton: if we keep going down this path, what will become of us?

Mentioned:
If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI, by Eliezer Yudkowsky and Nate Soares
Agentic Misalignment: How LLMs Could Be Insider Threats, by Anthropic

Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail. Support for Machines Like Us is provided by CIFAR and the Max Bell School of Public Policy at McGill University. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Richard Sutton is the father of reinforcement learning, winner of the 2024 Turing Award, and author of The Bitter Lesson. And he thinks LLMs are a dead end.

After interviewing him, my steel man of Richard's position is this: LLMs aren't capable of learning on the job, so no matter how much we scale, we'll need some new architecture to enable continual learning. And once we have it, we won't need a special training phase – the agent will just learn on the fly, like all humans, and indeed, like all animals. This new paradigm will render our current approach with LLMs obsolete.

In our interview, I did my best to represent the view that LLMs might function as the foundation on which experiential learning can happen… Some sparks flew.

A big thanks to the Alberta Machine Intelligence Institute for inviting me up to Edmonton and for letting me use their studio and equipment. Enjoy!

Watch on YouTube; listen on Apple Podcasts or Spotify.

Sponsors
* Labelbox makes it possible to train AI agents in hyperrealistic RL environments. With an experienced team of applied researchers and a massive network of subject-matter experts, Labelbox ensures your training reflects important, real-world nuance. Turn your demo projects into working systems at labelbox.com/dwarkesh
* Gemini Deep Research is designed for thorough exploration of hard topics. For this episode, it helped me trace reinforcement learning from early policy gradients up to current-day methods, combining clear explanations with curated examples. Try it out yourself at gemini.google.com
* Hudson River Trading doesn't silo their teams. Instead, HRT researchers openly trade ideas and share strategy code in a mono-repo. This means you're able to learn at incredible speed and your contributions have impact across the entire firm.
Find open roles at hudsonrivertrading.com/dwarkesh

Timestamps
(00:00:00) – Are LLMs a dead end?
(00:13:04) – Do humans do imitation learning?
(00:23:10) – The Era of Experience
(00:33:39) – Current architectures generalize poorly out of distribution
(00:41:29) – Surprises in the AI field
(00:46:41) – Will The Bitter Lesson still apply post-AGI?
(00:53:48) – Succession to AIs

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
My guest today is Vinton G. Cerf, widely regarded as a “father of the Internet.” In the 1970s, Vint co-developed the TCP/IP protocols that define how data is formatted, transmitted, and received across devices. In essence, his work enabled networks to communicate, thus laying the foundation for the Internet as a unified global system. He has received honorary degrees and awards that include the National Medal of Technology, the Turing Award, the Presidential Medal of Freedom, the Marconi Prize, and membership in the National Academy of Engineering. He is currently Chief Internet Evangelist at Google.

In this episode, Vint reflects on the Internet's path from ARPANET and TCP/IP to the scaling choices that made global connectivity possible. He explains why decentralization was key, and how fiber optics and data centers underwrote explosive growth. Vint also addresses today's policy anxieties (fragmentation, sovereignty walls, and fragile infrastructures…) before looking upward to the interplanetary Internet now linking spacecraft. Finally, we turn to AI: how LLMs are reshaping learning and software, and why the next leap may be systems that question us back. I hope you enjoy our discussion.

You can follow me on X (@ProfSchrepel) and BlueSky (@ProfSchrepel).
In this episode of The Geek in Review, we welcome back Pablo Arredondo, VP of CoCounsel at Thomson Reuters, along with Joel Hron, the company's CTO. The conversation centers on the recent release of ChatGPT-5 and the rise of “reasoning models” that go beyond traditional language models' limitations. Pablo reflects on his years of tracking neural net progress in the legal field, from escaping “keyword prison” to the current ability of AI to handle complex, multi-step legal reasoning. He describes scenarios where entire litigation records could be processed to map out strategies for summary judgment motions, calling it a transformative step toward what he sees as “celestial legal products.”

Joel brings an engineering perspective, comparing the legal sector's AI trajectory to the rapid advancements in AI developer tools. He notes that these tools have historically amplified the skills of top performers rather than leveling the playing field. Applied to law, he believes AI will free lawyers from rote work and allow them to focus on higher-value decisions and strategy. The discussion shifts to Deep Research, Thomson Reuters' latest enhancement for CoCounsel, which leverages reasoning models in combination with domain-specific tools like KeyCite to follow “breadcrumb trails” through case law with greater accuracy and transparency.

The trio explores the growing importance of transparency and verification in AI-driven research. Joel explains how Deep Research provides real-time visibility into an AI's reasoning path, highlights potentially hallucinated citations, and integrates verification tools to cross-check references against authoritative databases. Pablo adds historical and philosophical perspective, likening hallucinations to a tiger “going tiger,” stressing that while the risk cannot be eliminated, the technology already catches a significant number of human errors.
Both agree that AI tools must be accompanied by human oversight and well-designed workflows to build trust in their output.

Looking to the future, Joel predicts that the adoption of AI agents will reshape organizational talent strategies, elevating the importance of those who excel at complex decision-making. Pablo proposes “ambient AI” as the next frontier: intelligent systems that unobtrusively monitor legal work, flagging potential issues instantly, much like a spellchecker. Both caution that certain legal tasks, especially in judicial opinion drafting, warrant careful consideration before fully integrating AI. The episode closes with practical insights on staying current, from following AI researchers on social platforms to reading technical blogs and academic papers, underscoring the need for informed engagement in this rapidly evolving space.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube

[Special thanks to Legal Technology Hub for sponsoring this episode.]

Blue Sky: @geeklawblog.com @marlgeb
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Academic papers (weekly research for trends): scholar.google.com, ssrn.com, arxiv.org
François Chollet (balanced AI insights): x.com/fchollet, fchollet.com
Jason Wei, OpenAI (reinforcement learning updates): x.com/jason_d_we, openai.com/blog
Geoffrey Hinton (AI research insights): x.com/geoffreyhinton
Richard Sutton (reinforcement learning and philosophical takes): incompleteideas.net
Reinforcement Learning: An Introduction, seminal work by the Turing Award winner: http://incompleteideas.net/book/RLbook2020.pdf
University of Alberta lab (current research on scalable AI methods): https://rlai-lab.github.io

Transcript
Useful Resources:
1. Ben Shneiderman, Professor Emeritus, University of Maryland.
2. Richard Hamming and Hamming codes.
3. Human-Centered AI - Ben Shneiderman.
4. Allen Newell and Herbert A. Simon.
5. Raj Reddy and the Turing Award.
6. Doug Engelbart.
7. Alan Kay.
8. Conference on Human Factors in Computing Systems.
9. Software Psychology: Human Factors in Computer and Information Systems - Ben Shneiderman.
10. Designing the User Interface: Strategies for Effective Human-Computer Interaction - Ben Shneiderman.
11. Direct Manipulation: A Step Beyond Programming Languages - Ben Shneiderman.
12. Steps Toward Artificial Intelligence - Marvin Minsky.
13. Herbert Gelernter.
14. Computers and Thought - Edward A. Feigenbaum and Julian Feldman.
15. Lewis Mumford.
16. Technics and Civilization - Lewis Mumford.
17. Buckminster Fuller.
18. Marshall McLuhan.
19. Roger Schank.
20. The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness - Jonathan Haidt.
21. John C. Thomas, IBM.
22. Yousuf Karsh, photographer.
23. Gary Marcus, professor emeritus of psychology and neural science at NYU.
24. Geoffrey Hinton.
25. Nassim Nicholas Taleb.
26. There Is No A.I. - Jaron Lanier.
27. Anil Seth On The Science of Consciousness - Episode 94 of Brave New World.
28. A ‘White-Collar Blood Bath' Doesn't Have to Be Our Fate - Tim Wu.
29. Information Management: A Proposal - Tim Berners-Lee.
30. Is AI-assisted coding overhyped? - METR study.
31. RLHF: Reinforcement learning from human feedback.
32. Joseph Weizenbaum.
33. What Is Computer Science? - Allen Newell, Alan J. Perlis, Herbert A. Simon.

Check out Vasant Dhar's newsletter on Substack. The subscription is free!
He pioneered AI, now he's warning the world. Godfather of AI Geoffrey Hinton breaks his silence on the deadly dangers of AI no one is prepared for.

Geoffrey Hinton is a leading computer scientist and cognitive psychologist, widely recognised as the ‘Godfather of AI' for his pioneering work on neural networks and deep learning. He received the 2018 Turing Award, often called the Nobel Prize of computing. In 2023, he left Google to warn people about the rising dangers of AI.

He explains:
Why there's a real 20% chance AI could lead to HUMAN EXTINCTION.
How speaking out about AI got him SILENCED.
The deep REGRET he feels for helping create AI.
The 6 DEADLY THREATS AI poses to humanity right now.
AI's potential to advance healthcare, boost productivity, and transform education.

00:00 Intro
02:28 Why Do They Call You the Godfather of AI?
04:37 Warning About the Dangers of AI
07:23 Concerns We Should Have About AI
10:50 European AI Regulations
12:29 Cyber Attack Risk
14:42 How to Protect Yourself From Cyber Attacks
16:29 Using AI to Create Viruses
17:43 AI and Corrupt Elections
19:20 How AI Creates Echo Chambers
23:05 Regulating New Technologies
24:48 Are Regulations Holding Us Back From Competing With China?
26:14 The Threat of Lethal Autonomous Weapons
28:50 Can These AI Threats Combine?
30:32 Restricting AI From Taking Over
32:18 Reflecting on Your Life's Work Amid AI Risks
34:02 Student Leaving OpenAI Over Safety Concerns
38:06 Are You Hopeful About the Future of AI?
40:08 The Threat of AI-Induced Joblessness
43:04 If Muscles and Intelligence Are Replaced, What's Left?
44:55 Ads
46:59 Difference Between Current AI and Superintelligence
52:54 Coming to Terms With AI's Capabilities
54:46 How AI May Widen the Wealth Inequality Gap
56:35 Why Is AI Superior to Humans?
59:18 AI's Potential to Know More Than Humans
1:01:06 Can AI Replicate Human Uniqueness?
1:04:14 Will Machines Have Feelings?
1:11:29 Working at Google
1:15:12 Why Did You Leave Google?
1:16:37 Ads
1:18:32 What Should People Be Doing About AI?
1:19:53 Impressive Family Background
1:21:30 Advice You'd Give Looking Back
1:22:44 Final Message on AI Safety
1:26:05 What's the Biggest Threat to Human Happiness?

Follow Geoffrey: X - https://bit.ly/4n0shFf

The Diary Of A CEO:
Join DOAC circle here - https://doaccircle.com/
The 1% Diary is back - limited time only: https://bit.ly/3YFbJbt
The Diary Of A CEO Conversation Cards (Second Edition): https://g2ul0.app.link/f31dsUttKKb
Get email updates - https://bit.ly/diary-of-a-ceo-yt
Follow Steven - https://g2ul0.app.link/gnGqL4IsKKb

Sponsors:
Stan Store - Visit https://link.stan.store/joinstanchallenge to join the challenge!
KetoneIQ - Visit https://ketone.com/STEVEN for 30% off your subscription order

#GeoffreyHinton #ArtificialIntelligence #AIDangers

Learn more about your ad choices. Visit megaphone.fm/adchoices
James Copnall, presenter of the BBC's Newsday, speaks to Yoshua Bengio, the world-renowned computer scientist often described as one of the godfathers of artificial intelligence, or AI.

Bengio is a professor at the University of Montreal in Canada, founder of the Quebec Artificial Intelligence Institute, and recipient of an A.M. Turing Award, “the Nobel Prize of Computing”. AI allows computers to operate in a way that can seem human, by using programmes that learn from vast amounts of data and follow complex instructions. Big tech firms and governments have invested billions of dollars in the development of artificial intelligence, thanks to its potential to increase efficiency, cut costs and support innovation.

Bengio believes there are risks in AI models that attempt to mimic human behaviour with all its flaws. For example, recent experiments have shown how some AI models are developing the capacity to deceive and even blackmail humans in a quest for their own self-preservation. Instead, he says, AI must be safe, scientific and working to understand humans without copying them.

The Interview brings you conversations with people shaping our world, from all over the world. The best interviews from the BBC. You can listen on the BBC World Service, Mondays and Wednesdays at 0700 GMT. Or you can listen to The Interview as a podcast, out twice a week on BBC Sounds, Apple, Spotify or wherever you get your podcasts.

Presenter: James Copnall
Producers: Lucy Sheppard, Ben Cooper
Editor: Nick Holland

Get in touch with us on email TheInterview@bbc.co.uk and use the hashtag #TheInterviewBBC on social media.

(Image: Yoshua Bengio. Credit: Craig Barritt/Getty)
Today's conversation with Turing Award-winning computer scientist LESLIE VALIANT explores a question I find myself returning to over and over again – what makes us human? What unique abilities have allowed Homo sapiens to succeed, flourish, and dominate, knowing it's not our size, strength, or speed? His new book, THE IMPORTANCE OF BEING EDUCABLE: A NEW THEORY ON HUMAN UNIQUENESS, has added timeliness as we confront a crisis of social mistrust as well as the threats and promise of AI.

Valiant-03-31-2025 Transcript
Martin Hellman is an American cryptographer known for co-inventing public-key cryptography with Whitfield Diffie and Ralph Merkle in the 1970s. Their groundbreaking Diffie-Hellman key exchange method allowed secure communication over insecure channels, laying the foundation for modern encryption protocols. Hellman has also contributed to cybersecurity policy and ethical discussions on nuclear risk. The post Turing Award Special: A Conversation with Martin Hellman appeared first on Software Engineering Daily.
David A. Patterson is a pioneering computer scientist known for his contributions to computer architecture, particularly as a co-developer of Reduced Instruction Set Computing, or RISC, which revolutionized processor design. He has co-authored multiple books, including the highly influential Computer Architecture: A Quantitative Approach. David is a UC Berkeley Pardee professor emeritus and a Google distinguished engineer. The post Turing Award Special: A Conversation with David Patterson appeared first on Software Engineering Daily.
Yann LeCun, Meta's chief AI scientist and Turing Award winner, joins us to discuss the limits of today's LLMs, why generative AI may be hitting a wall, what's missing for true human-level intelligence, the real meaning of AGI, Meta's open-source strategy with Llama, the future of AI assistants in smart glasses, why diversity in AI models matters, and how open models could shape the next era of innovation.

Support the show on Patreon! http://patreon.com/aiinsideshow
Subscribe to the YouTube channel! http://www.youtube.com/@aiinsideshow

Note: Time codes subject to change depending on dynamic ad insertion by the distributor.

CHAPTERS:
0:00:00 - Podcast begins
0:01:40 - Introduction to Yann LeCun, Chief AI Scientist at Meta
0:02:11 - The limitations and hype cycles of LLMs, and historical patterns of overestimating new AI paradigms
0:05:45 - The future of AI research, and the need for machines that understand the physical world, can reason and plan, and are driven by human-defined objectives
0:14:47 - AGI timeline: human-level AI within a decade, with deep learning as the foundation for advanced machine intelligence
0:21:35 - Why true AI intelligence requires abstract reasoning and hierarchical planning beyond language capabilities, unlike today's neural networks that rely on computational tricks
0:30:24 - Meta's open-source Llama strategy, empowering academia and startups, and commercial benefits
0:36:10 - The future of AI assistants, wearable tech, cultural diversity, and open-source models
0:42:52 - The impact of immigration policies on US technological leadership and STEM education
0:44:26 - Does Yann have a cat?
0:45:19 - Thank you to Yann LeCun for joining the AI Inside podcast

Learn more about your ad choices. Visit megaphone.fm/adchoices
John Hennessy is a computer scientist, entrepreneur, and academic known for his significant contributions to computer architecture. He co-developed the RISC architecture, which revolutionized modern computing by enabling faster and more efficient processors. Hennessy co-founded MIPS Computer Systems and Atheros Communications, and served as the president of Stanford University from 2000 to 2016. The post Turing Award Special: A Conversation with John Hennessy appeared first on Software Engineering Daily.
Jeffrey Ullman is a renowned computer scientist and professor emeritus at Stanford University, celebrated for his groundbreaking contributions to database systems, compilers, and algorithms. He co-authored influential texts like Principles of Database Systems and Compilers: Principles, Techniques, and Tools (often called the “Dragon Book”), which have shaped generations of computer science students. Jeffrey received the 2020 Turing Award. The post Turing Award Special: A Conversation with Jeffrey Ullman appeared first on Software Engineering Daily.
In the latest episode of Approximately Correct, we're taking the time to celebrate with Amii Fellow, Chief Scientific Advisor, and Canada CIFAR AI Chair Rich Sutton, newly-minted winner of the A.M. Turing Award, a prize that is often referred to as the “Nobel Prize of Computer Science.”
Jack Dongarra is an American computer scientist who is celebrated for his pioneering contributions to numerical algorithms and high-performance computing. He developed essential software libraries like LINPACK and LAPACK, which are widely used for solving linear algebra problems on advanced computing systems. Dongarra is also a co-creator of the TOP500 list, which ranks the world's most powerful supercomputers. The post Turing Award Special: A Conversation with Jack Dongarra appeared first on Software Engineering Daily.
In this episode of 'The Wisdom Of' Show, host Simon Bowen speaks with Ed Catmull, co-founder of Pixar Animation Studios and former president of Walt Disney Animation Studios and Disneytoon Studios. With five Academy Awards® including an Oscar for Lifetime Achievement and the prestigious Turing Award for his work in computer graphics, Ed shares profound insights on creative leadership, innovation, and building world-class organizations. From pioneering 3D animation to leading the creation of beloved films that have grossed over $14 billion worldwide, Ed's journey offers valuable lessons on fostering creativity, navigating change, and building sustainable success.

Ready to unlock your leadership potential and drive real change? Join Simon's exclusive masterclass on The Models Method. Learn how to articulate your unique value and create scalable impact: https://thesimonbowen.com/masterclass

Episode Breakdown
00:00 Introduction and Ed's pioneering journey in animation
05:18 Merging art and science: The power of interdisciplinary thinking
12:36 Company culture and collective ownership beyond shares
18:52 The inversion of business values: Product, People, Profit
25:44 Navigating change and innovation in fast-evolving industries
33:29 Pixar's 5-step decision-making framework for creative excellence
38:22 Truth-finding mechanisms in organizations
45:36 The CEO's role in facilitating collaborative genius
52:12 Shifting from achievement to effectiveness: "Is this working?"
58:43 Future implications and conclusions

Key Insights
Why combining seemingly incongruous disciplines (science, art, math) creates richer innovation
How most businesses conflate collective ownership with shares or control, missing true ownership
The dangerous mismatch between stated values and actual priorities in business decision-making
Why understanding the accelerating rate of change is fundamental to business survival
The 5-step framework Pixar uses to make all critical creative decisions
Why most CEOs incorrectly believe they have effective error detection mechanisms
How shifting focus from "What am I achieving?" to "Is this working?" transforms leadership
The CEO's role in fostering collaboration rather than providing all the answers
Why judging the creation, not the creator, is essential for innovation

About Ed Catmull
Ed Catmull is a pioneer in computer graphics and animation who co-founded Pixar Animation Studios. Under his leadership, Pixar produced groundbreaking animated films including Toy Story, Finding Nemo, The Incredibles, and many more. After Disney acquired Pixar in 2006, Ed served as President of both Pixar and Walt Disney Animation Studios, overseeing hits like Frozen, Tangled, and Wreck-It Ralph. His numerous accolades include five Academy Awards®, the Turing Award from the Association for Computing Machinery, and the prestigious Gordon E. Sawyer Award for lifetime contributions to computer graphics in film. Ed's book "Creativity, Inc.: Overcoming the Unseen Forces That Stand in the Way of True Inspiration" is considered essential reading on creative leadership. With a Ph.D. in computer science and an initial passion for animation that led him through physics to pioneering computer graphics, Ed's career exemplifies the power of combining art and science to create revolutionary innovation.

Connect with Ed Catmull
LinkedIn: https://www.linkedin.com/in/edwincatmull/
X:...
Our 202nd episode with a summary and discussion of last week's big AI news! Recorded on 03/07/2025. Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

Read our text newsletter and comment on the podcast at https://lastweekin.ai/. Join our Discord here! https://discord.gg/nTyezGSKwP

In this episode:
Alibaba released QwQ-32B, their latest reasoning model, on par with leading models like DeepSeek-R1.
Anthropic raised $3.5 billion in a funding round, valuing the company at $61.5 billion, solidifying its position as a key competitor to OpenAI.
DeepMind introduced BIG-Bench Extra Hard, a more challenging benchmark to evaluate the reasoning capabilities of large language models.
Reinforcement learning pioneers Andrew Barto and Rich Sutton were awarded the prestigious Turing Award for their contributions to the field.

Timestamps + Links:
(00:00:00) Intro / Banter
(00:01:41) Episode Preview
(00:02:50) GPT-4.5 Discussion
(00:14:13) Alibaba's New QwQ 32B Model is as Good as DeepSeek-R1; Outperforms OpenAI's o1-mini
(00:21:29) With Alexa Plus, Amazon finally reinvents its best product
(00:26:08) Another DeepSeek moment? General AI agent Manus shows ability to handle complex tasks
(00:29:14) Microsoft's new Dragon Copilot is an AI assistant for healthcare
(00:32:24) Mistral's new OCR API turns any PDF document into an AI-ready Markdown file
(00:33:19) A.I. Start-Up Anthropic Closes Deal That Values It at $61.5 Billion
(00:35:49) Nvidia-Backed CoreWeave Files for IPO, Shows Growing Revenue
(00:38:05) Waymo and Uber's Austin robotaxi expansion begins today
(00:38:54) UK competition watchdog drops Microsoft-OpenAI probe
(00:41:17) Scale AI announces multimillion-dollar defense deal, a major step in U.S. military automation
(00:44:43) DeepSeek Open Source Week: A Complete Summary
(00:45:25) DeepSeek AI Releases DualPipe: A Bidirectional Pipeline Parallelism Algorithm for Computation-Communication Overlap in V3/R1 Training
(00:53:00) Physical Intelligence open-sources Pi0 robotics foundation model
(00:54:23) BIG-Bench Extra Hard
(00:56:10) Cognitive Behaviors that Enable Self-Improving Reasoners
(01:01:49) The MASK Benchmark: Disentangling Honesty From Accuracy in AI Systems
(01:05:32) Pioneers of Reinforcement Learning Win the Turing Award
(01:06:56) OpenAI launches $50M grant program to help fund academic research
(01:07:25) The Nuclear-Level Risk of Superintelligent AI
(01:13:34) METR's GPT-4.5 pre-deployment evaluations
(01:17:16) Chinese buyers are getting Nvidia Blackwell chips despite US export controls
Jason Howell returns from Mobile World Congress with some AI trends to discuss along with Jeff Jarvis. OpenAI has some pricey plans in the works, Amazon announces Alexa Plus, and more!

Support the show on Patreon! http://patreon.com/aiinsideshow
Subscribe to the new YouTube channel! http://www.youtube.com/@aiinsideshow

Note: Time codes subject to change depending on dynamic ad insertion by the distributor.

NEWS
0:04:11 - Gemini Live ‘Astra' video and screen sharing rolling out in March
0:13:52 - Deutsche Telekom and Perplexity announce new ‘AI Phone' priced at under $1K
0:16:56 - OpenAI Plots Charging $20,000 a Month For PhD-Level Agents
0:22:31 - The LA Times published an op-ed warning of AI's dangers. It also published its AI tool's reply
0:28:22 - The future of Google Search just rolled out on Labs - and AI Mode changes everything
0:35:41 - Amazon announces AI-powered Alexa Plus
0:38:06 - Judge denies Musk's attempt to block OpenAI from becoming for-profit entity
0:39:18 - Eerily realistic AI voice demo sparks amazement and discomfort online
0:45:57 - Turing Award winners warn over unsafe deployment of AI models

Learn more about your ad choices. Visit megaphone.fm/adchoices
Plus: After a long reprieve, one B.C. town faces the prospect of a renewed peacock invasion. Also: A conversation with AI pioneer Richard Sutton, co-winner of this year's Turing Award.
This is the KI-Update from March 6, 2025, with these topics: Google expands AI Overviews and introduces a new AI Mode; a Turing Award for reinforcement learning; military drones steered by natural language; young people view AI more skeptically than a year ago. You can find links to all of today's topics here: https://heise.de/-10304924 https://www.heise.de/thema/KI-Update https://pro.heise.de/ki/ https://www.heise.de/newsletter/anmeldung.html?id=ki-update https://www.heise.de/thema/Kuenstliche-Intelligenz https://the-decoder.de/ https://www.heiseplus.de/podcast https://www.ct.de/ki 
heise KI PRO: Finally, a note that heise KI PRO is currently available at an attractive promotional price, and now also offers a free introductory trial. Our professional service on artificial intelligence guides you and your team step by step through developing and implementing a future-proof AI strategy. Benefit from in-depth expert knowledge, live webinars and talks with AI experts, hands-on guides, and exchange with our steadily growing AI business community. Learn more at pro.heise.de/ki/. 
The KI-Update as a newsletter: The KI-Update is now also available as a newsletter. Together with our colleagues at The Decoder, we prepare all the topics from the podcast for you to read, complete with links for further reading. You can sign up on our website; all the details are at heise.de/newsletter, or follow the sign-up link in the show notes. 
heise+: As part of the heise online podcast community, you get the heise+ subscription for the first 3 months at a special price of just €6.45 per month. With it you can not only read all articles on heise online, but also access all Heise magazines in the digital subscription on the go. After the trial period, your heise+ subscription can of course be canceled monthly. You can find this offer for our podcast fans at heiseplus.de/podcast
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Google is reinventing search through AI-driven overviews, while Amazon is aggressively pursuing Agentic AI and hybrid reasoning models. Researchers are being recognised for reinforcement learning achievements, and warnings are emerging about emotional attachments to hyper-realistic AI voices. Meanwhile, legal battles surrounding OpenAI's for-profit transition continue, and academic institutions are benefiting from initiatives like OpenAI's NextGenAI. Furthermore, Cohere has launched an impressive multilingual vision model, while incidents such as students using AI to cheat in interviews highlight ongoing ethical challenges.
DeepSeek has impressed many, and reinforcement learning is getting renewed attention. It never really went away, but it is now enjoying a revival. Jan Koutník is one of the most prominent AI experts in the field. We talk to him about prejudices, approaches and new ideas in reinforcement learning, and why new ideas are needed.
The consciousness test: Could an artificial intelligence be capable of genuine conscious experience? Coming from a range of different scientific and philosophical perspectives, Yoshua Bengio, Sabine Hossenfelder, Nick Lane, and Hilary Lawson dive deep into the question of whether artificial intelligence systems like ChatGPT could one day become self-aware, and whether they have already achieved this state. Yoshua Bengio is a Turing Award-winning computer scientist. Sabine Hossenfelder is a science YouTuber and theoretical physicist. Nick Lane is an evolutionary biochemist. Hilary Lawson is a post-postmodern philosopher. To witness such topics discussed live, buy tickets for our upcoming festival: https://howthelightgetsin.org/festivals/ And visit our website for many more articles, videos, and podcasts like this one: https://iai.tv/ You can find everything we referenced here: https://linktr.ee/philosophyforourtimes And don't hesitate to email us at podcast@iai.tv with your thoughts or questions on the episode! Who do you agree or disagree with? See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Eric chats with 2024 Nobel Laureate Geoffrey Hinton and Stanford Professor Jay McClelland, two pioneers who have spent nearly half a century laying the groundwork for modern-day AI, advancing research on neural networks long before it captured the world's imagination. In fact, their early work faced significant skepticism from the scientific community - an experience they candidly discuss in this episode. This wide-ranging conversation covers everything from the capabilities of recent breakthrough LLMs like DeepSeek to AI agents, the nature of memory and confabulation, the challenges of aligning AI with human values when we humans don't even agree on our values, and Geoff's fascinating new theory of language, featuring an analogy of words as thousand-dimensional, shape-shifting Lego blocks with hands. Geoff, who retired in 2023, divided his time between the University of Toronto and Google DeepMind. With numerous accolades including the 2018 Turing Award and 2024 Nobel Prize in Physics, he is perhaps best known for co-developing the backpropagation algorithm - now a cornerstone of AI research. Jay, currently at Stanford and Google DeepMind, has revolutionized our understanding of human learning through his work on Parallel Distributed Processing (PDP), applying neural network principles to understand phenomena like language acquisition. His insights into human learning have profoundly influenced how we understand machine learning. Their friendship dates back to the late 1970s and grew stronger as both collaborated with fellow pioneer David Rumelhart. They share some touching memories about Dave in this episode. Remarkably, despite decades of friendship and building upon each other's work, this appears to be their first recorded conversation together. Eric challenged them to discuss their latest insights and disagreements. This episode was recorded on January 29, 2025. JOIN OUR SUBSTACK! Stay up to date with the pod and become part of the ever-growing community! 
https://stanfordpsypod.substack.com/ If you found this episode interesting at all, consider leaving us a good rating! It just takes a second but will allow us to reach more people and make them excited about psychology. Links: Geoff's website | Geoff's Google Scholar | Jay's website | Jay's Google Scholar | Eric's website | Eric's X @EricNeumannPsy | Podcast X @StanfordPsyPod | Podcast Substack https://stanfordpsypod.substack.com/ Let us know what you think of this episode, or of the podcast! stanfordpsychpodcast@gmail.com
Professor Yoshua Bengio is a pioneer in deep learning and a Turing Award winner. Bengio talks about AI safety, why goal-seeking “agentic” AIs might be dangerous, and his vision for building powerful AI tools without giving them agency. Topics include reward tampering risks, instrumental convergence, global AI governance, and how non-agent AIs could revolutionize science and medicine while reducing existential threats. Perfect for anyone curious about advanced AI risks and how to manage them responsibly. SPONSOR MESSAGES: *** CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/ Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focused on o-series style reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events? They are hosting an event in Zurich on January 9th with the ARChitects, join if you can. Go to https://tufalabs.ai/ *** Interviewer: Tim Scarfe Yoshua Bengio: https://x.com/Yoshua_Bengio https://scholar.google.com/citations?user=kukA0LcAAAAJ&hl=en https://yoshuabengio.org/ https://en.wikipedia.org/wiki/Yoshua_Bengio TOC: 1. AI Safety Fundamentals [00:00:00] 1.1 AI Safety Risks and International Cooperation [00:03:20] 1.2 Fundamental Principles vs Scaling in AI Development [00:11:25] 1.3 System 1/2 Thinking and AI Reasoning Capabilities [00:15:15] 1.4 Reward Tampering and AI Agency Risks [00:25:17] 1.5 Alignment Challenges and Instrumental Convergence 2. AI Architecture and Safety Design [00:33:10] 2.1 Instrumental Goals and AI Safety Fundamentals [00:35:02] 2.2 Separating Intelligence from Goals in AI Systems [00:40:40] 2.3 Non-Agent AI as Scientific Tools [00:44:25] 2.4 Oracle AI Systems and Mathematical Safety Frameworks 3. 
Global Governance and Security [00:49:50] 3.1 International AI Competition and Hardware Governance [00:51:58] 3.2 Military and Security Implications of AI Development [00:56:07] 3.3 Personal Evolution of AI Safety Perspectives [01:00:25] 3.4 AI Development Scaling and Global Governance Challenges [01:12:10] 3.5 AI Regulation and Corporate Oversight 4. Technical Innovations [01:23:00] 4.1 Evolution of Neural Architectures: From RNNs to Transformers [01:26:02] 4.2 GFlowNets and Symbolic Computation [01:30:47] 4.3 Neural Dynamics and Consciousness [01:34:38] 4.4 AI Creativity and Scientific Discovery SHOWNOTES (Transcript, references, best clips etc): https://www.dropbox.com/scl/fi/ajucigli8n90fbxv9h94x/BENGIO_SHOW.pdf?rlkey=38hi2m19sylnr8orb76b85wkw&dl=0 CORE REFS (full list in shownotes and pinned comment): [00:00:15] Bengio et al.: "AI Risk" Statement https://www.safe.ai/work/statement-on-ai-risk [00:23:10] Bengio on reward tampering & AI safety (Harvard Data Science Review) https://hdsr.mitpress.mit.edu/pub/w974bwb0 [00:40:45] Munk Debate on AI existential risk, featuring Bengio https://munkdebates.com/debates/artificial-intelligence [00:44:30] "Can a Bayesian Oracle Prevent Harm from an Agent?" (Bengio et al.) on oracle-to-agent safety https://arxiv.org/abs/2408.05284 [00:51:20] Bengio (2024) memo on hardware-based AI governance verification https://yoshuabengio.org/wp-content/uploads/2024/08/FlexHEG-Memo_August-2024.pdf [01:12:55] Bengio's involvement in EU AI Act code of practice https://digital-strategy.ec.europa.eu/en/news/meet-chairs-leading-development-first-general-purpose-ai-code-practice [01:27:05] Complexity-based compositionality theory (Elmoznino, Jiralerspong, Bengio, Lajoie) https://arxiv.org/abs/2410.14817 [01:29:00] GFlowNet Foundations (Bengio et al.) for probabilistic inference https://arxiv.org/pdf/2111.09266 [01:32:10] Discrete attractor states in neural systems (Nam, Elmoznino, Bengio, Lajoie) https://arxiv.org/pdf/2302.06403
Please join my mailing list here
Send us a text. English edition [EN]: Niklaus Wirth is one of the computing pioneers, and his work inspired many other technologies and a generation of engineers. In this episode I discuss one of his many contributions: the programming language Pascal. And we hear from three people who worked and learned with Pascal in their careers: Irving Reid, Todd Jacobs and Charles Forsythe.
Links:
https://computerhistory.org/profile/niklaus-wirth/
https://people.inf.ethz.ch/wirth/
https://amturing.acm.org/award_winners/wirth_1025774.cfm (Turing Award for N. Wirth)
https://people.inf.ethz.ch/wirth/CompilerConstruction/index.html (book on compiler construction)
http://pascal.hansotten.com/ucsd-p-system/more-on-p-code/ (p-code machines)
http://pascal.hansotten.com/standard-pascal-and-validation/ (Standard Pascal)
https://www.embarcadero.com/products/delphi (Delphi)
https://en.wikipedia.org/wiki/UCSD_Pascal (UCSD Pascal)
https://www.youtube.com/watch?v=Yj3DMUn6cck (Kathleen Jensen, co-author of the Pascal book, at the 80th birthday reception for N. Wirth)
https://www.fidonet.org (Fidonet bulletin board)
http://www.retroarchive.org/swag/index.html (Software Archiving Group, Pascal)
Not everyone was enamoured with Pascal. Here is a link to B. Kernighan's post on 'Pascal ... is a toy language': http://www.lysator.liu.se/c/bwk-on-pascal.html Support the show. Thank you for listening! Merci de votre écoute! Vielen Dank fürs Zuhören! Contact details / Coordonnées / Kontakt: Email mailto:code4thought@proton.me UK RSE Slack (ukrse.slack.com): @code4thought or @piddie US RSE Slack (usrse.slack.com): @Peter Schmidt Mastodon: https://fosstodon.org/@code4thought or @code4thought@fosstodon.org Bluesky: https://bsky.app/profile/code4thought.bsky.social LinkedIn: https://www.linkedin.com/in/pweschmidt/ (personal profile) LinkedIn: https://www.linkedin.com/company/codeforthought/ (Code for Thought profile) This podcast is licensed under the Creative Commons Licence: https://creativecommons.org/licenses/by-sa/4.0/
After the Nobel Prize in Physics went to John J. Hopfield and Geoffrey E. Hinton "for foundational discoveries and inventions that enable machine learning with artificial neural networks", many asked why a physics prize had gone to computer scientists for what is also an achievement in computer science. Even Hinton, a winner of the 2018 Turing Award and one of the "godfathers of AI", was himself "extremely surprised" at receiving the call telling him he had won the Nobel in physics, while the other recipient, Hopfield, said "It was just astounding."
Actually, artificial neural network research has a lot to do with physics. Most notably, Hopfield modeled the functioning of the human brain by treating the spin of single molecules as if they were neurons and linking them together into a network, which is what the famous Hopfield neural network is about. In the process, Hopfield used two physical equations. Similarly, Hinton made Hopfield's approach the basis for a more sophisticated artificial neural network called the Boltzmann machine, which can catch and correct computational errors.
These two steps helped form a network that can act like a human brain and compute. Today's neural networks can learn from their own mistakes and constantly improve, and are thus able to solve complicated problems for humanity. For example, the Large Language Models behind the various GPT technologies people use today date back to the early days when Hopfield and Hinton were forming and improving their networks.
Rather than weakening the role of physics, awarding the Nobel Prize in Physics to neural network achievements strengthens it, by revealing to the world the role that physics, and fundamental science as a whole, plays in sharpening technology. Physics studies the rules followed by particles and the universe, and paves the way for modern technologies. That is why physicists deserve much of the thanks for the milestones modern computer science has crossed.
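The Hopfield network described above can be sketched in a few lines of code. This is an illustrative toy, not the laureates' actual work: patterns are stored with a Hebbian learning rule, and a corrupted pattern is recovered by repeatedly aligning each neuron with its weighted input, an update that never increases the network's physics-inspired "energy".

```python
import random

random.seed(0)
n = 64  # number of binary (+1/-1) neurons

# Store two random patterns using the Hebbian outer-product rule:
# the weight between two neurons grows when they agree across patterns.
patterns = [[random.choice([-1, 1]) for _ in range(n)] for _ in range(2)]
W = [[0.0] * n for _ in range(n)]
for p in patterns:
    for i in range(n):
        for j in range(n):
            if i != j:  # no self-connections
                W[i][j] += p[i] * p[j]

def recall(state, sweeps=5):
    """Asynchronously update neurons: each flips to match the sign
    of its total weighted input, descending the network's energy."""
    state = list(state)
    for _ in range(sweeps):
        for i in random.sample(range(n), n):  # random update order
            h = sum(W[i][j] * state[j] for j in range(n))
            state[i] = 1 if h >= 0 else -1
    return state

# Corrupt 10 of the 64 bits of the first stored pattern,
# then let the network settle back toward the clean memory.
noisy = list(patterns[0])
for i in random.sample(range(n), 10):
    noisy[i] *= -1
recovered = recall(noisy)
overlap = sum(a == b for a, b in zip(recovered, patterns[0])) / n
print(overlap)
```

With only two stored patterns and modest noise, the dynamics settle back to the original memory, which is exactly the "content-addressable memory" behavior the Hopfield network is famous for.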
A couple of weeks ago, I was at this splashy AI conference in Montreal called All In. It was – how should I say this – a bit over the top. There were smoke machines, thumping dance music, food trucks. It was a far cry from the quiet research labs where AI was developed. While I remain skeptical of the promise of artificial intelligence, this conference made it clear that the industry is, well, all in. The stage was filled with startup founders promising that AI was going to revolutionize the way we work, and government officials saying AI was going to supercharge the economy. And then there was Yoshua Bengio. Bengio is one of AI's pioneering figures. In 2018, he and two colleagues won the Turing Award – the closest thing computer science has to a Nobel Prize – for their work on deep learning. In 2022, he was the most cited computer scientist in the world. It wouldn't be hyperbolic to suggest that AI as we know it today might not exist without Yoshua Bengio. But in the last couple of years, Bengio has had an epiphany of sorts. And he now believes that, left unchecked, AI has the potential to wipe out humanity. So these days, he's dedicated himself to AI safety. He's a professor at the University of Montreal and the founder of MILA - the Quebec Artificial Intelligence Institute. And he was at this big AI conference too, amidst all these Silicon Valley types, pleading with the industry to slow down before it's too late. Mentioned: “Personal and Psychological Dimensions of AI Researchers Confronting AI Catastrophic Risks” by Yoshua Bengio; “Deep Learning” by Yann LeCun, Yoshua Bengio, and Geoffrey Hinton; “Computing Machinery and Intelligence” by Alan Turing; “International Scientific Report on the Safety of Advanced AI”; “Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?” by R. 
Ren et al.; “SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act”. Further reading: “‘Deep Learning' Guru Reveals the Future of AI” by Cade Metz; “Montréal Declaration for a Responsible Development of Artificial Intelligence”; “This A.I. Subculture's Motto: Go, Go, Go” by Kevin Roose; “Reasoning through arguments against taking AI safety seriously” by Yoshua Bengio
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How you can help pass important AI legislation with 10 minutes of effort, published by ThomasW on September 16, 2024 on LessWrong. Posting something about a current issue that I think many people here would be interested in. See also the related EA Forum post. California Governor Gavin Newsom has until September 30 to decide the fate of SB 1047 - one of the most hotly debated AI bills in the world. The Center for AI Safety Action Fund, where I work, is a co-sponsor of the bill. I'd like to share how you can help support the bill if you want to. About SB 1047 and why it is important SB 1047 is an AI bill in the state of California. SB 1047 would require the developers of the largest AI models, costing over $100 million to train, to test the models for the potential to cause or enable severe harm, such as cyberattacks on critical infrastructure or the creation of biological weapons resulting in mass casualties or $500 million in damages. AI developers must have a safety and security protocol that details how they will take reasonable care to prevent these harms and publish a copy of that protocol. Companies who fail to perform their duty under the act are liable for resulting harm. SB 1047 also lays the groundwork for a public cloud computing resource to make AI research more accessible to academic researchers and startups and establishes whistleblower protections for employees at large AI companies. So far, AI policy has relied on government reporting requirements and voluntary promises from AI developers to behave responsibly. But if you think voluntary commitments are insufficient, you will probably think we need a bill like SB 1047. 
If SB 1047 is vetoed, it's plausible that no comparable legal protection will exist in the next couple of years, as Congress does not appear likely to pass anything like this any time soon. The bill's text can be found here. A summary of the bill can be found here. Longer summaries can be found here and here, and a debate on the bill is here. SB 1047 is supported by many academic researchers (including Turing Award winners Yoshua Bengio and Geoffrey Hinton), employees at major AI companies and organizations like Imbue and Notion. It is opposed by OpenAI, Google, Meta, venture capital firm A16z as well as some other academic researchers and organizations. After a recent round of amendments, Anthropic said "we believe its benefits likely outweigh its costs." SB 1047 recently passed the California legislature, and Governor Gavin Newsom has until September 30th to sign or veto it. Newsom has not yet said whether he will sign it or not, but he is being lobbied hard to veto it. The Governor needs to hear from you. How you can help If you want to help this bill pass, there are some pretty simple steps you can do to increase that probability, many of which are detailed on the SB 1047 website. The most useful thing you can do is write a custom letter. To do this: Make a letter addressed to Governor Newsom using the template here. Save the document as a PDF and email it to leg.unit@gov.ca.gov. In writing this letter, we encourage you to keep it simple, short (0.5-2 pages), and intuitive. Complex, philosophical, or highly technical points are not necessary or useful in this context - instead, focus on how the risks are serious and how this bill would help keep the public safe. Once you've written your own custom letter, you can also think of 5 family members or friends who might also be willing to write one. Supporters from California are especially helpful, as are parents and people who don't typically engage on tech issues. Then help them write it! 
You can: Call or text them and tell them about the bill and ask them if they'd be willing to support it. Draft a custom letter based on what you know about them and what they told you. Send them a com...
Share this episode: https://www.samharris.org/podcasts/making-sense-episodes/379-regulating-artificial-intelligence Sam Harris speaks with Yoshua Bengio and Scott Wiener about AI risk and the new bill introduced in California intended to mitigate it. They discuss the controversy over regulating AI and the assumptions that lead people to discount the danger of an AI arms race. Yoshua Bengio is full professor at Université de Montréal and the Founder and Scientific Director of Mila - Quebec AI Institute. Considered one of the world’s leaders in artificial intelligence and deep learning, he is the recipient of the 2018 A.M. Turing Award with Geoffrey Hinton and Yann LeCun, known as the Nobel Prize of computing. He is a Canada CIFAR AI Chair, a member of the UN’s Scientific Advisory Board for Independent Advice on Breakthroughs in Science and Technology, and Chair of the International Scientific Report on the Safety of Advanced AI. Website: https://yoshuabengio.org/ Scott Wiener has represented San Francisco in the California Senate since 2016. He recently introduced SB 1047, a bill aiming to reduce the risks of frontier models of AI. He has also authored landmark laws to, among other things, streamline the permitting of new homes, require insurance plans to cover mental health care, guarantee net neutrality, eliminate mandatory minimums in sentencing, require billion-dollar corporations to disclose their climate emissions, and declare California a sanctuary state for LGBTQ youth. He has lived in San Francisco's historically LGBTQ Castro neighborhood since 1997. Twitter: @Scott_Wiener Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.
Send us a Text Message. Meet the godfather of modern causal inference. His work has quite literally changed the course of my life, and I am honored and incredibly grateful we could meet for this great conversation in his home in Los Angeles. To anybody who knows something about modern causal inference, he needs no introduction. He loves history, philosophy and music, and I believe it's fair to say that he's the godfather of modern causality. Ladies and gentlemen, please welcome professor Judea Pearl. Subscribe to never miss an episode. About the guest: Judea Pearl is a computer scientist and the creator of the Structural Causal Model (SCM) framework for causal inference. In 2011, he was awarded the Turing Award, the highest distinction in computer science, for his pioneering work on Bayesian networks and graphical causal models and "fundamental contributions to artificial intelligence through the development of a calculus for probabilistic and causal reasoning". Connect with Judea: Judea on Twitter/X; Judea's webpage. About the host: Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur and a best-selling author in the area of causality. Connect with Alex: Alex on the Internet. Links: Pearl, J. - "The Book of Why"; Kahneman, D. - "Thinki Should we build the Causal Experts Network? Share your thoughts in the survey. Anything But Law: Discover inspiring stories and insights from entrepreneurs, athletes, and thought leaders. Listen on: Apple Podcasts, Spotify. Support the show. Causal Bandits Podcast: Causal AI || Causal Machine Learning || Causal Inference & Discovery. Web: https://causalbanditspodcast.com Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/ Join Causal Python Weekly: https://causalpython.io The Causal Book: https://amzn.to/3QhsRz4
Episode 132. I spoke with Manuel and Lenore Blum about:
* Their early influences and mentors
* The Conscious Turing Machine and what theoretical computer science can tell us about consciousness
Enjoy, and let me know what you think! Manuel is a pioneer in the field of theoretical computer science and the winner of the 1995 Turing Award, in recognition of his contributions to the foundations of computational complexity theory and its applications to cryptography and program checking, a mathematical approach to writing programs that check their work. He worked as a professor of computer science at the University of California, Berkeley until 2001. From 2001 to 2018, he was the Bruce Nelson Professor of Computer Science at Carnegie Mellon University. Lenore is a Distinguished Career Professor of Computer Science, Emeritus at Carnegie Mellon University and former Professor-in-Residence in EECS at UC Berkeley. She is president of the Association for Mathematical Consciousness Science and a newly elected member of the American Academy of Arts and Sciences. Lenore is internationally recognized for her work in increasing the participation of girls and women in Science, Technology, Engineering, and Math (STEM) fields. She was a founder of the Association for Women in Mathematics, and founding Co-Director (with Nancy Kreinberg) of the Math/Science Network and its Expanding Your Horizons conferences for middle- and high-school girls. Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions. 
I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on TwitterOutline:* (00:00) Intro* (03:09) Manuel's interest in consciousness* (05:55) More of the story — from memorization to derivation* (11:15) Warren McCulloch's mentorship* (14:00) McCulloch's anti-Freudianism* (15:57) More on McCulloch's influence* (27:10) On McCulloch and telling stories* (32:35) The Conscious Turing Machine (CTM)* (33:55) A last word on McCulloch* (35:20) Components of the CTM* (39:55) Advantages of the CTM model* (50:20) The problem of free will* (52:20) On pain* (1:01:10) Brainish / CTM's multimodal inner language, language and thinking* (1:13:55) The CTM's lack of a “central executive”* (1:18:10) Empiricism and a self, tournaments in the CTM* (1:26:30) Mental causation* (1:36:20) Expertise and the CTM model, role of TCS* (1:46:30) Dreams and dream experience* (1:50:15) Disentangling components of experience from multimodal language* (1:56:10) CTM Robot, meaning and symbols, embodiment and consciousness* (2:00:35) AGI, CTM and AI processors, capabilities* (2:09:30) CTM implications, potential worries* (2:17:15) Advice for younger (computer) scientists* (2:22:57) OutroLinks:* Manuel's homepage* Lenore's homepage; find Lenore on Twitter (https://x.com/blumlenore) and Linkedin (https://www.linkedin.com/in/lenore-blum-1a47224)* Articles* “The ‘Accidental Activist' Who Changed the Face of Mathematics” — Ben Brubaker's Q&A with Lenore* “How this Turing-Award-winning researcher became a legendary academic advisor” — Sheon Han's profile of Manuel* Papers (Manuel and Lenore)* AI Consciousness is Inevitable: A Theoretical Computer Science Perspective* A Theory of Consciousness from a Theoretical Computer Science Perspective: Insights from the Conscious 
Turing Machine* A Theoretical Computer Science Perspective on Consciousness and Artificial General Intelligence* References (McCulloch)* Embodiments of Mind* Rebel Genius Get full access to The Gradient at thegradientpub.substack.com/subscribe
We're excited to welcome to the podcast Leslie Valiant, a pioneering computer scientist and Turing Award winner renowned for his groundbreaking work in machine learning and computational learning theory. In his seminal 1984 paper, Leslie introduced the concept of Probably Approximately Correct or PAC learning, kick-starting a new era of research into what machines can learn. Now, in his latest book, The Importance of Being Educable: A New Theory of Human Uniqueness, Leslie builds upon his previous work to present a thought-provoking examination of what truly sets human intelligence apart. He introduces the concept of "educability" - our unparalleled ability as a species to absorb, apply, and share knowledge. Through an interplay of abstract learning algorithms and relatable examples, the book illuminates the fundamental differences between human and machine learning, arguing that while learning is computable, today's AI is still a far cry from human-level educability. Leslie advocates for greater investment in the science of learning and education to better understand and cultivate our species' unique intellectual gifts. In this conversation, we dive deep into the key ideas from The Importance of Being Educable and their profound implications for the future of both human and artificial intelligence. We explore questions like: What are the core components of educability that make human intelligence special? How can we design AI systems to augment rather than replace human learning? Why has the science of education lagged behind other fields, and what role can AI play in accelerating pedagogical research and practice? Should we be concerned about a potential "intelligence explosion" as machines grow more sophisticated, or are there limits to the power of AI? Let's dive into our conversation with Leslie Valiant. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. 
Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Subscribe to get Artificiality delivered to your email Learn about our book Make Better Decisions and buy it on Amazon Thanks to Jonathan Coulton for our music
My guest is Yann LeCun, a pioneering French-American computer scientist, known for his groundbreaking work in machine learning, computer vision, and neural networks. Yann is the Silver Professor at the Courant Institute of Mathematical Sciences at New York University and serves as the Vice President and Chief AI Scientist at Meta. Yann is one of the world's most influential computer scientists. He has accumulated over 350,000 citations on Google Scholar, he is one of the founding figures in the field of deep learning thanks to his contributions to convolutional neural networks and backpropagation algorithms, and he is a vocal proponent of open source. In recognition of his significant contributions to artificial intelligence, he was awarded the Turing Award in 2018, often referred to as the “Nobel Prize of Computing.” Our conversation is structured into three distinct parts. We begin by discussing the overarching dynamics in the AI space, then narrow our focus to the firm level, and finally, we conclude with an exploration of the challenges that lie ahead. By the end of this discussion, you will learn whether open source has a chance to make it in AI, the key factors for scaling an AI foundation model, the role ecosystems play in market dynamics, Meta's long-term strategy in the space, how concentration among chip manufacturers impacts AI companies, the current effect of the European AI Act on AI companies, what Yann would like to see regulators doing, and more. I hope you enjoy the conversation.
In this episode of ACM ByteCast, Rashmi Mohan hosts ACM A.M. Turing Award laureate Yoshua Bengio, Professor at the University of Montreal, and Founder and Scientific Director of MILA (Montreal Institute for Learning Algorithms) at the Quebec AI Institute. Yoshua shared the 2018 Turing Award with Geoffrey Hinton and Yann LeCun for their work on deep learning. He is also a published author and the most cited scientist in Computer Science. Previously, he founded Element AI, a Montreal-based artificial intelligence incubator that turns AI research into real-world business applications, acquired by ServiceNow. He currently serves as technical and scientific advisor to Recursion Pharmaceuticals and scientific advisor for Valence Discovery. He is a Fellow of ACM, the Royal Society, and the Royal Society of Canada, an Officer of the Order of Canada, and a recipient of the Killam Prize, Marie-Victorin Quebec Prize, and Princess of Asturias Award. Yoshua also serves on the United Nations Scientific Advisory Board for Independent Advice on Breakthroughs in Science and Technology and as a Canada CIFAR AI Chair.

Yoshua traces his path in computing, from programming games in BASIC as an adolescent to getting interested in the synergy between the human brain and machines as a graduate student. He defines deep learning and talks about knowledge as the relationship between symbols, emphasizing that interdisciplinary collaborations with neuroscientists were key to innovations in DL. He notes his and his colleagues' surprise at the speed of recent breakthroughs with transformer architecture and large language models, and talks at length about artificial general intelligence (AGI) and the major risks it will present, such as loss of control, misalignment, and national security threats. Yoshua stresses that mitigating these will require both scientific and political solutions, offers advice for researchers, and shares what he is most excited about with the future of AI.
Today's guest is theoretical computer scientist Leslie Valiant, currently the T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics at Harvard University. Among his many accolades, Leslie was awarded the Turing Award in 2010 for transformative contributions to the theory of computation, including the theory of PAC (Probably Approximately Correct) learning, the complexity of enumeration and of algebraic computation, and the theory of parallel and distributed computing.

In this episode, Leslie and I discuss his life and career journey – from what problems he has looked to solve in his career to how his PAC theory was first received and his latest book, The Importance of Being Educable.

Have you ever wondered what your digital footprint says about you? Or curious how you can make your pitch stand out? Then check out WhiteBridge.ai – it's an AI-powered digital identity research tool that finds, verifies, and analyzes publicly collected data about someone and structures it into an insightful report. They actually ran a report on me and I was seriously impressed! But not only can you use it to check your own digital profile, you could use it to quickly research and understand other people – whether it's a potential client, employee, or investor, the report gives you more than enough useful info on the person for you to truly personalize your correspondence to them and help you build that early rapport. Want to learn more? Head to https://whitebridge.ai and use my discount code DANIELLE30 for 30% off your first report.

Please enjoy my conversation with Leslie Valiant.
It's that time of year to do some spring cleaning, which includes your tech world as well. Nate came up with the very "helpful" acronym P.F.A.N.T.S.S. to help you think through the aspects of your technology world that could use some attention. Maybe this will be the year that we practice what we preach! After that, we'll get you caught up with what's going on in the world of tech and provide some helpful tips and picks so that you can tech better! Watch on YouTube! INTRO (00:00) Spring Cleaning Your P.F.A.N.T.S.S. (09:40): Physical, Files, Apps, Notifications, Time, Subscriptions, Security DAVE'S PRO-TIP OF THE WEEK: Photo album duplicates (31:30) JUST THE HEADLINES: (36:25) Apple alerts users in 92 nations to mercenary spyware attacks Taylor Swift Songs Return to TikTok Walmart will deploy robotic forklifts in its distribution centers Microsoft starts testing ads in the Windows 11 Start menu The Motion Picture Association has big plans to crack down on movie piracy again Chechnya is banning music that's too fast or slow Computer scientist wins Turing Award for seminal work on randomness TAKES: Humane AI Pin review: not even close (37:45) Macs with AI-focused M4 chip launching this year (41:25) April's Patch Tuesday Brings Record Number of Fixes (44:05) BONUS ODD TAKE: Neal.fun - Who Was Alive? (45:50) PICKS OF THE WEEK: Dave: Gemini 2 - Duplicate File Finder (50:25) Nate: Udio.com - AI Music Generation (53:15) RAMAZON PURCHASE - Giveaway! (59:10) Find us elsewhere: https://notpicks.com https://notnerd.com https://www.youtube.com/c/Notnerd https://www.instagram.com/n0tnerd https://www.facebook.com/n0tnerd/ info@Notnerd.com
Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas
Science is enabled by the fact that the natural world exhibits predictability and regularity, at least to some extent. Scientists collect data about what happens in the world, then try to suggest "laws" that capture many phenomena in simple rules. A small irony is that, while we are looking for nice compact rules, there aren't really nice compact rules about how to go about doing that. Today's guest, Leslie Valiant, has been a pioneer in understanding how computers can and do learn things about the world. And in his new book, The Importance of Being Educable, he pinpoints this ability to learn new things as the crucial feature that distinguishes us as human beings. We talk about where that capability came from and what its role is as artificial intelligence becomes ever more prevalent.

Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/04/15/272-leslie-valiant-on-learning-and-educability-in-computers-and-people/

Support Mindscape on Patreon.

Leslie Valiant received his Ph.D. in computer science from Warwick University. He is currently the T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics at Harvard University. He has been awarded a Guggenheim Fellowship, the Knuth Prize, and the Turing Award, and he is a member of the National Academy of Sciences as well as a Fellow of the Royal Society and the American Association for the Advancement of Science. He is the pioneer of "Probably Approximately Correct" learning, which he wrote about in a book of the same name.

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Yann LeCun is the Chief AI Scientist at Meta, professor at NYU, Turing Award winner, and one of the most influential researchers in the history of AI. Please support this podcast by checking out our sponsors: - HiddenLayer: https://hiddenlayer.com/lex - LMNT: https://drinkLMNT.com/lex to get free sample pack - Shopify: https://shopify.com/lex to get $1 per month trial - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil EPISODE LINKS: Yann's Twitter: https://twitter.com/ylecun Yann's Facebook: https://facebook.com/yann.lecun Meta AI: https://ai.meta.com/ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (09:10) - Limits of LLMs (20:47) - Bilingualism and thinking (24:39) - Video prediction (31:59) - JEPA (Joint-Embedding Predictive Architecture) (35:08) - JEPA vs LLMs (44:24) - DINO and I-JEPA (45:44) - V-JEPA (51:15) - Hierarchical planning (57:33) - Autoregressive LLMs (1:12:59) - AI hallucination (1:18:23) - Reasoning in AI (1:35:55) - Reinforcement learning (1:41:02) - Woke AI (1:50:41) - Open source (1:54:19) - AI and ideology (1:56:50) - Marc Andreessen (2:04:49) - Llama 3 (2:11:13) - AGI (2:15:41) - AI doomers (2:31:31) - Joscha Bach (2:35:44) - Humanoid robots (2:44:52) - Hope for the future
Welcome to our second Design Better episode on the creative process. You may not know Ed Catmull's name, but there's almost no doubt you're familiar with his work. As the co-founder of Pixar, he's responsible for helping to create movies ranging from the original Toy Story on through The Incredibles, Wall-E, Moana, and Inside Out. Ed has a background in computer science, and as someone who pioneered many of the computer graphics and digital animation techniques that we now take for granted, he has a unique perspective on the intersection of technology and creativity.

We chat with Ed about his transition from creating things himself to leading creative teams; the elements of a sustainable creative culture; and how to give people feedback so they'll actually listen to you. Ed also collaborated with Steve Jobs longer than probably anyone else who knew him—for over 30 years—and we hear some stories that haven't been told anywhere else.

One more quick thing before we go: we have some amazing guests lined up for our upcoming AMAs, like Judy Wert and Debbie Millman, which are filling up quickly. Go to our events page and you can register for free.

Show notes: https://designbetterpodcast.com/p/ed-catmull-the-journey-from-lucasfilm#details

Bio Dr. Ed Catmull is co-founder of Pixar Animation Studios and the former president of Pixar, Walt Disney Animation Studios, and Disneytoon Studios. For over twenty-five years, Pixar has dominated the world of animation, producing #1 box office hits that include iconic works such as Toy Story, Frozen, Cars, and The Incredibles. Pixar's works have grossed more than $14 billion at the worldwide box office, and won twenty-three Academy Awards®, ten Golden Globe Awards, and eleven Grammys, among countless other achievements. Dr. Ed Catmull's book Creativity, Inc.—co-written with journalist Amy Wallace and years in the making—is a distillation of the ideas and management principles he has used to develop a creative culture.
A book for managers who want to encourage a growth mindset and lead their employees to new heights, it also grants readers an all-access trip into the nerve center of Pixar Animation Studios—into the meetings, postmortems, and "Braintrust" sessions where some of the most successful films in history have been made. Dr. Catmull has been honored with five Academy Awards®, including an Oscar for Lifetime Achievement for his technical contributions and leadership in the field of computer graphics for the motion picture industry. He has also been awarded the Turing Award by the world's largest society of computing professionals, the Association for Computing Machinery, for his work on three-dimensional computer graphics.

Please visit the links below to help support our show: Methodical Coffee: Roasted, blended, brewed, served and perfected by verified coffee nerds
Leslie Lamport is a computer scientist & mathematician who won ACM's Turing Award in 2013 for his fundamental contributions to the theory and practice of distributed and concurrent systems. He also created LaTeX and TLA+, a high-level language for “writing down the ideas that go into the program before you do any coding.”
Sam Altman, CEO of OpenAI, which created ChatGPT, says that AI is a powerful tool that will streamline human work and quicken the pace of scientific advancement. But ChatGPT has both enthralled and terrified us, and even some of AI's pioneers are freaked out by it – by how quickly the technology has advanced. David Remnick talks with Altman, and with computer scientist Yoshua Bengio, who won the prestigious Turing Award in 2018 for his work on deep learning, but recently signed an open letter calling for a moratorium on some AI research until regulation can be implemented. The stakes, Bengio says, are high. "I believe there is a non-negligible risk that this kind of technology, in the short term, could disrupt democracies."
Geoffrey Hinton, 2018 Turing Award winner for his foundational work in AI, recently left Google so he could speak freely about the dangers of AI without negatively impacting Google, which he believes has acted responsibly in its AI roll-out. Is juice jacking a real threat to users of up-to-date smartphones? And JAMA has a story comparing real physicians and ChatGPT answering patient questions.

Starring Tom Merritt, Rich Stroffolino, Chris Ashley, Roger Chang, Joe.

Link to the Show Notes. Become a member at https://plus.acast.com/s/dtns. Hosted on Acast. See acast.com/privacy for more information.