Podcast appearances and mentions of Gary Marcus

  • 176 podcasts
  • 262 episodes
  • 47m average duration
  • 1 episode every other week
  • Latest: Mar 30, 2025
Gary Marcus



Best podcasts about Gary Marcus

Latest podcast episodes about Gary Marcus

SoundBytes
DIALING DOWN THE AI HYPE

SoundBytes

Play Episode Listen Later Mar 30, 2025 1:00


AI is coming, for sure, but there's also a lot of hype. One leading researcher, Gary Marcus, has been trying to keep some perspective on it all.

Esto es lo que AI
La gran apuesta

Esto es lo que AI

Play Episode Listen Later Mar 28, 2025 99:45


In this first episode of the fifth season of Esto es lo que AI, we dare to make bets on the future of AI. Will it be able to understand the plot of a film and answer complex questions without errors? Could it write an Oscar-worthy screenplay or make revolutionary scientific discoveries? Our new panelist, Patricia Charro, joins the conversation with a fresh perspective on the impact of AI in the coming years. Alongside the usual debate with Adolfo Corujo, Luis Martín, Miguel Lucas, Julio Gonzalo, and Roberto Carreras, this episode includes a special interview with José Raúl González, CEO of Progreso, about Clara, the first AI-based sustainability ambassador in the cement industry in Central America: a real example of how AI can serve purpose, transparency, and social impact. We also travel back to the origins of the web with Tim Berners-Lee, guided by Margorieth Tejeira, and discover unexpected uses of AI with Ángela Ortega in a new segment. Inspired by the challenge posed by Gary Marcus, we analyze how far AI can get toward milestones that today seem like science fiction. Will it surprise us with unexpected advances, or remain trapped in the hype bubble?

This Week in Google (MP3)
IM 809: Fun Mustache - Gary Marcus, Digg Returns, AI.com

This Week in Google (MP3)

Play Episode Listen Later Mar 6, 2025 174:10


Interview with Gary Marcus.

• Turing Award Goes to 2 Pioneers of Artificial Intelligence
• Meta Discusses AI Data Center Project That Could Cost $200 Billion
• Satya Nadella Argues AI's True Value Will Come When It Finds Killer App Akin To Email or Excel
• Google's co-founder tells AI staff to stop "building nanny products"
• 27-Year-Old EXE Became Python In Minutes. Is AI-Assisted Reverse Engineering Next? (Slashdot)
• Crossing the uncanny valley of conversational voice
• What to know about deepfakes bill backed by Melania Trump
• Digg is getting another revival, this time with an injection of AI
• Google lets Americans delete their search results
• 'The Brutalist' Director Brady Corbet Responds to AI Backlash
• Alexis Ohanian Joins Project Liberty's TikTok Bid
• YouTube Says It Now Has More Than 1 Billion Monthly Viewers of Podcast Content
• Skype is shutting down after two decades
• LATimes insights page
• PDF example from the LATimes
• Gebru: "

Big Technology Podcast
OpenAI's New Model, Jensen's Bold Claim, Alexa+ Is Here

Big Technology Podcast

Play Episode Listen Later Feb 28, 2025 59:49


Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover:
1) OpenAI's release of GPT 4.5
2) Is GPT 4.5 a major advance or what?
3) What better EQ gets you in an AI model
4) What reasoning advances can be built on top of GPT 4.5
5) Is AI product or model more important?
6) Gary Marcus says OpenAI is in trouble
7) Anthropic releases Claude Sonnet 3.7
8) NVIDIA earnings
9) Jensen says reasoning costs 100x typical LLMs
10) Meta wants to build a standalone AI app
11) Will Alexa+ work?
12) RIP Skype

Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here's 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

SHIFT
A List of What to Watch in 2025

SHIFT

Play Episode Listen Later Jan 15, 2025 21:45


We catch up with cognitive scientist Gary Marcus on his list of predictions for what to watch in AI this year, and beyond.

We Meet: Author & Cognitive Scientist Gary Marcus

Credits: This episode of SHIFT was produced by Jennifer Strong with help from Emma Cillekens. It was mixed by Garret Lang, with original music from him and Jacob Gorski. Art by Meg Marco.

Mixture of Experts
Episode 36: OpenAI o3, DeepSeek-V3, and the Brundage/Marcus AI bet

Mixture of Experts

Play Episode Listen Later Jan 3, 2025 39:19


Is deep learning hitting a wall? It's 2025 and Mixture of Experts is back and better than ever. In episode 36, host Tim Hwang is joined by Chris Hay, Kate Soule, and Kush Varshney to debrief one of the biggest releases of 2024, OpenAI o3. Next, DeepSeek-V3 is here! Finally, will AGI exist in 2027? The experts dissect the AI bet between Miles Brundage and Gary Marcus. All that and more on the first Mixture of Experts of 2025.

The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.

00:00 — Intro
00:49 — OpenAI o3
14:40 — DeepSeek-V3
28:00 — The Brundage/Marcus bet

Keen On Democracy
Episode 2289: Gary Marcus on how Artificial General Intelligence (AGI) is, in the long run, inevitable

Keen On Democracy

Play Episode Listen Later Dec 31, 2024 41:43


Gary Marcus is amongst the world's leading skeptics on the AI revolution. So it's worth taking note when Marcus admits that "of course we are getting to AGI eventually". No, he says, artificial general intelligence (AGI) won't arrive in 2027 or perhaps even 2050. But it will happen, he confidently predicts, by 2100. That only underlines Marcus' argument, made in his acclaimed 2024 book Taming Silicon Valley, of the desperate need to regulate AI before it regulates us. And it also contextualizes our short-term preoccupation with corporate pioneers of generative AI technology like OpenAI, Anthropic, DeepMind, and xAI. As Marcus argues, it's likely that the dominant AI technology that will get us to AGI by the end of the 21st century hasn't even been invented yet.

Gary Marcus is a leading voice in artificial intelligence, well known for his challenges to contemporary AI. He is a scientist and best-selling author and was founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber. A Professor Emeritus at NYU, he is the author of five previous books, including the bestseller Guitar Zero; Kluge (one of The Economist's eight best books on the brain and consciousness); and Rebooting AI (with Ernest Davis), one of Forbes's seven must-read books on AI.

Named one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best-known broadcasters and commentators. In addition to presenting KEEN ON, he is the host of the long-running How To Fix Democracy show. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER, and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children.

Keen On is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe

Yang Speaks
The Best of 2024: Highlights from the Year that Shook Politics!

Yang Speaks

Play Episode Listen Later Dec 30, 2024 67:55


The best moments from the Forward podcast in 2024 are here! Highlights include editor and audience picks featuring Rainn Wilson, Gary Marcus, Johann Hari, and more, with deep dives into mental health, spirituality, and innovative ideas to make our future better and brighter. Don't miss it!

Follow Andrew Yang: https://andrewyang.com | https://x.com/andrewyang

Get 50% off Factor at https://factormeals.com/yang50
Get an extra 3 months free at https://expressvpn.com/yang
Get 20% off + 2 free pillows at https://helixsleep.com/yang code helixpartner20

Subscribe to Forward:
Apple: https://podcasts.apple.com/podcast/id1508035243
Spotify: https://open.spotify.com/show/25cFfnG3lGuypTerKDxKia

To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy
Learn more about your ad choices. Visit https://podcastchoices.com/adchoices

Sognandoilpiano: la musica, nel profondo.
Imparare la musica da adulti: cosa dice la scienza?

Sognandoilpiano: la musica, nel profondo.

Play Episode Listen Later Dec 29, 2024 17:50


What happens when a cognitive psychologist, convinced he is tone-deaf and has no musical talent, decides to learn to play an instrument at 39?

unSILOed with Greg LaBlanc
487. Challenging AI's Capabilities with Gary Marcus

unSILOed with Greg LaBlanc

Play Episode Listen Later Dec 6, 2024 48:14


In the last five years, artificial intelligence has exploded, but there are a lot of holes in how it works, what it is and is not capable of, and what a realistic future of AI looks like. Gary Marcus is an emeritus professor of psychology and neural science at NYU and an expert in AI. His books, like Taming Silicon Valley: How We Can Ensure That AI Works for Us and Rebooting AI: Building Artificial Intelligence We Can Trust, explore the limitations and challenges of contemporary AI.

Gary and Greg discuss the misconceptions about AI's current capabilities and the "gullibility gap" where people overestimate AI's abilities, the societal impacts of AI including misinformation and discrimination, and why AI might need regulatory oversight akin to the FDA.

unSILOed Podcast is produced by University FM.

Show Links:

Recommended Resources:
• Thinking, Fast and Slow by Daniel Kahneman
• Joseph Weizenbaum
• "Overregularization in Language Acquisition" by Gary Marcus and Steven Pinker
• Lina Khan
• Reid Hoffman
• Tesla collides with private jet (YouTube)
• Cruise (autonomous vehicle)
• AlphaFold
• "The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence" by Gary Marcus

Guest Profile:
• Professional Website
• Social Media Profile on LinkedIn
• Substack

His Work:
• Taming Silicon Valley: How We Can Ensure That AI Works for Us
• Rebooting AI: Building Artificial Intelligence We Can Trust
• Kluge: The Haphazard Evolution of the Human Mind
• The Algebraic Mind: Integrating Connectionism and Cognitive Science (Learning, Development, and Conceptual Change)

Episode Quotes:

Is Gary pessimistic about AI's future?
30:28: [With AI] I think the last five years have been a kind of digression, a detour from the work that we actually need to do. But I think we will get there. People are already realizing that the economics are not there, the reliability is not there. At some point, there will be an appetite to do something different. It's very difficult right now to do anything different because so many resources go into this one approach that makes it hard to start a startup to do anything else. Expectations are too high because people want magical AI that can answer any question, and we don't actually know how to do that with reliability right now. There are all kinds of sociological problems, but they will be solved. Not only that, but I'm somebody who wants AI to succeed.

Why AI hallucinations can't be fixed while the system is running
21:02: Any given hallucination is created by the same mechanism as any given truth that comes out of these systems. So, it's all built by the same thing. With your less-than, greater-than bug, you can work on it selectively in a modular system; you fix it. But the only way you can kill hallucinations is to not run the system. As long as you run the system, you're going to get it sometimes, because that's how it works.

Should we help people cultivate their uniquely human common sense?
43:01: In general, critical thinking skills are always useful. It's not just common sense; a lot of it is scientific method and reasoning. I think the most important thing that people learn in psychology grad school is whenever you've done an experiment and you think your hypothesis works, someone clever can come up with another hypothesis and point out a control group that you haven't done. That's a really valuable lesson. That breaks some of the confirmation bias and really raises one's level of sophistication. That's beyond common sense. It's part of scientific reasoning; those things are incredibly useful. I think they'll still be useful in 20 years.

Marketplace Tech
It's not too late to change the future of AI

Marketplace Tech

Play Episode Listen Later Nov 13, 2024 12:44


Gary Marcus is worried about AI. The professor emeritus at NYU doesn’t count himself a luddite or techno-pessimist. But Marcus has become one of the loudest voices of caution when it comes to AI. He's chronicled some of the funniest and most disturbing errors made by current tools like ChatGPT, calling out the many costs – both human and environmental – of an industry that continues to accrete money and power. In his new book “Taming Silicon Valley: How We Can Ensure That AI Works for Us,” Marcus lays out his vision for a responsible path forward. Marketplace's Meghan McCarty Carino spoke to Marcus about that path and how it may be further out of reach, though not impossible, given the results of this year's presidential election.

Marketplace All-in-One
It's not too late to change the future of AI

Marketplace All-in-One

Play Episode Listen Later Nov 13, 2024 12:44


Gary Marcus is worried about AI. The professor emeritus at NYU doesn’t count himself a luddite or techno-pessimist. But Marcus has become one of the loudest voices of caution when it comes to AI. He's chronicled some of the funniest and most disturbing errors made by current tools like ChatGPT, calling out the many costs – both human and environmental – of an industry that continues to accrete money and power. In his new book “Taming Silicon Valley: How We Can Ensure That AI Works for Us,” Marcus lays out his vision for a responsible path forward. Marketplace's Meghan McCarty Carino spoke to Marcus about that path and how it may be further out of reach, though not impossible, given the results of this year's presidential election.

The Data Chief
Five Best Practices to Succeed with Data and GenAI: Lessons from Leaders

The Data Chief

Play Episode Listen Later Nov 13, 2024 31:15


Key Moments:
• Focusing on Value with Bill Schmarzo (1:48)
• Unlocking the Collective Genius with Walid Mehanna (4:07)
• Building a Data-Literate Workforce with Valerie Logan (5:58)
• Creating a Human-Centric AI Strategy with Sadie St. Lawrence (7:40)
• Selecting the Right Tools with Katie Russell (11:23)
• Implementing Tools Responsibly with Robert Garnett (16:00)
• Why Clean Data Matters with Barr Moses (19:36)
• Ensuring Responsible AI for the Long Term with Dr. Gary Marcus (25:45)

Key Quotes:
• "Data-driven is not important. Value-driven—that's what's important. We should focus on value." — Bill Schmarzo, Head of Customer Data Innovation at Dell Technologies
• "Our role was rather to activate the organizational muscle… to try things out and tell us what has the highest opportunity and possibility." — Walid Mehanna, Chief Data and AI Officer at Merck Group
• "It's really a mindset and a muscle… we need to foster this kind of lasting change." — Valerie Logan, CEO of the Datalodge
• "Teaching people to ask better questions is more about critical thinking than technology." — Sadie St. Lawrence, Founder of the Human Machine Collaboration Institute
• "We wanted to make analytics accessible to everyone, combining real-time data and intuitive tools so every team member can gain insights and contribute to our mission to decarbonize." — Katie Russell, Head of Data and Analytics at OVO Energy
• "As we are looking at applications of AI within our environment, we are focused first on responsibility, making sure that we have a broad enough data set when we're building machine learning models, for instance. And so that's at the heart of anything that we do." — Robert Garnett, Vice President for Government Analytics and Health Benefits Cost of Care at Elevance Health
• "Our world is moving towards a place where data is the product—and in that world, directionally accurate just doesn't cut it anymore." — Barr Moses, CEO and Co-Founder of Monte Carlo
• "The tech policy that we set right now is going to really affect the rest of our lives." — Dr. Gary Marcus, Scientist, Advisor to Governments and Corporations, and Author of Taming Silicon Valley

Guest Bios:

Bill Schmarzo has extensive hands-on experience in the areas of big data, data science, design thinking, data monetization, and data economics. Bill is currently part of Dell Technology's core data management leadership team, where he is responsible for spearheading customer co-creation engagements to identify and prioritize the key data management, data science, and data monetization requirements.

Walid Mehanna is Chief Data & AI Officer at Merck KGaA, Darmstadt, Germany, where he leads the company's Data & AI organization, delivering value, governance, architecture, engineering, and operations across the company globally. With many years of experience in startups, IT, and consulting for major corporations, Walid has a strong understanding of the intersection between business and technology.

Katie Russell is the Data Director at OVO Energy, leading teams of data scientists, data engineers, and analysts who are transforming OVO's data capability. As part of a technology-led business, leveraging data using artificial intelligence keeps OVO truly innovative, delivering the best possible service for its customers.

Robert Garnett serves as Vice President for Government Analytics and Health Benefits Cost of Care at Elevance Health. In this role, he leads a data-driven organization supporting analytics and insights for Medicaid, Medicare, Commercial, and enterprise customers in the areas of population health, cost of care, performance management, operational excellence, and quality improvement.

Valerie Logan founded The Data Lodge in 2019 and is as committed to data literacy as it gets. With train-the-trainer bootcamps and a peer community, she's certifying the world's first Data Literacy Program Leads. In 2023, The Data Lodge was acquired as the basis of a newly formed venture, Data Society Group (DSG), aimed at fostering data and AI literacy and cultural change at scale. Valerie is excited to also serve as the Chief Strategy Officer of DSG. Previously, Valerie was a Gartner Research VP on the CDO team, where she pioneered data literacy research and was awarded Gartner's Top Thought Leadership Award.

Sadie St. Lawrence is on a personal mission to create a more compassionate and connected world through technology. Having grown up on a farm in Iowa, she witnessed first-hand how advancements in technology rapidly changed how we work and earn a living, which in turn affected the overall success of a community. Through her work, she noticed that while many organizations and individuals have good intentions when it comes to D&I in data careers, there was a lack of progress.

Gary Marcus is a leading voice in artificial intelligence. He is a scientist, best-selling author, and serial entrepreneur (founder of Robust.AI and Geometric Intelligence, acquired by Uber). He is well known for his challenges to contemporary AI, anticipating many of the current limitations decades in advance, and for his research in human language development and cognitive neuroscience. An Emeritus Professor of Psychology and Neural Science at NYU, he is the author of six books.

Hear more from Cindi Howson here. Sponsored by ThoughtSpot.

Cloud Security Podcast by Google
EP196 AI+TI: What Happens When Two Intelligences Meet?

Cloud Security Podcast by Google

Play Episode Listen Later Oct 28, 2024 28:08


Guest: Vijay Ganti, Director of Product Management, Google Cloud Security

Topics:
• What have been the biggest pain points for organizations trying to use threat intelligence (TI)?
• Why has it been so difficult to convert threat knowledge into effective security measures in the past?
• In the realm of AI, there's often hype (and people who assume "it's all hype"). What's genuinely different about AI now, particularly in the context of threat intelligence?
• Can you explain the concept of "AI-driven operationalization" in Google TI? How does it work in practice?
• What's the balance between human expertise and AI in the TI process? Are there specific areas where you see the balance between human and AI involvement shifting in a few years?
• Google Threat Intelligence aims to be different. Why are we better from the client's point of view?

Resources:
• Google Threat Intel website
• "The Future of the Brain" book by Gary Marcus et al.
• Detection engineering blog (Part 9) and the rest of the detection engineering blog series by David French
• The pyramid of pain blog, the classic
• "Scaling Up Malware Analysis with Gemini 1.5 Flash" and "From Assistant to Analyst: The Power of Gemini 1.5 Pro for Malware Analysis" blogs on Gemini for security

Scrum Master Toolbox Podcast
BONUS: AI-Driven Agile, Speeding Up Feedback Cycles for Better Product Iteration, And More AI Transformations | Jurgen Appelo

Scrum Master Toolbox Podcast

Play Episode Listen Later Oct 19, 2024 42:35


In this BONUS episode, leadership expert and entrepreneur Jurgen Appelo joins us to dive into the transformative power of AI in today's workplaces. Creator of the unFIX model and author of Management 3.0, Jurgen shares his insights on how AI is revolutionizing team collaboration, creativity, and innovation. This engaging conversation covers practical examples, personal stories, and thought-provoking ideas for anyone interested in leveraging AI to thrive in their career and business.

AI and the Future of Collaboration

"AI gives me more time to focus on the things I really enjoy." Jurgen kicks off by discussing the major changes AI is bringing to how teams collaborate and get work done. He highlights how AI tools like ChatGPT are enhancing feedback loops in product development, allowing teams to gain insights faster and more efficiently. Jurgen shares how he's used AI to improve his own writing, helping his editor focus more on storytelling rather than grammar corrections. For teams, AI is already making client interactions smoother and boosting productivity. "AI gives teams more time to focus on creativity and innovation by automating repetitive tasks and improving workflow efficiency."

AI as an Assistant or Creative Partner?

"We need to learn to delegate to AI." Jurgen dives deeper into his personal experience of managing multiple AI systems to develop a library of use cases and patterns. He sees AI as a powerful assistant, capable of generating creative ideas and enhancing human work, but stresses that we're still in the early stages. To truly maximize AI's potential, people need to learn how to delegate tasks to AI more effectively, while AI systems evolve to help us think beyond our usual patterns. "Delegating to AI allows us to break free from old habits and explore new creative possibilities."

AI's Role in Personal Development

"AI is a general-purpose technology, like the internet was in the beginning." Jurgen sees vast potential for AI to enhance personal and professional growth, though many of its future applications are still unknown. He compares AI to the early days of the internet: a tool with endless possibilities yet to be fully realized. Right now, AI can help individuals automate simple tasks, but it has the potential to do so much more, including reshaping how we approach learning and career development. "AI could revolutionize personal development by helping people organize and prioritize their learning journeys."

AI and Creativity: Can It Be a True Collaborator?

"AI can give you instant feedback on whatever you create." Jurgen discusses how AI can enhance creativity within teams, providing immediate feedback on ideas and helping teams refine their concepts without leaving their desks. He mentions real-world examples, such as using AI to generate designs and suggestions in creative fields, giving people access to insights they might not have considered otherwise. "AI can act as a creative collaborator, offering immediate, actionable feedback that pushes innovation forward."

The Exciting Future of AI in the Workplace

"I'm an optimist—AI frees us up to do more of what we love." Looking ahead, Jurgen expresses optimism about AI's potential to change the way we work. While AI will inevitably displace some jobs, he believes it will also enable people to focus on tasks they truly enjoy. AI levels the playing field between small entrepreneurs and large enterprises by making high-quality tools accessible to everyone. This shift will create new opportunities and competition in the market. "AI will free up time for the tasks that matter most while leveling the playing field for entrepreneurs and businesses alike."

Resources for Further Exploration

Looking to dive deeper into the AI revolution? Jurgen recommends the book Co-Intelligence by Ethan Mollick for those curious about AI's collaborative potential, and Rebooting AI by Gary Marcus for a more skeptical view on its impact. "If you're looking to learn more about AI, these books will give you both the optimistic and cautious perspectives."

About Jurgen Appelo

Jurgen Appelo is a writer, speaker, and entrepreneur who helps organizations thrive in the 21st century. Creator of the unFIX model, he focuses on organization design, continuous innovation, and enhancing the human experience. Jurgen is also the author of Management 3.0 and a recognized leadership expert by Inc.com. You can link with Jurgen Appelo on LinkedIn.

Data Science at Home
What Big Tech Isn't Telling You About AI (Ep. 267)

Data Science at Home

Play Episode Listen Later Oct 12, 2024 19:15


Are AI giants really building trustworthy systems? A groundbreaking transparency report by Stanford, MIT, and Princeton says no. In this episode, we expose the shocking lack of transparency in AI development and how it impacts bias, safety, and trust in the technology. We'll break down Gary Marcus's demands for more openness and what consumers should know about the AI products shaping their lives.

Check out our new YouTube channel https://www.youtube.com/@DataScienceatHome and subscribe!

Cool links:
https://mitpress.mit.edu/9780262551069/taming-silicon-valley/
http://garymarcus.com/index.html

BLUEPRINT
How GenAI is Changing Your SOC for the Better with Seth Misenar

BLUEPRINT

Play Episode Listen Later Oct 9, 2024 96:22


Click here to send us your ideas and feedback on Blueprint!

In this mega-discussion with Seth Misenar on GenAI and LLM usage for security operations, we cover some very interesting questions, such as:
- The importance of natural language processing in Sec Ops
- How AI is helping us detect phishing email
- Where and how AI is lowering the bar for entry-level SOC roles
- Should we worry about AI hallucinations or AI taking our jobs?
- What is a reasoning model, and how is it different from what we've seen so far?
- The future of AI: multimodal interaction, larger context windows, RAG, and more
- What is Agentic AI, and why will it change the game?

Episode Links:
- The book from Manning that Seth liked as a thoughtful, accessible on-ramp: https://www.manning.com/books/introduction-to-generative-ai
- Coursera prompt engineering course series: https://coursera.org/specializations/prompt-engineering
- Gandalf online prompt injection challenges from Lakera (FYI, Seth finds a lot of Lakera's content to be really high-quality and useful): https://gandalf.lakera.ai/baseline
- "Nonsense on stilts" reference from Gary Marcus in response to the Google employee claiming LaMDA was sentient: https://garymarcus.substack.com/p/nonsense-on-stilts?utm_source=twitter&sd=pf
- AI as a monster with a smiley face image: https://knowyourmeme.com/memes/shoggoth-with-smiley-face-artificial-intelligence
- Ethan Mollick is the Wharton professor Seth mentioned; Seth says his "One Useful Thing" Substack is a valuable and thought-provoking source: https://www.oneusefulthing.org/. His book, Co-Intelligence: Living and Working with AI, would also be worth checking out.

Learn more about SANS' SOC courses at sans.org/soc

Connect with John:
- LinkedIn
- Take a Training Course with John

SOC Analyst and Leadership Training Courses:
- SEC450: Blue Team Fundamentals - Security Operations and Analysis
- LDR551: Building and Leading Security Operations Centers

SANS:
- Cyber Defense Course List
- Upcoming Training Events
- Free tools, VMs, cheat sheets, and more for cyber defenders

Business Pants
BIZ NUGGETS: Toyota's DEI flipflop, not WeWork WeWork, AI > climate change, and the Buzzfeed obsession

Business Pants

Play Episode Listen Later Oct 8, 2024 29:57


Live from Alabama's Anti-ESG unscented Rose Garden, it's an all-new terrific Tuesday edition of Business Pants. Joined by Analyst-Hole Matt Moscardi! In today's ESG-sized onesie called October 8, 2024: BIZ NUGGETS! Our show today is being sponsored by Free Float Analytics, the only platform measuring board power, connections, and performance for FREE.

DAMION1
In our 'Because it's 2024 and "Hey, why don't you just save on gas and buy a Chevy Bolt" is just too damn complicated' headline of the week: Uber to launch AI assistant powered by OpenAI's GPT-4o to help drivers go electric.
In our 'Spotify co-founders Daniel Ek and Martin Lorentzon announce that shareholders ARE children and that's why they own about 25% of actual shares but about 75% of voting power' headline of the week: Spotify's HR chief says remote staff aren't 'children' as company reaffirms work-from-anywhere policy.
In our 'Boeing wishes it had Qantas's problems' headline of the week: Qantas apologizes after R-rated movie played to passengers on Sydney to Tokyo flight.
In our 'What type of card do I buy for a patriarchy where 0.8% of CEOs are women?' headline of the week: Women in Asia are slowly starting to break through historic barriers to the top of the corporate world.
In our '3M board planning to drop commitment to Reduce Emissions Across the Value Chain by More than 40% by 2030 by 2026' headline of the week: 3M Commits to Reduce Emissions Across the Value Chain by More than 40% by 2030.

MATT1
In our 'Keep the oily parts' headline of the week: Big Oil Urges Trump Not to Gut Biden's Climate Law. They really like some of the carbon capture funding. Oh, also, the global record $7 trillion in oil subsidies, many of which stayed in the bill, they're cool too.
In our 'What if we call it "outdoor air conditioning upgrades" or "enhancing nature's HVAC"?' headline of the week: Most CEOs Sticking with Climate Strategies, but Changing How they Communicate it: KPMG Survey.
In our 'Dog rescinds promise not to pee on the rug' headline of the week: BP drops goal to reduce oil and gas output.
In our 'Board members everywhere shocked to find out they're just average using Free Float Analytics data' headline of the week: 7 out of 10 employees dangerously underestimate or overestimate their skill levels, new analysis finds. 74% of active directors have historically delivered between the 40th and 60th percentile of TSR in whatever industry board they sit on. Those directors tend to have the highest average age (62 years old) and are overwhelmingly male (73%).
In our 'Adam Neumann announces WorkWe, a real estate co-working company not to be confused with WeWork, the company he founded and bankrupted, because the Work and the We are swapped' headline of the week: Adam Neumann's Latest Project Is a WeWork Competitor. Workflow is a shared office real estate company.

DAMION1
In our 'Don't worry, their independent board is there to ensure the company sticks to its not-for-profit bylaws... oh wait, never mind... Sam made those women look shrill' headline of the week: AI expert Gary Marcus thinks OpenAI will be the 'most Orwellian company of all time'.
In our 'The other three were Berkshire Hathaway board members with the last name "Buffett" and they all said "pull my finger"' headline of the week: 7 out of 10 employees dangerously underestimate or overestimate their skill levels, new analysis finds.
In our 'John Deere CEO John May says we should go all in on getting rid of DEI policies because "we are never going to get rid of systemic racism anyway"' headline of the week: Former Google CEO Eric Schmidt says we should go all in on building AI data centers because 'we are never going to meet our climate goals anyway'.
In our 'Hey ma, are we supposed to be surprised when Japan ranked 125th out of 146 countries in the World Economic Forum's 2023 Gender Gap Index? Also, a Toyota Prius is a man's car, like a Deere tractor. Tell Dad' headline of the week: Toyota follows growing trend of companies halting DEI policies and initiatives.
In our 'Olive Garden is betting $1 billion on unlimited breadsticks as a "healthier" carrot' headline of the week: PepsiCo is betting $1 billion on tortilla chips as a 'healthier' snack.

MATT2
In our 'Correction: Toyota USA's all-male board asks Toyota's 86% male board whether it was cool to "ditch the woman and gay stuff" because "a guy on the internet asked us to"' headline of the week: Toyota follows growing trend of companies halting DEI policies and initiatives. Bringing the total to 108 board members who cowered under their pillows at the idea that they'd have to talk to the gay people they said they "definitely were cool with".
In our 'White guy who heard other white guy says something about black people once apparently not discriminated against' headline of the week: Blackrock Beats Equity Trader's Bias Suit Over ESG, DEI Policies. BlackRock equity trader claimed that, because Larry Fink said diversity once in a speech years after he was fired, he was fired for being white, male, and heterosexual.
In our 'So I'm not sure I can protect democracy, social cohesion, or the self esteem of young girls, but I can DEFINITELY turn this Porsche sports car into a minivan, because miracles ARE possible' headline of the week: Mark Zuckerberg Redesigns Porsche Cayenne Turbo GT Into A Minivan For Wife Priscilla Chan, Gets A 911 GT3 For Himself.
In our 'Dear Linda, I know you are the daughter of Vince McMahon and class III director at Truth Social, so I thought I'd come to you first - are you really working with TitsAssMillionaire123 with a guaranteed investment scheme?' headline of the week: 'I haven't told my wife about this blunder': How Truth Social users are getting scammed out of thousands of dollars.
In our 'Vivek Ramaswamy sends Edge One Capital a Letter Saying "Finders keepers, nyah nyah"' headline of the week: Edge One Capital Sends Letter to BuzzFeed Demanding Overhaul of Corporate Board and Governance. From the Edge One Capital letter: "Given its lack of relevant experience, it's unsurprising that the incumbent board has for years failed to hold its CEO responsible for destroying shareholder value. What we find truly extraordinary, however, is the extent to which the board has actively created an environment where Jonah Peretti faces no repercussions for poor decisions and is instead insulated from shareholder influence." Peretti has dual-class shares with 64% control and hand-picks the board.

The Data Chief
Can We Tame AI Before It's Too Late? With Dr. Gary Marcus

The Data Chief

Play Episode Listen Later Oct 2, 2024 44:38


Key Moments:
Disappointment With Today's AI Systems (4:00)
Congressional Inaction And The Need for AI Regulation (9:00)
The Seduction of AI Propaganda (15:00)
The Misguided Hypothesis of "Scale is All You Need" (23:00)
Don't Be Fooled by the Masters of AI Hype (27:00)
The Global AI Race and the Need for International Cooperation (33:00)

Key Quotes:
"This matters. It matters as much as immigration policy or financial policy. The tech policy that we set right now is going to really affect the rest of our lives."
"We should want to have AI that can be like an oracle that can answer any question. There is value in trying to build such a technology. But, we don't actually have that technology. A lot of people are seduced into thinking that we do. But it may be decades away."
"Nobody can look you in the eye and say, 'I understand how human intelligence works'. If they say that, they're lying to you. It's still an unexplored domain."

Mentions:
Taming Silicon Valley: How We Can Ensure AI Works for All Of Us
Kluge: The Haphazard Construction of the Human Mind
The Algebraic Mind: Integrating Connectionism and Cognitive Science (Learning, Development, and Conceptual Change)
The EU AI Act
AI Generates Covertly Racist Decisions About People Based On Their Dialect

Dr. Gary Marcus Bio:
Gary Marcus is a leading voice in artificial intelligence. He is a scientist, best-selling author, and serial entrepreneur (Founder of Robust.AI and Geometric.AI, acquired by Uber). He is well-known for his challenges to contemporary AI, anticipating many of the current limitations decades in advance, and for his research in human language development and cognitive neuroscience. An Emeritus Professor of Psychology and Neural Science at NYU, he is the author of six books, including The Algebraic Mind, Kluge, The Birth of the Mind, the New York Times Bestseller Guitar Zero, and most recently Taming Silicon Valley: How We Can Ensure AI Works for All of Us. He has often contributed to The New Yorker, Wired, and The New York Times.

Hear more from Cindi Howson here. Sponsored by ThoughtSpot.

Your Undivided Attention
‘We Have to Get It Right': Gary Marcus On Untamed AI

Your Undivided Attention

Play Episode Listen Later Sep 26, 2024 41:43


It's a confusing moment in AI. Depending on who you ask, we're either on the fast track to AI that's smarter than most humans, or the technology is about to hit a wall. Gary Marcus is in the latter camp. He's a cognitive psychologist and computer scientist who built his own successful AI start-up. But he's also been called AI's loudest critic. On Your Undivided Attention this week, Gary sits down with CHT Executive Director Daniel Barcay to defend his skepticism of generative AI and to discuss what we need to do as a society to get the rollout of this technology right… which is the focus of his new book, Taming Silicon Valley: How We Can Ensure That AI Works for Us.

The bottom line: no matter how quickly AI progresses, Gary argues that our society is woefully unprepared for the risks that will come from the AI we already have.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
Link to Gary's book: Taming Silicon Valley: How We Can Ensure That AI Works for Us
Further reading on the deepfake of the CEO of India's National Stock Exchange
Further reading on the deepfake of an explosion near the Pentagon
The study Gary cited on AI and false memories
Footage from Gary and Sam Altman's Senate testimony

RECOMMENDED YUA EPISODES
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet
No One is Immune to AI Harms with Dr. Joy Buolamwini

Correction: Gary mistakenly listed the reliability of GPS systems as 98%. The federal government's standard for GPS reliability is 95%.

Machine Learning Street Talk
Taming Silicon Valley - Prof. Gary Marcus

Machine Learning Street Talk

Play Episode Listen Later Sep 24, 2024 116:55


AI expert Prof. Gary Marcus doesn't mince words about today's artificial intelligence. He argues that despite the buzz, chatbots like ChatGPT aren't as smart as they seem and could cause real problems if we're not careful. Marcus is worried about tech companies putting profits before people. He thinks AI could make fake news and privacy issues even worse. He's also concerned that a few big tech companies have too much power. Looking ahead, Marcus believes the AI hype will die down as reality sets in. He wants to see AI developed in smarter, more responsible ways. His message to the public? We need to speak up and demand better AI before it's too late.

Buy Taming Silicon Valley: https://amzn.to/3XTlC5s

Gary Marcus: https://garymarcus.substack.com/ https://x.com/GaryMarcus

Interviewer: Dr. Tim Scarfe (Refs in top comment)

TOC:
[00:00:00] AI Flaws, Improvements & Industry Critique
[00:16:29] AI Safety Theater & Image Generation Issues
[00:23:49] AI's Lack of World Models & Human-like Understanding
[00:31:09] LLMs: Superficial Intelligence vs. True Reasoning
[00:34:45] AI in Specialized Domains: Chess, Coding & Limitations
[00:42:10] AI-Generated Code: Capabilities & Human-AI Interaction
[00:48:10] AI Regulation: Industry Resistance & Oversight Challenges
[00:54:55] Copyright Issues in AI & Tech Business Models
[00:57:26] AI's Societal Impact: Risks, Misinformation & Ethics
[01:23:14] AI X-risk, Alignment & Moral Principles Implementation
[01:37:10] Persistent AI Flaws: System Limitations & Architecture Challenges
[01:44:33] AI Future: Surveillance Concerns, Economic Challenges & Neuro-Symbolic AI

YT version with refs: https://youtu.be/o9MfuUoGlSw

Keen On Democracy
Episode 2199: Anindya Ghose on Maximizing our Well-Being in the Age of AI

Keen On Democracy

Play Episode Listen Later Sep 23, 2024 42:01


Not everyone fears that the AI revolution represents an existential event for humanity. Anindya Ghose, the Heinz Riehl Professor of Business at NYU's illustrious Stern school, actually believes AI can positively impact our daily lives - from health and wellness, to work, education, even love and dating. In Thrive, a new book he co-authored with Ravi Bapna, Ghose explains how we can maximize our well-being in the AI age. It won't be easy, he acknowledges. But, in sharp contrast with skeptics like Gary Marcus, Ghose believes that the AI revolution can nudge us into living richer, happier, healthier and more productive lives. Let's hope he's right.

Anindya Ghose is the Heinz Riehl Chair Professor of Technology and Marketing at New York University's Leonard N. Stern School of Business, where he holds a joint appointment in the TOPS and Marketing departments. He is the author of TAP: Unlocking The Mobile Economy, which is a double winner in the 2018 Axiom Business Book Awards and has been translated into five languages (Korean, Mandarin, Vietnamese, Japanese and Taiwanese). He is the Director of the Masters of Business Analytics and AI Program at NYU Stern. He is a Leonard Stern Faculty Scholar with an MBA scholarship (the Ghose Scholarship) named after him. He has been a Visiting Professor at the Wharton School of Business. In 2014, he was named by Poets & Quants as one of the Top 40 Professors Under 40 Worldwide and by Analytics Week as one of the "Top 200 Thought Leaders in Big Data and Business Analytics".

Named as one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best known broadcasters and commentators. In addition to presenting KEEN ON, he is the host of the long-running How To Fix Democracy show. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE.
Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children.Keen On is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe

The Sunday Show
Gary Marcus Wants to Tame Silicon Valley

The Sunday Show

Play Episode Listen Later Sep 22, 2024 44:31


Gary Marcus writes that the companies developing artificial intelligence systems want the citizens of democracies “to absorb all the negative externalities” that might arise from their products, “such as the damage to democracy from Generative AI–produced misinformation, or cybercrime and kidnapping schemes using deepfaked voice clones—without them paying a nickel.” And, he says, we need to fight back. His new book is called Taming Silicon Valley: How We Can Ensure That AI Works for Us, published by MIT Press on September 17, 2024.

Prompt
TænkGPT, a dangerous, dangerous man, and nuclear psychosis

Prompt

Play Episode Listen Later Sep 19, 2024 54:30


OpenAI has launched a new chatbot that can reason and think before it speaks. Is it really the quantum leap OpenAI claims, or just another round of sales talk from the tech giant? Meanwhile, the American professor emeritus Gary Marcus warns that Sam Altman, CEO of OpenAI, is on his way to becoming the world's most dangerous man. According to Marcus, Altman manipulates behind the scenes, even though he outwardly appears calm and trustworthy. Finally, we dive into a major leak from the Kremlin, which reveals that Russia has hired a PR agency to sow fear and doubt in the West. Henrik digs into the agency's goal of making half of the Western population fear the future. Hosts: Marcel Mirzaei-Fard, tech analyst, and Henrik Moltke, tech correspondent.

Science Salon
Taming Silicon Valley: AI's Perils and Promise

Science Salon

Play Episode Listen Later Sep 17, 2024 71:21


AI could bring unprecedented advancements in science and technology, but Gary Marcus, in Taming Silicon Valley, warns it might also lead to democracy's collapse or even human extinction. With Big Tech manipulating both the public and government, are we prepared for the consequences of AI's unchecked power? Marcus urges that the choices we make today will define our future. Can we harness AI's potential without losing control? Exposing AI's flaws and Big Tech's grip on policymakers, Marcus offers eight solutions to avert disaster—from ensuring data rights to enforcing strict oversight. But will governments act in time? Marcus calls for citizens to push for change before it's too late. Will we rise to the challenge, or let AI's future be shaped by a few, for their own gain? Shermer and Marcus discuss various aspects of AI, including the current state of AI, AGI, and Generative AI like ChatGPT, and the AI we should aim for. They explore the key problems to solve, the 12 biggest threats of Generative AI, and the moral landscape of Silicon Valley, highlighting its influence on public opinion and government policy. Issues like data rights, privacy, transparency, and liability are examined, alongside the need for independent oversight. The conversation also covers the incentives driving AI development, the debate between private and government regulation, and the importance of international AI governance for managing its global impact.

Keen On Democracy
Episode 2194: Marietje Schaake explains how to save democracy from Silicon Valley

Keen On Democracy

Play Episode Listen Later Sep 17, 2024 49:50


This is the final episode of a trilogy of critical conversations about the digital revolution. Earlier this week, Gary Marcus explained how to tame Silicon Valley's AI barons. Then Mark Weinstein talked to us about the reinvention of social media. And now we have the former member of the European Parliament and current Fellow at Stanford's Cyber Policy Center, Marietje Schaake, explaining how we can save democracy from Silicon Valley. In her provocative new book, Tech Coup, Schaake explains how, under the cover of “innovation,” Silicon Valley companies have successfully resisted regulation and have even begun to seize power from governments themselves. So what to do? For Marietje Schaake, in addition to government regulation, what we need is a radical reinvention of government so that our political institutions have the agility and intelligence to take on Silicon Valley.

Marietje Schaake is a Fellow at Stanford's Cyber Policy Center and at the Institute for Human-Centered AI. She is a columnist for the Financial Times and serves on a number of not-for-profit boards as well as the UN's High Level Advisory Body on AI. Between 2009 and 2019 she served as a Member of the European Parliament, where she worked on trade, foreign, and tech policy. She is the author of The Tech Coup.

Named as one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best known broadcasters and commentators. In addition to presenting KEEN ON, he is the host of the long-running How To Fix Democracy show. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children.

Keen On is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode.
If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe

Yang Speaks
BIG TECH, democracy and how AI can make things worse

Yang Speaks

Play Episode Listen Later Sep 16, 2024 52:15


In this week's episode, entrepreneur and prominent AI researcher Gary Marcus critiques the growing generative AI industry. Marcus highlights concerns about corporate irresponsibility, widespread AI deployment, lack of regulation, security, and inherent flaws in AI technology, as discussed during his Senate testimony in 2023. His new book, Taming Silicon Valley, warns about Big Tech's potential harm to consumers and democracy without regulatory action, especially at the federal level. Learn more about the challenges and downsides of the AI industry, along with optimistic solutions and guidelines that could help shape a brighter future with AI for generations to come.

Watch the full episode on YouTube: https://youtu.be/jjayDaEVgFs

----

Follow Gary Marcus: https://garymarcus.substack.com/ | https://x.com/garymarcus | Taming Silicon Valley
Follow Andrew Yang: https://andrewyang.com | https://x.com/andrewyang

Get 50% off Factor at https://factormeals.com/yang50
Get an extra 3 months free at https://expressvpn.com/yang
Get 20% off + 2 free pillows at https://helixsleep.com/yang code helixpartner20
Get 20% off your first order at https://ashanderie.com/ code yang

----

Subscribe to Forward:
Apple — https://podcasts.apple.com/podcast/id1508035243
Spotify — https://open.spotify.com/show/25cFfnG3lGuypTerKDxKia

To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy
Learn more about your ad choices. Visit https://podcastchoices.com/adchoices

POLITICO Dispatch
Breaking the Silicon Valley hype machine

POLITICO Dispatch

Play Episode Listen Later Sep 16, 2024 18:38


False promises of a high-tech future that's good for humanity have allowed Silicon Valley to hoodwink policymakers and the public, argues cognitive scientist and tech critic Gary Marcus. And with the rapid advancement of artificial intelligence, Marcus says it's more urgent than ever for governments to rein the industry in. On today's POLITICO Tech, Marcus joins host Steven Overly to discuss ideas for how to do that from his new book, “Taming Silicon Valley.”

Le Show
Le Show For The Week Of September 15, 2024

Le Show

Play Episode Listen Later Sep 15, 2024 58:30


On this week's edition of Le Show, Harry brings us regular features like News of the Godly, News of the Atom, The Apologies of the Week, and News of Crypto-Winter. We also get to hear from author Gary Marcus, who joins the program to discuss artificial intelligence and his new book, Taming Silicon Valley.

Keen On Democracy
Episode 2192: Mark Weinstein on how to restore our sanity online

Keen On Democracy

Play Episode Listen Later Sep 15, 2024 47:35


Early social media pioneer Mark Weinstein is deeply disturbed by the current state of social media. He's not alone of course, but in his new book, Restoring Our Sanity Online, Weinstein lays out what he boasts is a “revolutionary social framework” to clean up social media. The book comes with blurbs from tech royalty like Sir Tim Berners-Lee and Steve Wozniak, but I wonder if Weinstein, in his attempt to right social media through a more decentralized Web3-style architecture, is trying to fix yesterday's problem. In tech, timing is everything, and the future of online sanity, as Gary Marcus noted a couple of days ago on this show, will be determined by our ability to harness AI. Rather than social media, that's what we now need a revolutionary framework to protect us from.

MARK WEINSTEIN is a world-renowned tech entrepreneur, contemporary thought leader, privacy expert, and one of the visionary inventors of social networking. His adventure in social media has lasted over 25 years through three award-winning personal social media platforms enjoyed by millions of members worldwide. Mark is frequently interviewed and published in major media including the Wall Street Journal, New York Times, Fox, CNN, BBC, PBS, Newsweek, Los Angeles Times, The Hill, and many more worldwide. He covers topics including social media, privacy, AI, free speech, antitrust, and protecting kids online. During his social media years, Mark's advisors have included Sir Tim Berners-Lee, the inventor of the Web; Steve “Woz” Wozniak, co-founder of Apple; Sherry Turkle, MIT academic and tech ethics leader; Raj Sisodia, co-founder of the Conscious Capitalism movement; and many others. A leading privacy advocate, Mark's landmark 2020 TED Talk, “The Rise of Surveillance Capitalism,” exposed the many infractions and manipulations by Big Tech, and called for a privacy revolution. Mark has also been listed as one of the “Top 8 Minds in Online Privacy” and named “Privacy by Design Ambassador” by the Canadian government.

Named as one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best known broadcasters and commentators. In addition to presenting KEEN ON, he is the host of the long-running How To Fix Democracy show. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children.

Keen On is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe

Keen On Democracy
Episode 2191: Why the future has to be built by innovators, rather than just hoped for by optimists

Keen On Democracy

Play Episode Listen Later Sep 14, 2024 44:56


Yesterday, KEEN ON featured a conversation with the technologist Gary Marcus about how we can ensure that AI works for us. Today, on our regular That Was The Week tech weekly roundup, Andrew and Keith Teare discuss the role of human agency in determining our tech future. For Keith, optimism in itself is what he calls a “false God”. It's not enough just to hope for a better future, he reminds us, echoing Gary Marcus, but we all have a responsibility to go out and build it. Perhaps. But as Andrew reminds us, our supposedly common future is vulnerable to the whims of imminent trillionaires like Elon Musk, whose wealth and power now eclipse those of most of the world's nation-states.

Keith Teare is the founder and CEO of SignalRank Corporation. Previously, he was executive chairman at Accelerated Digital Ventures Ltd., a U.K.-based global investment company focused on startups at all stages. Teare studied at the University of Kent and is the author of “The Easy Net Book” and “Under Siege.” He writes regularly for TechCrunch and publishes the “That Was The Week” newsletter.

Named as one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best known broadcasters and commentators. In addition to presenting KEEN ON, he is the host of the long-running How To Fix Democracy show. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children.

Keen On is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe

Keen On Democracy
Episode 2190: Gary Marcus on How to Tame Silicon Valley's AI Barons

Keen On Democracy

Play Episode Listen Later Sep 13, 2024 46:51


Few artificial intelligence experts have been as outspoken or prescient as the author and entrepreneur Gary Marcus. In his new book, Taming Silicon Valley, Marcus takes on the new AI barons of Silicon Valley - billionaires like OpenAI CEO Sam Altman who are building an AI future that works for them rather than for the rest of us. In technology, Marcus argues, human agency is all-important. So Marcus' new polemic seizes back the mantle from these Silicon Valley barons in its insistence that AI must work for us.

GARY MARCUS is a leading voice in artificial intelligence. He is a scientist, best-selling author, and serial entrepreneur (Founder of Robust.AI and Geometric.AI, acquired by Uber). He is well-known for his challenges to contemporary AI, anticipating many of the current limitations decades in advance, and for his research in human language development and cognitive neuroscience. An Emeritus Professor of Psychology and Neural Science at NYU, he is the author of six books, including The Algebraic Mind, Kluge, The Birth of the Mind, and the New York Times Bestseller Guitar Zero. He has often contributed to The New Yorker, Wired, and The New York Times. His new book, Taming Silicon Valley: How We Can Ensure that AI Works for Us, is published by MIT Press.

Named as one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best known broadcasters and commentators. In addition to presenting KEEN ON, he is the host of the long-running How To Fix Democracy show. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children.

Keen On is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode.
If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe

WorldAffairs
In AI We Trust? A 2024 Election Special

WorldAffairs

Play Episode Listen Later Sep 2, 2024 54:00


In May, Senate Majority Leader Chuck Schumer presented a sprawling “road map” for regulating artificial intelligence. But tech experts have called the plan “pathetic,” and many critics believe Washington is out of touch. And California's legislature will soon vote on a plan that would put guardrails on the biggest AI players. This week, we're airing our special election episode from June about why AI may be the big bad “X Factor” of the upcoming presidential election. First, we'll hear from Josh Lawson, Director of AI and Democracy at the Aspen Institute. Then, US Congressman Ted Lieu and Dr. Gary Marcus, Founder of Robust AI and Geometric AI, join Ray Suarez to talk about the future of AI, and whether it can be regulated in time.

Guests:
Josh Lawson, Director of AI and Democracy at the Aspen Institute
US Representative Ted Lieu (D-CA 36th District)
Dr. Gary Marcus, Founder of Robust AI and Geometric AI

Host: Ray Suarez

If you appreciate this episode and want to support the work we do, please consider making a donation to Commonwealth Club World Affairs. We cannot do this work without your help. Thank you.

Machine Learning Street Talk
Gary Marcus' keynote at AGI-24

Machine Learning Street Talk

Play Episode Listen Later Aug 17, 2024 72:16


Prof Gary Marcus revisited his keynote from AGI-21, noting that many of the issues he highlighted then are still relevant today despite significant advances in AI.

MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api.

Gary Marcus criticized current large language models (LLMs) and generative AI for their unreliability, tendency to hallucinate, and inability to truly understand concepts. Marcus argued that the AI field is experiencing diminishing returns with current approaches, particularly the "scaling hypothesis" that simply adding more data and compute will lead to AGI. He advocated for a hybrid approach to AI that combines deep learning with symbolic AI, emphasizing the need for systems with deeper conceptual understanding. Marcus highlighted the importance of developing AI with innate understanding of concepts like space, time, and causality. He expressed concern about the moral decline in Silicon Valley and the rush to deploy potentially harmful AI technologies without adequate safeguards. Marcus predicted a possible upcoming "AI winter" due to inflated valuations, lack of profitability, and overhyped promises in the industry. He stressed the need for better regulation of AI, including transparency in training data, full disclosure of testing, and independent auditing of AI systems. Marcus proposed the creation of national and global AI agencies to oversee the development and deployment of AI technologies. He concluded by emphasizing the importance of interdisciplinary collaboration, focusing on robust AI with deep understanding, and implementing smart, agile governance for AI and AGI.
YT Version (very high quality filmed): https://youtu.be/91SK90SahHc

Pre-order Gary's new book here: Taming Silicon Valley: How We Can Ensure That AI Works for Us https://amzn.to/4fO46pY

Filmed at the AGI-24 conference: https://agi-conf.org/2024/

TOC:
00:00:00 Introduction
00:02:34 Introduction by Ben G
00:05:17 Gary Marcus begins talk
00:07:38 Critiquing current state of AI
00:12:21 Lack of progress on key AI challenges
00:16:05 Continued reliability issues with AI
00:19:54 Economic challenges for AI industry
00:25:11 Need for hybrid AI approaches
00:29:58 Moral decline in Silicon Valley
00:34:59 Risks of current generative AI
00:40:43 Need for AI regulation and governance
00:49:21 Concluding thoughts
00:54:38 Q&A: Cycles of AI hype and winters
01:00:10 Predicting a potential AI winter
01:02:46 Discussion on interdisciplinary approach
01:05:46 Question on regulating AI
01:07:27 Ben G's perspective on AI winter

The ChatGPT Report
#106 Grok 2.0 Image Creation Here We Come

The ChatGPT Report

Play Episode Listen Later Aug 15, 2024 12:15


In this episode of TheChatGPTReport, we spotlight midaiartwork's "The Final Hours of Pompeii" video, a captivating blend of history and AI artistry created using Midjourney, Google-ImageFX, Luma, Runway, and ElevenLabs. We also dive into the release of Grok 2.0, examining its new image generation capabilities powered by Black Forest Labs' FLUX.1 AI model. While Grok 2.0 impresses with quick image creation, it still falls short of Midjourney's advanced customization options. We touch on Google's recent Gemini demo, showcasing the real-world challenges of AI implementation. The episode wraps up with a discussion on the impact of AI in marketing, citing a study that reveals decreased consumer trust and purchase intent when AI is prominently featured. We explore Gary Marcus's controversial prediction of an imminent AI bubble collapse, considering both the current hype and the potential for future legitimacy in AI applications. This episode offers a balanced look at the latest developments in AI technology and its effects on various industries.

Many Minds
From the archive: What does ChatGPT really know?

Many Minds

Play Episode Listen Later Jul 24, 2024 55:10


Hi friends, we're on a brief summer break at the moment. We'll have a new episode for you in August. In the meanwhile, enjoy this pick from our archives! ---- [originally aired January 25, 2023] By now you've probably heard about the new chatbot called ChatGPT. There's no question it's something of a marvel. It distills complex information into clear prose; it offers instructions and suggestions; it reasons its way through problems. With the right prompting, it can even mimic famous writers. And it does all this with an air of cool competence, of intelligence. But, if you're like me, you've probably also been wondering: What's really going on here? What are ChatGPT—and other large language models like it—actually doing? How much of their apparent competence is just smoke and mirrors? In what sense, if any, do they have human-like capacities? My guest today is Dr. Murray Shanahan. Murray is Professor of Cognitive Robotics at Imperial College London and Senior Research Scientist at DeepMind. He's the author of numerous articles and several books at the lively intersections of artificial intelligence, neuroscience, and philosophy. Very recently, Murray put out a paper titled 'Talking about Large Language Models', and it's the focus of our conversation today. In the paper, Murray argues that—tempting as it may be—it's not appropriate to talk about large language models in anthropomorphic terms. Not yet, anyway. Here, we chat about the rapid rise of large language models and the basics of how they work. We discuss how a model that—at its base—simply does “next-word prediction” can be engineered into a savvy chatbot like ChatGPT. We talk about why ChatGPT lacks genuine “knowledge” and “understanding”—at least as we currently use those terms. And we discuss what it might take for these models to eventually possess richer, more human-like capacities.
Along the way, we touch on: emergence, prompt engineering, embodiment and grounding, image generation models, Wittgenstein, the intentional stance, soft robots, and "exotic mind-like entities." Before we get to it, just a friendly reminder: applications are now open for the Diverse Intelligences Summer Institute (or DISI). DISI will be held this June/July in St Andrews Scotland—the program consists of three weeks of intense interdisciplinary engagement with exactly the kinds of ideas and questions we like to wrestle with here on this show. If you're intrigued—and I hope you are!—check out disi.org for more info. Alright friends, on to my decidedly human chat, with Dr. Murray Shanahan. Enjoy!   The paper we discuss is here. A transcript of this episode is here.   Notes and links 6:30 – The 2017 “breakthrough” article by Vaswani and colleagues. 8:00 – A popular article about GPT-3. 10:00 – A popular article about some of the impressive—and not so impressive—behaviors of ChatGPT. For more discussion of ChatGPT and other large language models, see another interview with Dr. Shanahan, as well as interviews with Emily Bender and Margaret Mitchell, with Gary Marcus, and with Sam Altman (CEO of OpenAI, which created ChatGPT). 14:00 – A widely discussed paper by Emily Bender and colleagues on the “dangers of stochastic parrots.” 19:00 – A blog post about “prompt engineering”. Another blog post about the concept of Reinforcement Learning through Human Feedback, in the context of ChatGPT. 30:00 – One of Dr. Shanahan's books is titled, Embodiment and the Inner Life. 39:00 – An example of a robotic agent, SayCan, which is connected to a language model. 40:30 – On the notion of embodiment in the cognitive sciences, see the classic book by Francisco Varela and colleagues, The Embodied Mind. 44:00 – For a detailed primer on the philosophy of Ludwig Wittgenstein, see here. 45:00 – See Dr. Shanahan's general audience essay on “conscious exotica" and the space of possible minds. 
49:00 – See Dennett's book, The Intentional Stance.   Dr. Shanahan recommends: Artificial Intelligence: A Guide for Thinking Humans, by Melanie Mitchell (see also our earlier episode with Dr. Mitchell) ‘Abstraction for Deep Reinforcement Learning', by M. Shanahan and M. Mitchell   You can read more about Murray's work on his website and follow him on Twitter.   Many Minds is a project of the Diverse Intelligences Summer Institute (DISI) (https://disi.org), which is made possible by a generous grant from the Templeton World Charity Foundation to UCLA. It is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd (https://www.mayhilldesigns.co.uk/). Our transcripts are created by Sarah Dopierala (https://sarahdopierala.wordpress.com/). You can subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you like to listen to podcasts. **You can now subscribe to the Many Minds newsletter here!** We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com. For updates about the show, visit our website (https://disi.org/manyminds/), or follow us on Twitter: @ManyMindsPod.

Strong Songs
Post-Season Update, Summer Plans, & Some Music Recs

Strong Songs

Play Episode Listen Later Jul 12, 2024 30:48


An update on what's next now that Season Six is complete, along with some book and music recommendations!

BOOK RECOMMENDATIONS
The History of Jazz by Ted Gioia, 3rd edition - 1997, revised 2021
Guitar Zero: The New Musician and the Science of Learning by Gary Marcus, 2012

MUSIC RECOMMENDATION LINKS
Steve Pardo - DOS - (also in Video Game form on Steam)
Completions - I Needed Help
Ruth Moody - Wanderer
The Onesies - The Onesies Dig With the Wrong Foot
Sheena Ringo - Hiizurutokoro
Adi Oasis - Lotus Glow
St. Vincent - All Born Screaming
Laura Marling - Short Movie (Director's Cut)

----LINKS-----
SUPPORT STRONG SONGS!
Paypal | Patreon.com/Strongsongs
MERCH STORE
store.strongsongspodcast.com
SOCIAL MEDIA
IG: @Kirk_Hamilton | Threads: @Kirk_Hamilton
NEWSLETTER
newsletter.kirkhamilton.com
JOIN THE DISCORD
https://discord.gg/GCvKqAM8Sm
STRONG SONGS PLAYLISTS
Spotify | Apple Music | YouTube Music
SHOW ART
Tom Deja, Bossman Graphics
--------------------
JULY 2024 WHOLE-NOTE PATRONS
Robyn MetcalfeBrian TempletCesarBob TuckerCorpus FriskyBen BarronCatherine WarnerDamon WhiteKaya WoodallJay SwartzMiriam JoyRushDaniel Hannon-BarryChristopher MillerJamie WhiteChristopher McConnellDavid MascettiJoe LaskaKen HirshMelanie AndrichJenness GardnerPaul DelaneyDave SharpeSami SamhuriJeremy DawsonAccessViolationAndre BremerDave Florey
JULY 2024 HALF-NOTE PATRONS
AshleyThe Seattle Trans And Nonbinary Choral EnsembleKevin MarceloMatt CSamantha CoatesJamesMark NadasdiJeffDan CutterJoseph RomeroOl ParkerJohn BerryDanielle KrizMichael YorkClint McElroyMordok's Vape PenInmar GivoniMichael SingerMerv AdrianJoe GalloLauren KnottsDave KolasHenry MindlinMonica St.
AngeloStephen WolkwitzSuzanneRand LeShayMaxeric spMatthew JonesThomasAnthony MentzJames McMurryEthan LaserBrian John PeterChris RemoMatt SchoenthalAaron WilsonDent EarlCarlos LernerMisty HaisfieldAbraham BenrubiChris KotarbaCallum WebbLynda MacNeilDick MorganBen SteinSusan GreenGrettir AsmundarsonSean MurphyAlan BroughRandal VegterGo Birds!Robert Granatdave malloyNick GallowayHeather Jjohn halpinPeter HardingDavidJohn BaumanMartín SalíasStu BakerSteve MartinoDr Arthur A GrayCarolinaGary PierceMatt BaxterLuigi BocciaE Margaret WartonCharles McGeeCatherine ClauseEthan BaumanKenIsWearingAHatJordan BlockAaron WadeJeff UlmDavid FutterJamieDeebsPortland Eye CareRichard SneddonCliff WhitlockJanice BerryDoreen CarlsonDavid McDarbyWendy GilchristElliot RosenLisa TurnerPaul WayperMiles FormanBruno GaetaKenneth JungAdam StofskyZak RemerRishi SahayJeffrey BeanJason ReitmanAilie FraserRob TsukNATALIE MISTILISJosh SingerAmy Lynn ThornsenAdam WKelli BrockingtonVictoria Yumino caposselaSteve PaquinDavid JoskeBernard KhooRobert HeuerDavid NoahGeraldine ButlerMadeleine MaderJason PrattAbbie BergDoug BelewDermot CrowleyAchint SrivastavaRyan RairighMichael BermanLinda DuffyBonnie PrinsenLiz SegerEoin de BurcaKevin PotterM Shane BordersDallas HockleyJason GerryNell MorseNathan GouwensLauren ReayEric PrestemonCookies250Damian BradyAngela LivingstoneDiane HughesMichael CasnerLowell MeyerStephen TsoneffJoshua HillGeoff GoldenPascal RuegerRandy SouzaClare HolbertonDiane TurnerTom ColemanDhu WikMelmaniacEric HelmJonathan DanielsMichael FlahertyCaro Fieldmichael bochnerNaomi WatsonDavid CushmanAlexanderChris KGavin DoigSam FennTanner MortonAJ SchusterJennifer BushDavid StroudBrad CallahanAmanda FurlottiAndrew BakerAndrew FairL.B. MorseBill ThorntonBrian AmoebasBrett DouvilleJeffrey OlsonMatt BetzelNate from KalamazooMelanie StiversRichard TollerAlexander PolsonEarl LozadaJustin McElroyArjun SharmaJames JohnsonKevin MorrellColin Hodo

TED Talks Technology
The TED AI Show: Is AI just all hype? w/ Gary Marcus

TED Talks Technology

Play Episode Listen Later Jul 9, 2024 39:04


Are we mistaking the remarkable skills of tools like ChatGPT with genuine intelligence? AI skeptic Gary Marcus pulls no punches when he warns that believing in the hype of generative AI models might be distracting us from building the type of AI future we actually want. Bilawal and Gary delve into the nuanced perspectives beyond the AI hype cycle, and try to find the common ground between healthy skepticism and techno-optimism.This is an episode of The TED AI Show with Bilawal Sidhu. For transcripts for The TED AI Show, visit go.ted.com/TTAIS-transcripts 

WorldAffairs
Could AI Swing the November Election?

WorldAffairs

Play Episode Listen Later Jun 10, 2024 53:59


In May, Senate Majority Leader Chuck Schumer presented a sprawling “road map” for regulating artificial intelligence. The report called for $32 billion in spending to put guardrails on the rapidly evolving technology. But tech experts have called the plan “pathetic”, and many critics believe Washington is out of touch. This week, in our latest special election series, why AI may be the big bad “X Factor” of the upcoming presidential election. We'll hear from Josh Lawson, Director of AI and Democracy at the Aspen Institute. Then, US Congressman Ted Lieu and Dr. Gary Marcus, founder of Robust.AI and Geometric Intelligence, join Ray Suarez to talk about the future of AI, and whether it can be regulated in time.
Guests:
Josh Lawson, Director of AI and Democracy at the Aspen Institute
US Representative Ted Lieu (D-CA 36th District)
Dr. Gary Marcus, founder of Robust.AI and Geometric Intelligence
Host: Ray Suarez
If you appreciate this episode and want to support the work we do, please consider making a donation to World Affairs. We cannot do this work without your help. Thank you.

Faster, Please! — The Podcast

While AI doomers proselytize their catastrophic message, many politicians are recognizing that the loss of America's competitive edge poses a much more real threat than the supposed “existential risk” of AI. Today on Faster, Please!—The Podcast, I talk with Adam Thierer about the current state of the AI policy landscape and the accompanying fierce regulatory debate.

Thierer is a senior fellow at the R Street Institute, where he promotes greater freedom for innovation and entrepreneurship. Prior to R Street, he worked as a senior fellow at the Mercatus Center at George Mason University, president of the Progress and Freedom Foundation, and at the Adam Smith Institute, Heritage Foundation, and Cato Institute.

In This Episode

* A changing approach (1:09)
* The global AI race (7:26)
* The political economy of AI (10:24)
* Regulatory risk (16:10)
* AI policy under Trump (22:29)

Below is a lightly edited transcript of our conversation.

A changing approach (1:09)

Pethokoukis: Let's start out with just trying to figure out the state of play when it comes to AI regulation. Now I remember we had people calling for the AI Pause, and then we had a Biden executive order. They're passing some sort of act in Europe on AI, and now recently a Senate working group on AI put out a list of guidelines or recommendations on AI. Given where we started, which was “shut it down,” to where we're at now, has that path been what you might've expected, given where we were when we were at full panic?

Thierer: No, I think we've moved into a better place, I think.
Let's look back just one year ago this week: In the Senate Judiciary Committee, there was a hearing where Sam Altman of OpenAI testified along with Gary Marcus, who's a well-known AI worrywart, and the lawmakers were falling all over themselves to praise Sam and Gary for basically calling for a variety of really extreme forms of AI regulation and controls, including not just national but international regulatory bodies, new general-purpose licensing systems for AI, a variety of different types of liability schemes, transparency mandates, and disclosures such as so-called “AI nutritional labels.” I could go on down the list of all the types of regulations that were being proposed that day. And of course this followed, as you said, Jim, a call for an AI Pause, without any details about exactly how that would work, but it got a lot of signatories, including people like Elon Musk, which is very strange considering he was at the same time deploying one of the biggest AI systems in history. But enough about Elon.

The bottom line is that those were dark days, and I think the tenor of the debate and the proposals on the table today, one year after that hearing, have improved significantly. That's the good news. The bad news is that there's still a lot of problematic regulatory proposals percolating throughout the United States. As of this morning, as we're taping the show, we are looking at 738 different AI bills pending in the United States according to multistate.ai, an AI tracking service. One hundred and—I think—eleven of those are federal bills. The vast majority of it is state. But that count does not include all of the municipal regulatory proposals that are pending for AI systems, including some that have already passed in cities like New York City, which already has a very important AI regulation governing algorithmic hiring practices. So the bottom line, Jim, is it's the best of times, it's the worst of times.
Things have both gotten better and worse.

Well—just because the most recent thing that happened—I know with this the Senate working group, and they were having all kinds of technologists and economists come in and testify. So that report, is it really calling for anything specific to happen? What's in there other than just kicking it back to all the committees? If you just read that report, what does it want to happen?

A crucial thing about this report, and let's be clear what this is, because it was an important report: Senate Majority Leader Chuck Schumer was in charge of this, along with a bipartisan group of other major senators, and this started the idea of the so-called “AI insight forums” last year, and it seemed to be pulling some authority away from committees and taking it to the highest levels of the Senate to say, “Hey, we're going to dictate AI policy and we're really scared.” And so that did not look good. I think in the process, just politically speaking—

That, in itself, is a good example. That really represents the level of concern that was going around, that we need to do something different and special to address this existential risk.

And this was the leader of the Senate doing it and taking away power, in theory, from his committee members—which did not go over well with said committee members, I should add. And so a whole bunch of hearings took place, but they were not really formal hearings, they were just these AI insight forum working groups where a lot of people sat around and said the same things they always say on a daily basis, the positives and negatives of AI. And the bottom line is, just last week, a report came out from this bipartisan Senate AI working group that was important because, again, it did not adopt the recommendations that were on the table a year ago when the process got started last June.
It did not have overarching general-purpose licensing of artificial intelligence, no new call for a brand new Federal Computer Commission for America, no sweeping calls for liability schemes like some senators want, or other sorts of mandates.

Instead, it recommended a variety of more generic policy reforms and then kicked a lot of the authority back to those committee members to say, “You fill out the details, for better or worse.” And it also included a lot of spending. One thing that seemingly everybody agrees on in this debate is that, well, the government should spend a lot more money, and so another $30 billion was on the table of sort of high-tech pork for AI-related stuff, but it really did signal a pretty important shift in approach, enough that it agitated the groups on the more pro-regulatory side of this debate, who said, “Oh, this isn't enough! We were expecting Schumer to go for broke and swing for the fences with really aggressive regulation, and he's really let us down!” To which I can only say, “Well, thank God he did,” because we're in a better place right now because we're taking a more wait-and-see approach on at least some of these issues.

A big, big part of the change in this narrative is an acknowledgement of what I like to call the realpolitik of AI policy and specifically the realpolitik of geopolitics

The global AI race (7:26)

I'm going to ask you in a minute what stuff in those recommendations worries you, but before I do, what happened? How did we get from where we were a year ago to where we've landed today?

A big, big part of the change in this narrative is an acknowledgement of what I like to call the realpolitik of AI policy and specifically the realpolitik of geopolitics.
We face major adversaries, but specifically China, who has said in documents that the CCP [Chinese Communist Party] has published that they want to be the global leader in algorithmic and computational technologies by 2030, and they're spending a lot of money putting a lot of state resources into it. Now, I don't necessarily believe that means they're going to automatically win, of course, but they're taking it seriously. But it's not just China. We have seen in the past year massive state investments and important innovations take place across the globe.

I'm always reminding people that people talk a big game about America's foundational models and large-scale systems, including things like Meta's Llama, which was the biggest open source system in the world a year ago, and then two months after Meta launched Llama, their open source platform, the government of the UAE came out with Falcon 180B, an open source AI model that was two-and-a-half times larger than Facebook's model. That meant America's AI supremacy in open source foundational models lasted for two months. And that's not China, that's the government of the UAE, which has piled massive resources into being a global leader in computation. Meanwhile, China's launched their biggest super—I'm sorry, Russia's launched their biggest supercomputer system ever; you've got Europe applying a lot of resources into it, and so on and so forth. A lot of folks in the Senate have come to realize that problem is real: that if we shoot ourselves in the foot as a nation, they could race ahead and gain competitive and geopolitical strategic advantages over the United States if it hobbles our technology base. I think that's the first fundamental thing that's changed.

I think the other thing that changed, Jim, is just a little bit of existential-risk exhaustion. The rhetoric in this debate, as you've written about eloquently in your columns, has just been crazy.
I mean, I've never really seen anything like it in all the years we've been covering technology and economic policy. You and I have both written, this is really an unprecedented level of hysteria. And I think, at some point, the Chicken-Littleism just got to be too much, and I think some saner minds prevailed and said, “Okay, well wait a minute. We don't really need to pause the entire history of computation to address these hypothetical worst-case scenarios. Maybe there's a better plan than that.” And so we're starting to pull back from the abyss, if you will, a little bit, and the adults are reentering the conversation—a little bit, at least. So I think those are the two things that really changed more, although there were other things, but those were two big ones.

The political economy of AI (10:24)

To what extent do you think we saw the retreat from the more apocalyptic thinking—how much of that was due to what businesses were saying, venture capitalists, maybe other tech . . . ? What do you think were the key voices Congress started listening to a little bit more?

That's a great question. The political economy of AI policy and tech policy is something that is terrifically interesting to me. There are so many players and voices involved in AI policy because AI is the most important general-purpose technology of our time, and as a widespread broad base—

Do you have any doubt about that? (Let me cut you off.) Do you have any doubt about that?

I don't. I think it's unambiguous, and we live in a world of “combinatorial innovation,” as Hal Varian calls it, where technologies build on top of the other, one after another, but the thing is they all lead to greater computational capacity, and therefore, algorithmic and machine learning systems come out of those—if we allow it.
And the state of data science in this country has gotten to the point where it's so sophisticated because of our rich base of diverse types of digital technologies and computational technologies that finally we're going to break out of the endless cycle of AI booms and busts, and springs and winters, and we're going to have a summer. I think we're having it right now. And so that is going to come to affect every single segment and sector of our economy, including the government itself.

I think industry has been very, very scrambled and sort of atomistic in their approach to AI policy, and some of them have been downright opportunistic, trying to throw each other's competitors under the bus

Now let me let you go return to the political economy, what I was asking you about, what were the voices, sorry, but I wanted to get that in there.

Well, I think there are so many voices, I can't name them all today, obviously, but obviously we're going to start with one that's a quiet voice behind the scenes, but a huge one, which is, I think, the National Security community. I think clearly going back to our point about China and geopolitical security, I think a lot of people behind the scenes who care about these issues, including people in the Pentagon, I think they had conversations with certain members of Congress and said, “You know what? China exists. And if we're shooting ourselves in the foot as we begin this race for geopolitical strategic supremacy in an important new general-purpose technology arena, we're really hurting our underlying security as a nation.” I think that thinking is there.
So that's an important voice.

Secondly, I think industry has been very, very scrambled and sort of atomistic in their approach to AI policy, and some of them have been downright opportunistic, trying to throw each other's competitors under the bus, unfortunately, and that includes OpenAI trying to screw over other companies and technologies, which is dangerous, but the bottom line is: More and more of them are coming to realize, as they saw the actual details of regulation and thinking through the compliance costs, that “Hell no, we won't go, we're not going to do that. We need a better approach.” And it was always easier in the old days to respond to the existential risk route, like, “Oh yeah, sure, regulation is fine, we'll go along with it!” But then when you see the devilish details, you think twice and you realize, “This will completely undermine our competitive advantage in the space as a company or our investment or whatever else.” All you need to do is look at Exhibit A, which is Europe, and say, if you always run with worst-case scenario thinking and Chicken-Littleism is the basis of your technology policy, guess what? People respond to incentives and they flee.

Hatred of big tech is like the one great bipartisan, unifying theme of this Congress, if anything. But at the end of the day, I think everyone is thankful that those companies are headquartered in the United States and not Beijing, Brussels, or anywhere else.
It's interesting, the national security aspect, my little amateurish thought experiment would be, what would be our reaction, and what would be the reaction in Washington if, in November, 2022, instead of it being a company, an American company with a big investment from another American company having rolled out ChatGPT, what if it would've been Tencent, or Alibaba, or some other Chinese company that had rolled this out, something that's obviously a leap forward, and they had been ahead, even if they said, “Oh, we're two or three years ahead of America,” it would've been bigger than Sputnik, I think.

People are probably tired of hearing about AI—hopefully not, I hope they'll also listen to this podcast—but that would be all we would be talking about. We wouldn't be talking about job loss, and we wouldn't be talking about ‘The Terminator,' we'd be talking in pure geopolitical terms: the US has suffered a massive, massive defeat here, and who's to blame? What are we going to do? And anybody at that moment who would've said, “We need to launch cruise missile strikes on our own data centers” for fear. . . I mean! And I think you're right, the national security component, extremely important here.

In fact, I stole your little line about “Sputnik moment,” Jim, when I testified in front of the House Oversight Committee last month and I said, “Look, it would've been a true ‘Sputnik moment,' and instead it's those other countries that are left having the Sputnik moment, right? They're wondering, ‘How is it that, once again, the United States has gotten out ahead on digital and computational-based technologies?'” But thank God we did! And as I pointed out in the committee room that day, there's a lot of people who have problems with technology companies in Congress today. Hatred of big tech is like the one great bipartisan, unifying theme of this Congress, if anything.
But at the end of the day, I think everyone is thankful that those companies are headquartered in the United States and not Beijing, Brussels, or anywhere else. That's just a unifying theme. Everybody in the committee room that day nodded their head, “Yes, yes, absolutely. We still hate them, but we're thankful that they're here.” And that then extends to AI: Can the next generation of companies that they want to bring to Congress and bash and pull money from for their elections, can they once again exist in the United States?

Regulatory risk (16:10)

So whether it's that working group report, or what else you see in Congress, what are a couple, three areas where you're concerned, where there still seems to be some sort of regulatory momentum?

Let's divide it into a couple of chunks here. First of all, at the federal level, Congress is so damn dysfunctional that I'm not too worried that even if they have bad ideas, they're going to pursue them because they're just such a mess, they can't get any basic things done on things like baseline privacy legislation, or driverless car legislation, or even, hell, the budget and the border! They can't get basics done!

I think it's a big positive that one, while they're engaging in dysfunction, the technology is evolving. And I hope, if it's as important as I think you and I think, more money will be invested, we'll see more use cases, it'll be obvious—the downsides of screwing up the regulation I think will be more obvious, and I think that's a tailwind for this technology.

We're in violent agreement on that, Jim, and of course this goes by the name of “the pacing problem,” the idea that technology is outpacing law in many ways, and one man's pacing problem is another man's pacing benefit, in my opinion. There's a chance for technology to prove itself a little bit. That being said, we don't live in a legislative or regulatory vacuum.
We already have in the United States 439 government agencies and sub-agencies, 2.2 million employees just at the federal level. So many agencies are active right now trying to get their paws on artificial intelligence, and some of them already have it. You look at the FDA [Food and Drug Administration], the FAA [Federal Aviation Administration], NHTSA [National Highway Traffic Safety Administration]—I could go all through the alphabet soup of regulatory agencies that are already trying to regulate or overregulate AI right now.

Then you have the Biden administration, who's gone out and done a lot of cheerleading in favor of more aggressive unilateral regulation, regardless of what Congress says, and basically says, “To hell with all that stuff about Chevron Doctrine and major questions, we're just going to go do it! We're at least going to jawbone a lot and try to threaten regulation, and we're going to do it in the name of ‘algorithmic fairness,'” which is what their 100-plus-page executive order and their AI Bill of Rights say they're all about, as opposed to talking about AI opportunity and benefits—it's all misery. And it's like, “Look at how AI is just a massive tool of discrimination and bias, and we have to do something about it preemptively through a precautionary principle approach.” So if Congress isn't going to act, unfortunately the Biden administration already is and nobody's stopping them.

But that's not even the biggest problem. The biggest problem, going back to the point that there are 730-plus bills pending in the US right now, is that the vast majority of them are state and local.
And just last Friday, Governor Jared Polis of Colorado signed into law the first major AI regulatory measure in Colorado, and there's a bigger and badder bill pending right now in California, there's 80 different bills pending in New York alone, and any half of them would be a disaster.

I could go on down the list of troubling state patchwork problems that are going to develop for AI and ML [Machine Learning] systems, but the bottom line is this: This would be a complete and utter reversal of the winning formula that Congress and the Clinton administration gave us in the 1990s, which was a national—a global framework for global electronic commerce. It was very intentionally saying, “We're going to break with the Analog Era disaster, we're going to have a national framework that's pro-freedom to innovate, and we're going to make sure that these meddlesome barriers do not develop to online speech and commerce.” And yet, here with AI, we are witnessing a reversal of that. States are in the lead, and again, like I said, localities too, and Congress is sitting there, the dysfunctional soup that it is, saying, “Oh, maybe we should do something to spend a little bit more money to promote AI.” Well, we can spend all the money we want, but we can end up like Europe, who spends tons of money on techno-industrial policies and gets nothing for it because they can't get their innovation culture right, because they're regulating the living hell out of digital technology.

So you want Congress to take this away from the states?

I do. I do, but it's really, really hard. I think what we need to do is follow the model that we had in the Telecommunications Act of 1996 and the Internet Tax Freedom Act of 1998. We've also had moratoriums, not only through the Internet Tax Freedom Act, but through the Commercial Space Amendments having to do with commercial space travel and other bills.
Congress has handled the question of preemption before and put moratoria in place to say, “Let's have a learning period before we go do stupid things on a new technology sector that is fast moving and hard to understand.” I think that would be a reasonable response, but again, I have to go back to what we just talked about, Jim, which is that there's no chance of us probably getting it. There's no appetite in it. Not any of the 111 bills pending in Congress right now says a damn thing about state and local regulation of technology!Is the thrust of those federal bills, is it the kinds of stuff that you're generally worried about?Mostly, but not entirely. Some of it is narrower. A lot of these bills are like, “Let's take a look at AI and. . . fill in the blank: elections, AI and jobs, AI and whatever.” And some of them, on the merits, not terrible, others, I have concerns, but it's certainly better that we take a targeted sectoral approach to AI policy and regulation than having the broad-based, general-purpose stuff. Now, there are broad-based, general-purpose measures, and here's what they do, Jim: They basically say, “Look, instead of having a whole cloth new regulatory approach, let's build on the existing types of approaches being utilized in the Department of Commerce, namely through our NIST [National Institute of Standards and Technology], and NTIA [National Telecommunications and Information Administration] sub-agencies there. NIST is the National Standards Body, and basically they develop best practices through something called the AI Risk Management Framework for artificial intelligence development—and they're good! It's multi-stakeholder, it's bottom up, it's driven by the same principles that motivated the Clinton administration to do multi-stakeholder processes for the internet. Good model. It is non-regulatory, however. 
It is a consensus-based, multi-stakeholder, voluntary approach to developing consensus-based standards for best practices regarding various types of algorithmic services. These bills in Congress—and there's at least five of them that I count, that I've written about recently—say, “Let's take that existing infrastructure and give it some enforcement teeth. Let's basically say, ‘This policy infrastructure will be converted into a quasi-regulatory system,'” and there begins the dangerous path towards backdoor regulation of artificial intelligence in this country, and I think that's the most likely model we'll get. Like I said, five models, legislative models in the Senate alone that would do that to varying degrees.AI policy under Trump (22:29)Do you have any feel for what a Trump administration would want to do on this?I do, because a month before the Trump administration left office, they issued a report through the Office of Management and Budget (OMB), and it basically laid out for agencies a set of principles for how it should evaluate artificial intelligence systems, both that are used by the government or that they regulate in the private sector, and it was an excellent set of principles. It was a restatement of the importance of policy, forbearance and humility. It was a restatement of a belief in cost-benefit analysis and identifying not only existing regulatory capacity to address these problems, but also non-regulatory mechanisms or best practices or standards that could address some of these things. It was a really good memo. I praised it in a piece that I wrote just before the Trump administration left. Now, of course, the Trump administration may change.Yes, and also, the technology has changed. I mean, that was 2020 and a lot has happened, and I don't know where. . . . I'm not sure where all the Republicans are. I think some people get it. . 
.I think the problem, Jim, is that, for the Republican Party, and Trumpian conservatives, in particular, they face a time of choosing. And what I mean by this is that they have spent the last four to six years—and Trump egged this on—engaging in nonstop quote-unquote “big tech bashing” and making technology companies in the media out to be, as Trumps calls them, “the enemy of the American people.” And so many hearings now are just parading tech executives and others up there to be beaten with a stick in front of the public, and this is the new thing. And then there's just a flood of bills that would regulate traditional digital technologies, repeal things like Section 230, which is liability protection for the tech sector, and so on, child safety regulations.Meanwhile, that same Republican Party and Mr. Trump go around hating on Joe Biden in China. If it's one thing they can't stand more than big tech, it's Joe and China! And so, in a sense, they've got to choose, because their own policy proposals on technology could essentially kneecap America's technology base in a way that would open up the door to whether it's what they fear in the “woke DEI policies” of Biden or the CCP's preferred policy agenda for controlling computation in the world today. Choose two, you don't get all three. And I think this is going to be an interesting thing to watch if Mr. Trump comes back into office, do they pick up where that OMB memo left off, or do they go right back to beating that “We've got to kill big tech by any means necessary in a seek-and-destroy mission, to hell with the consequences.” And I don't know yet.Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe

POLITICO Dispatch
One critic's case for why artificial intelligence is actually dumb

POLITICO Dispatch

Play Episode Listen Later Apr 9, 2024 18:31


Gary Marcus is a cognitive scientist, serial entrepreneur, and outspoken AI critic. He's one of the tech experts pushing Senate Majority Leader Chuck Schumer and others in Washington to pass sweeping, stringent legislation to rein in AI -- technology that he doesn't actually think is very smart. Gary has written a piece for POLITICO Magazine, out tomorrow, arguing for what should be included in a big AI bill, and he talked with Steven Overly to make his case.

Le Show
Le Show For The Week Of April 7, 2024

Le Show

Play Episode Listen Later Apr 7, 2024 55:30


On this week's edition of Le Show, Harry brings us The Apologies of the Week, News of Musk Love, and recounts a recent experience with musician and photographer Henry Diltz. We then turn to the archives of Le Show for another batch of Highlights from the Recent Past with all sorts of sketches and songs. We'll hear Clintonsomething, Keeping Up with the Murdochs, Nixon in Heaven, music by The Folksmen, an interview with Gary Marcus and much more.

Metaverse Marketing
Oh My AI | What's Happening on the Internet?!

Metaverse Marketing

Play Episode Listen Later Apr 3, 2024 81:20


In this episode, Cathy and Lee talk about their favorite April Fools' pranks in tech. Cathy shares an April Fools' joke from Gary Marcus about an OpenAI sneak peek. In more AI news, the CEO of Stability AI stepped down. Lee talks about Lego's apology for using AI art. Cathy and Lee debate the "AI legal iceberg," then segue into what's happening on the Internet, including Steve Wozniak suing YouTube, Pornhub being blocked in Texas, and why video games can't quit microtransactions. Lee explains Oregon's governor signing the nation's first right-to-repair bill. Cathy interviews Andrew Rosen, founder of PARQOR and author of the Medium Shift.

Show Notes:
Nintendo April Fool's
Stability AI Founder Emad Mostaque Tanked His Billion-Dollar Startup
Lego is the latest brand to apologize for using AI art
The AI Industry Is Steaming Toward A Legal Iceberg
Lawsuit against YouTube led by Steve Wozniak set to continue
Pornhub is now blocked in Texas. Could Florida be next?
Why The $183 Billion Video Game Industry Can't Quit Microtransactions
Oregon governor signs nation's first right-to-repair bill that bans parts pairing
Apple sues former iOS engineer for allegedly leaking Vision Pro, Journal app details
Vladimir Putin orders creation of Russian game consoles, Steam-like cloud delivery, and OS

Subscribe and Share:
Cathy Hackl on LinkedIn
Lee Kebler on LinkedIn
Lily Snyder on LinkedIn

Hosted on Acast. See acast.com/privacy for more information.

The Wright Show
Does AI Understand Things? (Robert Wright & Gary Marcus)

The Wright Show

Play Episode Listen Later Feb 22, 2024 60:00 Very Popular


Gary's background and place in the AI world ... Are Gary's views on AI paradoxical? ... Bob: Searle's Chinese Room argument is dead ... Have LLMs demonstrated "theory of mind"? ... Arguing the semantics of an LLM's “semantic space” ... Do LLM representations map onto the real world? ... Can (and should) we slow down AI development? ... Gary: I've never seen a field as myopic as AI today ... The (symbolic?) future of AI ...

Bloggingheads.tv
Does AI Understand Things? (Robert Wright & Gary Marcus)

Bloggingheads.tv

Play Episode Listen Later Feb 22, 2024 60:00


Gary's background and place in the AI world ... Are Gary's views on AI paradoxical? ... Bob: Searle's Chinese Room argument is dead ... Have LLMs demonstrated "theory of mind"? ... Arguing the semantics of an LLM's “semantic space” ... Do LLM representations map onto the real world? ... Can (and should) we slow down AI development? ... Gary: I've never seen a field as myopic as AI today ... The (symbolic?) future of AI ...

Le Show
Le Show For The Week Of February 4, 2024

Le Show

Play Episode Listen Later Feb 4, 2024 57:42


On this week's edition of Le Show we take Another Trip to the Memory Hole, revisit regular features like News of the Atom, The Apologies of the Week, and News of Musk Love, then turn our attention to professor emeritus Gary Marcus, a frequent guest of the program, who discusses the latest obstacles and challenges of Artificial Intelligence.

Babbage from Economist Radio
Babbage: Sam Altman and Satya Nadella on their vision for AI

Babbage from Economist Radio

Play Episode Listen Later Jan 24, 2024 45:00 Very Popular


OpenAI and Microsoft are leaders in generative artificial intelligence (AI). OpenAI has built GPT-4, one of the world's most sophisticated large language models (LLMs), and Microsoft is injecting those algorithms into its products, from Word to Windows. At the World Economic Forum in Davos last week, Zanny Minton Beddoes, The Economist's editor-in-chief, interviewed Sam Altman and Satya Nadella, who run OpenAI and Microsoft respectively. They explained their vision for humanity's future with AI and addressed some thorny questions looming over the field, such as how AI that is better than humans at doing tasks might affect productivity, and how to ensure that the technology doesn't pose existential risks to society.

Host: Alok Jha, The Economist's science and technology editor. Contributors: Zanny Minton Beddoes, editor-in-chief of The Economist; Ludwig Siegele, The Economist's senior editor, AI initiatives; Sam Altman, chief executive of OpenAI; Satya Nadella, chief executive of Microsoft. If you subscribe to The Economist, you can watch the full interview on our website or app.

Essential listening, from our archive:
"Daniel Dennett on intelligence, both human and artificial", December 27th 2023
"Fei-Fei Li on how to really think about the future of AI", November 22nd 2023
"Mustafa Suleyman on how to prepare for the age of AI", September 13th 2023
"Vint Cerf on how to wisely regulate AI", July 5th 2023
"Is GPT-4 the dawn of true artificial intelligence?", with Gary Marcus, March 22nd 2023

Sign up for a free trial of Economist Podcasts+. If you're already a subscriber to The Economist, you'll have full access to all our shows as part of your subscription. For more information about how to access Economist Podcasts+, please visit our FAQs page or watch our video explaining how to link your account. Hosted on Acast. See acast.com/privacy for more information.

Economist Podcasts
Babbage: Sam Altman and Satya Nadella's vision for AI

Economist Podcasts

Play Episode Listen Later Jan 24, 2024 45:00


OpenAI and Microsoft are leaders in generative artificial intelligence (AI). OpenAI has built GPT-4, one of the world's most sophisticated large language models (LLMs), and Microsoft is injecting those algorithms into its products, from Word to Windows. At the World Economic Forum in Davos last week, Zanny Minton Beddoes, The Economist's editor-in-chief, interviewed Sam Altman and Satya Nadella, who run OpenAI and Microsoft respectively. They explained their vision for humanity's future with AI and addressed some thorny questions looming over the field, such as how AI that is better than humans at doing tasks might affect productivity, and how to ensure that the technology doesn't pose existential risks to society.

Host: Alok Jha, The Economist's science and technology editor. Contributors: Zanny Minton Beddoes, editor-in-chief of The Economist; Ludwig Siegele, The Economist's senior editor, AI initiatives; Sam Altman, chief executive of OpenAI; Satya Nadella, chief executive of Microsoft. If you subscribe to The Economist, you can watch the full interview on our website or app.

Essential listening, from our archive:
"Daniel Dennett on intelligence, both human and artificial", December 27th 2023
"Fei-Fei Li on how to really think about the future of AI", November 22nd 2023
"Mustafa Suleyman on how to prepare for the age of AI", September 13th 2023
"Vint Cerf on how to wisely regulate AI", July 5th 2023
"Is GPT-4 the dawn of true artificial intelligence?", with Gary Marcus, March 22nd 2023

Sign up for a free trial of Economist Podcasts+. If you're already a subscriber to The Economist, you'll have full access to all our shows as part of your subscription. For more information about how to access Economist Podcasts+, please visit our FAQs page or watch our video explaining how to link your account. Hosted on Acast. See acast.com/privacy for more information.