Podcast appearances and mentions of Arvind Narayanan

  • 52 podcasts
  • 62 episodes
  • 35m average duration
  • 1 episode every other week
  • Latest episode: Apr 18, 2025




Latest podcast episodes about Arvind Narayanan

Sway
Meta on Trial + Is A.I. a 'Normal' Technology? + HatGPT

Sway

Play Episode Listen Later Apr 18, 2025 80:28


This week Meta is on trial in a landmark case over whether it illegally snuffed out competition when it acquired Instagram and WhatsApp. We discuss some of the most surprising revelations from old email messages made public as evidence in the case, and explain why we think the F.T.C.'s argument has gotten weaker in the years since the lawsuit was filed. Then we hear from Princeton computer scientist Arvind Narayanan on why he believes it will take decades, not years, for A.I. to transform society in the ways the big A.I. labs predict. And finally, what do dolphins, Katy Perry and A1 steak sauce have in common? They're all important characters in our latest round of HatGPT. Tickets to Hard Fork live are on sale now! See us June 24 at SFJAZZ. Guest: Arvind Narayanan, director of the Center for Information Technology Policy at Princeton and co-author of “AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference.” Additional Reading: What if Mark Zuckerberg Had Not Bought Instagram and WhatsApp?; AI as Normal Technology; One Giant Stunt for Womankind. We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.

The Next Big Idea
The Next Big Idea Daily: What Can AI Really Do?

The Next Big Idea

Play Episode Listen Later Mar 17, 2025 15:54


Two of TIME's 100 Most Influential People in AI share what you need to know about AI — and how to defend yourself against bogus AI claims and products.

The Next Big Idea Daily
What Can AI Really Do?

The Next Big Idea Daily

Play Episode Listen Later Mar 17, 2025 15:24


Two of TIME's 100 Most Influential People in AI share what you need to know about AI — and how to defend yourself against bogus AI claims and products.

The Jim Rutt Show
EP 283 Brian Chau on the Trump Administration and AI

The Jim Rutt Show

Play Episode Listen Later Feb 11, 2025 68:04


Jim talks with Brian Chau about what the new administration could mean for AI development. They discuss recent actions by the Trump administration, including repealing Biden's executive order & the Stargate infrastructure project, Biden's impact on AI, the formation of the Alliance for the Future, regulatory bureaucracy, state patchwork laws, censorship, the Gemini controversy & DEI in AI, safety restrictions in chat models, the meaning of DeepSeek, economic implications of model distillation, historical analogies for AI development, national security & sovereignty implications, 3 main lanes for AI development, democratized access vs gatekeeping, trust issues, "AI" vs "LLMs," and much more. Episode Transcript Alliance for the Future From the New World (Substack) Brian Chau on Twitter JRS EP200 - Brian Chau on AI Pluralism Nous Research JRS EP221 - George Hotz on Open-Source Driving Assistance AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference, by Arvind Narayanan and Sayash Kapoor Brian Chau is a mathematician by training and is tied for the youngest Canadian to win a gold medal at the International Olympiad in Informatics. He writes software for a living while posting in his spare time. He writes independently on American bureaucracy and political theory and has contributed to Tablet Magazine. His political philosophy can be summed up as “see the world as it is, not as you wish it to be.” Everything else is application.

Unsupervised Learning
Ep 54: Princeton Researcher Arvind Narayanan on the Limitations of Agent Evals, AI's Societal Impact & Important Lessons from History

Unsupervised Learning

Play Episode Listen Later Jan 30, 2025 57:09


Arvind Narayanan is one of the leading voices in AI when it comes to cutting through the hype. As a Princeton professor and co-author of AI Snake Oil, he's one of the most thoughtful voices cautioning against both unfounded fears and overblown promises in AI. In this episode, Arvind dissects the future of AI in education, its parallels to past tech revolutions, and how our jobs are already shifting toward managing these powerful tools. Some of our favorite take-aways: [0:00] Intro [0:46] Reasoning Models and Their Uneven Progress [2:46] Challenges in AI Benchmarks and Real-World Applications [5:03] Inference Scaling and Verifier Imperfections [7:33] Agentic AI: Tools vs. Autonomous Actions [12:07] Future of AI in Everyday Life [15:34] Evaluating AI Agents and Collaboration [24:49] Regulatory and Policy Implications of AI [27:49] Analyzing Generative AI Adoption Rates [29:17] Educational Policies and Generative AI [30:09] Flaws in Predictive AI Models [31:31] Regulation and Safety in AI [33:47] Academia's Role in AI Development [36:13] AI in Scientific Research [38:22] AI and Human Minds [46:04] Economic Impacts of AI [49:42] Quickfire. With your co-hosts: @jacobeffron - Partner at Redpoint, Former PM Flatiron Health; @patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn; @ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare); @jordan_segall - Partner at Redpoint

Interviews: Tech and Business
AI Snake Oil: Princeton Professor Exposes AI Truths | #867

Interviews: Tech and Business

Play Episode Listen Later Jan 28, 2025 57:59


In CXOTalk episode 867, Princeton professor Arvind Narayanan, co-author of AI Snake Oil, reveals why many AI products fail to deliver on their promises and how leaders can distinguish hype-driven solutions from those that create value. Exploring the landscape of AI advancements, deceptions, and limitations, Narayanan explains how to separate genuine AI innovations from overhyped and potentially harmful applications. We discuss real-world examples, ethical concerns, and the role of policy and regulation in mitigating AI snake oil. Tune in to learn actionable insights for consumers and businesses and explore how AI reshapes industries while posing unique challenges and opportunities. #enterpriseai #cxotalk #aihype #aiethics

unSILOed with Greg LaBlanc
497. Spotting The Difference Between AI Innovation and AI Snake Oil feat. Arvind Narayanan

unSILOed with Greg LaBlanc

Play Episode Listen Later Jan 8, 2025 46:11


Where is the line between fact and fiction in the capabilities of AI? Which predictions or promises about the future of AI are reasonable and which are the creations of hype for the benefit of the industry or the company making outsized claims? Arvind Narayanan is a professor of computer science at Princeton University, the director of the Center for Information Technology Policy, and an author. His latest book is AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference. Greg and Arvind discuss the misconceptions about AI technology, emphasizing the overestimations of AI's capabilities and the importance of understanding predictive versus generative AI. Arvind also points out the ethical and practical issues of deploying AI in fields like criminal justice and HR. Arvind and Greg also explore the challenges of regulation, the historical context of technological hype, and how academia can play a role in shaping AI's future. Arvind also reflects on his previous work on Bitcoin and cryptocurrency technologies and shares insights into the complexities and future of AI and blockchain. *unSILOed Podcast is produced by University FM.*
Show Links:
Recommended Resources: Deep Learning; Generative Artificial Intelligence; AISnakeOil.com | Newsletter; Bitcoin and Cryptocurrency Technologies | Princeton/Coursera Course
Guest Profile: Faculty Profile at Princeton University; LinkedIn Profile; Wikipedia Page
His Work: Amazon Author Page; AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference; Bitcoin and Cryptocurrency Technologies: A Comprehensive Introduction; Fairness and Machine Learning: Limitations and Opportunities; Google Scholar Page
Episode Quotes:
What can the AI community learn from medicine about testing? (28:51): Let's talk about what we can learn from medicine and what maybe we shouldn't take from them. I think that the community internalized a long time ago that the hard part of innovation is not the building, but the testing. And the AI community needs to learn that. Traditionally, in machine learning, the building was the hard part, and everybody would evaluate on the same few sets of benchmarks. And that was okay because they were mostly solving toy problems as they were building up the complexities of these technologies. Now, we're building AI systems that need to do things in the real world. And the building, especially with foundation models, you build once and apply it to a lot of different things. Right? That has gotten a lot easier—not necessarily easier in terms of technical skills, but in terms of the relative amount of investment you need to put into that, as opposed to the testing—because now you have to test foundation models in a legal setting, medical setting, [and] hundreds of other settings. So that, I think, is one big lesson.
Replacing broken systems with AI can escalate the problem (08:36): Just because one system is broken doesn't mean that we should replace it with another broken system instead of trying to do the hard work of thinking about how to fix the system. And fixing it with AI is not even working because, in the hiring scenario, what's happening is that candidates are now turning to AI to apply to hundreds of positions at once. And it's clearly not solving the problem; it's only escalating the arms race. And it might be true that human decision-makers are biased; they're not very accurate. But at least, when you have a human in the loop, you're forced to confront this shittiness of the situation, right? You can't put this moral distance between yourself and what's going on, and I think that's one way in which AI could make it worse because it's got this veneer of objectivity and accuracy.
Foundation models lower costs and could shift AI research back to academia (27:22): The rise of foundation models has meant that they've kind of now become a layer on top of which you can build other things, and that is much, much less expensive. Then, building foundation models themselves—especially if it's going to be the case that scaling is going to run out—we don't need to look for AI advances by building 1 billion models and 10 billion models; we can take the existing foundation models for granted and build on top of them. Then, I would expect that a lot of research might move back to academia. Especially the kind of research that might involve offbeat ideas.

Marketplace Tech
Not all AI is, well, AI

Marketplace Tech

Play Episode Listen Later Jan 2, 2025 13:54


Artificial intelligence and promises about the tech are everywhere these days. But excitement about genuine advances can easily veer into hype, according to Arvind Narayanan, computer science professor at Princeton who along with PhD candidate Sayash Kapoor wrote the book “AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference.” He says even the term AI doesn’t always mean what you think.

Marketplace All-in-One
Not all AI is, well, AI

Marketplace All-in-One

Play Episode Listen Later Jan 2, 2025 13:54


Artificial intelligence and promises about the tech are everywhere these days. But excitement about genuine advances can easily veer into hype, according to Arvind Narayanan, computer science professor at Princeton who along with PhD candidate Sayash Kapoor wrote the book “AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference.” He says even the term AI doesn’t always mean what you think.

The Ashish Sinha Show
Engineering Jobs in the Age of AI: What You MUST Know #AISnakeOil

The Ashish Sinha Show

Play Episode Listen Later Dec 20, 2024 38:24


In this conversation, Ashish Sinha interviews Arvind Narayanan, professor of computer science at Princeton University and co-author of the book AI Snake Oil (https://amzn.to/402Ayiw), discussing the impact of AI on various sectors, the differences between generative and predictive AI, the challenges of AI agents, and the future of AI technology. We explore the importance of human-AI collaboration, the role of reasoning in AI, and the need for better evaluation criteria to build trust in AI systems. Narayanan emphasizes the necessity for technical breadth over mastery in the evolving job market and shares practical applications of AI in education and research.
Unique Quotes from the Conversation:
On Predicting Creative Success: "The success of cultural products relies on chance elements that cannot be predicted in advance." This highlights the inherent unpredictability of creative ventures, whether driven by AI or humans.
On Generative AI in Programming: "It's not that AI writes better code, but it takes so much of the drudgery and boring parts out of it." A reminder that AI is a tool to enhance, not replace, human creativity in programming.
On Predictive AI's Ethical Concerns: "These tools are only slightly better than random at making really consequential decisions about people." A critique of the overreliance on AI in life-altering decisions like hiring or criminal justice.
On the Capability-Reliability Gap: "The capability-reliability gap means these systems are not reliable right now." An acknowledgment of AI's limitations, emphasizing the need for better testing and accountability.
On Preparing for the Future of Work: "Technical mastery is less valuable than having technical skills combined with a breadth of skills." Advice for future professionals to combine technical expertise with adaptability and interdisciplinary thinking.
Takeaways: The unpredictability of success in creative products is a key theme. Generative AI is widely recognized, but predictive AI poses ethical challenges. AI agents must be more than just wrappers around models. Benchmarking AI in complex environments is a significant challenge. The capability reliability gap highlights the unreliability of current AI systems. Human-AI collaboration is crucial for effective AI deployment. Inference scaling is a promising area for improving AI performance. Trust in AI is at risk due to rapid deployment without proper evaluation. Future engineers should focus on technical breadth and adaptability.

Daybreak
AI Snakeoil ft. Chloe Moa — Monday, Dec. 16

Daybreak

Play Episode Listen Later Dec 16, 2024 4:44


Today, we cover AI policy with professors Arvind Narayanan and Sayash Kapoor, an update on the ongoing issue of mysterious objects flying over New York and New Jersey, and a number of serious weather incidents across the US over the weekend.

Techtonic with Mark Hurst | WFMU
Arvind Narayanan, author, "AI Snake Oil" from Dec 9, 2024

Techtonic with Mark Hurst | WFMU

Play Episode Listen Later Dec 10, 2024


Arvind Narayanan, author, "AI Snake Oil" Tomaš Dvořák - "Game Boy Tune" - "Mark's intro" - "Interview with Arvind Narayanan" [0:04:54] - "Mark's comments" [0:47:52] Waveshaper - "E.P.R.O.M." [0:55:33] https://www.wfmu.org/playlists/shows/146887

BCG Henderson Institute
AI Snake Oil with Sayash Kapoor

BCG Henderson Institute

Play Episode Listen Later Dec 3, 2024 27:47


In AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference, Sayash Kapoor and his co-author Arvind Narayanan provide an essential understanding of how AI works and why some applications remain fundamentally beyond its capabilities. Kapoor was included in TIME's inaugural list of the 100 most influential people in AI. As a researcher at Princeton University's Center for Information Technology Policy, he examines the societal impacts of AI, with a focus on reproducibility, transparency, and accountability in AI systems. In his new book, he cuts through the hype to help readers discriminate between legitimate and bogus claims for AI technologies and applications. In his conversation with Martin Reeves, chair of the BCG Henderson Institute, Kapoor discusses historical patterns of technology hype, differentiates between the powers and limitations of predictive versus generative AI, and outlines how managers can balance healthy skepticism with embracing the potential of new technologies. Key topics discussed: 01:05 | Examples of AI “snake oil” 04:42 | Historical patterns of technology hype and how AI is different 07:26 | Capabilities and exaggerations of predictive AI 11:42 | Powers and limitations of generative AI 17:11 | Drivers of inflated expectations 20:18 | Implications for regulation 23:26 | How managers can balance scepticism and embracing new tech 24:58 | Future of AI research. Additional inspirations from Sayash Kapoor: AI Snake Oil (Substack); A Checklist of Eighteen Pitfalls in AI Journalism (UNESCO article, 2022). This podcast uses the following third-party services for analysis: Chartable - https://chartable.com/privacy

We Are Not Saved
Mid-length Non-fiction Book Reviews: Volume 2

We Are Not Saved

Play Episode Listen Later Nov 30, 2024 24:32


AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference, by Arvind Narayanan and Sayash Kapoor
Country Driving: A Journey Through China from Farm to Factory, by Peter Hessler
On Grand Strategy, by John Lewis Gaddis
Leisure: The Basis of Culture, by Josef Pieper
Anatomy of the State, by Murray Rothbard
The ONE Thing: The Surprisingly Simple Truth Behind Extraordinary Results
Alone on the Ice: The Greatest Survival Story in the History of Exploration, by David Roberts
The Killer Angels: The Classic Novel of the Civil War, by Michael Shaara

El Brieff
Sheinbaum's tariffs: the news for this Wednesday

El Brieff

Play Episode Listen Later Nov 27, 2024 12:50


In today's episode, we analyze the trade tensions between Mexico and the United States following Donald Trump's tariff threats and President Claudia Sheinbaum's response. We also cover corruption investigations in Nuevo León, progress on decriminalizing abortion in the State of Mexico, and the economic and political challenges facing the country. On the international front, we highlight the announced ceasefire between Israel and Hezbollah, China's reaction to Trump's policies, and the latest developments in Russia and Ukraine. In our economics section, we look at significant moves by companies such as Barclays and Walmart, and we share good news about the fight against HIV. Sponsor of the day: Discover how EVA by STRTGY can transform your business with strategic artificial intelligence. Visit www.strtgy.ai or contact us at arturo@strtgy.ai for more information. Recommended book: 'AI Snake Oil' by Arvind Narayanan and Sayash Kapoor, a critical exploration of the promises and limitations of artificial intelligence in today's world. Available on Brieffy. Become a sponsor of El Brieff by donating 25, 60, or 100 pesos a month through this link. Download Brieffy by clicking here. If you are interested in a mention on El Brieff, write to us at arturo@brieffy.com Hosted on Acast. See acast.com/privacy for more information.

Untangled
Is AI snake oil?

Untangled

Play Episode Listen Later Nov 24, 2024 41:47


Hi, I'm Charley, and this is Untangled, a newsletter about our sociotechnical world, and how to change it.
* Come work with me! The initiative I lead at Data & Society is hiring for a Community Manager. Learn more here.
* Check out my new course, Sociotechnical Systems Change in Practice. The first cohort will take place on January 11 and 12, and you can sign up here.
* Last week I interviewed Mozilla's Jasmine Sun and Nik Marda on the potential of public AI, and the week prior I shared my conversation with AI reporter Karen Hao on OpenAI's mythology, Meta's secret, and Microsoft's hypocrisy.

BBS Radio Station Streams
Bible News Prophecy, November 23, 2024

BBS Radio Station Streams

Play Episode Listen Later Nov 24, 2024 14:23


Bible News Prophecy with Dr Bob Thiel Artificial Intelligence Deadly Perils What are some of the benefits and risks of Artificial Intelligence (AI) that Arvind Narayanan, Professor of Computer Science at Princeton University mentioned at the Hindustan Times Leadership Summit. Will government regulations eliminate the risks? Why or why not? What are AI 'hallucinations'? Did a Google Gemini AI chat bot tell a student that he was a waste and should "Please die," despite programming that was supposed to prevent such? Was the response actually "non-sensical" as Google claimed or instead of being nonsensical, was it in logical English but dangerous? Do AI programs give wrong answers that many may falsely rely upon? Are there any similarities between Gemini's response and the HAL9000 computer in the 1968 movie, 2001: A Space Odyssey? Might governments, including one coming in Europe, intentionally program horrible results? Will the Beast of the Sea and Earth of Revelation 13 use something like AI to require worship as well as to put in totalitarian controls of buying and selling? Although there can be real benefits from AI, will government regulations prevent it from being used by 666? Dr. Thiel and Steve Dupuie address these matters.

Something You Should Know
The Real and False Promises of AI & What They Really Ate at the First Thanksgiving

Something You Should Know

Play Episode Listen Later Nov 21, 2024 50:34


How many photographs have been taken worldwide in the history of photography? And how many just this year? These are a few of the fascinating facts that begin this episode that I know you'll end up repeating at upcoming holiday parties and that will make you sound so interesting! Source: John Mitchinson, author of 1227 Quite Interesting Facts to Blow Your Socks Off (https://amzn.to/4fP4vaX). To hear it said, artificial intelligence is the greatest thing in the world or the beginning of the end of civilization. So, what's the truth about AI? What can it do and what will it never do? That is what Arvind Narayanan is going to tell you, and he is someone to listen to. Arvind is a professor of computer science at Princeton University and director of its Center for Information Technology Policy. He was named one of Time magazine's 100 most influential people in AI and he is co-author of the book AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference (https://amzn.to/3Z9RBiv). What did they eat at the first Thanksgiving? No doubt you've heard stories about the first Thanksgiving, but a lot of what we were told just isn't true. In fact, many of the foods and traditions of Thanksgiving came much later. Here to set the record straight on that famous dinner held by the Pilgrims and Native Americans is Leslie Landrigan. She has been writing about New England history for over 10 years – and she is author of the book Historic Thanksgiving Foods: And the People who Cooked Them, 1607 to 1955 (https://amzn.to/40NW23s). Anyone who owns a printer has wondered why the ink cartridges cost so much to replace. The answer is a bit complicated and kind of interesting. Listen as I explain: https://www.consumerreports.org/electronics-computers/printers/why-is-printer-ink-so-expensive-a2101590645/ Learn more about your ad choices. Visit megaphone.fm/adchoices

WSJ Tech News Briefing
Is Generative AI All It's Hyped Up to Be?

WSJ Tech News Briefing

Play Episode Listen Later Oct 22, 2024 13:02


There is a lot of hype around generative artificial intelligence, but is the tech truly transformational? Arvind Narayanan, Princeton University professor, director of the Center for Information Technology Policy at the school and author of the book “AI Snake Oil” says it's overhyped, while Box co-founder and CEO Aaron Levie advocates for the use of AI in a functional enterprise capacity. The two spoke with WSJ tech columnist Christopher Mims at the WSJ CIO Network Summit. Zoe Thomas hosts. Sign up for the WSJ's free Technology newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices

TechNation Radio Podcast
Episode 24-41 AI Snake Oil ++ Some Really Great Science

TechNation Radio Podcast

Play Episode Listen Later Oct 16, 2024 59:00


On this week's Tech Nation, Moira speaks with Dr. Arvind Narayanan, Princeton University Professor of Computer Science, about his book, “AI Snake Oil … What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference.” Then, Dr. Avak Kahvejian, Founding CEO of Cellarity, joins her to explain how comparing healthy and diseased cells, along with our understanding of medications, could help treat disease at its source.

Economics Explained
UBI Experiment: Success or Failure? Insights from Sam Altman's Trial - EP257

Economics Explained

Play Episode Listen Later Oct 10, 2024 45:47


In this episode, Gene Tunny dives into a recent Universal Basic Income (UBI) experiment funded by Sam Altman, CEO of OpenAI. Gene explores the key findings of the randomised controlled trial and discusses whether the positive outcomes are enough to convince sceptics. Are UBI recipients more financially secure, or are there deeper concerns about its impact on labour force participation and long-term wealth? Get Gene's balanced analysis of this major UBI trial and its broader implications. If you have any questions, comments, or suggestions for Gene, please email him at contact@economicsexplored.com or send a voice message via https://www.speakpipe.com/economicsexplored.
Timestamps for EP257: Introduction (0:00) | Defining Universal Basic Income (UBI) (4:21) | Overview of the OpenAI UBI Experiment (8:09) | Positive Findings from the OpenAI UBI Experiment (13:54) | Concerns and Criticisms of the OpenAI UBI Experiment (21:55) | Financial Impact of UBI on Household Net Worth (22:50) | Gene Tunny's Skepticism About UBI (34:17) | Closing Remarks and Previous Episode Clips (37:57)
Takeaways: Mixed Outcomes of UBI: The experiment showed some positive effects, such as increased financial flexibility and well-being, but also concerning results, such as a slight decrease in labour market participation. Spending Behavior: UBI recipients spent more on necessities like food and rent; interestingly, they were more likely to help others financially. Limited Educational and Employment Impact: Younger participants showed interest in further education, but there was no significant boost in human capital or labour productivity. Debate Over Financial Impact: UBI did not lead to clear improvements in recipients' financial health. The study found increased debt in some cases, raising questions about UBI's long-term benefits. AI and UBI: As technological advancements continue, UBI is seen by some as a solution to technological unemployment, though Gene and some experts remain sceptical about the scale of potential job loss.
Links relevant to the conversation: Bloomberg article “Sam Altman-Backed Group Completes Largest US Study on Basic Income”: https://www.bloomberg.com/news/articles/2024-07-22/ubi-study-backed-by-openai-s-sam-altman-bolsters-support-for-basic-income | OpenResearch's website: https://www.openresearchlab.org/ | Pete Judo's video on the UBI experiment failing: https://youtu.be/oyoMgGiWgJQ?si=j3T-3yaEL5Rajcpw | NBER working papers on the study: The Employment Effects of a Guaranteed Income: Experimental Evidence from Two U.S. States: https://www.nber.org/papers/w32719 and The Impact of Unconditional Cash Transfers on Consumption and Household Balance Sheets: Experimental Evidence from Two US States: https://www.nber.org/papers/w32784 | Two Computer Scientists Debunk A.I. Hype with Arvind Narayanan and Sayash Kapoor: https://youtu.be/M3U5UVyGTuQ?si=qcqSflHCf837GisA | AI can do only 5pc of jobs, says MIT economist who fears crash: https://www.afr.com/world/north-america/ai-can-do-only-5pc-of-jobs-says-mit-economist-who-fears-crash-20241003-p5kfil | Previous episodes: https://economicsexplored.com/2022/05/03/a-ubi-advocate-on-its-benefits-and-costs-ep137-show-notes-transcript/ and https://economicsexplored.com/2022/02/13/ubi-universal-basic-income-w-ben-phillips-anu-ep126/
Lumo Coffee promotion: 10% off Lumo Coffee's Seriously Healthy Organic Coffee. Website: https://www.lumocoffee.com/10EXPLORED Promo code: 10EXPLORED

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
AI Agents: Substance or Snake Oil with Arvind Narayanan - #704

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later Oct 8, 2024 54:22


Today, we're joined by Arvind Narayanan, professor of Computer Science at Princeton University to discuss his recent works, AI Agents That Matter and AI Snake Oil. In “AI Agents That Matter”, we explore the range of agentic behaviors, the challenges in benchmarking agents, and the ‘capability and reliability gap', which creates risks when deploying AI agents in real-world applications. We also discuss the importance of verifiers as a technique for safeguarding agent behavior. We then dig into the AI Snake Oil book, which uncovers examples of problematic and overhyped claims in AI. Arvind shares various use cases of failed applications of AI, outlines a taxonomy of AI risks, and shares his insights on AI's catastrophic risks. Additionally, we also touched on different approaches to LLM-based reasoning, his views on tech policy and regulation, and his work on CORE-Bench, a benchmark designed to measure AI agents' accuracy in computational reproducibility tasks. The complete show notes for this episode can be found at https://twimlai.com/go/704.

Factually! with Adam Conover
Two Computer Scientists Debunk A.I. Hype with Arvind Narayanan and Sayash Kapoor

Factually! with Adam Conover

Play Episode Listen Later Oct 2, 2024 71:28


The AI hype train has officially left the station, and it's speeding so fast it might just derail. This isn't because of what AI can actually do, it's all because of how it's marketed. This week, Adam sits with Arvind Narayanan and Sayash Kapoor, computer scientists at Princeton and co-authors of "AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference." Together, they break down everything from tech that's labeled as "AI" but really isn't, to surprising cases where so-called "AI" is actually just low-paid human labor in disguise. Find Arvind and Sayash's book at factuallypod.com/booksSUPPORT THE SHOW ON PATREON: https://www.patreon.com/adamconoverSEE ADAM ON TOUR: https://www.adamconover.net/tourdates/SUBSCRIBE to and RATE Factually! on:» Apple Podcasts: https://podcasts.apple.com/us/podcast/factually-with-adam-conover/id1463460577» Spotify: https://open.spotify.com/show/0fK8WJw4ffMc2NWydBlDyJAbout Headgum: Headgum is an LA & NY-based podcast network creating premium podcasts with the funniest, most engaging voices in comedy to achieve one goal: Making our audience and ourselves laugh. Listen to our shows at https://www.headgum.com.» SUBSCRIBE to Headgum: https://www.youtube.com/c/HeadGum?sub_confirmation=1» FOLLOW us on Twitter: http://twitter.com/headgum» FOLLOW us on Instagram: https://instagram.com/headgum/» FOLLOW us on TikTok: https://www.tiktok.com/@headgum» Advertise on Factually! via Gumball.fmSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The Wright Show
AI Snake Oil (Robert Wright & Arvind Narayanan)

The Wright Show

Play Episode Listen Later Oct 2, 2024 60:00


Arvind's book and newsletter: AI Snake Oil ... A taxonomy of AI observers ... The impact of generative AI ... Why predictive AI sometimes fails ... Uses and misuses of AI in law and healthcare ... Can social media be saved? ... Heading to Overtime ...

Bloggingheads.tv
AI Snake Oil (Robert Wright & Arvind Narayanan)

Bloggingheads.tv

Play Episode Listen Later Oct 2, 2024 60:00


Arvind's book and newsletter: AI Snake Oil ... A taxonomy of AI observers ... The impact of generative AI ... Why predictive AI sometimes fails ... Uses and misuses of AI in law and healthcare ... Can social media be saved? ... Heading to Overtime ...

The Sunday Show
AI Snake Oil: Separating Hype from Reality

The Sunday Show

Play Episode Listen Later Sep 29, 2024 35:44


Arvind Narayanan and Sayash Kapoor are the authors of AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference, published September 24 by Princeton University Press. In this conversation, Justin Hendrix focuses in particular on the book's Chapter 6, "Why Can't AI Fix Social Media?"

Ground Truths
AI Snake Oil—A New Book by 2 Princeton University Computer Scientists

Ground Truths

Play Episode Listen Later Sep 24, 2024 39:24


Arvind Narayanan and Sayash Kapoor are well regarded computer scientists at Princeton University and have just published a book with a provocative title, AI Snake Oil. Here I've interviewed Sayash and challenged him on this dismal title, for which he provides solid examples of predictive AI's failures. Then we get into the promise of generative AI.Full videos of all Ground Truths podcasts can be seen on YouTube here. The audios are also available on Apple and Spotify.Transcript with links to audio and external links to key publications Eric Topol (00:06):Hello, it's Eric Topol with Ground Truths, and I'm delighted to welcome the co-author of a new book AI SNAKE OIL and it's Sayash Kapoor who has written this book with Arvind Narayanan of Princeton. And so welcome, Sayash. It's wonderful to have you on Ground Truths.Sayash Kapoor (00:28):Thank you so much. It's a pleasure to be here.Eric Topol (00:31):Well, congratulations on this book. What's interesting is how much you've achieved at such a young age. Here you are named in TIME100 AI's inaugural edition as one of those eminent contributors to the field. And you're currently a PhD candidate at Princeton, is that right?Sayash Kapoor (00:54):That's correct, yes. I work at the Center for Information Technology Policy, which is a joint program between the computer science department and the school of public and international affairs.Eric Topol (01:05):So before you started working on your PhD in computer science, you already were doing this stuff, I guess, right?Sayash Kapoor (01:14):That's right. So before I started my PhD, I used to work at Facebook as a machine learning engineer.Eric Topol (01:20):Yeah, well you're taking it to a more formal level here. Before I get into the book itself, what was the background? I mean you did describe it in the book why you decided to write a book, especially one that was entitled AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference.Background to Writing the BookSayash Kapoor (01:44):Yeah, absolutely. So I think for the longest time both Arvind and I had been sort of looking at how AI works and how it doesn't work, what are cases where people are somewhat fooled by the potential for this technology and fail to apply it in meaningful ways in their life. As an engineer at Facebook, I had seen how easy it is to slip up or make mistakes when deploying machine learning and AI tools in the real world. And had also seen that, especially when it comes to research, it's really easy to make mistakes even unknowingly that inflate the accuracy of a machine learning model. So as an example, one of the first research projects I did when I started my PhD was to look at the field of political science in the subfield of civil war prediction. This is a field which tries to predict where the next civil war will happen and in order to better be prepared for civil conflict.(02:39):And what we found was that there were a number of papers that claimed almost perfect accuracy at predicting when a civil war will take place. At first this seemed sort of astounding. If AI can really help us predict when a civil war will start like years in advance sometimes, it could be game changing, but when we dug in, it turned out that every single one of these claims where people claim that AI was better than two decades old logistic regression models, every single one of these claims was not reproducible. 
And so, that sort of set the alarm bells ringing for the both of us and we sort of dug in a little bit deeper and we found that this is pervasive. So this was a pervasive issue across fields that were quickly adopting AI and machine learning. We found, I think over 300 papers and the last time I compiled this list, I think it was over 600 papers that suffer from data leakage. That is when you can sort of train on the sets that you're evaluating your models on. It's sort of like teaching to the test. And so, machine learning model seems like it does much better when you evaluate it on your data compared to how it would really work out in the real world.Eric Topol (03:48):Right. You say in the book, “the goal of this book is to identify AI snake oil - and to distinguish it from AI that can work well if used in the right ways.” Now I have to tell you, it's kind of a downer book if you're an AI enthusiast because there's not a whole lot of positive here. We'll get to that in a minute. But you break down the types of AI, which I'm going to challenge a bit into three discrete areas, the predictive AI, which you take a really harsh stance on, say it will never work. Then there's generative AI, obviously the large language models that took the world by storm, although they were incubating for several years when ChatGPT came along and then content moderation AI. So maybe you could tell us about your breakdown to these three different domains of AI.Three Types of AI: Predictive, Generative, Content ModerationSayash Kapoor (04:49):Absolutely. I think one of our main messages across the book is that when we are talking about AI, often what we are really interested in are deeper questions about society. And so, our breakdown of predictive, generative, and content moderation AI sort of reflects how these tools are being used in the real world today. So for predictive AI, one of the motivations for including this in the book as a separate category was that we found that it often has nothing to do with modern machine learning methods. In some cases it can be as simple as decades old linear regression tools or logistic regression tools. And yet these tools are sold under the package of AI. Advances that are being made in generative AI are sold as if they apply to predictive AI as well. Perhaps as a result, what we are seeing is across dozens of different domains, including insurance, healthcare, education, criminal justice, you name it, companies have been selling predictive AI with the promise that we can use it to replace human decision making.(05:51):And I think that last part is where a lot of our issues really come down to because these tools are being sold as far more than they're actually capable of. These tools are being sold as if they can enable better decision making for criminal justice. And at the same time, when people have tried to interrogate these tools, what we found is these tools essentially often work no better than random, especially when it comes to some consequential decisions such as job automation. So basically deciding who gets to be called on the next level of like a job interview or who is rejected, right as soon as they submit the CV. 
And so, these are very, very consequential decisions and we felt like there is a lot of snake oil in part because people don't distinguish between applications that have worked really well or where we have seen tremendous advances such as generative AI and applications where essentially we've stalled for a number of decades and these tools don't really work as claimed by the developers.Eric Topol (06:55):I mean the way you partition that, the snake oil, which is a tough metaphor, and you even show the ad from 1905 of snake oil in the book. You're really getting at predictive AI and how it is using old tools and selling itself as some kind of breakthrough. Before I challenge that, are we going to be able to predict things? By the way, using generative AI, not as you described, but I would like to go through a few examples of how bad this has been and since a lot of our listeners and readers are in the medical world or biomedical world, I'll try to get to those. So one of the first ones you mentioned, which I completely agree, is how prediction of Covid from the chest x-ray and there were thousands of these studies that came throughout the pandemic. Maybe you could comment about that one.Some Flagrant ExamplesSayash Kapoor (08:04):Absolutely. Yeah, so this is one of my favorite examples as well. So essentially Michael Roberts and his team at the University of Cambridge a year or so after the pandemic looked back at what had happened. I think at the time there were around 500 studies that they included in the sample. And they looked back to see how many of these would be useful in a clinical setting beyond just the scope of writing a research paper. And they started out by using a simple checklist to see, okay, are these tools well validated? Does the training and the testing data, is it separate? And so on. So they ran through the simple checklist and that excluded all but 60 of these studies from consideration. So apart from 60 studies, none of these other studies even passed a very, very basic criteria for being included in the analysis. Now for these 60, it turns out that if you take a guess about how many were useful, I'm pretty confident most cases would be wrong.(09:03):There were exactly zero studies that were useful in a clinically relevant setting. And the reasons for this, I mean in some cases the reasons were as bizarre as training a machine learning model to predict Covid where all of the positive samples of people who had Covid were from adults. But all of the negative samples of people who didn't have Covid were from children. And so, essentially claiming that the resulting classifier can predict who has Covid is bizarre because all the classifier is doing is looking at the checks history and basically predicting which x-ray belongs to a child versus an adult. And so, this is the sort of error in some cases we saw duplicates in the training and test set. So you have the same person that is being used for training the model and that it is also used for evaluating the model. So simply memorizing a given sample of x-rays would be enough to achieve a very high performance. And so, for issues like these, I think all 60 of these studies prove to be not useful in a clinically relevant setting. And I think this is sort of the type of pattern that we've seen over and over again.Eric Topol (10:14):Yeah, and I agree with you on that point. I mean that was really a flagrant example and that would fulfill your title of your book, which as I said is a very tough title. 
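To make the failure mode described above concrete (evaluating a model on data it was also trained on, whether through outright overlap or the same patients appearing in both the training and test sets), here is a minimal, hypothetical sketch. The data, model, and numbers below are invented for illustration and are not from the book, the Cambridge study, or this episode.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))      # synthetic features
y = rng.integers(0, 2, size=1000)    # random labels: there is no real signal to learn

# Leaky evaluation: the "test" rows were also used for training ("teaching to the test").
leaky_model = DecisionTreeClassifier(random_state=0).fit(X, y)
print("leaky accuracy:", accuracy_score(y, leaky_model.predict(X)))            # close to 1.0

# Proper evaluation: hold out rows the model never sees during training.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
honest_model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, honest_model.predict(X_te)))  # about 0.5, chance level

Even with purely random labels, the leaky evaluation reports near-perfect accuracy while the held-out test reveals chance-level performance, the same kind of gap that made weak Covid x-ray models look clinically useful when they were not.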
But on page 29, and we'll have this in the post. You have a figure, the landscape of AI snake oil, hype, and harm. And the problem is there is nothing good in this landscape. So on the y-axis you have works, hype, snake oil going up on the y-axis. And on the x-axis, you have benign and harmful. So the only thing you have that works and that's benign is autocomplete. I wouldn't say that works. And then you have works facial recognition for surveillance is harmful. This is a pretty sobering view of AI. Obviously, there's many things that are working that aren't on this landscape. So I just would like to challenge, are you a bit skewed here and only fixating on bad things? Because this diagram is really rough. I mean, there's so much progress in AI and you have in here you mentioned the predicting civil wars, and obviously we have these cheating detection, criminal risk prediction. I mean a lot of problems, video interviews that are deep fakes, but you don't present any good things.Optimism on Generative AISayash Kapoor (11:51):So to be clear, I think both Arvind and are somewhat paradoxically optimistic about the future of generative AI. And so, the decision to focus on snake oil was a very intentional one from our end. So in particular, I think at various places in the book we outline why we're optimistic, what types of applications we think we're optimistic about as well. And the reason we don't focus on them is that it basically comes down to the fact that no one wants to read a book that has 300 pages about the virtues of spellcheck or AI for code generation or something like that. But I think I completely agree and acknowledge that there are lots of positive applications that didn't make the cut for the book as well. That was because we wanted people to come to this from a place of skepticism so that they're not fooled by the hype.(12:43):Because essentially we see even these positive uses of AI being lost out if people have unrealistic expectations from what an AI tool should do. And so, pointing out snake oil is almost a prerequisite for being able to use AI productively in your work environment. I can give a couple of examples of where or how we've sort of manifested this optimism. One is AI for coding. I think writing code is an application that I do, at least I use AI a lot. I think almost half of the code I write these days is generated, at least the first draft is generated using AI. And yet if I did not know how to program, it would be a completely different question, right? Because for me pointing out that, oh, this syntax looks incorrect or this is not handling the data in the correct way is as simple as looking at a piece of code because I've done this a few times. But if I weren't an expert on programming, it would be completely disastrous because even if the error rate is like 5%, I would have dozens of errors in my code if I'm using AI to generate it.(13:51):Another example of how we've been using it in our daily lives is Arvind has two little kids and he's built a number of applications for his kids using AI. So I think he's a big proponent of incorporating AI into children's lives as a force for good rather than having a completely hands-off approach. And I think both of these are just two examples, but I would say a large amount of our work these days occurs with the assistance of AI. So we are very much optimistic. 
And at the same time, I think one of the biggest hindrances to actually adopting AI in the real world is not understanding its limitations.Eric Topol (14:31):Right. Yeah, you say in the book quote, “the two of us are enthusiastic users of generative AI, both in our work and our personal lives.” It just doesn't come through as far as the examples. But before I leave the troubles of predictive AI, I liked to get into a few more examples because that's where your book shines in convincing that we got some trouble here and we need to be completely aware. So one of the most famous, well, there's a couple we're going to get into, but one I'd like to review with you, it's in the book, is the prediction of sepsis in the Epic model. So as you know very well, Epic is the most used IT and health systems electronic health records, and they launched never having published an algorithm that would tell when the patient was hospitalized if they actually had sepsis or risk of sepsis. Maybe you could take us through that, what you do in the book, and it truly was a fiasco.The Sepsis DebacleSayash Kapoor (15:43):Absolutely. So I think back in 2016/2017, Epic came up with a system that would help healthcare providers predict which patients are most at risk of sepsis. And I think, again, this is a very important problem. I think sepsis is one of the leading causes of death worldwide and even in the US. And so, if we could fix that, I think it would be a game changer. The problem was that there were no external validations of this algorithm for the next four years. So for four years, between 2017 to 2021, the algorithm wasn't used by hundreds of hospitals in the US. And in 2021, a team from University of Michigan did this study in their own hospital to see what the efficacy of the sepsis prediction model is. They found out that Epic had claimed an AUC of between 0.76 and 0.83, and the actual AUC was closer to 0.6, and AUC of 0.5 is making guesses at random.(16:42):So this was much, much worse than the company's claims. And I think even after that, it still took a year for sepsis to roll back this algorithm. So at first, Epic's claims were that this model works well and that's why hospitals are adopting it. But then it turned out that Epic was actually incentivizing hospitals to adopt sepsis prediction models. I think they were giving credits of hundreds of thousands of dollars in some cases. If a hospital satisfied a certain set of conditions, one of these conditions was using a sepsis prediction model. And so, we couldn't really take their claims at face value. And finally in October 2022, Epic essentially rolled back this algorithm. So they went from this one size fits all sepsis prediction model to a model that each hospital has to train on its own data, an approach which I think is more likely to work because each hospital's data is different. But it's also more time consuming and expensive for the hospitals because all of a sudden you now need your own data analysts to be able to roll out this model to be able to monitor it.(17:47):I think this study also highlights many of the more general issues with predictive AI. These tools are often sold as if they're replacements for an existing system, but then when things go bad, essentially they're replaced with tools that do far less. And companies often go back to the fine print saying that, oh, we should always deploy it with the human in the loop, or oh, it needs to have these extra protections that are not our responsibility, by the way. 
And I think that gap between what developers claim and how the tool actually works is what is most problematic.Eric Topol (18:21):Yeah, no, I mean it's an egregious example, and again, it fulfills like what we discussed with statistics, but even worse because it was marketed and it was incentivized financially and there's no doubt that some patients were completely miscategorized and potentially hurt. The other one, that's a classic example that went south is the Optum UnitedHealth algorithm. Maybe you could take us through that one as well, because that is yet another just horrible case of how people were discriminated against.The Infamous Optum AlgorithmSayash Kapoor (18:59):Absolutely. So Optum, another health tech company created an algorithm to prioritize high risk patients for preemptive care. So I think it was around when Obamacare was being introduced that insurance networks started looking into how they could reduce costs. And one of the main ways they identified to reduce costs is basically preemptively caring for patients who are extremely high risk. So in this case, they decided to keep 3% of the patients in the high risk category and they built a classifier to decide who's the highest risk, because potentially once you have these patients, you can proactively treat them. There might be fewer emergency room visits, there might be fewer hospitalizations and so on. So that's all fine and good. But what happened when they implemented the algorithm was that every machine learning model needs like the target variable, what is being predicted at the end of the day. What they decided to predict was how much patient would pay, how much would they charge, what cost the hospital would incur if they admitted this patient.(20:07):And they essentially use that to predict who should be prioritized for healthcare. Now unsurprisingly, it turned out that white patients often pay a lot more or are able to pay a lot more when it comes to hospital visits. Maybe it's because of better insurance or better conditions at work that allow them to take leave and so on. But whatever the mechanism is, what ended up happening with this algorithm was I think black patients with the same level of healthcare prognosis were half as likely or about much less likely compared to white ones of getting enrolled in this high risk program. So they were much less likely to get this proactive care. And this was a fantastic study by Obermeyer, et al. It was published in Science in 2019. Now, what I think is the most disappointing part of this is that Optum did not stop using this algorithm after this study was released. And that was because in some sense the algorithm was working precisely as expected. It was an algorithm that was meant to lower healthcare costs. It wasn't an algorithm that was meant to provide better care for patients who need it most. And so, even after this study was rolled out, I think Optum continued using this algorithm as is. And I think as far as I know, even today this is or some version of this algorithm is still in use across the network of hospitals that Optum serves.Eric Topol (21:31):No, it's horrible the fact that it was exposed by Ziad Obermeyer's paper in Science and that nothing has been done to change it, it's extraordinary. I mean, it's just hard to imagine. Now you do summarize the five reasons predictive AI fails in a nice table, we'll put that up on the post as well. And I think you've kind of reviewed that as these case examples. 
So now I get to challenge you about predictive AI because I don't know that such a fine line between that and generative AI are large language models. So as you know, the group at DeepMind and now others have done weather forecasting with multimodal large language models and have come up with some of the most accurate weather forecasting we've ever seen. And I've written a piece in Science about medical forecasting. Again, taking all the layers of a person's data and trying to predict if they're high risk for a particular condition, including not just their electronic record, but their genomics, proteomics, their scans and labs and on and on and on exposures, environmental.Multimodal A.I. in Medicine(22:44):So I want to get your sense about that because this is now a coalescence of where you took down predictive AI for good reasons, and then now these much more sophisticated models that are integrating not just large data sets, but truly multimodal. Now, some people think multimodal means only text, audio, speech and video images, but here we're talking about multimodal layers of data as for the weather forecasting model or earthquake prediction or other things. So let's get your views on that because they weren't really presented in the book. I think they're a positive step, but I want to see what you think.Sayash Kapoor (23:37):No, absolutely. I think maybe the two questions are sort of slightly separate in my view. So for things like weather forecasting, I think weather forecasting is a problem that's extremely tenable for generative AI or for making predictions about the future. And I think one of the key differences there is that we don't have the problem of feedback loops with humans. We are not making predictions about individual human beings. We are rather making predictions about what happens with geological outcomes. We have good differential equations that we've used to predict them in the past, and those are already pretty good. But I do think deep learning has taken us one step further. So in that sense, I think that's an extremely good example of what doesn't really fit within the context of the chapter because we are thinking about decisions thinking about individual human beings. And you rightly point out that that's not really covered within the chapter.(24:36):For the second part about incorporating multimodal data, genomics data, everything about an individual, I think that approach is promising. What I will say though is that so far we haven't seen it used for making individual decisions and especially consequential decisions about human beings because oftentimes what ends up happening is we can make very good predictions. That's not in question at all. But even with these good predictions about what will happen to a person, sometimes intervening on the decision is hard because oftentimes we treat prediction as a problem of correlations, but making decisions is a problem of causal estimation. And that's where those two sort of approaches disentangle a little bit. So one of my examples, favorite examples of this is this model that was used to predict who should be released before screening when someone comes in with symptoms of pneumonia. So let's say a patient comes in with symptoms of pneumonia, should you release them on the day of?(25:39):Should you keep them in the hospital or should you transfer them to the ICU? And these ML researchers were basically trying to solve this problem. 
They found out that the neural network model they developed, this was two decades ago, by the way, was extremely accurate at predicting who would have a high risk of complications once they get pneumonia. But it turned out that the model was saying, essentially, that anyone who has asthma and who comes in with symptoms of pneumonia is the lowest risk patient. Now, why was this? This was because in the past training data, when such patients would come into the hospital, they would be transferred directly to the ICU, because the healthcare professionals realized that could be a serious condition. And so, it turned out that patients who had asthma and came in with symptoms of pneumonia were actually the lowest risk amongst the population, because they were taken such good care of.

(26:38):
But now, if you use this prediction that a patient comes in with symptoms of pneumonia and they have asthma, and so they're low risk, if you use this to make a decision to send them back home, that could be catastrophic. And I think that's the danger with using predictive models to make decisions about people. Now, again, the scope and consequences of decisions also vary. So you could think of using this to surface interesting patterns in the data, especially at a slightly larger statistical level, to see how certain subpopulations behave or how certain groups of people are likely to develop symptoms or whatever. But as soon as it comes to making decisions about people, the paradigm of problem solving changes, because as long as we are using correlational models, it's very hard to say what will happen if we change the conditions, what will happen if the decision making mechanism is very different from the one under which the data was collected.
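To illustrate the confounding pattern Kapoor describes, here is a minimal sketch with entirely synthetic numbers (not the original pneumonia dataset): because asthma patients were historically routed to the ICU and treated aggressively, their observed outcomes look better, and a purely correlational model learns that asthma predicts low risk.

```python
import numpy as np

# Synthetic illustration of the pneumonia/asthma confound described above.
# All numbers are invented; they only reproduce the qualitative pattern.
rng = np.random.default_rng(1)
n = 50_000
asthma = rng.random(n) < 0.1                 # 10% of pneumonia patients have asthma

# Asthma genuinely raises the risk of complications...
base_risk = np.where(asthma, 0.30, 0.10)
# ...but historically, asthma patients were sent straight to the ICU,
# and aggressive care sharply reduces the risk that is actually observed.
icu = asthma
observed_risk = np.where(icu, base_risk * 0.2, base_risk)
complication = rng.random(n) < observed_risk

# A purely correlational "model": complication rate conditioned on asthma.
print("observed risk | asthma   :", complication[asthma].mean())    # ~0.06
print("observed risk | no asthma:", complication[~asthma].mean())   # ~0.10
# The data (and any model fit to it) says asthma patients are LOWER risk.
# Using that prediction to send them home removes the very treatment
# that made their outcomes look good: a prediction is not a treatment policy.
```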
Eric Topol (27:37):
Right. No, I mean where we agree on this is that at the individual level, using multimodal AI with all these layers of data that have now recently become available, or should be available, has to be compared, ideally in a randomized trial, with standard of care today, which doesn't use any of that, to see whether the decision that's made changes the natural history and is an advantage. That's yet to be done. And I agree, it's a very promising pathway for the future. Now, I think you have done what is a very comprehensive sweep on the predictive AI failures. You've mentioned here in our discussion, and in the book, your enthusiasm about generative AI's positive features, and hope and excitement perhaps even. But we haven't discussed much on the content moderation AI that you have discretely categorized. Maybe you could just give us the skinny on your sense of that.

Content Moderation AI

Sayash Kapoor (28:46):
Absolutely. So content moderation AI is AI that's used to sort of clean up social media feeds. Social media platforms have a number of policies about what's allowed and not allowed on the platforms. Simple things such as spam are obviously not allowed, because if people start spamming the platform, it becomes useless for everyone. But then there are other things like hate speech or nudity or pornography and things like that, which are also disallowed on most if not all social media platforms today. And a lot of the ways in which these policies are enforced today is using AI. So you might have an AI model that runs every single time you upload a photo to Facebook, for instance. And not just one, perhaps hundreds of such models, to detect if it has nudity or hate speech or any of these other things that might violate the platform's terms of service.

(29:40):
So content moderation AI is AI that's used to make these decisions. And very often in the last few years, we've seen that when something gets taken down, for instance, Facebook deletes a post, people often blame the AI for having a poor understanding, let's say of satire, or for not understanding what's in the image, to basically say that their post was taken down because of bad AI. Now, there have been many claims that content moderation AI will solve social media's problems. In particular, we've heard claims from Mark Zuckerberg, who in a Senate testimony back in 2018 said that AI is going to solve most if not all of their content moderation problems. So our take on content moderation AI is basically this. AI is very, very useful for solving the simple parts of content moderation. What is a simple part? The simple parts of content moderation are, let's say, where you have a large training dataset of the same type of policy violation on a platform like Facebook.

(30:44):
If you have large data sets, and if these data sets have a clear line in the sand, for instance with nudity or pornography, it's very easy to create classifiers that will automate this. On the other hand, the hard part of content moderation is not actually just creating these AI models. The hard part is drawing the line. So when it comes to what is allowed and not allowed on platforms, these platforms are essentially making decisions about speech. And that is a topic that's extremely fraught. It's fraught in the US, it's also fraught globally. And essentially these platforms are trying to solve this really hard problem at scale. So they're trying to come up with rules that apply to every single user of the platform, like over 3 billion users in the case of Facebook. And this inevitably has these trade-offs about what speech is allowed versus disallowed that are hard to settle one way or the other.

(31:42):
They're not black and white. And what we think is that AI has no place in this hard part of content moderation, which is essentially human. It's essentially about adjudicating between competing interests. And so, when people claim that AI will solve these many problems of content moderation, I think what they're often missing is that there's this extremely large number of things you need to do to get content moderation right. AI solves one of these dozen or so things, which is detecting and taking down content automatically, but all of the rest of it involves essentially human decisions. And so, this is sort of the brief gist of it. There are also other problems. For example, AI doesn't really work so well for low resource languages. It doesn't really work so well when it comes to nuance, and so on, as we discussed in the book. But we think some of these challenges are solvable in the medium to long term. These questions around competing interests of power, though, I think are beyond the domain of AI even in the medium to long term.
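As a toy illustration of the "simple part" described above, here is a hypothetical sketch of one such classifier: given labeled examples of a single, clearly defined policy violation (spam in this invented example), a standard text-classification pipeline automates the detection step. Everything hard, such as deciding where the policy line sits, handling appeals, context, and low-resource languages, stays outside the code. The example posts and labels are made up.

```python
# Toy sketch of the "easy part" of content moderation: one classifier trained
# on labeled examples of a single, clearly defined violation (hypothetical spam policy).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "WIN A FREE PHONE click this link now",
    "limited offer!!! buy followers cheap",
    "earn $$$ from home, no experience needed",
    "Here are the photos from our hike yesterday",
    "Does anyone have notes from today's lecture?",
    "Happy birthday! Hope you have a great day",
]
labels = [1, 1, 1, 0, 0, 0]   # 1 = violates the (invented) spam policy

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

new_post = "click now to win free followers"
prob = model.predict_proba([new_post])[0, 1]
print(f"estimated spam probability: {prob:.2f}")
# In production, hundreds of such models (nudity, hate speech, scams, ...)
# run on every upload; drawing each policy line remains a human judgment call.
```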
Age 28! and Career Advice

Eric Topol (32:50):
No, I think you nailed that. I think this is an area that you've really aptly characterized, showing the shortcomings of AI and how the human factor is so critically important. So what's extraordinary here is you're just 28 and you are rocking it here, with publications all over the place on reproducibility, transparency, evaluating generative AI, AI safety. You have a website on AI Snake Oil where you're collecting more things, writing more things, and of course you have the experience of having worked in the IT world with Facebook and also, I guess, Columbia. So you're kind of off to the races here as one of the really young leaders in the field. And I am struck by that, and maybe you could comment about the inspiration you might provide to other young people. You're the youngest person I've interviewed for Ground Truths, by the way, by a pretty substantial margin, I would say. And this is a field that attracts so many young people. So maybe you could just talk a bit about your career path and your advice for people. They may be the kids of some of our listeners, but they also may be some of the people listening as well.

Sayash Kapoor (34:16):
Absolutely. First, thank you so much for the kind words. I think a lot of this work is with collaborators without whom, of course, I would never be able to do this. I think Arvind is a great co-author and supporter. In terms of my career path, it was sort of a zigzag, I would say. It wasn't clear to me when I was an undergrad if I wanted to do grad school or go into industry, and I sort of on a whim went to work at Facebook, because I'd been working on machine learning for a little bit of time and I just thought it was worth seeing what the other side has to offer beyond academia. And I think that experience was very, very helpful. I talk to a lot of undergrads here at Princeton, and one of the things I've seen people be very concerned about is what grad school they're going to get into right after undergrad.

(35:04):
And I think it's not really a question you need to answer now. I mean, in some cases I would say it's even very helpful to have a few years of industry experience before getting into grad school. That has definitely been my experience, at least. Beyond that, working in a field like AI, it's very easy to be caught up with all of the new things that are happening each day. I'm not sure if you know, but AI has, I think, over 500 to 1,000 new arXiv papers every single day. And with this rush, there's this expectation you might put on yourself that being successful requires a certain number of publications or a certain threshold of things. And I think more often than not, that is counterproductive. So it has been very helpful for me, for example, to have collaborators who are thinking long term. This book, for instance, is not something that would be very valued within the CS community, I would say. I think the CS community values peer-reviewed papers a lot more than they do books, and yet we chose to write it because I think the staying power, the longevity, of a book is much more than any single paper could achieve. So the other concrete thing I found very helpful is optimizing for a different metric compared to what the rest of the community seems to be doing, especially when it comes to fast moving fields like AI.

Eric Topol (36:29):
Well, that last piece of advice is important, because I think too often people, whether it's computer scientists, life scientists, whoever, don't realize that their audience is much broader, and that reaching the public means things like a book or op-eds or essays, varied ways that are intended for public consumption, not, in this case, for computer scientists. So that's why I think the book is a nice contribution.
I don't like the title, because it's so skewed, and the content really tries to hammer that home. I hope you write a sequel book on the positive sides of AI. I did want to ask you, when I read the book, I thought I heard your voice. I thought you had written the book, and Arvind maybe did some editing. You wrote about Arvind this and Arvind that. Did you write the first draft of the book and then he kind of came along?

Sayash Kapoor (37:28):
No, absolutely not. So the way we wrote the book was we basically started writing it in parallel, and I wrote the first draft of half the chapters and he wrote the first draft of the other half, and that was essentially how it went all the way through. So we would write a draft, pass it to the other person, and then keep doing this until we sent it to our publishers.

Eric Topol (37:51):
Okay. So I guess I was thinking of the chapters you wrote, where it came through. I'm glad that it was a shared piece of work, because that's what co-authoring is all about, right? Well, Sayash, it's really been a joy to meet you, and congratulations on this book. I obviously have expressed my objections and my disagreements, but that's okay, because this book will feed the skeptics of AI. They'll love this. And I hope that the positive side, which I think is underexpressed, will not be lost, and that you'll continue to work on this and be a conscience. You may know I've interviewed a few other people in the AI space who, like you, are trying to assure its safety, its transparency, and the ethical issues. And I think we need folks like you. I mean, this is what helps get it on track, keeping it from getting off the rails or doing what it shouldn't be doing. So keep up the great work and thanks so much for joining.

Sayash Kapoor (39:09):
Thank you so much. It was a real pleasure.

************************************************

Thanks for listening, reading or watching!

The Ground Truths newsletters and podcasts are all free, open-access, without ads.

Please share this post/podcast with your friends and network if you found it informative!

Voluntary paid subscriptions all go to support Scripps Research. Many thanks for that—they greatly helped fund our summer internship programs for 2023 and 2024.

Thanks to my producer Jessica Nguyen and Sinjun Balabanoff for audio and video support at Scripps Research.

Note: you can select preferences to receive emails about newsletters, podcasts, or all. I don't want to bother you with an email for content that you're not interested in. Get full access to Ground Truths at erictopol.substack.com/subscribe

AI DAILY: Breaking News in AI
SUPERINTELLIGENCE BY 2034

AI DAILY: Breaking News in AI

Play Episode Listen Later Sep 24, 2024 3:44


Like this? Get AIDAILY, delivered to your inbox, every weekday. Subscribe to our newsletter at https://aidaily.us

OpenAI CEO Predicts AI Superintelligence in a Decade
Sam Altman, CEO of OpenAI, predicts that AI superintelligence could emerge within the next 10 years, marking the start of "The Intelligence Age." While acknowledging potential labor market disruptions, he envisions AI revolutionizing fields like healthcare and education, driving global prosperity. Altman urges caution but remains optimistic about AI's societal impact.

California Implements AI Laws on Deepfakes, Political Ads, and Robocalls
California Governor Gavin Newsom has signed nine new AI laws, addressing issues like AI-generated deepfakes, robocalls, and political ads. The laws include transparency requirements for AI-generated content, restrictions on deepfake nudes, and mandates for AI disclosures in election-related materials. Additional regulations are under review to manage AI's societal impact.

AI Memes Influence 2024 Presidential Race Without Major Threats
AI-generated political memes are shaping the 2024 U.S. presidential election, but they are more absurd than feared. Images like Trump riding a cat have flooded social media, raising concerns over racist messaging. Experts note AI's speed and accessibility, allowing campaigns to produce rapid, attention-grabbing content.

108 Nations Collaborate on AI Playbook for Global Knowledge Sharing
Small nations, led by Singapore and Rwanda, have developed an AI Playbook to facilitate global collaboration and knowledge sharing on AI adoption. The playbook helps guide AI governance, safety, and societal impact, addressing challenges small states face. It promotes inclusive, open-source tools for safe and accessible AI systems.

Tackling AI Hype Through Education
The hype surrounding generative AI feels inescapable, often perpetuated by companies, researchers, and journalists alike. Princeton's Arvind Narayanan and PhD candidate Sayash Kapoor advocate for addressing AI's limitations with proper education. Their new book, AI Snake Oil, critiques misleading claims about AI's capabilities. They emphasize that while AI tools can influence society, understanding key concepts like machine learning is crucial.

AI Takes Over Police Body Cam Review, Raises Questions About Officer Behavior
Human reviewers can't keep up with police body cam footage, so AI tools like Truleo are being used to assess officer behavior. While some departments see improvements in professionalism, concerns arise over AI's role and whether officers may adjust behavior to "game" the system for better evaluations.

Tech Policy Podcast
385: AI Snake Oil

Tech Policy Podcast

Play Episode Listen Later Sep 23, 2024 53:58


Sayash Kapoor (Princeton) discusses the incoherence of precise p(doom) predictions and the pervasiveness of AI "snake oil." Check out his and Arvind Narayanan's new book, AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference.

Topics include:
What's a prediction, really?
p(doom): your guess is as good as anyone's
Freakishly chaotic creatures (us, that is)
AI can't predict the impact of AI
Gaming AI with invisible ink
Life is luck—let's act like it
Superintelligence (us, that is)
The bitter lesson
AI danger: sweat the small stuff

Links:
AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference
AI Existential Risk Probabilities Are Too Unreliable to Inform Policy
AI Snake Oil (Substack)

Masters of Privacy
Daniel Jaye: non-deprecated cookies (II), hyper-federated data, p3p and publishers

Masters of Privacy

Play Episode Listen Later Sep 16, 2024 23:14


This is our second interview analyzing the impact of Google's decision not to deprecate third-party cookies on its Chrome browser. Daniel Jaye is a seasoned technology industry executive and is currently CEO and founder of Aqfer, a Marketing Data Platform on top of which businesses can build their own MarTech and AdTech solutions. Daniel has provided strategic, tactical and technology advisory services to a wide range of marketing technology and big data companies. Clients have included Brave Browser, Altiscale, ShareThis, Ghostery, OwnerIQ, Netezza, Akamai, and Tremor Media. He was the founder and CEO of Korrelate, a leading automotive marketing attribution company (purchased by J.D. Power in 2014), as well as the former president of TACODA (bought by AOL in 2007). Daniel was also the founder and CTO of Permissus, an enterprise privacy compliance technology provider. All of the above were preceded by his roles as founder and CTO of Engage, acting CTO of CMGI, and director of High Performance Computing at Fidelity Investments. He also worked at Epsilon and Accenture (formerly Andersen Consulting). Daniel Jaye graduated magna cum laude with a BA in Astronomy and Astrophysics and Physics from Harvard University.

References:
Daniel Jaye on LinkedIn
Aqfer
P3P: Platform for Privacy Preferences (W3C)
Luke Mulks (Brave Browser) on Masters of Privacy
Adnostic: Privacy Preserving Targeted Advertising (paper by Vincent Toubiana, Arvind Narayanan, Dan Boneh, Helen Nissenbaum, Solon Barocas)

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC: OpenAI's Newest Board Member, Zico Kolter on The Biggest Bottlenecks to the Performance of Foundation Models | The Biggest Questions and Concerns in AI Safety | How to Regulate an AI-Centric World

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Sep 4, 2024 60:23


Zico Kolter is a Professor and the Director of the Machine Learning Department at Carnegie Mellon University. His research spans several topics in AI and machine learning, including work in AI safety and robustness, LLM security, the impact of data on models, implicit models, and more. He also serves on the Board of OpenAI, as a Chief Expert for Bosch, and as Chief Technical Advisor to Gray Swan, a startup in the AI safety space.

In Today's Episode with Zico Kolter We Discuss:

1. Model Performance: What are the Bottlenecks?
Data: To what extent have we leveraged all available data? How can we get more value from the data that we have to improve model performance?
Compute: Have we reached a stage of diminishing returns where more data does not lead to an increased level of performance?
Algorithms: What are the biggest problems with current algorithms? How will they change in the next 12 months to improve model performance?

2. Sam Altman, Sequoia and Frontier Models on Data Centres:
Sam Altman: Does Zico agree with Sam Altman's statement that "compute will be the currency of the future"? Where is he right? Where is he wrong?
David Cahn @ Sequoia: Does Zico agree with David's statement, "we will never train a frontier model on the same data centre twice"?

3. AI Safety: What People Think They Know But Do Not:
What are people not concerned about today that is a massive concern with AI?
What are people concerned about that is not a true concern for the future?
Does Zico share Arvind Narayanan's concern, "the biggest danger is not that people will believe what they see, it is that they will not believe what they see"?
Why does Zico believe the analogy of AI to nuclear weapons is wrong and inaccurate?

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC: AI Scaling Myths: More Compute is not the Answer | The Core Bottlenecks in AI Today: Data, Algorithms and Compute | The Future of Models: Open vs Closed, Small vs Large with Arvind Narayanan, Professor of Computer Science @ Princeton

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Aug 28, 2024 51:55


Arvind Narayanan is a professor of Computer Science at Princeton and the director of the Center for Information Technology Policy. He is a co-author of the book AI Snake Oil and a prominent critic of the AI scaling myths about the importance of just adding more compute. He is also the lead author of a textbook on the computer science of cryptocurrencies, which has been used in over 150 courses around the world, and an accompanying Coursera course that has had over 700,000 learners.

In Today's Episode with Arvind Narayanan We Discuss:

1. Compute, Data, Algorithms: What is the Bottleneck?
Why does Arvind disagree with the commonly held notion that more compute will result in an equal and continuous level of model performance improvement?
Will we continue to see players move into the compute layer in the need to internalise the margin? What does that mean for Nvidia?
Why does Arvind not believe that data is the bottleneck? How does Arvind analyse the future of synthetic data? Where is it useful? Where is it not?

2. The Future of Models:
Does Arvind agree that this is the fastest commoditization of a technology he has seen?
How does Arvind analyse the future of the model landscape? Will we see a world of a few very large models or a world of many unbundled and verticalised models?
Where does Arvind believe the most value will accrue in the model layer?
Is it possible for smaller companies or university research institutions to even play in the model space given the intense cash needed to fund model development?

3. Education, Healthcare and Misinformation: When AI Goes Wrong:
What are the single biggest dangers that AI poses to society today?
To what extent does Arvind believe misinformation through generative AI is going to be a massive problem for democracies?
How does Arvind analyse AI impacting the future of education? What does he believe everyone gets wrong about AI and education?
Does Arvind agree that AI will be able to put a doctor in everyone's pocket? Where does he believe this theory is weak and falls down?

CAISzeit – In welcher digitalen Gesellschaft wollen wir leben?
Ist Gerechtigkeit programmierbar? Fairness und Transparenz in Algorithmen.

CAISzeit – In welcher digitalen Gesellschaft wollen wir leben?

Play Episode Listen Later Aug 27, 2024 38:39


Algorithms shape our lives: from the content we see on social media to the loans we are granted. But to what extent are algorithms fair and transparent? And what consequences can it have when they are not? Is justice programmable? We discuss these questions and more in this episode of CAISzeit with Miriam Fahimi. Miriam is a fellow at CAIS from April to September 2024 and is currently pursuing her doctorate in Science and Technology Studies at the Digital Age Research Center (D!ARC) at the University of Klagenfurt. She researches "fairness in algorithms" and spent more than a year and a half inside a credit company observing how transparent and fair algorithms are discussed there.

Recommendations on the topic:
Research:
· Digital Age Research Center (D!ARC), University of Klagenfurt. https://www.aau.at/digital-age-research-center/
· Meisner, C., Duffy, B. E., & Ziewitz, M. (2022). The labor of search engine evaluation: Making algorithms more human or humans more algorithmic? New Media & Society. https://doi.org/10.1177/14614448211063860
· Poechhacker, N., Burkhardt, M., & Passoth, J.-H. (2024). Recommender Systems beyond the Filter Bubble: Algorithmic Media and the Fabrication of Publics. In J. Jarke, B. Prietl, S. Egbert, Y. Boeva, H. Heuer, & M. Arnold (Eds.), Algorithmic Regimes (pp. 207–228). Amsterdam University Press. https://doi.org/10.1515/9789048556908-010
Popular science reading:
· Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
· Kate Crawford's website: https://katecrawford.net
Documentary:
· Coded Bias (German title: Vorprogrammierte Diskriminierung; available on Netflix): This documentary examines the biases in algorithms that MIT Media Lab researcher Joy Buolamwini uncovered in facial recognition systems. https://www.netflix.com/de/title/81328723
Newsletters:
· AI Snake Oil by Arvind Narayanan & Sayash Kapoor. https://www.aisnakeoil.com
· Ticker from D64 – Zentrum für Digitalen Fortschritt: https://kontakt.d-64.org/ticker/

Scaling Theory
#9 – Arvind Narayanan: Myths and Policies in Scaling AI

Scaling Theory

Play Episode Listen Later Aug 26, 2024 48:54


My guest is Arvind Narayanan, a Professor of Computer Science at Princeton University and the director of the Center for Information Technology Policy, also at Princeton. Arvind is renowned for his work on the societal impacts of digital technologies, including his textbook on fairness and machine learning, his online course on cryptocurrencies, and his research on data de-anonymization, dark patterns, and more. He has already amassed over 30,000 citations on Google Scholar. In just a few days, in late September 2024, Arvind will release a book co-authored with Sayash Kapoor titled "AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference." Having had the privilege of reading an early version, our conversation delves into some of the book's key arguments. We also explore what Arvind calls AI scaling myths, the reality of artificial general intelligence, how governments can scale effective AI policies, the importance of transparency, the role that antitrust can and cannot play, the societal impacts of scaling automation, and more. I hope you enjoy our conversation. Find me on X at @ProfSchrepel. Also, be sure to subscribe.

References:
➝ AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference (2024)
➝ AI scaling myths (2024)
➝ AI existential risk probabilities are too unreliable to inform policy (2024)
➝ Foundation Model Transparency Reports (2024)

The Nonlinear Library
LW - Superintelligent AI is possible in the 2020s by HunterJay

The Nonlinear Library

Play Episode Listen Later Aug 14, 2024 44:36


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Superintelligent AI is possible in the 2020s, published by HunterJay on August 14, 2024 on LessWrong. Back in June 2023, Soroush Pour and I discussed AI timelines on his podcast, The AGI Show. The biggest difference between us was that I think "machines more intelligent than people are likely to be developed within a few years", and he thinks that it's unlikely to happen for at least a few decades.[1] We haven't really resolved our disagreement on this prediction in the year since, so I thought I would write up my main reasons for thinking we're so close to superintelligence, and why the various arguments made by Soroush (and separately by Arvind Narayanan and Sayash Kapoor) aren't persuasive to me. Part 1 - Why I Think We Are Close Empirically You can pick pretty much any trend relating to AI & computation, and it looks like this:[2] We keep coming up with new benchmarks, and they keep getting saturated. While there are still some notable holdouts such as ARC-AGI, SWE-Bench, and GPQA, previous holdouts like MATH also looked like this until they were solved by newer models. If these trends continue, it's hard to imagine things that AI won't be able to do in a few years time[3], unless they are bottlenecked by regulation (like being a doctor), or by robot hardware limitations (like being a professional football player)[4]. Practically The empirical trends are the result of several different factors; changes in network architecture, choice of hyperparameters, optimizers, training regimes, synthetic data creation, and data cleaning & selection. There are also many ideas in the space that have not been tried at scale yet. Hardware itself is also improving -- chips continue to double in price performance every 2-3 years, and training clusters are scaling up massively. It's entirely possible that some of these trends slow down -- we might not have another transformers-level architecture advance this decade, for instance -- but the fact that there are many different ways to continue improving AI for the foreseeable future makes me think that it is unlikely for progress to slow significantly. If we run out of text data, we can use videos. If we run out of that, we can generate synthetic data. If synthetic data doesn't generalise, we can get more efficient with what we have through better optimisers and training schedules. If that doesn't work, we can find architectures which learn the patterns more easily, and so on. In reality, all of these will be done at the same time and pretty soon (arguably already) the AIs themselves will be doing a significant share of the research and engineering needed to find and test new ideas[5]. This makes me think progress will accelerate rather than slow. Theoretically Humans are an existence proof of general intelligence, and since human cognition is itself just computation[6], there is no physical law stopping us from building another general intelligence (in silicon) given enough time and resources[7]. We can use the human brain as an upper bound for the amount of computation needed to get AGI (i.e. we know it can be done with the amount of computation done in the brain, but it might be possible with less)[8]. We think human brains do an equivalent of between 10^12 and 10^28 FLOP[9] per second [a hilariously wide range]. Supercomputers today can do 10^18. 
The physical, theoretical limit seems to be approximately 10^48 FLOP per second per kilogram. We can also reason that humans are a lower bound for the compute efficiency of the AGI (i.e. we know that with this amount of compute, we can get human-level intelligence, but it might be possible to do it with less)[10]. If humans are more efficient than current AI systems per unit of compute, then we know that more algorithmic progress must be possible as well. In other words, there seems to be...
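To make the compute comparison quoted above concrete, here is a quick hedged calculation using only the figures cited in the excerpt (10^12 to 10^28 FLOP per second for the brain, 10^18 FLOP per second for a top supercomputer); the point is how wide the uncertainty really is.

```python
# Compare the excerpt's brain-compute estimates with a 1e18 FLOP/s supercomputer.
# All figures come from the text above; the enormous spread is the point.
brain_low, brain_high = 1e12, 1e28   # estimated brain FLOP/s (very wide range)
supercomputer = 1e18                 # FLOP/s of a top supercomputer today

print(f"Low estimate:  one machine ~ {supercomputer / brain_low:.0e} brains")    # 1e+06
print(f"High estimate: one brain  ~ {brain_high / supercomputer:.0e} machines")  # 1e+10
```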

The Nonlinear Library: LessWrong
LW - Superintelligent AI is possible in the 2020s by HunterJay

The Nonlinear Library: LessWrong

Play Episode Listen Later Aug 14, 2024 44:36


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Superintelligent AI is possible in the 2020s, published by HunterJay on August 14, 2024 on LessWrong. Back in June 2023, Soroush Pour and I discussed AI timelines on his podcast, The AGI Show. The biggest difference between us was that I think "machines more intelligent than people are likely to be developed within a few years", and he thinks that it's unlikely to happen for at least a few decades.[1] We haven't really resolved our disagreement on this prediction in the year since, so I thought I would write up my main reasons for thinking we're so close to superintelligence, and why the various arguments made by Soroush (and separately by Arvind Narayanan and Sayash Kapoor) aren't persuasive to me. Part 1 - Why I Think We Are Close Empirically You can pick pretty much any trend relating to AI & computation, and it looks like this:[2] We keep coming up with new benchmarks, and they keep getting saturated. While there are still some notable holdouts such as ARC-AGI, SWE-Bench, and GPQA, previous holdouts like MATH also looked like this until they were solved by newer models. If these trends continue, it's hard to imagine things that AI won't be able to do in a few years time[3], unless they are bottlenecked by regulation (like being a doctor), or by robot hardware limitations (like being a professional football player)[4]. Practically The empirical trends are the result of several different factors; changes in network architecture, choice of hyperparameters, optimizers, training regimes, synthetic data creation, and data cleaning & selection. There are also many ideas in the space that have not been tried at scale yet. Hardware itself is also improving -- chips continue to double in price performance every 2-3 years, and training clusters are scaling up massively. It's entirely possible that some of these trends slow down -- we might not have another transformers-level architecture advance this decade, for instance -- but the fact that there are many different ways to continue improving AI for the foreseeable future makes me think that it is unlikely for progress to slow significantly. If we run out of text data, we can use videos. If we run out of that, we can generate synthetic data. If synthetic data doesn't generalise, we can get more efficient with what we have through better optimisers and training schedules. If that doesn't work, we can find architectures which learn the patterns more easily, and so on. In reality, all of these will be done at the same time and pretty soon (arguably already) the AIs themselves will be doing a significant share of the research and engineering needed to find and test new ideas[5]. This makes me think progress will accelerate rather than slow. Theoretically Humans are an existence proof of general intelligence, and since human cognition is itself just computation[6], there is no physical law stopping us from building another general intelligence (in silicon) given enough time and resources[7]. We can use the human brain as an upper bound for the amount of computation needed to get AGI (i.e. we know it can be done with the amount of computation done in the brain, but it might be possible with less)[8]. We think human brains do an equivalent of between 10^12 and 10^28 FLOP[9] per second [a hilariously wide range]. Supercomputers today can do 10^18. 
The physical, theoretical limit seems to be approximately 10^48 FLOP per second per kilogram. We can also reason that humans are a lower bound for the compute efficiency of the AGI (i.e. we know that with this amount of compute, we can get human-level intelligence, but it might be possible to do it with less)[10]. If humans are more efficient than current AI systems per unit of compute, then we know that more algorithmic progress must be possible as well. In other words, there seems to be...

Fluidity
Artificial Neurons Considered Harmful, Part 2

Fluidity

Play Episode Listen Later Aug 11, 2024 28:26


The conclusion of this chapter.   So-called “neural networks” are extremely expensive, poorly understood, unfixably unreliable, deceptive, data hungry, and inherently limited in capabilities. In short: they are bad.  https://betterwithout.ai/artificial-neurons-considered-harmful Sayash Kapoor and Arvind Narayanan's "The bait and switch behind AI risk prediction tools": https://aisnakeoil.substack.com/p/the-bait-and-switch-behind-ai-risk A video titled "Latent Space Walk": https://www.youtube.com/watch?v=bPgwwvjtX_g Another video showing a walk through latent space: https://www.youtube.com/watch?v=YnXiM97ZvOM You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

CSAIL Alliances Podcasts
Hype vs. Reality: the Current State of AI with Arvind Narayanan

CSAIL Alliances Podcasts

Play Episode Listen Later Aug 5, 2024 33:29


Princeton Professor Arvind Narayanan, author of "AI Snake Oil," sheds light on the stark contrast between the public perception and actual capabilities of AI. In this podcast, he explores the significant gap between the excitement surrounding AI and its current limitations. Find out more about CSAIL Alliances, as well as a full transcript of this podcast, at https://cap.csail.mit.edu/podcasts/current-state-ai-arvind-narayanan. If you would like to learn more about the Cybersecurity for Technical Leaders Course, visit here:https://cap.csail.mit.edu/cybersecurity-technical-leaders-online-course-mit-csail-alliances Podcast listeners save 10% with code MITXTPOD10

Machine Learning Street Talk
Sayash Kapoor - How seriously should we take AI X-risk? (ICML 1/13)

Machine Learning Street Talk

Play Episode Listen Later Jul 28, 2024 49:42


How seriously should governments take the threat of existential risk from AI, given the lack of consensus among researchers? On the one hand, existential risks (x-risks) are necessarily somewhat speculative: by the time there is concrete evidence, it may be too late. On the other hand, governments must prioritize — after all, they don't worry too much about x-risk from alien invasions.

MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at brave.com/api.

Sayash Kapoor is a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. His research focuses on the societal impact of AI. Kapoor has previously worked on AI in both industry and academia, with experience at Facebook, Columbia University, and EPFL Switzerland. He is a recipient of a best paper award at ACM FAccT and an impact recognition award at ACM CSCW. Notably, Kapoor was included in TIME's inaugural list of the 100 most influential people in AI.

Sayash Kapoor: https://x.com/sayashk https://www.cs.princeton.edu/~sayashk/
Arvind Narayanan (other half of the AI Snake Oil duo): https://x.com/random_walker
AI existential risk probabilities are too unreliable to inform policy: https://www.aisnakeoil.com/p/ai-existential-risk-probabilities
Pre-order AI Snake Oil Book: https://amzn.to/4fq2HGb
AI Snake Oil blog: https://www.aisnakeoil.com/
AI Agents That Matter: https://arxiv.org/abs/2407.01502
Shortcut learning in deep neural networks: https://www.semanticscholar.org/paper/Shortcut-learning-in-deep-neural-networks-Geirhos-Jacobsen/1b04936c2599e59b120f743fbb30df2eed3fd782
77% Of Employees Report AI Has Increased Workloads And Hampered Productivity, Study Finds: https://www.forbes.com/sites/bryanrobinson/2024/07/23/employees-report-ai-increased-workload/

TOC:
00:00:00 Intro
00:01:57 How seriously should we take Xrisk threat?
00:02:55 Risk too unreliable to inform policy
00:10:20 Overinflated risks
00:12:05 Perils of utility maximisation
00:13:55 Scaling vs airplane speeds
00:17:31 Shift to smaller models?
00:19:08 Commercial LLM ecosystem
00:22:10 Synthetic data
00:24:09 Is AI complexifying our jobs?
00:25:50 Does ChatGPT make us dumber or smarter?
00:26:55 Are AI Agents overhyped?
00:28:12 Simple vs complex baselines
00:30:00 Cost tradeoff in agent design
00:32:30 Model eval vs downstream perf
00:36:49 Shortcuts in metrics
00:40:09 Standardisation of agent evals
00:41:21 Humans in the loop
00:43:54 Levels of agent generality
00:47:25 ARC challenge

JAMA Medical News: Discussing timely topics in clinical medicine, biomedical sciences, public health, and health policy

Amid the surging buzz around artificial intelligence (AI), can we trust the AI hype, and more importantly, are we ready for its implications? In this Q&A, Arvind Narayanan, PhD, a professor of computer science at Princeton, joins JAMA's Editor in Chief Kirsten Bibbins-Domingo, PhD, MD, MAS, to discuss the exploration of AI's fairness, transparency, and accountability. Related Content: How to Navigate the Pitfalls of AI Hype in Health Care

The Ezra Klein Show
A Lot Has Happened in A.I. Let's Catch Up.

The Ezra Klein Show

Play Episode Listen Later Dec 1, 2023 70:21


Thursday marked the one-year anniversary of the release of ChatGPT. A lot has happened since. OpenAI, the makers of ChatGPT, recently dominated headlines again after the nonprofit board of directors fired C.E.O. Sam Altman, only for him to return several days later. But that drama isn't actually the most important thing going on in the A.I. world, which hasn't slowed down over the past year, even as people are still discovering ChatGPT for the first time and reckoning with all of its implications.

Tech journalists Kevin Roose and Casey Newton are hosts of the weekly podcast “Hard Fork.” Roose is my colleague at The Times, where he writes a tech column called “The Shift.” Newton is the founder and editor of Platformer, a newsletter about the intersection of technology and democracy. They've been closely tracking developments in the field since well before ChatGPT launched. I invited them on the show to catch up on the state of A.I.

We discuss: who is — and isn't — integrating ChatGPT into their daily lives, the ripe market for A.I. social companions, why so many companies are hesitant to dive in, progress in the field of A.I. “interpretability” research, America's “fecklessness” that cedes major A.I. benefits to the private sector, and much more.

Recommendations:
Electrifying America by David E. Nye
Your Face Belongs to Us by Kashmir Hill
“Intro to Large Language Models” by Andrej Karpathy (video)
Import AI by Jack Clark
AI Snake Oil by Arvind Narayanan and Sayash Kapoor
Pragmatic Engineer by Gergely Orosz

Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact checking by Michelle Harris, with Kate Sinclair and Mary Marge Locker. Our senior engineer is Jeff Geld. Our senior editor is Claire Gordon. The show's production team also includes Emefa Agawu and Kristin Lin. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. And special thanks to Sonia Herrero.

Forbes India Daily Tech Brief Podcast
US President Biden moves to establish AI guardrails with Executive Order

Forbes India Daily Tech Brief Podcast

Play Episode Listen Later Nov 1, 2023 5:26


In today's episode we take a quick look at US President Joe Biden's executive order to regulate AI, but first one other headline that's caught everyone's attention at home.

Headlines

Several politicians from various opposition parties in India have been sent notifications by Apple that they were being targeted by "state-sponsored attackers," according to multiple media reports. Among those who may have been targeted are members of parliament including TMC's Mahua Moitra, Shiv Sena (UBT)'s Priyanka Chaturvedi, Congress's Pawan Khera and Shashi Tharoor, AAP's Raghav Chadha, and CPIM's Sitaram Yechury, Moneycontrol reports, citing the politicians as saying they have received notifications from Apple stating that their devices were being targeted by state-sponsored attackers.

One thing today

US President Joe Biden yesterday issued an executive order outlining new regulations and safety requirements for artificial intelligence (AI) technologies, as the pace at which such technologies are advancing has alarmed governments around the world about the potential for their misuse. The order, which runs into some 20,000 words, introduces a safety measure by defining a threshold based on computing power for AI models. AI models trained with a computing power of 10^26 floating-point operations, or flops, will be subject to these new rules. This threshold surpasses the current capabilities of AI models, including GPT-4, but is expected to apply to next-generation models from prominent AI companies such as OpenAI, Google, Anthropic, and others, Casey Newton, a prominent technology writer who attended the White House conference at which President Biden announced the new rules yesterday, notes in his newsletter, Platformer.

Companies developing models that meet this criterion must conduct safety tests and share the results with the government before releasing their AI models to the public. This mandate builds on voluntary commitments by 15 major tech companies earlier this year, Newton writes in his letter. The sweeping executive order addresses various potential harms related to AI technologies and their applications, ranging from telecom and wireless networks to energy and cybersecurity. It assigns the US Commerce Department the task of establishing standards for digital watermarks and other authenticity verification methods to combat deepfake content. It mandates AI developers to assess their models' potential for aiding in the development of bioweapons, and orders agencies to conduct risk assessments related to AI's role in chemical, biological, radiological, and nuclear weapons.

Newton references an analysis of the executive order by computer scientists Arvind Narayanan, Sayash Kapoor and Rishi Bommasani to point out that despite these significant steps, the executive order leaves some important issues unaddressed. Notably, it lacks specific requirements for transparency in AI development, such as pre-training data, fine-tuning data, the labour involved in annotation, model evaluation, usage, and downstream impacts. Experts like them argue that transparency is essential for ensuring accountability and preventing potential biases and unintended consequences in AI applications. The order also doesn't address the current debate surrounding open-source AI development versus proprietary tech. The choice between open-source models, as advocated by Meta and Stability AI, and closed models, like those pursued by OpenAI and Google, has become a contentious issue, Newton writes.
Prominent scientists, such as Stanford University Professor Andrew Ng, who previously founded Google Brain, have criticised the large tech companies for seeking industry regulation as a way of stifling open-source competition. They argue that while regulation is necessary, open-source AI research fosters innovation and democratizes technology.
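For a feel of what a 10^26 FLOP threshold means in practice, here is a rough, hypothetical back-of-the-envelope sketch using the common approximation that training compute is about 6 x parameters x training tokens. The model sizes and token counts below are invented for illustration and are not figures from the order or from any company.

```python
# Back-of-the-envelope check against a 1e26 FLOP training-compute threshold.
# Uses the common approximation: training FLOPs ~= 6 * parameters * training tokens.
# The example runs below are hypothetical.
THRESHOLD_FLOP = 1e26

def training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

runs = {
    "70B params, 15T tokens": training_flops(70e9, 15e12),    # ~6.3e24
    "400B params, 40T tokens": training_flops(400e9, 40e12),  # ~9.6e25
    "1T params, 60T tokens": training_flops(1e12, 60e12),     # ~3.6e26
}

for name, flops in runs.items():
    status = "covered by the rule" if flops >= THRESHOLD_FLOP else "below threshold"
    print(f"{name}: {flops:.1e} FLOP -> {status}")
```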

Marketplace Tech
The advantages —and drawbacks — of decentralized social networks

Marketplace Tech

Play Episode Listen Later Jul 28, 2023 11:58


It's been just a few weeks since the new Threads app burst onto the scene, threatening to be the ultimate Twitter-killer, or platform-formerly-known-as-Twitter-killer. But Threads has promised not just an alternative to the former bird app, but an alternative model of social media, one that is decentralized and interoperable. So how is this model different from the classic flavor most of us are used to? Marketplace's Meghan McCarty Carino asked Arvind Narayanan, a professor of computer science at Princeton.

Marketplace All-in-One
The advantages —and drawbacks — of decentralized social networks

Marketplace All-in-One

Play Episode Listen Later Jul 28, 2023 11:58


It's been just a few weeks since the new Threads app burst onto the scene, threatening to be the ultimate Twitter-killer, or platform-formerly-known-as-Twitter-killer. But Threads has promised not just an alternative to the former bird app, but an alternative model of social media, one that is decentralized and interoperable. So how is this model different from the classic flavor most of us are used to? Marketplace's Meghan McCarty Carino asked Arvind Narayanan, a professor of computer science at Princeton.

Marketplace Tech
What are the ethical hazards in the effort to commercialize AI?

Marketplace Tech

Play Episode Listen Later Mar 15, 2023 10:19


Microsoft's Bing chatbot has displayed some strange, sometimes inappropriate responses. Could training in ethics help? Meghan McCarty Carino spoke with Arvind Narayanan, a computer science professor at Princeton University, about the ethical concerns he sees increasing around artificial intelligence.

Marketplace All-in-One
What are the ethical hazards in the effort to commercialize AI?

Marketplace All-in-One

Play Episode Listen Later Mar 15, 2023 10:19


Microsoft's Bing chatbot has displayed some strange, sometimes inappropriate responses. Could training in ethics help? Meghan McCarty Carino spoke with Arvind Narayanan, a computer science professor at Princeton University, about the ethical concerns he sees increasing around artificial intelligence.

Spark from CBC Radio
558: What AI can and can't do

Spark from CBC Radio

Play Episode Listen Later Nov 10, 2022 54:02


We've seen remarkable gains in artificial intelligence – but only in specific, narrow domains, like fraud prevention or navigation. One reason for that is the way AI innovations get adopted. Another is our poor ability to distinguish between real progress and so-called AI snake oil. This week, we demystify AI with guests Ajay Agrawal, professor in University of Toronto's Rotman School of Management, founder of the Creative Destruction Lab, and co-author of a new book, Power and Prediction: The Disruptive Economics of Artificial Intelligence; and Arvind Narayanan, professor of computer science at Princeton University and co-author of the newsletter and forthcoming book called AI Snake Oil.

WERU 89.9 FM Blue Hill, Maine Local News and Public Affairs Archives
Notes from the Electronic Cottage 11/10/22: AI Snake Oil

WERU 89.9 FM Blue Hill, Maine Local News and Public Affairs Archives

Play Episode Listen Later Nov 10, 2022 8:39


Producer/Host: Jim Campbell

Suppose you were an academic who posted some slides from a lecture on your university's archive page, and suppose that tens of thousands of people found them and downloaded them and 2 million people read your Twitter feed on the subject. Would you be surprised? This really happened to Arvind Narayanan, and therein lies the source of a book in progress and a blog underway entitled "AI Snake Oil." That title alone should make it worth listening to today's episode of the Electronic Cottage. Here are links to the sources mentioned in the program:
AI Snake Oil, Substack
How to Recognize AI Snake Oil, Arvind Narayanan, Associate Professor of Computer Science, Princeton University
A checklist of eighteen pitfalls in AI journalism, Sayash Kapoor, Arvind Narayanan. September 30, 2022

About the host: Jim Campbell has a longstanding interest in the intersection of digital technology, law, and public policy and how they affect our daily lives in our increasingly digital world. He has banged around non-commercial radio for decades and, in the little known facts department (that should probably stay that way), he was one of the readers voicing Richard Nixon's words when NPR broadcast the entire transcript of the Watergate tapes. Like several other current WERU volunteers, he was at the station's sign-on party on May 1, 1988 and has been a volunteer ever since, doing an early stint as a Morning Maine host, and later producing WERU program series including Northern Lights, Conversations on Science and Society, Sound Portrait of the Artist, Selections from the Camden Conference, others that will probably come to him after this is posted, and, of course, Notes from the Electronic Cottage.

The post Notes from the Electronic Cottage 11/10/22: AI Snake Oil first appeared on WERU 89.9 FM Blue Hill, Maine Local News and Public Affairs Archives.

Reimagining the Internet
See Through AI Hype with Arvind Narayanan

Reimagining the Internet

Play Episode Listen Later Oct 5, 2022 36:34


Arvind Narayanan is a Princeton computer science professor who wants to make it easy for you to cut through the AI hype. In a fascinating and plain old helpful interview, Arvind runs through all the big claims made about AI today and makes them very simple to understand.

ada: Heute das Morgen verstehen
Der große KI-Bluff

ada: Heute das Morgen verstehen

Play Episode Listen Later Dec 13, 2019 29:37


Many companies promise better decisions and fairer selection processes thanks to artificial intelligence. Experts worry that the potential of AI is being overestimated, partly because there is so much confusion about what AI actually can and cannot do. The overstatement of its capabilities has become a business model, one that could even hold back the development of AI. You can find the presentation by Arvind Narayanan that we refer to in the podcast here: https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf Our book tips from this episode: Katharina Zweig, "Ein Algorithmus kennt kein Taktgefühl"; Gary Marcus & Ernest Davis, "Rebooting AI: Building Artificial Intelligence We Can Trust". This podcast is a production of the ada editorial team: https://join-ada.com/ The advertising partner for this episode is MINI Deutschland. You can find the podcast "The Sooner Now" here: www.thesoonernow.com

The Compline Service from St. Mark's Cathedral
The Office of Compline for March 31, 2019

The Compline Service from St. Mark's Cathedral

Play Episode Listen Later Apr 1, 2019 35:33


March 31, 2019 The Fourth Sunday in Lent: Laetare ORISON: Drop, drop, slow tears (Tune: Song 46) – Orlando Gibbons (1583-1625) HYMN: A Hymne to God the Father – Thomas Joyce PSALM 32 – Peter R. Hallock (1924-2014) NUNC DIMITTIS: Plainsong setting, Tone III.6 ANTHEM: Lamentations of the Prophet Jeremiah – Peter R. Hallock Tyler Morse, alto soloist • Arvind Narayanan, tenor soloist Jason Anderson, director • William Turnipseed, reader • Tyler […]

The Compline Service from St. Mark's Cathedral
The Office of Compline for December 2, 2018

The Compline Service from St. Mark's Cathedral

Play Episode Listen Later Dec 3, 2018 28:28


December 2, 2018 The First Sunday of Advent ORISON: Matin Responsory – Jason A. Anderson (b. 1976) PSALM 25:1-9 – Plainsong, Tone IV.2 HYMN: Come, thou long-expected Jesus (Tune: Stuttgart) – mel. From Psalmodia Sacra, 1715; adapt. and harm. William Havergal (1793-1870), alt. NUNC DIMITTIS – Richard Proulx (1937-2010) ANTHEM: Sive vigilem – William Mundy (c. 1529-1591) Jason Anderson, director • William Turnipseed, reader • Arvind Narayanan, cantor

Three Minute Book Review
Bitcoin & Cryptocurrency Technologies, by Arvind Narayanan et al

Three Minute Book Review

Play Episode Listen Later Sep 17, 2018 3:02


A readable textbook on the workings of Bitcoin and related technologies. The information is still relevant to Bitcoin (minus the Lightning Network upgrade), but the book only lightly covers other crypto technologies that have risen since it was published. Overall, I recommend it! Book: https://www.amazon.com/Bitcoin-Cryptocurrency-Technologies-Comprehensive-Introduction/dp/0691171696 Class: https://www.coursera.org/learn/cryptocurrency This video is also available on YouTube and BitChute.

Curious with Jeremy
S01E05 – Privacy, Security, and the Blockchain (Arvind Narayanan)

Curious with Jeremy

Play Episode Listen Later Oct 16, 2017 39:12


I’m joined today by Arvind Narayanan, an assistant professor at Princeton University and an affiliate scholar at Stanford Law School’s Center for Internet and Society. He’s well known for his work on privacy, security, and de-anonymization, and he’s one of the foremost scholars on privacy and security in cryptocurrency and the blockchain. In this episode, we cover… * What led Arvind to become so fascinated with privacy and de-anonymization. * How the public - and even experts - dramatically underestimate the ability to take “anonymous” data and link it back to identities through relatively simple processes. * The fragility of privacy. * The vast amount of data leaks even in cryptocurrencies. * The limitations of smart contracts. * Current and potential future societal impacts of blockchain technology. * Why some technology threats are vastly over-examined and others are ignored. The post S01E05 – Privacy, Security, and the Blockchain (Arvind Narayanan) appeared first on CURIOUS.
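The linkage attacks mentioned in this episode are easier to appreciate with a concrete sketch. The toy Python example below is purely illustrative, not anything discussed verbatim in the interview: all records, field names (zip, birth_date, sex), and helper functions are invented. It shows how joining a "de-identified" dataset against a public one on a few quasi-identifiers can be enough to re-identify individuals.

```python
# Toy illustration of a linkage (re-identification) attack.
# All data and field names here are made up for demonstration.

# "Anonymized" dataset: direct identifiers removed, quasi-identifiers kept.
medical_records = [
    {"zip": "08544", "birth_date": "1978-05-02", "sex": "F", "diagnosis": "asthma"},
    {"zip": "08540", "birth_date": "1983-11-19", "sex": "M", "diagnosis": "diabetes"},
]

# Public dataset (e.g., a voter roll) that still carries names.
voter_roll = [
    {"name": "Jane Roe", "zip": "08544", "birth_date": "1978-05-02", "sex": "F"},
    {"name": "John Doe", "zip": "08540", "birth_date": "1983-11-19", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_date", "sex")

def quasi_id(record):
    """Project a record onto its quasi-identifier fields."""
    return tuple(record[field] for field in QUASI_IDENTIFIERS)

# Index the public dataset by quasi-identifiers, then join against it.
by_quasi_id = {quasi_id(person): person["name"] for person in voter_roll}

for record in medical_records:
    name = by_quasi_id.get(quasi_id(record))
    if name is not None:
        print(f"{name} -> {record['diagnosis']}")
```

The point is not that any particular dataset is vulnerable, but that a handful of innocuous-looking attributes can act as a near-unique fingerprint once a second dataset is available to join against.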

Curious with Jeremy
S01E03 – How Does Cryptocurrency Work? (Part 2) – with Bernard Golden & Arvind Narayanan

Curious with Jeremy

Play Episode Listen Later Oct 2, 2017 40:59


In this episode, I’m joined by Bernard Golden and Arvind Narayanan to discuss some of the deeper technologies underlying cryptocurrencies. This includes discussions of… * The basics and history of modern cryptography. * What’s the best way to explain the technology and importance of bitcoin and blockchain? * Mining - How does the blockchain securely record transactions without high risk of hacking or fraud? * Is it important for more people to better understand the technical side of cryptocurrency? * What are the biggest advantages of a global distributed ledger? * What’s a good basic explanation of hash functions and hash pointers? * In asymmetric cryptography, how is anyone able to verify a digital signature without a private key? The post S01E03 – How Does Cryptocurrency Work? (Part 2) – with Bernard Golden & Arvind Narayanan appeared first on CURIOUS.
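One of the questions raised here, how a blockchain records transactions tamper-evidently, comes down to hash pointers: each block stores the hash of the previous block, so altering any earlier block invalidates every later hash. The minimal Python sketch below (invented function names, standard-library hashlib only) illustrates that idea; it is not an implementation of Bitcoin or of anything described in the episode itself.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents, including its pointer to the previous block."""
    serialized = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(serialized).hexdigest()

def append_block(chain, transactions):
    """Append a block whose 'prev_hash' field commits to the previous block."""
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"transactions": transactions, "prev_hash": prev_hash})

def verify(chain):
    """Recompute every hash pointer; any edit to an earlier block breaks the chain."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, ["alice pays bob 1"])
append_block(chain, ["bob pays carol 2"])
print(verify(chain))   # True: every block matches its predecessor's hash

chain[0]["transactions"] = ["alice pays mallory 1000"]  # tamper with history
print(verify(chain))   # False: the next block's hash pointer no longer matches
```

The separate question of verifying a signature without the private key is the defining property of asymmetric cryptography: the private key signs, and the mathematically related public key verifies, so the verifier never needs access to any secret.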

What's The Point
WTP Best Of: Internet Tracking

What's The Point

Play Episode Listen Later Mar 23, 2017 32:25


Jody interviews Arvind Narayanan about the latest in online tracking, and what you can do to shield yourself.

What's The Point
.59 Your Browser's Fingerprint

What's The Point

Play Episode Listen Later Sep 1, 2016 31:07


A new survey of one million websites reveals the latest tricks being used to track your online behavior. Arvind Narayanan of Princeton University discusses his research.

Center for Internet and Society
Arvind Narayanan - Hearsay Culture Show #238 - KZSU-FM (Stanford)

Center for Internet and Society

Play Episode Listen Later May 27, 2015 57:02


I am pleased to post Show #238, May 27, my interview with Prof. Arvind Narayanan of Princeton University on Bitcoin, cryptography, privacy and web transparency. Arvind does a range of information policy-related research and writing as a professor affiliated with Princeton's Center for Information Technology Policy (CITP). [Note: I am a Visiting Research Collaborator at CITP]. Through studying the operation of and security challenges surrounding the cryptocurrency Bitcoin, Arvind has been able to assess cryptography as a privacy-enhancing and dis-intermediating technology. To that end, we had a wide-ranging discussion, from the varied roles of cryptography to commercial surveillance and transparency. Because Arvind is such a dynamic and interdisciplinary scholar, we had a wonderful conversation that I hope you enjoy! {Hearsay Culture is a talk show on KZSU-FM, Stanford, 90.1 FM, hosted by Center for Internet & Society Resident Fellow David S. Levine. The show includes guests and focuses on the intersection of technology and society. How is our world impacted by the great technological changes taking place? Each week, a different sphere is explored. For more information, please go to http://hearsayculture.com.}

CERIAS Security Seminar Podcast
Vitaly Shmatikov, Obfuscated Databases: Definitions and Constructions

CERIAS Security Seminar Podcast

Play Episode Listen Later Feb 2, 2005 50:11


I will present some new definitions and constructions for privacy in large databases. In contrast to conventional privacy mechanisms that aim to prevent any access to individual records, our techniques are designed to prevent indiscriminate harvesting of information while enabling some forms of legitimate access. We start with a simple construction for an obfuscated database that is provably indistinguishable from a black-box lookup oracle (in the random oracle model). Some attributes of the database are designated as "key," the rest as "data." The database behaves as a lookup oracle if, for any record, it is infeasible to extract the data fields without specifying the key fields, yet, given the values of the key fields, it is easy to retrieve the corresponding data fields. We then generalize our constructions to a larger class of queries, and achieve a privacy property we call "group privacy." It ensures that users can retrieve individual records or small subsets of records from the database by identifying them precisely. The database is obfuscated in such a way that queries returning a large subset of records are computationally infeasible. This is joint work with Arvind Narayanan. About the speaker: Vitaly Shmatikov is an assistant professor in the Department of Computer Sciences at the University of Texas at Austin. Prior to joining UT, he worked as a computer scientist at SRI International. Vitaly's research focuses on tools and formal methods for automated analysis and verification of secure systems, as well as various aspects of anonymity and privacy. Vitaly received his PhD in 2000 from Stanford University, with a thesis on "Finite-State Analysis of Security Protocols."
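The lookup-oracle behavior described in this abstract can be approximated with a toy model: derive a per-record secret by hashing the designated key fields (standing in for the random oracle) and encrypt the data fields under that secret, so retrieval is easy given the key fields while bulk harvesting reduces to guessing them. The Python sketch below is our own illustration under those assumptions, with invented function names and a throwaway hash-based keystream; it is not the construction from Shmatikov and Narayanan's paper.

```python
import hashlib

def _keystream(secret, length):
    """Derive a keystream of the requested length from a secret (toy construction)."""
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(secret + counter.to_bytes(4, "big")).digest()
        counter += 1
    return stream[:length]

def obfuscate_record(key_fields, data_fields):
    """Encrypt the data fields under a secret derived by hashing the key fields."""
    secret = hashlib.sha256("|".join(key_fields).encode()).digest()
    plaintext = "|".join(data_fields).encode()
    return bytes(p ^ k for p, k in zip(plaintext, _keystream(secret, len(plaintext))))

def lookup(ciphertext, key_fields):
    """Recover the data fields; only the correct key fields yield meaningful output."""
    secret = hashlib.sha256("|".join(key_fields).encode()).digest()
    plaintext = bytes(c ^ k for c, k in zip(ciphertext, _keystream(secret, len(ciphertext))))
    return plaintext.decode(errors="replace").split("|")

record = obfuscate_record(["Jane Roe", "1978-05-02"], ["555-0100", "jane@example.com"])
print(lookup(record, ["Jane Roe", "1978-05-02"]))   # correct key fields: data recovered
print(lookup(record, ["John Doe", "1983-11-19"]))   # wrong key fields: unreadable bytes
```

Roughly speaking, indiscriminate harvesting in this toy model corresponds to decrypting without knowing the key fields, which reduces to guessing them, while retrieving a record you can already identify costs a single hash and decryption.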
