Mystery AI Hype Theater 3000


Artificial Intelligence has too much hype. In this podcast, linguist Emily M. Bender and sociologist Alex Hanna break down the AI hype, separating fact from fiction and science from bloviation. Joined by special guests, they talk about everything from machine consciousness to science fiction, from political economy to art made by machines.

Emily M. Bender and Alex Hanna


    • Latest episode: May 28, 2025
    • New episodes: every other week
    • Average duration: 1h 1m
    • Episodes: 57



    Latest episodes from Mystery AI Hype Theater 3000

    The "AI"-Enabled Immigration Panopticon (with Petra Molnar), 2025.05.05

    Released May 28, 2025 · 61:58 · Transcription available


    This week, Alex and Emily talk with anthropologist and immigration lawyer Petra Molnar about the dehumanizing hype of border-enforcement tech. From hoovering up data to hunt anyone of ambiguous citizenship status, to running surveillance of physical borders themselves, "AI" tech is everywhere in the enforcement of national borders. And as companies ranging from Amazon, to NSO Group, to Palantir all profit, this widening of automation threatens a future of faceless human rights violations with no attempt at accountability of any kind.

    Petra Molnar is associate director of York University's Refugee Law Lab, and a faculty associate at the Berkman Klein Center for Internet and Society at Harvard University. She's also the author of the book The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.

    References:
    • Department of Homeland Security: Robot Dogs Take Another Step Towards Deployment at the Border
    • Leaked: Palantir's Plan to Help ICE Deport People
    • Athens prepares to host DEFEA 2025, a major hub for international defence cooperation

    Fresh AI Hell:
    • Meta served teen girls beauty product ads whenever they deleted selfies
    • Dating app/luxury surveillance leaks personal info
    • "AI" for subway crime prediction
    • CA used "AI" to make bar exam questions
    • CA using "AI" tool to bypass building permit process
    • Wildly unethical "AI persuasion" research on Reddit users
    • AI makeup to retouch Holocaust images

    Check out future streams on Twitch. Meanwhile, send us any AI Hell you see. Our book, 'The AI Con,' comes out in May! Pre-order now. Subscribe to our newsletter via Buttondown.

    Follow us!
    Emily: Bluesky: emilymbender.bsky.social / Mastodon: dair-community.social/@EmilyMBender
    Alex: Bluesky: alexhanna.bsky.social / Mastodon: dair-community.social/@alex / Twitter: @alexhanna

    Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.

    AGI: "Imminent", "Inevitable", and Inane, 2025.04.21

    Released May 14, 2025 · 65:02 · Transcription available


    Emily and Alex pore over an elaborate science fiction scenario about the "inevitability" of Artificial General Intelligence, or AGI, by the year 2027 -- which rests atop a foundation of TESCREAL nonsense, and Sinophobia to boot.

    References:
    • AI 2027

    Fresh AI Hell:
    • AI persona bots for undercover cops
    • Palantir heart eyes Keir Starmer
    • Anti-vaxxers are grifting off the measles outbreak with AI-formulated supplements
    • The cost, environmental and otherwise, of being polite to ChatGPT
    • Actors who sold voice & likeness find it used for scams
    • Addictive tendencies and ChatGPT (satire)

    AI Hell in a Handbasket, 2025.04.14

    Released Apr 30, 2025 · 60:25 · Transcription available


    It's been four months since we last cleared the backlog of Fresh AI Hell, and the bullshit is coming in almost too fast to keep up with. But between a page full of awkward unicorns and a seeming slowdown in data center demand, Alex and Emily have more good news than usual to accompany this round of catharsis.

    AI Hell:
    • LLM processing like human language processing (not)
    • Jack Clark predicting AGI
    • Sébastien Bubeck says predictions in "sparks" paper have already come true
    • WIRED puff piece on the Amodeis
    • Foundation agents & leaning in to the computational metaphor (Fig 1, p14)
    • Chaser: Trying to recreate the GPT unicorn
    • The WSJ has an AI bot for all your tax questions
    • ChatGPT libel
    • AOL.com uses autogenerated captions about attempted murder
    • AI coding tools fix bugs by adding bugs
    • "We teach AGI to think, so you don't have to" (from Turing.com)
    • MAGA/DOGE paints teachers as glorified babysitters in push for AI
    • Chaser: How we are NOT using AI in the classroom
    • AI benchmarks are self-promoting trash, but regulators keep using them
    • DOGE is pushing AI tool created as "sandbox" for federal testing
    • "Psychological profiling" based on social media
    • The tariffs and ChatGPT
    • "I was not informed that Microsoft would sell my work to the Israeli military and government"
    • Microsoft fires engineers who protested Israeli military use of its tools
    • Pulling back on data centers, Microsoft edition
    • Abandoned data centers, China edition
    • Bill Gates: 2-day workweek coming thanks to AI...replacing doctors and teachers??
    • Chaser: Tesla glue fail schadenfreude
    • Chaser: Let's talk about the genie trope
    • Chaser: We finally met!!!

    "AI" Agents, A Single Point of Failure (with Margaret Mitchell), 2025.03.31

    Released Apr 17, 2025 · 63:02 · Transcription available


    After "AI" stopped meaning anything, the hype salesmen moved on to "AI" "agents", those allegedly indefatigable assistants, allegedly capable of operating your software for you -- whether you need to make a restaurant reservation, book a flight, or book a flight to a restaurant reservation. Hugging Face's Margaret Mitchell joins Emily and Alex to help break down what agents actually are, and what to actually worry about.

    References:
    • PwC launches AI agent operating system to revolutionize AI workflows for enterprises
    • An Open-Source AI Agent for Doing Tasks on the Web
    • Scale AI announces multimillion-dollar defense deal, a major step in U.S. military automation

    Other references:
    • Why handing over total control to AI agents would be a huge mistake
    • Fully Autonomous AI Agents Should Not be Developed
    • Bender vs. Bubeck: The Great Chatbot Debate: Do LLMs Really Understand?
    • Democratize art

    Fresh AI Hell:
    • DOGE suggests replacing workers with "AI" (of course)
    • Vape, or the tamagotchi gets it (via @maaikeverbruggen)
    • "AI" for psychotherapy, still bad, still hyped
    • Biology (not) of LLMs
    • Mark Cuban's grifty chatbot (via @Hypervisible)
    • Palate cleanser: "AI is the letdown" (https://www.cnn.com/2025/03/27/tech/apple-ai-artificial-intelligence/index.html)
    • Comic relief: "Fortified with AI"

    Linguists Versus 'AI' Speech Analysis (with Nicole Holliday), 2025.03.17

    Released Apr 2, 2025 · 60:28 · Transcription available


    Measuring your talk time? Counting your filler words? What about "analyzing" your "emotions"? Companies that push LLM technology to surveil and summarize video meetings are increasingly offering to (purportedly) analyze your participation and assign your speech some metrics, all in the name of "productivity". Sociolinguist Nicole Holliday joins Alex and Emily to take apart claims about these "AI" meeting feedback tools, and reveal them to be just sparkling bossware, with little insight into how we talk.

    Nicole Holliday is Acting Associate Professor of Linguistics at the University of California, Berkeley.

    Quick note: Our guest for this episode had some sound equipment issues, which unfortunately affected her audio quality.

    Main course:
    • Read AI Review: This AI Reads Emotions During Video Calls
    • Marketing video for Read AI
    • Zoom rebrands existing and introduces new generative AI features
    • Marketing video for Zoom Revenue Accelerator
    • Speech analysis startup releases AI tool that simulates difficult job interview conversation

    Fresh AI Hell:
    • Amazon Echo will send all recordings to Amazon beginning March 28
    • Trump's NIST no longer concerned with “safety” or “fairness”
    • Reporter Kevin Roose is feeling the bullshit
    • UW's eScience institute pushing “AI” for information access
    • OpenAI whines about data being too expensive, with a side of Sinophobia

    The Anti-Bookclub Tackles 'Superagency', 2025.03.03

    Released Mar 19, 2025 · 60:00 · Transcription available


    Emily and Alex read a terrible book so you don't have to! Come for a quick overview of LinkedIn co-founder and venture capitalist Reid Hoffman's opus of magical thinking, 'Superagency: What Could Possibly Go Right with Our AI Future' -- stay for the ridicule as praxis. Plus, why even this tortuous read offers a bit of comfort about the desperate state of the AI boosters.

    References:
    • The cursèd book itself
    • AI and the Everything in the Whole Wide World Benchmark
    • Militants and Citizens: The Politics of Participatory Democracy in Porto Alegre

    Fresh AI Hell:
    • Parents rationalizing exposing kids to AI
    • Underage, sexualized celebrity bots
    • Bossware a bad look, actually
    • CalState faculty union opposes AI initiative
    • The kids are alright

    The War on Knowledge (with Raina Bloom), 2025.02.24

    Released Mar 5, 2025 · 60:23 · Transcription available


    In the weeks since January 20, the US information ecosystem has been unraveling fast. (We're looking at you, Denali, Gulf of Mexico, and every holiday celebrating people of color and queer people that used to be on Google Calendar.) As the country's unelected South African tech billionaire continues to run previously secure government data through highly questionable LLMs, academic librarian Raina Bloom joins Emily and Alex for a talk about how we organize knowledge, and what happens when generative AI degrades or poisons the systems that keep us all accurately -- and contextually -- informed.

    Raina Bloom is the Reference Services Coordinator for University of Wisconsin-Madison Libraries.

    References:
    • OpenAI tries to 'uncensor' ChatGPT
    • Elon Musk's DOGE is feeding sensitive federal data into AI to target cuts
    • Guardian Media Group announces strategic partnership with OpenAI
    • Elon Musk's AI-fuelled war on human agency (archive version)
    • (Post now deleted) A DOGE intern asks Reddit for help with file conversion
    • When is it safe to use ChatGPT in higher education? Raina recommends the table on page 6 of UNESCO's QuickStart guide.

    Fresh AI Hell:
    • Irish educational body, while acknowledging genAI's problems, still gives LLMs too much credit
    • From haircuts to dress design, AI slop is creating unrealistic expectations that hurt small businesses
    • Attorneys still falling for "AI" search
    • The latest in uncanny valley body horror robotics
    • Google claims to have developed AI "co-scientist"
    • Is AI 'reasoning' or 'pretending'? It's a false ch

    Petro-Masculinity Versus the Planet (with Tamara Kneese), 2025.01.27

    Released Feb 19, 2025 · 46:34 · Transcription available


    Sam Altman thinks fusion -- particularly a company he's personally invested in -- can provide the energy we "need" to develop AGI. Meanwhile, what if we just...put data centers on the Moon to save energy? Alex, Emily, and guest Tamara Kneese pour cold water on Silicon Valley's various unhinged, technosolutionist ideas about energy and the environment.

    Dr. Tamara Kneese is director of climate, technology and justice at the Data & Society Research Institute.

    Due to some technical issues during our recording, this week's episode is a bit shorter than usual.

    References:
    • A data center … on the moon??
    • Sam Altman is banking on fusion
    • Greenland is the new Mars
    • “Regenerative finance” in the crypto era
    • Fears of subprime carbon assets stall crypto mission to save rainforest
    • Corporate carbon offset company accidentally starts devastating wildfire
    • The AI/crypto crossover
    • AI/crypto crossover no one asked for
    • Blockchains wanted to build a smart city. The state could not sign off on its water rights
    • On petro-masculinity
    • Predatory delay and other myths of sustainable AI
    • Book: Digital Energetics, on Bitcoin/AI computing as a larger energy problem

    Fresh AI Hell:
    • Fake books about indigenous languages
    • Surveillance company harasses own employees with cameras
    • Schools SWATing kids based on AI outputs

    The UK's Misplaced Enthusiasm (with Gina Neff), 2025.01.20

    Released Feb 5, 2025 · 62:12 · Transcription available


    In January, the United Kingdom's new Labour Party prime minister, Keir Starmer, announced a new initiative to go all in on AI in the hopes of big economic returns, with a promise to “mainline” it into the country's veins: everything from offering public data to private companies, to potentially fast-tracking miniature nuclear power plants to supply energy to data centers. UK-based researcher Gina Neff helps explain why this flashy policy proposal is mostly a blank check for big tech, and has little to offer either the economy or working people.

    Gina Neff is executive director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, and a professor of responsible AI at Queen Mary University of London.

    References:
    • The AI Opportunities Action Plan
    • ‘Mainlined into UK's veins': Labour announces huge public rollout of AI
    • Gina Neff: Can democracy survive AI?
    • Labour's AI Action Plan - a gift to the far right

    Fresh AI Hell:
    • "AI" tool for predicting how Parliament will react to policy proposals
    • "AI" detects age based on hand movements
    • Apple Intelligence misleading summaries of news
    • Book simplification as a service
    • CEO doesn't understand why kid turned AI features of toy off

    Sam Altman's Fever Dream, 2025.01.13

    Released Jan 22, 2025 · 62:19 · Transcription available


    Not only is OpenAI's new o3 model allegedly breaking records for how close an LLM can get to the mythical "human-like thinking" of AGI, but Sam Altman has some, uh, reflections for us as he marks two years since the official launch of ChatGPT. Emily and Alex kick off the new year unraveling these truly fantastical stories.

    References:
    • OpenAI o3 Breakthrough High Score on ARC-AGI-Pub
    • From the blog of Sam Altman: Reflections
    • More about the ARC Prize
    • o3's environmental impact
    • The brain is a computer is a brain

    Fresh AI Hell:
    • "Time to Edit" as a metric predicting the singularity (contributed by Warai Otoko)
    • AI 'tasting' colors
    • An AI...faucet??
    • Seattle Public Schools calls ChatGPT a "transformative technology"
    • A GitHub pull request closed because change would have been unfriendly to "AI" chat interface
    • Cohere working with Palantir
    • Elsevier rewrites papers with "AI" without telling authors, editors
    • The UK: mainlining AI straight into their veins

    Episode 47: Hell is Other People's AI Hype

    Released Dec 30, 2024 · 60:47 · Transcription available


    It's been a long year in the AI hype mines. And no matter how many claims Emily and Alex debunk, there's always a backlog of Fresh AI Hell. This week, another whirlwind attempt to clear it, with plenty of palate cleansers along the way.

    Fresh AI Hell:

    Part I: Education
    • Medical residency assignments
    • "AI generated" UCLA course
    • "Could ChatGPT get an engineering degree?"
    • AI letters of recommendation
    • Chaser: 'AI' isn't Tinkerbell and we don't have to clap

    Part II: Potpourri, as in really rotten
    • AI x parenting
    • Et tu, Firefox?
    • US military tests AI machine gun
    • "Over-indexing" genAI failings
    • AI denying social benefits
    • Chaser: AI 'granny' vs scammers

    Part III: The Endangered Information Ecosystem
    • Fake Emily quote in LLM-written article
    • Protecting Wikipedia
    • AI: the new plastic
    • Google AI on 'dressing'
    • "AI" archaeology
    • Misinfo scholar used ChatGPT
    • OpenAI erases lawsuit evidence
    • LAT "AI" bias meter
    • WaPo AI search: The Washington Post burns its own archive
    • Chaser: ShotSpotter as art

    Part IV: Surveillance, AI in science/medicine
    • Apple patents "body data"
    • Chatbots "defeat" doctors
    • Algorithm for healthcare "overuse"
    • "AI friendships"
    • "Can LLMs Generate Novel Research Ideas?"
    • Another LLM for science
    • Chaser: FTC vs Venntel

    Part V: They tell us to believe the hype
    • Thomas Friedman: AGI is coming
    • Matteo Wong on o1's 'reasoning'
    • WIRED editor: believe the hype
    • Salesforce CEO: The "unlimited age"
    • Chaser: Emily and Alex's forthcoming book! Pre-order THE AI CON: How to Fight Big Tech's Hype and Create the Future We Want

    Episode 46: AGI Funny Business (Model), with Brian Merchant, December 2 2024

    Released Dec 18, 2024 · 62:35 · Transcription available


    Once upon a time, artificial general intelligence was the only business plan OpenAI seemed to have. Tech journalist Brian Merchant joins Emily and Alex for a time warp to the beginning of the current wave of AI hype, nearly a decade ago. And it sure seemed like Elon Musk, Sam Altman, and company were luring investor dollars to their newly-formed venture solely on the hand-wavy promise that someday, LLMs themselves would figure out how to turn a profit.

    Brian Merchant is an author, journalist in residence at the AI Now Institute, and co-host of the tech news podcast System Crash.

    References:
    • Elon Musk and partners form nonprofit to stop AI from ruining the world
    • How Elon Musk and Y Combinator Plan to Stop Computers From Taking Over
    • Elon Musk's Billion-Dollar AI Plan Is About Far More Than Saving the World
    • Brian's recent report on the business model of AGI, for the AI Now Institute: AI Generated Business: The rise of AGI and the rush to find a working revenue model
    • Previously on MAIHT3K: Episode 21: The True Meaning of 'Open Source' (feat. Sarah West and Andreas Liesenfeld)

    Fresh AI Hell:
    • OpenAI explores advertising as it steps up revenue drive
    • If an AI company ran Campbell's Soup with the same practices they use to handle data
    • Humans are the new 'luxury item'
    • Itching to write a book? AI publisher Spines wants to make a deal
    • A company pitched Emily her own 'verified avatar'
    • Don't upload your medical images to chatbots
    • A look at a pilot program in Georgia that uses 'jailbots' to track inmates

    Episode 45: Billionaires, Influencers, and Ed Tech (feat. Adrienne Williams), November 18 2024

    Released Nov 26, 2024 · 60:33 · Transcription available


    From Bill Gates to Mark Zuckerberg, billionaires with no education expertise keep using their big names and big dollars to hype LLMs for classrooms. Promising ‘comprehensive AI tutors', or just ‘educator-informed' tools to address understaffed classrooms, this hype is just another round of Silicon Valley pointing to real problems -- under-supported school systems -- but then directing attention and resources to their favorite toys. Former educator and DAIR research fellow Adrienne Williams joins to explain the problems this tech-solutionist redirection fails to solve, and the new ones it creates.

    Adrienne Williams started organizing in 2018 while working as a junior high teacher for a tech-owned charter school. She expanded her organizing in 2020 after her work as an Amazon delivery driver, where many of the same issues she saw in charter schools were also in evidence. Adrienne is a Public Voices Fellow on Technology in the Public Interest with The OpEd Project in partnership with the MacArthur Foundation, as well as a Research Fellow at both DAIR and Just Tech.

    References:
    • Funding Helps Teachers Build AI Tools
    • Sal Khan's 2023 Ted Talk: AI in the classroom can transform education
    • Bill Gates: My trip to the frontier of AI education
    • Background: Cory Booker Hates Public Schools
    • Background: Cory Booker's track record on education
    • Book: Access is Capture: How Edtech Reproduces Racial Inequality
    • Book: Disruptive Fixation: School Reform and the Pitfalls of Techno-Idealism
    • Previously on MAIHT3K: Episode 26: Universities Anxiously Buy Into the Hype (feat. Chris Gilliard)
    • Previously on MAIHT3K: Episode 17: Back to School with AI Hype in Education (feat. Haley Lepp)

    Fresh AI Hell:
    • "Streamlining" teaching
    • Google, Microsoft and Perplexity are promoting scientific racism in 'AI overviews'
    • 'Whisper' medical transcription tool used in hospitals is making things up
    • X's AI bot can't tell the difference between a bad game and vandalism
    • Prompting is not a substitute for probability measurements in large language models
    • Yet another 'priestbot'
    • Self-driving wheelchairs at Seattle-Tacoma International Airport

    You can check out future livestreams at https://twitch.tv/DAIR_Institute. Subscribe to our newsletter via Buttondown.

    Follow us!
    Emily: Twitter: https://twitter.com/EmilyMBender / Mastodon: https://dair-community.social/@EmilyMBender / Bluesky: https://bsky.app/profile/emilymbender.bsky.social
    Alex: Twitter: https://twitter.com/@alexhanna / Mastodon: https://dair-community.social/@alex / Bluesky: https://bsky.app/profile/alexhanna.bsky.social

    Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.

    Episode 44: OpenAI's Ridiculous 'Reasoning'

    Released Nov 13, 2024 · 60:11 · Transcription available


    The company behind ChatGPT is back with a bombastic claim that their new o1 model is capable of so-called "complex reasoning." Ever-faithful, Alex and Emily tear it apart. Plus the flaws in a tech publication's new 'AI hype index,' and some palate-cleansing new regulation against data-scraping worker surveillance.

    References:
    • OpenAI: Learning to reason with LLMs
    • How reasoning works
    • GPQA, a 'graduate-level' Q&A benchmark system

    Fresh AI Hell:
    • MIT Technology Review's 'AI hype index'
    • CFPB Takes Action to Curb Unchecked Worker Surveillance

    Episode 43: AI Companies Gamble with Everyone's Planet (feat. Paris Marx)

    Released Oct 31, 2024 · 61:22 · Transcription available


    Technology journalist Paris Marx joins Alex and Emily for a conversation about the environmental harms of the giant data centers and other water- and energy-hungry infrastructure at the heart of LLMs and other generative tools like ChatGPT -- and why the hand-wavy assurances of CEOs that 'AI will fix global warming' are just magical thinking, ignoring a genuine climate cost and imperiling the clean energy transition in the US.

    Paris Marx is a tech journalist and host of the podcast Tech Won't Save Us. He also recently launched a 4-part series, Data Vampires (which features Alex), about the promises and pitfalls of data centers like the ones AI boosters rely on.

    References:
    • Eric Schmidt says AI more important than climate goals
    • Microsoft's sustainability report
    • Sam Altman's “The Intelligence Age” promises AI will fix the climate crisis
    • Previously on MAIHT3K: Episode 19: The Murky Climate and Environmental Impact of Large Language Models, November 6 2023

    Fresh AI Hell:
    • Rosetta to linguists: "Embrace AI or risk extinction" of endangered languages
    • A talking collar that you can use to pretend to talk with your pets
    • Google offers synthetic podcasts through NotebookLM
    • An AI 'artist' claims he's losing millions of dollars from people stealing his work
    • University hiring English professor to teach...prompt engineering

    Episode 42: Stop Trying to Make 'AI Scientist' Happen, September 30 2024

    Released Oct 10, 2024 · 59:54 · Transcription available


    Can “AI” do your science for you? Should it be your co-author? Or, as one company asks, boldly and breathlessly, “Can we automate the entire process of research itself?”

    Major scientific journals have banned the use of tools like ChatGPT in the writing of research papers. But people keep trying to make “AI Scientists” a thing. Just ask your chatbot for some research questions, or have it synthesize some human subjects to save you time on surveys.

    Alex and Emily explain why so-called “fully automated, open-ended scientific discovery” can't live up to the grandiose promises of tech companies. Plus, an update on their forthcoming book!

    References:
    • Sakana.AI keeps trying to make 'AI Scientist' happen
    • The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery
    • Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers
    • How should the advent of large language models affect the practice of science?

    Relevant research ethics policies:
    • ACL Policy on Publication Ethics
    • Committee on Publication Ethics (COPE)
    • The Vancouver Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work

    Fresh AI Hell:
    • Should journals allow LLMs as co-authors?
    • Business Insider "asks ChatGPT"
    • Otter.ai sends transcript of private after-meeting discussion to everyone
    • "Could AI End Grief?"
    • AI generated crime scene footage
    • "The first college of nursing to offer an MSN in AI"
    • FTC cracks down on "AI" claims

    Episode 41: Sweating into AI Fall, September 9 2024

    Released Sep 26, 2024 · 61:28 · Transcription available


    Did your summer feel like an unending barrage of terrible ideas for how to use “AI”? You're not alone. It's time for Emily and Alex to clear out the poison, purge some backlog, and take another journey through AI hell -- from surveillance of emotions, to continued hype in education and art.

    Fresh AI Hell:
    • Synthetic data for Hollywood test screenings
    • NaNoWriMo's AI fail
    • AI is built on exploitation
    • NaNoWriMo sponsored by an AI writing company
    • NaNoWriMo's AI writing sponsor creates bad writing
    • AI assistant rickrolls customers
    • Programming LLMs with "fiduciary duty"
    • Canva increasing prices thanks to "AI" features
    • Ad spending by AI companies
    • Clearview AI hit with largest GDPR fine yet
    • 'AI detection' in schools harms neurodivergent kids
    • CS prof admits unethical ChatGPT use
    • College recruiter chatbot can't discuss politics
    • "The AI-powered nonprofits reimagining education"
    • Teaching AI at art schools
    • Professors' 'AI twins' as teaching assistants
    • A teacherless AI classroom
    • Another 'AI scientist'
    • LLMs still biased against African American English
    • AI "enhances" photo of Black people into white-appearing
    • Eric Schmidt: Go ahead, steal data with ChatGPT
    • The environmental cost of Google's "AI Overviews"
    • Jeff Bezos' "Grand Challenge" for AI in environment
    • What I found in an AI company's e-waste
    • xAI accused of worsening smog with unauthorized gas turbines
    • Smile surveillance of workers
    • AI for "emotion recognition" of rail passengers
    • Chatbot harassment scenario reveals real victim
    • AI has hampered productivity
    • "AI" in a product description turns off consumers
    • Is tripe kosher? It depends on the religion of the cow.

    Episode 40: Elders Need Care, Not 'AI' Surveillance (feat. Clara Berridge), August 19 2024

    Play Episode Listen Later Sep 13, 2024 60:43 Transcription Available


    Dr. Clara Berridge joins Alex and Emily to talk about the many 'uses' for generative AI in elder care -- from "companionship" to "coaching" like medication reminders and other encouragements toward healthier (and, for insurers, cost-saving) behavior. But these technologies also come with questionable data practices and privacy violations. And as populations grow older on average globally, technology such as chatbots is often used to sidestep real solutions to providing meaningful care, while also playing on ageist and ableist tropes.

    Dr. Clara Berridge is an associate professor at the University of Washington's School of Social Work. Her research focuses explicitly on the policy and ethical implications of digital technology in elder care, and considers things like privacy and surveillance, power, and decision-making about technology use.

    References:
    Care.Coach's 'Avatar' chat program*
    For Older People Who Are Lonely, Is the Solution a Robot Friend?
    Care Providers' Perspectives on the Design of Assistive Persuasive Behaviors for Socially Assistive Robots
    Socio-Digital Vulnerability

    *Care.Coach's 'Fara' and 'Auger' products, also discussed in this episode, are no longer listed on their site.

    Fresh AI Hell:
    Apple Intelligence hidden prompts include the command "don't hallucinate"
    The US wants to use facial recognition to identify migrant children as they age
    Family poisoned after following fake mushroom book
    It is a beautiful evening in the neighborhood, and you are a horrible Waymo robotaxi
    Dynamic pricing + surveillance hell at the grocery store
    Chinese social media's newest trend: imitating AI-generated videos

    Episode 39: Newsrooms Pivot to Bullshit (Feat. Sam Cole)

    Play Episode Listen Later Aug 29, 2024 62:04 Transcription Available


    The Washington Post is going all in on AI -- surely this won't be a repeat of any past, disastrous newsroom pivots! 404 Media journalist Samantha Cole joins to talk journalism, LLMs, and why synthetic text is the antithesis of good reporting.

    References:
    The Washington Post Tells Staff It's Pivoting to AI: "AI everywhere in our newsroom."
    Response: Defector Media Promotes Devin The Dugong To Chief AI Officer, Unveils First AI-Generated Blog
    The Washington Post's First AI Strategy Editor Talks LLMs in the Newsroom
    Also: New Washington Post CTO comes from Uber
    The Washington Post debuts AI chatbot, will summarize climate articles
    Media companies are making a huge mistake with AI
    When ChatGPT summarizes, it does nothing of the kind

    Fresh AI Hell:
    "AI" Alan Turning
    Google advertises Gemini for writing synthetic fan letters
    Dutch judge uses ChatGPT's answers to factual questions in ruling
    Is GenAI coming to your home appliances?
    AcademicGPT (Galactica redux)
    "AI"-generated images in medical science, again (now retracted)

    Episode 38: Deflating Zoom's 'Digital Twin,' July 29 2024

    Play Episode Listen Later Aug 14, 2024 62:29 Transcription Available


    Could this meeting have been an e-mail that you didn't even have to read? Emily and Alex are tearing into the lofty ambitions of Zoom CEO Eric Yuan, who claims the future is an LLM-powered 'digital twin' that can attend meetings in your stead, make decisions for you, and even be tuned to different parameters with just the click of a button.

    References:
    The CEO of Zoom wants AI clones in meetings
    All-knowing machines are a fantasy
    A reminder of some things chatbots are not good for
    Medical science shouldn't platform automating end-of-life care
    The grimy residue of the AI bubble
    On the phenomenon of bullshit jobs: a work rant

    Fresh AI Hell:
    LA schools' ed tech chatbot misusing student data
    AI "teaching assistants" at Morehouse
    "Diet-monitoring AI tracks your each and every spoonful"
    A teacher's perspective on dealing with students who "asked ChatGPT"
    Are Swiss researchers affiliated with the Israeli military-industrial complex? Swiss institution asks ChatGPT
    Using a chatbot to negotiate lower prices

    Episode 37: Chatbots Aren't Nurses (feat. Michelle Mahon), July 22 2024

    Play Episode Listen Later Aug 2, 2024 60:26 Transcription Available


    We regret to report that companies are still trying to make generative AI that can 'transform' healthcare -- but without investing in the wellbeing of healthcare workers or other aspects of actual patient care. Registered nurse and nursing care advocate Michelle Mahon joins Emily and Alex to explain why generative AI falls far, far short of the work nurses do.

    Michelle Mahon is the Director of Nursing Practice with National Nurses United, the largest union of registered nurses in the country. Michelle has over 25 years of experience as a registered nurse in various settings. In her role with NNU, Michelle works with nurses across the United States to protect the vital role that RNs play in health care as direct caregivers and patient advocates.

    References:
    NVIDIA's AI Bot Outperforms Nurses: Here's What It Means
    Hippocratic AI's roster of 'genAI healthcare agents'
    Related: Nuance's DAX Copilot

    Fresh AI Hell:
    "AI-powered health coach" will urge you to drink water with lemon
    50% of 2024 Q2 VC investments went to "AI"
    Thanks to AI, Google no longer claiming to be carbon-neutral
    Click work "jobs" soliciting photos of babies through teens
    Screening of film "written by AI" canceled after backlash
    Putting the AI in IPA

    Episode 36: About That 'Dangerous Capabilities' Fanfiction (feat. Ali Alkhatib), June 24 2024

    Play Episode Listen Later Jul 19, 2024 62:00 Transcription Available


    When is a research paper not a research paper? When a big tech company uses a preprint server as a means to dodge peer review -- in this case, of their wild speculations on the 'dangerous capabilities' of large language models. Ali Alkhatib joins Emily to explain why a recent Google DeepMind document about the hunt for evidence that LLMs might intentionally deceive us was bad science, and yet is still influencing the public conversation about AI.

    Ali Alkhatib is a computer scientist and former director of the University of San Francisco's Center for Applied Data Ethics. His research focuses on human-computer interaction, on why our technological problems are really social, and on why we should apply social science lenses to data work, algorithmic justice, and even the errors and reality distortions inherent in AI models.

    References:
    Google DeepMind paper-like object: Evaluating Frontier Models for Dangerous Capabilities

    Fresh AI Hell:
    Hacker tool extracts all the data collected by Windows' 'Recall' AI
    In NYC, ShotSpotter calls are 87 percent false alarms
    "AI" system to make callers sound less angry to call center workers
    Anthropic's Claude Sonnet 3.5 evaluated for "graduate level reasoning"
    OpenAI's Mira Murati says "AI" will have 'PhD-level' intelligence
    OpenAI's Mira Murati also says AI will take some creative jobs -- ones that maybe shouldn't have been there to start with

    Episode 35: AI Overviews and Google's AdTech Empire (feat. Safiya Noble), June 10 2024

    Play Episode Listen Later Jul 3, 2024 61:42 Transcription Available


    You've already heard about the rock-prescribing, glue-pizza-suggesting hazards of Google's AI Overviews. But the problems with the internet's most-used search engine go way back. UCLA scholar and "Algorithms of Oppression" author Safiya Noble joins Alex and Emily in a conversation about how Google has long been breaking our information ecosystem in the name of shareholders and ad sales.

    References:
    Blog post, May 14: Generative AI in Search: Let Google do the searching for you
    Blog post, May 30: AI Overviews: About last week
    Algorithms of Oppression: How Search Engines Reinforce Racism, by Safiya Noble

    Fresh AI Hell:
    AI Catholic priest demoted after saying it's OK to baptize babies with Gatorade
    National Archives bans use of ChatGPT
    ChatGPT better than humans at "Moral Turing Test"
    Taco Bell as an "AI first" company
    AGI by 2027, in one hilarious graph

    Episode 34: Senate Dot Roadmap Dot Final Dot No Really Dot Docx, June 3 2024

    Play Episode Listen Later Jun 20, 2024 63:57 Transcription Available


    The politicians are at it again: Senate Majority Leader Chuck Schumer's series of industry-centric forums last year has birthed a "roadmap" for future legislation. Emily and Alex take a deep dive into this report, and conclude that the time spent writing it could have instead been spent...making useful laws.

    References:
    Driving US Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States
    Tech Policy Press: US Senate AI Insight Forum Tracker
    Put the Public in the Driver's Seat: Shadow Report to the US Senate AI Policy Roadmap
    Emily's opening remarks on "AI in the Workplace: New Crisis or Longstanding Challenge" virtual roundtable

    Fresh AI Hell:
    Homophobia in Spotify's chatbot
    StackOverflow in bed with OpenAI, pushing back against resistance
    See also: https://scholar.social/@dingemansemark/112411041956275543
    OpenAI making copyright claim against ChatGPT subreddit
    Introducing synthetic text for police reports
    ChatGPT-like "AI" assistant ... as a car feature?
    Scarlett Johansson vs. OpenAI

    Episode 33: Much Ado About 'AI' 'Deception', May 20 2024

    Play Episode Listen Later Jun 5, 2024 60:30 Transcription Available


    Will the LLMs somehow become so advanced that they learn to lie to us in order to achieve their own ends? It's the stuff of science fiction, and in science fiction these claims should remain. Emily and guest host Margaret Mitchell, machine learning researcher and chief ethics scientist at Hugging Face, break down why 'AI deception' is firmly a feature of human hype.

    Reference:
    Patterns: "AI deception: A survey of examples, risks, and potential solutions"

    Fresh AI Hell:
    Adobe's 'ethical' image generator is still pulling from copyrighted material
    Apple advertising hell: vivid depiction of tech crushing creativity, as if it were good
    "AI is more creative than 99% of people"
    AI-generated employee handbooks causing chaos
    Bumble CEO: Let AI 'concierge' do your dating for you.
    Some critique

    Episode 32: A Flood of AI Hell, April 29 2024

    Play Episode Listen Later May 23, 2024 57:48 Transcription Available


    AI Hell froze over this winter, and now a flood of meltwater threatens to drown Alex and Emily. Armed with raincoats and a hastily-written sea shanty*, they tour the realms, from spills of synthetic information to the special corner reserved for ShotSpotter.

    *Lyrics & video on PeerTube.

    Surveillance:
    Public kiosks slurp phone data
    Workplace surveillance
    Surveillance by bathroom mirror
    Stalking-as-a-service
    Cops tap everyone else's videos
    Facial recognition at the doctor's office

    Synthetic information spills:
    Amazon products called "I cannot fulfill that request"
    AI-generated obituaries
    X's Grok treats Twitter trends as news
    Touch the button. Touch it.
    Meta's chatbot enters private discussions
    WHO chatbot makes up medical info

    Toxic wish fulfillment:
    Fake photos of real memories

    ShotSpotter:
    ShotSpotter adds surveillance to the over-policed
    Chicago ending ShotSpotter contract
    But they're listening anyway

    Selling your data:
    Reddit sells user data
    Meta sharing user DMs with Netflix
    Scraping Discord

    AI is always people:
    Amazon Fresh
    3D art
    George Carlin impressions
    The people behind image selection

    TESCREAL corporate capture:
    Biden worried about AI because of "Mission: Impossible"
    Feds appoint AI doomer to run US AI safety institute
    Altman & friends will serve on AI safety board

    Accountability:
    FTC denies facial recognition for age estimation
    SEC goes after misleading claims
    Uber Eats courier wins payout over 'racist' facial recognition app

    Episode 31: Science Is a Human Endeavor (feat. Molly Crockett and Lisa Messeri), April 15 2024

    Play Episode Listen Later May 7, 2024 62:57 Transcription Available


    Will AI someday do all our scientific research for us? Not likely. Drs. Molly Crockett and Lisa Messeri join for a takedown of the hype of "self-driving labs" and why such misrepresentations also harm the humans who are vital to scientific research.

    Dr. Molly Crockett is an associate professor of psychology at Princeton University.

    Dr. Lisa Messeri is an associate professor of anthropology at Yale University, and author of the new book In the Land of the Unreal: Virtual and Other Realities in Los Angeles.

    References:
    AI For Scientific Discovery - A Workshop
    Nature: The Nobel Turing Challenge
    Nobel Turing Challenge Website
    Eric Schmidt: AI Will Transform Science
    Molly Crockett & Lisa Messeri in Nature: Artificial intelligence and illusions of understanding in scientific research
    404 Media: Is Google's AI actually discovering 'millions of new materials?'

    Fresh AI Hell:
    Yann LeCun realizes generative AI sucks, suggests shift to objective-driven AI
    In contrast:
    https://x.com/ylecun/status/1592619400024428544
    https://x.com/ylecun/status/1594348928853483520
    https://x.com/ylecun/status/1617910073870934019
    CBS News: Upselling "AI" mammograms
    Ars Technica: Rhyming AI clock sometimes lies about the time
    Ars Technica: Surveillance by M&M's vending machine

    Episode 30: Marc's Miserable Manifesto, April 1 2024

    Play Episode Listen Later Apr 19, 2024 60:45 Transcription Available


    Dr. Timnit Gebru guest-hosts with Alex in a deep dive into Marc Andreessen's 2023 manifesto, which argues, loftily, in favor of maximizing the use of 'AI' in all possible spheres of life.

    Timnit Gebru is the founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR). Prior to that, she was co-lead of Google's Ethical AI research team, until Google fired her in December 2020 for raising issues of discrimination in the workplace. Timnit also co-founded Black in AI, a nonprofit that works to increase the presence, inclusion, visibility and health of Black people in the field of AI, and is on the board of AddisCoder, a nonprofit dedicated to teaching algorithms and computer programming to Ethiopian high school students, free of charge.

    References:
    Marc Andreessen: "The Techno-Optimist Manifesto"
    First Monday: The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence (Timnit Gebru & Émile Torres)
    Business Insider: Explaining 'Pronatalism' in Silicon Valley

    Fresh AI Hell:
    CBS New York: NYC subway testing out weapons detection technology, Mayor Adams says
    The Markup: NYC's AI chatbot tells businesses to break the law
    Read Emily's Twitter / Mastodon thread about this chatbot.
    The Guardian: DrugGPT: New AI tool could help doctors prescribe medicine in England
    The Guardian: Wearable AI: Will it put our smartphones out of fashion?
    TheCurricula.com

    Episode 29: How LLMs Are Breaking the News (feat. Karen Hao), March 25 2024

    Play Episode Listen Later Apr 3, 2024 62:30 Transcription Available


    Award-winning AI journalist Karen Hao joins Alex and Emily to talk about why LLMs can't possibly replace the work of reporters -- and why the hype is damaging to already-struggling and necessary publications.

    References:
    Adweek: Google Is Paying Publishers to Test an Unreleased Gen AI Platform
    The Quint: AI Invents Quote From Real Person in Article by Bihar News Site: A Wake-Up Call?

    Fresh AI Hell:
    Alliance for the Future
    VentureBeat: Google researchers unveil 'VLOGGER', an AI that can bring still photos to life
    Business Insider: A car dealership added an AI chatbot to its site. Then all hell broke loose.
    More pranks on chatbots

    Episode 28: LLMs Are Not Human Subjects, March 4 2024

    Play Episode Listen Later Mar 13, 2024 60:57 Transcription Available


    Alex and Emily put on their social scientist hats and take on the churn of research papers suggesting that LLMs could be used to replace human labor in social science research -- or even human subjects -- and explain why these writings are essentially calls to fabricate data.

    References:
    PNAS: ChatGPT outperforms crowd workers for text-annotation tasks
    Beware the Hype: ChatGPT Didn't Replace Human Data Annotators
    ChatGPT Can Replace the Underpaid Workers Who Train AI, Researchers Say
    Political Analysis: Out of One, Many: Using Language Models to Simulate Human Samples
    Behavioral Research Methods: Can large language models help augment English psycholinguistic datasets?
    Information Systems Journal: Editorial: The ethics of using generative AI for qualitative data analysis

    Fresh AI Hell:
    Advertising vs. reality, synthetic Willy Wonka edition
    https://x.com/AlsikkanTV/status/1762235022851948668?s=20
    https://twitter.com/CultureCrave/status/1762739767471714379
    https://twitter.com/xriskology/status/1762891492476006491?t=bNQ1AQlju36tQYxnm8BPVQ&s=19
    A news outlet used an LLM to generate a story...and it falsely quoted Emily
    AI Invents Quote From Real Person in Article by Bihar News Site: A Wake-Up Call?
    Trump supporters target Black voters with faked AI images
    Seeking Reliable Election Information? Don't Trust AI

    Episode 27: Asimov's Laws vs. 'AI' Death-Making (w/ Annalee Newitz & Charlie Jane Anders), February 19 2024

    Play Episode Listen Later Feb 29, 2024 64:42 Transcription Available


    Science fiction authors and all-around tech thinkers Annalee Newitz and Charlie Jane Anders join this week to talk about Isaac Asimov's oft-cited and equally often misunderstood laws of robotics, as debuted in his short story collection, 'I, Robot.' Meanwhile, both global and US military institutions are declaring interest in 'ethical' frameworks for autonomous weaponry.

    Plus, in AI Hell, a ballsy scientific diagram heard 'round the world -- and a proposal for the end of books as we know it, from someone who clearly hates reading.

    Charlie Jane Anders is a science fiction author. Her recent and forthcoming books include Promises Stronger Than Darkness in the 'Unstoppable' trilogy, the graphic novel New Mutants: Lethal Legion, and the forthcoming adult novel Prodigal Mother.

    Annalee Newitz is a science journalist who also writes science fiction. Their most recent novel is The Terraformers, and in June you can look forward to their nonfiction book, Stories Are Weapons: Psychological Warfare and the American Mind.

    They both co-host the podcast 'Our Opinions Are Correct', which explores how science fiction is relevant to real life and our present society.

    Also, some fun news: Emily and Alex are writing a book! Look forward (in spring 2025) to The AI Con, a narrative takedown of the AI bubble and its megaphone-wielding boosters that exposes how tech's greedy prophets aim to reap windfall profits from the promise of replacing workers with machines.

    Watch the video of this episode on PeerTube.

    References:
    International declaration on "Responsible Military Use of Artificial Intelligence and Autonomy" provides "a normative framework addressing the use of these capabilities in the military domain."
    DARPA's 'ASIMOV' program to "objectively and quantitatively measure the ethical difficulty of future autonomy use-cases...within the context of military operational values."
    Short version
    Long version (pdf download)

    Fresh AI Hell:
    "I think we will stop publishing books, but instead publish 'thunks', which are nuggets of thought that can interact with the 'reader' in a dynamic and multimedia way."
    AI-generated illustrations in a scientific paper -- rat balls edition.
    Per Retraction Watch: the paper with illustrations of a rat with enormous "testtomcels" has been retracted

    Episode 26: Universities Anxiously Buy in to the Hype (feat. Chris Gilliard), February 5 2024

    Play Episode Listen Later Feb 15, 2024 59:52 Transcription Available


    Just Tech Fellow Dr. Chris Gilliard, aka "Hypervisible," joins Emily and Alex to talk about the wave of universities adopting AI-driven educational technologies, and the lack of protections they offer students in terms of data privacy or even emotional safety.

    References:
    Inside Higher Ed: Arizona State Joins ChatGPT in First Higher Ed Partnership
    ASU press release version: New Collaboration with OpenAI Charts the Future of AI in Higher Education
    MLive: Your Classmate Could Be an AI Student at this Michigan University
    Chris Gilliard: How Ed Tech Is Exploiting Students

    Fresh AI Hell:
    Various: "AI learns just like a kid"
    Infants' gaze teaches AI the nuances of language acquisition
    Similar from NeuroscienceNews
    Politico: Psychologist apparently happy with fake version of himself
    WSJ: Employers Are Offering a New Worker Benefit: Wellness Chatbots
    NPR: Artificial intelligence can find your location in photos, worrying privacy experts
    Palate cleanser: Goodbye to NYC's useless robocop.

    Episode 25: An LLM Says LLMs Can Do Your Job, January 22 2024

    Play Episode Listen Later Feb 1, 2024 56:29 Transcription Available


    Is ChatGPT really going to take your job? Emily and Alex unpack two hype-tastic papers that make implausible claims about the number of workforce tasks LLMs might make cheaper, faster or easier -- and explain why bad methodology may still trick companies into trying to replace human workers with mathy-math.

    Visit us on PeerTube for the video of this conversation.

    References:
    OpenAI: GPTs are GPTs
    Goldman Sachs: The Potentially Large Effects of Artificial Intelligence on Economic Growth
    FYI: Over the last 60 years, automation has totally eliminated just one US occupation.

    Fresh AI Hell:
    Microsoft adding a dedicated "AI" key to PC keyboards
    Dr. Damien P. Williams: "Yikes."
    The AI-led enshittification at Duolingo
    Shot: https://twitter.com/Rahll/status/1744234385891594380
    Chaser: https://twitter.com/Maccadaynu/status/1744342930150560056
    University of Washington Provost highlighting "AI"
    "Using ChatGPT, My AI eBook Creation Pro helps you write an entire e-book with just three clicks -- no writing or technical experience required."
    "Can you add artificial intelligence to the hydraulics?"

    Episode 24 - AI Won't Solve Structural Inequality (feat. Kerry McInerney & Eleanor Drage), January 8 2024

    Play Episode Listen Later Jan 17, 2024 60:17 Transcription Available


    New year, same Bullshit Mountain. Alex and Emily are joined by feminist technosolutionism critics Eleanor Drage and Kerry McInerney to tear down the ways AI is proposed as a solution to structural inequality, including racism, ableism, and sexism -- and why this hype can occlude the need for more meaningful changes in institutions.Dr. Eleanor Drage is a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence. Dr. Kerry McInerney is a Research Fellow at the Leverhulme Centre for the Future of Intelligence and a Research Fellow at the AI Now Institute. Together they host The Good Robot, a podcast about gender, feminism, and whether technology can be "good" in either outcomes or processes.Watch the video version of this episode on PeerTube.References:HireVue promo: How Innovative Hiring Technology Nurtures Diversity, Equity, and InclusionAlgorithm Watch: The [German Federal Asylum Agency]'s controversial dialect recognition software: new languages and an EU pilot projectWant to see how AI might be processing video of your face during a job interview? Play with React App, a tool that Eleanor helped develop to critique AI-powered video interview tools and the 'personality insights' they offer.Philosophy & Technology: Does AI Debias Recruitment? Race, Gender, and AI's “Eradication of Difference” (Drage & McInerney, 2022)Communication and Critical/Cultural Studies: Copies without an original: the performativity of biometric bordering technologies (Drage & Frabetti, 2023)Fresh AI HellInternet of Shit 2.0: a "smart" bidetFake AI “students” enrolled at Michigan UniversitySynthetic images destroy online crochet groups“AI” for teacher performance feedbackPalette cleanser: “Stochastic parrot” is the American Dialect Society's AI-related word of the year for 2023!You can check out future livestreams at https://twitch.tv/DAIR_Institute. 
    Follow us!
    Emily -- Twitter: https://twitter.com/EmilyMBender Mastodon: https://dair-community.social/@EmilyMBender Bluesky: https://bsky.app/profile/emilymbender.bsky.social
    Alex -- Twitter: https://twitter.com/@alexhanna Mastodon: https://dair-community.social/@alex Bluesky: https://bsky.app/profile/alexhanna.bsky.social
    Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.

    Episode 23: AI Hell Freezes Over, December 22 2023

    Play Episode Listen Later Jan 10, 2024 64:42 Transcription Available


    AI Hell has frozen over for a single hour. Alex and Emily visit all seven circles in a tour of the worst in bite-sized BS.

    References:
    - Pentagon moving toward letting AI weapons autonomously kill humans
    - NYC Mayor uses AI to make robocalls in languages he doesn't speak
    - University of Michigan investing in OpenAI
    - Tesla: claims of “full self-driving” are free speech
    - LLMs may not "understand" output
    - 'Maths-ticated' data
    - LLMs can't analyze an SEC filing
    - How GPT-4 can be used to create fake datasets
    - Paper thanking GPT-4 concludes LLMs are good for science
    - Will AI Improve Healthcare? Consumers Think So
    - US struggling to regulate AI in healthcare
    - Andrew Ng's low p(doom)
    - Presenting the “Off-Grid AGI Safety Facility”
    - Chess is in the training data
    - DropBox files now shared with OpenAI
    - Underline.io and ‘commercial exploitation'
    - Axel Springer, OpenAI strike "real-time news" deal
    - Adobe Stock selling AI-generated images of Israel-Hamas conflict
    - Sports Illustrated Published Articles by AI Writers
    - Cruise confirms robotaxis rely on human assistance every 4-5 miles
    - Underage workers training AI, exposed to traumatic content
    - Prisoners training AI in Finland
    - ChatGPT gives better output in response to emotional language
    - An explanation for bad AI journalism
    - UK judges now permitted to use ChatGPT in legal rulings
    - Michael Cohen's attorney apparently used generative AI in court petition
    - Brazilian city enacts ordinance secretly written by ChatGPT
    - The lawyers getting fired for using ChatGPT
    - Using sequences of life-events to predict human lives
    - Your palette-cleanser: Is my toddler a stochastic parrot?

    Episode 22: Congressional 'AI' Hearings Say More About Lawmakers (feat. Justin Hendrix), December 22 2023

    Play Episode Listen Later Jan 3, 2024 57:52 Transcription Available


    Congress spent 2023 busy with hearings to investigate the capabilities, risks and potential uses of large language models and other 'artificial intelligence' systems. Alex and Emily, plus journalist Justin Hendrix, talk about the limitations of these hearings, the alarmist fixation on so-called 'p(doom)' and overdue laws on data privacy.

    Justin Hendrix is editor of the Tech Policy Press.

    References:
    - TPP tracker for the US Senate 'AI Insight Forum' hearings
    - Balancing Knowledge and Governance: Foundations for Effective Risk Management of AI (featuring Emily)
    - Hearing charter
    - Emily's opening remarks at virtual roundtable on AI
    - Senate hearing addressing national security implications of AI
    - Video: Rep. Nancy Mace opens hearing with ChatGPT-generated statement
    - Brennan Center report on Department of Homeland Security: Overdue Scrutiny for Watch Listing and Risk Prediction
    - TPP: Senate Homeland Security Committee Considers Philosophy of AI
    - Alex & Emily's appearance on the Tech Policy Press Podcast

    Fresh AI Hell:
    - Asylum seekers vs AI-powered translation apps
    - UK officials use AI to decide on issues from benefits to marriage licenses
    - Prior guest Dr. Sarah Myers West testifying on AI concentration

    Episode 21: The True Meaning of 'Open Source' (feat. Sarah West and Andreas Liesenfeld), November 20 2023

    Play Episode Listen Later Nov 30, 2023 64:08 Transcription Available


    Researchers Sarah West and Andreas Liesenfeld join Alex and Emily to examine what software companies really mean when they say their work is 'open source,' and call for greater transparency.

    This episode was recorded on November 20, 2023.

    Dr. Sarah West is the managing director of the AI Now Institute. Her award-winning research and writing blends social science, policy, and historical methods to address the intersection of technology, labor, antitrust, and platform accountability. She's also the author of the forthcoming book, "Tracing Code."

    Dr. Andreas Liesenfeld is assistant professor in both the Centre for Language Studies and the department of language and communication at Radboud University in the Netherlands. He's a co-author on research from this summer critically examining the true “open source” nature of models like LLaMA and ChatGPT.

    References:
    - Yann LeCun testifies on 'open source' work at Meta
    - Meta launches LLaMA 2
    - Stanford Human-Centered AI's new transparency index
      - Coverage in The Atlantic
      - Eleuther critique
      - Margaret Mitchell critique
    - Opening up ChatGPT (Andreas Liesenfeld's work)
      - Webinar

    Fresh AI Hell:
    - Sam Altman out at OpenAI
    - The Verge: Meta disbands their Responsible AI team
    - Ars Technica: Lawsuit claims AI with 90 percent error rate forces elderly out of rehab, nursing homes
    - Call-out of Stability and others' use of “fair use” in AI-generated art
    - A fawning profile of OpenAI's Ilya Sutskever

    Episode 20 - Let's Do the Time Warp! (to the "Founding" of "Artificial Intelligence"), November 6 2023

    Play Episode Listen Later Nov 21, 2023 64:53 Transcription Available


    Emily and Alex time travel back to a conference of men who gathered at Dartmouth College in the summer of 1956 to examine problems relating to computation and "thinking machines," an event commonly mythologized as the founding of the field of artificial intelligence. But our crack team of AI hype detectives is on the case with a close reading of the grant proposal that started it all.

    This episode was recorded on November 6, 2023. Watch the video version on PeerTube.

    References:
    - "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence" (1955)
    - Re: methodological individualism, "The Role of General Theory in Comparative-historical Sociology," American Journal of Sociology, 1991

    Fresh AI Hell:
    - Silly made-up graph about “intelligence” of AI vs. “intelligence” of AI criticism
    - How AI is perpetuating racism and other bias against Palestinians:
      - The UN hired an AI company with "realistic virtual simulations" of Israel and Palestine
      - WhatsApp's AI sticker generator is feeding users images of Palestinian children holding guns
      - The Guardian on the same issue
      - Instagram 'Sincerely Apologizes' For Inserting 'Terrorist' Into Palestinian Bio Translations
    - Palette cleanser: An AI-powered smoothie shop shut down almost immediately after opening.
    - OpenAI chief scientist: Humans could become 'part AI' in the future
    - A Brief History of Intelligence: Why the evolution of the brain holds the key to the future of AI.
    - AI-centered 'monastic academy': “MAPLE is a community of practitioners exploring the intersection of AI and wisdom.”

    Episode 19: The Murky Climate and Environmental Impact of Large Language Models, November 6 2023

    Play Episode Listen Later Nov 8, 2023 61:21 Transcription Available


    Drs. Emma Strubell and Sasha Luccioni join Emily and Alex for an environment-focused hour of AI hype. How much carbon does a single use of ChatGPT emit? What about the water or energy consumption of manufacturing the graphics processing units that train various large language models? Why even catastrophic estimates from well-meaning researchers may not tell the full story.

    This episode was recorded on November 6, 2023.

    References:
    - "The Carbon Footprint of Machine Learning Training Will Plateau, Then Shrink"
    - "The Carbon Emissions of Writing and Illustrating Are Lower for AI than for Humans"
    - The growing energy footprint of artificial intelligence
      - New York Times coverage: "AI Could Soon Need as Much Electricity as an Entire Country"
    - "Energy and Policy Considerations for Deep Learning in NLP"
    - "The 'invisible' materiality of information technology"
    - "Counting Carbon: A Survey of Factors Influencing the Emissions of Machine Learning"
    - "AI is dangerous, but not for the reasons you think."

    Fresh AI Hell:
    - Not the software to blame for deadly Tesla autopilot crash, but the company selling the software.
    - 4chan Uses Bing to Flood the Internet With Racist Images
      - Followup from Vice: Generative AI Is a Disaster, and Companies Don't Seem to Really Care
    - Is this evidence for LLMs having an internal "world model"?
    - “Approaching a universal Turing machine”
    - Americans Are Asking AI: ‘Should I Get Back With My Ex?'

    Episode 18: Rumors of Artificial General Intelligence Have Been Greatly Exaggerated, October 23 2023

    Play Episode Listen Later Oct 31, 2023 60:02 Transcription Available


    Emily and Alex read through Google vice president Blaise Aguera y Arcas' recent proclamation that "artificial general intelligence is already here." Why this claim is a maze of hype and moving goalposts.

    References:
    - Noema Magazine: "Artificial General Intelligence Is Already Here"
    - "AI and the Everything in the Whole Wide World Benchmark"
    - "Targeting the Benchmark: On Methodology and Current Natural Language Processing Research"
    - "Recoding Gender: Women's Changing Participation in Computing"
    - "The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise"
    - "Is chess the drosophila of artificial intelligence? A social history of an algorithm"
    - "The logic of domains"
    - "Reckoning and Judgment"

    Fresh AI Hell:
    - Using AI to meet "diversity goals" in modeling
    - AI ushering in a "post-plagiarism" era in writing
    - "Wildly effective and dirt cheap AI therapy."
    - Applying AI to "improve diagnosis for patients with rare diseases."
    - Using LLMs in scientific research
    - Health insurance company Cigna using AI to deny medical claims.
    - AI for your wearable-based workout

    Episode 17: Back to School with AI Hype in Education (feat. Haley Lepp), September 22 2023

    Play Episode Listen Later Oct 4, 2023 61:54 Transcription Available


    Emily and Alex are joined by Stanford PhD student Haley Lepp to examine the increasing hype around LLMs in education spaces - whether they're pitched as ways to reduce teacher workloads, increase accessibility, or simply "democratize learning and knowing" in the Global South. Plus a double dose of devaluing educator expertise and fatalism about the 'inevitability' of LLMs in the classroom.

    Haley Lepp is a Ph.D. student in the Stanford University Graduate School of Education. She draws on critical data studies, computational social science, and qualitative methods to understand the rise of language technologies and their use for educational purposes. Haley has worked in many roles in the education technology sector, including curriculum design and NLP engineering. She holds an M.S. in Computational Linguistics from the University of Washington and a B.S. in Science, Technology, and International Affairs from Georgetown University.

    References:
    - University of Michigan debuts 'customized AI services'
    - Al Jazeera: An AI classroom revolution is coming
    - California Teachers Association: The Future of Education?
    - Politico: AI is not just for cheating
    - Extra credit: "Teaching Machines: The History of Personalized Learning" by Audrey Watters

    Fresh AI Hell:
    - AI generated travel article for Ottawa -- visit the food bank!
    - Microsoft Copilot is “usefully wrong”
      - Response from Jeff Doctor
    - “Ethical” production of “AI girlfriends”
    - Withdrawn AI-written preprint on millipedes resurfaces, causing alarm among myriapodological community
    - New York Times: How to Tell if Your A.I. Is Conscious
      - Response from VentureBeat: Today's AI is alchemy.
    - EU

    Episode 16: Med-PaLM or Facepalm? A Second Opinion On LLMs In Healthcare (feat. Roxana Daneshjou), August 28, 2023

    Play Episode Listen Later Sep 28, 2023 62:02 Transcription Available


    Alex and Emily are taking another stab at Google and other companies' aspirations to be part of the healthcare system - this time with the expertise of incoming Stanford assistant professor of dermatology and biomedical data science Roxana Daneshjou. A look at the gap between medical licensing examination questions and real life, and the inherently two-tiered system that might emerge if LLMs are brought into the diagnostic process.

    References:
    - Google blog post describing Med-PaLM
    - Nature: Large language models encode clinical knowledge
    - Politico: Microsoft teaming up with Epic Systems to integrate generative AI into electronic medical records software
    - MedRXiv: Beyond the hype: large language models propagate race-based medicine (Omiye, Daneshjou, et al)

    Fresh AI Hell:
    - Fake summaries of fake reviews: https://bsky.app/profile/hypervisible.bsky.social/post/3k4wouet3pg2u
    - School administrators asking ChatGPT which books they have to remove from school libraries, given Iowa's book ban
      - Mason City Globe Gazette: “Each of these texts was reviewed using AI software to determine if it contains a depiction of a sex act. Based on this review, there are 19 texts that will be removed from our 7-12 school library collections and stored in the Administrative Center while we await further guidance or clarity.”
    - Loquacity and Visible Emotion: ChatGPT as a Policy Advisor, written by authors at the Bank of Italy
    - AI generated school bus routes get students home at 10pm
    - Lethal AI generated mushroom-hunting books
    - How would RBG respond?

    Episode 15: The White House And Big Tech Dance The Self-Regulation Tango, August 11 2023

    Play Episode Listen Later Sep 20, 2023 64:05 Transcription Available


    Emily and Alex tackle the White House hype about the 'voluntary commitments' of companies to limit the harms of their large language models: but only some large language models, and only some, over-hyped kinds of harms.

    Plus a full portion of Fresh Hell...and a little bit of good news.

    References:
    - White House press release on voluntary commitments
    - Emily's blog post critiquing the “voluntary commitments”
    - An “AI safety” infused take on regulation
    - AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype
    - “AI” Hurts Consumers and Workers — and Isn't Intelligent

    Fresh AI Hell:
    - Future of Life Institute hijacks SEO for EU's AI Act
    - LLMs for denying health insurance claims
    - NHS using “AI” as receptionist
    - Automated robots in reception
    - Can AI language models replace human research participants?
    - A recipe chatbot taught users how to make chlorine gas
    - Using a chatbot to pretend to interview Harriet Tubman
    - Worldcoin Orbs & iris scans
    - Martin Shkreli's AI for health start up
    - Authors impersonated with fraudulent books on Amazon/Goodreads

    Good News:

    Episode 14: Henry Kissinger, Machines of War, and the Age of Military AI Hype (feat. Lucy Suchman), July 21 2023

    Play Episode Play 41 sec Highlight Listen Later Sep 13, 2023 61:15 Transcription Available


    Emily and Alex are joined by technology scholar Dr. Lucy Suchman to scrutinize a new book from Henry Kissinger and coauthors Eric Schmidt and Daniel Huttenlocher that declares a new 'Age of AI,' with abundant hype about the capacity of large language models for warmaking. Plus close scrutiny of Palantir's debut of an artificial intelligence platform for combat, and why the company is promising more than the mathy-maths can provide.

    Dr. Lucy Suchman is a professor emerita of sociology at Lancaster University in the UK. She works at the intersections of anthropology and the field of feminist science and technology studies, focused on cultural imaginaries and material practices of technology design. Her current research extends her longstanding critical engagement with the fields of artificial intelligence and human-computer interaction to the domain of contemporary militarism. She is concerned with the question of whose bodies are incorporated into military systems, how, and with what consequences for social justice and the possibility for a less violent world.

    This episode was recorded on July 21, 2023. Watch the video on PeerTube.

    References:
    - Wall Street Journal: OpEd derived from 'The Age of AI' (Kissinger, Schmidt & Huttenlocher)
    - American Prospect: Meredith Whittaker & Lucy Suchman's review of Kissinger et al's book
    - VICE: Palantir Demos AI To Fight Wars But Says It Will Be Totally Ethical About It Don't Worry About It

    Fresh AI Hell:
    - American Psychological Association: how to cite ChatGPT: https://apastyle.apa.org/blog/how-to-cite-chatgpt
    - Spam reviews & children's books: https://twitter.com/millbot/status/1671008061173952512?s=20
    - An analysis we like, comparing AI to the fossil fuel industry: https://hachyderm.io/@dalias/110528154854288688
    - AI Heaven from Dolly Parton: https://consequence.net/2023/07/dolly-parton-ai-hologram-comments/

    Episode 13: Beware The Robo-Therapist (feat. Hannah Zeavin), June 8 2023

    Play Episode Play 38 sec Highlight Listen Later Sep 7, 2023 60:53 Transcription Available


    Emily and Alex talk to UC Berkeley scholar Hannah Zeavin about the case of the National Eating Disorders Association helpline, which tried to replace human volunteers with a chatbot--and why the datafication and automation of mental health services are an injustice that will disproportionately affect the already vulnerable.

    Content note: This is a conversation that touches on mental health, people in crisis, and exploitation.

    This episode was originally recorded on June 8, 2023. Watch the video version on PeerTube.

    Hannah Zeavin is a scholar, writer, and editor whose work centers on the history of human sciences (psychoanalysis, psychology, and psychiatry), the history of technology and media, feminist science and technology studies, and media theory. Zeavin is an Assistant Professor of the History of Science in the Department of History and The Berkeley Center for New Media at UC Berkeley. She is the author of "The Distance Cure: A History of Teletherapy."

    References:
    - VICE: Eating Disorder Helpline Fires Staff, Transitions to Chatbot After Unionization
      - … and then pulls the chatbot.
    - NPR: Can an AI chatbot help people with eating disorders as well as another human?
    - Psychiatrist.com: NEDA suspends AI chatbot for giving harmful eating disorder advice
    - Politico: Suicide hotline shares data with for-profit spinoff, raising ethical questions
    - Danah Boyd: Crisis Text Line from my perspective.
    - Tech Workers Coalition: Chatbots can't care like we do.
    - Slate: Who's listening when you call a crisis hotline? Helplines and the carceral system.
    - Hannah Zeavin:

    Episode 12: It's All Hell, May 5 2023

    Play Episode Listen Later Aug 29, 2023 61:12 Transcription Available


    Take a deep breath and join Alex and Emily in AI Hell itself, as they take down a month's worth of hype in a mere 60 minutes.

    This episode aired on Friday, May 5, 2023. Watch the video of this episode on PeerTube.

    References:
    - Terrifying NEJM article on GPT-4 in medicine
      - “Healthcare professionals preferred ChatGPT 79% of the time”
      - Good thoughts from various experts in response
    - ChatGPT supposedly reading dental x-rays
    - Chatbots “need” therapists
    - CEO proposes AI therapist, removes proposal upon realizing there's regulation: https://twitter.com/BEASTMODE/status/1650013819693944833 (deleted)
    - ChatGPT is more carbon efficient than human writers
    - Asking disinformation machine for confirmation bias
    - GPT-4 glasses to tell you what to say on dates, "Charisma as a Service"
    - Context-aware fill for missing data
    - “Overemployed” with help from ChatGPT
    - Pakistani court uses GPT-4 in bail decision
    - ChatGPT in Peruvian and Mexican courts
    - Elon Musk's deepfake defense
    - Elon Musk's TruthGPT
    - Fake interview in German publication revealed as “AI” at the end of the article

    Episode 11: A GPT-4 Fanfiction Novella, April 7 2023

    Play Episode Listen Later Aug 24, 2023 63:21 Transcription Available


    After a hype-y few weeks of AI happenings, Alex and Emily shovel the BS on GPT-4's “system card,” its alleged “sparks of Artificial General Intelligence,” and a criti-hype heavy "AI pause" letter. Hint: for a good time, check the citations.

    This episode originally aired on Friday, April 7, 2023. You can also watch the video of this episode on PeerTube.

    References:
    - GPT-4 system card: https://cdn.openai.com/papers/gpt-4-system-card.pdf
    - “Sparks of AGI” hype: https://twitter.com/SebastienBubeck/status/1638704164770332674
      - And the preprint from Bubeck et al.: https://arxiv.org/abs/2303.12712
    - “Pause AI” letter: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
    - The “Sparks” paper points to this 1997 editorial in their definition of “intelligence”: https://www1.udel.edu/educ/gottfredson/reprints/1997mainstream.pdf
    - Radiolab's miniseries, 'G': https://radiolab.org/series/radiolab-presents-g
    - Baria and Cross, "The brain is a computer is a brain.": https://arxiv.org/abs/2107.14042
    - Senator Chris Murphy buys the hype: https://twitter.com/ChrisMurphyCT/status/1640186536825061376
    - Generative “AI” is making “police sketches”: https://twitter.com/Wolven/status/1624299508371804161?t=DXyucCPYPAKNn8TtAo0xeg&s=19
    - More mathy math in policing: https://www.cbsnews.com/colorado/news/aurora-police-new-ai-system-bodycam-footage/?utm_source=dlvr.it&utm_medium=twitter
    - User Research without the Users: https://twitter.com/schock/status/1643392611560878086
    - DoNotPay is here to cancel your gym membership: https://twitter.com/BrianBrackeen/status/1644193519496511488?s=20

    Episode 10: Don't Be A Lawyer, ChatGPT. March 3, 2023

    Play Episode Listen Later Aug 16, 2023 66:49


    Alex and Emily are taking AI to court! Amid big claims about LLMs, a look at the facts about ChatGPT, legal expertise, and what the bar exam actually tells you about someone's ability to practice law--with help from Harvard legal and technology scholar Kendra Albert.

    This episode was first recorded on March 3, 2023. Watch the video of this episode on PeerTube.

    References:
    - Social Science Research Network paper “written” by ChatGPT
    - Joe Wanzala, “ChatGPT is ideal for eDiscovery”
    - Legal applications for ChatGPT:
      - Shot: GPT-4 'could pass the bar exam'
      - Chaser: ChatGPT had bigger dreams. "AI for law"
    - “AI can legally run a company”
    - Wired: Generative AI Is Coming for the Lawyers
    - “This is a decision by a Colombian court in Cartagena (dated January, 30, 2023). As far as we know, it is the first time that a judicial decision has been taken by explicitly resorting to #ChatGPT @sama @OpenAI. The Court poses a series of specific questions to #ChatGPT"
    - "Don't Be A Lawyer" song from "Crazy Ex-Girlfriend"
    - Rep. Ted Lieu introduces legislation written by an LLM
    - DoNotPay offers money to anyone willing to use their AI to argue in court

    Fresh AI Hell:
    - Vanderbilt University responds to MSU shooting with e-mail written using ChatGPT
    - Science fiction magazine closes submissions due to LLM spam
    - The 1st International Workshop on Implicit Author Characterization from Texts for Search and Retrieval (IACT'23)

    Episode 9: Call the AI Quack Doctor, February 17, 2023

    Play Episode Listen Later Aug 8, 2023 62:39 Transcription Available


    Should the mathy-maths be telling doctors what might be wrong with you? And can they actually help train medical professionals to treat human patients? Alex and Emily discuss the not-so-real medical and healthcare applications of ChatGPT and other large language models.

    Plus another round of fresh AI hell, featuring "charisma as a service," and other assorted reasons to tear your hair out.

    This episode was first recorded on February 17th of 2023. Watch the video of this episode on PeerTube.

    References:
    - Glass.ai makes “diagnosis machine”: https://twitter.com/AiBreakfast/status/1620128621821317125?t=Q6tTAOcGAoFJ3Ko9m4EC9g&s=19
    - Percy Liang claims 'PubMedGPT' can pass medical exams: https://crfm.stanford.edu/2022/12/15/pubmedgpt.html and https://twitter.com/percyliang/status/1603469265583353856?s=20&t=SdWeINzUw92pbkTO8OAVqQ
      - Emily's reaction to the above: https://twitter.com/emilymbender/status/1603766381807570944?s=20&t=SdWeINzUw92pbkTO8OAVqQ
    - ChatGPT gets 60 percent of questions right in US Medical Licensing Exam: https://healthitanalytics.com/news/chatgpt-passes-us-medical-licensing-exam-without-clinician-input
    - An Apple Watch error is clogging up 911 lines: https://www.nytimes.com/2023/02/03/health/apple-watch-911-emergency-call.html
    - ChatGPT-assisted diagnosis: Is the future suddenly here? https://www.statnews.com/2023/02/13/chatgpt-assisted-diagnosis/
    - NVIDIA “eye contact” demo: https://twitter.com/Jousefm2/status/1616878021280993284
    - “Theory of the mind": https://twitter.com/LChoshen/status/1623575423652139015?t=Ohc9tzB09pAEddAReLc6mA&s=09

    Episode 8: The ChatGPT Awakens, January 20, 2023

    Play Episode Listen Later Aug 4, 2023 64:13 Transcription Available


    New year, new hype? As the world gets swept up in the fervor over ChatGPT of late 2022, Emily and Alex give a deep sigh and begin to unpack the wave of fresh enthusiasm over large language models and the "chat" format specifically.

    Plus, more fresh AI hell.

    This episode was recorded on January 20, 2023. Watch the video of this episode on PeerTube.

    References:
    - Situating Search (Shah & Bender 2022)
      - Related op-ed: https://iai.tv/articles/all-knowing-machines-are-a-fantasy-auid-2334
    - Piantadosi's thread showing ChatGPT writing a program to classify white males as good scientists
    - Find Anna Lauren Hoffman's publications (though not yet the one we were referring to) here: https://www.annaeveryday.com/publications
    - Sarah T. Roberts, Behind the Screen
    - Karen Hao's AI Colonialism series
    - Milagros Miceli: https://www.weizenbaum-institut.de/en/spezialseiten/persons-details/p/milagros-miceli/
    - Julian Posada: https://posada.website/
    - “This Isn't Your Data, Friend”: Black Twitter as a Case Study on Research Ethics for Public Data (Klassen & Fiesler 2022)
    - No Humans Here: Ethical Speculation on Public Data, Unintended Consequences, and the Limits of Institutional Review (Pater, Fiesler & Zimmer 2022)
    - Casey Fiesler's publications: https://caseyfiesler.com/publications/
      - And TikTok: https://www.tiktok.com/@professorcasey
    - Where are human subjects in Big Data research? The emerging ethics divide. (Metcalf & Crawford 2016)

    Episode 7: There Are Now 15 Competing Evaluation Metrics (ft. Dr. Jeremy Kahn). December 12, 2022

    Play Episode Listen Later Jul 26, 2023 63:14 Transcription Available


    Emily and Alex are joined by Dr. Jeremy G. Kahn to discuss the distressingly large number of evaluation metrics for artificial intelligence, and some new AI hell.

    Jeremy G. Kahn has a PhD in computational linguistics, with a focus on information-theoretic and empirical engineering approaches to dealing with natural language (in text and speech). He's gregarious, polyglot, a semi-auto-didact, and occasionally prolix. He also likes comic books, coffee, progressive politics, information theory, lateral thinking, science fiction, science fact, linear thinking, bicycles, beer, meditation, love, play, and inquiry. He lives in Seattle with his wife Dorothy and son Elliott.

    This episode was recorded on December 12, 2022. Watch the video of this episode on PeerTube.

    References:
    - XKCD: Standards
    - WikidataCon
    - Gish Gallop
    - The Bender Rule
    - DJ Khaled - You Played Yourself
    - Jeff Kao's interrogation of public comment periods
    - Emily's blog post response to NYT piece
