Podcast appearances and mentions of Emily M. Bender

  • 56 podcasts
  • 73 episodes
  • 1h 10m average duration
  • 5 new episodes weekly
  • Latest: Jun 3, 2025

POPULARITY

(chart covering 2017–2024)


Latest podcast episodes about Emily M. Bender

Techtonic with Mark Hurst | WFMU
Emily M. Bender and Alex Hanna, authors, "The AI Con" from Jun 2, 2025

Jun 3, 2025


Tomaš Dvořák - "Gameboy Tune"
"Mark's intro"
"Interview with Emily M. Bender and Alex Hanna" [0:02:55]
"Mark's comments" [0:44:42]
XTC - "Real By Reel" [0:54:37]
https://www.wfmu.org/playlists/shows/152572

This Is Hell!
Recognizing AI Hype and How People Can Fight Back/Dr. Emily Bender and Dr. Alex Hanna

Jun 2, 2025 (91:05)


Dr. Emily M. Bender and Dr. Alex Hanna come on the show to talk about their new book "The AI Con: How to Fight Big Tech's Hype and Create the Future We Want" (Harper, 2025). The book tackles the pitfalls of AI and why it's so crucial to understand the capitalist greed that is manipulating AI behind the scenes. https://thecon.ai/

A new installment of "This Week In Rotten History" from Renaldo Migaldi follows the interview.

Help keep This Is Hell! completely listener supported and access bonus episodes by subscribing to our Patreon: www.patreon.com/thisishell

The Sunday Show
Taking on the AI Con

Jun 1, 2025 (36:41)


Emily M. Bender and Alex Hanna are the authors of a new book that The Guardian calls "refreshingly sarcastic" and Business Insider calls a "funny and irreverent deconstruction of AI." They are also occasional contributors to Tech Policy Press. Justin Hendrix spoke to them about their new book, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, just out from HarperCollins.

Voices of VR Podcast – Designing for Virtual Reality
#1563: Deconstructing AI Hype with “The AI Con” Authors Emily M. Bender and Alex Hanna

May 23, 2025 (73:18)


We are in the middle of a hype cycle peak around AI, with a lot of hyperbolic claims being made about the capabilities and performance of large language models (LLMs). Computational linguist Emily M. Bender and sociologist Alex Hanna have been writing academic papers about the limitations of LLMs and some of the more pernicious aspects of benchmark culture in machine learning, and documenting many of the environmental, labor, and human rights harms from both the creation and deployment of these LLMs. Their book The AI Con: How to Fight Big Tech's Hype and Create the Future We Want comprehensively deconstructs many of the false promises of AI, the playbook for AI hype, and the underlying dynamics of how AI is an automation technology designed to consolidate power. The book unpacks many vital parts of the Science and Technology Studies narrative around AI, including:

  • How big technology companies have been using AI as a marketing term to describe disparate technologies that have many limitations
  • How we anthropomorphize AI tech from our concepts of intelligence
  • How AI-boosting companies devalue what it means to be human in order to promote AI technology
  • How AI boosters and AI doomers are two sides of the same coin, both assuming that AI is all-powerful and completely inevitable
  • How many of the harms and costs associated with the technology are often out of sight and out of mind

The book takes a critical look at these so-called AI technologies, deconstructs the language we use to talk about these automating technologies, breaks down Big Tech's hype playbook, and restores the relational quality of human intelligence that is often collapsed by AI. It also provides some really helpful questions to ask in order to interrogate the hyperbolic claims we're hearing from AI boosters. We talk about all of this and more on today's episode, and I have a feeling that this is an episode I'll be referring back to often.

This is also the 100th Voices of VR podcast episode exploring the intersection of AI and XR, and I expect to continue to cover how folks in the XR industry are using AI. Being in right relationship to every aspect of the economic, ethical and moral, social, labor, legal, and property rights dimensions of AI technologies is still an aspirational position. It's not impossible, but it is also not easy. This conversation helps to frame a lot of the deeper questions that I will continue to have about AI, and Bender and Hanna provide a lot of clues to the red flags of AI hype, as well as some of the core questions that help to orient around these deeper ethical issues.

I've also been editing unpublished and vaulted episodes of Voices of AI that I recorded with AI researchers at the International Joint Conference on Artificial Intelligence back in 2016 and 2018 (as well as at a couple of other conferences), and I'm hoping to relaunch Voices of AI later this summer to look back at what researchers were saying about AI 7-9 years ago, giving some important historical context that's often collapsed within the current days of AI hype (spoiler alert: this is not the first nor the last hype cycle that AI will have).

I'll also be engaging in a Socratic-style debate, mostly arguing critically against AI, on the last day of AWE (Thursday, June 12th, 2:45p), after the Expo has closed down and before the final session. So come check out a live debate with a couple of AI boosters and an AI doomer. Also look for an interview that I just recorded with process philosopher Matt Segall, diving more into a process-relational philosophy perspective on AI, intelligence, and consciousness, coming here soon. Segall and I explore an elemental approach to intelligence, based on concepts from my elemental theory of presence talk.

Tech Won't Save Us
Generative AI is Not Inevitable w/ Emily M. Bender and Alex Hanna

May 22, 2025 (53:13)


Paris Marx is joined by Emily M. Bender and Alex Hanna to discuss the harms of generative AI, how the industry keeps the public invested while companies flounder under the weight of unmet promises, and what people can do to push back.

Emily M. Bender is a Professor in the Department of Linguistics at the University of Washington. Alex Hanna is Director of Research at the Distributed AI Research Institute (DAIR). They are the authors of The AI Con: How to Fight Big Tech's Hype and Create the Future We Want.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is made in partnership with The Nation. Production is by Kyla Hewson.

Also mentioned in this episode: New York Magazine reported on the consequences of increasingly widespread use of ChatGPT in education.

Start Making Sense
Generative AI is Not Inevitable w/ Emily M. Bender and Alex Hanna | Tech Won't Save Us

May 22, 2025 (53:13)


On this episode of Tech Won't Save Us, Paris Marx is joined by Emily M. Bender and Alex Hanna to discuss some of the harms caused by generative AI, the industry's ploys to keep the public invested while companies flounder under the weight of unmet promises, and what folks can do to push back.

Emily M. Bender is a Professor in the Department of Linguistics at the University of Washington. Alex Hanna is Director of Research at the Distributed AI Research Institute (DAIR). They are the authors of The AI Con: How to Fight Big Tech's Hype and Create the Future We Want.

Fiction Science
Get a reality check on AI hype

May 19, 2025 (40:42)


Emily M. Bender and Alex Hanna, authors of "The AI Con," say the benefits of AI are being played up while the costs are being played down — and they lay out strategies for fighting the hype.

This Week in Google (MP3)
IM 819: Put The Fries in the Bag - Chaos at the Copyright Office

May 15, 2025 (160:54)


  • The Copyright Office Issues A Largely Disappointing Report On AI Training, And Once Again A Major Fair Use Analysis Inexplicably Ignores The First Amendment
  • Trump Appointees Blocked From Entering US Copyright Office
  • Meta's new AI glasses could have a 'super-sensing' mode with facial recognition
  • Three things we learned about Sam Altman by scoping his kitchen
  • The House GOP Quietly Slipped In An AI Law That Would Accidentally Ban GOP's Favorite 'Save The Children' Laws
  • Neat Gemini airline hack
  • Interview with Emily M. Bender and Alex Hanna
  • AI Use Damages Professional Reputation, Study Suggests
  • Gemini smarts are coming to more Android devices
  • Amazon Upfront 2025: Prime Video will show you AI pause ads (Fast Company)
  • E-COM: The $40 million USPS project to send email on paper
  • The CryptoPunks NFTs are being sold to a non-profit as their value continues to fall
  • Crypto boys are the worst...
  • Parisbait: I've watched every single Nicolas Cage film made so far. Here's what I learned about him – and myself
  • Exclusive: InventWood is about to mass-produce wood that's stronger than steel
  • Uncle Tony's Reptile Shack
  • neal.fun
  • Testing Paris' language proficiency and youth
  • The uncontroversial 'thingness' of AI
  • Artifice and Intelligence
  • The Anti-Bookclub Tackles 'Superagency'
  • Information literacy and chatbots as search

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guests: Emily M. Bender and Alex Hanna

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors: storyblok.com/twittv-25, outsystems.com/twit, bigid.com/im, canary.tools/twit (use code: TWIT)

Big Technology Podcast
AI's Drawbacks: Environmental Damage, Bad Benchmarks, Outsourcing Thinking — With Emily M. Bender and Alex Hanna

May 14, 2025 (61:55)


Emily Bender is a computational linguistics professor at the University of Washington. Alex Hanna is the Director of Research at the Distributed AI Research Institute. Bender and Hanna join Big Technology to discuss their new book, The AI Con, which they describe as an account of the layered ways today's language-model boom obscures environmental costs, labor harms, and shaky science. Tune in to hear a lively back-and-forth on whether chatbots are useful tools or polished parlor tricks. We also cover benchmark gaming, data-center water use, doomerism, and more. Hit play for a candid debate that will leave you smarter about where generative AI really stands, and what comes next.

---

Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack? Here's 25% off for the first year, which includes membership to our subscriber Discord: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Talk the Talk - a podcast about linguistics, the science of language.
118: The A.I. Con (with Emily M. Bender and Alex Hanna)

May 12, 2025 (51:26)


Artificial intelligence (so-called) is typified by its boom and bust cycles, and we're in a boom now. But as more and more money pours in with decreasing returns, we're going to see a shakeout, and hype is rushing in to stoke the enthusiasm. In other words, the con is on. Dr Emily M. Bender and Dr Alex Hanna are co-hosts of the podcast Mystery AI Hype Theater 3000, and the authors of The AI Con: How to Fight Big Tech's Hype and Create the Future We Want. They join us for this episode.

Keen On Democracy
Episode 2531: Emily Bender and Alex Hanna on the AI Con

May 12, 2025 (43:12)


Is AI a big scam? In their co-authored new book, The AI Con, Emily Bender and Alex Hanna take aim at what they call big tech "hype". They argue that large language models from OpenAI or Anthropic are merely what Bender dubs "stochastic parrots," producing text without the human understanding or the revolutionary technology that these companies claim. Bender, a professor of linguistics, and Hanna, a former AI researcher at Google, challenge the notion that AI will replace human workers, suggesting instead that these algorithms produce "mid" or "janky" content lacking human insight. They accuse tech companies of hyping fear of missing out (FOMO) to drive adoption. Instead of centralized AI controlled by corporations, they advocate for community-controlled technology that empowers users rather than exploiting them.

Five Takeaways (with a little help from Claude):
  • Large language models are "stochastic parrots" that produce text based on probability distributions from training data, without actual understanding or communicative intent.
  • The AI "revolution" is primarily driven by marketing and hype rather than groundbreaking technological innovations, creating fear of missing out (FOMO) to drive adoption.
  • AI companies are positioning their products as "general purpose technologies" like electricity, but LLMs lack the reliability and functionality to justify this comparison.
  • Corporate AI is designed to replace human labor and centralize power, which the authors see as an inherently political project with concerning implications.
  • Bender and Hanna advocate for community-controlled technology development where people have agency over the tools they use, citing examples like Te Hiku Media's language technology for Māori communities.

Dr. Emily M. Bender is a Professor of Linguistics at the University of Washington, where she is also the Faculty Director of the Computational Linguistics Master of Science program and affiliate faculty in the School of Computer Science and Engineering and the Information School. In 2023, she was included in the inaugural Time 100 list of the most influential people in AI. She is frequently consulted by policymakers, from municipal officials to the federal government to the United Nations, for insight into how to understand so-called AI technologies.

Dr. Alex Hanna is Director of Research at the Distributed AI Research Institute (DAIR). A sociologist by training, her work centers on the data used in new computational technologies and the ways in which these data exacerbate racial, gender, and class inequality. She also works in the area of social movements, focusing on the dynamics of anti-racist campus protest in the US and Canada. She holds a BS in Computer Science and Mathematics and a BA in Sociology from Purdue University, and an MS and a PhD in Sociology from the University of Wisconsin-Madison. Dr. Hanna is the co-author of The AI Con (Harper, 2025), a book about AI and the hype around it. With Emily M. Bender, she also runs the Mystery AI Hype Theater 3000 series, playfully and wickedly tearing apart AI hype for a live audience online on Twitch and her podcast. She has published widely in top-tier venues across the social sciences, including the journals Mobilization, American Behavioral Scientist, and Big Data & Society, and top-tier computer science conferences such as CSCW, FAccT, and NeurIPS. Dr. Hanna serves as a Senior Fellow at the Center for Applied Transgender Studies and sits on the advisory board for the Human Rights Data Analysis Group. She is also a recipient of the Wisconsin Alumni Association's Forward Award, has been included on Fast Company's Queer 50 list (2021, 2024) and Business Insider's AI Power List, and has been featured in the Cal Academy of Sciences New Science exhibit, which highlights queer and trans scientists of color.

Named one of the "100 most connected men" by GQ magazine, Andrew Keen is among the world's best known broadcasters and commentators. In addition to presenting the daily KEEN ON show, he is the host of the long-running How To Fix Democracy interview series. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children.

Keen On America is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe

Design of AI: The AI podcast for product teams
AI Promises us More Time. What Should we do With it?

Apr 22, 2025 (55:07)


When reports like Adecco's Global Workforce of the Future survey find that the average saving for workers using AI is 1 hour a day, we should question this:
  • What did those workers do with their time savings?
  • Should that time savings benefit the employer or the employee?
  • Can we trust such a hard-to-measure stat?

Our latest episode tackles this and other disruptions happening to the creative and production processes. Matthew Krissel is the Co-Founder of the Built Environment Futures Council and a Principal at Perkins&Will. For over two decades, he has led transformative architectural projects across North America and internationally. We discussed how AI is disrupting architecture, and lessons for digital product teams. He made powerful points throughout our conversation about questioning the role of time and permanence in a world where we want more, faster.

Other points covered in the conversation:
  • Commoditizing design makes production easier, enabling societies to tackle challenges like housing shortfalls
  • Commoditizing design devalues other vital processes, like community engagement, respectful place-making, and longevity of projects
  • Over-indexing AI's potential as a workflow optimizer, while under-indexing its potential to reimagine how complex projects are planned and operationalized

Listen on Spotify | Listen on Apple Podcasts

In this newsletter, I'd like to tackle the concept of time saving and what it means from the perspective of crafting an AI strategy. Here was the most important quote from the episode:

"So just because something took half the time it did before, what happened is we just did more. So we just filled the time. Is there something higher and better use? I suspect that somewhere along the line the designs got better. Also I suspect that somewhere along there was diminishing returns. We were just doing more because we could, not that it was actually yielding anything better. Are you gonna focus on fewer, but better, increase your quality? Are you going to spend more time on business development or some entrepreneurial side hustle? Just go home early? What you decide to do as we start to gain productivity time is going to shape a lot of where this is all happening."

Newsletter recommendation: Scott Belsky
Essential insights and lessons from Scott Belsky that anyone building with AI must read. His newsletter is fantastic and a must-subscribe because of his unique cross-section of expertise across creativity, product, and innovation. His books have also always been pivotal reads to advance your craft. Hopefully, we can do some of the same with our Design of AI podcast and newsletter.

Who should benefit most from your ability to learn AI: you or your employer?
The challenge to creatives and builders is to decide who should benefit from these transformative technologies if you're self-taught:
  • Should you gift your employer the benefits if you've taught yourself ways of getting 25% more work accomplished in a day?
  • Should you gift yourself the benefits of your increased productivity and work on side projects, or spend more time with your family?

Historically speaking, employers were responsible for the means and training of production. They paid for novel technologies (desktops, SaaS, big data) and were responsible for training you on how to use them. AI is different because employers often lag behind employees in embracing the technology and educating people on how to use it effectively. It is very easy to argue that the 200 hours you've spent learning AI outside of work hours should exclusively benefit you.

AI Time Savings: Benefits & Risks
Technologies have consistently saved us time, but the resulting effects have been questionable. The internet and mobile phones connected the world, while also leading to worse health outcomes due to more time sitting. We also spend more time alone than ever. Further back, the Industrial Revolution raised the quality of life for everyone; still, the commoditization of work led to industrialists exploiting child labour and putting everyone into deplorable working conditions that polluted communities. The time the workforce saved most benefited employers, with employees giving up their ways of life in favour of steady incomes. Most relocated to cities, got cut off from their families, and learned the pain of commuting for the first time.

When it comes to AI, the benefits we hope for centre on automation and augmentation. The hope is that we will benefit from less shitty work (automated away) and that our new capabilities (augmented by AI) will enable us all to become wealthy entrepreneurs. Sure, this may be true for the top 0.01% of AI users who learn how to run a typically 10-person business by themselves. For the rest of us, our work may in fact get a lot shittier. At least that's what the authors of the upcoming book The AI Con believe. The authors (and upcoming Design of AI guests), Alex Hanna and Emily M. Bender, tell a tale of how AI's risks have been swept under the rug. In their book, they document many examples of the technology performing so poorly at tasks that products were shut down within weeks.

Maybe the future of businesses will look a lot like Amazon: a business offering endless products of questionable quality and provenance with no humans in sight, except those working the worst possible jobs sorting information, like something out of Severance. In this scenario, the majority of humans will be employed as mall cops of the technology, swooping in when a problem slips between the programming and the policies. At this point, AI hypers would argue that even if the enshittification of work is inevitable, AI will open up new and better types of jobs. Only time will tell.

How does AI change our relationship with time?
When buying productivity-boosting hardware and software, the expectation has always been that the results are undeniable. Going from handwriting to a typewriter was immensely faster. The same is true when buying a new SaaS platform that makes managing projects infinitely easier. Now, with GenAI-powered products, the ROI is unpredictable. The vast majority of capabilities deliver the illusion of rapid progress. Think of image and video generation: the immediate results are shockingly impressive, but getting results to be production-ready requires mastery of probabilistic software and/or resetting your expectations. It all means that the operator (you) ultimately plays a bigger role in the ROI of using this technology than with previous ones.

So-called vibe coding is a major testament to the time savings that AI can create. Anyone can now build a website and app without writing a line of code. Vibe-coding platforms (like Cursor, Lovable, Replit, and many more) are fantastically easy to use... until they're absolutely painful to use. The stunning early rewards turn into confusingly broken components all over. Again, results depend on the operator's ability to debug using an entirely new interface paradigm (conversational). This continues the technology's remarkable inversion of the value paradigm, where workers define the quality of outputs.

Looking ahead, mastery of data will triumph over mastery of interfaces. This favours employers who unlock the power of their first-party data and build solutions that augment and automate the expertise of their employees. Always worth reading, strategist and tech critic Tom Goodwin posted an intriguing analysis on LinkedIn this week. At the core of his guiding philosophy regarding AI assistance is that the more complex the task, the less qualified AI is to work on it unassisted.

Check our previous podcast episode and newsletter for more details on how to unlock the power of your data. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit designofai.substack.com

Grumpy Old Geeks
693: Let Them Eat Space

Apr 18, 2025 (83:59)


This week, we blast off with a tale as old as grift: Fyre Fest 2 has been postponed—again—proving that you really can fail upward if you squint hard enough and wear enough white linen. Over at Automattic, employees discovered secret watermarks in their internal comms, because what workplace isn't better with a sprinkle of corporate surveillance cosplay? Meanwhile, Katy Perry took a joyride to the upper atmosphere with Gayle King and Bezos' better half, giving us the 2025 edition of the cringiest "Imagine"-style celebrity moment yet. Spoiler: no one needed this.

In Elon World™, things are somehow even weirder. Seth Rogen dropped some truth bombs about Silicon Valley's MAGA leanings, only to have them surgically removed from the Breakthrough Prize stream. Musk, for his part, is managing his growing empire of baby mamas like a Bond villain with a baby registry. Add in a cringe-filled offer to a YouTuber to become Space Karen's next broodmare, and we've officially entered peak simulation. Meanwhile, whistleblowers are spilling DOGE secrets, OpenAI is building a social network (because we clearly don't have enough doomscrolling options), and 4chan has finally been hacked into oblivion. Pour one out—for the internet's dumpster fire.

Also in the news: Google lost a big ad tech monopoly case (cue tiny violins), China is no longer buying the "autonomous" car hype after a fatal crash, and Trump's FCC chair is threatening Comcast for not being enough like Fox News (as if that's the journalistic gold standard). The Pentagon's nerd squad resigned after butting heads with DOGE, Reality Labs burned $45 billion like it was going out of style, and AI customer service bots are now inventing policies out of thin air. Oh, and if your AI thinks your Python package has a delivery issue—you're not crazy, it probably hallucinated it. Welcome to the future.

Sponsors:
  • Private Internet Access - Go to GOG.Show/vpn and sign up today. For a limited time only, you can get OUR favorite VPN for as little as $2.03 a month.
  • SetApp - With a single monthly subscription you get 240+ apps for your Mac. Go to SetApp and get started today!!!
  • 1Password - Get a great deal on the only password manager recommended by Grumpy Old Geeks! gog.show/1password

Show notes at https://gog.show/693

FOLLOW UP
  • Fyre Fest 2 Postponed: "New Date Will Be Announced"
  • Following Layoffs, Automattic Employees Discover Leak-Catching Watermarks

IN THE NEWS
  • Unfortunately for Katy Perry, That "Space Flight" Turned Out Exactly How We All Knew It Would
  • We Finally Have 2025's "Imagine" Video
  • Let them eat space
  • Seth Rogen's Criticism of Silicon Valley's Support for Trump Was Cut From the "Full" Stream of Breakthrough Prize
  • The Tactics Elon Musk Uses to Manage His 'Legion' of Babies—and Their Mothers
  • Glamorous influencer Tiffany Fong breaks silence on Elon Musk's 'offer to impregnate her' with shocking statement
  • A whistleblower's disclosure details how DOGE may have taken sensitive labor data
  • Electronics exempted from reciprocal tariffs will soon be subject to new semiconductor tariffs instead
  • Google loses ad tech monopoly case
  • China cracks down on 'autonomous' car claims after fatal accident
  • Trump's FCC chair threatens Comcast, demands changes to NBC news coverage
  • OpenAI is building a social network
  • 4chan Likely Gone Forever After Hackers Take Control
  • Company apologizes after AI support agent invents policy that causes user uproar
  • Pentagon tech unit resigns after clash with Musk's DOGE
  • What Does a Corrupt Election Look Like?
  • Tesla puts finishing touches on Hollywood charge-n-diner
  • Inside the $45 billion cash burn at Reality Labs
  • We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs
  • The business of the AI labs by Max Bolingbroke

MEDIA CANDY
  • Killing an Arab on Pandora
  • Apple's 'Mythic Quest' is ending with an updated Season 4 finale
  • Side Quest
  • Night of the Zoopocalypse
  • Black Mirror
  • Daredevil
  • The Last of Us
  • G20
  • 28 Years Later Rises From the Grave With a New Trailer
  • 'Real Time' host Bill Maher says President Trump was "gracious" and "not fake" during his White House visit.
  • Bringing Down a Dictator
  • Blueprint for Revolution: How to Use Rice Pudding, Lego Men, and Other Nonviolent Techniques to Galvanize Communities, Overthrow Dictators, or Simply Change the World by Srdja Popovic

APPS & DOODADS
  • Apple is reportedly working on two new versions of the Vision Pro
  • Ilya Bezdelev

Deep State Radio
Siliconsciousness: The AI Competition: Public Policy Strategies: Part 2

Apr 4, 2025 (36:33)


Welcome to part 2 of our special event, "The AI Competition: Public Policy Strategies". The event, co-hosted by MIT Technology Review, brings together some of the leading voices in AI policy from the public and private sectors to role-play these complex issues. These AI leaders play roles in the US, China, and the EU, and enact policies that best align with their roles' interests in the AI space. This episode contains the second and final phase of the game. We hope you enjoy this insightful episode.

Our Players:

US Government
  • White House (NSA, AI & Crypto Czar, Assistant to Pres. for S&T) - Doug Calidas, Senior Vice President of Government Affairs for Americans for Responsible Innovation (ARI)
  • Government research institutions (funding) - Stephen Ezell
  • Standards and governance (NIST, DOS, etc.) - Vivek Wadhwa, Adjunct Professor at Carnegie Mellon's School of Engineering at Silicon Valley
  • Regulatory and trade (DOS, Treasury, etc.) - Susan Ariel Aaronson, American author, public speaker, and GWU professor
  • Department of Defense - Daniel Castro, vice president at the Information Technology and Innovation Foundation (ITIF)
  • Commerce Department - Anupam Chander, Scott K. Ginsburg Professor of Law at Georgetown University Law Center
  • Intel Community and Cyber Defense - David Mussington, professor of the practice at the University of Maryland School of Public Policy, currently serving as the CISA Executive Assistant Director
  • Congress/State Department - Cameron Kelly, Distinguished Visiting Fellow, Brookings Institution

China
  • Central Military Committee representatives - Rohit Talwar, founder of FastFuture
  • Intelligence and cyber - Daniel Richardson, President of Indepth Global AI
  • Public/Private Industry - Sarah Myers West, co-director at AI Now
  • Ministry of Science and Technology (MOST)/Ministry of Industry and Information Technology (MIIT) - David Lin, Senior Director for Future Technology Platforms at the Special Competitive Studies Project (SCSP)

European Union
  • Governance - Courtney Radsch, Director, Center for Journalism and Liberty at Open Markets Institute
  • Military/Security - Gordon LaForge, senior policy analyst at New America
  • Regulatory - Michelle Nie, EU Tech Policy Fellow at the Open Markets Institute
  • Industrial and research policy - David Goldston, director of government affairs at the Natural Resources Defense Council
  • Intelligence Agencies - Rumman Chowdhury, scientist, entrepreneur, and former responsible artificial intelligence lead at Accenture

Civil Society
  • Large players (ChatGPT, META, Amazon, Microsoft) - Cody Buntain, Assistant Professor; Affiliate Fellow, UMD Honors College - Artificial Intelligence Cluster
  • Medium players - Ramayya Krishnan, Dean, Heinz College of Information Systems and Public Policy at Carnegie Mellon University
  • Open-source communities - Jay Lee, Clark Distinguished Chair Professor and Director of the Industrial AI Center in the Mechanical Engineering Dept. of the Univ. of Maryland College Park
  • Advocacy Organizations - David Goldston, director of government affairs at the Natural Resources Defense Council
  • Legal Community - Kahaan Mehta, Research Fellow at the Vidhi Centre for Legal Policy

Universities and Academia
  • Large universities - Nita Farahany, Robinson O. Everett Distinguished Professor of Law at Duke Law
  • Smaller schools - Anand Patwardhan, professor in the School of Public Policy at the University of Maryland
  • Medium universities - Elizabeth Bramson-Boudreau, CEO and Publisher at MIT Technology Review
  • Government laboratories (Defense, DOE, etc.) - Emily M. Bender, University of Washington Professor

This material is distributed by TRG Advisory Services, LLC on behalf of the Embassy of the United Arab Emirates in the U.S. Additional information is available at the Department of Justice, Washington, DC.

Learn more about your ad choices. Visit megaphone.fm/adchoices

Deep State Radio
Siliconsciousness: The AI Competition: Public Policy Strategies: Part 1

Mar 28, 2025 (40:52)


Welcome to a very different episode of Siliconsciousness. Today, we are taking a creative new approach to discussing the future of AI. This episode comprises the first part of our special event, "The AI Competition: Public Policy Strategies". The event, co-hosted by MIT Technology Review, brings together some of the leading voices in AI policy from the public and private sectors to role-play these complex issues. These AI leaders play roles in the US, China, and the EU, and enact policies that best align with their roles' interests in the AI space. This first episode contains the first phase of the game, as well as introductions from MIT Technology Review editor in chief Mat Honan and game controller Ed McGrady. We hope you enjoy.

Our Players:

US Government
  • White House (NSA, AI & Crypto Czar, Assistant to Pres. for S&T) - Doug Calidas, Senior Vice President of Government Affairs for Americans for Responsible Innovation (ARI)
  • Government research institutions (funding) - Stephen Ezell
  • Standards and governance (NIST, DOS, etc.) - Vivek Wadhwa, Adjunct Professor at Carnegie Mellon's School of Engineering at Silicon Valley
  • Regulatory and trade (DOS, Treasury, etc.) - Susan Ariel Aaronson, American author, public speaker, and GWU professor
  • Department of Defense - Daniel Castro, vice president at the Information Technology and Innovation Foundation (ITIF)
  • Commerce Department - Anupam Chander, Scott K. Ginsburg Professor of Law at Georgetown University Law Center
  • Intel Community and Cyber Defense - David Mussington, professor of the practice at the University of Maryland School of Public Policy, currently serving as the CISA Executive Assistant Director
  • Congress/State Department - Cameron Kelly, Distinguished Visiting Fellow, Brookings Institution

China
  • Central Military Committee representatives - Rohit Talwar, founder of FastFuture
  • Intelligence and cyber - Daniel Richardson, President of Indepth Global AI
  • Public/Private Industry - Sarah Myers West, co-director at AI Now
  • Ministry of Science and Technology (MOST)/Ministry of Industry and Information Technology (MIIT) - David Lin, Senior Director for Future Technology Platforms at the Special Competitive Studies Project (SCSP)

European Union
  • Governance - Courtney Radsch, Director, Center for Journalism and Liberty at Open Markets Institute
  • Military/Security - Gordon LaForge, senior policy analyst at New America
  • Regulatory - Michelle Nie, EU Tech Policy Fellow at the Open Markets Institute
  • Industrial and research policy - David Goldston, director of government affairs at the Natural Resources Defense Council
  • Intelligence Agencies - Rumman Chowdhury, scientist, entrepreneur, and former responsible artificial intelligence lead at Accenture

Civil Society
  • Large players (ChatGPT, META, Amazon, Microsoft) - Cody Buntain, Assistant Professor; Affiliate Fellow, UMD Honors College - Artificial Intelligence Cluster
  • Medium players - Ramayya Krishnan, Dean, Heinz College of Information Systems and Public Policy at Carnegie Mellon University
  • Open-source communities - Jay Lee, Clark Distinguished Chair Professor and Director of the Industrial AI Center in the Mechanical Engineering Dept. of the Univ. of Maryland College Park
  • Advocacy Organizations - David Goldston, director of government affairs at the Natural Resources Defense Council
  • Legal Community - Kahaan Mehta, Research Fellow at the Vidhi Centre for Legal Policy

Universities and Academia
  • Large universities - Nita Farahany, Robinson O. Everett Distinguished Professor of Law at Duke Law
  • Smaller schools - Anand Patwardhan, professor in the School of Public Policy at the University of Maryland
  • Medium universities - Elizabeth Bramson-Boudreau, CEO and Publisher at MIT Technology Review
  • Government laboratories (Defense, DOE, etc.) - Emily M. Bender, University of Washington Professor

Learn more about your ad choices. Visit megaphone.fm/adchoices

Lingthusiasm - A podcast that's enthusiastic about linguistics
100: A hundred reasons to be enthusiastic about linguistics

Jan 17, 2025 (41:16)


This is our hundredth episode that's enthusiastic about linguistics! To celebrate, we've put together 100 of our favourite fun facts about linguistics, featuring contributions from previous guests and Lingthusiasm team members, fan favourites that resonated with you from the previous 99 episodes, and new facts that haven't been on the show before but might star in one of the next 100 episodes in greater detail. In this episode, your hosts Gretchen McCulloch and Lauren Gawne talk about brains, gesture, etymology, famous example sentences, languages by the numbers, a few special facts about the word "hundred", and way more! This episode is both a fun overview of the vibe of Lingthusiasm if you've never listened before, and a bonus bingo card game for diehard fans to see how many facts you can recognize. We also invite you to share this episode alongside one of your favourite fun facts about linguistics and help more people find Lingthusiasm in honour of our 100th episodiversary! Whether you pick something new that resonates from this episode, or share the fact you were sitting on the edge of your seat hoping we'd mention, we look forward to staying Lingthusiastic with you for the next 100 episodes.

Click here for a link to this episode in your podcast player of choice: episodes.fm/1186056137/episode/dGFnOnNvdW5kY2xvdWQsMjAxMDp0cmFja3MvMjAxMDg1Njk3MQ

Read the transcript here: lingthusiasm.com/post/772874564563845120/transcript-episode-100

Announcements: In this month's bonus episode we get enthusiastic about some of our favourite deleted bits from recent interviews that we didn't quite have space to share with you! First, we go back to our interview with phonetician Jacq Jones, previously seen talking about how binary and non-binary people talk. Then, we return to computational linguist Emily M. Bender to talk about how Emily's students made a computational model of Lauren's grammar of Lamjung Yolmo, and how linguistics is a team sport. Finally, we return to our group interview with the team behind Tom Scott's Language Files to talk about sneaky Icelandic jokes and the unedited behind-the-scenes version of the gif/gif joke. Join us on Patreon now to get access to this and 90+ other bonus episodes. You'll also get access to the Lingthusiasm Discord server where you can chat with other language nerds: patreon.com/posts/118982443

For links to things mentioned in this episode: lingthusiasm.com/post/772874257193730048/lingthusiasm-episode-100-a-hundred-reasons-to-be

Fluidity
Better Text Generation With Science And Engineering

Jan 12, 2025 (38:20)


Current text generators, such as ChatGPT, are highly unreliable, difficult to use effectively, unable to do many things we might want them to, and extremely expensive to develop and run. These defects are inherent in their underlying technology. Quite different methods could plausibly remedy all these defects. Would that be good, or bad? https://betterwithout.ai/better-text-generators

  • John McCarthy's paper "Programs with common sense": http://www-formal.stanford.edu/jmc/mcc59/mcc59.html
  • Harry Frankfurt, "On Bullshit": https://www.amazon.com/dp/B001EQ4OJW/?tag=meaningness-20
  • Petroni et al., "Language Models as Knowledge Bases?": https://aclanthology.org/D19-1250/
  • Gwern Branwen, "The Scaling Hypothesis": gwern.net/scaling-hypothesis
  • Rich Sutton's "Bitter Lesson": www.incompleteideas.net/IncIdeas/BitterLesson.html
  • Guu et al.'s "Retrieval augmented language model pre-training" (REALM): http://proceedings.mlr.press/v119/guu20a/guu20a.pdf
  • Borgeaud et al.'s "Improving language models by retrieving from trillions of tokens" (RETRO): https://arxiv.org/pdf/2112.04426.pdf
  • Izacard et al., "Few-shot Learning with Retrieval Augmented Language Models": https://arxiv.org/pdf/2208.03299.pdf
  • Chirag Shah and Emily M. Bender, "Situating Search": https://dl.acm.org/doi/10.1145/3498366.3505816
  • David Chapman's original version of the proposal he puts forth in this episode: twitter.com/Meaningness/status/1576195630891819008
  • Lan et al., "Copy Is All You Need": https://arxiv.org/abs/2307.06962
  • Mitchell A. Gordon's "RETRO Is Blazingly Fast": https://mitchgordon.me/ml/2022/07/01/retro-is-blazing.html
  • Min et al.'s "Silo Language Models": https://arxiv.org/pdf/2308.04430.pdf
  • W. Daniel Hillis, The Connection Machine, 1986: https://www.amazon.com/dp/0262081571/?tag=meaningness-20
  • Ouyang et al., "Training language models to follow instructions with human feedback": https://arxiv.org/abs/2203.02155
  • Ronen Eldan and Yuanzhi Li, "TinyStories: How Small Can Language Models Be and Still Speak Coherent English?": https://arxiv.org/pdf/2305.07759.pdf
  • Li et al., "Textbooks Are All You Need II: phi-1.5 technical report": https://arxiv.org/abs/2309.05463
  • Henderson et al., "Foundation Models and Fair Use": https://arxiv.org/abs/2303.15715
  • Authors Guild v. Google: https://en.wikipedia.org/wiki/Authors_Guild%2C_Inc._v._Google%2C_Inc.
  • Abhishek Nagaraj and Imke Reimers, "Digitization and the Market for Physical Works: Evidence from the Google Books Project": https://www.aeaweb.org/articles?id=10.1257/pol.20210702

You can support the podcast and get episodes a week early by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks

If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold

Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.
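Several of the linked papers (REALM, RETRO, and the other retrieval-augmentation work) share one core move: fetch text relevant to the query from an external corpus, then condition generation on it, rather than relying on model weights alone. Here is a minimal Python sketch of that retrieve-then-generate loop, with a toy corpus and a word-overlap score standing in for the papers' learned retrievers; every name and function below is illustrative, not any paper's actual implementation.

    import math

    # Toy document store; real systems index millions to trillions of tokens.
    corpus = [
        "Text generators predict the next word from training statistics.",
        "Retrieval systems fetch documents relevant to a query.",
        "Conditioning generation on retrieved text can improve factuality.",
    ]

    def score(query: str, doc: str) -> float:
        # Crude relevance: shared-word count, length-normalized.
        # REALM/RETRO use learned dense embeddings instead.
        q, d = set(query.lower().split()), set(doc.lower().split())
        return len(q & d) / math.sqrt(len(d))

    def retrieve(query: str, k: int = 2) -> list[str]:
        # Rank the corpus by relevance and keep the top-k passages.
        return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

    def augmented_prompt(query: str) -> str:
        # A real model would condition its next-token probabilities on this
        # evidence; here we only assemble the prompt such a model would see.
        evidence = retrieve(query)
        return "CONTEXT:\n" + "\n".join(evidence) + f"\nQUESTION: {query}\nANSWER:"

    print(augmented_prompt("How do text generators predict the next word?"))

Real systems replace the overlap score with learned dense retrieval and feed the evidence directly into the generator's conditioning, but the control flow is the same retrieve-then-generate loop.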

Lingthusiasm - A podcast that's enthusiastic about linguistics
98: Helping computers decode sentences - Interview with Emily M. Bender

Lingthusiasm - A podcast that's enthusiastic about linguistics

Play Episode Listen Later Nov 22, 2024 56:10


When a human learns a new word, we're learning to attach that word to a set of concepts in the real world. When a computer "learns" a new word, it is creating some associations between that word and other words it has seen before, which can sometimes give it the appearance of understanding, but it doesn't have that real-world grounding, which can sometimes lead to spectacular failures: hilariously implausible from a human perspective, just as plausible from the computer's. In this episode, your host Lauren Gawne gets enthusiastic about how computers process language with Dr. Emily M. Bender, who is a linguistics professor at the University of Washington, USA, and cohost of the podcast Mystery AI Hype Theater 3000. We talk about Emily's work trying to formulate a list of rules that a computer can use to generate grammatical sentences in a language, the differences between that and training a computer to generate sentences using the statistical likelihood of what comes next based on all the other sentences, and the further differences between both those things and how humans map language onto the real world. We also talk about paying attention to communities not just data, the labour practices behind large language models, and how Emily's persistent questions led to the creation of the Bender Rule (always state the language you're working on, even if it's English). Click here for a link to this episode in your podcast player of choice: episodes.fm/1186056137/episode/dGFnOnNvdW5kY2xvdWQsMjAxMDp0cmFja3MvMTk2NDIxOTY5OQ Read the transcript here: lingthusiasm.com/post/767803835730231296/transcript-episode-98 Announcements: The 2024 Lingthusiasm Listener Survey is here! It's a mix of questions about who you are as our listener, as well as some fun linguistics experiments for you to participate in. If you have taken the survey in previous years, there are new questions, so you can participate again this year. Take the survey here: bit.ly/lingthusiasmsurvey24 In this month's bonus episode we get enthusiastic about three places where we can learn things about linguistics!! We talk about two linguistically interesting museums that Gretchen recently visited: the Estonian National Museum, as well as Mundolingua, a general linguistics museum in Paris. We also talk about Lauren's dream linguistics travel destination: Martha's Vineyard. Join us on Patreon now to get access to this and 90+ other bonus episodes. You'll also get access to the Lingthusiasm Discord server where you can chat with other language nerds. Sign up here: patreon.com/posts/115117867 Also, Patreon now has gift memberships! If you'd like to get a gift subscription to Lingthusiasm bonus episodes for someone you know, or if you want to suggest them as a gift for yourself, here's how to gift a membership: patreon.com/lingthusiasm/gift For links to things mentioned in this episode: lingthusiasm.com/post/767803572750581760/lingthusiasm-episode-98-helping-computers-decode
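To make that contrast concrete, here is a minimal sketch of the "statistical likelihood of what comes next" approach the episode describes: a toy bigram model built on a tiny made-up corpus. None of this code comes from the episode; it simply shows how fluent-looking word sequences can fall out of co-occurrence counts alone, with no connection between the words and the world.

```python
# Toy bigram text generator (illustrative only): next-word prediction from
# co-occurrence counts, with no real-world grounding whatsoever.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample a next word in proportion to how often it followed `prev`."""
    counts = following[prev]
    words = list(counts)
    return random.choices(words, weights=[counts[w] for w in words])[0]

# Generate a short "sentence": plausible-looking to us, meaningless to the model.
word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the rug ."
```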

New Books Network
Emily M. Bender on AI Hype

New Books Network

Play Episode Listen Later Sep 23, 2024 71:58


Peoples & Things host, Lee Vinsel, talks to Emily Bender, Professor of Linguistics, Director of the Master of Science in Computational Linguistics program, and Director of the Computational Linguistics Laboratory at the University of Washington, about her work on artificial intelligence criticism. Bender is also an adjunct professor in the School of Computer Science and Engineering and the Information School at UW; she is a member of the Tech Policy Lab, the Value Sensitive Design Lab, the Distributed AI Research Institute, and RAISE, or Responsibility in AI Systems and Experiences; *AND*, with Alex Hanna, she is co-host of the Mystery AI Hype Theater 3000 podcast, which you should check out. Vinsel and Bender talk about the current AI bubble, what is driving it, and the technological potentials and limitations of this technology. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network

Artificiality
Emily M. Bender: AI, Linguistics, Parrots, and more!

Artificiality

Play Episode Listen Later Aug 2, 2024 57:18


We're excited to welcome to the podcast Emily M. Bender, professor of computational linguistics at the University of Washington. As our listeners know, we enjoy tapping expertise in fields adjacent to the intersection of humans and AI. We find Emily's expertise in linguistics to be particularly important when understanding the capabilities and limitations of large language models—and that's why we were eager to talk with her. Emily is perhaps best known in the AI community for coining the term "stochastic parrots" to describe these models, highlighting their ability to mimic human language without true understanding. In her paper "On the Dangers of Stochastic Parrots," Emily and her co-authors raised crucial questions about the environmental, financial, and social costs of developing ever-larger language models. Emily has been a vocal critic of AI hype and her work has been pivotal in sparking critical discussions about the direction of AI research and development. In this conversation, we explore the issues of current AI systems with a particular focus on Emily's view as a computational linguist. We also discuss Emily's recent research on the challenges of using AI in search engines and information retrieval systems, and her description of large language models as synthetic text extruding machines. Let's dive into our conversation with Emily Bender. ---------------------- If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music

The Electorette Podcast
Are Our Fears About Artificial Intelligence Unfounded & How Should Legislators Protect Us: A Conversation with Prof. Emily M. Bender and Dr. Alex Hanna

The Electorette Podcast

Play Episode Listen Later Jul 11, 2024 50:05


Professor Emily M. Bender and sociologist Dr. Alex Hanna discuss the impact of Artificial Intelligence and whether our government can protect us from its potential harms via legislation and regulation. We also discuss the sociological harms of AI, as well as the environmental impact. Lastly, Professor Bender discusses the reaction to a paper she presented in 2021, titled "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?"

The Daily Zeitgeist
Down The Stupid AI Rabbithole 06.25.24

The Daily Zeitgeist

Play Episode Listen Later Jun 25, 2024 79:56 Transcription Available


In episode 1697, Jack and Miles are joined by hosts of Mystery AI Hype Theater 3000, Prof. Emily M. Bender & Dr. Alex Hanna, to discuss… AI Is Breaking The Internet and The World AI, Debunking Lies About AI Magic, Dangerous And Harmful Ways AI Is Actually Being Used and more! LISTEN: Out In The Sun (Hey-O) by The Beach-Nuts. See omnystudio.com/listener for privacy information.

Carnegie Council Audio Podcast
Linguistics, Automated Systems, & the Power of AI, with Emily M. Bender

Carnegie Council Audio Podcast

Play Episode Listen Later Jun 18, 2024 46:59


In this "AI & Equality" podcast, guest host and AIEI board advisor Dr. Kobi Leins is joined by University of Washington's Professor Emily Bender for a discussion on systems, power, and how we are changing the world, one technological decision at a time. With a deep expertise in language and computers, Bender brings her perspective on how language and systems are being perceived and used—and changing us through automated systems and AI. Why do words and linguistics matter when we are thinking about these emerging technologies? How can we more thoughtfully automate the use of AI? For more, please go to: https://carnegiecouncil.co/aiei-leins-bender

The 10 Minute Teacher Podcast
Some Big AI Problems: The Eliza Effect and More

The 10 Minute Teacher Podcast

Play Episode Listen Later Jun 8, 2024 14:44


Yes, everyone is talking about AI. However, how do the concerns about AI apply to our classrooms today? Tom Mullaney talks about concerns with: The Eliza effect—where people attribute human characteristics such as trust and credibility to text-generating computers—can be dangerous when combined with the biases and inaccuracies inherent in large language models. It is vital for educators to understand this as we talk about AI with students. There are concerns about using AI as "guest speakers" even for something seemingly "harmless" like "the water cycle." Concerns with humanizing AI. Discussing the "On the Dangers of Stochastic Parrots" paper by Dr. Emily M. Bender et al., which discusses the ethical issues and harms of large language models, including bias and environmental racism. Debunking the myth that AI will have values and beliefs. Practical applications of AI in the classroom. The challenges of citing generative AI in the classroom. Why it is vital to teach about AI's ethical implications and encourage critical thinking with the use of AI in the classroom. Why educators should stay informed about AI so they can guide students to effectively and responsibly use the AI that is becoming embedded in their technology. Sponsor: Juicemind - https://www.juicemind.com/ As I taught coding this year in AP Computer Science Principles, I found JuiceMind so useful. Not only do they have the team coding tools we educators need (since Replit was discontinued) but they have Kahoot-like games where students can write code as part of the quizzing process. Juicemind also works with many math courses. I love their tools for studying in my coding classes and highly recommend Juicemind. Disclosure of Material Connection: This is a “sponsored podcast episode.” The company who sponsored it compensated me via cash payment, gift, or something else of value to include a reference to their product. Regardless, I only recommend products or services I believe will be good for my readers and are from companies I can recommend. I am disclosing this in accordance with the Federal Trade Commission's 16 CFR, Part 255: “Guides Concerning the Use of Endorsements and Testimonials in Advertising."

Our Opinions Are Correct
The Turing Test is bullsh*t (w/Alex Hanna and Emily M. Bender)

Our Opinions Are Correct

Play Episode Listen Later Apr 4, 2024 48:21


We're talking about the Turing Test, the grandmother of all tests for AI sentience. Joining us are AI researchers Alex Hanna and Emily M. Bender, hosts of the Mystery AI Hype Theater 3000 podcast. We discuss why the Turing Test is so influential in both fiction and reality – and why it is completely wrong. Later in the episode, we'll talk about another thing that humans got wrong when it comes to non-human intelligence: dog breeding.

Lexis
Episode 51 - Emily M. Bender and 'AI' hype

Lexis

Play Episode Listen Later Mar 19, 2024 33:07


Show notes for Episode 51 Here are the show notes for Episode 51, in which Dan and (new Lexis team member) Raj talk to Professor Emily M. Bender of the University of Washington about: why 'Artificial Intelligence' is not really the right term at all; how Large Language Models work and why we should be sceptical of many of the claims made for them; the biases inherent in LLMs and what to do about them; whether 'neural networks' and language processing can shed any light on child language development; and the discourses around 'AI': from booster to doomer. Emily M. Bender's University of Washington page: https://faculty.washington.edu/ebender/ A great interview from 2023: https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html Time Magazine on the 'machine-learning myth buster': https://time.com/collection/time100-ai/6308275/emily-m-bender/ Mystery AI Hype Theater 3000 podcast: https://www.dair-institute.org/maiht3k/ Emily's book recommendations: 'Babel', R.F. Kuang: https://uk.bookshop.org/p/books/babel-or-the-necessity-of-violence-an-arcane-history-of-the-oxford-translators-revolution-r-f-kuang/6627642?ean=9780008501853 'A Memory Called Empire', Arkady Martine: https://uk.bookshop.org/p/books/a-memory-called-empire-winner-of-the-hugo-award-for-best-novel-arkady-martine/219166?ean=9781529001594 Other links from the interview: Jess Dodge's work: https://jessedodge.github.io/ Batya Friedman & Helen Nissenbaum, Bias in Computer Systems (1996): https://nyuscholars.nyu.edu/en/publications/bias-in-computer-systems Some further reading: Police worried 101 call bot would struggle with 'Brummie' accents: https://www.bbc.co.uk/news/technology-68466369 BBC News - 'Journalists are feeding the AI hype machine': https://www.bbc.co.uk/news/business-68488924 Bias against African American English - Paper: https://arxiv.org/abs/2403.00742 Register article: https://www.theregister.com/2024/03/11/ai_models_exhibit_racism_based/ An Al-Jazeera opinion piece about AI and borders: https://www.aljazeera.com/opinions/2023/4/20/ban-racist-and-lethal-ai-from-europes-borders Contributors: Lisa Casey - blog: https://livingthroughlanguage.wordpress.com/ & Twitter: Language Debates (@LanguageDebates) Dan Clayton - blog: EngLangBlog & Twitter: EngLangBlog (@EngLangBlog) Bluesky: https://bsky.app/profile/englangblog.bsky.social Jacky Glancey - Twitter: https://twitter.com/JackyGlancey Raj Rana Matthew Butler - Twitter: https://twitter.com/MatthewbutlerCA Music: Serge Quadrado - Cool Guys. Cool Guys by Serge Quadrado is licensed under an Attribution-NonCommercial 4.0 International License. From the Free Music Archive: https://freemusicarchive.org/music/serge-quadrado/urban/cool-guys

In Conversation with UX Magazine
S3E8 LLM? More Like "Limited" Language Model with Emily M. Bender, University of Washington

In Conversation with UX Magazine

Play Episode Listen Later Feb 29, 2024 60:03


As a co-author of the often cited (and debated) Stochastic Parrots paper from 2021, Emily M. Bender is a staunch critic of large language models (LLMs). Having worked in computational linguistics for more than 20 years, Emily has a deep understanding of LLM mechanics that leads her to question many of the emerging use cases we see in the world. Also, Emily hosts Mystery AI Hype Theater 3000, where, alongside sociologist Dr. Alex Hanna, she breaks down the AI hype, separates fact from fiction, and distinguishes science from bloviation. She joins Robb and Josh for a provocative exploration of generative AI on an important episode of Invisible Machines.

The Daily Zeitgeist
Resist The Urge To Be Impressed By A.I. 02.27.24

The Daily Zeitgeist

Play Episode Listen Later Feb 27, 2024 71:55 Transcription Available


In episode 1631, Jack and Miles are joined by hosts of Mystery AI Hype Theater 3000, Dr. Emily M. Bender & Dr. Alex Hanna, to discuss… Limited And General Artificial Intelligence, The Distinction Between 'Hallucinating' And Failing At Its Goal, How Widespread The BS Is and more! LISTEN: A Dream Goes On Forever by Vegyn. See omnystudio.com/listener for privacy information.

The Good Robot IS ON STRIKE!
Emily M. Bender and Alex Hanna on Why You Shouldn't Believe the AI Hype

The Good Robot IS ON STRIKE!

Play Episode Listen Later Feb 6, 2024 29:18


In this episode, we talk to Emily M. Bender and Alex Hanna, AI ethics legends and now co-hosts of Mystery AI Hype Theater 3000, a new podcast where they dispel the hype storm around AI. Emily is a professor of linguistics at the University of Washington and a co-author of the Stochastic Parrots paper that you may have heard of, because two very important people on the Google AI ethics team, Timnit Gebru and Meg Mitchell, allegedly got fired over it. And Alex Hanna is the director of research at the Distributed AI Research Institute, known by its acronym DAIR, which is now run by Timnit. In this episode, they argue that we should stop using the term AI altogether, and that the world might be better without text-to-image systems like DALL·E and Midjourney. They tell us how the AI hype agents are getting high on their own supply, and give some advice for young people going into tech careers.

Luiza's Podcast
#9: Understanding LLMs and Breaking Down the AI Hype, with Dr. Alex Hanna & Prof. Emily M. Bender

Luiza's Podcast

Play Episode Listen Later Sep 21, 2023 57:15


In this exclusive live talk, Luiza Jarovsky discusses with Dr. Alex Hanna and Prof. Emily M. Bender: what the current AI hype is about and what the main counter-arguments are; and why the "Stochastic Parrots"…

Yeah Nah Pasaran!
Emily M. Bender on AI Hype

Yeah Nah Pasaran!

Play Episode Listen Later Sep 21, 2023


This week we have a chat with Prof. Emily M. Bender about stochastic parrots, Large Language Models and AI hype.

Wild with Sarah Wilson
EMILY M. BENDER: AI won't kill us any time soon (don't believe the bro' hype!)

Wild with Sarah Wilson

Play Episode Listen Later Sep 19, 2023 64:48


Emily M. Bender (ChatGPT expert) is a linguist, a scholar of the societal impact of language AI, and a professor at the University of Washington, where she's director of the Computational Linguistics Laboratory. She recently became internet-famous for her no-nonsense, almost comical papers that criticise the hype around large language models (LLMs) and ChatGPT. Her message is: Don't believe the tech bro' hype; it's spin! In this chat we cover whether AI can take over the world; the real motives behind Elon Musk and Sam Altman's excited calls for an “AI pause”; where longtermism, the singularity, effective altruism, pro-natalism and transhumanism (I've covered these in previous eps and on my Substack) all fit into the palaver; plus what we really should be terrified about. This is a thoroughly important and correcting conversation. I flag this explainer that I wrote on my Substack: Say it isn't so: Human Eugenics. You can read Emily's papers “On the Dangers of Stochastic Parrots” and the “Octopus Paper”. Here's the Statement from the listed authors of Stochastic Parrots on the “AI pause” letter. Emily also wanted to point everyone to this paper on AI Safety vs. AI Ethics. And if you want to do more of a deep dive into all this, check out her podcast. If you need to know a bit more about me, head to my "about" page. For more such conversations, subscribe to my Substack newsletter; it's where I interact the most! Get your copy of my book, This One Wild and Precious Life. Let's connect on Instagram. Hosted on Acast. See acast.com/privacy for more information.

re:verb
E82: The Rhetoric of AI Hype (w/ Dr. Emily M. Bender)

re:verb

Play Episode Listen Later Jul 28, 2023 53:02


Are you a writing instructor or student who's prepared to turn over all present and future communication practices to the magic of ChatGPT? Not so fast! On today's show, we are joined by Dr. Emily M. Bender, Professor in the Department of Linguistics at the University of Washington and a pre-eminent academic critic of so-called “generative AI” technologies. Dr. Bender's expertise involves not only how these technologies work computationally, but also how language is used in popular media to hype, normalize, and even obfuscate AI and its potential to affect our lives. Dr. Bender's most well-known scholarly work related to this topic is a co-authored conference paper from 2021 entitled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In our conversation, Emily explains why she and her co-authors chose the “stochastic parrot” metaphor – how this helps us to understand large language models and other related technologies more accurately than many competing metaphors. We go on to discuss several actual high-stakes, significant issues related to these technologies, before Dr. Bender provides a helpful index of some of the most troublesome ways they are talked about in the media: synthetic text “gotcha”s, infancy metaphors, linear models of progress, inevitability framings, and many other troublesome tropes. We conclude with a close reading of a recent piece in the Chronicle of Higher Education about using synthetic text generators in writing classrooms: “Why I'm Excited About Chat GPT” by Jenny Young. Young's article exemplifies many of the tropes Emily discussed earlier, as well as capturing lots of strange prevailing ideas about writing pedagogy, genre, and rhetoric in general. We hope that you enjoy this podcast tour through the world of AI hype media, and we ask that you please remain non-synthetic 'til next time – no shade to parrots!

The Politics of Everything
The Great AI Hallucination (Rerun)

The Politics of Everything

Play Episode Listen Later Jul 19, 2023 44:27


Tech futurists have been saying for decades that artificial intelligence will transform the way we live. In some ways, it already has: Think autocorrect, Siri, facial recognition. But ChatGPT and other generative A.I. models are also prone to getting things wrong—and whether the programs will improve with time is not altogether clear. So what purpose, exactly, does this iteration of A.I. actually serve, how is it likely to be adopted, and who stands to benefit (or suffer) from it? On episode 67 of The Politics of Everything, hosts Laura Marsh and Alex Pareene talk with Washington Post reporter Will Oremus about a troubling tale of A.I. fabulism; with science fiction author Ted Chiang about ramifications of an A.I.-polluted internet; and with linguist Emily M. Bender about what large language models can and cannot do—and whether we're asking the right questions about this technology. Learn more about your ad choices. Visit megaphone.fm/adchoices

Mystery AI Hype Theater 3000
Episode 6: Stochastic Parrot Galactica, November 23, 2022

Mystery AI Hype Theater 3000

Play Episode Listen Later Jul 17, 2023 63:12 Transcription Available


Emily and Alex discuss MetaAI's bullshit science paper generator, Galactica, along with its defenders. Plus, where could AI actually help scientific research? And more Fresh AI Hell. Watch the video of this episode on PeerTube. References: Imre Lakatos on research programs; Shah, Chirag and Emily M. Bender. 2022. "Situating Search." Proceedings of the 2022 ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR '22); UW RAISE (Responsibility in AI Systems and Experiences); Stochastic Parrots: Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?"

The Sunday Show
Your Guides Through the Hellscape of AI Hype

The Sunday Show

Play Episode Listen Later Jul 2, 2023 25:41


Alex Hanna, the director of research at the Distributed AI Research Institute, and Emily M. Bender, a professor of linguistics at the University of Washington, are the hosts of Mystery AI Hype Theater 3000, a show that seeks to "break down the AI hype, separate fact from fiction, and science from bloviation." Justin Hendrix spoke to Alex and Emily about the show's origins, and what they hope will come of the effort to scrutinize statements about the potential of AI that are often fantastical.

Mystery AI Hype Theater 3000
Episode 4: Is AI Art Actually 'Art'? October 26, 2022

Mystery AI Hype Theater 3000

Play Episode Listen Later Jul 2, 2023 63:38 Transcription Available


AI is increasingly being used to make visual art. But when is an algorithmically-generated image art...and when is it just an aesthetically pleasing arrangement of pixels? Technology researchers Emily M. Bender and Alex Hanna talk to a panel of artists and researchers about the hype, the ethics, and even the definitions of art when a computer is involved. This episode was recorded in October of 2022. You can watch the video on PeerTube. Dr. Johnathan Flowers is an assistant professor in the department of philosophy at California State University, Northridge. His research interest is at the intersection of American Pragmatism, Philosophy of Disability, and Philosophy of Race, Gender and Sexuality as they apply to socio-technical systems. Flowers also explores the impacts of cultural narratives on the perception and development of sociotechnical systems. Dr. Jennifer Lena is a professor at Teachers College, Columbia University, where she runs the Arts Administration program. She's published books on music genres, the legitimation of art, and the measurement of culture. Dr. Negar Rostamzadeh is a Senior Research Scientist on the Google Responsible AI team. Her recent research is at the intersection of computer vision and sociotechnical research. She studies creative computer vision technologies and their broader social impact. Kevin Roose, "An A.I.-Generated Picture Won an Art Prize. Artists Aren't Happy." Jo Lawson-Tancred, "Robot Artist Ai-Da Just Addressed U.K. Parliament About the Future of A.I. and 'Terrified' the House of Lords." Marco Donnarumma, "AI Art Is Soft Propaganda for the Global North." Jane Recker, "U.S. Copyright Office Rules A.I. Art Can't Be Copyrighted." Richard Whiddington, "Shutterstock Inks Deal With DALL-E Creator to Offer A.I.-Generated Stock Images. Not All Artists Are Rejoicing." Stephen Cave and Kanta Dihal, "The Whiteness of AI." Follow our guests: Dr. Johnathan Flowers - https://twitter.com/shengokai // https://zirk.us/@shengokai Dr. Negar Rostamzadeh - twitter.com/negar_rz You can check out future livestreams at https://twitch.tv/DAIR_Institute. Follow us! Emily Twitter: https://twitter.com/EmilyMBender Mastodon: https://dair-community.social/@EmilyMBender Bluesky: https://bsky.app/profile/emilymbender.bsky.social Alex Twitter: https://twitter.com/@alexhanna Mastodon: https://dair-community.social/@alex Bluesky: https://bsky.app/profile/alexhanna.bsky.social Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.

Mystery AI Hype Theater 3000
Episode 2: "Can Machines Learn To Behave?" Part 2, September 6, 2022

Mystery AI Hype Theater 3000

Play Episode Listen Later Jun 19, 2023 61:30 Transcription Available


Technology researchers Emily M. Bender and Alex Hanna continue the Mystery AI Hype Theater 3000 series by reading through "Can machines learn how to behave?" by Blaise Aguera y Arcas, a Google VP who works on artificial intelligence. This episode was recorded in September of 2022, and is the second of three about Aguera y Arcas' post. You can also watch the video of this episode on PeerTube. You can check out future livestreams at https://twitch.tv/DAIR_Institute. Follow us! Emily Twitter: https://twitter.com/EmilyMBender Mastodon: https://dair-community.social/@EmilyMBender Bluesky: https://bsky.app/profile/emilymbender.bsky.social Alex Twitter: https://twitter.com/@alexhanna Mastodon: https://dair-community.social/@alex Bluesky: https://bsky.app/profile/alexhanna.bsky.social Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.

Mystery AI Hype Theater 3000
Episode 3: "Can Machines Learn To Behave?" Part 3, September 23, 2022

Mystery AI Hype Theater 3000

Play Episode Listen Later Jun 19, 2023 63:00 Transcription Available


Technology researchers Emily M. Bender and Alex Hanna conclude their three-part reading of "Can machines learn how to behave?" by Blaise Aguera y Arcas, a Google VP who works on artificial intelligence. This episode was recorded in September of 2022, and is the last of three about Aguera y Arcas' post. You can watch the video of this episode on PeerTube. You can check out future livestreams at https://twitch.tv/DAIR_Institute. Follow us! Emily Twitter: https://twitter.com/EmilyMBender Mastodon: https://dair-community.social/@EmilyMBender Bluesky: https://bsky.app/profile/emilymbender.bsky.social Alex Twitter: https://twitter.com/@alexhanna Mastodon: https://dair-community.social/@alex Bluesky: https://bsky.app/profile/alexhanna.bsky.social Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.

Mystery AI Hype Theater 3000
Episode 1: "Can Machines Learn To Behave?" Part 1, August 31, 2022

Mystery AI Hype Theater 3000

Play Episode Listen Later May 31, 2023 44:58 Transcription Available


Technology researchers Emily M. Bender and Alex Hanna kick off the Mystery AI Hype Theater 3000 series by reading through "Can machines learn how to behave?" by Blaise Aguera y Arcas, a Google VP who works on artificial intelligence. This episode was recorded in August of 2022, and is the first of three about Aguera y Arcas' post. Watch the video stream on PeerTube. You can check out future livestreams at https://twitch.tv/DAIR_Institute. Follow us! Emily Twitter: https://twitter.com/EmilyMBender Mastodon: https://dair-community.social/@EmilyMBender Bluesky: https://bsky.app/profile/emilymbender.bsky.social Alex Twitter: https://twitter.com/@alexhanna Mastodon: https://dair-community.social/@alex Bluesky: https://bsky.app/profile/alexhanna.bsky.social Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.

Word of Mouth
Chatbots

Word of Mouth

Play Episode Listen Later May 12, 2023 37:16


Michael is joined by Emily M. Bender, Professor of computational linguistics at the University of Washington and co-author of the infamous paper 'On the Dangers of Stochastic Parrots'. Cutting through the recent hype, she explains how chatbots do what they do, how they have become so fluent, and why she thinks we should be careful with the terminology we employ when talking about them. Presented by Michael Rosen and produced for BBC Audio in Bristol by Ellie Richold.

The Politics of Everything
The Great A.I. Hallucination

The Politics of Everything

Play Episode Listen Later May 10, 2023 44:18


Tech futurists have been saying for decades that artificial intelligence will transform the way we live. In some ways, it already has: Think autocorrect, Siri, facial recognition. But ChatGPT and other generative A.I. models are also prone to getting things wrong—and whether the programs will improve with time is not altogether clear. So what purpose, exactly, does this iteration of A.I. actually serve, how is it likely to be adopted, and who stands to benefit (or suffer) from it? On episode 67 of The Politics of Everything, hosts Laura Marsh and Alex Pareene talk with Washington Post reporter Will Oremus about a troubling tale of A.I. fabulism; with science fiction author Ted Chiang about ramifications of an A.I.-polluted internet; and with linguist Emily M. Bender about what large language models can and cannot do—and whether we're asking the right questions about this technology. Learn more about your ad choices. Visit megaphone.fm/adchoices

Tech Won't Save Us
ChatGPT Is Not Intelligent w/ Emily M. Bender

Tech Won't Save Us

Play Episode Listen Later Apr 13, 2023 64:21


Paris Marx is joined by Emily M. Bender to discuss what it means to say that ChatGPT is a “stochastic parrot,” why Elon Musk is calling to pause AI development, and how the tech industry uses language to trick us into buying its narratives about technology. Emily M. Bender is a professor in the Department of Linguistics at the University of Washington and the Faculty Director of the Computational Linguistics Master's Program. She's also the director of the Computational Linguistics Laboratory. Follow Emily on Twitter at @emilymbender or on Mastodon at @emilymbender@dair-community.social. Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, and support the show on Patreon. The podcast is produced by Eric Wickham and part of the Harbinger Media Network. Also mentioned in this episode: Emily was one of the co-authors on the “On the Dangers of Stochastic Parrots” paper and co-wrote the “Octopus Paper” with Alexander Koller. She was also recently profiled in New York Magazine and has written about why policymakers shouldn't fall for the AI hype. The Future of Life Institute put out the “Pause Giant AI Experiments” letter and the authors of the “Stochastic Parrots” paper responded through DAIR Institute. Zachary Loeb has written about Joseph Weizenbaum and the ELIZA chatbot. Leslie Kay Jones has researched how Black women use and experience social media. As generative AI is rolled out, many tech companies are firing their AI ethics teams. Emily points to Algorithmic Justice League and AI Incident Database. Deborah Raji wrote about data and systemic racism for MIT Tech Review. Books mentioned: Weapons of Math Destruction by Cathy O'Neil, Algorithms of Oppression by Safiya Noble, The Age of Surveillance Capitalism by Shoshana Zuboff, Race After Technology by Ruha Benjamin, Ghost Work by Mary L. Gray & Siddharth Suri, Artificial Unintelligence by Meredith Broussard, Design Justice by Sasha Costanza-Chock, Data Conscience: Algorithmic S1ege on our Hum4n1ty by Brandeis Marshall. Support the show

Parlons Futur
Approximations et omissions du dernier C dans l'Air sur l'IA

Parlons Futur

Play Episode Listen Later Apr 6, 2023 55:43


Note: I play excerpts from C dans l'Air and then comment on them. Unfortunately the excerpts are hard to hear, but you can simply skip past them, since I summarize each one before commenting on it; sorry, I'll handle this better next time. Useful resources discussed in the podcast: Link to the C dans l'Air episode in question, whose guests include Gaspard Koenig, philosopher, author of "La fin de l'individu : voyage d'un philosophe au pays de l'intelligence artificielle", and Laurence Devillers, professor of artificial intelligence at Sorbonne Université, expert in human-machine interaction, author of "Les robots émotionnels". The petition in question, signed notably by Yoshua Bengio, one of the three godfathers of deep learning, the technique behind the latest major advances in AI; Yoshua Bengio is the only one of the three not to have joined the private sector (unlike his elders Yann LeCun at Meta and Geoffrey Hinton at Google Brain). The petition asks: "Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?", thereby weighing short-, medium-, and long-term risks. The most radical position on the risks of AI is Eliezer Yudkowsky's: "Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in 'maybe possibly some remote chance,' but as in 'that is the obvious thing that would happen.'" (op-ed in Time magazine, "Pausing AI Developments Isn't Enough. We Need to Shut it All Down"; see also my previous podcast episode). Chimp beats students at computer game (article in the magazine Nature, video on YouTube). Why Are Our Brains Shrinking? (University of San Francisco): "over the last 20,000 years alone, human brains have shrunk from 1,500 cubic centimeters (cc) to 1,350 cc, roughly the size of a tennis ball." The four broad positions on AI, to simplify: (1) Those who signed the petition (notably Gary Marcus and Yoshua Bengio) or who hold the same positions without having signed or endorsed it, worried about short-, medium-, and long-term risks (notable non-signatories include Geoffrey Hinton, the most senior of the three godfathers of AI; Demis Hassabis, CEO of DeepMind, the other leading AI lab alongside OpenAI; some OpenAI employees off the record; and even Sam Altman, CEO of OpenAI, on the medium- and long-term stakes). (2) Those who refused to sign the petition because it does not put enough weight on short-term risks (fairness, algorithmic bias, accountability, transparency, inequality, cost to the environment, disinformation, cybercrime) and puts too much on long-term risks that belong to science fiction (notably the AI and language researchers Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell). (3) Those who refuse to sign because they believe that developing AI is also how we will find the solutions to its problems, and that the risks involved are not yet clear enough, AI still being very primitive (notably Yann LeCun, and OpenAI for now). (4) Finally, the most radical, like Eliezer Yudkowsky, who refuse to sign the petition because they think it does not go far enough and downplays the existential risk.

Marketplace Tech
Do we have an AI hype problem?

Marketplace Tech

Play Episode Listen Later Apr 3, 2023 9:21


Last week, more than 1,000 experts in science and technology signed an open letter to labs developing advanced artificial intelligence, asking them to pause the “out of control race” to train ever more powerful systems. The letter warns that these “non-human minds” might eventually outsmart us, risking the “loss of control of our civilization.” But such framing misses the mark, according to Emily M. Bender, a computational linguist at the University of Washington who is skeptical of “AI hype.” Marketplace's Meghan McCarty Carino spoke with Bender about what she sees as the real dangers in these models, starting with the way they use language itself.

Machine Learning Street Talk
#111 - AI moratorium, Eliezer Yudkowsky, AGI risk etc

Machine Learning Street Talk

Play Episode Listen Later Apr 1, 2023 26:57


Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 Send us a voice message which you want us to publish: https://podcasters.spotify.com/pod/show/machinelearningstreettalk/message In a recent open letter, over 1500 individuals called for a six-month pause on the development of advanced AI systems, expressing concerns over the potential risks AI poses to society and humanity. However, there are issues with this approach, including global competition, unstoppable progress, potential benefits, and the need to manage risks instead of avoiding them. Decision theorist Eliezer Yudkowsky took it a step further in a Time magazine article, calling for an indefinite and worldwide moratorium on Artificial General Intelligence (AGI) development, warning of potential catastrophe if AGI exceeds human intelligence. Yudkowsky urged an immediate halt to all large AI training runs and the shutdown of major GPU clusters, calling for international cooperation to enforce these measures. However, several counterarguments question the validity of Yudkowsky's concerns: 1. Hard limits on AGI 2. Dismissing AI extinction risk 3. Collective action problem 4. Misplaced focus on AI threats. While the potential risks of AGI cannot be ignored, it is essential to consider various arguments and potential solutions before making drastic decisions. As AI continues to advance, it is crucial for researchers, policymakers, and society as a whole to engage in open and honest discussions about the potential consequences and the best path forward. With a balanced approach to AGI development, we may be able to harness its power for the betterment of humanity while mitigating its risks. Eliezer Yudkowsky: https://en.wikipedia.org/wiki/Eliezer_Yudkowsky Connor Leahy: https://twitter.com/NPCollapse (we will release that interview soon) Gary Marcus: http://garymarcus.com/index.html Tim Scarfe is the innovation CTO of XRAI Glass: https://xrai.glass/ Gary clip filmed at AIUK https://ai-uk.turing.ac.uk/programme/ and our appreciation to them for giving us a press pass. Check out their conference next year! WIRED clip from Gary came from here: https://www.youtube.com/watch?v=Puo3VkPkNZ4 Refs: Statement from the listed authors of Stochastic Parrots on the “AI pause” letter, by Timnit Gebru, Emily M. Bender, Angelina McMillan-Major, and Margaret Mitchell: https://www.dair-institute.org/blog/letter-statement-March2023 Eliezer Yudkowsky on Lex: https://www.youtube.com/watch?v=AaTRHFaaPG8 Pause Giant AI Experiments: An Open Letter: https://futureoflife.org/open-letter/pause-giant-ai-experiments/ Pausing AI Developments Isn't Enough. We Need to Shut it All Down (Eliezer Yudkowsky): https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

This Week in Google (MP3)
TWiG 705: I Was Merely Fluffing - TikTok ban coming to the US? ChatGPT Windows 11 taskbar, Twitter job cuts

This Week in Google (MP3)

Play Episode Listen Later Mar 2, 2023 182:38


Our Growing TikTok Moral Panic Still Isn't Addressing The Actual Problem. U.S. House panel approves bill giving Biden power to ban TikTok. Apple Watch potential ban: What you need to know. All about Salt_Hank's upcoming cookbook. YouTube's new head talks 2023 priorities, including AI, podcasting, Shorts and more. Microsoft brings its new AI-powered Bing to the Windows 11 taskbar. OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit. Planning for AGI and beyond. You Are Not a Parrot And a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this. Flipboard joins the Fediverse with a Mastodon integration and community, plans for ActivityPub. Talking about the diminishing drive to share on social media. Jack Dorsey-backed Twitter alternative Bluesky hits the App Store as an invite-only app. YouTube video causes Pixel phones to instantly reboot. In latest round of Twitter cuts, some see hints of its next CEO. Elon Musk's defense of Scott Adams shows why he is misguided and dangerous. Elon Musk Reportedly Building 'Based AI' Because ChatGPT Is Too Woke. Mozilla leads Mastodon app Mammoth's pre-seed funding. Google Drops 9 New Android and Wear OS Features. Google Chrome's new zoom on mobile blows things up by up to 300 percent. Google Keep's new Android widget makes it easier to check off items on your to-do list. Android is adding support for eSIM transfer between devices. Chrome tweaked to improve memory use & battery life on MacBook. Waymo starts autonomous testing in LA with no human driver. Google's bringing Magic Eraser to all Google One subscribers — including iPhone users. Gmail's client-side encryption is now available to more businesses. Google rolls out fall detection on the Pixel Watch. Your Google Docs are about to look a little bit different. Picks: Jason - Artifact for Android. Jeff - Jeff with teenage look filter. Ant - New Affordable Amaran COB S-Series Lights. Ant - How Y'all Doing by Leslie Jordan. Hosts: Leo Laporte, Jeff Jarvis, Ant Pruitt, and Jason Howell. Download or subscribe to this show at https://twit.tv/shows/this-week-in-google. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit. Sponsor: fastmail.com/twit

The Radical AI Podcast
The Limitations of ChatGPT with Emily M. Bender and Casey Fiesler

The Radical AI Podcast

Play Episode Listen Later Mar 1, 2023 62:02


In this episode, we unpack: is ChatGPT ethical, and in what ways? We interview Dr. Emily M. Bender and Dr. Casey Fiesler about the limitations of ChatGPT, covering ethical considerations, bias and discrimination, and the importance of algorithmic literacy in the face of chatbots. Emily M. Bender is a Professor of Linguistics and an Adjunct Professor in the School of Computer Science and the Information School at the University of Washington, where she has been on the faculty since 2003. Her research interests include multilingual grammar engineering, computational semantics, and the societal impacts of language technology. Emily was also recently nominated as a Fellow of the American Association for the Advancement of Science (AAAS). Casey Fiesler is an associate professor in Information Science at the University of Colorado Boulder. She researches and teaches in the areas of technology ethics, internet law and policy, and online communities. Also a public scholar, she is a frequent commentator and speaker on topics of technology ethics and policy, and her research has been covered everywhere from The New York Times to Teen Vogue. Full show notes for this episode can be found at Radicalai.org.

On Tech Ethics with CITI Program
The Impact of ChatGPT on Academic Integrity - On Tech Ethics

On Tech Ethics with CITI Program

Play Episode Listen Later Feb 21, 2023 28:49


Discusses the impact of AI on academic integrity, with a focus on ChatGPT. Our guest is Chirag Shah, Ph.D. Chirag is a Professor of Information and Computer Science at the University of Washington. He is the Founding Director of InfoSeeking Lab and Founding Co-Director of the Center for Responsibility in AI Systems & Experiences (RAISE). He works on intelligent information access systems focusing on fairness and transparency. Additional resources: InfoSeeking Lab: https://infoseeking.org/ · RAISE: https://www.raise.uw.edu/ · "Situating Search" by Chirag Shah and Emily M. Bender: https://dl.acm.org/doi/fullHtml/10.1145/3498366.3505816 · For more information about CITI Program, please visit: https://about.citiprogram.org/

Teaching in Higher Ed
ChatGPT and Good Intentions in Higher Ed

Teaching in Higher Ed

Play Episode Listen Later Feb 9, 2023 43:06


Autumm Caines discusses ChatGPT and good intentions in higher ed on episode 452 of the Teaching in Higher Ed podcast. Quotes from the episode: "I am fascinated by the intersection between who we are and the environments we inhabit." -Autumm Caines · "The process of writing is thinking." -Autumm Caines · "We want our students to learn how to think through the act of writing." -Autumm Caines. Resources: Craft App's AI Assistant · About Is a Liminal Space · ChatGPT and Good Intentions in Higher Ed · In Defense of "Banning" ChatGPT · Prior to (or Instead of) Using ChatGPT with Your Students · On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?

Brain Inspired
BI 144 Emily M. Bender and Ev Fedorenko: Large Language Models

Brain Inspired

Play Episode Listen Later Aug 17, 2022 71:41


Check out my short video series about what's missing in AI and Neuroscience. Support the show to get full episodes and join the Discord community. Large language models, often now called "foundation models," are the models du jour in AI, based on the transformer architecture. In this episode, I bring together Evelina Fedorenko and Emily M. Bender to discuss how language models stack up to our own language processing and generation (models and brains both excel at next-word prediction), whether language evolved in humans for complex thoughts or for communication (communication, says Ev), whether language models grasp the meaning of the text they produce (Emily says no), and much more. Evelina Fedorenko is a cognitive scientist who runs the EvLab at MIT. She studies the neural basis of language. Her lab has amassed a large amount of data suggesting language did not evolve to help us think complex thoughts, as Noam Chomsky has argued, but rather for efficient communication. She has also recently been comparing the activity in language models to activity in our brain's language network, finding commonality in the ability to predict upcoming words. Emily M. Bender is a computational linguist at the University of Washington. Recently she has been considering questions about whether language models understand the meaning of the language they produce (no), whether we should be scaling language models as is the current practice (not really), how linguistics can inform language models, and more. Links: EvLab · Emily's website · Twitter: @ev_fedorenko; @emilymbender. Related papers: Language and thought are not the same thing: Evidence from neuroimaging and neurological patients (Fedorenko) · The neural architecture of language: Integrative modeling converges on predictive processing (Fedorenko) · On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (Bender) · Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data (Bender). Timestamps: 0:00 - Intro 4:35 - Language and cognition 15:38 - Grasping for meaning 21:32 - Are large language models producing language? 23:09 - Next-word prediction in brains and models 32:09 - Interface between language and thought 35:18 - Studying language in nonhuman animals 41:54 - Do we understand language enough? 45:51 - What do language models need? 51:45 - Are LLMs teaching us about language? 54:56 - Is meaning necessary, and does it matter how we learn language? 1:00:04 - Is our biology important for language? 1:04:59 - Future outlook
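The episode keeps returning to next-word prediction as the one task where language models and brains visibly overlap. For readers who want to see what that task looks like in practice, here is a minimal sketch of querying a causal language model for its next-token distribution. It uses GPT-2 via the Hugging Face transformers library purely as an illustration; the model choice, prompt, and top-5 cutoff are our own arbitrary assumptions, not anything taken from the episode.

```python
# A minimal sketch of next-word (next-token) prediction with a causal
# language model. Assumes the `transformers` and `torch` packages are
# installed; GPT-2 stands in here for any causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The linguist argued that language models do not"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits has shape (batch=1, sequence_length, vocab_size)
    logits = model(**inputs).logits

# Turn the logits at the final position into a probability
# distribution over the vocabulary for the *next* token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# Show the five most likely continuations.
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}: {prob.item():.3f}")
```

Note that this distribution over possible next tokens is all a model of this kind ever computes, which is exactly the form-versus-meaning distinction Bender presses in the episode.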

Here & Now
It's thyme for herb season; Don't worry about the robot revolution

Here & Now

Play Episode Listen Later Jun 27, 2022 41:40


Kathy Gunst shares three new herb-forward recipes ("herbaceous," as chefs might say), along with a guide to some of her favorite herbs. And, earlier this month, Google engineer Blake Lemoine claimed the company's artificial intelligence had achieved sentience. While Lemoine's claims made waves online, many experts are pretty skeptical. University of Washington professor Emily M. Bender joins us.

Gradient Dissent - A Machine Learning Podcast by W&B
Emily M. Bender — Language Models and Linguistics

Gradient Dissent - A Machine Learning Podcast by W&B

Play Episode Listen Later Sep 9, 2021 72:55


In this episode, Emily and Lukas dive into the problems with bigger and bigger language models, the difference between form and meaning, the limits of benchmarks, and why it's important to name the languages we study. Show notes (links to papers and transcript): http://wandb.me/gd-emily-m-bender --- Emily M. Bender is a Professor of Linguistics and Faculty Director of the Master's Program in Computational Linguistics at the University of Washington. Her research areas include multilingual grammar engineering, variation (within and across languages), the relationship between linguistics and computational linguistics, and societal issues in NLP. --- Timestamps: 0:00 Sneak peek, intro 1:03 Stochastic Parrots 9:57 The societal impact of big language models 16:49 How language models can be harmful 26:00 The important difference between linguistic form and meaning 34:40 The octopus thought experiment 42:11 Language acquisition and the future of language models 49:47 Why benchmarks are limited 54:38 Ways of complementing benchmarks 1:01:20 The #BenderRule 1:03:50 Language diversity and linguistics 1:12:49 Outro

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Today we're joined by Emily M. Bender, Professor at the University of Washington, and AI researcher Margaret Mitchell. Emily and Meg, as well as Timnit Gebru and Angelina McMillan-Major, are co-authors of the paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜

The Radical AI Podcast
The Power of Linguistics: Unpacking Natural Language Processing Ethics with Emily M. Bender

The Radical AI Podcast

Play Episode Listen Later Jul 1, 2020 60:24


What are the societal impacts and ethics of Natural Language Processing (or NLP)? How can language be a form of power? How can we effectively teach ethics in the NLP classroom? How can we promote healthy interdisciplinary collaboration in the development of NLP products? To answer these questions and more, we welcome Dr. Emily M. Bender to the show. Dr. Emily M. Bender researches linguistics, computational linguistics, and ethical issues in Natural Language Processing. Emily is currently a Professor in the Department of Linguistics and an Adjunct Professor in the Department of Computer Science and Engineering at the University of Washington. She is also the faculty director of the CLMS program and the director of the Computational Linguistics Laboratory. Full show notes for this episode can be found at Radicalai.org. If you enjoy this episode, please make sure to subscribe, submit a rating and review, and connect with us on Twitter at twitter.com/radicalaipod

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Is Linguistics Missing from NLP Research? w/ Emily M. Bender - #376

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later May 18, 2020 52:34


Today we’re joined by Emily M. Bender, Professor of Linguistics at the University of Washington.  Our discussion covers a lot of ground, but centers on the question, "Is Linguistics Missing from NLP Research?" We explore if we would be making more progress, on more solid foundations, if more linguists were involved in NLP research, or is the progress we're making (e.g. with deep learning models like Transformers) just fine? Later this afternoon (3pm PT) we’ll be hosting a viewing party with Emily over on our YouTube channel. Sam and Emily will be in the live chat answering your questions from the conversation. Register at twimlai.com/376viewing! Check out the complete show notes for this conversation at twimlai.com/talk/376.

NLP Highlights
106 - Ethical Considerations In NLP Research, with Emily Bender

NLP Highlights

Play Episode Listen Later Feb 17, 2020 39:18


In this episode, we talked to Emily Bender about the ethical considerations in developing NLP models and putting them in production. Emily cited specific examples of ethical issues, and talked about the kinds of potential concerns to keep in mind, both when releasing NLP models that will be used by real people, and also while conducting NLP research. We concluded by discussing a set of open-ended questions about designing tasks, collecting data, and publishing results, that Emily has put together towards addressing these concerns. Emily M. Bender is a Professor in the Department of Linguistics and an Adjunct Professor in the Department of Computer Science and Engineering at the University of Washington. She's active on Twitter at @emilymbender.

This Week In Voice
This Week In Voice, Episode 12

This Week In Voice

Play Episode Listen Later Sep 21, 2017 43:15


An all-star panel (Emily M. Bender, Karen Kaushansky, Jess Thornhill) discusses the latest in voice technology news, including Amazon's upcoming foray into #VoiceFirst Alexa-driven glasses, Voicebot.ai's Story Of The Week in which VoiceLabs reports voice apps are getting double the "retention" they saw nine months ago, Amazon's Fire HD 10 tablet as an "Echo Show in tablet form," Google's new Google Home Mini hitting the market on October 4, Roku's smart speaker offering heading to market, and a riveting discussion around whether developing Alexa skills is essentially working for Amazon for free. The Medium post referenced by Emily M. Bender in this episode, "Google Home vs. Alexa: Two Simple User Experience Design Gestures That Delighted A Female User," can be found here: https://medium.com/startup-grind/google-home-vs-alexa-56e26f69ac77 This Week In Voice is hosted by Bradley Metrock (CEO, Score Publishing) and is part of the VoiceFirst.FM podcast network.