Podcast appearances and mentions of Timnit Gebru

Computer scientist

  • 190 podcasts
  • 323 episodes
  • 54m avg. duration
  • Infrequent episodes
  • Latest: Mar 13, 2025
Timnit Gebru

Popularity (2017–2024)


Best podcasts about Timnit Gebru

Latest podcast episodes about Timnit Gebru

Tech Won't Save Us
AI Hype Enters Its Geopolitics Era w/ Timnit Gebru

Tech Won't Save Us

Mar 13, 2025 · 55:19


Paris Marx is joined by Timnit Gebru to discuss where the AI industry stands in 2025 as AI increasingly becomes a geopolitical football even as the big promises made by AI companies fail to materialize. Timnit Gebru is the founder and executive director of the Distributed AI Research Institute.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is made in partnership with The Nation. Production is by Eric Wickham.

Also mentioned in this episode:
  • Timnit wrote about the TESCREAL bundle of ideologies with Émile Torres.
  • The Data Workers' Inquiry allowed data workers to share their experiences in their workplaces.

Start Making Sense
AI Hype Enters Its Geopolitics Era w/ Timnit Gebru | Tech Won't Save Us

Start Making Sense

Mar 13, 2025 · 55:19


Paris Marx is joined by Timnit Gebru to discuss where the AI industry stands in 2025 as AI increasingly becomes a geopolitical football even as the big promises made by AI companies fail to materialize. Timnit Gebru is the founder and executive director of the Distributed AI Research Institute.

Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy

Big Tech
How Silicon Valley Monopolized Our Imagination

Big Tech

Dec 3, 2024 · 45:39


The past few months have seen a series of bold proclamations from the most powerful people in tech. In September, Mark Zuckerberg announced that Meta had developed “the most advanced glasses the world had ever seen.” That same day, OpenAI CEO Sam Altman predicted we could have artificial superintelligence within a couple of years. Elon Musk has said he'll land rockets on Mars by 2026. We appear to be living through the kinds of technological leaps we used to only dream about. But whose dreams were those, exactly? In her latest book, Imagination: A Manifesto, Ruha Benjamin argues that our collective imagination has been monopolized by the Zuckerbergs and Musks of the world. But, she says, it doesn't need to be that way.

Mentioned:
  • “Imagination: A Manifesto,” by Ruha Benjamin
  • Summer of Soul (...Or, When the Revolution Could Not Be Televised), directed by Questlove
  • “The Black Woman: An Anthology,” by Toni Cade Bambara
  • “The New Artificial Intelligentsia,” by Ruha Benjamin
  • “Race After Technology,” by Ruha Benjamin
  • Breonna's Garden, with Ju'Niyah Palmer
  • “Viral Justice,” by Ruha Benjamin
  • The Parable Series, by Octavia Butler

Further Reading:
  • “AI could make health care fairer—by helping us believe what patients say,” by Karen Hao
  • “How an Attempt at Correcting Bias in Tech Goes Wrong,” by Sidney Fussell
  • “Unmasking AI: My Mission to Protect What Is Human in a World of Machines,” by Joy Buolamwini
  • “The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence,” by Timnit Gebru and Émile P. Torres

Edge of the Web - An SEO Podcast for Today's Digital Marketer
723 | Concepts, Philosophies, and the Future of AI w/ Britney Muller

Edge of the Web - An SEO Podcast for Today's Digital Marketer

Nov 1, 2024 · 45:01


Britney Muller rejoins the EDGE to weigh in on the progress of AI and LLMs for use in marketing, from over-hyped announcements to Google's transformative papers, and on the ethical dilemmas in AI development. Discover how emergent properties of AI models are often misunderstood and why ethical integrity is paramount in technology communication. Plus, get an inside scoop on Google's AI controversies and the practical uses of AI technology in everyday tasks. Britney underscores the need for human oversight in AI projects to guarantee content quality. She mentions significant academic contributions and controversies in the AI field, including the firing of researchers like Margaret Mitchell and Dr. Timnit Gebru. Always a wealth of info, Muller is seeing the commitment to machine learning she made over five years ago confirmed. You can check out what she said back then in Episode 309 from 2019: How to Use Machine Learning for SEO w/ Britney Muller.

Key Segments:
  • [00:05:08] Introduction and Tribute to AI Pioneers
  • [00:10:00] AI Timeline from 2019 to 2024
  • [00:11:38] The Language of AI Releases
  • [00:14:31] EDGE of the Web Title Sponsor: Site Strategics
  • [00:16:00] Ethical Challenges and Controversies in AI
  • [00:21:16] The Importance of Human Oversight in AI
  • [00:30:36] The Human-Centered Approach to AI
  • [00:32:34] EDGE of The Web Sponsor: Wix Studio
  • [00:38:30] Guidance to SEO Professionals

Thanks to Our Sponsors!
  • Site Strategics: http://edgeofthewebradio.com/site
  • Wix: http://edgeofthewebradio.com/wixstudio

Follow Our Guest:
  • Twitter: https://x.com/britneymuller
  • LinkedIn: https://www.linkedin.com/in/britneymuller/
  • Instagram: https://www.instagram.com/britneymuller/

RNZ: Saturday Morning
Danyl McLauchlan: Silicon Valley's cult of tech utopianism

RNZ: Saturday Morning

Jul 5, 2024 · 8:39


Writer Danyl McLauchlan joins Susie to tackle life's big questions, ideas, and thinkers. This week he takes on TESCREAL, the acronym you may have seen pop up online over the past few months. Coined by the computer scientist Timnit Gebru and the philosopher Émile Torres, TESCREAL stands for transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism and longtermism. The pair propose that this acronym describes a suite of right-wing ideologies that are coming to dominate Silicon Valley.

The Nonlinear Library
EA - The "TESCREAL" Bungle by ozymandias

The Nonlinear Library

Jun 4, 2024 · 22:34


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The "TESCREAL" Bungle, published by ozymandias on June 4, 2024 on The Effective Altruism Forum.

A specter is haunting Silicon Valley - the specter of TESCREALism. "TESCREALism" is a term coined by philosopher Émile Torres and AI ethicist Timnit Gebru to refer to a loosely connected group of beliefs popular in Silicon Valley. The acronym unpacks to:
  • Transhumanism - the belief that we should develop and use "human enhancement" technologies that would give people everything from indefinitely long lives and new senses like echolocation to math skills that rival John von Neumann's.
  • Extropianism - the belief that we should settle outer space and create or become innumerable kinds of "posthuman" minds very different from present humanity.
  • Singularitarianism - the belief that humans are going to create a superhuman intelligence in the medium-term future.
  • Cosmism - a near-synonym to extropianism.
  • Rationalism - a community founded by AI researcher Eliezer Yudkowsky, which focuses on figuring out how to improve people's ability to make good decisions and come to true beliefs.
  • Effective altruism - a community focused on using reason and evidence to improve the world as much as possible.
  • Longtermism - the belief that one of the most important considerations in ethics is the effects of our actions on the long-term future.[1]

TESCREALism is a personal issue for Torres,[2] who used to be a longtermist philosopher before becoming convinced that the ideology was deeply harmful. But the concept is beginning to go mainstream, with endorsements in publications like Scientific American and the Financial Times. The concept of TESCREALism is at its best when it points out the philosophical underpinnings of many conversations occurring in Silicon Valley - principally about artificial intelligence but also about everything from gene-selection technologies to biosecurity.

Eliezer Yudkowsky and Marc Andreessen - two influential thinkers Torres and Gebru have identified as TESCREAList - don't agree on much. Eliezer Yudkowsky believes that with our current understanding of AI we're unable to program an artificial general intelligence that won't wipe out humanity; therefore, he argues, we should pause AI research indefinitely. Marc Andreessen believes that artificial intelligence will be the most beneficial invention in human history: People who push for delay have the blood of the starving people and sick children whom AI could have helped on their hands. But their very disagreement depends on a number of common assumptions: that human minds aren't special or unique, that the future is going to get very strange very quickly, that artificial intelligence is one of the most important technologies determining the trajectory of the future, that intelligences descended from humanity can and should spread across the stars.[3]

As an analogy, Republicans and Democrats don't seem to agree about much. But if you were explaining American politics to a medieval peasant, the peasant would notice a number of commonalities: that citizens should choose their political leaders through voting, that people have a right to criticize those in charge, that the same laws ought to apply to everyone. To explain what was going on, you'd call this "liberal democracy." Similarly, many people in Silicon Valley share a worldview that is unspoken and, all too often, invisible to them.

When you mostly talk to people who share your perspective, it's easy not to notice the controversial assumptions behind it. We learn about liberal democracy in school, but the philosophical underpinnings beneath some common debates in Silicon Valley can be unclear. It's easy to stumble across Andreessen's or Yudkowsky's writing without knowing anything about transhumanism. The TESCREALism concept can clarify what's going on for confused outsiders. How...

Ground Truths
Kate Crawford: A Leading Scholar and Conscience for A.I.

Ground Truths

May 12, 2024 · 51:06


“We haven't invested this much money into an infrastructure like this really until you go back to the pyramids”—Kate CrawfordTranscript with links to audio and external links. Ground Truths podcasts are on Apple and Spotify. The video interviews are on YouTube Eric Topol (00:06):Well, hello, this is Eric Topol with Ground Truths, and I'm really delighted today to welcome Kate Crawford, who we're very lucky to have as an Australian here in the United States. And she's multidimensional, as I've learned, not just a scholar of AI, all the dimensions of AI, but also an artist, a musician. We're going to get into all this today, so welcome Kate.Kate Crawford (00:31):Thank you so much, Eric. It's a pleasure to be here.Eric Topol (00:34):Well, I knew of your work coming out of the University of Southern California (USC) as a professor there and at Microsoft Research, and I'm only now learning about all these other things that you've been up to including being recognized in TIME 2023 as one of 100 most influential people in AI and it's really fascinating to see all the things that you've been doing. But I guess I'd start off with one of your recent publications in Nature. It was a world view, and it was about generative AI is guzzling water and energy. And in that you wrote about how these large AI systems, which are getting larger seemingly every day are needing as much energy as entire nations and the water consumption is rampant. So maybe we can just start off with that. You wrote a really compelling piece expressing concerns, and obviously this is not just the beginning of all the different aspects you've been tackling with AI.Exponential Growth, Exponential Concerns Kate Crawford (01:39):Well, we're in a really interesting moment. What I've done as a researcher in this space for a very long time now is really introduce a material analysis of artificial intelligence. So we are often told that AI is a very immaterial technology. It's algorithms in the cloud, it's objective mathematics, but in actual fact, it comes with an enormous material infrastructure. And this is something that I took five years to research for my last book, Atlas of AI. It meant going to the mines where lithium and cobalt are being extracted. It meant going into the Amazon fulfillment warehouses to see how humans collaborate with robotic and AI systems. And it also meant looking at the large-scale labs where training data is being gathered and then labeled by crowd workers. And for me, this really changed my thinking. It meant that going from being a professor for 15 years focusing on AI from a very traditional perspective where we write papers, we're sitting in our offices behind desks, that I really had to go and do these journeys, these field trips, to understand that full extractive infrastructure that is needed to run AI at a planetary scale.(02:58):So I've been keeping a very close eye on what would change with generative AI and what we've seen particularly in the last two years has been an extraordinary expansion of the three core elements that I really write about in Atlas, so the extraction of data of non-renewable resources, and of course hidden labor. So what we've seen, particularly on the resources side, is a gigantic spike both in terms of energy and water and that's often the story that we don't hear. We're not aware that when we're told about the fact that there gigantic hundred billion computers that are now being developed for the next stage of generative AI that has an enormous energy and water footprint. 
So I've been researching that along with many others who are now increasingly concerned about how we might think about AI more holistically.Eric Topol (03:52):Well, let's go back to your book, which is an extraordinary book, the AI Atlas and how you dissected not just the well power of politics and planetary costs, but that has won awards and it was a few years back, and I wonder so much has changed since then. I mean ChatGPT in late 2022 caught everybody off guard who wasn't into this knowing that this has been incubating for a number of years, and as you said, these base models are just extraordinary in every parameter you can think about, particularly the computing resource and consumption. So your concerns were of course registered then, have they gone to exponential growth now?Kate Crawford (04:45):I love the way you put that. I think you're right. I think my concerns have grown exponentially with the models. But I was like everybody else, even though I've been doing this for a long time and I had something of a heads up in terms of where we were moving with transformer models, I was also quite taken aback at the extraordinary uptake of ChatGPT back in November 2022 in fact, gosh, it still feels like yesterday it's been such an extraordinary timescale. But looking at that shift to a hundred million users in two months and then the sort of rapid competition that was emerging from the major tech companies that I think really took me by surprise, the degree to which everybody was jumping on the bandwagon, applying some form of large language model to everything and anything suddenly the hammer was being applied to every single nail.(05:42):And in all of that sound and fury and excitement, I think there will be some really useful applications of these tools. But I also think there's a risk that we apply it in spaces where it's really not well suited that we are not looking at the societal and political risks that come along with these approaches, particularly next token prediction as a way of generating knowledge. And then finally this bigger set of questions around what is it really costing the planet to build these infrastructures that are really gargantuan? I mean, as a species, we haven't invested this much money into an infrastructure like this really until you go back to the pyramids, you really got to go very far back to say that type of just gargantuan spending in terms of capital, in terms of labor, in terms of all of the things are required to really build these kinds of systems. So for me, that's the moment that we're in right now and perhaps here together in 2024, we can take a breath from that extraordinary 18 month period and hopefully be a little more reflective on what we're building and why and where will it be best used.Propagation of BiasesEric Topol (06:57):Yeah. Well, there's so many aspects of this that I'd like to get into with you. I mean, one of course, you're as a keen observer and activist in this whole space, you've made I think a very clear point about how our culture is mirrored in our AI that is our biases, and people are of course very quick to blame AI per se, but it seems like it's a bigger problem than just that. Maybe you could comment about, obviously biases are a profound concern about propagation of them, and where do you see where the problem is and how it can be attacked?Kate Crawford (07:43):Well, it is an enormous problem, and it has been for many years. I was first really interested in this question in the era that was known as the big data era. 
So we can think about the mid-2000s, and I really started studying large scale uses of data in scientific applications, but also in what you call social scientific settings using things like social media to detect and predict opinion, movement, the way that people were assessing key issues. And time and time again, I saw the same problem, which is that we have this tendency to assume that with scale comes greater accuracy without looking at the skews from the data sources. Where is that data coming from? What are the potential skews there? Is there a population that's overrepresented compared to others? And so, I began very early on looking at those questions. And then when we had very large-scale data sets start to emerge, like ImageNet, which was really perhaps the most influential dataset behind computer vision that was released in 2009, it was used widely, it was freely available.(09:00):That version was available for over a decade and no one had really looked inside it. And so, working with Trevor Paglen and others, we analyzed how people were being represented in this data set. And it was really quite extraordinary because initially people are labeled with terms that might seem relatively unsurprising, like this is a picture of a nurse, or this is a picture of a doctor, or this is a picture of a CEO. But then you look to see who is the archetypical CEO, and it's all pictures of white men, or if it's a basketball player, it's all pictures of black men. And then the labeling became more and more extreme, and there are terms like, this is an alcoholic, this is a corrupt politician, this is a kleptomaniac, this is a bad person. And then a whole series of labels that are simply not repeatable on your podcast.(09:54):So in finding this, we were absolutely horrified. And again, to know that so many AI models had trained on this as a way of doing visual recognition was so concerning because of course, very few people had even traced who was using this model. So trying to do the reverse engineering of where these really problematic assumptions were being built in hardcoded into how AI models see and interpret the world, that was a giant unknown and remains to this day quite problematic. We did a recent study that just came out a couple of months ago looking at one of the biggest data sets behind generative AI systems that are doing text to image generation. It's called LAION-5B, which stands for 5 billion. It has 5 billion images and text captions drawn from the internet. And you might think, as you said, this will just mirror societal biases, but it's actually far more weird than you might imagine.(10:55):It's not a representative sample even of the internet because particularly for these data sets that are now trying to use the ALT tags that are used around images, who uses ALT tags the most on the internet? Well, it's e-commerce sites and it's often stock image sites. So what you'll see and what we discovered in our study was that the vast majority of images and labels are coming from sites like Shopify and Pinterest, these kind of shopping aspirational collection sites. And that is a very specific way of seeing the world, so it's by no means even a perfect mirror. It's a skewed mirror in multiple ways. 
And that's something that we need to think of particularly when we turn to more targeted models that might be working in say healthcare or in education or even in criminal justice, where we see all sorts of problems emerge.Exploiting Humans for RLHFEric Topol (11:51):Well, that's really interesting. I wonder to extend that a bit about the human labor side of this. Base models are tweaked, fine-tuned, and one of the ways to do that, of course is getting people to weigh in. And this has been written about quite a bit about how the people that are doing this can be exploited, getting wages that are ridiculously weak. And I wonder if you could comment about that because in the ethics of AI, this seems to be one of the many things that a lot of people don't realize about reinforcement learning.Kate Crawford (12:39):Oh, I completely agree. It's quite an extraordinary story. And of course now we have a new category of crowd labor that's called reinforcement learning with human feedback or RLHF. And what was discovered by multiple investigations was that these laborers are in many cases paid less than $2 an hour in very exploitative conditions, looking at results that in many cases are really quite horrifying. They could be accounts of murder, suicide, trauma, this can be visual material, it can be text-based material. And again, the workers in these working for these companies, and again, it's often contract labor, it's not directly within a tech company, it's contracted out. It's very hidden, it's very hard to research and find. But these laborers have been experiencing trauma and are really now in many cases bringing lawsuits, but also trying to unionize and say, these are not acceptable conditions for people to be working under.(13:44):So in the case of OpenAI, it was found that it was Kenyan workers who were doing this work for just poverty wages, but it's really across the board. It's so common now that humans are doing the hard work behind the scenes to make these systems appear autonomous. And that's the real trap that we're being told that this is the artificial intelligence. But in actual fact, what Jeff Bezos calls Mechanical Turk is that it's artificial, artificial intelligence otherwise known as human beings. So that is a very significant layer in terms of how these systems work that is often unacknowledged. And clearly these workers in many cases are muzzled from speaking, they're not allowed to talk about what they do, they can't even tell their families. They're certainly prevented from collective action, which is why we've seen this push towards unionization. And finally, of course, they're not sharing in any of the profits that are being generated by these extraordinary new systems that are making a very small number of people, very wealthy indeed.Eric Topol (14:51):And do you know if that's improving or is it still just as bad as it has been reported? It's really deeply concerning to see human exploitation, and we all know well about sweatshops and all that, but here's another version, and it's really quite distressing.Kate Crawford (15:09):It really is. And in fact, there have been several people now working to create really almost like fair work guidelines. So Oxford has the sort of fair work initiative looking specifically at crowd work. They also have a rating system where they rate all of the major technology companies for how well they're treating their crowd laborers. 
And I have to say the numbers aren't looking good in the last 12 months, so I would love to see much more improvement there. We are also starting to see legislation be tabled specifically on this topic. In fact, Germany was one of the most recent to start to explore how they would create a strong legislative backing to make sure that there's fair labor conditions. Also, Chile was actually one of the first to legislate in this space, but you can imagine it's very difficult to do because it's a system that is operating under the radar through sort of multiple contracted chains. And even some of the people within tech companies will tell me it's really hard to know if they're working with a company that's doing this in the right way and paying people well. But frankly, I'd like to see far greater scrutiny otherwise, as you say, we're building on this system, which looks like AI sweatshops.Eric Topol (16:24):Yeah, no, I think people just have this illusion that these machines are doing everything by themselves, and that couldn't be further from the truth, especially when you're trying to take it to the next level. And there's only so much human content you can scrape from the internet, and obviously it needs additional input to take it to that more refined performance. Now, besides your writing and being much of a conscience for AI, you're also a builder. I mean, I first got to know some of your efforts through when you started the AI Now Institute. Maybe you can tell us a bit about that. Now you're onto the Knowing Machines Project and I don't know how many other projects you're working on, so maybe you can tell us about what it's like not just to be a keen observer, but also one to actually get initiatives going.Kate Crawford (17:22):Well, I think it's incredibly important that we start to build interdisciplinary coalitions of researchers, but sometimes even beyond the academic field, which is where I really initially trained in this space, and really thinking about how do we involve journalists, how do we involve filmmakers, how do we involve people who will look at these issues in really different ways and tell these stories more widely? Because clearly this really powerful shift that we're making as a society towards using AI in all sorts of domains is also a public issue. It's a democratic issue and it's an issue where we should all be able to really see into how these systems are working and have a say in how they'll be impacting our lives. 
So one of the things that I've done is really create research groups that are interdisciplinary, starting at Microsoft Research as one of the co-founders of FATE, a group that stands for fairness, accountability, transparency and ethics, and then the AI Now Institute, which was originally at NYU, and now with Knowing Machines, which is an international group, which I've been really delighted to build, rather than just purely focusing on those in the US because of course these systems are inherently transnational, they will be affecting global populations.(18:42):So we really need to think about how do you bring people from very different perspectives with different training to ask this question around how are these systems being built, who is benefiting and who might be harmed, and how can we address those issues now in order to actually prevent some of those harms and prevent the greatest risks that I see that are possible with this enormous turn to artificial intelligence everywhere?Eric Topol (19:07):Yeah, and it's interesting how you over the years are a key advisor, whether it's the White House, the UN or the European Parliament. And I'm curious about your experience because I didn't know much about the Paris ENS. Can you tell us about you were Visiting Chair, this is AI and Justice at the École Normale Supérieure (ENS), I don't know if I pronounce that right. My French is horrible, but this sounds like something really interesting.Kate Crawford (19:42):Well, it was really fascinating because this was the first time that ENS, which is really one of the top research institutions in Europe, had turned to this focus of how do we contend with artificial intelligence, not just as a technical question, but as a sort of a profound question of justice of society of ethics. And so, I was invited to be the first visiting chair, but tragically this corresponded with the start of the pandemic in 2020. And so, it ended up being a two-year virtual professorship, which is really a tragedy when you're thinking about spending time in Paris to be spending it on Zoom. It's not quite the same thing, but I had the great fortune of using that time to assemble a group of scholars around the world who were looking at these questions from very different disciplines. Some were historians of science, others were sociologists, some were philosophers, some were machine learners.(20:39):And really essentially assembled this group to think through some of the leading challenges in terms the potential social impacts and current social impacts of these systems. And so, we just recently published that through the academies of Science and Engineering, and it's been almost like a template for thinking about here are core domains that need more research. And interestingly, we're at that moment, I think now where we can say we have to look in a much more granular fashion beyond the hype cycles, beyond the sense of potential, the enormous potential upside that we're always hearing about to look at, okay, how do these systems actually work now? What kinds of questions can we bring into the research space so that we're really connecting the ideas that come traditionally from the social sciences and the humanistic disciplines into the world of machine learning and AI design. 
That's where I see the enormous upside that we can no longer stay in these very rigorously patrolled silos and to really use that interdisciplinary awareness to build systems differently and hopefully more sustainably as well.Is Working At Microsoft A Conflict?Eric Topol (21:55):Yeah, no, that's what I especially like about your work is that you're not a doomsday person or force. You're always just trying to make it better, but now that's what gets me to this really interesting question because you are a senior principal researcher at Microsoft and Microsoft might not like some of these things that you're advocating, how does that potential conflict work out?Kate Crawford (22:23):It's interesting. I mean, people often ask me, am I a technology optimist or a technology pessimist? And I always say I'm a technology realist, and we're looking at these systems being used. I think we are not benefited by discourses of AI doomerism nor by AI boosterism. We have to assess the real politic and the political economies into which these systems flow. So obviously part of the way that I've got to know what I know about how systems are designed and how they work at scale is through being at Microsoft Research where I'm working alongside extraordinary colleagues and all of whom come from, in many cases, professorial backgrounds who are deep experts in their fields. And we have this opportunity to work together and to look at these questions very early on in the kinds of production cycles and enormous shifts in the way that we use technology.(23:20):But it is interesting of course that at the moment Microsoft is absolutely at the leading edge of this change, and I've always thought that it's incredibly important for researchers and academics who are in industrial spaces to be able to speak freely, to be able to share what they see and to use that as a way that the industry can, well hopefully keep itself honest, but also share between what it knows and what everybody else knows because there's a giant risk in having those spaces be heavily demarcated and having researchers really be muzzled. I think that's where we see real problems emerge. Of course, one of the great concerns a couple of years ago was when Timnit Gebru and others were fired from Google for speaking openly about the concerns they had about the first-generation large language models. And my hope is that there's been a lesson through that really unfortunate set of decisions made at Google that we need people speaking from the inside about these questions in order to actually make these systems better, as you say, over the medium and long term.Eric Topol (24:26):Yeah, no, that brings me to thought of Peter Lee, who I'm sure because he wrote a book about GPT-4 and healthcare and was very candid about its potential, real benefits and the liabilities, and he's a very humble kind of guy. He's not one that has any bravado that I know of, so it speaks well to at least another colleague of yours there at Microsoft and their ability to see all the different sides here, not just what we'll talk about in a minute the arms race both across companies and countries. 
But before I get to that, there's this other part of you and I wonder if there's really two or three of you that is as a composer of music and art, I looked at your Anatomy of an AI System, I guess, which is on exhibit at the Museum of Modern Art (MoMA) in New York, and that in itself is amazing, but how do you get into all these other parts, are these hobbies or is this part of a main part of your creative work or where does it fit in?Kate Crawford (25:40):Eric, didn't I mention the cloning program that I participated in early and that there are many Kate's and it's fantastic we all work together. Yeah, that explains it. Look, it's interesting. Way back as a teenager, I was fascinated with technology. Of course, it was the early stages of the web at that moment, and I could see clearly that this was, the internet was going to completely change everything from my generation in terms of what we would do in terms of the way that we would experience the world. And as I was also at that time an electronic musician in bands, I was like, this was a really fantastic combination of bringing together creative practice with a set of much larger concerns and interests around at a systems level, how technology and society are co-constituted, how they evolve together and shape each other. And that's really been the map of how I've always worked across my life.(26:48):And it's interesting, I've always collaborated with artists and Vladan Joler who I worked with on anatomy of an AI system. We actually met at a conference on voice enabled AI systems, and it was really looking at the ethics of could it be possible to build an open source, publicly accessible version of say Alexa rather than purely a private model owned by a corporation, and could that be done in a more public open source way? And we asked a different question, we looked at each other and we're like, oh, I haven't met you yet, but I can see that there are some problems here. One of them is it's not just about the data and it's not just about the technical pipelines, it's about where the components come from. It's about the mining structures that needed to make all of these systems. It's about the entire end of life what happens when we throw these devices out from generally between three to four years of use and how they go into these giant e-waste tips.(27:51):And we basically started looking at this as an enormous sort of life and death of a single AI system, which for us started out by drawing these things on large pieces of butcher's paper, which just expanded and expanded until we had this enormous systems level analysis of what it takes just to ask Alexa what the weather is today. And in doing that, it taught me a couple of things. One that people really want to understand all of the things that go into making an AI system work. This piece has had a very long life. It's been in over a hundred museums around the world. It's traveled further than I have, but it's also very much about that broader political economy that AI systems aren't neutral, they don't just exist to serve us. 
They are often sort of fed into corporate structures that are using them to generate profits, and that means that they're used in very particular ways and that there are these externalities in terms of how they produced that linger in our environments that have really quite detrimental impacts on systems of labor and how people are recompensed and a whole range of relationships to how data is seen and used as though it's a natural resource that doesn't actually come from people's lives, that doesn't come with risks attached to it.(29:13):So that project was really quite profound for me. So we've continued to do these kinds of, I would call them research art projects, and we just released a new one called Calculating Empires, which looks at a 500 year history of technology and power looking specifically at how empires over time have used new technologies to centralize their power and expand and grow, which of course is part of what we're seeing at the moment in the empires of AI.Eric Topol (29:43):And what about the music side?Kate Crawford (29:45):Well, I have to say I've been a little bit slack on the music side. Things have been busy in AI Eric, I have to say it's kept me away from the music studio, but I always intend to get back there. Fortunately, I have a kid who's very musical and he's always luring me away from my desk and my research saying, let's write some music. And so, he'll keep me honest.Geopolitics and the Arms RacesEric Topol (30:06):Well, I think it's striking just because you have this blend of the humanities and you're so deep into trying to understand and improve our approaches in technology. And it seems like a very unusual, I don't know, too many techies that have these different dimensions, so that's impressive. Now let's get back to the arms race. You just were talking about tracing history over hundreds of years and empires, but right now we have a little problem. We have the big tech titans that are going after each other on a daily basis, and of course you know the group very well. And then you have China and the US that are vying to be the dominant force and problems with China accessing NVIDIA chips and Taiwan sitting there in a potentially very dangerous position, not just for Taiwan, but also for the US. And I wonder if you could just give us your sense about the tensions here. They're US based as well of course, because that's some of the major forces in companies, but then they're also globally. So we have a lot of stuff in the background that people don't like to think about, but it's actually happening right now.Kate Crawford (31:35):I think it's one of the most important things that we can focus on, in fact. I mean and again, this is why I think a materialist analysis of artificial intelligence is so important because not only does it force you to look at the raw components, where does the energy come from? Where does the water come from? But it means you're looking at where the chipsets come from. And you can see that in many cases there are these infrastructural choke points where we are highly dependent on specific components that sit within geopolitical flashpoints. And Taiwan is really the exemplar of this sort of choke point at the moment. And again, several companies are trying to address this by spinning up new factories to build these components, but this takes a lot of time and an enormous amount of resources yet again. 
So what we're seeing is I think a very difficult moment in the geopolitics of artificial intelligence.(32:31):What we've had certainly for the last decade has been almost a geopolitical duopoly. We've had the US and China not only having enormous power and influence in this space, but also goading each other into producing the most extreme forms of both data extractive and surveillance technologies. And unfortunately, this is just as true in the United States that I commonly hear this in rooms in DC where you'll hear advisors say, well, having any type of guardrails or ethical considerations for our AI systems is a problem if it means that China's going to do it anyway. And that creates this race to the bottom dynamic of do as much of whatever you can do regardless of the ethical and in some cases legal problems that will create. And I think that's been the dynamic that we've seen for some time. And of course the last 18 months to two years, we've seen that really extraordinary AI war happening internally in the United States where again, this race dynamic I think does create unfortunately this tendency to just go as fast as possible without thinking about potential downsides.(33:53):And I think we're seeing the legacy of that right now. And of course, a lot of the conversations from people designing these systems are now starting to say, look, being first is great, but we don't want to be in a situation as we saw recently with Google's Gemini where you have to pull an entire model off the shelves and you have to say, this is not ready. We actually have to remove it and start again. So this is the result I think of that high pressure, high speed dynamic that we've been seeing both inside the US but between the US and China. And of course, what that does to the rest of the world is create this kind of client states where we've got the EU trying to say, alright, well we'll export a regulatory model if we're not going to be treated as an equivalent player here. And then of course, so many other countries who are just seen as spaces to extract low paid labor or the mineralogical layer. So that is the big problem that I see is that that dynamic has only intensified in recent years.A.I. and MedicineEric Topol (34:54):Yeah, I know it's really another level of concern and it seems like it could be pretty volatile if for example, if the US China relations takes another dive and the tensions there go to levels that haven't been seen so far. I guess the other thing, there's so much that is I think controversial, unsettled in this space and so much excitement. I mean, just yesterday for example, was the first AI randomized trial to show that you could save lives. When I wrote that up, it was about the four other studies that showed how it wasn't working. Different studies of course, but there's so much excitement at the same time, there's deep concerns. You've been a master at articulating these deep concerns. What have we missed in our discussion today, I mean we've covered a lot of ground, but what do you see are other things that should be mentioned?Kate Crawford (36:04):Well, one of the things that I've loved in terms of following your work, Eric, is that you very carefully walk that line between allowing the excitement when we see really wonderful studies come out that say, look, there's great potential here, but also articulating concerns where you see them. 
So I think I'd love to hear, I mean take this opportunity to ask you a question and say what's exciting you about the way that this particularly new generation AI is being used in the medical context and what are the biggest concerns you have there?Eric Topol (36:35):Yeah, and it's interesting because the biggest advance so far in research and medicine was the study yesterday using deep learning without any transformer large language model effort. And that's where that multiplicative of opportunity or potential is still very iffy, it's wobbly. I mean, it needs much more refinement than where we are right now. It's exciting because it is multimodal and it brings in the ability to bring all the layers of a human being to understand our uniqueness and then do much better in terms of, I got a piece coming out soon in Science about medical forecasting and how we could really get to prevention of conditions that people are at high risk. I mean like for example today the US preventive task force said that all women age 40 should have mammograms, 40.Kate Crawford (37:30):I saw that.Eric Topol (37:30):Yeah, and this is just crazy Looney Tunes because here we have the potential to know pretty precisely who are those 12%, only 12% of women who would ever get breast cancer in their lifetime, and why should we put the other 88% through all this no less the fact that there are some women even younger than age 40 that have significantly high risk that are not picked up. But I do think eventually when we get these large language models to actualize their potential, we'll do really great forecasting and we'll be able to not just prevent or forestall cancer, Alzheimer's and so many things. It's quite exciting, but it's the earliest, we're not even at first base yet, but I think I can see our way to get there eventually. And it's interesting because the discussion I had previously with Geoffrey Hinton, and I wonder if you think this as well, that he sees the health medical space as the only really safe space. He thinks most everything else has got more concerns about the downsides is the sweet spot as he called it. But I know that's not particularly an area that you are into, but I wonder if you share that the excitement about your health could be improved in the future with AI.Kate Crawford (38:52):Well, I think it's a space of enormous potential, but again, enormous risk for the same reasons that we discussed earlier, which is we have to look at the training data and where it's coming from. Do we have truly representative sources of data? And this of course has been a consistent problem certainly for the last hundred years and longer. When we look at who are the medical patients whose data is being collected, are we seeing skews? And that has created all sorts of problems, particularly in the last 50 years in terms of misdiagnosing women, people of color, missing and not taking seriously the health complaints of people who are already seen as marginalized populations, thus then further skewing the data that is then used to train AI models. So this is something that we have to take very seriously, and I had the great fortune of being invited by Francis Collins to work with the NIH on their AI advisory board.(39:50):They produced a board to look just at these questions around how can this moment in AI be harnessed in such a way that we can think about the data layer, think about the quality of data and how we train models. 
And it was a really fascinating sort of year long discussion because in the room we had people who were just technologists who just wanted as much data as possible and just give us all that data and then we'll do something, but we'll figure it out later. Then there were people who had been part of the Human Genome Project and had worked with Francis on questions around the legal and ethical and social questions, which he had really centered in that project very early on. And they said, no, we have to learn these lessons. We have to learn that data comes from somewhere. It's not divorced of context, and we have to think about who's being represented there and also who's not being represented there because that will then be intensified in any model that we train on that data.Humans and Automation Bias(40:48):And then also thinking about what would happen in terms of if those models are only held by a few companies who can profit from them and not more publicly and widely shared. These were the sorts of conversations that I think at the absolute forefront in terms of how we're going to navigate this moment. But if we get that right, if we center those questions, then I think we have far greater potential here than we might imagine. But I'm also really cognizant of the fact that even if you have a perfect AI model, you are always going to have imperfect people applying it. And I'm sure you saw that same study that came out in JAMA back in December last year, which was looking at how AI bias, even slightly biased models can worsen human medical diagnosis. I don't know if you saw this study, but I thought it was really extraordinary.(41:38):It was sort of 450 doctors and physician's assistants and they were really being shown a handful of cases of patients with acute respiratory failure and they really needed come up with some sort of diagnosis and they were getting suggestions from an AI model. One model was trained very carefully with highly accurate data, and the other was a fairly shoddy, shall we say, AI model with quite biased data. And what was interesting is that the clinicians when they were working with very well-trained AI model, we're actually producing a better diagnosis across the board in terms of the cases they were looking at. I think their accuracy went up by almost 4.5 percentage points, but when they were working with the less accurate model, their capacity actually dropped well below their usual diagnostic baseline, something like almost 12 percentage points below their usual diagnostic quality. And so, this really makes me think of the kind of core problem that's been really studied for 40 years by social scientists, which is called automation bias, which is when even an expert, a technical system which is giving a recommendation, our tendency is to believe it and to discard our own knowledge, our own predictions, our own sense.(42:58):And it's been tested with fighter pilots, it's been tested with doctors, it's been tested with judges, and it's the same phenomenon across the board. So one of the things that we're going to need to do collectively, but particularly in the space of medicine and healthcare, is retaining that skepticism, retaining that ability to ask questions of where did this recommendation come from with this AI system and should I trust it? What was it trained on? Where did the data come from? What might those gaps be? 
Because we're going to need that skepticism if we're going to get through particularly this, as you say, this sort of early stage one period where in many cases these models just haven't had a lot of testing yet and people are going to tend to believe them out of the box.The Large Language Model Copyright IssueEric Topol (43:45):No, it's so true. And one of the key points is that almost every study that's been published in large language models in medicine are contrived. They're using patient actors or they're using case studies, but they're not in the real world. And that's where you have to really learn, as you know, that's a much more complex and messy world than the in silico world of course. Now, before wrapping up, one of the things that's controversial we didn't yet hit is the fact that in order for these base models to get trained, they basically ingest all human content. So they've ingested everything you've ever written, your books, your articles, my books, my articles, and you have the likes of the New York Times suing OpenAI, and soon it's going to run out of human content and just use synthetic content, I guess. But what's your sense about this? Do you feel that that's trespassing or is this another example of exploiting content and people, or is this really what has to be done in order to really make all this work?Kate Crawford (44:59):Well, isn't it a fascinating moment to see this mass grabbing of data, everything that is possibly extractable. I actually just recently published an article in Grey Room with the legal scholar, Jason Schultz, looking at how this is producing a crisis in copyright law because in many ways, copyright law just cannot contend with generative AI in particular because all of the ways in which copyright law and intellectual property more broadly has been understood, has been premised around human ideas of providing an incentive and thus a limited time monopoly based on really inspiring people to create more things. Well, this doesn't apply to algorithms, they don't respond to incentives in this way. The fact that, again, it's a longstanding tradition in copyright that we do not give copyright to non-human authors. So you might remember that there was a very famous monkey selfie case where a monkey had actually stepped on a camera and it had triggered a photograph of the monkey, and could this actually be a copyright image that could be given to the monkey?(46:12):Absolutely not, is what the court's decided. And the same has now happened, of course, for all generative AI systems. So right now, everything that you produce be that in GPT or in Midjourney or in Stable Diffusion, you name it, that does not have copyright protections. So we're in the biggest experiment of production after copyright in world history, and I don't think it's going to last very long. To be clear, I think we're going to start to see some real shifts, I think really in the next 6 to 12 months. But it has been this moment of seeing this gigantic gap in what our legal structures can do that they just haven't been able to contend with this moment. The same thing is true, I think, of ingestion, of this capturing of human content without consent. 
Clearly, many artists, many writers, many publishing houses like the New York Times are very concerned about this, but the difficulty that they're presented with is this idea of fair use, that you can collect large amounts of data if you are doing something with that, which is sufficiently transformative.(47:17):I'm really interested in the question of whether or not this does constitute sufficiently transformative uses. Certainly if you looked at the way that large language models a year ago, you could really prompt them into sharing their training data, spitting out entire New York Times articles or entire book chapters. That is no longer the case. All of the major companies building these systems have really safeguarded against that now but nonetheless, you have this question of should we be moving towards a system that is based on licensing, where we're really asking people if we can use their data and paying them a license fee? You can see how that could absolutely work and would address a lot of these concerns, but ultimately it will rely on this question of fair use. And I think with the current legal structures that we have in the current case law, that is unlikely to be seen as something that's actionable.(48:10):But I expect what we'll look at is what really happened in the early 20th century around the player piano, which was that I'm sure you remember this extraordinary technology of the player piano. That was one of the first systems that automated the playing of music and you'd have a piano that had a wax cylinder that almost like code had imprinted on a song or a piece of music, and it could be played in the public square or in a bar or in a saloon without having to pay a single artist and artists were terrified. They were furious, they were public hearings, there were sort of congressional hearings and even a Supreme Court case that decided that this was not a copyright infringement. This was a sufficiently transformative use of a piece of music that it could stand. And in the end, it was actually Congress that acted.(49:01):And we from that got the 1908 Copyright Act and from that we got this idea of royalties. And that has become the basis of the music industry itself for a very long time. And now we're facing another moment where I think we have a legislative challenge. How would you actually create a different paradigm for AI that would recognize a new licensing system that would reward artists, writers, musicians, all of the people whose work has been ingested into training data for AI so that they are recognized and in some ways, recompensed by this massive at scale extraction?Eric Topol (49:48):Wow, this has been an exhilarating conversation, Kate. I've learned so much from you over the years, but especially even just our chance to talk today. You articulate these problems so well, and I know you're working on solutions to almost everything, and you're so young, you could probably make a difference in the decades ahead. This is great, so I want to thank you not just for the chance to visit today, but all the work that you've been doing, you and your colleagues to make AI better, make it fulfill the great promise that it has. It is so extraordinary, and hopefully it'll deliver on some of the things that we have big unmet needs, so thanks to you. This has really been fun.Kate Crawford (50:35):This has been wonderful. And likewise, Eric, your work has just been a fantastic influence and I've been delighted to get to know you over the years and let's see what happens. 
It's going to be a wild ride from now to who knows when.

Eric Topol (50:48): No question, but you'll keep us straight, I know that. Thank you so much.

Kate Crawford (50:52): Thanks so much, Eric.

Effective Altruism Forum Podcast
[Linkpost] “Motivation gaps: Why so much EA criticism is hostile and lazy” by titotal

Effective Altruism Forum Podcast

Play Episode Listen Later Apr 23, 2024 34:52


Disclaimer: While I criticize several EA critics in this article, I am myself on the EA-skeptical side of things (especially on AI risk). Introduction. I am a proud critic of effective altruism, and in particular a critic of AI existential risk, but I have to admit that a lot of the criticism of EA is hostile, or lazy, and is extremely unlikely to convince a believer. Take this recent Leif Wenar TIME article as an example. I liked a few of the object-level critiques, but many of the points were twisted, and the overall point was hopelessly muddled (are they trying to say that voluntourism is the solution here?). As people have noted, the piece was needlessly hostile to EA (and incredibly hostile to Will MacAskill in particular). And he's far from the only prominent hater. Émile Torres views EA as a threat to humanity. Timnit Gebru sees [...] ---Outline:(02:21) No door to door atheists(04:51) What went wrong here?(08:40) Motivation gaps in AI x-risk(10:59) EA gap analysis(15:12) Counter-motivations(25:49) You can't rely on ingroup criticism(29:10) How to respond to motivation gaps--- First published: April 22nd, 2024 Source: https://forum.effectivealtruism.org/posts/CfBNdStftKGc863o6/motivation-gaps-why-so-much-ea-criticism-is-hostile-and-lazy Linkpost URL:https://titotal.substack.com/p/motivation-gaps-why-so-much-ea-criticism --- Narrated by TYPE III AUDIO.

The Nonlinear Library
EA - Motivation gaps: Why so much EA criticism is hostile and lazy by titotal

The Nonlinear Library

Play Episode Listen Later Apr 22, 2024 30:08


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Motivation gaps: Why so much EA criticism is hostile and lazy, published by titotal on April 22, 2024 on The Effective Altruism Forum. Disclaimer: While I criticize several EA critics in this article, I am myself on the EA-skeptical side of things (especially on AI risk). Introduction I am a proud critic of effective altruism, and in particular a critic of AI existential risk, but I have to admit that a lot of the criticism of EA is hostile, or lazy, and is extremely unlikely to convince a believer. Take this recent Leif Wenar TIME article as an example. I liked a few of the object-level critiques, but many of the points were twisted, and the overall point was hopelessly muddled (are they trying to say that voluntourism is the solution here?). As people have noted, the piece was needlessly hostile to EA (and incredibly hostile to Will MacAskill in particular). And he's far from the only prominent hater. Émile Torres views EA as a threat to humanity. Timnit Gebru sees the whole AI safety field as racist nutjobs. In response, @JWS asked the question: why do EA critics hate EA so much? Are all EA haters just irrational culture warriors? There are a few answers to this. Good writing is hard regardless of the subject matter. More inflammatory rhetoric gets more clicks, shares and discussion. EA figures have been involved in bad things (like SBF's fraud), so nasty words in response are only to be expected. I think there's a more interesting explanation though, and it has to do with motivations. I think the average EA-critical person doesn't hate EA, although they might dislike it. But it takes a lot of time and effort to write an article and have it published in TIME magazine. If Leif Wenar didn't hate EA, he wouldn't have bothered to write the article. In this article, I'm going to explore the concept of motivation gaps, mainly using the example of AI x-risk, because the gaps are particularly stark there. I'm going to argue that for certain causes, the critiques being hostile or lazy is the natural state of affairs, whether or not the issue is actually correct, and that you can't use the unadjusted quality of each side's critiques to judge an issue's correctness. No door to door atheists Disclaimer: These next sections contain an analogy between logical reasoning about religious beliefs and logical reasoning about existential risk. It is not an attempt to smear EA as a religion, nor is it an attack on religion. Imagine a man, we'll call him Dave, who, for whatever reason, has never once thought about the question of whether God exists. One day he gets a knock on his door, and encounters two polite, well dressed and friendly gentlemen who say they are spreading the word about the existence of God and the Christian religion. They tell him that a singular God exists, and that his instructions for how to live life are contained within the Holy Bible. They have glossy brochures, well-prepared arguments and evidence, and represent a large organisation with a significant following and social backing by many respected members of society. He looks at their website and finds that, wow, a huge number of people believe this, there is a huge field called theology explaining why God exists, and some of the smartest people in history have believed it as well. Dave is impressed, but resolves to be skeptical.
He takes their information and informs them that while he finds them convincing, he wants to hear the other side of the story as well. He tells them that he'll wait for the atheist door-to-door knockers to come and make their case, so he can decide for himself. Dave waits for many months, but to his frustration, no atheists turn up. Another point for the Christians. He doesn't give up though, and looks online, and finds the largest atheist forum he can find, r/atheism. Dave is shoc...

Mystery AI Hype Theater 3000
Episode 30: Marc's Miserable Manifesto, April 1 2024

Mystery AI Hype Theater 3000

Play Episode Listen Later Apr 19, 2024 60:45 Transcription Available


Dr. Timnit Gebru guest-hosts with Alex in a deep dive into Marc Andreessen's 2023 manifesto, which argues, loftily, in favor of maximizing the use of 'AI' in all possible spheres of life. Timnit Gebru is the founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR). Prior to that she was fired by Google, where she was serving as co-lead of the Ethical AI research team, in December 2020 for raising issues of discrimination in the workplace. Timnit also co-founded Black in AI, a nonprofit that works to increase the presence, inclusion, visibility and health of Black people in the field of AI, and is on the board of AddisCoder, a nonprofit dedicated to teaching algorithms and computer programming to Ethiopian high school students, free of charge.
References:
Marc Andreessen: "The Techno-Optimist Manifesto"
First Monday: The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence (Timnit Gebru & Émile Torres)
Business Insider: Explaining 'Pronatalism' in Silicon Valley
Fresh AI Hell:
CBS New York: NYC subway testing out weapons detection technology, Mayor Adams says.
The Markup: NYC's AI chatbot tells businesses to break the law
Read Emily's Twitter / Mastodon thread about this chatbot.
The Guardian: DrugGPT: New AI tool could help doctors prescribe medicine in England
The Guardian: Wearable AI: Will it put our smartphones out of fashion?
TheCurricula.com
You can check out future livestreams at https://twitch.tv/DAIR_Institute. Follow us!
Emily Twitter: https://twitter.com/EmilyMBender Mastodon: https://dair-community.social/@EmilyMBender Bluesky: https://bsky.app/profile/emilymbender.bsky.social
Alex Twitter: https://twitter.com/@alexhanna Mastodon: https://dair-community.social/@alex Bluesky: https://bsky.app/profile/alexhanna.bsky.social
Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.

Voices of VR Podcast – Designing for Virtual Reality
#1353: “Our Next Reality” Book Debates Future of XR + AI, and Speculations of Superintelligence Promises & Perils

Voices of VR Podcast – Designing for Virtual Reality

Play Episode Listen Later Mar 3, 2024 110:00


The book Our Next Reality: How the AI-powered Metaverse Will Reshape the World is structured as a debate between Alvin Wang Graylin and Louis Rosenberg, who each have over 30 years of experience in XR and AI. Graylin embodies the eternal optimist and leans towards techno-utopian views, while Rosenberg voices the more skeptical perspectives, leaning more towards cautious optimism and acknowledging the privacy hazards, control and alignment risks, as well as the ethical and moral dilemmas. The book is strongest when it speaks about the near-term implications of how AI will impact XR in specific contexts, but starts to go off the rails for me when they start exploring the more distant-future implications of Artificial Superintelligence at the economic and political scales of society. At the same time, both sides acknowledge the positive and negative potential futures, and that neither path is necessarily guaranteed, as it will be up to the tech companies, governments, and broader society which path of the future we go down. What I really appreciated about the book is that both Graylin and Rosenberg reference many personal examples and anecdotes around the intersection of XR and AI throughout each of their three decades of experience working with emerging technologies. Even though the book is structured as a debate, they both agree on some fundamental premises: that the Metaverse is inevitable (or rather spatial computing, XR, or mixed reality), and that AI has been and will continue to be a critical catalyst for its growth and evolution. They both also wholeheartedly agree that it is a matter of time before we achieve either an Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI), but they differ on the implications of these technologies. Graylin believes that ASI has the potential to lead humanity into a post-labor, post-scarcity, techno-utopian future in which all of humanity has willingly given up all cultural, political, and economic control over to our ASI overlords, who become these perfectly rationally-driven philosopher kings, yet still see humans as their ancestors via an uncharacteristically anthropomorphized emotional connection with compassionate affinity. Rosenberg dismisses as wishful thinking the idea that humans would be able to exert any control over ASI, or that ASI would be anything other than cold-hearted, calculating, ruthless, and unpredictably alien. Rosenberg also cautions that humanity could be headed towards cultural stagnation if the production of all art, media, music, and creative endeavors is ceded over to ASI, and that unaligned and self-directed ASI could be more dangerous than nuclear weapons. Graylin acknowledges the duality of possible futures within the context of this interview, but also tends to be biased towards the more optimistic future within the actual book. There is also a specific undercurrent of ideas and philosophies about AI that are woven throughout Graylin's and Rosenberg's book. Philosopher and historian Dr. Émile P. Torres has coined the acronym "TESCREAL" in collaboration with AI Ethicist Dr. Timnit Gebru that stands for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism and Longtermism.
Torres wrote an article in Truthdig elaborating on how this interconnected bundle of TESCREAL ideologies underpins many of the debates about ASI and AGI (with links included in the original quote): At the heart of TESCREALism is a "techno-utopian" vision of the future. It anticipates a time when advanced technologies enable humanity to accomplish things like: producing radical abundance, reengineering ourselves, becoming immortal, colonizing the universe and creating a sprawling "post-human" civilization among the stars full of trillions and trillions of people. The most straightforward way to realize this utopia is by building superintelligent AGI.

Reimagining the Internet
Rewind: Timnit Gebru Looks at Corporate AI and Sees a Lot of Bad Science

Reimagining the Internet

Play Episode Listen Later Feb 21, 2024 40:02


Rerun of episode 85 from August 16, 2023. Timnit Gebru is not just a pioneering critic of dangerous AI datasets who calls bullshit on bad science pushed by the likes of OpenAI, or a tireless champion of racial, gender, and climate justice in computing. She's also someone who wants to build something different. This week […]

The Good Robot IS ON STRIKE!
Emily M. Bender and Alex Hanna on Why You Shouldn't Believe the AI Hype

The Good Robot IS ON STRIKE!

Play Episode Listen Later Feb 6, 2024 29:18


In this episode, we talk to Emily M. Bender and Alex Hanna, AI ethics legends and now co-hosts of Mystery AI Hype Theatre 3000, a new podcast where they dispel the hype storm around AI. Emily is a professor of linguistics at the University of Washington and a co-author of the stochastic parrots paper that you may have heard of, because two very important people on the Google AI ethics team, Timnit Gebru and Meg Mitchell, allegedly got fired over it. And Alex Hanna is the director of research at the Distributed AI Research Institute, known by its acronym DAIR, which is now run by Timnit. In this episode, they argue that we should stop using the term AI altogether, and that the world might be better without text-to-image systems like DALL·E and Midjourney. They tell us how the AI hype agents are getting high on their own supply, and give some advice for young people going into tech careers.

Tech Won't Save Us
AI Hype Distracted Us From Real Problems w/ Timnit Gebru

Tech Won't Save Us

Play Episode Listen Later Jan 18, 2024 61:26


Paris Marx is joined by Timnit Gebru to discuss the past year in AI hype, how AI companies have shaped regulation, and tech's relationship to Israel's military campaign in Gaza. Timnit Gebru is the founder and executive director of the Distributed AI Research Institute. Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon.  The podcast is produced by Eric Wickham. Transcripts are by Brigitte Pawliw-Fry.  Also mentioned in this episode:Billy Perrigo reported on OpenAI lobbying to water down the EU's AI Act.Nitasha Tiku wrote about the push to train students in a particular idea of AI.Politico has been doing a lot of reporting on the influences on AI policy in the US and UK.OpenAI made a submission in the UK to try to get permission to train on copyrighted material.Arab workers in the tech industry fear the consequences of speaking out for Palestinian rights.972 Magazine reported on Israel's use of AI to increase its targets in Gaza.Jack Poulson chronicles the growing ties between military and tech.Timnit mentioned No Tech for Apartheid, Antony Loewenstein's The Palestine Laboratory, and Malcolm Harris' Palo Alto.Support the show

Start Making Sense
A.I. Hype Distracted Us From Real Problems, w/ Timnit Gebru | Tech Won't Save Us

Start Making Sense

Play Episode Listen Later Jan 18, 2024 61:26


On this episode of Tech Won't Save Us, Paris Marx is joined by Timnit Gebru to discuss the past year in AI hype, how AI companies have shaped regulation, and tech's relationship to Israel's military campaign in Gaza. Timnit Gebru is the founder and executive director of the Distributed AI Research Institute.Advertising Inquiries: https://redcircle.com/brandsPrivacy & Opt-Out: https://redcircle.com/privacy

Haymarket Books Live
Palo Alto: The Grit Beneath Tech's Glitter

Haymarket Books Live

Play Episode Listen Later Dec 22, 2023 88:09


Join us for a conversation on the seedy underside of Tech's past, present, and future. This event took place on May 30, 2023. If the industry's most credulous boosters are to be taken at their word, the contemporary tech industry is an economic freight train driven by big-brained disrupters who are charting a path toward a future of mutual prosperity, boundless leisure, and unfettered innovation. But in recent years some of the luster has come off of Tech's carefully crafted reputation—thanks to stories of self-combusting cars, high-profile fraud convictions, and other headline-grabbing fiascos. Just how much bluff and bluster, not to mention skeletons, lie buried beneath Silicon Valley's idyllic hills? And what does a future without cheap credit and greatly diminished credibility mean for the tech industry? For this event, Malcolm Harris, author of Palo Alto: A History of California, Capitalism, and the World, will be in conversation with Timnit Gebru, founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR). Timnit Gebru is the founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR). Prior to that she was fired by Google in December 2020 for raising issues of discrimination in the workplace, where she was serving as co-lead of the Ethical AI research team. She received her PhD from Stanford University, and did a postdoc at Microsoft Research, New York City in the FATE (Fairness Accountability Transparency and Ethics in AI) group, where she studied algorithmic bias and the ethical implications underlying projects aiming to gain insights from data. Timnit also co-founded Black in AI, a nonprofit that works to increase the presence, inclusion, visibility and health of Black people in the field of AI, and is on the board of AddisCoder, a nonprofit dedicated to teaching algorithms and computer programming to Ethiopian high school students, free of charge. Malcolm Harris is a freelance writer and the author of Palo Alto: A History of California, Capitalism, and the World, Kids These Days: The Making of Millennials, and Shit is Fucked Up and Bullshit: History Since the End of History. He was born in Santa Cruz, CA and graduated from the University of Maryland. Watch the live event recording: https://youtube.com/live/ayLtwiP0uoo?feature=share Buy books from Haymarket: www.haymarketbooks.org Follow us on Soundcloud: soundcloud.com/haymarketbooks

The Nonlinear Library
EA - Effective Aspersions: How the Nonlinear Investigation Went Wrong by TracingWoodgrains

The Nonlinear Library

Play Episode Listen Later Dec 19, 2023 51:20


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Aspersions: How the Nonlinear Investigation Went Wrong, published by TracingWoodgrains on December 19, 2023 on The Effective Altruism Forum. The New York Times Picture a scene: the New York Times is releasing an article on Effective Altruism (EA) with an express goal to dig up every piece of negative information they can find. They contact Émile Torres, David Gerard, and Timnit Gebru, collect evidence about Sam Bankman-Fried, the OpenAI board blowup, and Pasek's Doom, start calling Astral Codex Ten (ACX) readers to ask them about rumors they'd heard about affinity between Effective Altruists, neoreactionaries, and something called TESCREAL. They spend hundreds of hours over six months on interviews and evidence collection, paying Émile and Timnit for their time and effort. The phrase "HBD" is muttered, but it's nobody's birthday. A few days before publication, they present key claims to the Centre for Effective Altruism (CEA), who furiously tell them that many of the claims are provably false and ask for a brief delay to demonstrate the falsehood of those claims, though their principles compel them to avoid threatening any form of legal action. The Times unconditionally refuses, claiming it must meet a hard deadline. The day before publication, Scott Alexander gets his hands on a copy of the article and informs the Times that it's full of provable falsehoods. They correct one of his claims, but tell him it's too late to fix another. The final article comes out. It states openly that it's not aiming to be a balanced view, but to provide a deep dive into the worst of EA so people can judge for themselves. It contains lurid and alarming claims about Effective Altruists, paired with a section of responses based on its conversation with EA that it says provides a view of the EA perspective that CEA agreed was a good summary. In the end, it warns people that EA is a destructive movement likely to chew up and spit out young people hoping to do good. In the comments, the overwhelming majority of readers thank it for providing such thorough journalism. Readers broadly agree that waiting to review CEA's further claims was clearly unnecessary. David Gerard pops in to provide more harrowing stories. Scott gets a polite but skeptical hearing out as he shares his story of what happened, and one enterprising EA shares hard evidence of one error in the article to a mixed and mostly hostile audience. A few weeks later, the article writer pens a triumphant follow-up about how well the whole process went and offers to do similar work for a high price in the future. This is not an essay about the New York Times. The rationalist and EA communities tend to feel a certain way about the New York Times. Adamantly a certain way. Emphatically a certain way, even. I can't say my sentiment is terribly different - in fact, even when I have positive things to say about the New York Times, Scott has a way of saying them more elegantly, as in The Media Very Rarely Lies. That essay segues neatly into my next statement, one I never imagined I would make: You are very very lucky the New York Times does not cover you the way you cover you. A Word of Introduction Since this is my first post here, I owe you a brief introduction. 
I am a friendly critic of EA who would join you were it not for my irreconcilable differences in fundamental values, and who thinks you are, by and large, one of the most pleasant and well-meaning groups of people in the world. I spend much more time in the ACX sphere or around its more esoteric descendants and know more than anyone ought about its history and occasional drama. Some of you know me from my adversarial collaboration in Scott's contest some years ago, others from my misadventures in "speedrunning" college, still others from my exhaustively detailed deep dives in...

Effective Altruism Forum Podcast
“Effective Aspersions: How the Nonlinear Investigation Went Wrong” by TracingWoodgrains

Effective Altruism Forum Podcast

Play Episode Listen Later Dec 19, 2023 54:17


The New York Times Picture a scene: the New York Times is releasing an article on Effective Altruism (EA) with an express goal to dig up every piece of negative information they can find. They contact Émile Torres, David Gerard, and Timnit Gebru, collect evidence about Sam Bankman-Fried, the OpenAI board blowup, and Pasek's Doom, start calling Astral Codex Ten (ACX) readers to ask them about rumors they'd heard about affinity between Effective Altruists, neoreactionaries, and something called TESCREAL. They spend hundreds of hours over six months on interviews and evidence collection, paying Émile and Timnit for their time and effort. The phrase "HBD" is muttered, but it's nobody's birthday. A few days before publication, they present key claims to the Centre for Effective Altruism (CEA), who furiously tell them that many of the claims are provably false and ask for a brief delay to demonstrate the falsehood of [...] ---Outline:(00:06) The New York Times(03:08) A Word of Introduction(07:35) The Story So Far: A Recap(11:08) Avoidable, Unambiguous Falsehoods in Sharing Information About Nonlinear(21:32) These Issues Were Known and Knowable By Lightcone and the Community. The EA/LW Community Dismissed Them(27:03) Better processes are both possible and necessary(38:44) On Lawsuits(47:15) First Principles, Duty, and Harm(50:43) What of Nonlinear?The original text contained 16 footnotes which were omitted from this narration. --- First published: December 19th, 2023 Source: https://forum.effectivealtruism.org/posts/bwtpBFQXKaGxuic6Q/effective-aspersions-how-the-nonlinear-investigation-went --- Narrated by TYPE III AUDIO.

Tech Won't Save Us
How Effective Accelerationism Divides Silicon Valley w/ Émile Torres

Tech Won't Save Us

Play Episode Listen Later Dec 14, 2023 74:17


Paris Marx is joined by Émile Torres to discuss Silicon Valley's recent obsession with effective accelerationism, how it builds on the TESCREAL ideologies, and why it shows the divide at the top of the AI industry. Émile Torres is a postdoctoral fellow at Case Western Reserve University. They're also the author of Human Extinction: A History of the Science and Ethics of Annihilation. Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is produced by Eric Wickham. Transcripts are by Brigitte Pawliw-Fry. Also mentioned in this episode: Émile wrote about the TESCREAL ideologies and AI extinction scaremongering. Timnit Gebru also did a great presentation on TESCREAL. Paris wrote about the religious nature of Marc Andreessen's techno-solutionist manifesto and about Sam Altman's (temporary) ouster from OpenAI. The Year In Tech livestream for Patreon supporters is on December 17 at 1pm PT / 4pm ET / 9pm GMT. More details on Patreon or Twitter. The Information did a great profile on effective accelerationism. Forbes revealed the man behind the e/acc moniker, Beff Jezos. 972 Magazine reported on Israel's use of AI to expand targets in Gaza. UK plans a "hit squad" to replace public servants with AI. Paris wrote about the threat it poses. Support the show

FT Tech Tonic
Superintelligent AI: Transhumanism etc.

FT Tech Tonic

Play Episode Listen Later Dec 5, 2023 25:59


What are the ideas driving the pursuit of human-level AI? In the penultimate episode of this Tech Tonic series, hosts Madhumita Murgia and John Thornhill look at some of the futuristic objectives that are at the centre of the AI industry's quest for superintelligence and hear about the Extropians, a surprisingly influential group of futurists from the early 1990s. Anders Sandberg, senior research fellow at Oxford University's Future of Humanity Institute, sets out some of the ideas developed in the Extropians mailing list, while Connor Leahy, co-founder of Conjecture, and Timnit Gebru, founder of the Distributed AI Research Institute (DAIR), explain why they worry about the Extropians' continued influence today.
Free links:
OpenAI and the rift at the heart of Silicon Valley
We need to examine the beliefs of today's tech luminaries
OpenAI's secrecy imperils public trust
Big tech companies cut AI ethics staff, raising safety concerns
Tech Tonic is presented by Madhumita Murgia and John Thornhill. Senior producer is Edwin Lane and the producer is Josh Gabert-Doyon. Executive producer is Manuela Saragosa. Sound design by Breen Turner and Samantha Giovinco. Original music by Metaphor Music. The FT's head of audio is Cheryl Brumley. Clips: Alcor Cryonics. Read a transcript of this episode on FT.com Hosted on Acast. See acast.com/privacy for more information.

FUTURE FOSSILS

Subscribe and review at Apple Podcasts and/or Spotify. Or wherever! This week on the show I speak with physicist Geoffrey West (SFI) and evolutionary biologist Manfred Laubichler (ASU, SFI) about the transformations that our geosphere, biosphere, technosphere, and noosphere are undergoing as the "extended phenotype" of human innovation runs rampant across the surface of Planet Earth. These two distinguished scientists are some of the most profound thinkers I've ever encountered, helping midwife a new understanding of what it means to be human and a planetary citizen. I have wanted Geoffrey West on Future Fossils since well before I even started working for SFI in 2018, so this episode is the consummation of a years-long journey and I cannot be more excited to share it with you! It feels a little like crossing the streams in Ghostbusters, but we live in an increasingly-intertwingled world, so let's make the best of it! I wouldn't be where I am today without these two fine minds and their important work. Enjoy…
"The consequences of the Anthropocene are the product of innovations, and yet somehow we think the way out is through EVEN MORE innovation. This is a predicament…Innovation has to be looked at critically. One of the interesting things in the history of life is the OPPRESSION of innovation." – Manfred Laubichler
✨ Support Future Fossils & Feed My Kids:
• Become a patron on Substack, Patreon, and/or Bandcamp for MANY extras, including a members-only FB Group and private channels on our Discord Server
• Donate directly: @futurefossils on Venmo • $manfredmacx on CashApp • @michaelgarfield on PayPal
• Browse my art and buy original paintings and prints (or commission new work)
• Buy (NEARLY) all of the books we mention on the show at the Future Fossils Bookshop.org page
• Show music: "Sonnet A" from my 2008 Double-Edged Sword EP (Bandcamp, Spotify)
• Follow my music and awesome, eclectic playlists on Spotify
✨ Special thanks to my friends at Noonautics.org & Gregory Landua of The Regen Foundation for supporting both the show and pioneering research to make the world a better place!
✨ Your Anthropocene & Technosphere Syllabus:
More Is Different: Broken symmetry and the nature of the hierarchical structure of science (Phil Anderson)
Population growth, climate change create an 'Anthropocene engine' that's changing the planet (Manfred Laubichler)
Scale and information-processing thresholds in Holocene social evolution (Jaeweon Shin et al.)
Policies may influence large-scale behavioral tipping (Karine Nyborg et al.)
Teaching the Anthropocene from a Global Perspective (2014!) (Manfred Laubichler & Jürgen Renn)
More from them:
Seminar: Co-Evolutionary Perspectives on the Technosphere
Anthropocene Campus | Technosphere / Co-Evolution, presented by Jürgen Renn and Manfred Laubichler
The Growth and Differentiation of Metabolism: Extended Evolutionary Dynamics in the Technosphere
SFI Community Event - Panel discussion on the Past, Present, and Future of the Anthropocene (Sander van der Leeuw, D.A. Wallach, & Geoffrey West, moderated by Manfred Laubichler)
Welcome to the Future: Four Pivotal Trends You Should Be Aware Of (Ed William on the work of Dror Poleg)
The Future is Fungi: The Rise and Rhizomes of Mushroom Culture (Jeff VanderMeer, Kaitlin Smith, & Merlin Sheldrake, moderated by Corey Pressman)
Does the Ecology of Somatic Tissue Normally Constrain the Evolution of Cancer? (John Pepper at SFI)
The Acronym Behind Our Wildest AI Dreams and Nightmares (Re: TESCREAL, coined by Timnit Gebru & Émile Torres)
Complexity Literacy for a Sustainable Digital Transition: Cases and Arguments From Transdisciplinary Education Programs (Gerald Steiner)
Relevant episodes from my past life as the host of SFI's Complexity Podcast:
Olivia Judson on Major Energy Transitions in Evolutionary History
Melanie Moses on Metabolic Scaling in Biology & Computation
Chris Kempes on The Physical Constraints on Life & Evolution
The Future of the Human Climate Niche with Tim Kohler & Marten Scheffer
Scaling Laws & Social Networks in The Time of COVID-19 with Geoffrey West (Part 1)
Geoffrey West on Scaling, Open-Ended Growth, and Accelerating Crisis/Innovation Cycles: Transcendence or Collapse? (Part 2)
Reflections on COVID-19 with David Krakauer & Geoffrey West
Michael Garfield & David Krakauer on Evolution, Information, and Jurassic Park
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit michaelgarfield.substack.com/subscribe

Queerly Recommended
Nice! (QR 069)

Queerly Recommended

Play Episode Listen Later Oct 3, 2023 75:00


This week, Tara and Kris start by celebrating the conclusion of the WGA strike and sending support for SAG-AFTRA in their continued quest for an equitable deal. Looking ahead, they wonder about the future of AI. Tara doesn't think it's as bleak as it sounds. Kris disagrees. Luckily we have opinions on survival reality shows to bring us together. After getting deep into Alone Australia and Survivor, we hear more about RuPaul's Drag Race UK from Tara and then dive headlong into recommendations! Official Recommendations From Kris: Gaga: Five Foot Two (Netflix) This week, Kris recommends the 2017 Netflix documentary, Gaga: Five Foot Two, which covers a year in the life of Lady Gaga as she works on her fifth studio album, Joanne. It's a deeply vulnerable piece according to Kris, examining long-standing pain conditions that the artist suffers from. It also details the 51st Super Bowl half-time show where Gaga gave her critically acclaimed performance. From Tara: Bingo Love by Tee Franklin (Author), Jenn St-Onge (Artist) After comic creator Lawrence Lindell talked about it in our bonus episode with him, Tara picked up one of the books he suggested: Bingo Love. Spoiler: she fell in love with it. Bingo Love is a second-chance romance between two Black women that starts in the 1960s when they meet and become instant best friends at their church's Sunday bingo. Separated once their relationship turns romantic and is discovered, they go on to marry men, have families, and then meet once more at another Sunday bingo when they're in their 60s. It's a beautiful story about love, second chances, and honestly how many stories like this include Black women? We love to see it. Check it out. Works/People Discussed If Books Could Kill (Podcast) "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" By Emily M Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell Alone Australia (SBS) Survivor (CBS) https://www.tiktok.com/@208sportscards RuPaul's Drag Race UK (BBC) The Big Nailed It Baking Challenge (Netflix) Mimosa by Archie Bongiovanni Black Mirror, Series Three, Episode Four "San Junipero" (Channel Four/Netflix) Support & follow the show Buy us a Ko-fi Sign up for our newsletter Twitter: @queerlyrec Facebook: @QueerlyRecommended Instagram: @queerlyrecommended Tumblr: @queerlyrecommended TikTok: @queerlyrecommended Get all our links on Linktr.ee Support local animal shelters by joining Kris's Patreon

Midday
A.I. & Race: Will future technology include racial equity?

Midday

Play Episode Listen Later Sep 18, 2023 48:32


Today, a conversation about equity and AI. Tom's guest is Dr. Timnit Gebru, a computer scientist and tech expert who is working to reduce the negative effects of artificial intelligence. In 2021, Dr. Gebru founded the Distributed AI Research Institute (DAIR), a non-profit she currently serves as executive director. Before that, she was employed at Google, where she co-led the Ethical Artificial Intelligence research team. She was fired in 2020 — although the company insists she resigned — after a dispute with Google about its suppression of some of her research, and her criticism of the company's diversity, equity and inclusion (DEI) policies. She had previously done ground-breaking work at Apple and Microsoft.Dr. Gebru is also the co-founder of Black in AI, a nonprofit whose stated mission is "to increase the presence, inclusion, visibility and health of Black people in the field of AI."Dr. Timnit Gebru joined us on Zoom from San Francisco.Email us at midday@wypr.org, tweet us: @MiddayWYPR, or call us at 410-662-8780.

Technically Optimistic
Timnit Gebru is asking different questions about AI

Technically Optimistic

Play Episode Listen Later Sep 15, 2023 42:08


Timnit Gebru is a co-author of one of the most influential research papers on AI from this decade, which coined the term “stochastic parrots” to describe large language models. Following her very public departure from Google in 2020, Gebru founded the Distributed AI Research (DAIR) Institute, an organization that describes itself as doing independent, community-rooted work, free from the pervasive influence of Big Tech. She's now DAIR's executive director. And recently, she was selected as one of TIME's 100 Most Influential People in AI — like several other guests you hear from in season one of this show. Gebru sat down with host Raffi Krikorian for a wide-ranging and deep conversation about AI, touching on things like the obfuscation around its capabilities, what Big Tech hopes we don't pay attention to, and the importance of imagining alternative possible futures. To learn more about Technically Optimistic: emersoncollective.com/technicallyoptimistic For more on Emerson Collective: emersoncollective.com Learn more about our host, Raffi Krikorian: emersoncollective.com/persons/raffi-krikorian Technically Optimistic is produced by Emerson Collective with music by Mattie Safer.  Email us with questions and feedback at technicallyoptimistic@emersoncollective.com. Subscribe to Emerson Collective's newsletter: emersoncollective.com To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices

Best of the Left - Leftist Perspectives on Progressive Politics, News, Culture, Economics and Democracy
#1578 A.I. is a big tech airplane with a 10% chance of crashing, should society fly it? ChatGPT was only the starter pistol but it has shown the sloppiness of these companies and the need to regulate artificial intelligence for the public good.

Best of the Left - Leftist Perspectives on Progressive Politics, News, Culture, Economics and Democracy

Play Episode Listen Later Aug 20, 2023 64:53


Air Date 8/20/2023 Big tech is currently scrambling to bring untested A.I. products to market, over-promising, under-delivering, and working hard to obscure and ignore any possible downsides for society. Big tech needs A.I. regulation now before we all suffer the easily foreseeable consequences as well as some unforeseeable ones. Be part of the show! Leave us a message or text at 202-999-3991 or email Jay@BestOfTheLeft.com Transcript BestOfTheLeft.com/Support (Members Get Bonus Clips and Shows + No Ads!) Join our Discord community! SHOW NOTES Ch. 1: A.I. is B.S. - Adam Conover - Air Date 3-31-23 The real risk of A.I. isn't that some super-intelligent computer is going to take over in the future - it's that the humans in the tech industry are going to screw the rest of us over right now. Ch. 2: Center for Humane Technology Co-Founders Tristan Harris and Aza Raskin discuss The AI Dilemma Part 1 - Summit - Air Date 6-15-23 What does it look like to align technology with humanity's best interests? Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society Ch. 3: Tech's Mask Off Moment - What Next: TBD | Tech, power, and the future - Air Date 8-13-23 When conservative writer Richard Hanania's old posts, originally published under a pseudonym, came to light, people were shocked at just how racist and reactionary they were. Perhaps less shocking were the tech moguls who were revealed to be supporting him Ch. 4: Pregnant Woman's False Arrest in Detroit Shows "Racism Gets Embedded" in Facial Recognition Technology - Democracy Now! - Air Date 8-7-23 A shocking story of wrongful arrest in Detroit has renewed scrutiny of how facial recognition software is being deployed by police departments, despite major flaws in the technology. Ch. 5: Princeton University's Ruha Benjamin on Bias in Data and A.I. - The Data Chief - Air Date - 2-3-21 Joining Cindi today is Ruha Benjamin, a professor of African American Studies at Princeton University and the founding director of the Ida B. Wells Just Data Lab. She has studied the social dimensions of science, technology, and medicine for over 15 years Ch. 6: AI ethics leader Timnit Gebru is changing it up after Google fired her - Science Friction - Air Date 4-17-22 Timnit Gebru was fired by Google in a cloud of controversy; now she's making waves beyond Big Tech's pervasive influence Ch. 7: Center for Humane Technology Co-Founders Tristan Harris and Aza Raskin discuss The AI Dilemma Part 2 - Summit - Air Date 6-15-23 Ch. 8: Can We Govern AI? - Your Undivided Attention - Air Date 4-21-23 Our guest Marietje Schaake was at the forefront of crafting tech regulations for the EU. In spite of AI's complexity, she argues there is a path forward for the U.S. and other governing bodies to rein in companies that continue to release these products MEMBERS-ONLY BONUS CLIP(S) Ch. 9: Buddhism in the Age of AI - Soryu Forall - Monastic Academy - Air Date 6-21-23 FINAL COMMENTS Ch. 10: Final comments on the difference between Microsoft's marketing and the realities of capitalism MUSIC (Blue Dot Sessions) Produced by Jay! Tomlinson Visit us at BestOfTheLeft.com Listen Anywhere! BestOfTheLeft.com/Listen Listen Anywhere! Follow at Twitter.com/BestOfTheLeft Like at Facebook.com/BestOfTheLeft Contact me directly at Jay@BestOfTheLeft.com

Reimagining the Internet
85. Timnit Gebru Looks at Corporate AI and Sees a Lot of Bad Science

Reimagining the Internet

Play Episode Listen Later Aug 16, 2023 40:02


Timnit Gebru is not just a pioneering critic of dangerous AI datasets who calls bullshit on bad science pushed by the likes of OpenAI, or a tireless champion of racial, gender, and climate justice in computing. She's also someone who wants to build something different. This week on Reimagining, we talk to the thrilling, funny…

The Nonlinear Library
LW - The "public debate" about AI is confusing for the general public and for policymakers because it is a three-sided debate by Adam David Long

The Nonlinear Library

Play Episode Listen Later Aug 1, 2023 6:09


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The "public debate" about AI is confusing for the general public and for policymakers because it is a three-sided debate, published by Adam David Long on August 1, 2023 on LessWrong. Summary of Argument: The public debate among AI experts is confusing because there are, to a first approximation, three sides, not two sides to the debate. I refer to this as a three-sided framework, and I argue that using this three-sided framework will help clarify the debate (more precisely, debates) for the general public and for policy-makers. Broadly speaking, under my proposed three-sided framework, the positions fall into three broad clusters: AI "pragmatists" or realists are most worried about AI and power. Examples of experts who are (roughly) in this cluster would be Melanie Mitchell, Timnit Gebru, Kate Crawford, Gary Marcus, Klon Kitchen, and Michael Lind. For experts in this group, the biggest concern is how the use of AI by powerful humans will harm the rest of us. In the case of Gebru and Crawford, the "powerful humans" that they are most concerned about are large tech companies. In the case of Kitchen and Lind, the "powerful humans" that they are most concerned about are foreign enemies of the U.S., notably China. AI "doomers" or extreme pessimists are most worried about AI causing the end of the world. @Eliezer Yudkowsky is, of course, the most well-known to readers of LessWrong but other well-known examples include Nick Bostrom, Max Tegmark, and Stuart Russell. I believe these arguments are already well-known to readers of LessWrong, so I won't repeat them here. AI "boosters" or extreme optimists are most worried that we are going to miss out on AI saving the world. Examples of experts in this cluster would be Marc Andreessen, Yann LeCun, Reid Hoffman, Palmer Luckey, Emad Mostaque. They believe that AI can, to use Andreessen's recent phrase, "save the world," and their biggest worry is that moral panic and overregulation will create huge obstacles to innovation. These three positions are such that, on almost every important issue, one of the positions is opposed to a coalition of the other two of the positions AI Doomers + AI Realists agree that AI poses serious risks and that the AI Boosters are harming society by downplaying these risks AI Realists + AI Boosters agree that existential risk should not be a big worry right now, and that AI Doomers are harming society by focusing the discussion on existential risk AI Boosters and AI Doomers agree that AI is progressing extremely quickly, that something like AGI is a real possibility in the next few years, and that AI Realists are harming society by refusing to acknowledge this possibility Why This Matters. The "AI Debate" is now very much in the public consciousness (in large part, IMHO, due to the release of ChatGPT), but also very confusing to the general public in a way that other controversial issues, e.g. abortion or gun control or immigration, are not. I argue that the difference between the AI Debate and those other issues is that those issues are, essentially two-sided debates. 
That's not completely true, there are nuances, but, in the public's mind at their essence, they come down to two sides. To a naive observer, the present AI debate is confusing, I argue, because various experts seem to be talking past each other, and the "expert positions" do not coalesce into the familiar structure of a two-sided debate with most experts on one side or the other. When there are three sides to a debate, then one fairly frequently sees what look like "temporary alliances" where A and C are arguing against B. They are not temporary alliances. They are based on principles and deeply held beliefs. It's just that, depending on how you frame the question, you wind up with "strange bedfellows" as two groups find common ground on on...

The Institute of Black Imagination.
Summer School - Timnit Gebru: Asylum From A.I.

The Institute of Black Imagination.

Play Episode Listen Later Jul 30, 2023 93:00


Today's Summer School episode from the IBI Archive is episode 39 with Timnit Gebru, an artificial intelligence researcher. Timnit advocates for fair and just use of the technology we use every day. A former employee of Google, Timnit consistently calls in and calls out a Big Tech industry that leverages power, capital, and bias in favor of, well, themselves and their wallets. From language to surveillance, Timnit knows the potential harms of artificial intelligence know no bounds. In a time when we're at war, today's episode calls into question for whom we are fighting, whose wars are worthy of discussion, and what harms are so deeply ingrained within our consciousness that we ignore our own civilian casualties. As the world witnesses the 16th month of a war in Ethiopia, Timnit's journey reminds us of the refugee, the warrior, and the heroes we often dismiss and determine unworthy of home. This conversation was recorded on Jan 27, 2022.
Learn more about this topic: Ruha Benjamin; Simone Browne (Dark Matters: On the Surveillance of Blackness); Coded Bias; Tawana Petty; support regulations to safeguard; Mar Hicks wrote an op-ed for Wired (tech historian).
Who to follow: Algorithmic Justice League, Data & Society, Data for Black Lives, AI Now, DAIR.
Other things we mention: contentauthenticity.org, the fairness doctrine, Fairness doctrine Washington Post article.
Visit us on IBI Digital at blackimagination.com. Watch other episodes on YouTube at The Institute of Black Imagination. Connect with us on Instagram at @blackimagination. Host: Dario Calmese

The Circuit with Emily Chang
Whistleblowers Warn About the Dark Side of AI

The Circuit with Emily Chang

Play Episode Listen Later Jul 28, 2023 56:36 Transcription Available


On this episode of The Circuit, Bloomberg's Emily Chang is joined by three women who have raised the alarm on big tech's harms: Ifeoma Ozoma, Timnit Gebru, and Safiya Noble. They discuss their experiences speaking out and the risks of what they see as tech's newest emerging threat: artificial intelligence. See omnystudio.com/listener for privacy information.

Mystery AI Hype Theater 3000
Episode 6: Stochastic Parrot Galactica, November 23, 2022

Mystery AI Hype Theater 3000

Play Episode Listen Later Jul 17, 2023 63:12 Transcription Available


Emily and Alex discuss MetaAI's bullshit science paper generator, Galactica, along with its defenders. Plus, where could AI actually help scientific research? And more Fresh AI Hell. Watch the video of this episode on PeerTube. References:Imre Lakatos on research programsShah, Chirag and Emily M. Bender. 2022. Situating Search. Proceedings of the 2022 ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR '22). UW RAISE (Responsibility in AI Systems and Experiences)Stochastic Parrots:Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?

Debout les copains !
Des femmes savantes !

Debout les copains !

Play Episode Listen Later Jul 16, 2023 72:32


Historiquement Vôtre brings together three women of science: Mileva Einstein, the scientist who, alas, remained in the shadow of her husband, a certain Albert, whom she helped with his work and with his theory of relativity too. Then Angélique du Coudray, the first midwife in history, who invented a pedagogical obstetric mannequin for teaching the profession. And a scholar who is still with us, a researcher in artificial intelligence: Timnit Gebru.

Rien ne s'oppose à midi - Matthieu Noël

Historiquement Vôtre brings together three women of science: Mileva Einstein, the scientist who, alas, remained in the shadow of her husband, a certain Albert, whom she helped with his work and with his theory of relativity too. Then Angélique du Coudray, the first midwife in history, who invented a pedagogical obstetric mannequin for teaching the profession. And a scholar who is still with us, a researcher in artificial intelligence: Timnit Gebru.

The Tech Jawn
Happy Juneteenth: The Tech Jawn 87

The Tech Jawn

Play Episode Listen Later Jun 20, 2023 54:19


Timnit Gebru, co-founder of Black in AI, advocate for diversity in technology, and founder of DAIR, believes that we should have some type of government oversight for artificial intelligence. The European Parliament, a main legislative branch of the European Union, passed a draft law known as the A.I. Act, which would put new restrictions on what are seen as the technology's riskiest uses. AIs trained on other AI output will start producing junk within a few generations, according to researchers, so maybe the solution to the AI apocalypse is waiting it out. IKEA Uses AI to Transform Call Center Employees Into Interior Design Advisors. Link to Show Notes Hosted on Acast. See acast.com/privacy for more information.

Dave Troy Presents
Understanding TESCREAL with Dr. Timnit Gebru and Émile Torres

Dave Troy Presents

Play Episode Listen Later Jun 14, 2023 95:05


Everyone's talking about AI, how it will change the world, and even suggesting it might end humanity as we know it. Dave is joined by Dr. Timnit Gebru and Émile Torres, two prominent critics of AI doomerism, to cut through the noise, and look at where these ideas really came from, and offer suggestions on how we might look at these problems differently. And they also offer a picture of the darker side of these ideas and how they connect to Eugenics and other ideologies historically. Together Émile and Timnit coined an acronym called TESCREAL, which stands for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism — and yeah, that's a lot of -isms. But it ties into other topics that we have covered in this series, including Russian Cosmism and Longtermism. Dr. Gebru came to prominence in 2020 after she was fired from Google for speaking up about the company's lack of ethical guardrails in its AI development work. Émile Torres studies existential risk and has been a critic of the "longtermist" movement for several years. Dave last spoke with them last year in Season 1, Episode 23. Here are some relevant articles from Timnit and Émile. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (Bender, Gebru, McMillan-Major, Schmitchell) https://dl.acm.org/doi/10.1145/3442188.3445922 The Acronym Behind Our Wildest AI Dreams and Nightmares https://www.truthdig.com/articles/the-acronym-behind-our-wildest-ai-dreams-and-nightmares/ Longtermism and Eugenics: A Primer, by Émile Torres https://www.truthdig.com/articles/longtermism-and-eugenics-a-primer/ The Wide Angle: Understanding TESCREAL — the Weird Ideologies Behind Silicon Valley's Rightward Turn, by Dave Troy https://washingtonspectator.org/understanding-tescreal-silicon-valleys-rightward-turn/ Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I. (New York Times; Cade Metz, Daisuke Wakabayashi) https://www.nytimes.com/2020/12/03/technology/google-researcher-timnit-gebru.html Keywords: existential risk, artificial intelligence, TESCREAL, Yudkowsky, Sam Altman, Elon Musk, Peter Diamandis, Ray Kurzweil, Timnit Gebru, Émile Torres, Gary Marcus, OpenAI, Google, doomerism.

This Machine Kills
*Unlocked* – The Tescreal Theology

This Machine Kills

Play Episode Listen Later Jun 1, 2023 79:34


We dig into the technological theology that has been taking hold in Silicon Valley and its orbital communities – in various ways, different degrees, and over time. The components of this theology are nicely compiled into an acronym coined by Timnit Gebru and Émile Torres: Tescreal – transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, longtermism. Stuff we reference: ••• We need to examine the beliefs of today's tech luminaries https://www.ft.com/content/edc30352-05fb-4fd8-a503-20b50ce014ab ••• Technology and Its Discontents: On the Verge of the Posthuman https://www.jstor.org/stable/40068384 ••• Why We are Called Hammer and Hope https://hammerandhope.org/article/issue-1-article-4 Subscribe to hear more analysis and commentary in our premium episodes every week! https://www.patreon.com/thismachinekills Hosted by Jathan Sadowski (www.twitter.com/jathansadowski) and Edward Ongweso Jr. (www.twitter.com/bigblackjacobin). Production / Music by Jereme Brown (www.twitter.com/braunestahl)

This Machine Kills
Patreon Preview – 256. The Tescreal Theology

This Machine Kills

Play Episode Listen Later May 19, 2023 8:28


We dig into the technological theology that has been taking hold in Silicon Valley and its orbital communities – in various ways, different degrees, and over time. The components of this theology are nicely compiled into an acronym coined by Timnit Gebru and Émile Torres: Tescreal – transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, longtermism. Stuff we reference: ••• We need to examine the beliefs of today's tech luminaries https://www.ft.com/content/edc30352-05fb-4fd8-a503-20b50ce014ab ••• Technology and Its Discontents: On the Verge of the Posthuman https://www.jstor.org/stable/40068384 ••• Why We are Called Hammer and Hope https://hammerandhope.org/article/issue-1-article-4 Subscribe to hear more analysis and commentary in our premium episodes every week! https://www.patreon.com/thismachinekills Hosted by Jathan Sadowski (www.twitter.com/jathansadowski) and Edward Ongweso Jr. (www.twitter.com/bigblackjacobin). Production / Music by Jereme Brown (www.twitter.com/braunestahl)

For The Worldbuilders
013. Surprise! They're Late: Black Feminism and The Whitewashing of AI Ethics

For The Worldbuilders

Play Episode Listen Later May 19, 2023 29:16


Timnit Gebru was fired and Geoffrey Hinton left. Read Timnit's story here and read Geoffrey's story here, then reflect on the contrast. I also invite you to read the Seeda School newsletter referenced in this podcast, Surprise! They're Late: No surprise, black feminism is right on time. Due to the commitment to care rooted in the research of Dr. Ruha Benjamin, Dr. Safiya Noble, Timnit Gebru, and many, many others, powerful white men like Eric get to perform "thought leadership" in a cultural and media landscape where concern around AI and algorithmic bias is normalized thanks to the work black feminists cared enough to share while simultaneously being attacked. I'd love to hear your thoughts! Email me at info@seedaschool.com.

Tech Won't Save Us
Pronatalism and Silicon Valley's Right-Wing Turn w/ Julia Black

Tech Won't Save Us

Play Episode Listen Later May 18, 2023 61:07


Paris Marx is joined by Julia Black to discuss tech billionaires' embrace of pronatalism and how it's part of a broader rationalist project to remake society and protect their privileged positions. Julia Black is  a senior correspondent at Insider and previously worked at Esquire and Vox. Follow Julia on Twitter at @mjnblack.Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, and support the show on Patreon.The podcast is produced by Eric Wickham and part of the Harbinger Media Network.Also mentioned in this episode:Julia wrote about pronatalism, Lex Fridman, and Sam Altman.Paris wrote about eugenics in Silicon Valley.Marc Andreessen wrote “It's Time to Build” in April 2020.Timnit Gebru gave a presentation on the TESCREAL bundle of ideologies. Émile Torres made a thread on Twitter about it.Support the show

Hipsters Ponto Tech
Ajuda entre IAs, limitação de tokens, intimidade artificial, e mais – Hipsters: Fora de Controle #03

Hipsters Ponto Tech

Play Episode Listen Later Apr 28, 2023 59:36


Hipsters: Fora de Controle is Alura's new podcast bringing news about applied Artificial Intelligence and this whole new world we are just beginning to crawl into, which you will be able to explore with us! Today's topics: AIs helping one another, token limits, artificial intimacy, and much more! Come see who joins us for this conversation!

Factually! with Adam Conover
A.I. and Stochastic Parrots with Emily Bender and Timnit Gebru

Factually! with Adam Conover

Play Episode Listen Later Apr 26, 2023 76:11


So-called “artificial intelligence” is one of the most divisive topics of the year, with even those who understand it in total disagreement about its potential impacts. This week, A.I. researchers and authors of the famous paper “On the Dangers of Stochastic Parrots”, Emily Bender and Timnit Gebru, join Adam to discuss what everyone gets wrong about A.I. Learn more about your ad choices. Visit megaphone.fm/adchoices

Factually! with Adam Conover
A.I. and Stochastic Parrots with Emily Bender and Timnit Gebru

Factually! with Adam Conover

Play Episode Listen Later Apr 26, 2023 69:56


So-called “artificial intelligence” is one of the most divisive topics of the year, with even those who understand it in total disagreement about its potential impacts. This week, A.I. researchers and authors of the famous paper “On the Dangers of Stochastic Parrots”, Emily Bender and Timnit Gebru, join Adam to discuss what everyone gets wrong about A.I. Learn more about your ad choices. Visit megaphone.fm/adchoices See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Midday
AI & Race: How do we build racial equity into our smart machines?

Midday

Play Episode Listen Later Mar 28, 2023 48:39


Today, a conversation about equity and AI. Tom's guest is Dr. Timnit Gebru, a computer scientist and tech expert who is working to reduce the negative effects of artificial intelligence. In 2021, Dr. Gebru founded the Distributed AI Research Institute (DAIR), a non-profit where she currently serves as executive director. Before that, she was employed at Google, where she co-led the Ethical Artificial Intelligence research team. She was fired in 2020 — although the company insists she resigned — after a dispute with Google about its suppression of some of her research, and her criticism of the company's diversity, equity and inclusion (DEI) policies. She had previously done ground-breaking work at Apple and Microsoft. Dr. Gebru is also the co-founder of Black in AI, a nonprofit whose stated mission is "to increase the presence, inclusion, visibility and health of Black people in the field of AI." Her DAIR research includes studying how artificial intelligence often reinforces and amplifies existing prejudices and marginalization. She has looked at how facial recognition programs are much less accurate in analyzing faces of people of color. She has also written about the need for regulation in the tech industry, and the environmental impact of AI. Dr. Timnit Gebru joins us on Zoom from San Francisco. See omnystudio.com/listener for privacy information.

RNZ: Saturday Morning
AI ethicist Timnit Gebru: why we can't trust Silicon Valley

RNZ: Saturday Morning

Play Episode Listen Later Mar 3, 2023 28:58


In the race to create and release Artificial Intelligence (AI) tools, are Silicon Valley companies such as OpenAI failing to fully consider the consequences of their work? The speed of development in the field is dizzying, with new tools such as ChatGPT and DALL·E offering a sneak peek at the potential of AI to work for us. Ethiopian-born US computer scientist Dr Timnit Gebru is a leading researcher on the ethics of artificial intelligence.

Teaching in Higher Ed
ChatGPT and Good Intentions in Higher Ed

Teaching in Higher Ed

Play Episode Listen Later Feb 9, 2023 43:06


Autumm Caines discusses ChatGPT and good intentions in higher ed on episode 452 of the Teaching in Higher Ed podcast. Quotes from the episode I am fascinated by the intersection between who we are and the environments we inhabit. -Autumm Caines The process of writing is thinking. -Autumm Caines We want our students to learn how to think through the act of writing. -Autumm Caines Resources Craft App's AI Assistant About Is a Liminal Space ChatGPT and Good Intentions in Higher Ed In Defense of “Banning” ChatGPT Prior to (or Instead of) Using ChatGPT with Your Students On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?

Tech Won't Save Us
Don't Fall for the AI Hype w/ Timnit Gebru

Tech Won't Save Us

Play Episode Listen Later Jan 19, 2023 63:32


Paris Marx is joined by Timnit Gebru to discuss the misleading framings of artificial intelligence, her experience of getting fired by Google in a very public way, and why we need to avoid getting distracted by all the hype around ChatGPT and AI image tools. Timnit Gebru is the founder and executive director of the Distributed AI Research Institute and former co-lead of the Ethical AI research team at Google. You can follow her on Twitter at @timnitGebru. Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, support the show on Patreon, and sign up for the weekly newsletter. The podcast is produced by Eric Wickham and part of the Harbinger Media Network. Also mentioned in this episode: Please participate in our listener survey this month to give us a better idea of what you think of the show: https://forms.gle/xayiT7DQJn56p62x7 Timnit wrote about the exploited labor behind AI tools and how effective altruism is pushing a harmful idea of AI ethics. Karen Hao broke down the details of the paper that got Timnit fired from Google. Emily Tucker wrote an article called “Artifice and Intelligence.” In 2016, ProPublica published an article about technology being used to “predict” future criminals that was biased against black people. In 2015, Google Photos classified black women as “gorillas.” In 2018, it still hadn't really been fixed. Artists have been protesting AI-generated images that train themselves on their work and threaten their livelihoods. OpenAI used Kenyan workers paid less than $2 an hour to try to make ChatGPT less toxic. Zachary Loeb described ELIZA in his article about Joseph Weizenbaum's work and legacy. Support the show

The Ezra Klein Show
Is ethical AI possible?

The Ezra Klein Show

Play Episode Listen Later Jan 9, 2023 47:59


Sean Illing talks with Timnit Gebru, the founder of the Distributed AI Research Institute. She studies the ethics of artificial intelligence and is an outspoken critic of companies developing new AI systems. Sean and Timnit discuss the power dynamics in the world of AI, the discriminatory outcomes that these technologies can cause, and the need for accountability and transparency in the field. Host: Sean Illing (@seanilling), host, The Gray Area Guest: Timnit Gebru (@timnitGebru), founder, Distributed AI Research Institute References: “The Exploited Labor Behind Artificial Intelligence” by Adrienne Williams, Milagros Miceli, and Timnit Gebru (Noema; Oct. 13, 2022) “Effective Altruism Is Pushing a Dangerous Brand of ‘AI Safety'” by Timnit Gebru (Wired; Nov. 30, 2022) “Datasheets for Datasets” by Timnit Gebru, et al. (CACM; Dec. 2021) “In Emergencies, Should You Trust a Robot?” by John Toon (Georgia Tech; Feb. 29, 2016) “We read the paper that forced Timnit Gebru out of Google. Here's what it says” by Karen Hao (MIT Technology Review; Dec. 4, 2020) “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” by Timnit Gebru, et al. (Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency; March 2021) Enjoyed this episode? Rate The Gray Area ⭐⭐⭐⭐⭐ and leave a review on Apple Podcasts. Subscribe for free. Be the first to hear the next episode of The Gray Area. Subscribe in your favorite podcast app. Support The Gray Area by making a financial contribution to Vox! bit.ly/givepodcasts This episode was made by: Producer: Erikk Geannikis Editor: Amy Drozdowska Engineer: Patrick Boyd Editorial Director, Vox Talk: A.M. Hall Learn more about your ad choices. Visit podcastchoices.com/adchoices

New Books Network
Ethical AI

New Books Network

Play Episode Listen Later Dec 31, 2022 22:39


In this episode of High Theory, Alex Hanna talks with Nathan Kim about Ethical AI. Their conversation is part of our High Theory in STEM series, which tackles topics in science, technology, engineering, and medicine from a highly theoretical perspective. In this episode, Alex helps us think about the complicated recipes we call “artificial intelligence” and what we mean when we ask our technologies to be ethical. In the episode Alex references an article by Emily Tucker called “Artifice and Intelligence” (Tech Policy Press, 17 March 2022), which suggests we should stop using terms like “artificial intelligence,” and an opinion piece on a similar theme in the Washington Post by Timnit Gebru and Margaret Mitchell, “We warned Google that people might believe AI was sentient. Now it's happening” (17 June 2022). She also mentions a claim by Blake Lemoine that Google's LaMDA (Language Model for Dialogue Applications) is sentient. We'll leave that one to your googling, if not your judgment. Dr. Alex Hanna is Director of Research at the Distributed AI Research Institute (DAIR). A sociologist by training, her work centers on the data used in new computational technologies, and the ways in which these data exacerbate racial, gender, and class inequality. You can read her recent article, “AI Ethics Are in Danger. Funding Independent Research Could Help,” co-authored with Dylan Baker in the Stanford Social Innovation Review, and learn more about her work on her website. This week's image was produced by DALL-E 2 responding to the prompt: "generate the image of an artificial intelligence entity, deciding to protect shareholder interests over public good, in the style of Van Gogh." Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network

KQED’s Forum
The Fight for “Ethical AI” and the Hidden Laborers Behind Artificial Intelligence

KQED’s Forum

Play Episode Listen Later Oct 31, 2022 55:31


Terms such as artificial intelligence, big data and machine learning bring to mind computers processing loads of data into uses for the real world. But the authors of a recent essay published in the magazine Noema say the truth is much uglier, and often ignored. “Far from the sophisticated, sentient machines portrayed in media and pop culture, so-called AI systems are fueled by millions of underpaid workers around the world, performing repetitive tasks under precarious labor conditions,” they write in the essay, “The Exploited Labor Behind Artificial Intelligence.” One of the authors, Timnit Gebru, lost her job at Google for raising ethical concerns about AI, and now advocates for better practices and regulations to protect people. Another, Adrienne Williams, is a former Amazon delivery driver who organized in protest of unsafe conditions. They join us to discuss the hidden dangers and ethical questions surrounding AI. Guests: Adrienne Williams, a former Amazon delivery driver and organizer. Timnit Gebru, computer scientist and artificial intelligence researcher.

Radiolab
40,000 Recipes for Murder

Radiolab

Play Episode Listen Later Sep 9, 2022 30:15


Two scientists realize that the very same AI technology they have developed to discover medicines for rare diseases can also discover the most potent chemical weapons known to humankind, inadvertently opening the Pandora's Box of WMDs. What should they do now? Special thanks to Xander Davies, Timnit Gebru, Jessica Fjeld, Bert Gambini and Charlotte Hsu. Episode Credits: Reported by Latif Nasser. Produced by Matt Kielty. Original music and sound design contributed by Matt Kielty. Mixing help from Arianne Wack. Fact-checking by Emily Krieger. CITATIONS: Articles: Read Sean and Fabio's paper here. Get Yan Liu's book Healing with Poisons: Potent Medicines in Medieval China here. Yan is now Assistant Professor of History at the University at Buffalo. Our newsletter comes out every Wednesday. It includes short essays, recommendations, and details about other ways to interact with the show. Sign up (https://radiolab.org/newsletter)! Radiolab is supported by listeners like you. Support Radiolab by becoming a member of The Lab (https://members.radiolab.org/) today. Follow our show on Instagram, Twitter and Facebook @radiolab, and share your thoughts with us by emailing radiolab@wnyc.org.

IRL - Online Life Is Real Life

An aerial picture can tell a thousand stories. But who gets to tell them? From above the clouds, our world is surveilled and datafied. Those who control the data, control the narratives. We explore the spatial legacy of apartheid in South Africa's townships, and hear from people around the world who are reclaiming power over their own maps. Raesetje Sefala is mapping the legacy of spatial apartheid in South Africa as a computer vision researcher with Timnit Gebru's Distributed AI Research Institute (DAIR). Astha Kapoor researches how communities and organizations can be ‘stewards' of data about people and places as co-founder of the Aapti Institute in India. Michael Running Wolf is the founder of Indigenous in AI. He is working on speech recognition and immersive spatial experiences with augmented and virtual reality in Canada. Denise McKenzie is a location data expert who works with the global mapping organization PLACE to empower governments and communities to use advanced spatial data. IRL is an original podcast from Mozilla, the non-profit behind Firefox. In Season 6, host Bridget Todd shares stories of people who make AI more trustworthy in real life. This season doubles as Mozilla's 2022 Internet Health Report. Go to the report for show notes, transcripts, and more.