Who knew that the breakthrough moment of AI sentience would come from interacting with an annoying neo-Luddite? After failing to raise a single dollar for PCI's newest initiative — the $350 billion Transdisciplinary Institute for Phalse Prophet Studies and Education (TIPPSE) — Jason, Rob, and Asher devise the only profitable pitch for raising capital: using AI technology to cure the loneliness that technology itself causes. The only problem is that AI chatbots won't talk to us, as evidenced by Asher's experience of being blocked by an AI "friend." So Asher turns to the flesh-and-blood author of Blood in the Machine, Brian Merchant, to discuss the rise of the neo-Luddite movement — the only people who might be able to stand your humble Crazy Town hosts.

Brian Merchant is a writer, reporter, and author. He is currently reporter in residence at the AI Now Institute and publishes his own newsletter, Blood in the Machine, which has the same title as his 2023 book. Previously, Brian was the technology columnist at the Los Angeles Times and a senior editor at Motherboard.

Originally recorded on 1/3/25 (warm-up conversation) and 3/24/25 (interview with Brian).

Warning: This podcast occasionally uses spicy language.

Sources/Links/Notes:
Press Release announcing closure of TIPPSE
Funding for Friend
Screenshot of Asher's conversation with Friend's bot, Faith
Lyrics to "Not Going to Mars" by Pyrrhon
Brian Merchant's Substack, Blood in the Machine
Brian's book, Blood in the Machine: The Origins of the Rebellion Against Big Tech
New York Times article on the Luddite Club: "'Luddite' Teens Don't Want Your Likes"
Crazy Town Episode 72: Sucking CO2 and Electrifying Everything: The Climate Movement's Desperate Dependence on Tenuous Technologies
Brian's essay in The Atlantic, "The New Luddites Aren't Backing Down"
Meredith Whittaker is the president of the Signal Foundation and serves on its board of directors. She is also the co-founder of NYU’s AI Now Institute. Whittaker got her start at Google, where she worked for 13 years until resigning in 2019 after she helped organize the Google Walkouts. She speaks with Oz about learning on the job, championing data privacy and being awarded the Helmut Schmidt Future Prize for “her commitment to the development of AI technology oriented towards the common good.”See omnystudio.com/listener for privacy information.
Once upon a time, artificial general intelligence was the only business plan OpenAI seemed to have. Tech journalist Brian Merchant joins Emily and Alex for a time warp to the beginning of the current wave of AI hype, nearly a decade ago. And it sure seemed like Elon Musk, Sam Altman, and company were luring investor dollars to their newly-formed venture solely on the hand-wavy promise that someday, LLMs themselves would figure out how to turn a profit.

Brian Merchant is an author, journalist in residence at the AI Now Institute, and co-host of the tech news podcast System Crash.

References:
Elon Musk and partners form nonprofit to stop AI from ruining the world
How Elon Musk and Y Combinator Plan to Stop Computers From Taking Over
Elon Musk's Billion-Dollar AI Plan Is About Far More Than Saving the World
Brian's recent report on the business model of AGI, for the AI Now Institute: AI Generated Business: The rise of AGI and the rush to find a working revenue model
Previously on MAIHT3K: Episode 21: The True Meaning of 'Open Source' (feat. Sarah West and Andreas Liesenfeld)

Fresh AI Hell:
OpenAI explores advertising as it steps up revenue drive
If an AI company ran Campbell's Soup with the same practices they use to handle data
Humans are the new 'luxury item'
Itching to write a book? AI publisher Spines wants to make a deal
A company pitched Emily her own 'verified avatar'
Don't upload your medical images to chatbots
A look at a pilot program in Georgia that uses 'jailbots' to track inmates

You can check out future livestreams on Twitch.
Our book, 'The AI Con,' comes out in May! Pre-order your copy now.
Subscribe to our newsletter via Buttondown.
Follow us!
Emily: Bluesky emilymbender.bsky.social, Mastodon dair-community.social/@EmilyMBender
Alex: Bluesky alexhanna.bsky.social, Mastodon dair-community.social/@alex, Twitter @alexhanna
Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.
This close to the end of 2024, it's clear that one of the most significant tech stories of the year was the outcome of the Google search antitrust case. It will also make headlines next year and beyond as the remedies phase gets worked out in the courts. For this episode, Justin Hendrix turns the host duties over to someone who has looked closely at this issue: Alissa Cooper, the Executive Director of the Knight-Georgetown Institute (KGI). Alissa hosted a conversation with three individuals who are following the remedies phase with an expert eye, including:
Cristina Caffarra is a competition economist and an honorary Professor at University College London, and cofounder of the Competition Research Policy Network at CEPR (Centre for Economic Policy Research), London.
Kate Brennan is associate director at the AI Now Institute; and
David Dinielli is an attorney and a visiting clinical lecturer and senior research scholar at Yale Law School.
What do cybersecurity experts, journalists in foreign conflicts, indicted New York City Mayor Eric Adams and Drake have in common? They all use the Signal messaging app. Signal's protocol has been the gold standard in end-to-end encryption for more than a decade, used by WhatsApp, Google and more. But it's been under fire from both authoritarian governments and well-meaning democracies who see the privacy locks as a threat. Since 2022, former Google rabble-rouser and AI Now Institute co-founder Meredith Whittaker has been president of the Signal Foundation, the nonprofit that runs the app. Kara talks with Meredith about her fight to protect text privacy, the consolidation of power and money in AI, and how nonprofits can survive in a world built on the surveillance economy. Questions? Comments? Email us at on@voxmedia.com or find Kara on Threads/Instagram @karaswisher. Learn more about your ad choices. Visit podcastchoices.com/adchoices
In this episode, we talk to Amba Kak and Sarah Myers West, the co-directors of the AI Now Institute, a leading policy think tank. In this second installment of our EU AI Act series, Amba and Sarah explore why different tech policy narratives matter, the differences between the US and EU regulatory landscapes, why the idea that AI is simply outstripping regulation is an outdated maxim, and finally, their policy wish list for 2024.
If we know that ChatGPT makes things up, when should we avoid Large Language Models? Is generative AI really safe to use when it matters? Listen to this interview with Dr Heidy Khlaaf to find out. Dr Khlaaf is the Principal Research Scientist at the AI Now Institute, focusing on the assessment and safety of AI within autonomous weapons systems. She previously worked at OpenAI and Microsoft, amongst others.

Timestamps:
00:00 Introduction
06:32 The Problem-First Approach to AI
14:20 Limitations of Large Language Models
20:49 Augmenting Human Knowledge with AI
23:37 AI Systems Gone Wrong
28:22 AI in Safety Critical Systems
33:47 Questioning Technological Determinism
38:19 AI in Defense

For the transcript, go to: https://www.techfornontechies.co/blog/217-when-not-to-use-ai-in-business-and-warfare

For more career & tech lessons, subscribe to Tech for Non-Techies on: Apple, Spotify, YouTube, Amazon Podcasts, Stitcher, Pandora

Growth Through Innovation
If your organisation wants to drive revenue through innovation, book a call with us here. Our workshops and innovation strategies have helped Constellation Brands, the Royal Bank of Canada and Oxford University.
In this episode of Technology and Security, Dr Miah Hammond-Errey speaks with Meredith Whittaker, president of Signal. The interview explores key contemporary issues in technology and Artificial Intelligence (AI). They discuss the impact of AI on elections and democracies, including the need for stronger local media ecosystems and an improved focus on the 'mediating' role of social media platforms and the information ecosystem. They discuss the concentration of AI power and the business model's reliance on mass collection, including the need to write the tech stack for privacy, not surveillance. This episode also explores developing democratically focused public digital infrastructure without profit incentives and highlights the role of open-source libraries and systems as part of the core infrastructure of the technology ecosystem. It also covers the significance of autonomy and agency in neurotech applications, and how to improve tech board governance through increased personal liability, accountability and transparency. Also, how many downloads Signal has actually had!

Meredith Whittaker is the president of the Signal Foundation. She has nearly 20 years of experience in the tech industry, academia, and government and co-founded the AI Now Institute.

Resources mentioned in the recording:
· Meredith Whittaker, link to talk
· Meredith Whittaker, link to reading
· Meredith Whittaker, link to watching
· Meredith Whittaker, link to listening
· Miah Hammond-Errey, 2024, Big Data, Emerging Technologies and Intelligence: National Security Disrupted, Routledge (20% discount code for book AFL04)
· Byte-sized diplomacy (column), The Interpreter, 3 July 2024, AI-enabled elections or deepfake democracy?

This podcast was recorded on the lands of the Gadigal people, and we pay our respects to their Elders past, present and emerging. We acknowledge their continuing connection to land, sea and community, and extend that respect to all Aboriginal and Torres Strait Islander people. Thanks to the talents of those involved. Music by Dr Paul Mac and production by Elliott Brennan.
The race between the U.S. and China for supremacy in artificial intelligence is cause for concern given the technology's potential to spread misinformation, enhance government surveillance and harm the climate—that's what leading tech figure Meredith Whittaker has warned. Not only is Meredith Whittaker the President of Signal, she is also the current Chief Advisor to, and former Faculty Director and Co-Founder of, the AI Now Institute. Her research and advocacy focus on the social implications of artificial intelligence and the tech industry responsible for it, with a particular emphasis on power and the political economy driving the commercialization of computational technology. She was in Vienna for a series of appearances and events. Riem Higazi met up with Meredith Whittaker to ask questions like: when it comes to AI technology, who is it better to be controlled by, China or the United States?

Broadcast information: FM4, OKFM4, 13 May 2024, 5:00 p.m.
"We haven't invested this much money into an infrastructure like this really until you go back to the pyramids"—Kate Crawford

Transcript with links to audio and external links. Ground Truths podcasts are on Apple and Spotify. The video interviews are on YouTube.

Eric Topol (00:06):Well, hello, this is Eric Topol with Ground Truths, and I'm really delighted today to welcome Kate Crawford, who we're very lucky to have as an Australian here in the United States. And she's multidimensional, as I've learned, not just a scholar of AI, all the dimensions of AI, but also an artist, a musician. We're going to get into all this today, so welcome Kate.

Kate Crawford (00:31):Thank you so much, Eric. It's a pleasure to be here.

Eric Topol (00:34):Well, I knew of your work coming out of the University of Southern California (USC) as a professor there and at Microsoft Research, and I'm only now learning about all these other things that you've been up to, including being recognized in TIME's 2023 list of the 100 most influential people in AI, and it's really fascinating to see all the things that you've been doing. But I guess I'd start off with one of your recent publications in Nature. It was a World View piece, and it was about how generative AI is guzzling water and energy. And in that you wrote about how these large AI systems, which are getting larger seemingly every day, are needing as much energy as entire nations, and the water consumption is rampant. So maybe we can just start off with that. You wrote a really compelling piece expressing concerns, and obviously this is not just the beginning of all the different aspects you've been tackling with AI.

Exponential Growth, Exponential Concerns

Kate Crawford (01:39):Well, we're in a really interesting moment. What I've done as a researcher in this space for a very long time now is really introduce a material analysis of artificial intelligence. So we are often told that AI is a very immaterial technology. It's algorithms in the cloud, it's objective mathematics, but in actual fact, it comes with an enormous material infrastructure. And this is something that I took five years to research for my last book, Atlas of AI. It meant going to the mines where lithium and cobalt are being extracted. It meant going into the Amazon fulfillment warehouses to see how humans collaborate with robotic and AI systems. And it also meant looking at the large-scale labs where training data is being gathered and then labeled by crowd workers. And for me, this really changed my thinking. It meant going from being a professor for 15 years focusing on AI from a very traditional perspective, where we write papers, we're sitting in our offices behind desks, to realizing that I really had to go and do these journeys, these field trips, to understand that full extractive infrastructure that is needed to run AI at a planetary scale.

(02:58):So I've been keeping a very close eye on what would change with generative AI, and what we've seen particularly in the last two years has been an extraordinary expansion of the three core elements that I really write about in Atlas: the extraction of data, of non-renewable resources, and of course hidden labor. So what we've seen, particularly on the resources side, is a gigantic spike both in terms of energy and water, and that's often the story that we don't hear. We're not aware, when we're told about the gigantic hundred-billion-dollar computers that are now being developed for the next stage of generative AI, that they have an enormous energy and water footprint.
So I've been researching that along with many others who are now increasingly concerned about how we might think about AI more holistically.Eric Topol (03:52):Well, let's go back to your book, which is an extraordinary book, the AI Atlas and how you dissected not just the well power of politics and planetary costs, but that has won awards and it was a few years back, and I wonder so much has changed since then. I mean ChatGPT in late 2022 caught everybody off guard who wasn't into this knowing that this has been incubating for a number of years, and as you said, these base models are just extraordinary in every parameter you can think about, particularly the computing resource and consumption. So your concerns were of course registered then, have they gone to exponential growth now?Kate Crawford (04:45):I love the way you put that. I think you're right. I think my concerns have grown exponentially with the models. But I was like everybody else, even though I've been doing this for a long time and I had something of a heads up in terms of where we were moving with transformer models, I was also quite taken aback at the extraordinary uptake of ChatGPT back in November 2022 in fact, gosh, it still feels like yesterday it's been such an extraordinary timescale. But looking at that shift to a hundred million users in two months and then the sort of rapid competition that was emerging from the major tech companies that I think really took me by surprise, the degree to which everybody was jumping on the bandwagon, applying some form of large language model to everything and anything suddenly the hammer was being applied to every single nail.(05:42):And in all of that sound and fury and excitement, I think there will be some really useful applications of these tools. But I also think there's a risk that we apply it in spaces where it's really not well suited that we are not looking at the societal and political risks that come along with these approaches, particularly next token prediction as a way of generating knowledge. And then finally this bigger set of questions around what is it really costing the planet to build these infrastructures that are really gargantuan? I mean, as a species, we haven't invested this much money into an infrastructure like this really until you go back to the pyramids, you really got to go very far back to say that type of just gargantuan spending in terms of capital, in terms of labor, in terms of all of the things are required to really build these kinds of systems. So for me, that's the moment that we're in right now and perhaps here together in 2024, we can take a breath from that extraordinary 18 month period and hopefully be a little more reflective on what we're building and why and where will it be best used.Propagation of BiasesEric Topol (06:57):Yeah. Well, there's so many aspects of this that I'd like to get into with you. I mean, one of course, you're as a keen observer and activist in this whole space, you've made I think a very clear point about how our culture is mirrored in our AI that is our biases, and people are of course very quick to blame AI per se, but it seems like it's a bigger problem than just that. Maybe you could comment about, obviously biases are a profound concern about propagation of them, and where do you see where the problem is and how it can be attacked?Kate Crawford (07:43):Well, it is an enormous problem, and it has been for many years. I was first really interested in this question in the era that was known as the big data era. 
So we can think about the mid-2000s, and I really started studying large scale uses of data in scientific applications, but also in what you call social scientific settings using things like social media to detect and predict opinion, movement, the way that people were assessing key issues. And time and time again, I saw the same problem, which is that we have this tendency to assume that with scale comes greater accuracy without looking at the skews from the data sources. Where is that data coming from? What are the potential skews there? Is there a population that's overrepresented compared to others? And so, I began very early on looking at those questions. And then when we had very large-scale data sets start to emerge, like ImageNet, which was really perhaps the most influential dataset behind computer vision that was released in 2009, it was used widely, it was freely available.(09:00):That version was available for over a decade and no one had really looked inside it. And so, working with Trevor Paglen and others, we analyzed how people were being represented in this data set. And it was really quite extraordinary because initially people are labeled with terms that might seem relatively unsurprising, like this is a picture of a nurse, or this is a picture of a doctor, or this is a picture of a CEO. But then you look to see who is the archetypical CEO, and it's all pictures of white men, or if it's a basketball player, it's all pictures of black men. And then the labeling became more and more extreme, and there are terms like, this is an alcoholic, this is a corrupt politician, this is a kleptomaniac, this is a bad person. And then a whole series of labels that are simply not repeatable on your podcast.(09:54):So in finding this, we were absolutely horrified. And again, to know that so many AI models had trained on this as a way of doing visual recognition was so concerning because of course, very few people had even traced who was using this model. So trying to do the reverse engineering of where these really problematic assumptions were being built in hardcoded into how AI models see and interpret the world, that was a giant unknown and remains to this day quite problematic. We did a recent study that just came out a couple of months ago looking at one of the biggest data sets behind generative AI systems that are doing text to image generation. It's called LAION-5B, which stands for 5 billion. It has 5 billion images and text captions drawn from the internet. And you might think, as you said, this will just mirror societal biases, but it's actually far more weird than you might imagine.(10:55):It's not a representative sample even of the internet because particularly for these data sets that are now trying to use the ALT tags that are used around images, who uses ALT tags the most on the internet? Well, it's e-commerce sites and it's often stock image sites. So what you'll see and what we discovered in our study was that the vast majority of images and labels are coming from sites like Shopify and Pinterest, these kind of shopping aspirational collection sites. And that is a very specific way of seeing the world, so it's by no means even a perfect mirror. It's a skewed mirror in multiple ways. 
And that's something that we need to think of particularly when we turn to more targeted models that might be working in say healthcare or in education or even in criminal justice, where we see all sorts of problems emerge.Exploiting Humans for RLHFEric Topol (11:51):Well, that's really interesting. I wonder to extend that a bit about the human labor side of this. Base models are tweaked, fine-tuned, and one of the ways to do that, of course is getting people to weigh in. And this has been written about quite a bit about how the people that are doing this can be exploited, getting wages that are ridiculously weak. And I wonder if you could comment about that because in the ethics of AI, this seems to be one of the many things that a lot of people don't realize about reinforcement learning.Kate Crawford (12:39):Oh, I completely agree. It's quite an extraordinary story. And of course now we have a new category of crowd labor that's called reinforcement learning with human feedback or RLHF. And what was discovered by multiple investigations was that these laborers are in many cases paid less than $2 an hour in very exploitative conditions, looking at results that in many cases are really quite horrifying. They could be accounts of murder, suicide, trauma, this can be visual material, it can be text-based material. And again, the workers in these working for these companies, and again, it's often contract labor, it's not directly within a tech company, it's contracted out. It's very hidden, it's very hard to research and find. But these laborers have been experiencing trauma and are really now in many cases bringing lawsuits, but also trying to unionize and say, these are not acceptable conditions for people to be working under.(13:44):So in the case of OpenAI, it was found that it was Kenyan workers who were doing this work for just poverty wages, but it's really across the board. It's so common now that humans are doing the hard work behind the scenes to make these systems appear autonomous. And that's the real trap that we're being told that this is the artificial intelligence. But in actual fact, what Jeff Bezos calls Mechanical Turk is that it's artificial, artificial intelligence otherwise known as human beings. So that is a very significant layer in terms of how these systems work that is often unacknowledged. And clearly these workers in many cases are muzzled from speaking, they're not allowed to talk about what they do, they can't even tell their families. They're certainly prevented from collective action, which is why we've seen this push towards unionization. And finally, of course, they're not sharing in any of the profits that are being generated by these extraordinary new systems that are making a very small number of people, very wealthy indeed.Eric Topol (14:51):And do you know if that's improving or is it still just as bad as it has been reported? It's really deeply concerning to see human exploitation, and we all know well about sweatshops and all that, but here's another version, and it's really quite distressing.Kate Crawford (15:09):It really is. And in fact, there have been several people now working to create really almost like fair work guidelines. So Oxford has the sort of fair work initiative looking specifically at crowd work. They also have a rating system where they rate all of the major technology companies for how well they're treating their crowd laborers. 
And I have to say the numbers aren't looking good in the last 12 months, so I would love to see much more improvement there. We are also starting to see legislation be tabled specifically on this topic. In fact, Germany was one of the most recent to start to explore how they would create a strong legislative backing to make sure that there's fair labor conditions. Also, Chile was actually one of the first to legislate in this space, but you can imagine it's very difficult to do because it's a system that is operating under the radar through sort of multiple contracted chains. And even some of the people within tech companies will tell me it's really hard to know if they're working with a company that's doing this in the right way and paying people well. But frankly, I'd like to see far greater scrutiny otherwise, as you say, we're building on this system, which looks like AI sweatshops.Eric Topol (16:24):Yeah, no, I think people just have this illusion that these machines are doing everything by themselves, and that couldn't be further from the truth, especially when you're trying to take it to the next level. And there's only so much human content you can scrape from the internet, and obviously it needs additional input to take it to that more refined performance. Now, besides your writing and being much of a conscience for AI, you're also a builder. I mean, I first got to know some of your efforts through when you started the AI Now Institute. Maybe you can tell us a bit about that. Now you're onto the Knowing Machines Project and I don't know how many other projects you're working on, so maybe you can tell us about what it's like not just to be a keen observer, but also one to actually get initiatives going.Kate Crawford (17:22):Well, I think it's incredibly important that we start to build interdisciplinary coalitions of researchers, but sometimes even beyond the academic field, which is where I really initially trained in this space, and really thinking about how do we involve journalists, how do we involve filmmakers, how do we involve people who will look at these issues in really different ways and tell these stories more widely? Because clearly this really powerful shift that we're making as a society towards using AI in all sorts of domains is also a public issue. It's a democratic issue and it's an issue where we should all be able to really see into how these systems are working and have a say in how they'll be impacting our lives. 
So one of the things that I've done is really create research groups that are interdisciplinary, starting at Microsoft Research as one of the co-founders of FATE, a group that stands for fairness, accountability, transparency and ethics, and then the AI Now Institute, which was originally at NYU, and now with Knowing Machines, which is an international group, which I've been really delighted to build, rather than just purely focusing on those in the US because of course these systems are inherently transnational, they will be affecting global populations.(18:42):So we really need to think about how do you bring people from very different perspectives with different training to ask this question around how are these systems being built, who is benefiting and who might be harmed, and how can we address those issues now in order to actually prevent some of those harms and prevent the greatest risks that I see that are possible with this enormous turn to artificial intelligence everywhere?Eric Topol (19:07):Yeah, and it's interesting how you over the years are a key advisor, whether it's the White House, the UN or the European Parliament. And I'm curious about your experience because I didn't know much about the Paris ENS. Can you tell us about you were Visiting Chair, this is AI and Justice at the École Normale Supérieure (ENS), I don't know if I pronounce that right. My French is horrible, but this sounds like something really interesting.Kate Crawford (19:42):Well, it was really fascinating because this was the first time that ENS, which is really one of the top research institutions in Europe, had turned to this focus of how do we contend with artificial intelligence, not just as a technical question, but as a sort of a profound question of justice of society of ethics. And so, I was invited to be the first visiting chair, but tragically this corresponded with the start of the pandemic in 2020. And so, it ended up being a two-year virtual professorship, which is really a tragedy when you're thinking about spending time in Paris to be spending it on Zoom. It's not quite the same thing, but I had the great fortune of using that time to assemble a group of scholars around the world who were looking at these questions from very different disciplines. Some were historians of science, others were sociologists, some were philosophers, some were machine learners.(20:39):And really essentially assembled this group to think through some of the leading challenges in terms the potential social impacts and current social impacts of these systems. And so, we just recently published that through the academies of Science and Engineering, and it's been almost like a template for thinking about here are core domains that need more research. And interestingly, we're at that moment, I think now where we can say we have to look in a much more granular fashion beyond the hype cycles, beyond the sense of potential, the enormous potential upside that we're always hearing about to look at, okay, how do these systems actually work now? What kinds of questions can we bring into the research space so that we're really connecting the ideas that come traditionally from the social sciences and the humanistic disciplines into the world of machine learning and AI design. 
That's where I see the enormous upside: that we can no longer stay in these very rigorously patrolled silos, and that we really use that interdisciplinary awareness to build systems differently and hopefully more sustainably as well.

Is Working At Microsoft A Conflict?

Eric Topol (21:55):Yeah, no, that's what I especially like about your work is that you're not a doomsday person or force. You're always just trying to make it better, but now that's what gets me to this really interesting question, because you are a senior principal researcher at Microsoft and Microsoft might not like some of these things that you're advocating. How does that potential conflict work out?

Kate Crawford (22:23):It's interesting. I mean, people often ask me, am I a technology optimist or a technology pessimist? And I always say I'm a technology realist, and we're looking at these systems being used. I think we are not benefited by discourses of AI doomerism nor by AI boosterism. We have to assess the realpolitik and the political economies into which these systems flow. So obviously part of the way that I've got to know what I know about how systems are designed and how they work at scale is through being at Microsoft Research, where I'm working alongside extraordinary colleagues, all of whom come from, in many cases, professorial backgrounds and are deep experts in their fields. And we have this opportunity to work together and to look at these questions very early on in the kinds of production cycles and enormous shifts in the way that we use technology.

(23:20):But it is interesting of course that at the moment Microsoft is absolutely at the leading edge of this change, and I've always thought that it's incredibly important for researchers and academics who are in industrial spaces to be able to speak freely, to be able to share what they see, and to use that as a way that the industry can, well hopefully, keep itself honest, but also share between what it knows and what everybody else knows, because there's a giant risk in having those spaces be heavily demarcated and having researchers really be muzzled. I think that's where we see real problems emerge. Of course, one of the great concerns a couple of years ago was when Timnit Gebru and others were fired from Google for speaking openly about the concerns they had about the first-generation large language models. And my hope is that there's been a lesson through that really unfortunate set of decisions made at Google that we need people speaking from the inside about these questions in order to actually make these systems better, as you say, over the medium and long term.
But before I get to that, there's this other part of you and I wonder if there's really two or three of you that is as a composer of music and art, I looked at your Anatomy of an AI System, I guess, which is on exhibit at the Museum of Modern Art (MoMA) in New York, and that in itself is amazing, but how do you get into all these other parts, are these hobbies or is this part of a main part of your creative work or where does it fit in?Kate Crawford (25:40):Eric, didn't I mention the cloning program that I participated in early and that there are many Kate's and it's fantastic we all work together. Yeah, that explains it. Look, it's interesting. Way back as a teenager, I was fascinated with technology. Of course, it was the early stages of the web at that moment, and I could see clearly that this was, the internet was going to completely change everything from my generation in terms of what we would do in terms of the way that we would experience the world. And as I was also at that time an electronic musician in bands, I was like, this was a really fantastic combination of bringing together creative practice with a set of much larger concerns and interests around at a systems level, how technology and society are co-constituted, how they evolve together and shape each other. And that's really been the map of how I've always worked across my life.(26:48):And it's interesting, I've always collaborated with artists and Vladan Joler who I worked with on anatomy of an AI system. We actually met at a conference on voice enabled AI systems, and it was really looking at the ethics of could it be possible to build an open source, publicly accessible version of say Alexa rather than purely a private model owned by a corporation, and could that be done in a more public open source way? And we asked a different question, we looked at each other and we're like, oh, I haven't met you yet, but I can see that there are some problems here. One of them is it's not just about the data and it's not just about the technical pipelines, it's about where the components come from. It's about the mining structures that needed to make all of these systems. It's about the entire end of life what happens when we throw these devices out from generally between three to four years of use and how they go into these giant e-waste tips.(27:51):And we basically started looking at this as an enormous sort of life and death of a single AI system, which for us started out by drawing these things on large pieces of butcher's paper, which just expanded and expanded until we had this enormous systems level analysis of what it takes just to ask Alexa what the weather is today. And in doing that, it taught me a couple of things. One that people really want to understand all of the things that go into making an AI system work. This piece has had a very long life. It's been in over a hundred museums around the world. It's traveled further than I have, but it's also very much about that broader political economy that AI systems aren't neutral, they don't just exist to serve us. 
They are often sort of fed into corporate structures that are using them to generate profits, and that means that they're used in very particular ways, and that there are these externalities in terms of how they're produced that linger in our environments, that have really quite detrimental impacts on systems of labor and how people are recompensed, and a whole range of relationships to how data is seen and used as though it's a natural resource that doesn't actually come from people's lives, that doesn't come with risks attached to it.

(29:13):So that project was really quite profound for me. So we've continued to do these kinds of, I would call them, research art projects, and we just released a new one called Calculating Empires, which looks at a 500-year history of technology and power, looking specifically at how empires over time have used new technologies to centralize their power and expand and grow, which of course is part of what we're seeing at the moment in the empires of AI.

Eric Topol (29:43):And what about the music side?

Kate Crawford (29:45):Well, I have to say I've been a little bit slack on the music side. Things have been busy in AI, Eric, I have to say it's kept me away from the music studio, but I always intend to get back there. Fortunately, I have a kid who's very musical and he's always luring me away from my desk and my research saying, let's write some music. And so, he'll keep me honest.

Geopolitics and the Arms Races

Eric Topol (30:06):Well, I think it's striking just because you have this blend of the humanities and you're so deep into trying to understand and improve our approaches in technology. And it seems very unusual; I don't know too many techies that have these different dimensions, so that's impressive. Now let's get back to the arms race. You just were talking about tracing history over hundreds of years and empires, but right now we have a little problem. We have the big tech titans that are going after each other on a daily basis, and of course you know the group very well. And then you have China and the US that are vying to be the dominant force, and problems with China accessing NVIDIA chips, and Taiwan sitting there in a potentially very dangerous position, not just for Taiwan, but also for the US. And I wonder if you could just give us your sense about the tensions here. They're US based as well of course, because that's some of the major forces in companies, but then they're also global. So we have a lot of stuff in the background that people don't like to think about, but it's actually happening right now.
So what we're seeing is I think a very difficult moment in the geopolitics of artificial intelligence.(32:31):What we've had certainly for the last decade has been almost a geopolitical duopoly. We've had the US and China not only having enormous power and influence in this space, but also goading each other into producing the most extreme forms of both data extractive and surveillance technologies. And unfortunately, this is just as true in the United States that I commonly hear this in rooms in DC where you'll hear advisors say, well, having any type of guardrails or ethical considerations for our AI systems is a problem if it means that China's going to do it anyway. And that creates this race to the bottom dynamic of do as much of whatever you can do regardless of the ethical and in some cases legal problems that will create. And I think that's been the dynamic that we've seen for some time. And of course the last 18 months to two years, we've seen that really extraordinary AI war happening internally in the United States where again, this race dynamic I think does create unfortunately this tendency to just go as fast as possible without thinking about potential downsides.(33:53):And I think we're seeing the legacy of that right now. And of course, a lot of the conversations from people designing these systems are now starting to say, look, being first is great, but we don't want to be in a situation as we saw recently with Google's Gemini where you have to pull an entire model off the shelves and you have to say, this is not ready. We actually have to remove it and start again. So this is the result I think of that high pressure, high speed dynamic that we've been seeing both inside the US but between the US and China. And of course, what that does to the rest of the world is create this kind of client states where we've got the EU trying to say, alright, well we'll export a regulatory model if we're not going to be treated as an equivalent player here. And then of course, so many other countries who are just seen as spaces to extract low paid labor or the mineralogical layer. So that is the big problem that I see is that that dynamic has only intensified in recent years.A.I. and MedicineEric Topol (34:54):Yeah, I know it's really another level of concern and it seems like it could be pretty volatile if for example, if the US China relations takes another dive and the tensions there go to levels that haven't been seen so far. I guess the other thing, there's so much that is I think controversial, unsettled in this space and so much excitement. I mean, just yesterday for example, was the first AI randomized trial to show that you could save lives. When I wrote that up, it was about the four other studies that showed how it wasn't working. Different studies of course, but there's so much excitement at the same time, there's deep concerns. You've been a master at articulating these deep concerns. What have we missed in our discussion today, I mean we've covered a lot of ground, but what do you see are other things that should be mentioned?Kate Crawford (36:04):Well, one of the things that I've loved in terms of following your work, Eric, is that you very carefully walk that line between allowing the excitement when we see really wonderful studies come out that say, look, there's great potential here, but also articulating concerns where you see them. 
So I think I'd love to hear, I mean take this opportunity to ask you a question and say what's exciting you about the way that this particularly new generation AI is being used in the medical context and what are the biggest concerns you have there?Eric Topol (36:35):Yeah, and it's interesting because the biggest advance so far in research and medicine was the study yesterday using deep learning without any transformer large language model effort. And that's where that multiplicative of opportunity or potential is still very iffy, it's wobbly. I mean, it needs much more refinement than where we are right now. It's exciting because it is multimodal and it brings in the ability to bring all the layers of a human being to understand our uniqueness and then do much better in terms of, I got a piece coming out soon in Science about medical forecasting and how we could really get to prevention of conditions that people are at high risk. I mean like for example today the US preventive task force said that all women age 40 should have mammograms, 40.Kate Crawford (37:30):I saw that.Eric Topol (37:30):Yeah, and this is just crazy Looney Tunes because here we have the potential to know pretty precisely who are those 12%, only 12% of women who would ever get breast cancer in their lifetime, and why should we put the other 88% through all this no less the fact that there are some women even younger than age 40 that have significantly high risk that are not picked up. But I do think eventually when we get these large language models to actualize their potential, we'll do really great forecasting and we'll be able to not just prevent or forestall cancer, Alzheimer's and so many things. It's quite exciting, but it's the earliest, we're not even at first base yet, but I think I can see our way to get there eventually. And it's interesting because the discussion I had previously with Geoffrey Hinton, and I wonder if you think this as well, that he sees the health medical space as the only really safe space. He thinks most everything else has got more concerns about the downsides is the sweet spot as he called it. But I know that's not particularly an area that you are into, but I wonder if you share that the excitement about your health could be improved in the future with AI.Kate Crawford (38:52):Well, I think it's a space of enormous potential, but again, enormous risk for the same reasons that we discussed earlier, which is we have to look at the training data and where it's coming from. Do we have truly representative sources of data? And this of course has been a consistent problem certainly for the last hundred years and longer. When we look at who are the medical patients whose data is being collected, are we seeing skews? And that has created all sorts of problems, particularly in the last 50 years in terms of misdiagnosing women, people of color, missing and not taking seriously the health complaints of people who are already seen as marginalized populations, thus then further skewing the data that is then used to train AI models. So this is something that we have to take very seriously, and I had the great fortune of being invited by Francis Collins to work with the NIH on their AI advisory board.(39:50):They produced a board to look just at these questions around how can this moment in AI be harnessed in such a way that we can think about the data layer, think about the quality of data and how we train models. 
And it was a really fascinating sort of year-long discussion, because in the room we had people who were just technologists who just wanted as much data as possible: just give us all that data and then we'll do something, we'll figure it out later. Then there were people who had been part of the Human Genome Project and had worked with Francis on the legal and ethical and social questions, which he had really centered in that project very early on. And they said, no, we have to learn these lessons. We have to learn that data comes from somewhere. It's not divorced of context, and we have to think about who's being represented there and also who's not being represented there, because that will then be intensified in any model that we train on that data.

Humans and Automation Bias

(40:48):And then also thinking about what would happen if those models are only held by a few companies who can profit from them and not more publicly and widely shared. These were the sorts of conversations that are, I think, at the absolute forefront in terms of how we're going to navigate this moment. But if we get that right, if we center those questions, then I think we have far greater potential here than we might imagine. But I'm also really cognizant of the fact that even if you have a perfect AI model, you are always going to have imperfect people applying it. And I'm sure you saw that same study that came out in JAMA back in December last year, which was looking at how AI bias, even from slightly biased models, can worsen human medical diagnosis. I don't know if you saw this study, but I thought it was really extraordinary.

(41:38):It was sort of 450 doctors and physician assistants, and they were being shown a handful of cases of patients with acute respiratory failure. They really needed to come up with some sort of diagnosis, and they were getting suggestions from an AI model. One model was trained very carefully with highly accurate data, and the other was a fairly shoddy, shall we say, AI model with quite biased data. And what was interesting is that the clinicians, when they were working with the very well-trained AI model, were actually producing a better diagnosis across the board in terms of the cases they were looking at. I think their accuracy went up by almost 4.5 percentage points, but when they were working with the less accurate model, their capacity actually dropped well below their usual diagnostic baseline, something like almost 12 percentage points below their usual diagnostic quality. And so, this really makes me think of the kind of core problem that's been studied for 40 years by social scientists, which is called automation bias: when a technical system gives a recommendation, our tendency, even as experts, is to believe it and to discard our own knowledge, our own predictions, our own sense.

(42:58):And it's been tested with fighter pilots, it's been tested with doctors, it's been tested with judges, and it's the same phenomenon across the board. So one of the things that we're going to need to do collectively, but particularly in the space of medicine and healthcare, is retaining that skepticism, retaining that ability to ask questions of where did this recommendation come from with this AI system, and should I trust it? What was it trained on? Where did the data come from? What might those gaps be?
Because we're going to need that skepticism if we're going to get through particularly this, as you say, this sort of early stage one period where in many cases these models just haven't had a lot of testing yet and people are going to tend to believe them out of the box.The Large Language Model Copyright IssueEric Topol (43:45):No, it's so true. And one of the key points is that almost every study that's been published in large language models in medicine are contrived. They're using patient actors or they're using case studies, but they're not in the real world. And that's where you have to really learn, as you know, that's a much more complex and messy world than the in silico world of course. Now, before wrapping up, one of the things that's controversial we didn't yet hit is the fact that in order for these base models to get trained, they basically ingest all human content. So they've ingested everything you've ever written, your books, your articles, my books, my articles, and you have the likes of the New York Times suing OpenAI, and soon it's going to run out of human content and just use synthetic content, I guess. But what's your sense about this? Do you feel that that's trespassing or is this another example of exploiting content and people, or is this really what has to be done in order to really make all this work?Kate Crawford (44:59):Well, isn't it a fascinating moment to see this mass grabbing of data, everything that is possibly extractable. I actually just recently published an article in Grey Room with the legal scholar, Jason Schultz, looking at how this is producing a crisis in copyright law because in many ways, copyright law just cannot contend with generative AI in particular because all of the ways in which copyright law and intellectual property more broadly has been understood, has been premised around human ideas of providing an incentive and thus a limited time monopoly based on really inspiring people to create more things. Well, this doesn't apply to algorithms, they don't respond to incentives in this way. The fact that, again, it's a longstanding tradition in copyright that we do not give copyright to non-human authors. So you might remember that there was a very famous monkey selfie case where a monkey had actually stepped on a camera and it had triggered a photograph of the monkey, and could this actually be a copyright image that could be given to the monkey?(46:12):Absolutely not, is what the court's decided. And the same has now happened, of course, for all generative AI systems. So right now, everything that you produce be that in GPT or in Midjourney or in Stable Diffusion, you name it, that does not have copyright protections. So we're in the biggest experiment of production after copyright in world history, and I don't think it's going to last very long. To be clear, I think we're going to start to see some real shifts, I think really in the next 6 to 12 months. But it has been this moment of seeing this gigantic gap in what our legal structures can do that they just haven't been able to contend with this moment. The same thing is true, I think, of ingestion, of this capturing of human content without consent. 
Clearly, many artists, many writers, many publishing houses like the New York Times are very concerned about this, but the difficulty that they're presented with is this idea of fair use, that you can collect large amounts of data if you are doing something with that, which is sufficiently transformative.(47:17):I'm really interested in the question of whether or not this does constitute sufficiently transformative uses. Certainly if you looked at the way that large language models a year ago, you could really prompt them into sharing their training data, spitting out entire New York Times articles or entire book chapters. That is no longer the case. All of the major companies building these systems have really safeguarded against that now but nonetheless, you have this question of should we be moving towards a system that is based on licensing, where we're really asking people if we can use their data and paying them a license fee? You can see how that could absolutely work and would address a lot of these concerns, but ultimately it will rely on this question of fair use. And I think with the current legal structures that we have in the current case law, that is unlikely to be seen as something that's actionable.(48:10):But I expect what we'll look at is what really happened in the early 20th century around the player piano, which was that I'm sure you remember this extraordinary technology of the player piano. That was one of the first systems that automated the playing of music and you'd have a piano that had a wax cylinder that almost like code had imprinted on a song or a piece of music, and it could be played in the public square or in a bar or in a saloon without having to pay a single artist and artists were terrified. They were furious, they were public hearings, there were sort of congressional hearings and even a Supreme Court case that decided that this was not a copyright infringement. This was a sufficiently transformative use of a piece of music that it could stand. And in the end, it was actually Congress that acted.(49:01):And we from that got the 1908 Copyright Act and from that we got this idea of royalties. And that has become the basis of the music industry itself for a very long time. And now we're facing another moment where I think we have a legislative challenge. How would you actually create a different paradigm for AI that would recognize a new licensing system that would reward artists, writers, musicians, all of the people whose work has been ingested into training data for AI so that they are recognized and in some ways, recompensed by this massive at scale extraction?Eric Topol (49:48):Wow, this has been an exhilarating conversation, Kate. I've learned so much from you over the years, but especially even just our chance to talk today. You articulate these problems so well, and I know you're working on solutions to almost everything, and you're so young, you could probably make a difference in the decades ahead. This is great, so I want to thank you not just for the chance to visit today, but all the work that you've been doing, you and your colleagues to make AI better, make it fulfill the great promise that it has. It is so extraordinary, and hopefully it'll deliver on some of the things that we have big unmet needs, so thanks to you. This has really been fun.Kate Crawford (50:35):This has been wonderful. And likewise, Eric, your work has just been a fantastic influence and I've been delighted to get to know you over the years and let's see what happens. 
It's going to be a wild ride from now to who knows when.

Eric Topol (50:48):
No question, but you'll keep us straight, I know that. Thank you so much.

Kate Crawford (50:52):
Thanks so much, Eric.

*******************************

Your support in subscribing to Ground Truths, and sharing it with your network of friends and colleagues, is much appreciated.

The Ground Truths newsletters and podcasts are all free, open-access, without ads.

Voluntary paid subscriptions all go to support Scripps Research. Many thanks for that; they greatly helped fund our summer internship programs for 2023 and 2024.

Thanks to my producer Jessica Nguyen and to Sinjun Balabanoff for audio and video support at Scripps Research.

Note: you can select preferences to receive emails about newsletters, podcasts, or all of them; I don't want to bother you with an email for content that you're not interested in.

Comments for this post are welcome from all subscribers.

Get full access to Ground Truths at erictopol.substack.com/subscribe
From deepfakes to the fear of AI taking jobs, to the social media giants making money from abusive content, our technology-dominated world is in a crisis – what are the solutions? AI researcher Kerry McInerney applies a feminist perspective to data, algorithms, and intelligent machines. AI-powered tech, and generative AI in particular, poses new challenges for cybersecurity. Kerry proposes a new take on AI, looking at how it can be used on a small scale, acknowledging culture and gender, and tailoring the technology for local applications rather than pushing for global, one-size-fits-all strategies. And in addressing corporate responsibility in Big Tech, Kerry discusses how tackling harassment online requires an understanding of the social, political, and psychological dimensions of harassment, particularly of women in the wider world, rather than treating it as a purely technical problem. Dr Kerry McInerney is a research fellow at the Leverhulme Centre for the Future of Intelligence, University of Cambridge, and the AI Now Institute. This is the last of four programmes from the Oxford Literary Festival, presented by Nuala McGovern and produced by Julian Siddle. Recorded in front of an audience at Worcester College, Oxford.
How has our data become the world's most valuable resource? What privacy tradeoffs are we making when we engage with personalized apps, recommendations, and always-connected smart devices? Is our personal data being used to make things better, or to make tech giants even more powerful? And what do “cookies” have to do with all this? Host Raffi Krikorian chats with experts about data's role in AI, “big data” and the data economy, surveillance capitalism, and much more. Guests include AI researcher Amba Kak, executive director of the AI Now Institute; data scientist Chris Wiggins, co-author of How Data Happened; media scholar, tech writer, and internet activist Ethan Zuckerman; engineer and inventor of the cookie Lou Montulli; and Harvard professor Jonathan Zittrain. To learn more about Technically Optimistic and to read the transcript for this episode: emersoncollective.com/technically-optimistic-podcast For more on Emerson Collective: emersoncollective.com Learn more about our host, Raffi Krikorian: emersoncollective.com/raffi Technically Optimistic is produced by Emerson Collective with music by Mattie Safer. Subscribe to our weekly newsletter: technicallyoptimistic.substack.com Follow on social media @emersoncollective and @emcollectivepodcasts Email us with questions and feedback at us@technicallyoptimistic.com To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
It's been a wild last year or so in tech. We've seen a marked rise in the development of artificial intelligence, large language models and prolific growth of augmented reality systems. At the same time, it can feel like we're moving backwards as concerns continue to rise about user privacy and the methods by which personal data is collected and monetized. Our guest this week points out that protecting privacy requires tech companies to ditch traditional business models that monetize user surveillance. Meredith Whittaker is president of Signal App and serves as the chief advisor for the AI Now Institute. She joins WITHpod to discuss the rise of big tech, the trajectory of the internet from being more commercialized to open, concerns about tech's role in American democracy, her thoughts on proposed TikTok bans and more.
Today's episode is a special treat for all you tech enthusiasts, Black Mirror binge-watchers, and fans of the Terminator franchise! I am delving deep into the realm of artificial intelligence, but with a twist – through the lens of feminist ethics. I sit down with the brilliant Dr. Kerry McInerney, a scholar and AI ethicist whose work sheds light on the intersection of feminism, race, and technology. She is currently a Research Associate at the Leverhulme Centre for the Future of Intelligence and a research fellow at AI Now Institute. Her scholarly writing has appeared in multiple journals, and in 2021, she released her podcast The Good Robot, which she cohosts alongside fellow AI scholar Dr. Eleanor Drage. This podcast explores the more nuanced side of AI—not just the science behind it, but its social and ethical sides. Immigrantly is a weekly podcast that celebrates the extraordinariness of immigrant life. We do this by providing our listeners with authentic, unvarnished insights into the immigrant identity in America. Immigrantly has garnered significant recognition and has been featured in renowned media outlets such as the Nieman Storyboard, The Guardian, The Slowdown, and CNN. Join us as we create new intellectual engagement for our audience. You can get more information at http://immigrantlypod.com Please share the love and leave us a review on Apple Podcasts & Spotify to help more people find us! You can connect with Saadia on Twitter @swkkhan Email: saadia@immigrantlypod.com Host & Producer: Saadia Khan I Content Writer: Michaela Strauther and Saadia Khan I Editorial review: Shei Yu I Sound Designer & Editor: Haziq Ahmad Farid I Immigrantly Theme Music: Simon Hutchinson | Other Music: Epidemic Sound Immigrantly podcast is an Immigrantly Media Production. For advertising inquiries, you can contact us at info@immigrantlypod.com Learn more about your ad choices. Visit megaphone.fm/adchoices
Right now, the federal government is piloting its response to Silicon Valley's AI boom. It's called the National Artificial Intelligence Research Resource, and it's supposed to “democratize” access to AI by making gigantic and expensive AI models available to academic researchers. Marketplace's Lily Jamali spoke with Sarah Myers West, co-executive director of the AI Now Institute, who is skeptical of the initiative's goals. As Myers West explains, the issue with the NAIRR is the government can't launch an AI program of its own without partnerships that are potentially lucrative for Big Tech.
The Global AI Arms Race: Leading global powers have for some time been positioning themselves at the forefront of AI development. These governments see AI as such a powerful tool that each wants to be first to control it; at the same time, they also want to regulate it. But with regulation, development can be slow, says Dr Kerry McInerney from the University of Cambridge and the AI Now Institute, which leaves the dilemma of who should control AI progress. Dr McInerney is on the show to discuss the global AI arms race.

Having your say in AI development: As governments race to win the global AI race, with industry pushing development too, is there any possibility that the public can have a say in how AI should operate? A project involving AI is itself being used as a tool to determine which areas should be discussed with regard to the progress of AI models. It's called Recursive Public, and it's arguably one of the first serious efforts to frame some global consensus on AI's workings and governance. It's a ground-up project involving ordinary members of the public as well as various professional bodies and the AI community at large. SOEP reporter Shiroma Silva asks whether we really can have a say in the development of AI.

The programme is presented by Gareth Mitchell and the studio expert is Peter Guest.

More on this week's stories: Dr Kerry McInerney; Recursive Public

Editor: Ania Lichtarowicz
Production Manager: Liz Tuohy
Recording and audio editing: Lansons | Team Farner

For new episodes, subscribe wherever you get your podcasts. Follow us on all the socials: join our Facebook group, Instagram, Twitter/X. If you like Somewhere on Earth, please rate and review it on Apple Podcasts. Contact us by email: hello@somewhereonearth.co. Send us a voice note via WhatsApp: +44 7486 329 484. Find a Story + Make it News = Change the World
New year, same Bullshit Mountain. Alex and Emily are joined by feminist technosolutionism critics Eleanor Drage and Kerry McInerney to tear down the ways AI is proposed as a solution to structural inequality, including racism, ableism, and sexism -- and why this hype can occlude the need for more meaningful changes in institutions.

Dr. Eleanor Drage is a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence. Dr. Kerry McInerney is a Research Fellow at the Leverhulme Centre for the Future of Intelligence and a Research Fellow at the AI Now Institute. Together they host The Good Robot, a podcast about gender, feminism, and whether technology can be "good" in either outcomes or processes.

Watch the video version of this episode on PeerTube.

References:
HireVue promo: How Innovative Hiring Technology Nurtures Diversity, Equity, and Inclusion
Algorithm Watch: The [German Federal Asylum Agency]'s controversial dialect recognition software: new languages and an EU pilot project
Want to see how AI might be processing video of your face during a job interview? Play with React App, a tool that Eleanor helped develop to critique AI-powered video interview tools and the 'personality insights' they offer.
Philosophy & Technology: Does AI Debias Recruitment? Race, Gender, and AI's "Eradication of Difference" (Drage & McInerney, 2022)
Communication and Critical/Cultural Studies: Copies without an original: the performativity of biometric bordering technologies (Drage & Frabetti, 2023)

Fresh AI Hell:
Internet of Shit 2.0: a "smart" bidet
Fake AI "students" enrolled at Michigan University
Synthetic images destroy online crochet groups
"AI" for teacher performance feedback
Palate cleanser: "Stochastic parrot" is the American Dialect Society's AI-related word of the year for 2023!

You can check out future livestreams at https://twitch.tv/DAIR_Institute.

Follow us!
Emily: Twitter: https://twitter.com/EmilyMBender | Mastodon: https://dair-community.social/@EmilyMBender | Bluesky: https://bsky.app/profile/emilymbender.bsky.social
Alex: Twitter: https://twitter.com/@alexhanna | Mastodon: https://dair-community.social/@alex | Bluesky: https://bsky.app/profile/alexhanna.bsky.social

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.
Meredith Whittaker is the President of Signal and chief advisor to the AI Now Institute. She joins Big Technology Podcast for a lively discussion about the state of Google, whether AI is for real or a marketing gimmick, whether the online advertising business model is ethically broken, and the state of the Signal messaging app. Stay tuned for the second half, where we discuss the mysterious nature of Telegram. And enjoy the cool-headed arguments throughout. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
While the What Next: TBD team spends some time with their families during the holidays, we revisit some of 2023's biggest, strangest, and best stories. Regularly scheduled programming resumes in January. Artificial intelligence—as it already exists today—is drawing from huge troves of surveillance data and is rife with the biases built into the algorithm, in service of the huge corporations that develop and maintain the systems. The fight for the future doesn't look like war with Skynet; it's happening right now on the lines of the Writer's Guild strike. Guests: Meredith Whittaker, president of the Signal Foundation, co-founder of the AI Now Institute at NYU Originally aired May 12th, 2023 Learn more about your ad choices. Visit megaphone.fm/adchoices
Researchers Sarah West and Andreas Liesenfeld join Alex and Emily to examine what software companies really mean when they say their work is 'open source,' and call for greater transparency. This episode was recorded on November 20, 2023.

Dr. Sarah West is the managing director of the AI Now Institute. Her award-winning research and writing blends social science, policy, and historical methods to address the intersection of technology, labor, antitrust, and platform accountability. She's the author of the forthcoming book "Tracing Code."

Dr. Andreas Liesenfeld is assistant professor in both the Centre for Language Studies and the department of language and communication at Radboud University in the Netherlands. He's a co-author on research from this summer critically examining the true "open source" nature of models like LLaMA and ChatGPT.

References:
Yann LeCun testifies on 'open source' work at Meta
Meta launches LLaMA 2
Stanford Human-Centered AI's new transparency index
Coverage in The Atlantic
Eleuther critique
Margaret Mitchell critique
Opening up ChatGPT (Andreas Liesenfeld's work)
Webinar

Fresh AI Hell:
Sam Altman out at OpenAI
The Verge: Meta disbands their Responsible AI team
Ars Technica: Lawsuit claims AI with 90 percent error rate forces elderly out of rehab, nursing homes
Call-out of Stability and others' use of "fair use" in AI-generated art
A fawning profile of OpenAI's Ilya Sutskever

You can check out future livestreams at https://twitch.tv/DAIR_Institute.

Follow us!
Emily: Twitter: https://twitter.com/EmilyMBender | Mastodon: https://dair-community.social/@EmilyMBender | Bluesky: https://bsky.app/profile/emilymbender.bsky.social
Alex: Twitter: https://twitter.com/@alexhanna | Mastodon: https://dair-community.social/@alex | Bluesky: https://bsky.app/profile/alexhanna.bsky.social

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.
On November 15, the Open Markets Institute and the AI Now Institute hosted an event in Washington, D.C., featuring discussion of how to understand the promise, threats, and practical regulatory challenges presented by artificial intelligence. Justin Hendrix moderated a discussion on harms to artists and creators, exploring questions around copyright and fair use, the ways in which AI is shaping the entire incentive structure for creative labor, and the economic impacts of the "junkification" of online content. The panelists included Liz Pelly, a freelance journalist specializing in the music industry; Ashley Irwin, President of the Society of Composers & Lyricists; and Jen Jacobsen, Executive Director of the Artist Rights Alliance.
Artificial Intelligence (AI) is on every business leader's agenda. How do you ensure the AI systems you deploy are harmless and trustworthy? This month, Azeem picks some of his favorite conversations with leading AI safety experts to help you break through the noise. Today's pick is Azeem's conversation with Meredith Whittaker, president of the Signal Foundation. Meredith is a co-founder and chief advisor of the AI Now Institute, an independent research group looking at the social impact of artificial intelligence.
In this episode, we speak with Julia Rhodes Davis, a Senior Advisor at Data & Society, about her recent report "Advancing Racial Equity Through Technology Policy," published by the AI Now Institute. This comprehensive report provides an in-depth examination of how the technology industry impacts racial inequity and concrete policy recommendations for reform. A critical insight from the report is that advancing racial equity requires a holistic approach. The report provides policy recommendations to reform antitrust law, ensure algorithmic accountability, and support tech entrepreneurship for people of color.

In our interview, Julia explains how advancing racial equity requires policy change as well as coalition-building with impacted communities. She discusses the urgent need to reform practices of algorithmic discrimination that restrict opportunities for marginalized groups. Julia highlights some positive momentum from federal and state policy efforts and she encourages people to get involved with local organizations, providing a great list of organizations you might consider.

Links:
AI Now Institute
Advancing Racial Equity Through Technology Policy report
Algorithmic Justice League
Athena
Color of Change
Data for Black Lives
Data & Society
Media Justice
Our Data Bodies

About Artificiality from Helen & Dave Edwards:
Artificiality is dedicated to understanding the collective intelligence of humans and machines. We are grounded in the sciences of artificial intelligence, collective intelligence, complexity, data science, neuroscience, and psychology. We absorb the research at the frontier of the industry so that you can see the future, faster. We bring our knowledge to you through our newsletter, podcast interviews with academics and authors, and video interviews with AI innovators. Subscribe at artificiality.world.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.artificiality.world
Jai Vipra is a research fellow at the AI Now Institute where she focuses on competition issues in frontier AI models. She recently published the report Computational Power and AI, which focuses on compute as a core dependency in building large-scale AI. We found this report to be an important addition to the work covering the generative AI industry because compute is incredibly important but not very well understood. In the report, Jai breaks down the key components of compute, analyzes the supply chain and competitive dynamics, and aggregates all the known economics. In this interview, we talk with Jai about the report, its implications, and her recommendations for industry and policy responses.

About Artificiality from Helen & Dave Edwards:
Artificiality is dedicated to understanding the collective intelligence of humans and machines. We are grounded in the sciences of artificial intelligence, collective intelligence, complexity, data science, neuroscience, and psychology. We absorb the research at the frontier of the industry so that you can see the future, faster. We bring our knowledge to you through our newsletter, podcast interviews with academics and authors, and video interviews with AI innovators. Subscribe at artificiality.world.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.artificiality.world
Internet of Humans, with Jillian York & Konstantinos Komaitis
In this episode, Konstantinos Komaitis and Jillian York talk with Meredith Whittaker, the President of the Signal Foundation. Meredith is also the current Chief Advisor, and the former Faculty Director and Co-Founder, of the AI Now Institute. Meredith shares with us the role of Signal in ensuring the privacy and security of communications and her plans for the messaging service. We also discuss the fight for encryption, as pressure to undermine it is increasing in jurisdictions across the world. Finally, we discuss AI, its governance, and the concerns that policymakers have recently raised. This podcast is edited by Javier Pallero. The music in this episode is Nightlapse by Arthur Vyncke | https://soundcloud.com/arthurvost. Music promoted by https://www.free-stock-music.com. Creative Commons / Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0): https://creativecommons.org/licenses/by-sa/3.0/deed.en_US
Discussions about artificial intelligence (AI) have quickly turned from the excited to the apocalyptic. Are warnings that AI could pose an existential threat valid, or do they distract from the real danger AI is already causing?

Contributors:
Charlie Beckett - head, Polis LSE JournalismAI Project
Yoshua Bengio - professor of computer science, University of Montreal
Sarah Myers West - director, AI Now Institute
Émile P Torres - author of Human Extinction: A History of the Science and Ethics of Annihilation

On our radar: A year on from the murders of British journalist Dom Phillips and Brazilian Indigenous activist Bruno Pereira, producer Flo Phillips reports on the justice being served and how their work goes on, done by others.

Hitler, the Holocaust and the politics of memory: Eight decades after the first train of prisoners arrived at the Auschwitz-Birkenau extermination camp, Holocaust survivors - and their testimonies - are dwindling. Producer Johanna Hoes explores the politics of memory and the importance of recounting history, so it doesn't repeat itself.

Contributors:
Jan Grabowski - professor of history, University of Ottawa
Mindu Hornick - Auschwitz survivor
Nelly Ben-Or - Holocaust survivor and pianist
Pawel Sawicki - press officer, Auschwitz-Birkenau State Museum

Subscribe to our channel: http://bit.ly/AJSubscribe
Follow us on Twitter: https://twitter.com/AJEnglish
Find us on Facebook: https://www.facebook.com/aljazeera
Check our website: http://www.aljazeera.com/
Check out our Instagram page: https://www.instagram.com/aljazeeraenglish/
@AljazeeraEnglish
#Aljazeeraenglish #News
Artificial intelligence is a powerful technology that can transform human lives but industry leaders warn that the tool they built could one day pose an existential threat to humanity. So, how can governments regulate AI without stifling innovation? Join host Mohammed Jamjoom. Guests: David Krueger - Assistant Professor in Machine Learning and Computer Vision at the University of Cambridge. Sarah Myers West - Managing Director of the AI Now Institute. Ramesh Srinivasan - Professor of Information Studies at the University of California.
Artificial intelligence—as it already exists today—is drawing from huge troves of surveillance data and is rife with the biases built into the algorithm, in service of the huge corporations that develop and maintain the systems. The fight for the future doesn't look like war with Skynet; it's happening right now on the lines of the Writer's Guild strike. Guests: Meredith Whittaker, president of the Signal Foundation, co-founder of the AI Now Institute at NYU If you enjoy this show, please consider signing up for Slate Plus. Slate Plus members get benefits like zero ads on any Slate podcast, bonus episodes of shows like Slow Burn and Dear Prudence—and you'll be supporting the work we do here on What Next TBD. Sign up now at slate.com/whatnextplus to help support our work. Learn more about your ad choices. Visit megaphone.fm/adchoices
We are very pleased to be joined by Sarah Myers West, managing director of the AI Now Institute and former Senior Advisor on AI to the FTC, to discuss their excellent new report, 2023 Landscape: Confronting Tech Power. We lay out how the dominance of Big Tech is built on a triforce of advantages: data, compute, and geopolitics. These advantages serve as both core pillars and core chokepoints for corporate control over AI. And if we hope to overturn Big Tech's power, then we must undermine these advantages. The report lays out a number of key interventions for policy, regulation, and organizing. ••• 2023 Landscape report: https://ainowinstitute.org/general/2023-landscape-executive-summary ••• Sarah's twitter: https://twitter.com/sarahbmyers Subscribe to hear more analysis and commentary in our premium episodes every week! https://www.patreon.com/thismachinekills Hosted by Jathan Sadowski (www.twitter.com/jathansadowski) and Edward Ongweso Jr. (www.twitter.com/bigblackjacobin). Production / Music by Jereme Brown (www.twitter.com/braunestahl)
It's shaping up to be the great question of our time: AI is moving at a pace that its creators never really expected. This week on Taking Stock, Mandy Johnston asks whether we can control it before it controls us, as she talks to Sarah Myers West of the AI Now Institute. And back home, as organised crime featured large in the news this week, Mandy looks at the figures involved and at where and how criminal gangs are stashing their money in the modern era, with John Mooney of the Sunday Times. And finally, bumper revenues and exports were outlined this week by the Government and Enterprise Ireland. Mandy is joined by Cliff Taylor of the Irish Times to outline what this might mean for the future and whether tax cuts will be a major discussion point in the months ahead for the Government.
Artificial intelligence is everywhere, growing increasingly accessible and pervasive. Conversations about AI often focus on technical accomplishments rather than societal impacts, but leading scholar Kate Crawford has long drawn attention to the potential harms AI poses for society: exploitation, discrimination, and more. She argues that minimizing risks depends on civil society, not technology. The ability of people to govern AI is often overlooked because many people approach new technologies with what Crawford calls “enchanted determinism,” seeing them as both magical and more accurate and insightful than humans. In 2017, Crawford cofounded the AI Now Institute to explore productive policy approaches around the social consequences of AI. Across her work in industry, academia, and elsewhere, she has started essential conversations about regulation and policy. Issues editor Monya Baker recently spoke with Crawford about how to ensure AI designers incorporate societal protections into product development and deployment. Resources Learn more about Kate Crawford's work by visiting her website and the AI Now Institute. Read her latest book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Visit the Anatomy of an AI System artwork at the Museum of Modern Art, or see and learn about it virtually here. Working with machine learning datasets? Check out Crawford's critical field guide to think about how to best work with these data.
Meredith Whittaker, co-founder and faculty director of the AI Now Institute and Minderoo Research Professor at New York University, talks about the politics of digital technologies with Peoples & Things host Lee Vinsel. The conversation examines Whittaker's fascinating career moving between industry and academia, her role in the Google walkout, and her hopes for the future, including the role of social movements in fomenting political change. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
We're joined by Dr. Theodora Dryer, the research lead for Climate + Water at the AI Now Institute and research assistant professor at New York University's Tandon School of Engineering. We're also joined by Dr. Amrah Salomón, an assistant professor of English at the University of California Santa Barbara and a founding member of the Center for Interdisciplinary Environmental Justice. Dr. Dryer and Dr. Salomón, among others, collaborated to produce the report Water Justice and Technology, which covers both North and Central America.
I have been patiently waiting for this episode to be released! I am really excited to pair my nerdiness about artificial intelligence and sex work by welcoming fellow dominatrix, researcher, and professor Dr. Mistress Snow onto the show this week! Dr. Snow is an adjunct professor and a researcher at NYU's AI Now Institute and the UCLA Center for Critical Intelligence, and as a sex worker, she understands the many discriminatory practices, carried out through artificial intelligence, that major platforms and companies adopt to prevent sex workers from doing our jobs. Dr. Snow opens up about her current research, shares her thoughts on the recent acquisition of Twitter by Elon Musk and what that might mean for sex workers, and discusses the ways that artificial intelligence operates that do not allow us to thrive. It is a completely fascinating conversation (at least for me!) that absolutely affects our livelihood and how we conduct business. Watch the video exclusive only on Patreon: https://patreon.com/strippedbysia Twitter.com/MistressSnowPHD Don't forget to LIKE, RATE, SHARE, REVIEW, and SUBSCRIBE! Twitter.com/strippedbysia Patreon.com/strippedbysia Instagram.com/strippedbysia
In this episode, Columbia literature curator Lina Moe sits down with historian and curator of NYU's AI Now Institute and author of A People's History of Computing in the United States, Joy Lisi Rankin. Lina and Joy discuss urgent questions about the social history of computing; the ethical dilemmas posed by the power of tech industry giants today; and how race, class, and gender factor into online culture. Lina and Joy also speculate on the paths not taken in computing. Instead of understanding computers as commodities for purchase, for example, computers could have been considered necessary public goods, similar to utilities. Joy provides fascinating archival stories that shift the paradigms of computer history, like how instant messaging was created as an educational tool decades before AOL popularized it--or how a Minnesota librarian wrote the early software for what became Apple's music library, but was never paid for it. Overdue Conversations is a podcast about the ways archives inform our discussions around history, literature, and politics. From digital publishing to reparative justice, climate change to public health, this series of overdue conversations takes archival documents out of the stacks and into the public forum to consider how collecting practices, selective reading, and erasure of past knowledge informs and distorts contemporary debates. Music is by Poddington Bear via the Free Music Archive used under Creative Commons license. Overdue tile design by Amy Howden-Chapman. Researched and produced by Lina Moe and Anirbaan Banerjee with assistance from Amanda Martin-Hardin.