www.yinzaregood.com
Here at Yinz Are Good, we've got an abundance of generosity, selflessness, and kindness to share. We've got joy, inspiration, laughter, and stories of people helping people. We've got... community. And we're so glad that you're part of it. In this episode, you'll hear from two remarkable women, Rachel Antin and Kate Crawford, from One Day To Remember. This incredible nonprofit gives parents with advanced-stage cancer the opportunity to have one fun, all-expenses-paid day with their children - a day where they can forget about cancer and enjoy making memories together. Tressa sits down with Rachel and Kate to learn all about the org, as well as about each of their journeys. This is one beautiful conversation about people taking care of each other.
Want to learn more about the podcast? Check out our website: https://www.yinzaregood.com/
Instagram: @yinzaregood
Facebook: @YinzAreGood
Have a story of GENEROSITY or KINDNESS to share with us? Email us at yinzaregood@gmail.com
To request a KINDNESS CRATE drop-off at your business or school, email us at yinzaregood@gmail.com
José Allona, Claudia Gutierrez and Damián Calderón are designers and design professors. In this interview they explain why they believe this topic is worth researching in Latin America, and what they are doing about it. We talk about decolonizing Artificial Intelligence. They argue that:
1. We are taking for granted that the results we get when using AI are accurate.
2. This technology is being imposed on us.
3. We want to have a role in the development of AI-based services, not just be users.
4. We need to understand how we want these technologies to benefit us in the Global South.
5. We have to act, and act now, because otherwise it will be too late.
I did the following interviews in this series with Damián and Claudia as co-pilots. Between the three of them, they chose whom to interview and proposed the order of the interviews. This interview is part of the lists: Argentina and design, Chile and design, UX design, Artificial Intelligence.
They recommend:
- Atlas of AI, by Kate Crawford
- Timnit Gebru
- Sherpas, an Argentine podcast
- Google's guides on AI research
- HCI in AI, Carnegie Mellon; mappings of AI capabilities
- The European Union's risk-based approach to AI
- UNESCO's AI guidance for education
- UX IA America Latina
Complex problems cannot be solved if examined only through a narrow lens. Enter interdisciplinarity. It is now widely accepted that drawing on varied expertise and perspectives is the only way we can understand and tackle many of the most challenging issues we face, as individuals and as a species. So, there is a growing movement towards more cross-disciplinary working in higher education, but it faces challenges. Interdisciplinarity requires a shift of mindset in an academy built upon clear disciplinary distinctions and must compete for space in already overcrowded curricula. We speak to two leading scholars in interdisciplinary research and teaching to find out why it is so important and how they are encouraging more academics and students to break out of traditional academic silos. Gabriele Bammer is a professor of integration and implementation sciences (i2S) at the Australian National University. She is the author of several books, including ‘Disciplining Interdisciplinarity', and is the inaugural president of the Global Alliance for Inter- and Transdisciplinarity. To support progress in interdisciplinarity around the world, she runs the Integration and Implementation Insights blog and a repository of theory, methods and tools underpinning i2S. Gabriele has held visiting appointments at Harvard University's John F. Kennedy School of Government, the National Socio-Environmental Synthesis Center at the University of Maryland and the Institute for Advanced Sustainability Studies in Potsdam, Germany. Kate Crawford is an international scholar of the social implications of artificial intelligence who has advised policymakers in the United Nations, the White House, and the European Parliament on AI, and currently leads the Knowing Machines Project, an international research collaboration that investigates the foundations of machine learning. She is a research professor at USC Annenberg in Los Angeles, a senior principal researcher at MSR in New York, an honorary professor at the University of Sydney, and the inaugural visiting chair for AI and Justice at the École Normale Supérieure in Paris. Her award-winning book, Atlas of AI, reveals the extractive nature of this technology, while her creative collaborations, such as Anatomy of an AI System with Vladan Joler and Excavating AI with Trevor Paglen, explore the complex processes behind each human-AI interaction, showing the material and human costs. Her latest exhibition, Calculating Empires: A Genealogy of Technology and Power 1500-2025, opened in Milan in November 2023 and won the Grand Prize of the European Commission for art and technology. More advice and insight can be found in our latest Campus spotlight guide: A focus on interdisciplinarity in teaching.
For this episode of the Global Exchange podcast, Colin Robertson talks with Solange Marquez and Andres Rozental about the Mexican reaction to the return of President Donald Trump and the continuing threat of tariffs. // Participants' bios - CGAI Fellow Solange Marquez is a professor at the Law School of the National Autonomous University of Mexico (UNAM). A former VP of the Mexican Council on International Affairs (Comexi), she is its representative in Canada. - Andres Rozental served as Mexico's ambassador to Sweden and the United Kingdom and as deputy foreign minister. He is the Founding President of the Mexican Council on Foreign Relations. He holds the lifetime rank of eminent ambassador of Mexico. // Host bio: Colin Robertson is a former diplomat and Senior Advisor to the Canadian Global Affairs Institute, www.cgai.ca/colin_robertson // Reading Recommendations: - "Juan Carlos: Steering Spain from Dictatorship to Democracy", by Paul Preston: https://www.amazon.ca/Juan-Carlos-Steering-Dictatorship-Democracy-ebook/dp/B009UL1WO8 - "Ir a La Habana", by Leonardo Padura: https://www.amazon.ca/Habana-Cr%C3%B3nica-viajes-Havana-Chronicle/dp/6073918933 - "Prime Target": https://www.imdb.com/title/tt31186958/ - "White Working Class: Overcoming Class Cluelessness in America", by Joan C. Williams: https://www.amazon.ca/White-Working-Class-Overcoming-Cluelessness/dp/1633693783 - "Atlas of AI", by Kate Crawford: https://yalebooks.yale.edu/book/9780300264630/atlas-of-ai/ // Recording Date: February 9, 2025.
Kate Crawford, senior research manager for Workplace Safety Programs at the National Safety Council, joins the podcast to discuss tips and guidance on adopting new safety technologies in the “Five Questions With …” segment. We also discuss content from the February issue of Safety+Health, including our annual recognition of CEOs Who Get It when it comes to safety. Read episode notes, visit links, sign up to be notified by email when each new episode has been published, and find other ways to subscribe. http://www.safetyandhealthmagazine.com/articles/26444-safe-side-podcast-episode-60-adoption-new-safety-technology Published February 2025
For this 4th episode of WAC Morning, Diane Drubay reviews the major Web3 news affecting museums and cultural institutions. The WAC (Web3 for the Art and Culture) program continues its mission of supporting museums in adopting blockchain, immersive, and artificial intelligence technologies. For this new season, two major American institutions are joining the initiative: the Museum of Art and Light and the Toledo Museum of Art, which are notably exploring the use of NFTs to engage their audiences.
The episode also covers the growing place of artificial intelligence in art, against the backdrop of the controversy around the AI sale organized by Christie's. Some artists denounce the use of models trained without respect for copyright, while others defend the approach of a controlled AI fed with proprietary datasets. This question is part of a wider debate, as the AI Summit in Paris was showcasing works by digital artists on the giant screens of the Grand Palais.
Another notable project is the Metropolitan Museum of Art's launch of Art Links, an educational mobile game whose goal is to explore the museum's collection through associations of artworks and concepts, while rewarding players with NFTs. This initiative illustrates how museums are seeking to reach a younger, connected audience while showcasing their often little-known permanent collections.
Web3 also continues to establish itself through museum acquisitions. The Francisco Carolina Museum in Linz, a pioneer in collecting digital works on blockchain, recently added several NFTs to its collections, including creations by Auriea Harvey, Too Much Lag and Andrea Chiampo. For its part, the Museum of Moving Images in New York has formalized the entry of a set of works by Auriea Harvey into its permanent collection.
Finally, two upcoming exhibitions illustrate institutions' growing interest in digital art. In Lyon, the MAC will present Echoes of the Past, Premises of the Future in March, an exploration of nature sublimated by digital technology. In Paris, the Jeu de Paume will host Le Monde selon l'IA in April, a major exhibition on generative and analytical photography, with artists such as Trevor Paglen, Kate Crawford and Refik Anadol.
Key quote of the episode: "Just because a work isn't sold immediately doesn't mean it won't end up in a prestigious museum collection." – Diane Drubay
To go further:
* The WAC program
* The Metropolitan Museum's Art Links game
* Exhibition at the MAC Lyon: mac-lyon.com
* Exhibition at the Jeu de Paume: jeudepaume.org
* RuneArt's Twitter account
* Video replay of the episode here
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.nftmorning.com
We've spent a lot of time on this show talking about AI: how it's changing war, how your doctor might be using it, and whether or not chatbots are curing, or exacerbating, loneliness. But what we haven't done on this show is try to explain how AI actually works. So this seemed like as good a time as any to ask our listeners if they had any burning questions about AI. And it turns out you did. Where do our queries go once they've been fed into ChatGPT? What are the justifications for using a chatbot that may have been trained on plagiarized material? And why do we even need AI in the first place? To help answer your questions, we are joined by Derek Ruths, a Professor of Computer Science at McGill University, and the best person I know at helping people (including myself) understand artificial intelligence.
Further Reading:
“Yoshua Bengio Doesn't Think We're Ready for Superhuman AI. We're Building It Anyway,” Machines Like Us podcast
“ChatGPT is blurring the lines between what it means to communicate with a machine and a human,” by Derek Ruths
“A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going,” by Michael Wooldridge
“Artificial Intelligence: A Guide for Thinking Humans,” by Melanie Mitchell
“Anatomy of an AI System,” by Kate Crawford and Vladan Joler
“Two years after the launch of ChatGPT, how has generative AI helped businesses?,” by Joe Castaldo
As Fianna Fáil and Fine Gael engage in talks to form the next Irish government, the controversial issue of facial recognition technology (FRT) in policing is back in the spotlight. With plans to introduce FRT into Garda operations already on the table, this topic is expected to become a flashpoint in political and public debates in the months ahead.
Adding to the conversation, a public Think-In event titled Facing the Future: Let's Talk Facial Recognition Technology was held recently at The Digital Hub as part of Beta Festival. Co-organised by Dr Ciara Bracken-Roche and Dr Emma Clarke of the ADAPT Research Ireland Centre for AI-Driven Digital Content Technology, the event provided a platform for experts and citizens to critically assess the potential impact of FRT on Irish society. The session featured contributions from Daniel Kahn Gillmor, Senior Staff Technologist at the ACLU, and Olga Cronin, Senior Policy Officer at the Irish Council for Civil Liberties (ICCL). Both highlighted major concerns, including the risk of bias in FRT systems, threats to personal privacy, and the broader implications for civil liberties. Participants were invited to discuss real-world scenarios, such as using FRT to identify a vandal after a car was damaged or tracking a hit-and-run driver. These discussions revealed a complex web of ethical and practical questions about how this technology might be used responsibly, or abused, in law enforcement. The Think-In also included Calculating Empires, an immersive research visualisation by Kate Crawford and Vladan Joler. The artwork examines how technological systems and societal structures have evolved over centuries, offering a powerful lens through which to view the modern surveillance landscape.
This debate takes place against the backdrop of significant political change. As the new government takes shape, its stance on FRT will likely signal Ireland's broader approach to balancing technological innovation with the protection of civil rights. The issue became especially pressing last year, when the government proposed using FRT for serious crimes, including riots and violent disorder, following public disturbances in Dublin. Supporters argue that FRT could improve Garda efficiency by speeding up video analysis in investigations, while opponents, including the Irish Council for Civil Liberties, warn of the potential for mass surveillance and errors that disproportionately affect vulnerable communities. Calls for robust safeguards and comprehensive legislative scrutiny have been growing louder. With public trust, privacy, and security at stake, the debate over facial recognition technology is certain to remain a high-profile issue as the next government sets its priorities.
ADAPT researchers are at the forefront of addressing these challenges. Dr Abeba Birhane and Dr Ciara Bracken-Roche have provided expert testimony for the Oireachtas Joint Committee on Justice's Pre-Legislative Scrutiny of the General Scheme of the Garda Síochána (Recording Devices) (Amendment) Bill 2023, and co-authored prominent opinion pieces warning that granting Gardaí extensive FRT capabilities risks creating "roaming surveillance units" and foreshadowing "big problems" if such technology is adopted without rigorous safeguards. ADAPT's work on trustworthy AI focuses on ensuring that emerging technologies like FRT are developed and deployed ethically, transparently, and with public trust at their core. See more stories here.
On this final episode of Byte Into IT for the year, the whole crew is in, and they're joined by Kate Crawford, an internationally leading scholar of artificial intelligence and its impacts. Her most recent book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, explores AI's rising impact.
We are not psychologically prepared for the age of AI. Some people will fall in love with AI, and all of us will anthropomorphize it; both have unexpected consequences. Robot psychologist Martina Mara on the psychology of AI.
This week we're speaking with Steven Heath, technical director at Knauf Insulation (UK and Ireland) and a really interesting and experienced person in the sector. So while we had him, we ran through a bunch of our favourite hoary subjects: measuring performance, performance guarantees, and what we think about EPCs. Knauf is a firm that's done some really interesting work in all of these areas and has even managed to make headway with the UK state in getting them to think about the value of testing performance, with EPCs and whatever SHDF is called now (the state-driven money tap for decarbonising social housing).
Notes from the show
Steven Heath on LinkedIn
Knauf Insulation's website
The ZAP episode with Kate Crawford about HTC and the 'snug factor': A new way to measure performance, negative energy use, and learning from disaster zones, with Kate Crawford (KLH Sustainability)
**SOME SELF-PROMOTING CALLS TO ACTION**
We don't actually earn anything from this, and it's quite a lot of work, so we have to promote the day jobs.
Follow us on the Zero Ambitions LinkedIn page (we still don't have a proper website)
Jeff, Alex, and Dan about websites, branding, and communications - zap@eiux.agency; Everything is User Experience
Subscribe and advertise with Passive House Plus (UK edition here too)
Check Lloyd's Substack: Carbon Upfront
Join ACAN
Join the AECB
Join the IGBC
Check out Her Own Space, the renovation and retrofit platform for women
**END OF SELF-PROMOTING CALLS TO ACTION**
Pfister, Sandra www.deutschlandfunk.de, Andruck - Das Magazin für Politische Literatur
"I'm just here to scream from the rooftops that the physical body is just straight up done being the primary focus. It doesn't want to be the primary focus. It was never intended to be the primary focus. It's just that people make a shit ton of money off of the physical body being the primary focus."Today, I'm joined by Kate Crawford, a long-term client, who's a licensed Physical Therapist and CEO of Korē Breathwork specializing in using the body to uncover hidden traumas and emotional blocks. Kate's journey from dealing with debilitating anxiety to leaving her job in healthcare and starting her own business is nothing short of inspiring. Through her program, "The Secret Language of the Body," Kate helps individuals understand and heal chronic pain by addressing ancestral trauma, emotional wounds, and energetic imbalances.We discuss her unique approach, blending physical therapy with deep emotional and spiritual work, and how she has successfully integrated the tools from the Metamorphosis Method to create her own powerful methodology. If you're interested in holistic healing, personal transformation, and the incredible wisdom our bodies hold, this one's for you. TODAY'S HIGHLIGHTS(00:00) Intro(01:23) In Today's Episode...Kate Crawford: A Journey of Transformation and Commitment(04:36) The Mothership Mastermind(07:28) Kate's Background and How it All Started(12:56) Processing The Metamorphosis. Understanding The Secret Language of the Body(20:11) Exploring the Mother Wound(26:16) The Ultimate Nervous System Reset(32:15) The 3 Major Energy Patterns(33:28) Anger, Back Pain and Emotional Childhood (37:32) Migraines and Suppressed Emotions(39:54) Hypervigilance and Over Responsibility(42:10) Grieving and Self-Acceptance(47:31) Ancestral Feelings and Chronic Pain(51:14) Breathwork and Energetic Shifts(56:36) Special Session OfferCONTACT KATEGet The Body Communication Call A 45-minute 1:1 virtual call to pinpoint the exact energetic pattern responsible for why pain continues to show up in your body!Follow on IG the_metaphysical_therapistVisit korebreathwork.comFind all of Kate's offerings HERE**WAYS TO ENTER MY WORLD**Leave a review, send us a screenshot and get a $250 credit, you can apply to anything else in my world.The Mothership gives you full premium access to my entire body of work. Sign up before the end of September to get a 60 min 1 on 1 call with me. You'll also be included in my exclusive 4 week mastermind TURNING POINT to create a permanent shift to get you to the next level.The Metamorphosis Method starts February, 2025. Master my proven methodology to guide your clients to rapidly and efficiently transmute lifetimes of familial and ancestral trauma on the deepest possible level.For coaches, healers and anyone who works with clients to create transformation (or if you desire to) this program provides you with a solid, deep and foundational skillset to create predictable results with your clients that you will become known for.CONTACT ALYSEJoin my FB groupIG @alyse_breathesVisit alysebreathes.cominfo@alysebreathes.com
CAISzeit – In what kind of digital society do we want to live?
Algorithms shape our lives: from the content we see on social media to the loans we are granted. But to what extent are algorithms fair and transparent? And what consequences can it have when they are not? Is justice programmable? We discuss these questions and more in this episode of CAISzeit with Miriam Fahimi. Miriam is a fellow at CAIS from April to September 2024 and is currently pursuing her PhD in Science and Technology Studies at the Digital Age Research Center (D!ARC) at the University of Klagenfurt. She researches "fairness in algorithms" and spent more than a year and a half at a credit company observing how transparent and fair algorithms are discussed there.
Recommendations on the topic
Research:
· Digital Age Research Center (D!ARC), University of Klagenfurt. https://www.aau.at/digital-age-research-center/
· Meisner, C., Duffy, B. E., & Ziewitz, M. (2022). The labor of search engine evaluation: Making algorithms more human or humans more algorithmic? New Media & Society. https://doi.org/10.1177/14614448211063860
· Poechhacker, N., Burkhardt, M., & Passoth, J.-H. (2024). Recommender Systems beyond the Filter Bubble: Algorithmic Media and the Fabrication of Publics. In J. Jarke, B. Prietl, S. Egbert, Y. Boeva, H. Heuer, & M. Arnold (Eds.), Algorithmic Regimes (pp. 207–228). Amsterdam University Press. https://doi.org/10.1515/9789048556908-010
Popular science literature:
· Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
· Kate Crawford's website. https://katecrawford.net
Documentary:
· Coded Bias (German title: Vorprogrammierte Diskriminierung; available on Netflix): This documentary examines the biases in algorithms that MIT Media Lab researcher Joy Buolamwini uncovered in facial recognition systems. https://www.netflix.com/de/title/81328723
Newsletter:
· AI Snake Oil by Arvind Narayanan & Sayash Kapoor. https://www.aisnakeoil.com
· Ticker from D64 – Center for Digital Progress: https://kontakt.d-64.org/ticker/
Linß, Vera www.deutschlandfunkkultur.de, Studio 9
Lesart - das Literaturmagazin (full episode) - Deutschlandfunk Kultur
For this episode of the Global Exchange podcast, Colin Robertson talks with Solange Marquez and Andres Rozental about the recent Mexican election and how the new administration might impact North American relations. // Participants' bios - Solange Marquez is a professor at the Law School of the National Autonomous University of Mexico (UNAM). A former VP of the Mexican Council on International Affairs (Comexi), she is its representative in Canada. Solange is also a CGAI Fellow. - Andres Rozental served as Mexico's ambassador to Sweden and the United Kingdom and as deputy foreign minister. He is the Founding President of the Mexican Council on Foreign Relations. He holds the lifetime rank of eminent ambassador of Mexico. // Host bio: Colin Robertson is a former diplomat and Senior Advisor to the Canadian Global Affairs Institute, www.cgai.ca/colin_robertson // Read & Watch: - "Grands Diplomates: Les maîtres des relations internationales de Mazarin à nos jours", by Hubert Védrine: https://www.lisez.com/livre-grand-format/grands-diplomates/9782262101398 - "Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence", by Kate Crawford: https://yalebooks.yale.edu/book/9780300264630/atlas-of-ai/ // Recording Date: June 19, 2024.
It seems like the loudest voices in AI often fall into one of two groups. There are the boomers – the techno-optimists – who think that AI is going to bring us into an era of untold prosperity. And then there are the doomers, who think there's a good chance AI is going to lead to the end of humanity as we know it. While these two camps are, in many ways, completely at odds with one another, they do share one thing in common: they both buy into the hype of artificial intelligence. But when you dig deeper into these systems, it becomes apparent that both of these visions – the utopian one and the doomy one – are based on some pretty tenuous assumptions. Kate Crawford has been trying to understand how AI systems are built for more than a decade. She's the co-founder of the AI Now Institute, a leading AI researcher at Microsoft, and the author of Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Crawford was studying AI long before this most recent hype cycle. So I wanted to have her on the show to explain how AI really works. Because even though it can seem like magic, AI actually requires huge amounts of data, cheap labour and energy in order to function. So even if AI doesn't lead to utopia, or take over the world, it is transforming the planet – by depleting its natural resources, exploiting workers, and sucking up our personal data. And that's something we need to be paying attention to.
Mentioned:
“ELIZA—A Computer Program For the Study of Natural Language Communication Between Man And Machine” by Joseph Weizenbaum
“Microsoft, OpenAI plan $100 billion data-center project, media report says,” Reuters
“Meta ‘discussed buying publisher Simon & Schuster to train AI'” by Ella Creamer
“Google pauses Gemini AI image generation of people after racial ‘inaccuracies'” by Kelvin Chan and Matt O'Brien
“OpenAI and Apple announce partnership,” OpenAI
Fairwork
“New Oxford Report Sheds Light on Labour Malpractices in the Remote Work and AI Booms” by Fairwork
“The Work of Copyright Law in the Age of Generative AI” by Kate Crawford, Jason Schultz
“Generative AI's environmental costs are soaring – and mostly secret” by Kate Crawford
“Artificial intelligence guzzles billions of liters of water” by Manuel G. Pascual
“S.3732 – Artificial Intelligence Environmental Impacts Act of 2024”
“Assessment of lithium criticality in the global energy transition and addressing policy gaps in transportation” by Peter Greim, A. A. Solomon, Christian Breyer
“Calculating Empires” by Kate Crawford and Vladan Joler
Further Reading:
“Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence” by Kate Crawford
“Excavating AI” by Kate Crawford and Trevor Paglen
“Understanding the work of dataset creators” from Knowing Machines
“Should We Treat Data as Labor? Moving beyond ‘Free'” by I. Arrieta-Ibarra et al.
Controlling technology means controlling the world. While this statement rings painfully true today, it is as old as the idea of technology itself. In other words, as old as humanity. In this episode, Paola Antonelli interviews renowned researcher, author, and artist Kate Crawford, a leading voice on the social, ethical, and planetary implications of all technologies––artificial intelligence in particular. Kate uses art and information design to manifest histories and connections that would otherwise remain invisible because of their long time span and complexity. The interview is centered around one of Kate's latest collaborations with artist-researcher Vladan Joler, “Calculating Empires: A Genealogy of Technology and Power, 1500-2025,” an ambitious 24-m (ca. 79 ft) long fresco that was conceived during the Covid pandemic, perfected in the isolation of a monastery in Montenegro, and is now traveling around the world, after an inauguration at the Prada Foundation in Milan in 2023. Kate describes Calculating Empires as a visual history of the present––after French philosopher Michel Foucault's theory––and shows how the dangerous intersection of technology and power we witness today has happened many times before. If we abandon our tendency towards short-termism, she believes, there is a lot we can learn from past experiences. You can find images of Calculating Empires on Design Emergency's Instagram platform, @design.emergency. Please join us for future episodes of Design Emergency when we will hear from other important voices who, like Kate, are at the forefront of positive change. Design Emergency is supported by a grant from the Graham Foundation for Advanced Studies in the Fine Arts. Hosted on Acast. See acast.com/privacy for more information.
Could artificial intelligence destroy the world and humanity? We have good news for you. It won't. At least it most likely won't be the "evil rebellious AI" that pop culture scares us with. But there is bad news too. Unfortunately, building AI at the exponential pace we are now seeing may push our world to the brink of climate catastrophe even faster. Already, if you tallied up global energy consumption for the purposes of generative AI, it would turn out that we have one more country on Earth. Sam Altman himself, the CEO of OpenAI, called in January in Davos for a breakthrough in energy, because AI is simply exceptionally energy-hungry and we won't get far on current solutions. What's more, OpenAI estimates that since 2012, the computing power used to train a single AI model has increased tenfold every year! As a result, ChatGPT version 3 alone uses as much energy as 33,000 American households.
The guests of this episode are:
- Patryk Strzałkowski, a Gazeta.pl journalist specializing in environmental and ecological topics
- Natalia Kotłowska-Wochna, a lawyer specializing in new technologies
- Szymon Opryszek, a reporter for Oko.press and author of the book "Woda. Historia pewnego porwania" (Water: The Story of a Kidnapping)
Chapters:
02:14 - Energy worth its weight in gold
18:36 - The cloud: the world's sixth country
29:45 - Lack of transparency
32:45 - Rare earth minerals
35:50 - Clean energy: will there be a breakthrough?
42:54 - Can you be eco-friendly without AI?
Sources:
- Szymon Opryszek, "Woda. Historia pewnego porwania"
- Kate Crawford, "Atlas Sztucznej Inteligencji" (Atlas of AI)
- Schneider Electric report on AI energy consumption: https://www.androidheadlines.com/2023/10/ai-systems-power-consumption-small-countries.html
- Analysis by Shaolei Ren, a researcher at the University of California, Riverside, on water consumption: https://arxiv.org/pdf/2304.03271
- A report on Facebook's data center in Luleå: https://wyborcza.biz/biznes/7,177150,22095952,lulea-dom-wielkiego-brata-facebook-trzyma-historie-naszych.html
- Piotr Cieśliński's article "The internet and AI are not just algorithms and data. They are also rocks, lithium brine and crude oil": https://wyborcza.pl/7,75517,30889297,mam-dla-was-zla-wiadomosc-czysta-energia-nie-istnieje.html#S.MT-K.C-B.1-L.1.duzy
- Will Helion give us nuclear fusion: https://www.theguardian.com/technology/2023/aug/01/techscape-environment-cost-ai-artificial-intelligence
- How marijuana generates enormous energy demand: https://fortune.com/2024/03/22/crypto-marijuana-data-center-power-use-electric-grid/
“We haven't invested this much money into an infrastructure like this really until you go back to the pyramids” —Kate Crawford
Transcript with links to audio and external links. Ground Truths podcasts are on Apple and Spotify. The video interviews are on YouTube.
Eric Topol (00:06): Well, hello, this is Eric Topol with Ground Truths, and I'm really delighted today to welcome Kate Crawford, who we're very lucky to have as an Australian here in the United States. And she's multidimensional, as I've learned, not just a scholar of AI, all the dimensions of AI, but also an artist, a musician. We're going to get into all this today, so welcome Kate.
Kate Crawford (00:31): Thank you so much, Eric. It's a pleasure to be here.
Eric Topol (00:34): Well, I knew of your work coming out of the University of Southern California (USC) as a professor there and at Microsoft Research, and I'm only now learning about all these other things that you've been up to, including being recognized in TIME in 2023 as one of the 100 most influential people in AI, and it's really fascinating to see all the things that you've been doing. But I guess I'd start off with one of your recent publications in Nature. It was a world view, and it was about generative AI guzzling water and energy. And in that you wrote about how these large AI systems, which are getting larger seemingly every day, are needing as much energy as entire nations, and the water consumption is rampant. So maybe we can just start off with that. You wrote a really compelling piece expressing concerns, and obviously this is not just the beginning of all the different aspects you've been tackling with AI.
Exponential Growth, Exponential Concerns
Kate Crawford (01:39): Well, we're in a really interesting moment. What I've done as a researcher in this space for a very long time now is really introduce a material analysis of artificial intelligence. So we are often told that AI is a very immaterial technology. It's algorithms in the cloud, it's objective mathematics, but in actual fact, it comes with an enormous material infrastructure. And this is something that I took five years to research for my last book, Atlas of AI. It meant going to the mines where lithium and cobalt are being extracted. It meant going into the Amazon fulfillment warehouses to see how humans collaborate with robotic and AI systems. And it also meant looking at the large-scale labs where training data is being gathered and then labeled by crowd workers. And for me, this really changed my thinking. It meant that going from being a professor for 15 years focusing on AI from a very traditional perspective where we write papers, we're sitting in our offices behind desks, that I really had to go and do these journeys, these field trips, to understand that full extractive infrastructure that is needed to run AI at a planetary scale.
(02:58): So I've been keeping a very close eye on what would change with generative AI, and what we've seen particularly in the last two years has been an extraordinary expansion of the three core elements that I really write about in Atlas, so the extraction of data, of non-renewable resources, and of course hidden labor. So what we've seen, particularly on the resources side, is a gigantic spike both in terms of energy and water, and that's often the story that we don't hear. We're not aware that when we're told about the fact that there are gigantic hundred billion dollar computers that are now being developed for the next stage of generative AI that has an enormous energy and water footprint.
So I've been researching that along with many others who are now increasingly concerned about how we might think about AI more holistically.
Eric Topol (03:52): Well, let's go back to your book, which is an extraordinary book, the AI Atlas, and how you dissected, well, not just the power of politics and planetary costs, but that has won awards and it was a few years back, and I wonder, so much has changed since then. I mean ChatGPT in late 2022 caught everybody off guard who wasn't into this, knowing that this has been incubating for a number of years, and as you said, these base models are just extraordinary in every parameter you can think about, particularly the computing resource and consumption. So your concerns were of course registered then, have they gone to exponential growth now?
Kate Crawford (04:45): I love the way you put that. I think you're right. I think my concerns have grown exponentially with the models. But I was like everybody else, even though I've been doing this for a long time and I had something of a heads up in terms of where we were moving with transformer models, I was also quite taken aback at the extraordinary uptake of ChatGPT back in November 2022. In fact, gosh, it still feels like yesterday, it's been such an extraordinary timescale. But looking at that shift to a hundred million users in two months, and then the sort of rapid competition that was emerging from the major tech companies, that I think really took me by surprise, the degree to which everybody was jumping on the bandwagon, applying some form of large language model to everything and anything, suddenly the hammer was being applied to every single nail.
(05:42): And in all of that sound and fury and excitement, I think there will be some really useful applications of these tools. But I also think there's a risk that we apply it in spaces where it's really not well suited, that we are not looking at the societal and political risks that come along with these approaches, particularly next token prediction as a way of generating knowledge. And then finally this bigger set of questions around what is it really costing the planet to build these infrastructures that are really gargantuan? I mean, as a species, we haven't invested this much money into an infrastructure like this really until you go back to the pyramids, you really got to go very far back to see that type of just gargantuan spending in terms of capital, in terms of labor, in terms of all of the things that are required to really build these kinds of systems. So for me, that's the moment that we're in right now, and perhaps here together in 2024, we can take a breath from that extraordinary 18 month period and hopefully be a little more reflective on what we're building and why and where it will be best used.
Propagation of Biases
Eric Topol (06:57): Yeah. Well, there's so many aspects of this that I'd like to get into with you. I mean, one of course, as a keen observer and activist in this whole space, you've made I think a very clear point about how our culture is mirrored in our AI, that is our biases, and people are of course very quick to blame AI per se, but it seems like it's a bigger problem than just that. Maybe you could comment about, obviously biases are a profound concern about propagation of them, and where do you see where the problem is and how it can be attacked?
Kate Crawford (07:43): Well, it is an enormous problem, and it has been for many years. I was first really interested in this question in the era that was known as the big data era.
So we can think about the mid-2000s, and I really started studying large scale uses of data in scientific applications, but also in what you call social scientific settings, using things like social media to detect and predict opinion, movement, the way that people were assessing key issues. And time and time again, I saw the same problem, which is that we have this tendency to assume that with scale comes greater accuracy without looking at the skews from the data sources. Where is that data coming from? What are the potential skews there? Is there a population that's overrepresented compared to others? And so, I began very early on looking at those questions. And then when we had very large-scale data sets start to emerge, like ImageNet, which was really perhaps the most influential dataset behind computer vision that was released in 2009, it was used widely, it was freely available.
(09:00): That version was available for over a decade and no one had really looked inside it. And so, working with Trevor Paglen and others, we analyzed how people were being represented in this data set. And it was really quite extraordinary because initially people are labeled with terms that might seem relatively unsurprising, like this is a picture of a nurse, or this is a picture of a doctor, or this is a picture of a CEO. But then you look to see who is the archetypical CEO, and it's all pictures of white men, or if it's a basketball player, it's all pictures of black men. And then the labeling became more and more extreme, and there are terms like, this is an alcoholic, this is a corrupt politician, this is a kleptomaniac, this is a bad person. And then a whole series of labels that are simply not repeatable on your podcast.
(09:54): So in finding this, we were absolutely horrified. And again, to know that so many AI models had trained on this as a way of doing visual recognition was so concerning because of course, very few people had even traced who was using this model. So trying to do the reverse engineering of where these really problematic assumptions were being built in, hardcoded into how AI models see and interpret the world, that was a giant unknown and remains to this day quite problematic. We did a recent study that just came out a couple of months ago looking at one of the biggest data sets behind generative AI systems that are doing text to image generation. It's called LAION-5B, which stands for 5 billion. It has 5 billion images and text captions drawn from the internet. And you might think, as you said, this will just mirror societal biases, but it's actually far more weird than you might imagine.
(10:55): It's not a representative sample even of the internet, because particularly for these data sets that are now trying to use the ALT tags that are used around images, who uses ALT tags the most on the internet? Well, it's e-commerce sites and it's often stock image sites. So what you'll see and what we discovered in our study was that the vast majority of images and labels are coming from sites like Shopify and Pinterest, these kind of shopping aspirational collection sites. And that is a very specific way of seeing the world, so it's by no means even a perfect mirror. It's a skewed mirror in multiple ways.
And that's something that we need to think of particularly when we turn to more targeted models that might be working in, say, healthcare or in education or even in criminal justice, where we see all sorts of problems emerge.
Exploiting Humans for RLHF
Eric Topol (11:51): Well, that's really interesting. I wonder to extend that a bit about the human labor side of this. Base models are tweaked, fine-tuned, and one of the ways to do that, of course, is getting people to weigh in. And this has been written about quite a bit about how the people that are doing this can be exploited, getting wages that are ridiculously weak. And I wonder if you could comment about that because in the ethics of AI, this seems to be one of the many things that a lot of people don't realize about reinforcement learning.
Kate Crawford (12:39): Oh, I completely agree. It's quite an extraordinary story. And of course now we have a new category of crowd labor that's called reinforcement learning with human feedback, or RLHF. And what was discovered by multiple investigations was that these laborers are in many cases paid less than $2 an hour in very exploitative conditions, looking at results that in many cases are really quite horrifying. They could be accounts of murder, suicide, trauma, this can be visual material, it can be text-based material. And again, the workers working for these companies, and again, it's often contract labor, it's not directly within a tech company, it's contracted out. It's very hidden, it's very hard to research and find. But these laborers have been experiencing trauma and are really now in many cases bringing lawsuits, but also trying to unionize and say, these are not acceptable conditions for people to be working under.
(13:44): So in the case of OpenAI, it was found that it was Kenyan workers who were doing this work for just poverty wages, but it's really across the board. It's so common now that humans are doing the hard work behind the scenes to make these systems appear autonomous. And that's the real trap that we're being told that this is the artificial intelligence. But in actual fact, what Jeff Bezos calls Mechanical Turk is that it's artificial, artificial intelligence, otherwise known as human beings. So that is a very significant layer in terms of how these systems work that is often unacknowledged. And clearly these workers in many cases are muzzled from speaking, they're not allowed to talk about what they do, they can't even tell their families. They're certainly prevented from collective action, which is why we've seen this push towards unionization. And finally, of course, they're not sharing in any of the profits that are being generated by these extraordinary new systems that are making a very small number of people very wealthy indeed.
Eric Topol (14:51): And do you know if that's improving or is it still just as bad as it has been reported? It's really deeply concerning to see human exploitation, and we all know well about sweatshops and all that, but here's another version, and it's really quite distressing.
Kate Crawford (15:09): It really is. And in fact, there have been several people now working to create really almost like fair work guidelines. So Oxford has the sort of fair work initiative looking specifically at crowd work. They also have a rating system where they rate all of the major technology companies for how well they're treating their crowd laborers.
And I have to say the numbers aren't looking good in the last 12 months, so I would love to see much more improvement there. We are also starting to see legislation be tabled specifically on this topic. In fact, Germany was one of the most recent to start to explore how they would create a strong legislative backing to make sure that there's fair labor conditions. Also, Chile was actually one of the first to legislate in this space, but you can imagine it's very difficult to do because it's a system that is operating under the radar through sort of multiple contracted chains. And even some of the people within tech companies will tell me it's really hard to know if they're working with a company that's doing this in the right way and paying people well. But frankly, I'd like to see far greater scrutiny otherwise, as you say, we're building on this system, which looks like AI sweatshops.
Eric Topol (16:24): Yeah, no, I think people just have this illusion that these machines are doing everything by themselves, and that couldn't be further from the truth, especially when you're trying to take it to the next level. And there's only so much human content you can scrape from the internet, and obviously it needs additional input to take it to that more refined performance. Now, besides your writing and being much of a conscience for AI, you're also a builder. I mean, I first got to know some of your efforts through when you started the AI Now Institute. Maybe you can tell us a bit about that. Now you're onto the Knowing Machines Project and I don't know how many other projects you're working on, so maybe you can tell us about what it's like not just to be a keen observer, but also one to actually get initiatives going.
Kate Crawford (17:22): Well, I think it's incredibly important that we start to build interdisciplinary coalitions of researchers, but sometimes even beyond the academic field, which is where I really initially trained in this space, and really thinking about how do we involve journalists, how do we involve filmmakers, how do we involve people who will look at these issues in really different ways and tell these stories more widely? Because clearly this really powerful shift that we're making as a society towards using AI in all sorts of domains is also a public issue. It's a democratic issue and it's an issue where we should all be able to really see into how these systems are working and have a say in how they'll be impacting our lives.
So one of the things that I've done is really create research groups that are interdisciplinary, starting at Microsoft Research as one of the co-founders of FATE, a group that stands for fairness, accountability, transparency and ethics, and then the AI Now Institute, which was originally at NYU, and now with Knowing Machines, which is an international group, which I've been really delighted to build, rather than just purely focusing on those in the US, because of course these systems are inherently transnational, they will be affecting global populations.
(18:42): So we really need to think about how do you bring people from very different perspectives with different training to ask this question around how are these systems being built, who is benefiting and who might be harmed, and how can we address those issues now in order to actually prevent some of those harms and prevent the greatest risks that I see that are possible with this enormous turn to artificial intelligence everywhere?
Eric Topol (19:07): Yeah, and it's interesting how you over the years are a key advisor, whether it's the White House, the UN or the European Parliament. And I'm curious about your experience because I didn't know much about the Paris ENS. Can you tell us about, you were Visiting Chair, this is AI and Justice at the École Normale Supérieure (ENS), I don't know if I pronounced that right. My French is horrible, but this sounds like something really interesting.
Kate Crawford (19:42): Well, it was really fascinating because this was the first time that ENS, which is really one of the top research institutions in Europe, had turned to this focus of how do we contend with artificial intelligence, not just as a technical question, but as a sort of a profound question of justice, of society, of ethics. And so, I was invited to be the first visiting chair, but tragically this corresponded with the start of the pandemic in 2020. And so, it ended up being a two-year virtual professorship, which is really a tragedy when you're thinking about spending time in Paris to be spending it on Zoom. It's not quite the same thing, but I had the great fortune of using that time to assemble a group of scholars around the world who were looking at these questions from very different disciplines. Some were historians of science, others were sociologists, some were philosophers, some were machine learners.
(20:39): And really essentially assembled this group to think through some of the leading challenges in terms of the potential social impacts and current social impacts of these systems. And so, we just recently published that through the academies of Science and Engineering, and it's been almost like a template for thinking about here are core domains that need more research. And interestingly, we're at that moment, I think, now where we can say we have to look in a much more granular fashion beyond the hype cycles, beyond the sense of potential, the enormous potential upside that we're always hearing about, to look at, okay, how do these systems actually work now? What kinds of questions can we bring into the research space so that we're really connecting the ideas that come traditionally from the social sciences and the humanistic disciplines into the world of machine learning and AI design.
That's where I see the enormous upside: that we can no longer stay in these very rigorously patrolled silos, and we can really use that interdisciplinary awareness to build systems differently and hopefully more sustainably as well.
Is Working At Microsoft A Conflict?
Eric Topol (21:55): Yeah, no, that's what I especially like about your work: that you're not a doomsday person or force. You're always just trying to make it better, but now that's what gets me to this really interesting question, because you are a senior principal researcher at Microsoft, and Microsoft might not like some of these things that you're advocating. How does that potential conflict work out?
Kate Crawford (22:23): It's interesting. I mean, people often ask me, am I a technology optimist or a technology pessimist? And I always say I'm a technology realist, and we're looking at these systems being used. I think we are not benefited by discourses of AI doomerism nor by AI boosterism. We have to assess the realpolitik and the political economies into which these systems flow. So obviously part of the way that I've got to know what I know about how systems are designed and how they work at scale is through being at Microsoft Research, where I'm working alongside extraordinary colleagues, all of whom come from, in many cases, professorial backgrounds, who are deep experts in their fields. And we have this opportunity to work together and to look at these questions very early on in the kinds of production cycles and enormous shifts in the way that we use technology.
(23:20): But it is interesting of course that at the moment Microsoft is absolutely at the leading edge of this change, and I've always thought that it's incredibly important for researchers and academics who are in industrial spaces to be able to speak freely, to be able to share what they see and to use that as a way that the industry can, well, hopefully keep itself honest, but also share between what it knows and what everybody else knows, because there's a giant risk in having those spaces be heavily demarcated and having researchers really be muzzled. I think that's where we see real problems emerge. Of course, one of the great concerns a couple of years ago was when Timnit Gebru and others were fired from Google for speaking openly about the concerns they had about the first-generation large language models. And my hope is that there's been a lesson through that really unfortunate set of decisions made at Google, that we need people speaking from the inside about these questions in order to actually make these systems better, as you say, over the medium and long term.
Eric Topol (24:26): Yeah, no, that brings me to think of Peter Lee, who, I'm sure you know, wrote a book about GPT-4 and healthcare and was very candid about its potential, real benefits and the liabilities, and he's a very humble kind of guy. He's not one that has any bravado that I know of, so it speaks well to at least another colleague of yours there at Microsoft and their ability to see all the different sides here, not just what we'll talk about in a minute, the arms race both across companies and countries.
But before I get to that, there's this other part of you, and I wonder if there's really two or three of you, that is, as a composer of music and art. I looked at your Anatomy of an AI System, I guess, which is on exhibit at the Museum of Modern Art (MoMA) in New York, and that in itself is amazing. But how do you get into all these other parts? Are these hobbies, or is this part of a main part of your creative work, or where does it fit in?
Kate Crawford (25:40): Eric, didn't I mention the cloning program that I participated in early, and that there are many Kates and it's fantastic, we all work together? Yeah, that explains it. Look, it's interesting. Way back as a teenager, I was fascinated with technology. Of course, it was the early stages of the web at that moment, and I could see clearly that this was, the internet was going to completely change everything for my generation in terms of what we would do, in terms of the way that we would experience the world. And as I was also at that time an electronic musician in bands, I was like, this was a really fantastic combination of bringing together creative practice with a set of much larger concerns and interests around, at a systems level, how technology and society are co-constituted, how they evolve together and shape each other. And that's really been the map of how I've always worked across my life.
(26:48): And it's interesting, I've always collaborated with artists, and Vladan Joler, who I worked with on Anatomy of an AI System. We actually met at a conference on voice enabled AI systems, and it was really looking at the ethics of could it be possible to build an open source, publicly accessible version of say Alexa rather than purely a private model owned by a corporation, and could that be done in a more public, open source way? And we asked a different question, we looked at each other and we're like, oh, I haven't met you yet, but I can see that there are some problems here. One of them is it's not just about the data and it's not just about the technical pipelines, it's about where the components come from. It's about the mining structures that are needed to make all of these systems. It's about the entire end of life, what happens when we throw these devices out, generally between three to four years of use, and how they go into these giant e-waste tips.
(27:51): And we basically started looking at this as an enormous sort of life and death of a single AI system, which for us started out by drawing these things on large pieces of butcher's paper, which just expanded and expanded until we had this enormous systems level analysis of what it takes just to ask Alexa what the weather is today. And in doing that, it taught me a couple of things. One, that people really want to understand all of the things that go into making an AI system work. This piece has had a very long life. It's been in over a hundred museums around the world. It's traveled further than I have, but it's also very much about that broader political economy: that AI systems aren't neutral, they don't just exist to serve us.
They are often sort of fed into corporate structures that are using them to generate profits, and that means that they're used in very particular ways, and that there are these externalities in terms of how they're produced that linger in our environments, that have really quite detrimental impacts on systems of labor and how people are recompensed, and on a whole range of relationships to how data is seen and used, as though it's a natural resource that doesn't actually come from people's lives, that doesn't come with risks attached to it.

(29:13):
So that project was really quite profound for me. So we've continued to do these kinds of, I would call them, research art projects, and we just released a new one called Calculating Empires, which looks at a 500-year history of technology and power, looking specifically at how empires over time have used new technologies to centralize their power and expand and grow, which of course is part of what we're seeing at the moment in the empires of AI.

Eric Topol (29:43):
And what about the music side?

Kate Crawford (29:45):
Well, I have to say I've been a little bit slack on the music side. Things have been busy in AI, Eric; I have to say it's kept me away from the music studio, but I always intend to get back there. Fortunately, I have a kid who's very musical, and he's always luring me away from my desk and my research, saying, let's write some music. And so, he'll keep me honest.

Geopolitics and the Arms Races

Eric Topol (30:06):
Well, I think it's striking, just because you have this blend of the humanities and you're so deep into trying to understand and improve our approaches in technology. And it seems very unusual; I don't know too many techies that have these different dimensions, so that's impressive. Now let's get back to the arms race. You were just talking about tracing history over hundreds of years and empires, but right now we have a little problem. We have the big tech titans that are going after each other on a daily basis, and of course you know the group very well. And then you have China and the US that are vying to be the dominant force, and problems with China accessing NVIDIA chips, and Taiwan sitting there in a potentially very dangerous position, not just for Taiwan but also for the US. And I wonder if you could just give us your sense about the tensions here. They're US-based as well, of course, because those are some of the major forces in companies, but they're also global. So we have a lot of stuff in the background that people don't like to think about, but it's actually happening right now.

Kate Crawford (31:35):
I think it's one of the most important things that we can focus on, in fact. I mean, and again, this is why I think a materialist analysis of artificial intelligence is so important, because not only does it force you to look at the raw components (where does the energy come from? where does the water come from?), but it means you're looking at where the chipsets come from. And you can see that in many cases there are these infrastructural choke points where we are highly dependent on specific components that sit within geopolitical flashpoints. And Taiwan is really the exemplar of this sort of choke point at the moment. And again, several companies are trying to address this by spinning up new factories to build these components, but this takes a lot of time and an enormous amount of resources yet again.
So what we're seeing is, I think, a very difficult moment in the geopolitics of artificial intelligence.

(32:31):
What we've had, certainly for the last decade, has been almost a geopolitical duopoly. We've had the US and China not only having enormous power and influence in this space, but also goading each other into producing the most extreme forms of both data-extractive and surveillance technologies. And unfortunately, this is just as true in the United States; I commonly hear this in rooms in DC, where you'll hear advisors say, well, having any type of guardrails or ethical considerations for our AI systems is a problem if it means that China's going to do it anyway. And that creates this race-to-the-bottom dynamic of doing as much of whatever you can do, regardless of the ethical and in some cases legal problems that it will create. And I think that's been the dynamic that we've seen for some time. And of course, over the last 18 months to two years, we've seen that really extraordinary AI war happening internally in the United States, where again, this race dynamic does, I think, unfortunately create this tendency to just go as fast as possible without thinking about potential downsides.

(33:53):
And I think we're seeing the legacy of that right now. And of course, a lot of the conversations from people designing these systems are now starting to say, look, being first is great, but we don't want to be in a situation, as we saw recently with Google's Gemini, where you have to pull an entire model off the shelves and say, this is not ready; we actually have to remove it and start again. So this is the result, I think, of that high-pressure, high-speed dynamic that we've been seeing both inside the US and between the US and China. And of course, what that does to the rest of the world is create these kinds of client states, where we've got the EU trying to say, alright, well, we'll export a regulatory model if we're not going to be treated as an equivalent player here. And then, of course, so many other countries are just seen as spaces from which to extract low-paid labor or the mineralogical layer. So the big problem that I see is that that dynamic has only intensified in recent years.

A.I. and Medicine

Eric Topol (34:54):
Yeah, I know, it's really another level of concern, and it seems like it could be pretty volatile if, for example, US-China relations take another dive and the tensions there go to levels that haven't been seen so far. I guess the other thing is that there's so much in this space that is controversial and unsettled, and so much excitement. I mean, just yesterday, for example, was the first AI randomized trial to show that you could save lives. When I wrote that up, it was about the four other studies that showed how it wasn't working. Different studies, of course, but there's so much excitement at the same time as there are deep concerns. You've been a master at articulating these deep concerns. What have we missed in our discussion today? I mean, we've covered a lot of ground, but what do you see as other things that should be mentioned?

Kate Crawford (36:04):
Well, one of the things that I've loved in terms of following your work, Eric, is that you very carefully walk that line between allowing the excitement, when we see really wonderful studies come out that say, look, there's great potential here, but also articulating concerns where you see them.
So I'd love to take this opportunity to ask you a question and say: what's exciting you about the way that this particular new generation of AI is being used in the medical context, and what are the biggest concerns you have there?

Eric Topol (36:35):
Yeah, and it's interesting, because the biggest advance so far in research and medicine was the study yesterday using deep learning without any transformer large language model effort. And that's where that multiplicative opportunity or potential is still very iffy; it's wobbly. I mean, it needs much more refinement than where we are right now. It's exciting because it is multimodal and it brings in the ability to bring all the layers of a human being to understand our uniqueness, and then to do much better. I have a piece coming out soon in Science about medical forecasting and how we could really get to prevention of conditions for which people are at high risk. I mean, for example, today the US Preventive Services Task Force said that all women age 40 should have mammograms. Forty.

Kate Crawford (37:30):
I saw that.

Eric Topol (37:30):
Yeah, and this is just crazy, Looney Tunes, because here we have the potential to know pretty precisely who are those 12%, the only 12% of women who would ever get breast cancer in their lifetime. And why should we put the other 88% through all this, much less the fact that there are some women even younger than age 40 who have significantly high risk and are not picked up. But I do think eventually, when we get these large language models to actualize their potential, we'll do really great forecasting and we'll be able to not just prevent or forestall cancer, Alzheimer's, and so many other things. It's quite exciting, but it's the earliest days; we're not even at first base yet, but I think I can see our way to getting there eventually. And it's interesting, because in the discussion I had previously with Geoffrey Hinton, and I wonder if you think this as well, he sees the health and medical space as the only really safe space; he thinks most everything else has got more concerns about the downsides, and this is the sweet spot, as he called it. But I know that's not particularly an area that you are into, so I wonder if you share that excitement about how our health could be improved in the future with AI.

Kate Crawford (38:52):
Well, I think it's a space of enormous potential, but again, enormous risk, for the same reasons that we discussed earlier, which is that we have to look at the training data and where it's coming from. Do we have truly representative sources of data? And this of course has been a consistent problem, certainly for the last hundred years and longer. When we look at who the medical patients are whose data is being collected, are we seeing skews? And that has created all sorts of problems, particularly in the last 50 years, in terms of misdiagnosing women and people of color, and missing or not taking seriously the health complaints of people who are already seen as marginalized populations, thus further skewing the data that is then used to train AI models. So this is something that we have to take very seriously, and I had the great fortune of being invited by Francis Collins to work with the NIH on their AI advisory board.

(39:50):
They convened a board to look just at these questions, around how this moment in AI can be harnessed in such a way that we can think about the data layer, think about the quality of data and how we train models.
And it was a really fascinating sort of year-long discussion, because in the room we had people who were just technologists, who just wanted as much data as possible: just give us all that data and we'll do something with it; we'll figure it out later. Then there were people who had been part of the Human Genome Project and had worked with Francis on the legal and ethical and social questions, which he had really centered in that project very early on. And they said, no, we have to learn these lessons. We have to learn that data comes from somewhere. It's not divorced of context, and we have to think about who's being represented there and also who's not being represented there, because that will then be intensified in any model that we train on that data.

Humans and Automation Bias

(40:48):
And then also thinking about what would happen if those models are only held by a few companies who can profit from them, and are not more publicly and widely shared. These are the sorts of conversations that I think are at the absolute forefront in terms of how we're going to navigate this moment. But if we get that right, if we center those questions, then I think we have far greater potential here than we might imagine. But I'm also really cognizant of the fact that even if you have a perfect AI model, you are always going to have imperfect people applying it. And I'm sure you saw that same study that came out in JAMA back in December last year, which was looking at how AI bias, how even slightly biased models, can worsen human medical diagnosis. I don't know if you saw this study, but I thought it was really extraordinary.

(41:38):
It was around 450 doctors and physician assistants, and they were shown a handful of cases of patients with acute respiratory failure and really needed to come up with some sort of diagnosis, and they were getting suggestions from an AI model. One model was trained very carefully with highly accurate data, and the other was a fairly shoddy, shall we say, AI model with quite biased data. And what was interesting is that the clinicians, when they were working with the very well-trained AI model, were actually producing a better diagnosis across the board in terms of the cases they were looking at. I think their accuracy went up by almost 4.5 percentage points. But when they were working with the less accurate model, their capacity actually dropped well below their usual diagnostic baseline, something like almost 12 percentage points below their usual diagnostic quality. And so, this really makes me think of the kind of core problem that's been studied for 40 years by social scientists, which is called automation bias: when a technical system gives even an expert a recommendation, our tendency is to believe it and to discard our own knowledge, our own predictions, our own sense.

(42:58):
And it's been tested with fighter pilots, it's been tested with doctors, it's been tested with judges, and it's the same phenomenon across the board. So one of the things that we're going to need to do collectively, but particularly in the space of medicine and healthcare, is retain that skepticism, retain that ability to ask questions of an AI system: where did this recommendation come from, and should I trust it? What was it trained on? Where did the data come from? What might those gaps be?
Because we're going to need that skepticism if we're going to get through this, as you say, sort of early, stage-one period, where in many cases these models just haven't had a lot of testing yet and people are going to tend to believe them out of the box.

The Large Language Model Copyright Issue

Eric Topol (43:45):
No, it's so true. And one of the key points is that almost every study that's been published on large language models in medicine is contrived. They're using patient actors or they're using case studies, but they're not in the real world. And that's where you have to really learn; as you know, that's a much more complex and messy world than the in silico world, of course. Now, before wrapping up, one of the controversial things we didn't yet hit is the fact that in order for these base models to get trained, they basically ingest all human content. So they've ingested everything you've ever written, your books, your articles, my books, my articles, and you have the likes of the New York Times suing OpenAI, and soon it's going to run out of human content and just use synthetic content, I guess. But what's your sense about this? Do you feel that that's trespassing, or is this another example of exploiting content and people, or is this really what has to be done in order to make all this work?

Kate Crawford (44:59):
Well, isn't it a fascinating moment to see this mass grabbing of data, everything that is possibly extractable. I actually just recently published an article in Grey Room with the legal scholar Jason Schultz, looking at how this is producing a crisis in copyright law, because in many ways copyright law just cannot contend with generative AI in particular. All of the ways in which copyright law, and intellectual property more broadly, have been understood have been premised on human ideas of providing an incentive, and thus a limited-time monopoly, based on really inspiring people to create more things. Well, this doesn't apply to algorithms; they don't respond to incentives in this way. And again, it's a longstanding tradition in copyright that we do not give copyright to non-human authors. So you might remember that there was a very famous monkey selfie case, where a monkey had actually stepped on a camera and triggered a photograph of itself, and could this actually be a copyrighted image whose copyright belonged to the monkey?

(46:12):
Absolutely not, is what the courts decided. And the same has now happened, of course, for all generative AI systems. So right now, everything that you produce, be that in GPT or in Midjourney or in Stable Diffusion, you name it, does not have copyright protections. So we're in the biggest experiment of production after copyright in world history, and I don't think it's going to last very long. To be clear, I think we're going to start to see some real shifts, really in the next 6 to 12 months. But it has been this moment of seeing this gigantic gap in what our legal structures can do; they just haven't been able to contend with this moment. The same thing is true, I think, of ingestion, of this capturing of human content without consent.
Clearly, many artists, many writers, many publishing houses like the New York Times are very concerned about this, but the difficulty that they're presented with is this idea of fair use: that you can collect large amounts of data if you are doing something with it that is sufficiently transformative.

(47:17):
I'm really interested in the question of whether or not this does constitute sufficiently transformative use. Certainly, if you looked at large language models a year ago, you could really prompt them into sharing their training data, spitting out entire New York Times articles or entire book chapters. That is no longer the case; all of the major companies building these systems have really safeguarded against that now. But nonetheless, you have this question of whether we should be moving towards a system that is based on licensing, where we're really asking people if we can use their data and paying them a license fee. You can see how that could absolutely work and would address a lot of these concerns, but ultimately it will rely on this question of fair use. And I think with the current legal structures that we have and the current case law, that is unlikely to be seen as something that's actionable.

(48:10):
But I expect what we'll see is what really happened in the early 20th century around the player piano. I'm sure you remember this extraordinary technology: it was one of the first systems that automated the playing of music. You'd have a piano with a wax cylinder that, almost like code, had a song or a piece of music imprinted on it, and it could be played in the public square or in a bar or in a saloon without having to pay a single artist. And artists were terrified. They were furious. There were public hearings, there were congressional hearings, and even a Supreme Court case that decided that this was not a copyright infringement, that this was a sufficiently transformative use of a piece of music that it could stand. And in the end, it was actually Congress that acted.

(49:01):
And from that we got the 1909 Copyright Act, and from that we got this idea of royalties. And that has been the basis of the music industry itself for a very long time. And now we're facing another moment where I think we have a legislative challenge: how would you actually create a different paradigm for AI that would recognize a new licensing system, that would reward artists, writers, musicians, all of the people whose work has been ingested into training data for AI, so that they are recognized and, in some ways, recompensed for this massive at-scale extraction?

Eric Topol (49:48):
Wow, this has been an exhilarating conversation, Kate. I've learned so much from you over the years, but especially even just in our chance to talk today. You articulate these problems so well, and I know you're working on solutions to almost everything, and you're so young, you could probably make a difference in the decades ahead. This is great, so I want to thank you not just for the chance to visit today, but for all the work that you've been doing, you and your colleagues, to make AI better, to make it fulfill the great promise that it has. It is so extraordinary, and hopefully it'll deliver on some of the things where we have big unmet needs, so thanks to you. This has really been fun.

Kate Crawford (50:35):
This has been wonderful. And likewise, Eric, your work has just been a fantastic influence, and I've been delighted to get to know you over the years. And let's see what happens.
It's going to be a wild ride from now to who knows when.

Eric Topol (50:48):
No question, but you'll keep us straight, I know that. Thank you so much.

Kate Crawford (50:52):
Thanks so much, Eric.

*******************************

Your support in subscribing to Ground Truths, and sharing it with your network of friends and colleagues, is much appreciated. The Ground Truths newsletters and podcasts are all free, open-access, without ads. Voluntary paid subscriptions all go to support Scripps Research. Many thanks for that—they greatly helped fund our summer internship programs for 2023 and 2024. Thanks to my producer Jessica Nguyen and to Sinjun Balabanoff for audio and video support at Scripps Research. Note: you can select preferences to receive emails about newsletters, podcasts, or all; I don't want to bother you with an email for content that you're not interested in. Comments for this post are welcome from all subscribers. Get full access to Ground Truths at erictopol.substack.com/subscribe
The session explores the relationships between the scales of the room, the house, and the street, inspired by Virginia Woolf's concept of ‘a room of one's own'. It then examines the influence of generative technological advancements on these scales. Read the interview with the curators and the co-hosts of the symposium here: https://koozarch.com/interviews/before-being-home-doing-domesticity-at-prada-frames-podcast The podcast "Prada Frames: Being Home" is a project produced by KoozArch in partnership with Prada, and curated by FormaFantasma for Prada. The episode is presented by KoozArch's chief editor Shumi Bose.
Kate Crawford is a building nerd who is obsessed with measuring performance. She's currently Technical Director at KLH Sustainability, a multidisciplinary consultancy working in the built environment. Kate has a very interesting background, and she's now working on a fascinating project: researching and developing a "Smart Meter Enabled Thermal Energy Rating (SMETER)" system that uses a new approach to measuring building performance and a different kind of metric for assessing it. The result is something they call "the snug factor", which is the heat-transfer coefficient of the building (Kate explains it all in the episode; a simplified sketch of the underlying idea follows below). The way they generate their heat-transfer coefficient has led to incredibly accurate estimates of energy use in a home.

Notes from the show
Kate Crawford on LinkedIn
KLH Sustainability's website
The research Jeff mentions about low-pressure showers using more water
Real performance and the HEM
Real performance and the SAP
Kate's little (and excellent) graphic novel on her experience of aid work

**SOME SELF-PROMOTING CALLS TO ACTION**
We don't actually earn anything from this, and it's quite a lot of work, so we have to promote the day jobs.
Follow us on the Zero Ambitions LinkedIn page
Jeff, Alex, and Dan about websites, branding, and communications - zap@eiux.agency; Everything is User Experience
Subscribe and advertise with Passive House Plus (UK edition here too)
Check Lloyd's Substack: Carbon Upfront
Join ACAN
Join the AECB
Join the IGBC
Check out Her Own Space, the renovation and retrofit platform for women (but not in a patronizing way)
**END OF SELF-PROMOTING CALLS TO ACTION**
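For the curious, here is a minimal sketch of the general energy-balance idea behind a heat-transfer coefficient estimate. To be clear, this is an illustration under simplifying assumptions (steady-state behaviour, metered heat input, daily averaging), not KLH's actual SMETER methodology; all numbers and variable names in it are hypothetical.

```python
# Minimal sketch: estimating a whole-house heat-transfer coefficient (HTC, W/K)
# from smart-meter-style data. Illustrative only -- NOT the SMETER methodology
# discussed in the episode; the data below is made up.
import numpy as np

# Hypothetical daily averages over a winter monitoring period
heating_power_w = np.array([2100, 2600, 1800, 3000, 2300, 2750, 1950])  # metered heat input (W)
indoor_temp_c = np.array([20.1, 20.3, 19.8, 20.5, 20.0, 20.2, 19.9])   # internal temperature (degC)
outdoor_temp_c = np.array([5.0, 2.2, 7.1, 0.4, 4.0, 1.5, 6.3])         # external temperature (degC)

# Steady-state energy balance: heating power ~ HTC * (T_in - T_out).
# A least-squares fit through the origin gives HTC as the slope.
delta_t = indoor_temp_c - outdoor_temp_c
htc_w_per_k = np.sum(heating_power_w * delta_t) / np.sum(delta_t ** 2)

print(f"Estimated HTC: {htc_w_per_k:.0f} W/K")
# A lower HTC means a "snugger" home: less heat input needed per degree of
# indoor-outdoor temperature difference, and hence lower predicted energy use.
```

With a coefficient like this in hand, predicted energy use for any weather scenario is just HTC times the temperature difference integrated over time, which is what makes a single measured number so useful for rating real-world performance.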
Find out how your students can contribute to NatureMapr, a huge citizen science database that fosters natural curiosity about the environment and gives everyone real-world experience in fields such as ecology, conservation, and science! Dr Kate Crawford and Aaron Clausen drop by to go through how you can get involved. Hosted by Ben Newsome from Fizzics Education.

About NatureMapr
Citizen science is accessible to all members of the community and is an integral part of a community's understanding of, and responsibility for, the local biodiversity. NatureMapr is an interactive medium that uses sightings made by the local community to inform council and state decision-making regarding flora and fauna in your region. NatureMapr's mission is to empower anybody to report plant or animal information anywhere in Australia and ensure the information gets to the people that need to know about it. Learn more

About Dr Kate Crawford
Dr Kate Crawford (Director) is a co-founder of NatureMapr and has extensive experience as a researcher, developer and facilitator. Her work is a continuing application of research and re-evaluation. Kate works with her clients, as a catalyst, to enable them to independently build self-governing, creative, agile and adaptive communities and organisations. Learn more about her organisation Eviva

About Aaron Clausen
Aaron is the Managing Director & Major Program Architect at at3am IT Pty Ltd and is a co-founder and Director of NatureMapr. Learn more

Hosted by Ben Newsome from Fizzics Education. With interviews with leading science educators and STEM thought leaders, this science education podcast is about highlighting different ways of teaching kids within and beyond the classroom. It's not just about educational practice & pedagogy; it's about inspiring new ideas & challenging conventions of how students can learn about their world! https://www.fizzicseducation.com.au/

Know an educator who'd love this STEM podcast episode? Share it! The FizzicsEd podcast is a member of the Australian Educators Online Network (AEON): http://www.aeon.net.au/

See omnystudio.com/listener for privacy information.
On this week's episode, host Chuck Marohn talks with Eric Goldwyn, a leading urban scholar and program director at the Marron Institute of Urban Management, as well as a Clinical Assistant Professor in the Transportation and Land-Use program at the NYU Marron Institute. He is known for his pioneering research on urban issues, fostering collaboration to improve city living, and he's here to talk with us today about the importance of transit for the future of cities, as well as the importance of local government (and the fact that local government is more than just an appendage of state and federal government). ADDITIONAL SHOW NOTES “Slow Boring x Transit Costs Project Event,” by Kate Crawford, Slow Boring (March 2023). Transit Costs (website). Eric Goldwyn (Twitter/X). Chuck Marohn (Twitter/X).
Recognize that AI is probably net harmful: Actually-existing and near-future AIs are net harmful—never mind their longer-term risks. We should shut them down, not pussyfoot around hoping they can somehow be made safe. https://betterwithout.ai/AI-is-harmful Create a negative public image for AI: Most funding for AI research comes from the advertising industry. Their primary motivation may be to create a positive corporate image, to offset their obvious harms. Creating bad publicity for AI would eliminate their incentive to fund it. https://betterwithout.ai/AI-is-public-relations Seth Lazar's "Legitimacy, Authority, and the Political Value of Explanations": https://arxiv.org/ftp/arxiv/papers/2208/2208.08628.pdf Kate Crawford's "Atlas Of AI": https://www.amazon.com/dp/B08WKQ1MTM/?tag=meaningness-20 You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.
Senior Product Designer, Aneliya Kyurkchiyska #interview #storytelling #creativechat Bézier is an interview podcast amplifying voices in our creative communities, with guests from all over the world, representative of as many of us as possible. Subscribe here or at bezier.show Aneliya's Links: Web: twitter.com/nelli_tomatelli and instagram.com/nelli.tomatelli/ Read: "Atlas of AI" by Kate Crawford and "Art & Fear" by David Bayles & Ted Orland Check out: village.one, twitter.com/JordanAmblin Transcript link. --- Support this podcast: https://podcasters.spotify.com/pod/show/bezier/support
#51: In today's episode, I have a profoundly honest and enlightening conversation about trauma, anxiety, and the power of healing through breath with breathwork expert Kate Crawford. Kate, of Korē Breathwork, blends her extensive scientific knowledge with a spiritual perspective to guide clients in understanding and processing their trauma. She shares her personal journey and how it led her to discover the transformative effects of breathwork. We talked about the connection between trauma, illness, chronic pain, and unexpressed emotions, and Kate provided invaluable insights into the body's communication and the role of breath in regulating the nervous system. If you're looking to discover and remember your true self, this episode is a must-listen.

Click here to download Kate's free breathwork package.
Stay connected with Kate at: www.korebreathwork.com / @korebreathwork
Join 1k others in my weekly unhinged newsletter where I get honest about entrepreneurship.
Learn more about the Ironically Serious podcast at www.ironicallyserious.com
Submit a guest or topic for the podcast here.
Follow the podcast on Instagram @ironicallyseriouspod
Stay connected with Taylor @taytorres @chanelandlee
Submit your SOS to be featured on an episode here.
Leave Taylor a voicemail here.
It costs billions of liters of water to develop and run ChatGPT. But how much water does a single prompt cost? And is it worse than googling? We ask researcher Kate Crawford, who is on Time's top 100 list of the most important people in AI. We also visit the cult site Hestenettet, which a team of researchers in Denmark has used as the skeleton for a Danish chatbot, and we discuss why Hestenettet is the perfect foundation to build an AI on. And big updates to ChatGPT and Dall-E, the image generator, are on the way: soon you'll be able to talk to ChatGPT, which can answer back, and Dall-E will become more human-like. We walk through the changes and ask whether the updates will make our lives easier, or are just a sales bluff. Hosts: Marcel Mirzaei-Fard, tech analyst, and Henrik Moltke, tech correspondent.
This week, the academic Kate Crawford tells us how she travelled the world to find the true cost of AI. Reporter Chris Vallance updates us on a watermark system - developed by DeepMind, Google's AI arm - which aims to show whether an image was generated by a machine or designed by a human. Mansoor Hamayun, Co-Founder and CEO of Bboxx, tells us about the company's smart cooking valve, designed to protect lives - and trees - in Rwanda. We speak to Fu'ad Lawal, the founder of Archivi.ng, and archivist Grace Abraham, about why the key to Nigeria's tech future may lie in digitising newspapers from its past. (Picture credit: an imagined digital landscape, by Andriy Onufriyenko, for Getty Images)
Today on the Sense of Soul Podcast we have Kate Crawford. She is a Certified Trauma-Informed Breathwork Practitioner, licensed Physical Therapist, and the CEO of Korē Breathwork. Kate has a Master of Science degree in Physical Therapy, a certification in pelvic floor therapy, as well as a certification in Trauma-Informed Breathwork (600+ hrs YACEP). As a highly empathic person who has worked in the medical model, Kate witnessed firsthand that our emotional wellbeing directly affects our physical wellbeing. Kate has always been fascinated by the human body and suffered from chronic pain and fatigue her whole life, until she discovered breathwork, a perfect modality for empathic individuals. It allows and encourages us to tap into our own energetic and emotional wellbeing. This trauma-informed approach to care and healing through breathwork transformed Kate's life and inspired her to create Korē Breathwork, so that others can heal their emotional and physical pain as well! Korē Breathwork integrates Kate's years of experience as a Physiotherapist with this beautiful modality of breathwork. The crux of the work she does with clients includes creating a process for connecting with the body, listening to its wisdom, and creating lasting change. Follow her journey on Instagram and learn more at https://korebreathwork.com. Learn more about Sense of Soul Podcast: https://www.senseofsoulpodcast.com Check out the NEW affiliate deals! https://www.mysenseofsoul.com/sense-of-soul-affiliates-page Check out the Ethereal Network! https://www.mysenseofsoul.com/ethereal-network Follow Sense of Soul on Patreon, and join to get ad free episodes, circles, mini series and more! https://www.patreon.com/senseofsoul Follow Sense of Soul on Social Media! https://www.mysenseofsoul.com/sos-links
Since Chris is just getting back from vacation this week, we're re-sharing one of our favorite episodes. You might be feeling that artificial intelligence is starting to seem a bit like magic. Our guest points out that AI, once the subject of science fiction, has seen the biggest rise of any consumer technology in history and has outpaced the uptake of TikTok, Instagram and Facebook. As we see AI becoming more of an everyday tool, students are even using chatbots like ChatGPT to write papers. While automating certain tasks can help with productivity, we're starting to see more examples of the dark side of the technology. How close are we to genuine external intelligence? Kate Crawford is an AI expert, research professor at USC Annenberg, honorary professor at the University of Sydney and senior principal researcher at Microsoft Research Lab in New York City. She's also author of “Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.” Crawford joins WITHpod to discuss the social and political implications of AI, exploited labor behind its growth, why she says it's “neither artificial nor intelligent,” climate change concerns, the need for regulation and more.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The "public debate" about AI is confusing for the general public and for policymakers because it is a three-sided debate, published by Adam David Long on August 1, 2023 on LessWrong.

Summary of Argument: The public debate among AI experts is confusing because there are, to a first approximation, three sides, not two sides to the debate. I refer to this as a three-sided framework, and I argue that using this three-sided framework will help clarify the debate (more precisely, debates) for the general public and for policy-makers. Broadly speaking, under my proposed three-sided framework, the positions fall into three broad clusters:

AI "pragmatists" or realists are most worried about AI and power. Examples of experts who are (roughly) in this cluster would be Melanie Mitchell, Timnit Gebru, Kate Crawford, Gary Marcus, Klon Kitchen, and Michael Lind. For experts in this group, the biggest concern is how the use of AI by powerful humans will harm the rest of us. In the case of Gebru and Crawford, the "powerful humans" that they are most concerned about are large tech companies. In the case of Kitchen and Lind, the "powerful humans" that they are most concerned about are foreign enemies of the U.S., notably China.

AI "doomers" or extreme pessimists are most worried about AI causing the end of the world. Eliezer Yudkowsky is, of course, the most well-known to readers of LessWrong, but other well-known examples include Nick Bostrom, Max Tegmark, and Stuart Russell. I believe these arguments are already well-known to readers of LessWrong, so I won't repeat them here.

AI "boosters" or extreme optimists are most worried that we are going to miss out on AI saving the world. Examples of experts in this cluster would be Marc Andreessen, Yann LeCun, Reid Hoffman, Palmer Luckey, and Emad Mostaque. They believe that AI can, to use Andreessen's recent phrase, "save the world," and their biggest worry is that moral panic and overregulation will create huge obstacles to innovation.

These three positions are such that, on almost every important issue, one of the positions is opposed to a coalition of the other two:

AI Doomers + AI Realists agree that AI poses serious risks and that the AI Boosters are harming society by downplaying these risks.

AI Realists + AI Boosters agree that existential risk should not be a big worry right now, and that AI Doomers are harming society by focusing the discussion on existential risk.

AI Boosters + AI Doomers agree that AI is progressing extremely quickly, that something like AGI is a real possibility in the next few years, and that AI Realists are harming society by refusing to acknowledge this possibility.

Why This Matters. The "AI Debate" is now very much in the public consciousness (in large part, IMHO, due to the release of ChatGPT), but also very confusing to the general public in a way that other controversial issues, e.g. abortion or gun control or immigration, are not. I argue that the difference between the AI Debate and those other issues is that those issues are, essentially, two-sided debates.
That's not completely true, there are nuances, but, in the public's mind, at their essence, they come down to two sides. To a naive observer, the present AI debate is confusing, I argue, because various experts seem to be talking past each other, and the "expert positions" do not coalesce into the familiar structure of a two-sided debate with most experts on one side or the other. When there are three sides to a debate, then one fairly frequently sees what look like "temporary alliances" where A and C are arguing against B. They are not temporary alliances. They are based on principles and deeply held beliefs. It's just that, depending on how you frame the question, you wind up with "strange bedfellows" as two groups find common ground on on...
Building off episode 101, about the rise of AI chatbots, we delve into the rise of AI-powered worker surveillance on this episode. Jeanne Hruska speaks with Dr. Ifeoma Ajunwa about the use of automated hiring processes, the risk of automated discrimination, and surveillance in the workplace. Should employers have to disclose surveillance programs? Can workers refuse to be surveilled - and stay employed? Join the Progressive Legal Movement Today: ACSLaw.org Today's Host: Jeanne Hruska, ACS Sr Advisor for Communications and Strategy Guest: Dr. Ifeoma Ajunwa, AI.Humanity Professor of Law, Emory Law Link: The Quantified Worker, by Ifeoma Ajunwa Link: "Limitless Worker Surveillance," by Ifeoma Ajunwa, Kate Crawford, and Jason Schultz Link: ACS Book Club Summer Reading List Visit the Podcast Website: Broken Law Podcast Email the Show: Podcast@ACSLaw.org Follow ACS on Social Media: Facebook | Instagram | Twitter | LinkedIn | YouTube ----------------- Production House: Flint Stone Media
Honesty Bomb, Magic Makers: I'm a recovering co-dependent person with anxiety... Over the past few months I switched from "regular" talk therapy to a new healing modality: Somatic Therapy, which focuses on healing the nervous system. One of the tools I use regularly is breathwork, which is why I'm so excited to speak with Kate Crawford, a Certified Trauma-Informed Breathwork Practitioner (600+ hrs YACEP), and the founder and CEO of Korē Breathwork. From Kate: Throughout my 15-year-long career as a highly empathic Physical Therapist, I witnessed first hand the undeniable connection between my clients stuck in pain + chronic illness and their unexpressed emotions. My signature breath technique, informed by my own healing journey and medical background, profoundly shifts the relationship that my beautifully sensitive clients have with themselves; clearing the held emotional trauma at the root of their discomfort and allowing them to move forward in a life of unapologetic alignment with who they truly are. CONNECT WITH KATE: Freebie (The Empath's Survival Toolkit) - https://katecrawford.podia.com/a7c97ccc-642a-4726-896a-f1bec5a38491 The Secret Language of the Body Mini-Package - https://calendly.com/korebreath/the-secret-language-of-the-body-mini-package CONNECT WITH KELSEY http://www.kelseyformost.com http://www.instagram.com/kelsey.writes **DISCLAIMER: Neither Kate nor I are doctors - this episode is meant to share information, not make any diagnosis or suggestions about your care. If you need it, please seek professional support!
In this episode of Serious Privacy, Paul Breitbarth of Catawiki and Dr. K Royal discuss all things #AI - or at least all the basics - in light of the EU Parliament passing the #AIAct last week. In addition, the US has active measures evaluating AI (such as appointing VP Kamala Harris as AI Czar, and the US National AI Initiative), the #OECD efforts, and various uses of AI, e.g., Lethal Autonomous Weapons Systems (#LAWS), facial recognition technology, #chatGPT, and non-consensual pornography fakes. We also discuss some of the #ethics of AI along with some of the #AIscarymoments. The story of Clever Hans on Wikipedia, which was discussed in the book Atlas of AI by Kate Crawford. As always, if you have comments or questions, find us on LinkedIn, Twitter @podcastprivacy @euroPaulB @heartofprivacy and email podcast@seriousprivacy.eu. Rate and Review us! #heartofprivacy #seriousprivacy #privacy #dataprotection #cybersecuritylaw #CPO #DPO
Episode Summary: In this captivating episode, we journey with Tega Brain from her roots as an environmental engineer to her evolution into an art-tech visionary. Exploring the digital art landscape reshaped by AI and Machine Learning, she draws parallels with influential figures like Ian Cheng, Refik Anadol, and Elon Musk. Her works mirror the transformative power these technologies wield in creating unique artistic experiences, akin to what Trevor Paglen and Agnes Denes are known for. Amidst our tech-driven world, Tega challenges the status quo, intertwining creativity with environmental sustainability, and navigating ethical concerns similar to scholars like Kate Crawford, Timnit Gebru, and Joy Buolamwini. This episode is a must for anyone keen on the intersection of technology, art, and environmental sustainability.

In what ways are artificial intelligence and machine learning transforming the digital art landscape, and what opportunities do these technologies present for artists? How do you address ethical concerns when incorporating AI and other emerging technologies into your art practice?

The Speaker: Tega Brain is an Australian-born artist, environmental engineer, and educator whose work intersects art, technology, and ecology. Her projects often address environmental issues and involve creating experimental systems, installations, and software. She has exhibited her work at various venues, including the Victoria and Albert Museum, the Whitney Museum of American Art, and the Haus der Kulturen der Welt. In addition to her art practice, Tega Brain is an Assistant Professor of Integrated Digital Media at New York University's Tandon School of Engineering. Her research and teaching focus on the creative and critical applications of technology, with an emphasis on sustainability and environmental concerns. Follow Tega Brain's journey.

Hosts: Farah Piriye & Elizabeth Zhivkova, ZEITGEIST19 Foundation
For sponsorship enquiries, comments, ideas and collaborations, email us at info@zeitgeist19.com
Follow us on Instagram
Help us to continue our mission and to develop our podcast: Donate
Today I had a great conversation with a Physical Therapist turned Trauma Breathwork practitioner, Kate Crawford.

In this episode, you'll hear:
How to embrace your deep connection to empathy and use it as a superpower
Why chronic illness is so prevalent in women and how it relates to societal pressures
Womb healing, feeling safe in our bodies, and what "safety" really means from a physiological perspective
How our emotions present in the body and express as chronic pain or mystery pain
The importance of having healthy emotional and energetic boundaries
Physical Therapy and the importance of movement as a way of communicating with the body

Lightworkers Lounge listeners get 10% off a session with Kate. Click Here to reserve your spot.
Find Kate: @kore_breathwork
Website: www.korebreathwork.com
Today I had an AWESOME conversation with Kate Crawford of Kore Breathwork where we dive into the depths of understanding pain and the signals our bodies are trying to give us on a deeper level. So many of us who are the 'helpers' of the world spend our days and hours helping others, not realizing the depth to which we're taking on the energy of the world around us. Kate gives incredible insights in today's episode on how to understand the signals your body is giving you to try to communicate with you, and how to clear your own energy as an empath. If you consider yourself to be empathic or a highly sensitive person, you will love this episode!

During our conversation, we dive into:
- Kate's background in physical therapy and how that informs what she does today
- How to see being "empathic" as a superpower
- How our emotions present in the body and express as chronic pain or mystery pain
- The spiritual or metaphysical meaning behind your pain

A Bit About Kate: Kate Crawford is a Certified Trauma-Informed Breathwork Practitioner, Spiritual Mentor, and the founder and CEO of Korē Breathwork. Throughout her 15-year-long career as a highly empathic Physical Therapist, she witnessed firsthand the undeniable connection between her clients stuck in pain + chronic illness and their unexpressed emotions. Her signature breath technique, informed by her own healing journey and medical background, profoundly shifts the relationship that her beautifully sensitive clients have with themselves; clearing the held emotional trauma at the root of their discomfort and allowing them to move forward in a life of unapologetic alignment with who they truly are.

Check out Kate's Free "The Empath's Survival Toolkit": https://katecrawford.podia.com/a7c97ccc-642a-4726-896a-f1bec5a38491
You can also check out her upcoming program "Trust": https://katecrawford.podia.com/trust
Connect with Kate on Instagram: instagram.com/kore_breathwork/
Or on Facebook: www.facebook.com/groups/korewellness
You might be feeling that artificial intelligence is starting to seem a bit like magic. Our guest this week points out that AI, once the subject of science fiction, has seen the biggest rise of any consumer technology in history and has outpaced the uptake of TikTok, Instagram and Facebook. As we see AI becoming more of an everyday tool, students are even using chatbots like ChatGPT to write papers. While automating certain tasks can help with productivity, we're starting to see more examples of the dark side of the technology. How close are we to genuine external intelligence? Kate Crawford is an AI expert, research professor at USC Annenberg, honorary professor at the University of Sydney and senior principal researcher at Microsoft Research Lab in New York City. She's also author of “Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.” Crawford joins WITHpod to discuss the social and political implications of AI, exploited labor behind its growth, why she says it's “neither artificial nor intelligent,” climate change concerns, the need for regulation and more.
Scott's back on Twitter, and Elon says the platform could be cash-flow positive this quarter - coincidence? Kara and Scott discuss growing calls for Senator Feinstein to resign, a delay in the Dominion v. Fox News trial, and impressive JPMorgan Chase earnings. Also, a tech consultant has been arrested for the murder of Cash App founder Bob Lee. And U.S. National Security is in disarray over Discord after an Air National Guardsman allegedly leaked classified documents on the platform. Then, we're joined by Principal Researcher at Microsoft Research Lab and Professor at USC Annenberg, Kate Crawford to talk everything AI. You can find Kate on Twitter at @katecrawford, and can buy “Atlas of AI” here. We're nominated for a Webby! Vote for us here. Send us your questions! Call 855-51-PIVOT or go to nymag.com/pivot. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Artificial intelligence is everywhere, growing increasingly accessible and pervasive. Conversations about AI often focus on technical accomplishments rather than societal impacts, but leading scholar Kate Crawford has long drawn attention to the potential harms AI poses for society: exploitation, discrimination, and more. She argues that minimizing risks depends on civil society, not technology. The ability of people to govern AI is often overlooked because many people approach new technologies with what Crawford calls “enchanted determinism,” seeing them as both magical and more accurate and insightful than humans. In 2017, Crawford cofounded the AI Now Institute to explore productive policy approaches around the social consequences of AI. Across her work in industry, academia, and elsewhere, she has started essential conversations about regulation and policy. Issues editor Monya Baker recently spoke with Crawford about how to ensure AI designers incorporate societal protections into product development and deployment. Resources Learn more about Kate Crawford's work by visiting her website and the AI Now Institute. Read her latest book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Visit the Anatomy of an AI System artwork at the Museum of Modern Art, or see and learn about it virtually here. Working with machine learning datasets? Check out Crawford's critical field guide to think about how to best work with these data.
First aired in 2015, this is an episode about social media, and how, when we talk online, things can quickly go south. But do they have to? In the earlier days of Facebook, we met with a group of social engineers who were convinced that tiny changes in wording can make the online world a kinder, gentler place. We just have to agree to be their lab rats. Because Facebook, or something like it, is where we share and like and gossip and gripe. And before we were as aware of its impact, Facebook had a laboratory of human behavior the likes of which we'd never seen. We got to peek into the work of Arturo Bejar and a team of researchers who were tweaking our online experience, to try to make the world a better place. And even now, just under a decade later, we're still left wondering if that's possible, or even a good idea.

EPISODE CREDITS
Reported by - Andrew Zolli
Original music and sound design contributed by - Mooninites

REFERENCES:
Articles: Andrew Zolli's blog post about Darwin's Stickers (https://zpr.io/ZpMeUnRmVMgP), which highlights another one of these Facebook experiments that didn't make it into the episode.
Books: Andrew Zolli's Resilience: Why Things Bounce Back (https://zpr.io/7fYQ9iDYAQBu); Kate Crawford's Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (https://zpr.io/9rU5CGSit3W4)

Our newsletter comes out every Wednesday. It includes short essays, recommendations, and details about other ways to interact with the show. Sign up (https://radiolab.org/newsletter)!
Radiolab is supported by listeners like you. Support Radiolab by becoming a member of The Lab (https://members.radiolab.org/) today.
Follow our show on Instagram, Twitter and Facebook @radiolab, and share your thoughts with us by emailing radiolab@wnyc.org
Leadership support for Radiolab's science programming is provided by the Gordon and Betty Moore Foundation, Science Sandbox, a Simons Foundation Initiative, and the John Templeton Foundation. Foundational support for Radiolab was provided by the Alfred P. Sloan Foundation.
If surveillance capitalism permeates all of modern society, how on earth can we step back to think critically about what it may be doing to us? In this episode we think through more of the implications of living in a non-private digital village in the 21st century, but is privacy even a Christian virtue in the first place? We also ponder the implications of the more deceptive and destructive aspects of addictive digital technologies and think through some initial efforts believers have made to carve out space for family time and spirituality in our disembodied always-on world. Some extra reading: Surveillance capitalism: the hidden costs of the digital revolution, Jonathan Ebsworth, Samuel Johns, Michael Dodson, Cambridge Papers June 2021 The Question of Surveillance Capitalism, Nathan Mladin and Stephen Williams, in The Robot will see you Now: Artificial Intelligence and the Christian Faith, ed John Wyatt and Stephen Williams, SPCK, 2021 The Age of Surveillance Capitalism, Shoshana Zuboff, Profile Books, 2019 Atlas of AI: Power politics and the planetary costs of artificial intelligence, Kate Crawford, Yale University Press, 2021 Irresistible: The rise of addictive technology and the business of keeping us hooked, Adam Alter, Penguin, 2017 Hooked: how to build habit forming products, Nir Eyal, Penguin, 2019 Weapons of Math Destruction, Cathy O'Neil, Penguin, 2017 Subscribe to the Matters of Life and Death podcast: https://pod.link/1509923173 If you want to go deeper into some of the topics we discuss, visit John's website: http://www.johnwyatt.com For more resources to help you explore faith and the big questions, visit: http://www.premierunbelievable.com
Every tap, swipe and click we make on our phones, tablets and laptops is being recorded by big tech firms. This is often called surveillance capitalism – a network of products and services we use every day which sucks up large quantities of data about us and then sells it on to advertisers at huge profits. It's garnering increasing concern from citizens and regulators around the world, but should we care as Christians? What impact is this system having on once flourishing industries such as journalism or bookselling, let alone on us as human beings? And why have tech companies made their products so addictively hard to put down and stop tapping, swiping and clicking? Some extra reading... Surveillance capitalism: the hidden costs of the digital revolution, Jonathan Ebsworth, Samuel Johns, Michael Dodson, Cambridge Papers June 2021 The Question of Surveillance Capitalism, Nathan Mladin and Stephen Williams, in The Robot will see you Now: Artificial Intelligence and the Christian Faith, ed John Wyatt and Stephen Williams, SPCK, 2021 The Age of Surveillance Capitalism, Shoshana Zuboff, Profile Books, 2019 Atlas of AI: Power politics and the planetary costs of artificial intelligence, Kate Crawford, Yale University Press, 2021 Irresistible: The rise of addictive technology and the business of keeping us hooked, Adam Alter, Penguin, 2017 Hooked: how to build habit forming products, Nir Eyal, Penguin, 2019 Weapons of Math Destruction, Cathy O'Neil, Penguin, 2017 Subscribe to the Matters of Life and Death podcast: https://pod.link/1509923173 If you want to go deeper into some of the topics we discuss, visit John's website: http://www.johnwyatt.com For more resources to help you explore faith and the big questions, visit: http://www.premierunbelievable.com
Venessa Paech is an internationally regarded online community strategist with over 25 years of experience building community online. Venessa is also a PhD candidate studying the intersection of AI and community, and a global authority on communities and community management. In the first Cohere episode of 2023, Venessa joins Bill Johnston and Dr. Lauren Vargas to discuss the quickly evolving role of AI in our digital experiences, how AI is currently playing a role in online communities, and what the future may hold regarding our collective relationship with AI.

Key Quote: "It's still a relationship business. It's just we now have relationships with tools and machines in a new way: in a more anthropomorphized way and in ways that mimic our own thinking and behavior sufficiently that we do need to recontextualize them. So how do we do that in a way that still prioritizes and centers the human work of what we're doing and brings us to those core community protocols of: How are we building a healthy, thriving, constructive space for constituents? Is it accessible? Is it productive in meaningful ways? Is it relevant? And honoring the context, always honoring our context, which is one of the biggest problems we do see with so many different sorts of automated and/or AI tools, is they tend to flatten and standardize context, because that is how they operate. … But for community, which is typically a smaller, more intimate, and more nuanced sort of cluster of relations and ties, that does not work."

Resources From This Episode: the All Things in Moderation conference; SWARM (Australia's Community Management Conference); Australian Community Managers; books by Kate Crawford, Carrie Melissa Jones and Charles Vogl, Howard Rheingold, and Adrian Speyer; and Venessa's scholarship.
From search engines to chatbots to driverless taxis – artificial intelligence is increasingly a part of our daily lives. But is it always ethical? In this episode, Katie Barnfield explores some of the moral questions raised by new developments in smart technology. Leading researcher Dr Kate Crawford tells us about the powerful AI art software that reinforces gender stereotypes. We'll hear from Bloomberg technology columnist Parmy Olson about the eyebrow-raising conversation she had with Meta's new chatbot. As driverless 'robotaxis' become more popular in China and the US, we'll look at the difficult moral choices involved in their design. And how would you feel about AI that can read your emotions? We'll hear why some companies have decided it's a step too far. Presenter/producer: Katie Barnfield (Image: Robot using AI. Credit: Getty)