POPULARITY
Book Club Podcast? Before we even got to the News and Research, this week we discussed the AI-related books we're currently reading:
Dan's reading:
Where Good Ideas Come From, by Steven Johnson (TED Talk)
Why Data Science Projects Fail, by Douglas Gray and Evan Shellshear (An interview with Evan)
Ray's reading:
The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, by Kate Crawford (Wikipedia page)

News Links
Links to the reports and news we discuss in the episode:
OpenAI's new Education newsletter https://openaiforeducation.substack.com/
Ethan Mollick's new "AI in Education: Leveraging ChatGPT for Teaching" course on Coursera https://www.coursera.org/learn/wharton-ai-in-education-leveraging-chatgpt-for-teaching
World Economic Forum "Future of Jobs report" https://www.weforum.org/publications/the-future-of-jobs-report-2025/infographics-94b6214b36/
Student expelled and deported because they were accused of using ChatGPT by their professor. So they're suing their professor https://www.fox9.com/video/1574324
Digital Education Council Global AI Faculty Survey 2025 https://www.digitaleducationcouncil.com/post/digital-education-council-global-ai-faculty-survey
We'll discuss this report with one of the authors in next week's episode
UK government policy paper on "Generative artificial intelligence (AI) in education" https://www.gov.uk/government/publications/generative-artificial-intelligence-in-education/generative-artificial-intelligence-ai-in-education
Year13 Case Study on AI use https://news.microsoft.com/en-au/2024/12/13/guiding-school-leavers-with-ai-support-year13s-mission-to-democratise-opportunities-for-young-people/
AI Use by industry employees - US, 2024 https://www.nber.org/papers/w32966
In the discussion of energy use by AI, Ray mentioned some stats from this research report: "The Carbon Emissions of Writing and Illustrating Are Lower for AI than for Humans" https://arxiv.org/ftp/arxiv/papers/2303/2303.06219.pdf

Research Papers
And finally, links to the research papers we discussed this week:
ChatGPT and Its Educational Impact: Insights from a Software Development Competition https://arxiv.org/abs/2409.03779
How to Align Large Language Models for Teaching English? Designing and Developing LLM based-Chatbot for Teaching English Conversation in EFL, Findings and Limitations https://arxiv.org/abs/2409.04987
AI Meets the Classroom: When Does ChatGPT Harm Learning? https://arxiv.org/abs/2409.09047
Are Large Language Models Good Essay Graders? https://arxiv.org/abs/2409.13120
An Education Researcher's Guide to ChatGPT https://osf.io/spbz3
A Step Towards Adaptive Online Learning: Exploring the Role of GPT as Virtual Teaching Assistants in Online Education https://osf.io/preprints/edarxiv/rw45b
The AI Assessment Scale (AIAS) in action: A pilot implementation of GenAI-supported assessment https://ajet.org.au/index.php/AJET/article/view/9434
On this final episode of Byte Into IT for the year, the whole crew is in and they're joined by Kate Crawford, an internationally leading scholar of artificial intelligence and its impacts. Her most recent book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, explores AI's rising impact.
For this episode of the Global Exchange podcast, Colin Robertson talks with Solange Marquez and Andres Rozental about the recent Mexican election and how the new administration might impact North American relations. // Participants' bios: - Solange Marquez is a professor at the Law School of the National Autonomous University of Mexico (UNAM). A former VP of the Mexican Council on International Affairs (Comexi), she is its representative in Canada. Solange is also a CGAI Fellow. - Andres Rozental served as Mexico's ambassador to Sweden and the United Kingdom and as deputy foreign minister. He is the Founding President of the Mexican Council on Foreign Relations. He holds the lifetime rank of eminent ambassador of Mexico. // Host bio: Colin Robertson is a former diplomat and Senior Advisor to the Canadian Global Affairs Institute, www.cgai.ca/colin_robertson // Read & Watch: - "Grands Diplomates: Les maîtres des relations internationales de Mazarin à nos jours", by Hubert Védrine: https://www.lisez.com/livre-grand-format/grands-diplomates/9782262101398 - "Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence", by Kate Crawford: https://yalebooks.yale.edu/book/9780300264630/atlas-of-ai/ // Recording Date: June 19, 2024.
It seems like the loudest voices in AI often fall into one of two groups. There are the boomers – the techno-optimists – who think that AI is going to bring us into an era of untold prosperity. And then there are the doomers, who think there's a good chance AI is going to lead to the end of humanity as we know it. While these two camps are, in many ways, completely at odds with one another, they do share one thing in common: they both buy into the hype of artificial intelligence. But when you dig deeper into these systems, it becomes apparent that both of these visions – the utopian one and the doomy one – are based on some pretty tenuous assumptions. Kate Crawford has been trying to understand how AI systems are built for more than a decade. She's the co-founder of the AI Now Institute, a leading AI researcher at Microsoft, and the author of Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Crawford was studying AI long before this most recent hype cycle. So I wanted to have her on the show to explain how AI really works. Because even though it can seem like magic, AI actually requires huge amounts of data, cheap labour and energy in order to function. So even if AI doesn't lead to utopia, or take over the world, it is transforming the planet – by depleting its natural resources, exploiting workers, and sucking up our personal data. And that's something we need to be paying attention to.

Mentioned:
"ELIZA—A Computer Program For the Study of Natural Language Communication Between Man And Machine" by Joseph Weizenbaum
"Microsoft, OpenAI plan $100 billion data-center project, media report says," Reuters
"Meta 'discussed buying publisher Simon & Schuster to train AI'" by Ella Creamer
"Google pauses Gemini AI image generation of people after racial 'inaccuracies'" by Kelvin Chan and Matt O'Brien
"OpenAI and Apple announce partnership," OpenAI
Fairwork
"New Oxford Report Sheds Light on Labour Malpractices in the Remote Work and AI Booms" by Fairwork
"The Work of Copyright Law in the Age of Generative AI" by Kate Crawford and Jason Schultz
"Generative AI's environmental costs are soaring – and mostly secret" by Kate Crawford
"Artificial intelligence guzzles billions of liters of water" by Manuel G. Pascual
"S.3732 – Artificial Intelligence Environmental Impacts Act of 2024"
"Assessment of lithium criticality in the global energy transition and addressing policy gaps in transportation" by Peter Greim, A. A. Solomon, and Christian Breyer
"Calculating Empires" by Kate Crawford and Vladan Joler

Further Reading:
"Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence" by Kate Crawford
"Excavating AI" by Kate Crawford and Trevor Paglen
"Understanding the work of dataset creators" from Knowing Machines
"Should We Treat Data as Labor? Moving beyond 'Free'" by I. Arrieta-Ibarra et al.
This episode of Rhetoricity features members of the MLA-CCCC Joint Task Force on AI and Writing: Antonio Byrd, Holly Hassel, Sarah Z. Johnson, Anna Mills, and Elizabeth Losh. The task force also includes Leonardo Flores, David Green, Matthew Kirschenbaum, and A. Lockett. In July 2023, that task force published a working paper laying out issues, principles, and recommendations related to the effects of generative artificial intelligence tools on college writing courses. In this episode's roundtable discussion, these task force members clarify some of the terminology around AI technologies, reflect on the process of writing the working paper, and discuss the pedagogical, historical, and labor implications of large language models for students and teachers working in higher education. This episode is part of The Big Rhetorical Podcast Carnival 2023, which runs from August 28–31. The theme of this year's carnival is "Artificial Intelligence: Applications and Trajectories," and it features a keynote by Dr. Isabel Pedersen. Other participating podcasts include 10-Minute Tech Comm; Defend, Publish and Lead; Kairoticast; Live Theory; Neurodissent; Pedagogue; TC Talk; and Writing Remix.

Here's a list of some of the materials referenced in this episode:
The task force's Quick Start Guide
TextGenEd: Teaching with Text Generation Technologies, including Antonio Byrd's chapter "Using LLMs as Peer Reviewers for Revising Essays"
Anna Mills's How Arguments Work
Black in AI
Kate Crawford's Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence
The White House's Blueprint for an AI Bill of Rights
Kathryn Conrad's "A Blueprint for an AI Bill of Rights for Education"
Hugging Face
The Wall Street Journal's "Cleaning Up ChatGPT Takes Heavy Toll on Human Workers"
The Washington Post's "Behind the AI Boom, an Army of Overseas Workers in 'Digital Sweatshops'"
Memes of the brawl in Montgomery, Alabama

This episode features a clip from "Artificial Problems" by Smoked Meat Fax Machine. Episode Transcript
Since Chris is just getting back from vacation this week, we're re-sharing one of our favorite episodes. You might be feeling that artificial intelligence is starting to seem a bit like magic. Our guest points out that AI, once the subject of science fiction, has seen the biggest rise of any consumer technology in history and has outpaced the uptake of TikTok, Instagram and Facebook. As we see AI becoming more of an everyday tool, students are even using chatbots like ChatGPT to write papers. While automating certain tasks can help with productivity, we're starting to see more examples of the dark side of the technology. How close are we to genuine external intelligence? Kate Crawford is an AI expert, research professor at USC Annenberg, honorary professor at the University of Sydney and senior principal researcher at Microsoft Research Lab in New York City. She's also author of “Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.” Crawford joins WITHpod to discuss the social and political implications of AI, exploited labor behind its growth, why she says it's “neither artificial nor intelligent,” climate change concerns, the need for regulation and more.
You might be feeling that artificial intelligence is starting to seem a bit like magic. Our guest this week points out that AI, once the subject of science fiction, has seen the biggest rise of any consumer technology in history and has outpaced the uptake of TikTok, Instagram and Facebook. As we see AI becoming more of an everyday tool, students are even using chatbots like ChatGPT to write papers. While automating certain tasks can help with productivity, we're starting to see more examples of the dark side of the technology. How close are we to genuine external intelligence? Kate Crawford is an AI expert, research professor at USC Annenberg, honorary professor at the University of Sydney and senior principal researcher at Microsoft Research Lab in New York City. She's also author of “Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.” Crawford joins WITHpod to discuss the social and political implications of AI, exploited labor behind its growth, why she says it's “neither artificial nor intelligent,” climate change concerns, the need for regulation and more.
Artificial intelligence is everywhere, growing increasingly accessible and pervasive. Conversations about AI often focus on technical accomplishments rather than societal impacts, but leading scholar Kate Crawford has long drawn attention to the potential harms AI poses for society: exploitation, discrimination, and more. She argues that minimizing risks depends on civil society, not technology. The ability of people to govern AI is often overlooked because many people approach new technologies with what Crawford calls “enchanted determinism,” seeing them as both magical and more accurate and insightful than humans. In 2017, Crawford cofounded the AI Now Institute to explore productive policy approaches around the social consequences of AI. Across her work in industry, academia, and elsewhere, she has started essential conversations about regulation and policy. Issues editor Monya Baker recently spoke with Crawford about how to ensure AI designers incorporate societal protections into product development and deployment. Resources Learn more about Kate Crawford's work by visiting her website and the AI Now Institute. Read her latest book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Visit the Anatomy of an AI System artwork at the Museum of Modern Art, or see and learn about it virtually here. Working with machine learning datasets? Check out Crawford's critical field guide to think about how to best work with these data.
TOPICS WE EXPLORE
Who is Leon Furze?
Why did Leon go into a Ph.D. in artificial intelligence prior to ChatGPT blowing up?
How does Leon help educators working through the process of how AI can benefit the classroom?
Why is the conversation on cheating a distraction from the bigger issues to tackle?
The big topic of ethics and AI
How do we tackle these topics when most have no idea about these tools at all?
What are the questions Leon is hearing from educators in the field?

YOUR CHALLENGE
Share ideas you gathered from the conversation with us on the socials.
What resonated with you?

RESOURCES MENTIONED IN SHOW
Website: https://leonfurze.com/
LinkedIn: https://www.linkedin.com/in/leonfurze
Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence https://amzn.to/3ZxCAnk
College Essay Is Dead: Generative AI is progressing furiously—and educators need to catch up fast, @StephenMarche writes.

JUST A FEW OF MY FAVORITE IDEAS FROM THE CONVERSATION!
End of High School Essay
Has Elon Musk Infiltrated Generative AI (ChatGPT)?
Unlocking the Power of AI for Next-Level Critical Thinking Skills
On this episode Tricia responds to feedback from listeners like you who requested a few more 'solo shows.'

Discussed in this episode:
Using ChatGPT as a thought partner with your SOGI/GSA group, get the free guide: https://shiftingschools.lpages.co/chatgpt-and-your-gsasogi-group/
Want to try out the Equity and Generative AI course free as a listener? Learn more about the course: https://www.shiftingschools.com/store-2/p/5-day-ai-challenge-83rw3
Email me: tricia(at) shiftingschools (dotcom) to request your free pass.

Learn more about the books I recommended for updating your professional development library:
Deepfakes: The Coming Infocalypse by Nina Schick
Algorithms of Oppression: How Search Engines Reinforce Racism by Safiya Umoja Noble
Power and Prediction: The Disruptive Economics of Artificial Intelligence by Ajay Agrawal, Joshua Gans, and Avi Goldfarb
Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford
You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place by Janelle Shane

Explore my free guide on teaching students about social media campaigns: https://tinyurl.com/onlinecampaignallyed
"Apple and Google are not enforcing their stated terms of service as the Daily Wire App spreads violent anti-LGBTQ hate" from Media Matters: https://www.mediamatters.org/daily-wire/apple-and-google-are-not-enforcing-their-stated-terms-service-daily-wire-app-spreads
What impact will and should artificial intelligence have on assessment? @GOAlearning is hosting a free online event. Join us on March 22! Register here: https://goacademy.zoom.us/meeting/register/tJErdu6rrDoiGtHAiEd8gwnozz1H9shUaLSr
Check out the podcast we will be talking about next week: https://www.beyond6seconds.com/
First aired in 2015, this is an episode about social media, and how, when we talk online, things can quickly go south. But do they have to? In the earlier days of Facebook, we met with a group of social engineers who were convinced that tiny changes in wording can make the online world a kinder, gentler place. We just have to agree to be their lab rats. Because Facebook, or something like it, is where we share and like and gossip and gripe. And before we were as aware of its impact, Facebook had a laboratory of human behavior the likes of which we'd never seen. We got to peek into the work of Arturo Bejar and a team of researchers who were tweaking our online experience, to try to make the world a better place. And even now, just under a decade later, we're still left wondering if that's possible, or even a good idea.

EPISODE CREDITS
Reported by - Andrew Zolli
Original music and sound design contributed by - Mooninites

REFERENCES:
Articles: Andrew Zolli's blog post about Darwin's Stickers (https://zpr.io/ZpMeUnRmVMgP), which highlights another one of these Facebook experiments that didn't make it into the episode.
Books: Andrew Zolli's Resilience: Why Things Bounce Back (https://zpr.io/7fYQ9iDYAQBu) and Kate Crawford's Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (https://zpr.io/9rU5CGSit3W4)

Our newsletter comes out every Wednesday. It includes short essays, recommendations, and details about other ways to interact with the show. Sign up (https://radiolab.org/newsletter)!

Radiolab is supported by listeners like you. Support Radiolab by becoming a member of The Lab (https://members.radiolab.org/) today. Follow our show on Instagram, Twitter and Facebook @radiolab, and share your thoughts with us by emailing radiolab@wnyc.org

Leadership support for Radiolab's science programming is provided by the Gordon and Betty Moore Foundation, Science Sandbox, a Simons Foundation Initiative, and the John Templeton Foundation. Foundational support for Radiolab was provided by the Alfred P. Sloan Foundation.
This week: The future of search and chatbots looks a lot like our ancient past. Why do we keep making the same tools over and over again?

What We Can Do:
⚡️ Addiction is brutal. Help yourself or a loved one or someone you've never met with Shatterproof.
⚡️ I'm so excited to share that my favorite event on Earth, "LA Loves Alex's Lemonade Stand" is finally back. Support pediatric cancer research and buy yourself some tickets to eat food from some of the greatest chefs on the planet.
⚡️ It's a pretty pretty pretty good moment to read Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence and The Alignment Problem: Machine Learning and Human Values.
⚡️ Want to switch your retirement fund to one without fossil fuels? Check out Fossil Free Funds to find mutual funds and ETFs that qualify.

News Roundup
Health & Medicine
It's probably helpful to understand that half of all American hospitals are projected to be in the red this year
American teenage girls are suffering in a very real way. There are a million things we can do, here are a few ways to start to help
E-bike incentives are SO POPULAR
Sure, we need to get toxic gas out of our homes, but also —
We talk about both the upsides and downsides of AI, particularly when it comes to its tangible impact on the environment and argue that if a job can be done using less compute-intensive (and environmentally unfriendly) resources, we should use them, providing several examples of how memoQ approaches this issue. We talk about the history of translation memories, machine translation, some hidden (and not so hidden) gems in memoQ, favorite movies and impactful books.

Timecode:
00:00:20 - Intro
00:02:47 - What makes memoQ special (how did memoQ come to be)
00:06:49 - Balazs and his background in book publication
00:08:25 - Atlas of AI, book by Kate Crawford
00:09:38 - Computational intensity of AI, its impact on the environment
00:13:00 - The ALPAC Report and machine translation
00:17:03 - Are we using more resources than we need
00:20:47 - The four components of AI
00:24:22 - Summa Technologiae
00:27:52 - Where is memoQ using AI and ML
00:33:33 - The big difference between statistical machine translation and neural machine translation
00:35:21 - The environmental impact of data centers
00:40:30 - Why do we refer to neural networks as a black box
00:43:59 - The Match Patch feature and the history of the translation memory technology
00:49:30 - The new kind of translation memory: TM+
00:53:22 - The technology that translators do not like talking about
00:59:05 - Do you see the future where machine translation is the only translation we use
01:03:10 - Privacy
01:04:23 - The main rule of GDPR in one sentence
01:06:40 - The challenges regarding data security when you are a cloud provider
01:11:24 - Do you have a favorite Sci-Fi movie

Links:
Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence: https://www.amazon.com/Atlas-AI-Kate-Crawford/dp/0300209576
The ALPAC Report: https://en.wikipedia.org/wiki/ALPAC and https://nap.nationalacademies.org/resource/alpac_lm/ARC000005.pdf
Balazs' first computer, ZX81: https://en.wikipedia.org/wiki/ZX81
Datasheets for Datasets: https://www.microsoft.com/en-us/research/uploads/prod/2019/01/1803.09010.pdf
Summa Technologiae: https://en.wikipedia.org/wiki/Summa_Technologiae
Neural networks: https://www.ibm.com/cloud/learn/neural-networks

Movies and TV series:
The Andromeda Strain: https://en.wikipedia.org/wiki/The_Andromeda_Strain_(film)
Annihilation: https://en.wikipedia.org/wiki/Annihilation_(film)
Alien: https://en.wikipedia.org/wiki/Alien_(film)
Dark: https://en.wikipedia.org/wiki/Dark_(TV_series)

Useful blog posts or ebooks by memoQ:
Machine Translation: The Complete Guide: https://blog.memoq.com/machine-translation-the-complete-guide
Post-editing Machine Translation (PEMT): Four Things to Know: https://blog.memoq.com/post-editing-machine-translation-pemt-four-things-to-know
Machine Translation vs. Machine Translation Post-editing: Which One to Use and When?: https://blog.memoq.com/machine-translation-vs.-machine-translation-post-editing-which-one-to-use-and-when
The Future of Machine Translation: Where are we Headed?: https://blog.memoq.com/the-future-of-machine-translation-where-are-we-headed
How to Choose a Machine Translation Engine: https://blog.memoq.com/how-to-choose-a-machine-translation-engine
Antariksh Matters #1: Dual-use Dilemmas in the OEWG on Space Threats
— Pranav R Satyanath

The first session of the Open-Ended Working Group (OEWG) on Reducing Space Threats was held last week, between the 9th and 13th of May, in Geneva. The OEWG was created under the requirement of United Nations Resolution 75/36, which called on member states to exchange views on norms, threats and behaviours in outer space.

Deliberations on matters of space security are not new. They've been taking place under the framework of the Prevention of an Arms Race in Outer Space (PAROS), with countries divided between two broad themes: first, whether to regulate space capabilities or to regulate space activities; and second, whether to negotiate legally binding treaties or to agree on non-legally-binding transparency and confidence-building measures.

The latest set of deliberations has attempted to focus on the norms, principles and behavioural guidelines that can be established in order to make space a secure environment for all countries. However, since the space capabilities and space activities carried out by countries are intrinsically linked, and since space assets have both civilian and military applications, any attempt to regulate space activities will have both direct and indirect consequences for the interpretation of international law.

The dilemma of dual-use capabilities was highlighted in a presentation made by David Koplow of Georgetown University, who pointed to the intersection of dual-use capabilities in space and the Law of Armed Conflict (LoAC). Koplow argues that by making it harder to distinguish civilian and military assets in space, countries may be violating a vital tenet of the LoAC. During an armed conflict, any asset of a country used for military purposes can be targeted by the adversary. Therefore, countries must separate their civilian and military assets to the greatest extent possible.

Making this distinction in practice is, however, a challenge, as countries regularly use civilian assets for carrying out military activities. For example, civilian rockets are used to launch military satellites, and the Global Positioning System (GPS) and similar systems are used for both civilian and military purposes. Some countries also use commercial Earth-imaging services for gathering intelligence on an adversary's military capabilities, making them potential targets during a conflict.

One possible solution to this problem was suggested by Almudena Azcárate Ortega, a researcher at the United Nations Institute for Disarmament Research (UNIDIR). Ortega proposed that countries could choose to distinguish their space capabilities into two categories:

Dual-use assets: Space capabilities that are designed to perform both civilian and military functions. GPS satellites and similar systems fall into this category.
Dual-capable assets: Space capabilities that perform civilian functions but can be repurposed for military functions. Satellites used for debris removal or on-orbit servicing fall into this category.

Categorising space assets as dual-use or dual-capable may indeed prove useful. However, some capabilities are more difficult to distinguish than others. For example, it is reported that Ukrainian forces are using Starlink satellites for assisting in drone strikes in Russia. Starlink is a space-based Internet service provided by the American company SpaceX.
It is alleged that Russia attempted cyberattacks on Starlink in order to prevent its use by Ukrainian forces. As dual-use space technologies proliferate to more countries, the need for regulating both capabilities and activities will likely become a greater challenge for the international community.

Cyberpolitik: China's Position in the OEWG (2021-2025) on Information Security
— Megha Pardhi

Between 28 March and 1 April 2022, the UN held the second substantive session of the "Open-ended Working Group on the Security of and the Use of Information and Communications Technologies" (OEWG (2021-2025)). This is the second such working group constituted by the UN on information security. The OEWG (2021-2025) was formed in November 2020 and commenced work in 2021. The final report of the working group will be presented to the UN General Assembly in 2025.

These working groups are the international community's attempts to shape norms governing cyberspace. Developing norms of behavior in cyberspace has been a contested issue for a long time. Russia was among the first countries to propose rules and norms on cyberspace. However, early attempts fell prey to a geopolitical tug of war.

In the recently concluded session, many countries put out statements expressing their positions, suggestions, and concerns. In the statement released by the Chinese delegation, the Chinese government seems most worried about cyberspace norms being used against China, or against whoever does not fit into the US' definition of acceptability. The Chinese delegation made four key points:

Maintaining peace in cyberspace is crucial. The division of cyberspace into peaceful and non-peaceful periods would send the wrong signal to the international community.
Security of cyberspace is necessary for all countries. The statement also has the usual rhetoric of abandoning 'zero-sum thinking' and a 'cold war' mentality.
First-mover advantage in cyberspace should not be weaponized. China objected to the use of unilateral sanctions and to the weaponizing of the first-mover advantage 'some countries' have over others.
The statement claimed that some countries are "deliberately creating closed, exclusive circles for discussing supply chain issues."

Objections over the division of activities into peaceful and non-peaceful periods are understandable. Activities in cyberspace tend to intersect personal and state matters, and wars often blur this distinction. However, normalizing such a division in cyberspace might set a dangerous precedent for norms of behavior in cyberspace.

There is a veiled reference to the Quad in the statement. The reference to "closed, exclusive circles for discussing supply chain issues" is similar to the terminology used when Chinese leaders and foreign ministry spokespersons talk about the Quad and AUKUS. This again reflects the fact that the Quad and AUKUS have got Beijing worried about similar groupings emerging in cyberspace. Beijing's fear of isolation might seem contradictory, as China's own 'Great Firewall' has strived to separate Chinese cyberspace from the rest of the world. However, there is a difference between choosing to stay isolated and being forced into isolation. Currently, Beijing decides the rules of operation in China's cyberspace. If states form an alliance to isolate China in cyberspace, the rules would be different. The Chinese government understands that this could be detrimental to China's long-term interests.

Additionally, the point of the reference to the Quad and AUKUS is not just Beijing's fear of isolation.
These references and the complaints about a 'cold war' mentality also mean China is trying to project itself as a norms follower while portraying others as arm-twisting bullies trying to get their own way. This is most evident in the fourth point of China's statement, which roughly says, "this makes people doubt that the real goal of some countries participating in the UN information security process is to build 'international rules of cyberspace that other countries abide by, but they are above all countries'" (这令人不得不怀疑，某些国家参与联合国信息安全进程的真实目标是，构建 "其他各国都遵守，而其自身则凌驾于各国的网络空间国际规则").

The war in Ukraine also loomed over the second session of the OEWG (2021-2025). Some states expressed concerns over the way the Ukraine war will shape behavior in cyberspace, and raised objections over cyber activities during the war. China's objection to the use of unilateral sanctions and the weaponizing of first-mover advantage by 'some countries' most likely refers to the sanctions imposed by the US and its allies on Russia. Again, Chinese leaders have used similar terminology to express their displeasure over sanctions on Russia.

Antariksh Matters #2: How Adversaries Might Challenge India's Use of Space
— Aditya Ramanathan

Sceptics sometimes ask me how, in fact, India's space assets could be threatened in the future and what forms such threats could take. It's true that it's hard to envisage what such attacks might look like. Our understanding of space warfare is limited by a merciful lack of precedent. Limited as our understanding may be, it's worth trying to think of the conditions under which India's chief adversaries, China and Pakistan, might use space warfare capabilities against it. Broadly, India could face space warfare under three types of circumstances: peacetime (meaning the absence of unusual tensions), crisis (a spike in tensions and/or standoffs and skirmishes), or conflict (a state of violent hostilities in one or more theatres). In the table below, I attempt to map the tools of space warfare to the circumstances India is likely to face.

Peacetime
In peacetime, adversaries will focus on demonstrating capabilities, probing defences, mounting disruptive cyber attacks, and infiltrating computer worms and viruses. Demonstrations of capabilities can help an adversary deter future threats. These could include 'dazzling' satellites with lasers, electronic jamming or spoofing, or conducting non-kinetic rendezvous and proximity operations around a satellite.

Crises
In crises, adversaries will primarily want to signal not just the existence of a capability but also the resolve to use it imminently if their demands are not met. Therefore, while an RPO craft circling around a satellite in peacetime is mainly a demonstration of capability, in a crisis it is a coercive act meant to shape the outcomes of high-stakes bargaining.

Conflict
In conflict, the tools of space warfare will most likely be used for effect: to actively deny the use of space and consequently degrade the effectiveness of the adversary's Earth-based forces. An adversary could strike in six ways during a conflict:

A splendid first strike could deny India the effective use of space. Such a strike, usually carried out at the outset of a conflict (the frequently discussed 'space Pearl Harbor'), would probably be part of a broader plan to degrade Indian forces with simultaneous strikes in space and on Earth.
A graduated response would involve managing an exchange of blows and seeking to end it on favourable terms. This would entail targeting specific space capabilities in retaliation and attempting to dissuade the other side from further action.
A focused strike targets specific capabilities for a finite period of time in a bid to degrade specific Earth-based capabilities. An Indian strike on Chinese ISR satellites over the Indian Ocean is an example of such a strike.
Disruptive strikes create uncertainty about the reliability of space assets. These are low-grade, seemingly random strikes that force the state under attack to continually react rather than seize the initiative.
Disproportionate retaliation occurs in response to a smaller strike and is meant to dissuade the adversary from launching further attacks. Disproportionate retaliation must remain partial or temporary to provide the adversary an incentive to halt space warfare.
A catalytic strike seeks to precipitate third-party intervention in a conflict and force its termination on the best terms available. The American political scientist Vipin Narang argues that Pakistan has, in the past, used the catalytic threat of nuclear strikes to hasten American intervention in crises with India. A kinetic attack from a future Pakistani ASAT missile could catalyse frantic calls for ending a conflict that is tilting in India's favour.

To be clear, none of these types of strikes falls into discrete or self-contained categories. A focused strike can lead to a graduated response, which can, in turn, devolve into disruptive strikes or escalate into disproportionate retaliation. These categories are simply meant to clarify the likely intent behind the waging of space warfare.

There are reasons for the aforementioned sceptics to be, well, sceptical about the value of an exercise such as this. In the real world, any target state would find it difficult to accurately gauge an adversary's intentions while an attack is underway. Also, future contingencies are likely to take unexpected forms and contain surprises. However, the value of this sort of undertaking is that it can (a) help clarify the sort of situations that can trigger an attack on space assets, and (b) provide clarity on the sort of challenges India will need to deter in the coming years. As the much-used adage goes, plans are useless but planning is indispensable.

Siliconpolitik: The Transatlantic Semiconductor Alliance in the Making
— Pranay Kotasthane
(First published on takshashila.org.in)

Over the last couple of years, we have consistently argued that in order to make the semiconductor supply chain resilient, plurilateral cooperation is a necessity, not a choice. Subsidising semiconductor firms in the hope of achieving national self-sufficiency is counterproductive and futile. Futile in the limited sense that such measures won't achieve the aim of full indigenisation. Counterproductive because a sole focus on domestic subsidies would displace the opportunity to really make a resilient, China-independent, cutting-edge semiconductor supply chain.

Nevertheless, as so often happens, subsidies are an easier policy option. This pro-business instrument—as against a pro-market one—also suits semiconductor firms better, since subsidies finance their heavy capital investments in the short term. And so, we had a number of national governments—the US, the EU, Japan, South Korea, China, India, and Taiwan to name a few—launch their own versions of semiconductor subsidy programmes.

However, it does seem that the tide is now turning from a public and foreign policy perspective.
Apart from subsidies, governments are now realising the value of coordinating their efforts. In an earlier post, I had discussed a reported semiconductor alliance involving the US, Japan, South Korea, and Taiwan. Although we haven't heard about this grouping since then, there is now a new grouping that we need to take note of.

The US and EU announced a new initiative on similar lines as part of the US-EU Trade and Technology Council (TTC), which concluded in Paris on Monday, 16th May. The detailed joint statement shows that the scope of this transatlantic partnership on technology is vast. Initiatives were announced in areas as diverse as solar supply chains, climate and cleantech, rare earth materials, technology standards, and semiconductors. For this post, let's focus on understanding what the announcements on semiconductors mean for the US, the EU, and India.

The Transatlantic Approach for Semiconductors
As part of the initiative, the two parties agreed on two key areas:

First, the US and the EU will coordinate their respective chip investments so that they don't end up in a 'subsidy race' to the bottom. In practice, this means that the US and EU are likely to share information with each other on their planned fab investments, the companies they plan to target, and so on. In ideal circumstances, they would like to reach a stage where the EU has enough production capacity for automotive chips, while the US invests in production capacity for leading-edge nodes. In the future, the two partners would also want to agree on preferential treatment for their own fabless companies to access the fabs in each other's national jurisdiction. For now, they have agreed on consulting each other on subsidies for semiconductor firms.

Second, the two partners agreed to develop an early warning system for detecting supply chain disruptions. A similar announcement was also made as part of the Quad Semiconductor Supply Chain Initiative during the last Summit meeting, where the four members agreed to "map capacity, identify vulnerabilities, and bolster supply-chain security for semiconductors and their vital components." The motivation for this initiative is to keep a closer eye on wafer capacities across the globe so that stockpiling or additional capacity addition can be coordinated.

Both moves indicate a willingness to collaborate with partners instead of going it alone.

The India Angle
These moves are consequential for India. Apart from the US, the EU has a Trade and Technology Council arrangement with just one other nation-state—India. India should use this arrangement and become a part of this semiconductor supply chain alliance. There's also the opportunity to combine the US-EU effort with the Quad's Semiconductor Supply Chain Initiative, as the goals of the two mechanisms are identical.

With these new semiconductor alliances taking shape, it's important for India to become a part of these formations. Foreign Policy in the Information Age needs to go beyond the traditional defensive approach of 'protecting' one's critical technologies and instead become a key driver for enhancing India's high-tech power.

Our Reading Menu
[Article] Why Drones Have Not Revolutionized War: The Enduring Hider-Finder Competition in Air Warfare by Antonio Calcara, Andrea Gilli, Mauro Gilli, Raffaele Marchetti, and Ivan Zaccagnini
[Book] The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford

This is a public episode.
If you would like to discuss this with other subscribers or get access to bonus episodes, visit hightechir.substack.com
Kate Crawford, author of "Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence," is a leading international scholar of the social and political implications of artificial intelligence. This program looks at the intersection of art, activism and artificial intelligence. Series: "UC Berkeley Graduate Lectures" [Humanities] [Science] [Show ID: 37967]
What happens when artificial intelligence saturates political life and depletes the planet? How is AI shaping our understanding of ourselves and our societies? In The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press, 2021), Kate Crawford reveals how this planetary network is fuelling a shift toward undemocratic governance and increased racial, gender, and economic inequality. Drawing on more than a decade of research, the award-winning scholar of science and technology reveals how AI is a technology of extraction: from the energy and minerals needed to build and sustain its infrastructure, to the exploited workers behind "automated" services, to the data AI collects from us. Rather than taking a narrow focus on code and algorithms, Crawford offers us a political and a material perspective on what it takes to make artificial intelligence and where it goes wrong. While technical systems present a veneer of objectivity, they are always systems of power. This is an urgent account of what is at stake as technology companies use artificial intelligence to reshape the world. Matthew Jordan is a university instructor, funk musician, and clear writing enthusiast. He studies the history of science and technology, driven by the belief that we must understand the past in order to improve the future. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
Kate Crawford joins PTO to talk about her book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. We talked about why some proponents of AI are so convinced - against so much evidence - that systems of machine learning will truly be able to replicate human intelligence, and how the extraordinary material impacts of AI systems - from rare earth mining to electrical power needs - tend not to be discussed. We also talked about the early history of statistical analysis and the new discipline's relationship to eugenics. And finally we discussed how workplace surveillance systems, in spite of their apparent novelty, are in some ways replicating the practices of managerial control of the early factory system.
Sam and Emma host Kate Crawford, Research Professor at the University of Southern California Annenberg, to discuss her recent book Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, on our relationship with big tech, and the concept of the AI industry as a continuation of the extractive practices and power dynamics in the workplace that we have been building for centuries. They start off with a discussion on how Professor Crawford got into AI, and how her perspective on it has changed, from the view of the industry as one defined by algorithmic capacity and data infrastructure to an industry of extraction and exploitation, jump-started by diving into the stages of production from lithium mines to Amazon fulfillment warehouses to labs. Getting into the abstraction and depoliticization of artificial intelligence, Professor Crawford explores how Big Tech has become perhaps the most concentrated industry since the railroads, hiding behind an ideology of technocratic utopianism that has built a fear of regulation, a consolidation of power, and a growing perspective that it is essential to our way of life. Next, she, Emma, and Sam dive into the relationship between AI and labor practices, looking at the hyper "efficiency" created by workplace algorithms (e.g. at Amazon warehouses) as a revolutionary progression of Frederick Taylor's factory ideology, and exploring how this disembodied management serves to hamper organizing, particularly with the effect of the bias to believe technology. They then move on to the importance of highlighting the overlap of the surveillance state and corporate surveillance, this creation of a multi-headed hydra, and why the American public only sees the state as a threat, before expanding on how the surveillance apparatus is already working to reinforce power structures that have outpaced regulation, looking particularly at the police state and counter-terrorism measures. They wrap up the interview by discussing the future of regulation in the industry and the impact anti-trust measures could have (and why that won't be enough). Sam and Emma read some Majority Reporter updates on the mass unionization efforts going on in the US, and discuss Manchin's posturing with Bernie (physically). And in the Fun Half: Nomiki Konst joins Sam and Emma as they cover the New York democratic machine getting blindsided by a democratic socialist woman of color taking on an established incumbent, inspiring Chuck Schumer to take a break from politics, Chris Leal for TX House District #114 gives some updates on his campaign, and the MR crew discusses the judgments of Meghan McCain in the workplace for simply acting the part she's paid for. They also cover the silencing of Bari Weiss with her free airtime on CNN, Dan Bongino's poor connection in the Glenn Greenwald telephone game, plus, your calls and IMs! Become a member at JoinTheMajorityReport.com Subscribe to the AMQuickie newsletter here. Join the Majority Report Discord! http://majoritydiscord.com/ Get all your MR merch at our store https://shop.majorityreportradio.com/ (Merch issues and concerns can be addressed here: majorityreportstore@mirrorimage.com) You can now watch the livestream on Twitch Check out today's sponsors: Podium makes doing business as easy as sending a text. All your employees can text from a single inbox, offering a smoother experience for your customers.
Whether you're answering questions, collecting reviews, scheduling appointments and deliveries, or dealing with payment collection – all you have to do is send a text. Stay ahead of the competition with Podium – they have free plans for growing businesses, plus all the power those businesses need to scale. Get started free today at Podium.com/MAJORITY.
LiquidIV: The hot summer months are here and we need to be proactive to keep our bodies fueled up and hydrated. Liquid I.V. contains 5 essential vitamins: more Vitamin C than an orange and as much potassium as a banana. Healthier than sugary sports drinks, there are no artificial flavors or preservatives and less sugar than an apple. Grab your Liquid I.V. in bulk nationwide at Costco, or get 25% off when you go to liquidIV.com and use code MAJORITYREP at checkout. That's 25% off ANYTHING you order when you get better hydration today using promo code MAJORITYREP at liquidIV.com.
Shopify: Scaling your business is a journey of endless possibility. Shopify is here to help, with tools and resources that make it easy for any business to succeed, from down the street to around the globe. Shopify powers over 1.7 million businesses, from first sale to full scale. Shopify gives entrepreneurs the resources once reserved for big business, so upstarts, start-ups, and established businesses alike can sell everywhere, synchronize online and in-person sales, and effortlessly stay informed. Go to shopify.com/majority for a FREE fourteen-day trial and get full access to Shopify's entire suite of features!
Support the St. Vincent Nurses today as they continue to strike for a fair contract! https://action.massnurses.org/we-stand-with-st-vincents-nurses/
Subscribe to Discourse Blog, a newsletter and website for progressive essays and related fun partly run by AM Quickie writer Jack Crosbie. https://discourseblog.com/
Subscribe to AM Quickie writer Corey Pein's podcast News from Nowhere at https://www.patreon.com/newsfromnowhere
Check out Matt's show, Left Reckoning, on YouTube, and subscribe on Patreon!
Subscribe to Matt's other show Literary Hangover on Patreon!
Check out The Letterhack's upcoming Kickstarter project for his new graphic novel! https://www.kickstarter.com/projects/milagrocomic/milagro-heroe-de-las-calles
Check out Matt Binder's YouTube channel!
Subscribe to Brandon's show The Discourse on Patreon!
Check out The Nomiki Show live at 3 pm ET on YouTube at patreon.com/thenomikishow
Check out Jamie's podcast, The Antifada, at patreon.com/theantifada, on iTunes, or at twitch.tv/theantifada (streaming every Monday, Wednesday, Thursday and Friday at 7pm ET!)
Follow the Majority Report crew on Twitter: @SamSeder @EmmaVigeland @MattBinder @MattLech @BF1nn @BradKAlsop
Donate to Chris in Dallas's campaign for the Texas State House here!
Hear from Kate Crawford, Honorary Professor at the University of Sydney and one of the world's foremost scholars on the social and political implications of artificial intelligence. Timed for the Australian launch of her new book, 'Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence', Kate speaks with Fenella Kernebone to explore how artificial intelligence is not objective or neutral but a technology of extraction. Hailed in Nature as an essential read, Kate's book "exposes the dark side of AI success" by taking us on a journey that uncovers how planetary computation is fueling a shift toward undemocratic governance and increased inequity. For further links and resources, including a transcript for this podcast, visit our website: https://bit.ly/3yLBV4P This conversation was recorded on 6 July, 2021.
Shobita and Jack discuss the Innovation and Competition Act making its way through the US Congress, as well as the most up-to-date geopolitics of COVID, including the TRIPS waiver and the "lab leak" theory. And we interview Kate Crawford, a leading scholar on the social and political implications of artificial intelligence.
- Kate Crawford (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
- Kate Crawford and Vladan Joler (2018). Anatomy of an AI System.
- Alex Campolo and Kate Crawford (2020). "Enchanted Determinism: Power without Responsibility in Artificial Intelligence". Engaging Science, Technology, and Society. 6: 1-19.
Transcript and study questions available at www.thereceivedwisdom.org.
Artificial intelligence is perhaps the most hyped technology in the world. In today's episode, we're going to hear a discussion that invites the listener to think about how money, power and other troubling forces and ideas that shape our society are built into AI systems and into the ways we think about deploying them. In May, the University of Washington's Tech Policy Lab and Center for an Informed Public cohosted a virtual book talk featuring Kate Crawford, a leading scholar of the social implications of artificial intelligence and author of Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, published this spring by Yale University Press, in conversation with Ryan Calo, co-founder of the Center for an Informed Public, founding co-director of the interdisciplinary UW Tech Policy Lab and a UW School of Law professor.
Dan and Lee take stock of the current climate for AI, from security and chip shortages to policy and government strategy.
Kate Crawford podcast and commentary on the planet/sustainability cost of AI - The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence: Crawford, Kate: Amazon.com.au: Books
European legislation on AI - Europe seeks to limit use of AI in society - BBC News
Microsoft purchase of Nuance - Microsoft makes $20bn bet on speech AI firm Nuance - BBC News
Disney Project Kiwi - Disney Imagineering's Project Kiwi is a free-walking robot that will make you believe in Groot - TechCrunch
Who wrote the Dead Sea Scrolls - Who wrote the Dead Sea Scrolls? Digital handwriting analysis and artificial intelligence offer new clues - ABC News
Building AI Partners - I created an AI boyfriend. Here's how it went - Hack - triple j (abc.net.au)
Call for $250m injection into AI economy - Tech industry urges $250 million AI budget cash splash (afr.com)
Artificial Intelligence is everywhere you look, from mapping our route to targeting advertisements. But how can we maintain agency and ensure AI is trustworthy when it is developing so much faster than policy? What role should the European Parliament and national governments play in regulating these technologies, and what value would international cooperation bring? We'll answer these questions with Eva Kaili, Greek MEP and Chair of the European Parliament's Panel for the Future of Science and Technology; Anthony Gooch, Director of the OECD's Public Affairs and Communications Directorate and chair of the Global Parliamentary Network; Jeremias Prassl, AI expert and Deputy Director of the Institute of European and Comparative Law at the University of Oxford; and Kate Crawford, AI scholar and author of the book "Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence". This podcast is the first in a series on artificial intelligence, brought to you by the OECD's Global Parliamentary Network and the European Parliament's Panel for the Future of Science and Technology, also known as STOA.
Guests: Eva Kaili, Anthony Gooch, Jeremias Prassl, Kate Crawford
Host: Robin Allison Davis
To learn more about the EU Parliament's Panel for the Future of Science and Technology's work on AI, go to: https://www.europarl.europa.eu/stoa/en/home/highlights
To learn more about the OECD's work on AI, go to: https://oecd.ai
Get the latest OECD content delivered directly to your inbox! Subscribe to our newsletters: www.oecd.org/newsletters
Follow us on social media: www.oecd.org/social-media
©OECD Podcast 2021
An MIT study showed that fake news travels six times faster on Twitter than real news. Various phenomena occurring online have become a powerful weapon and fertile territory for spreading information based on bias. Is AI a weapon of dehumanization in the wrong hands, or simply a tool of progress? What is the cost of creating machine learning and similar algorithms? A sprawling, wild digital ecosystem based on an attention economy, engagement-driven business models and behavioral advertising is editing reality, thus becoming a policymaker in its own right. The utopian idea of the Internet and existing perceptions of AI and robots, or, as Meredith Broussard has termed it, technochauvinism, are being challenged and rethought. Where does the link between technology and justice lie? Is the relationship between competition and privacy law on a good path? Why is ethics an important factor in the digital environment? In a new episode of the k/talks podcast, we discuss these and many more questions and concepts with Ivana Bartoletti, a Technical Director at Deloitte and an internationally recognised thought leader in the field of responsible technology. Enjoy!
More info about our guest at www.ivanabartoletti.co.uk
Exclusively for our listeners, use the discount code (KTALKS) for the purchase of the book "An Artificial Revolution" by Ivana Bartoletti (paperback, eBook or paperback and eBook bundle).
This episode is in English.
Books:
- Bartoletti, Ivana, "An Artificial Revolution", The Indigo Press (2019)
- Broussard, Meredith, "Artificial Unintelligence: How Computers Misunderstand the World", MIT Press (2018)
- Crawford, Kate, "Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence", Yale University Press (2021)
- Dignum, Virginia, "Responsible Artificial Intelligence", Springer (2019)
- Eubanks, Virginia, "Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor", St Martin's Press (2018)
- Pasquale, Frank, "The Black Box Society", Harvard University Press (2016)
- Veliz, Carissa, "Privacy is Power: Why and How You Should Take Back Control of Your Data", Transworld Publishers (2020)
Movies:
- "Coded Bias" – Director: Shalini Kantayya (2020)
- "Social Dilemma" – Director: Jeff Orlowski (2020)
- "iHuman" – Director: Tonje Hessen Schei (2019)
What is the Atlas of AI? Why is it important? How is AI an industry of extraction? How is AI impacting the planet? What can be done? To answer these questions and more, we welcome Dr. Kate Crawford to the show to discuss her new book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Dr. Kate Crawford is a leading scholar of the social and political implications of artificial intelligence. She is a Research Professor of Communication and STS at USC Annenberg, a Senior Principal Researcher at Microsoft Research in New York City, and the inaugural Visiting Chair for AI and Justice at the École Normale Supérieure in Paris. Full show notes for this episode can be found at Radicalai.org. If you enjoy this episode, please make sure to subscribe, submit a rating and review, and connect with us on Twitter at twitter.com/radicalaipod.