Trevor Paglen: American artist, geographer, and author
“Le monde selon l'IA” at the Jeu de Paume, Paris, April 11 to September 21, 2025. Interview with Antonio Somaini, professor of film, media, and visual culture theory at the Université Sorbonne Nouvelle and general curator of the exhibition, by Anne-Frédérique Fer, in Paris, April 28, 2025, running time 16'50. © FranceFineArt. https://francefineart.com/2025/05/10/3616_le-monde-selon-l-ia_jeu-de-paume/ Press release. General curator: Antonio Somaini. Associate curators: Ada Ackerman, Alexandre Gefen, Pia Viewing. From April 11 to September 21, 2025, the Jeu de Paume presents an exhibition exploring the links between artificial intelligence and contemporary art, the first of this scale anywhere in the world. Developed at accelerating speed across every field of society, artificial intelligence today provokes astonishment, fear, enthusiasm, and skepticism. Le monde selon l'IA presents a selection of works by artists who, over the past ten years, have taken up these questions in art, photography, cinema, sculpture, and literature. It unveils works, many of them never shown before, by artists from the French and international scenes such as Julian Charrière, Grégory Chatonsky, Agnieszka Kurant, Christian Marclay, Trevor Paglen, Hito Steyerl, and Sasha Stiles. From “analytical AI,” on which machine vision and facial recognition systems are based, to “generative AI,” capable of producing new images, sounds, and texts, the exhibition examines how these technologies are upending creative processes and redefining the boundaries of art, while also interrogating their social, political, and environmental stakes. Time capsules punctuate the route, in the form of display cases suggesting historical and genealogical links between these contemporary phenomena and various objects from the past. Beyond technophile fascination or technophobic rejection, the Jeu de Paume offers, through this exhibition, a reflection on how AI is transforming our visual and sensory relationship to the world, as well as our societies. Artificial intelligence, a notion introduced in 1955, today designates the machine learning that is transforming every domain of society, with applications replacing human action in detection, decision-making, and the creation of textual and visual content. These advances raise ethical, economic, political, and social issues, notably concerning privacy and discrimination, while upending our relationship to images and texts. In the artistic field, AI is redefining the processes of creation, production, and reception, throwing the notions of creativity, originality, and copyright into crisis. The artists in the exhibition mobilize these technologies both to question their consequences for art and society and to experiment with possible new forms of expression. [...] Hosted by Acast. See acast.com/privacy for more information.
Through her “archaeology of the future” design approach, the Lebanese-born, Paris-based architect Lina Ghotmeh has firmly established herself as a humanist who brings a profound awareness of past, present, and presence to all that she does. In the two decades since winning her breakthrough commission, the Estonian National Museum in Tartu, her practice has taken off, with Ghotmeh swiftly becoming one of today's fastest-rising architectural stars. Just a week after we recorded this episode of Time Sensitive, she was named the winner of a competition to design the British Museum's Western Range, and shortly after that, she was announced as the architect of the new Qatar Pavilion in the historic Giardini of Venice; she is also the designer of the Bahrain Pavilion at the just-opened 2025 Osaka Expo. Across her high-touch, high-craft projects, whether a brick-clad Hermès leather-goods workshop in Normandy, France, completed in 2023; the timber-framed 2023 Serpentine Pavilion in London; or the concrete-walled Stone Garden apartment tower (2020) in Beirut, Ghotmeh celebrates the hand. On the episode, Ghotmeh reflects on the long-view, across-time qualities of her work and outlines what she believes is architecture's role in shaping a better world ahead. Special thanks to our Season 11 presenting sponsor, L'École, School of Jewelry Arts.
Show notes:
Lina Ghotmeh
[5:01] “The Shape of Time: Remarks on the History of Things”
[5:01] George Kubler
[5:01] Trevor Paglen
[8:41] “The Long View: Why We Need to Transform How the World Sees Time”
[8:41] Tim Ingold
[11:15] “Windows of Light”
[11:15] “Lecture: Lina Ghotmeh”
[12:06] Beatriz Colomina
[12:06] “Are We Human?”
[19:58] Gaston Bachelard
[24:04] Olga de Amaral
[24:04] Cartier Foundation
[24:04] Juhani Pallasmaa
[24:04] “The Eyes of the Skin”
[26:39] Luis Barragán
[31:09] Stone Garden (2020)
[31:09] Hermès Workshops (2023)
[36:36] Peter Zumthor
[36:36] “Atmospheres”
[41:53] Khalil Khouri
[44:51] Jean Nouvel
[44:51] Norman Foster
[44:51] Estonian National Museum (2016)
[46:41] Renzo Piano
[46:41] Richard Rogers
[46:41] Maya Lin
[46:41] Dan Dorell
[46:41] Tsuyoshi Tane
[50:45] “The Poetic, Humanistic Architecture of Lina Ghotmeh”
[51:40] Rimbaud Museum
[54:48] “Light in Water” (2015)
[54:48] The Okura Tokyo
[59:22] Les Grands Verres, Palais de Tokyo (2017)
[59:44] Zero-Carbon Hotel Concept (2019)
[59:42] Serpentine Pavilion (2023)
[1:04:11] Osaka Expo Bahrain Pavilion (2025)
Complex problems cannot be solved if examined only through a narrow lens. Enter interdisciplinarity. It is now widely accepted that drawing on varied expertise and perspectives is the only way we can understand and tackle many of the most challenging issues we face, as individuals and as a species. So, there is a growing movement towards more cross-disciplinary working in higher education, but it faces challenges. Interdisciplinarity requires a shift of mindset in an academy built upon clear disciplinary distinctions, and it must compete for space in already overcrowded curricula. We speak to two leading scholars in interdisciplinary research and teaching to find out why it is so important and how they are encouraging more academics and students to break out of traditional academic silos. Gabriele Bammer is a professor of integration and implementation sciences (i2S) at the Australian National University. She is the author of several books, including ‘Disciplining Interdisciplinarity', and is the inaugural president of the Global Alliance for Inter- and Transdisciplinarity. To support progress in interdisciplinarity around the world, she runs the Integration and Implementation Insights blog and repository of theory, methods, and tools underpinning i2S. Gabriele has held visiting appointments at Harvard University's John F. Kennedy School of Government, the National Socio-Environmental Synthesis Center at the University of Maryland, and the Institute for Advanced Sustainability Studies in Potsdam, Germany. Kate Crawford is an international scholar of the social implications of artificial intelligence who has advised policymakers at the United Nations, the White House, and the European Parliament on AI, and currently leads the Knowing Machines Project, an international research collaboration that investigates the foundations of machine learning. She is a research professor at USC Annenberg in Los Angeles, a senior principal researcher at Microsoft Research in New York, an honorary professor at the University of Sydney, and the inaugural visiting chair for AI and Justice at the École Normale Supérieure in Paris. Her award-winning book, Atlas of AI, reveals the extractive nature of this technology, while her creative collaborations, such as Anatomy of an AI System with Vladan Joler and Excavating AI with Trevor Paglen, explore the complex processes behind each human-AI interaction, showing the material and human costs. Her latest exhibition, Calculating Empires: A Genealogy of Technology and Power 1500-2025, opened in Milan in November 2023 and won the Grand Prize of the European Commission for art and technology. More advice and insight can be found in our latest Campus spotlight guide: A focus on interdisciplinarity in teaching.
For this fourth episode of WAC Morning, Diane Drubay reviews the major Web3 news affecting museums and cultural institutions. The WAC (Web3 for the Arts and Culture) program continues its mission of helping museums adopt blockchain, immersive, and artificial intelligence technologies. For this new season, two major American institutions join the initiative: the Museum of Art and Light and the Toledo Museum of Art, which are notably exploring the use of NFTs to engage their audiences. The episode also addresses the growing place of artificial intelligence in art, against the backdrop of the controversy around the AI sale organized by Christie's. Some artists denounce the use of models trained without respect for copyright, while others defend the approach of a controlled AI fed with proprietary datasets. This question is part of a broader debate, at a moment when the Paris AI Summit was showcasing works by digital artists on the giant screens of the Grand Palais. Another notable project is the Metropolitan Museum of Art's launch of Art Links, an educational mobile game whose goal is to explore the museum's collection through associations of works and concepts, while rewarding players with NFTs. This initiative illustrates how museums are seeking to reach a younger, connected audience while showcasing their often little-known permanent collections. Web3 also continues to gain ground through museum acquisitions. The Francisco Carolinum in Linz, a pioneer in collecting digital works on the blockchain, recently added several NFTs to its collections, including creations by Auriea Harvey, Too Much Lag, and Andrea Chiampo. For its part, the Museum of the Moving Image in New York has formalized the entry of a set of works by Auriea Harvey into its permanent collection. Finally, two upcoming exhibitions illustrate institutions' growing interest in digital art. In Lyon, the MAC will present Echoes of the Past, Premises of the Future in March, an exploration of nature sublimated through the digital. In Paris, the Jeu de Paume will host Le Monde selon l'IA in April, a major exhibition on generative and analytical photography, with artists such as Trevor Paglen, Kate Crawford, and Refik Anadol. Key quote of the episode: "Just because a work doesn't sell immediately doesn't mean it won't end up in a prestigious museum collection." – Diane Drubay. To go further: * WAC Lab program * The Metropolitan Museum's Art Links game * Exhibition at the MAC Lyon: mac-lyon.com * Exhibition at the Jeu de Paume: jeudepaume.org * RuneArt's Twitter account * Video replay of the episode here. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.nftmorning.com
Full episode released to subscribers: 20 Jun 2024 | Subscribe --> https://newmodels.io _ Hallucinating Sense in the Era of Infinity Content Written & read by Caroline Busta Document Journal SS24 https://www.documentjournal.com/2024/05/technical-images-film01-angelicism-art-showtime-true-detective-shein/ What if, in a time of infinity content, a meta-reading of the shape and feel of content has become a survival skill? What if we thought about generative AI more as an expression of an epochal shift in human communication than a root cause? More than transmitting specific information, “content”—whether a mukbang video or this 3,600-word essay—is now foremost a conductor of “vibes.” "Hallucinating Sense in the Era of Infinity Content" was written in early 2024 by Caroline Busta for Document Journal SS 2024. Editors: Drew Zeiba & Camille Sojit Pechat Audio production: Lil Internet In the training data: Vilém Flusser, Kevin Munger, K Allado McDowell, Holly Herndon & Mat Dryhurst, Jon Rafman, Dean Kissick, Theo Anthony, Lola Jusidman, Film01, Bernard Stiegler, Olivia Kan-Sperling, Chris Blohm, Niklas Bildstein Zaar, Andreas Grill, Anna Uddenberg, Simon Denny, Trevor Paglen, Joshua Citarella, Jak Ritger, Hari Kunzru, Loretta Fahrenholz, Dorian Electra, Michael Franz, Kolja Reichart, Shein, Lil Internet & the NM Discord.
My guest is artist and geographer Trevor Paglen. His groundbreaking work links new technological developments to larger political issues that are often secret or unseen. We discuss the emerging paradigm of "psyops capitalism", the history of the CIA's MKUltra psychological operations, and how magic shapes our perception of reality. Together, these technologies form a new economic order of private surveillance, manipulation, and financial extraction.
For the second time in a row, PARIS PHOTO, the most important art photography fair, will feature a dedicated DIGITAL SECTOR curated by Nina Roehrs. To cover this event, which takes place from November 6th to November 10th, NFT Morning is hosting a special week focused on this Digital Sector. Today, in addition to Trevor Paglen presenting his new body of work, “Evolved Hallucinations,” we had the opportunity to host a deeply insightful conversation between Alejandro Cartagena, founder of Fellowship, and Paglen. We highly encourage you to listen! To learn more, visit:
• Paris Photo Digital Sector
• Evolved Hallucinations on Daily.xyz
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.nftmorning.com
Full ep released to subscribers: 10 Aug 2023 | To join New Models, find us via patreon.com/newmodels & newmodels.substack.com _ Artist Trevor Paglen speaks with New Models about systems of “influence” past and present, pointing to a transition from a world of surveillance capitalism to one, potentially, of PSYOPS capitalism. This conversation follows Paglen's parallel 2023 exhibitions “Hide the Real, Show the False” at n.b.k. Berlin and “You've Just Been F*cked by PSYOPS” at Pace Gallery in New York. For more: Tw/X: @trevorpaglen https://paglen.studio/ Jak Ritger, https://www.punctr.art/unlimited-hangout-the-ufo-story
“We haven't invested this much money into an infrastructure like this really until you go back to the pyramids”—Kate Crawford

Transcript with links to audio and external links. Ground Truths podcasts are on Apple and Spotify. The video interviews are on YouTube.

Eric Topol (00:06): Well, hello, this is Eric Topol with Ground Truths, and I'm really delighted today to welcome Kate Crawford, who we're very lucky to have as an Australian here in the United States. And she's multidimensional, as I've learned, not just a scholar of AI in all its dimensions, but also an artist and a musician. We're going to get into all of this today, so welcome, Kate.

Kate Crawford (00:31): Thank you so much, Eric. It's a pleasure to be here.

Eric Topol (00:34): Well, I knew of your work coming out of the University of Southern California (USC) as a professor there and at Microsoft Research, and I'm only now learning about all these other things that you've been up to, including being recognized by TIME in 2023 as one of the 100 most influential people in AI, and it's really fascinating to see all the things that you've been doing. But I guess I'd start off with one of your recent publications in Nature. It was a World View column, and it was about how generative AI is guzzling water and energy. In it you wrote about how these large AI systems, which are seemingly getting larger every day, need as much energy as entire nations, and their water consumption is rampant. So maybe we can just start off with that. You wrote a really compelling piece expressing concerns, and obviously this is not just the beginning of all the different aspects you've been tackling with AI.

Exponential Growth, Exponential Concerns

Kate Crawford (01:39): Well, we're in a really interesting moment. What I've done as a researcher in this space for a very long time now is really introduce a material analysis of artificial intelligence. We are often told that AI is a very immaterial technology: it's algorithms in the cloud, it's objective mathematics. But in actual fact, it comes with an enormous material infrastructure, and this is something that I took five years to research for my last book, Atlas of AI. It meant going to the mines where lithium and cobalt are being extracted. It meant going into the Amazon fulfillment warehouses to see how humans collaborate with robotic and AI systems. And it also meant looking at the large-scale labs where training data is being gathered and then labeled by crowd workers. And for me, this really changed my thinking. It meant going from being a professor for 15 years focusing on AI from a very traditional perspective, where we write papers sitting in our offices behind desks, to really making these journeys, these field trips, to understand the full extractive infrastructure that is needed to run AI at a planetary scale.

(02:58): So I've been keeping a very close eye on what would change with generative AI, and what we've seen, particularly in the last two years, has been an extraordinary expansion of the three core elements that I write about in Atlas: the extraction of data, of non-renewable resources, and of course of hidden labor. What we've seen, particularly on the resources side, is a gigantic spike in terms of both energy and water, and that's often the story that we don't hear. When we're told about the gigantic, hundred-billion-dollar computing infrastructures now being developed for the next stage of generative AI, we're not made aware that they have an enormous energy and water footprint.
So I've been researching that along with many others who are now increasingly concerned about how we might think about AI more holistically.

Eric Topol (03:52): Well, let's go back to your book, which is an extraordinary book, Atlas of AI, in which you dissected not just the power and politics but also the planetary costs. It has won awards, and it was a few years back, and so much has changed since then. I mean, ChatGPT in late 2022 caught off guard everybody who wasn't into this and didn't know it had been incubating for a number of years, and as you said, these base models are just extraordinary in every parameter you can think about, particularly the computing resources and consumption. So your concerns were of course registered then; have they gone to exponential growth now?

Kate Crawford (04:45): I love the way you put that. I think you're right. I think my concerns have grown exponentially with the models. But I was like everybody else: even though I've been doing this for a long time and had something of a heads up in terms of where we were moving with transformer models, I was also quite taken aback at the extraordinary uptake of ChatGPT back in November 2022. In fact, gosh, it still feels like yesterday, it's been such an extraordinary timescale. But looking at that shift to a hundred million users in two months, and then the rapid competition that was emerging from the major tech companies, I think that really took me by surprise: the degree to which everybody was jumping on the bandwagon, applying some form of large language model to everything and anything. Suddenly the hammer was being applied to every single nail.

(05:42): And in all of that sound and fury and excitement, I think there will be some really useful applications of these tools. But I also think there's a risk that we apply them in spaces where they're really not well suited, that we are not looking at the societal and political risks that come along with these approaches, particularly next token prediction as a way of generating knowledge. And then finally, there's this bigger set of questions around what it is really costing the planet to build these infrastructures, which are truly gargantuan. I mean, as a species, we haven't invested this much money into an infrastructure like this really until you go back to the pyramids; you have to go very far back to find that type of gargantuan spending in terms of capital, in terms of labor, in terms of all the things required to build these kinds of systems. So for me, that's the moment we're in right now, and perhaps here together in 2024 we can take a breath from that extraordinary 18-month period and hopefully be a little more reflective on what we're building and why, and where it will be best used.

Propagation of Biases

Eric Topol (06:57): Yeah. Well, there are so many aspects of this that I'd like to get into with you. I mean, one, of course: as a keen observer and activist in this whole space, you've made, I think, a very clear point about how our culture is mirrored in our AI, that is, our biases. People are of course very quick to blame AI per se, but it seems like a bigger problem than just that. Maybe you could comment: obviously the propagation of biases is a profound concern; where do you see the problem, and how can it be attacked?

Kate Crawford (07:43): Well, it is an enormous problem, and it has been for many years. I was first really interested in this question in the era that was known as the big data era.
So we can think about the mid-2000s, when I really started studying large-scale uses of data in scientific applications, but also in what you'd call social-scientific settings, using things like social media to detect and predict opinion, movement, the way that people were assessing key issues. And time and time again, I saw the same problem, which is that we have this tendency to assume that with scale comes greater accuracy, without looking at the skews in the data sources. Where is that data coming from? What are the potential skews there? Is there a population that's overrepresented compared to others? And so, I began very early on looking at those questions. And then very large-scale data sets started to emerge, like ImageNet, which was perhaps the most influential dataset behind computer vision, released in 2009. It was used widely, and it was freely available.

(09:00): That version was available for over a decade, and no one had really looked inside it. And so, working with Trevor Paglen and others, we analyzed how people were being represented in this data set. And it was really quite extraordinary, because initially people are labeled with terms that might seem relatively unsurprising, like this is a picture of a nurse, or this is a picture of a doctor, or this is a picture of a CEO. But then you look to see who is the archetypical CEO, and it's all pictures of white men; or if it's a basketball player, it's all pictures of black men. And then the labeling became more and more extreme, with terms like: this is an alcoholic, this is a corrupt politician, this is a kleptomaniac, this is a bad person. And then a whole series of labels that are simply not repeatable on your podcast.

(09:54): So in finding this, we were absolutely horrified. And to know that so many AI models had trained on this as a way of doing visual recognition was so concerning, because of course very few people had even traced who was using this dataset. So trying to reverse engineer where these really problematic assumptions were being built in, hardcoded into how AI models see and interpret the world, was a giant unknown, and it remains quite problematic to this day. We did a recent study that just came out a couple of months ago looking at one of the biggest data sets behind generative AI systems that do text-to-image generation. It's called LAION-5B, which stands for 5 billion: it has 5 billion images and text captions drawn from the internet. And you might think, as you said, that this would just mirror societal biases, but it's actually far weirder than you might imagine.

(10:55): It's not a representative sample even of the internet, because these data sets are built from the ALT tags that accompany images, and who uses ALT tags the most on the internet? Well, it's e-commerce sites, and it's often stock image sites. So what you'll see, and what we discovered in our study, is that the vast majority of images and labels are coming from sites like Shopify and Pinterest, these kinds of shopping, aspirational collection sites. And that is a very specific way of seeing the world, so it's by no means even a perfect mirror. It's a skewed mirror in multiple ways.
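A minimal sketch of the kind of provenance audit described above, assuming a web-scraped corpus of (image URL, ALT-text) records like those in LAION-style datasets; the field names and sample rows are hypothetical, and this illustrates the general method, not the Knowing Machines code:

from collections import Counter
from urllib.parse import urlparse

# Hypothetical rows standing in for a web-scraped image-text corpus
# (LAION-style: an image URL plus its ALT-text caption).
records = [
    {"url": "https://cdn.shopify.com/s/files/tote.jpg", "alt": "leather tote bag"},
    {"url": "https://i.pinimg.com/originals/decor.jpg", "alt": "farmhouse kitchen decor"},
    {"url": "https://cdn.shopify.com/s/files/mug.jpg", "alt": "ceramic coffee mug"},
    {"url": "https://archive.example.edu/photos/hall.jpg", "alt": "lecture hall, 1974"},
]

def source_domains(rows):
    # Count which web domains supply the captions. A handful of
    # e-commerce hosts dominating this histogram is the "skewed
    # mirror" effect described above: the sample reflects who
    # writes ALT tags, not the world.
    return Counter(urlparse(row["url"]).netloc for row in rows)

def caption_tokens(rows):
    # Count caption words to surface over-represented descriptors.
    return Counter(tok.lower() for row in rows for tok in row["alt"].split())

for domain, n in source_domains(records).most_common():
    print(f"{domain}\t{n}")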
And that's something that we need to think about particularly when we turn to more targeted models that might be working in, say, healthcare or education or even criminal justice, where we see all sorts of problems emerge.

Exploiting Humans for RLHF

Eric Topol (11:51): Well, that's really interesting. I wonder if you could extend that a bit to the human labor side of this. Base models are tweaked and fine-tuned, and one of the ways to do that, of course, is getting people to weigh in. It has been written about quite a bit how the people doing this can be exploited, getting wages that are ridiculously low. I wonder if you could comment on that, because in the ethics of AI, this seems to be one of the many things a lot of people don't realize about reinforcement learning.

Kate Crawford (12:39): Oh, I completely agree. It's quite an extraordinary story. And of course now we have a new category of crowd labor that's called reinforcement learning with human feedback, or RLHF. What was discovered by multiple investigations was that these laborers are in many cases paid less than $2 an hour in very exploitative conditions, looking at results that in many cases are really quite horrifying. They could be accounts of murder, suicide, trauma; this can be visual material, it can be text-based material. And the workers doing this are, again, often contract labor; it's not directly within a tech company, it's contracted out. It's very hidden, and it's very hard to research and find. But these laborers have been experiencing trauma and are now in many cases bringing lawsuits, but also trying to unionize and say, these are not acceptable conditions for people to be working under.

(13:44): So in the case of OpenAI, it was found that Kenyan workers were doing this work for just poverty wages, but it's really across the board. It's so common now that humans are doing the hard work behind the scenes to make these systems appear autonomous. And that's the real trap: we're being told that this is the artificial intelligence, but in actual fact, what Jeff Bezos calls Mechanical Turk is "artificial artificial intelligence," otherwise known as human beings. So that is a very significant layer in how these systems work that is often unacknowledged. And clearly these workers are in many cases muzzled from speaking; they're not allowed to talk about what they do, they can't even tell their families. They're certainly prevented from collective action, which is why we've seen this push towards unionization. And finally, of course, they're not sharing in any of the profits being generated by these extraordinary new systems that are making a very small number of people very wealthy indeed.

Eric Topol (14:51): And do you know if that's improving, or is it still just as bad as has been reported? It's really deeply concerning to see human exploitation; we all know well about sweatshops, but here's another version, and it's really quite distressing.

Kate Crawford (15:09): It really is. And in fact, there have been several people now working to create what are almost fair work guidelines. Oxford has the Fair Work initiative looking specifically at crowd work. They also have a rating system where they rate all of the major technology companies for how well they're treating their crowd laborers.
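For readers unfamiliar with the mechanics behind this labor: each ranking a crowd worker produces becomes one training pair for a reward model. A minimal sketch of the standard pairwise (Bradley-Terry) objective used in RLHF reward modeling follows; this is a textbook formulation with illustrative numbers, not any particular company's implementation:

import math

def reward_model_loss(r_chosen: float, r_rejected: float) -> float:
    # Standard pairwise objective for RLHF reward modeling: the loss
    # shrinks as the model scores the human-preferred response above
    # the rejected one.
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# One worker judgment = one (chosen, rejected) pair of model outputs.
print(reward_model_loss(2.1, 0.3))  # preferred response scored higher: small loss
print(reward_model_loss(0.3, 2.1))  # preference violated: large loss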
And I have to say the numbers in those ratings haven't been looking good in the last 12 months, so I would love to see much more improvement there. We are also starting to see legislation tabled specifically on this topic. In fact, Germany was one of the most recent countries to start exploring how it could create strong legislative backing to ensure fair labor conditions. Chile was actually one of the first to legislate in this space, but you can imagine it's very difficult to do, because it's a system that operates under the radar through multiple contracted chains. Even some of the people within tech companies will tell me it's really hard to know if they're working with a company that's doing this in the right way and paying people well. But frankly, I'd like to see far greater scrutiny; otherwise, as you say, we're building on a system that looks like AI sweatshops.

Eric Topol (16:24): Yeah, no, I think people just have this illusion that these machines are doing everything by themselves, and that couldn't be further from the truth, especially when you're trying to take it to the next level. And there's only so much human content you can scrape from the internet; obviously it needs additional input to get to that more refined performance. Now, besides your writing and being much of a conscience for AI, you're also a builder. I mean, I first got to know some of your efforts when you started the AI Now Institute. Maybe you can tell us a bit about that. Now you're onto the Knowing Machines Project, and I don't know how many other projects you're working on, so maybe you can tell us what it's like not just to be a keen observer, but also one who actually gets initiatives going.

Kate Crawford (17:22): Well, I think it's incredibly important that we start to build interdisciplinary coalitions of researchers, and sometimes even beyond the academic field, which is where I initially trained in this space, and really think about how we involve journalists, how we involve filmmakers, how we involve people who will look at these issues in really different ways and tell these stories more widely. Because clearly this really powerful shift that we're making as a society towards using AI in all sorts of domains is also a public issue. It's a democratic issue, and it's an issue where we should all be able to really see into how these systems are working and have a say in how they'll be impacting our lives.
So one of the things I've done is create research groups that are interdisciplinary, starting at Microsoft Research as one of the co-founders of FATE, a group that stands for fairness, accountability, transparency, and ethics; then the AI Now Institute, which was originally at NYU; and now Knowing Machines, which is an international group that I've been really delighted to build, rather than purely focusing on those in the US, because of course these systems are inherently transnational and will be affecting global populations.

(18:42): So we really need to think about how you bring people from very different perspectives, with different training, to ask how these systems are being built, who is benefiting, and who might be harmed, and how we can address those issues now in order to actually prevent some of those harms and the greatest risks that I see as possible with this enormous turn to artificial intelligence everywhere.

Eric Topol (19:07): Yeah, and it's interesting how over the years you've been a key advisor, whether to the White House, the UN, or the European Parliament. And I'm curious about your experience, because I didn't know much about the Paris ENS. Can you tell us about that? You were Visiting Chair for AI and Justice at the École Normale Supérieure (ENS), I don't know if I pronounced that right, my French is horrible, but this sounds like something really interesting.

Kate Crawford (19:42): Well, it was really fascinating, because this was the first time that ENS, which is really one of the top research institutions in Europe, had turned to this focus of how we contend with artificial intelligence, not just as a technical question, but as a profound question of justice, of society, of ethics. And so, I was invited to be the first visiting chair, but tragically this corresponded with the start of the pandemic in 2020. It ended up being a two-year virtual professorship, which is really a tragedy when you're thinking about spending time in Paris, to be spending it on Zoom; it's not quite the same thing. But I had the great fortune of using that time to assemble a group of scholars around the world who were looking at these questions from very different disciplines. Some were historians of science, others were sociologists, some were philosophers, some were machine learners.

(20:39): And we essentially assembled this group to think through some of the leading challenges in terms of the potential and current social impacts of these systems. We just recently published that through the academies of science and engineering, and it's been almost like a template for thinking about the core domains that need more research. And interestingly, I think we're now at that moment where we can say we have to look in a much more granular fashion, beyond the hype cycles, beyond the enormous potential upside that we're always hearing about, and ask: okay, how do these systems actually work now? What kinds of questions can we bring into the research space so that we're really connecting the ideas that come traditionally from the social sciences and the humanistic disciplines into the world of machine learning and AI design?
That's where I see the enormous upside: we can no longer stay in these very rigorously patrolled silos, and we can really use that interdisciplinary awareness to build systems differently and, hopefully, more sustainably as well.

Is Working At Microsoft A Conflict?

Eric Topol (21:55): Yeah, no, that's what I especially like about your work: you're not a doomsday person or force, you're always just trying to make it better. But that gets me to a really interesting question, because you are a senior principal researcher at Microsoft, and Microsoft might not like some of the things you're advocating. How does that potential conflict work out?

Kate Crawford (22:23): It's interesting. I mean, people often ask me, am I a technology optimist or a technology pessimist? And I always say I'm a technology realist, and we're looking at these systems being used. I think we are not benefited by discourses of AI doomerism nor by AI boosterism. We have to assess the realpolitik and the political economies into which these systems flow. So obviously part of the way that I've come to know what I know about how systems are designed and how they work at scale is through being at Microsoft Research, where I'm working alongside extraordinary colleagues, many of whom come from professorial backgrounds and are deep experts in their fields. And we have this opportunity to work together and to look at these questions very early on in the production cycles and in the enormous shifts in the way we use technology.

(23:20): But it is interesting, of course, that at the moment Microsoft is absolutely at the leading edge of this change, and I've always thought that it's incredibly important for researchers and academics who are in industrial spaces to be able to speak freely, to be able to share what they see, and to use that as a way for the industry to, well, hopefully keep itself honest, but also to share between what it knows and what everybody else knows, because there's a giant risk in having those spaces be heavily demarcated and having researchers really be muzzled. I think that's where we see real problems emerge. Of course, one of the great concerns a couple of years ago was when Timnit Gebru and others were fired from Google for speaking openly about the concerns they had about the first generation of large language models. And my hope is that there's been a lesson learned through that really unfortunate set of decisions made at Google: that we need people speaking from the inside about these questions in order to actually make these systems better, as you say, over the medium and long term.

Eric Topol (24:26): Yeah, no, that brings me to the thought of Peter Lee, who I'm sure you know, because he wrote a book about GPT-4 and healthcare and was very candid about its potential, real benefits and liabilities, and he's a very humble kind of guy. He's not one that has any bravado that I know of, so it speaks well to at least another colleague of yours there at Microsoft and their ability to see all the different sides here, not just what we'll talk about in a minute, the arms race both across companies and countries.
But before I get to that, there's this other part of you, and I wonder if there are really two or three of you: you're also a composer of music and art. I looked at your Anatomy of an AI System, which is on exhibit at the Museum of Modern Art (MoMA) in New York, and that in itself is amazing. How do you get into all these other parts? Are these hobbies, or are they part of the main body of your creative work, and where do they fit in?

Kate Crawford (25:40): Eric, didn't I mention the cloning program that I participated in early on, and that there are many Kates, and it's fantastic, we all work together? Yeah, that explains it. Look, it's interesting. Way back as a teenager, I was fascinated with technology. It was the early stages of the web at that moment, and I could see clearly that the internet was going to completely change everything for my generation, in terms of what we would do and the way we would experience the world. And as I was also, at that time, an electronic musician in bands, this was a really fantastic combination of bringing together creative practice with a set of much larger concerns and interests around, at a systems level, how technology and society are co-constituted, how they evolve together and shape each other. And that's really been the map of how I've always worked across my life.

(26:48): And it's interesting, I've always collaborated with artists, and Vladan Joler, who I worked with on Anatomy of an AI System, we actually met at a conference on voice-enabled AI systems. It was really looking at the ethics of whether it would be possible to build an open source, publicly accessible version of, say, Alexa, rather than purely a private model owned by a corporation; could that be done in a more public, open source way? And we asked a different question. We looked at each other and were like, oh, I haven't met you yet, but I can see that there are some problems here. One of them is that it's not just about the data, and it's not just about the technical pipelines; it's about where the components come from. It's about the mining structures needed to make all of these systems. It's about the entire end of life: what happens when we throw these devices out, generally after between three and four years of use, and how they go into these giant e-waste tips.

(27:51): And we basically started looking at this as an enormous sort of life and death of a single AI system, which for us started out by drawing these things on large pieces of butcher's paper, which just expanded and expanded until we had this enormous systems-level analysis of what it takes just to ask Alexa what the weather is today. And in doing that, it taught me a couple of things. One, that people really want to understand all of the things that go into making an AI system work. This piece has had a very long life; it's been in over a hundred museums around the world, and it's traveled further than I have. But it's also very much about that broader political economy: AI systems aren't neutral, they don't just exist to serve us.
They are often fed into corporate structures that are using them to generate profits, which means they're used in very particular ways, and there are these externalities in how they're produced that linger in our environments, that have really quite detrimental impacts on systems of labor and on how people are recompensed, and a whole range of relationships to how data is seen and used, as though it were a natural resource that doesn't actually come from people's lives, that doesn't come with risks attached to it.

(29:13): So that project was really quite profound for me. We've continued to do these kinds of, I would call them research-art projects, and we just released a new one called Calculating Empires, which looks at a 500-year history of technology and power, looking specifically at how empires over time have used new technologies to centralize their power and expand and grow, which of course is part of what we're seeing at the moment in the empires of AI.

Eric Topol (29:43): And what about the music side?

Kate Crawford (29:45): Well, I have to say I've been a little bit slack on the music side. Things have been busy in AI, Eric; I have to say it's kept me away from the music studio, but I always intend to get back there. Fortunately, I have a kid who's very musical, and he's always luring me away from my desk and my research saying, let's write some music. And so, he'll keep me honest.

Geopolitics and the Arms Races

Eric Topol (30:06): Well, I think it's striking that you have this blend of the humanities while being so deep into trying to understand and improve our approaches in technology. I don't know too many techies who have these different dimensions, so that's impressive. Now let's get back to the arms race. You were just talking about tracing history over hundreds of years and empires, but right now we have a little problem. We have the big tech titans going after each other on a daily basis, and of course you know the group very well. And then you have China and the US vying to be the dominant force, with problems around China accessing NVIDIA chips, and Taiwan sitting there in a potentially very dangerous position, not just for Taiwan, but also for the US. I wonder if you could just give us your sense of the tensions here. They're US-based as well, of course, because some of the major forces are companies, but they're also global. So we have a lot of stuff in the background that people don't like to think about, but it's actually happening right now.

Kate Crawford (31:35): I think it's one of the most important things that we can focus on, in fact. Again, this is why I think a materialist analysis of artificial intelligence is so important, because not only does it force you to look at the raw components, where the energy comes from, where the water comes from, but it means you're looking at where the chipsets come from. And you can see that in many cases there are these infrastructural choke points where we are highly dependent on specific components that sit within geopolitical flashpoints. Taiwan is really the exemplar of this sort of choke point at the moment. And again, several companies are trying to address this by spinning up new factories to build these components, but this takes a lot of time and an enormous amount of resources yet again.
So what we're seeing is, I think, a very difficult moment in the geopolitics of artificial intelligence.

(32:31): What we've had, certainly for the last decade, has been almost a geopolitical duopoly. We've had the US and China not only having enormous power and influence in this space, but also goading each other into producing the most extreme forms of both data-extractive and surveillance technologies. And unfortunately, this is just as true in the United States: I commonly hear this in rooms in DC, where advisors will say, well, having any type of guardrails or ethical considerations for our AI systems is a problem if it means that China's going to do it anyway. And that creates this race-to-the-bottom dynamic of doing as much of whatever you can do, regardless of the ethical and in some cases legal problems that will create. I think that's been the dynamic we've seen for some time. And of course, over the last 18 months to two years, we've seen that really extraordinary AI war happening internally in the United States, where again, this race dynamic unfortunately creates the tendency to go as fast as possible without thinking about potential downsides.

(33:53): And I think we're seeing the legacy of that right now. Of course, a lot of the people designing these systems are now starting to say, look, being first is great, but we don't want to be in a situation, as we saw recently with Google's Gemini, where you have to pull an entire model off the shelves and say, this is not ready, we actually have to remove it and start again. So this is the result, I think, of that high-pressure, high-speed dynamic that we've been seeing both inside the US and between the US and China. And what that does to the rest of the world is create these kinds of client states, where we've got the EU trying to say, all right, well, we'll export a regulatory model if we're not going to be treated as an equivalent player here, and then so many other countries that are just seen as spaces from which to extract low-paid labor or the mineralogical layer. So that is the big problem I see: that dynamic has only intensified in recent years.

A.I. and Medicine

Eric Topol (34:54): Yeah, I know, it's really another level of concern, and it seems like it could be pretty volatile if, for example, US-China relations take another dive and the tensions there go to levels that haven't been seen so far. I guess the other thing is, there's so much in this space that is controversial and unsettled, and so much excitement. I mean, just yesterday, for example, was the first AI randomized trial to show that you could save lives. When I wrote that up, it was about the four other studies that showed how it wasn't working, different studies of course, but there's so much excitement at the same time as there are deep concerns. You've been a master at articulating these deep concerns. What have we missed in our discussion today? I mean, we've covered a lot of ground, but what other things do you see that should be mentioned?

Kate Crawford (36:04): Well, one of the things that I've loved in following your work, Eric, is that you very carefully walk that line between allowing the excitement when we see really wonderful studies come out that say, look, there's great potential here, and articulating concerns where you see them.
So I'd love to take this opportunity to ask you a question: what's exciting you about the way this new generation of AI is being used in the medical context, and what are the biggest concerns you have there?

Eric Topol (36:35): Yeah, and it's interesting, because the biggest advance so far in research and medicine was the study yesterday using deep learning, without any transformer or large language model effort. And that's where that multiplicative opportunity or potential is still very iffy; it's wobbly. I mean, it needs much more refinement than where we are right now. It's exciting because it is multimodal and it brings in the ability to integrate all the layers of a human being, to understand our uniqueness, and then do much better in terms of, well, I have a piece coming out soon in Science about medical forecasting and how we could really get to prevention of conditions in people at high risk. For example, today the US Preventive Services Task Force said that all women age 40 should have mammograms. Forty.

Kate Crawford (37:30): I saw that.

Eric Topol (37:30): Yeah, and this is just crazy Looney Tunes, because here we have the potential to know pretty precisely who are those 12%, only 12% of women, who would ever get breast cancer in their lifetime. Why should we put the other 88% through all this, no less the fact that there are some women even younger than 40 with significantly high risk who are not picked up? But I do think that eventually, when we get these large language models to actualize their potential, we'll do really great forecasting, and we'll be able to prevent or forestall cancer, Alzheimer's, and so many other things. It's quite exciting, but these are the earliest days; we're not even at first base yet, though I think I can see our way to get there eventually. And it's interesting, because in the discussion I had previously with Geoffrey Hinton, he sees the health and medical space as the only really safe space; he thinks most everything else has more concerns about the downsides. It's the sweet spot, as he called it. I know that's not particularly an area you're into, but I wonder if you share the excitement that our health could be improved in the future with AI.

Kate Crawford (38:52): Well, I think it's a space of enormous potential, but again, enormous risk, for the same reasons we discussed earlier: we have to look at the training data and where it's coming from. Do we have truly representative sources of data? This has been a consistent problem, certainly for the last hundred years and longer. When we look at whose data is being collected among medical patients, are we seeing skews? That has created all sorts of problems, particularly in the last 50 years, in terms of misdiagnosing women and people of color, and missing or not taking seriously the health complaints of people who are already seen as marginalized populations, thus further skewing the data that is then used to train AI models. So this is something we have to take very seriously, and I had the great fortune of being invited by Francis Collins to work with the NIH on their AI advisory board.

(39:50): They convened that board to look at exactly these questions: how this moment in AI can be harnessed in such a way that we think about the data layer, the quality of data, and how we train models.
And it was a really fascinating, year-long discussion, because in the room we had people who were pure technologists, who just wanted as much data as possible: give us all the data and we'll figure it out later. Then there were people who had been part of the Human Genome Project and had worked with Francis on the legal, ethical, and social questions, which he had really centered in that project very early on. And they said, no, we have to learn these lessons. We have to learn that data comes from somewhere, that it's not divorced from context, and we have to think about who's being represented there and also who's not being represented, because that will be intensified in any model trained on that data.

Humans and Automation Bias

(40:48): And then also thinking about what would happen if those models were held only by a few companies who can profit from them, rather than being more publicly and widely shared. These were the sorts of conversations that I think are at the absolute forefront of how we're going to navigate this moment. But if we get that right, if we center those questions, then I think we have far greater potential here than we might imagine. I'm also really cognizant of the fact, though, that even if you have a perfect AI model, you are always going to have imperfect people applying it. I'm sure you saw the study that came out in JAMA back in December last year, looking at how even slightly biased AI models can worsen human medical diagnosis. I don't know if you saw this study, but I thought it was really extraordinary.

(41:38): It involved about 450 doctors and physician assistants, who were shown a handful of cases of patients with acute respiratory failure and needed to come up with a diagnosis, while getting suggestions from an AI model. One model was trained very carefully on highly accurate data, and the other was a fairly shoddy, shall we say, AI model trained on quite biased data. And what was interesting is that when the clinicians were working with the very well-trained AI model, they actually produced better diagnoses across the board for the cases they were looking at; I think their accuracy went up by almost 4.5 percentage points. But when they were working with the less accurate model, their accuracy actually dropped well below their usual diagnostic baseline, something like almost 12 percentage points below their usual diagnostic quality. And so, this really makes me think of a core problem that social scientists have studied for 40 years, called automation bias: when a technical system gives a recommendation, the tendency, even among experts, is to believe it and to discard our own knowledge, our own predictions, our own sense.

(42:58): It's been tested with fighter pilots, it's been tested with doctors, it's been tested with judges, and it's the same phenomenon across the board. So one of the things we're going to need to do collectively, but particularly in the space of medicine and healthcare, is retain that skepticism, retain the ability to ask where a recommendation from an AI system came from and whether to trust it. What was it trained on? Where did the data come from? What might those gaps be?
Because we're going to need that skepticism if we're going to get through, as you say, this early, stage-one period, where in many cases these models just haven't had a lot of testing yet and people are going to tend to believe them out of the box.

The Large Language Model Copyright Issue

Eric Topol (43:45): No, it's so true. And one of the key points is that almost every study published on large language models in medicine is contrived: they're using patient actors or case studies, not the real world. And that's where you really have to learn, as you know; that's a much more complex and messy world than the in silico world, of course. Now, before wrapping up, one controversial thing we haven't yet hit on is the fact that in order for these base models to get trained, they basically ingest all human content. So they've ingested everything you've ever written, your books, your articles, my books, my articles, and you have the likes of the New York Times suing OpenAI, and soon AI is going to run out of human content and just use synthetic content, I guess. But what's your sense about this? Do you feel that it's trespassing, or is it another example of exploiting content and people, or is this really what has to be done to make all this work?

Kate Crawford (44:59): Well, isn't it a fascinating moment to see this mass grabbing of data, everything that is possibly extractable. I actually just recently published an article in Grey Room with the legal scholar Jason Schultz, looking at how this is producing a crisis in copyright law, because in many ways copyright law just cannot contend with generative AI in particular. All of the ways in which copyright law, and intellectual property more broadly, has been understood have been premised on human ideas of providing an incentive, and thus a limited-time monopoly, to inspire people to create more things. Well, this doesn't apply to algorithms; they don't respond to incentives in this way. And it's a longstanding tradition in copyright that we do not grant copyright to non-human authors. You might remember the very famous monkey selfie case, where a monkey had stepped on a camera and triggered a photograph of itself, and the question was whether the image could actually be copyrighted by the monkey.

(46:12): Absolutely not, is what the courts decided. And the same has now happened, of course, for all generative AI systems. So right now, everything that you produce, be it in GPT or in Midjourney or in Stable Diffusion, you name it, does not have copyright protections. We're in the biggest experiment of production after copyright in world history, and I don't think it's going to last very long. To be clear, I think we're going to start to see some real shifts, really in the next 6 to 12 months. But it has been this moment of seeing a gigantic gap in what our legal structures can do; they just haven't been able to contend with this moment. The same thing is true, I think, of ingestion, of this capturing of human content without consent.
Clearly, many artists, many writers, many publishing houses like the New York Times are very concerned about this, but the difficulty they're presented with is the idea of fair use: that you can collect large amounts of data if you are doing something with it that is sufficiently transformative.

(47:17):
I'm really interested in the question of whether or not this does constitute sufficiently transformative use. Certainly, if you looked at the way large language models behaved a year ago, you could really prompt them into sharing their training data, spitting out entire New York Times articles or entire book chapters. That is no longer the case; all of the major companies building these systems have really safeguarded against that now. But nonetheless, you have this question of whether we should be moving towards a system based on licensing, where we're really asking people if we can use their data and paying them a license fee. You can see how that could absolutely work and would address a lot of these concerns, but ultimately it will rely on this question of fair use. And I think with the current legal structures that we have and the current case law, that is unlikely to be seen as something that's actionable.

(48:10):
But I expect what we'll see is what really happened in the early twentieth century around the player piano. I'm sure you remember this extraordinary technology: the player piano was one of the first systems that automated the playing of music. You'd have a piano with a perforated roll that, almost like code, had a song or a piece of music imprinted on it, and it could be played in the public square or in a bar or in a saloon without having to pay a single artist. And artists were terrified. They were furious. There were public hearings, there were congressional hearings, and even a Supreme Court case that decided that this was not a copyright infringement, that this was a sufficiently transformative use of a piece of music that it could stand. And in the end, it was actually Congress that acted.

(49:01):
And from that we got the 1909 Copyright Act, and from that we got this idea of royalties, which has been the basis of the music industry itself for a very long time. And now we're facing another moment where I think we have a legislative challenge: how would you actually create a different paradigm for AI, a new licensing system that would reward artists, writers, musicians, all of the people whose work has been ingested into training data for AI, so that they are recognized and, in some ways, recompensed for this massive at-scale extraction?

Eric Topol (49:48):
Wow, this has been an exhilarating conversation, Kate. I've learned so much from you over the years, but especially even just in our chance to talk today. You articulate these problems so well, and I know you're working on solutions to almost everything, and you're so young, you could probably make a difference in the decades ahead. This is great, so I want to thank you not just for the chance to visit today, but for all the work that you've been doing, you and your colleagues, to make AI better, to make it fulfill the great promise that it has. It is so extraordinary, and hopefully it'll deliver on some of the areas where we have big unmet needs. So thanks to you. This has really been fun.

Kate Crawford (50:35):
This has been wonderful. And likewise, Eric, your work has just been a fantastic influence, and I've been delighted to get to know you over the years. Let's see what happens.
It's going to be a wild ride from now to who knows when.

Eric Topol (50:48):
No question, but you'll keep us straight, I know that. Thank you so much.

Kate Crawford (50:52):
Thanks so much, Eric.

*******************************

Your support of subscribing to Ground Truths, and sharing it with your network of friends and colleagues, is much appreciated.

The Ground Truths newsletters and podcasts are all free, open-access, without ads.

Voluntary paid subscriptions all go to support Scripps Research. Many thanks for that—they greatly helped fund our summer internship programs for 2023 and 2024.

Thanks to my producer Jessica Nguyen and to Sinjun Balabanoff for audio and video support at Scripps Research.

Note: you can select preferences to receive emails about newsletters, podcasts, or all of them. I don't want to bother you with an email for content that you're not interested in.

Comments for this post are welcome from all subscribers.

Get full access to Ground Truths at erictopol.substack.com/subscribe
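A rough way to build intuition for the automation-bias effect Crawford describes above is to treat each case as a draw whose success probability shifts with the AI condition. Here is a minimal Python sketch; the 73% baseline is an assumed placeholder, and the shifts are only the approximate figures recalled in the conversation, not the study's actual data.

```python
# Illustrative simulation of the effect sizes recalled above; the baseline
# accuracy and shifts are assumptions, not the JAMA study's data.
import numpy as np

rng = np.random.default_rng(42)

BASELINE = 0.73  # hypothetical unaided diagnostic accuracy
SHIFT = {"unaided": 0.0, "accurate AI": +0.045, "biased AI": -0.115}

n_cases = 100_000
for condition, delta in SHIFT.items():
    p_correct = BASELINE + delta
    # Each simulated case is diagnosed correctly with probability p_correct.
    correct = rng.random(n_cases) < p_correct
    print(f"{condition:>12}: {correct.mean():.1%} diagnosed correctly")
```

The point the toy model makes visible is asymmetry: under the figures quoted, a biased assistant costs clinicians more than twice what an accurate one gains them.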
This week on the podcast I speak with Mark Burchick, a filmmaker and artist from Baltimore, MD, who is exploring anomalous studies topics in his artwork and research. He most recently presented a paper, co-written with Diana Walsh Pasulka, about using AI art programs to visualize encounter experiences with angelic beings from the Catholic tradition. He also wrote a recent episode of Kelly Chase's UFO Rabbit Hole podcast discussing the "Black World" of government secrecy and the work of artist and researcher Trevor Paglen. Mark and I discuss his childhood, which was shaped by an intense fear of alien abduction, his strong desire for devotion and connection to God, as well as his certainty that, due to world events and online religious prophecy, he wouldn't live to see his adult years. We talk about sexually coded ecstatic religious experience, his work around Marian apparitions, AI, and rose petals as acheiropoieta, or religious objects not made by human hands. We even take a little detour into Gnosticism (at my insistence) and Mark shares an experience of sending himself a message from the future. This is a good one, folks. Please enjoy. Links: American Academy of Religion presentation on how AI visualizes the religious sacred: https://www.patreon.com/posts/how-does-ai-art-101132739 Choreographing Shadows paper, co-written with Diana Walsh Pasulka: https://mjburchick.com/choreographing-shadows/ UFO Rabbit Hole Episode: https://uforabbithole.com/podcast/ep-34-the-disclosure-field-guide-pt-2-black-budgets-the-secret-space-program/ Portfolio Website: https://mjburchick.com/ Social Media: https://twitter.com/mjburchick The Experiencer Group My book is out!! Earth: A Love Story exists as a physical object in the world. Deep forever gratitude to those of you who have purchased the book and left reviews on Amazon. Thank you, thank you, thank you. Our beautiful musical soundscapes are provided by Morgan Jenks. You can support his new album on bandcamp, or find out more at morganjenks.com Find me on instagram @robin_lassiter_honeyheart and @earth_a_love_story To join my mailing list or book a 1:1 session with me, visit honeyheart.org If you'd like to support our work, please consider becoming a Patron
How dangerous is metadata? According to the artist and author Trevor Paglen, it can be deadly. Paglen joins host Alice Loxton to shine a light on Skynet – a network of all-seeing satellites – and the ominous AI algorithm that farms metadata and gets to decide who lives and who dies. From SPYSCAPE, the home of secrets. A Cup And Nuzzle production. Series produced by Alex Burnard, Morgan Childs, Claire Crofton, Joe Foley, Frank Palmer, Kellie Redmond and Isabel Sutton. Music by Nick Ryan. Learn more about your ad choices. Visit megaphone.fm/adchoices
If you look, you'll see. Most people don't look. Produced by Stephen Cassidy Jones and Liza Yeager.Edited by Mitchell Johnson, with editorial oversight from Daniel Denvir.Featuring Mark Pilkington, Valerie Kuletz, and Trevor Paglen.Support The Dig at Patreon.com/TheDigBuy Blood Red Lines at haymarketbooks.orgSubscribe to Jacobin at bit.ly/digjacobin Hosted on Acast. See acast.com/privacy for more information.
We live in exciting times for ufology. We’re also coming to realise that many of the modern-day myths around visiting spacecraft were in fact planted by our own intelligence services. But how to tell fact from fiction? Trevor Paglen is an American artist known for investigating the invisible through the visible. His practice has taken […]
Episode Summary: In this captivating episode, we journey with Tega Brain from her roots as an environmental engineer to her evolution into an art-tech visionary. Exploring the digital art landscape reshaped by AI and machine learning, she draws parallels with influential figures like Ian Cheng, Refik Anadol, and Elon Musk. Her works mirror the transformative power these technologies wield in creating unique artistic experiences, akin to what Trevor Paglen and Agnes Denes are known for. Amidst our tech-driven world, Tega challenges the status quo, intertwining creativity with environmental sustainability and navigating ethical concerns similar to scholars like Kate Crawford, Timnit Gebru, and Joy Buolamwini. This episode is a must for anyone keen on the intersection of technology, art, and environmental sustainability. In what ways are artificial intelligence and machine learning transforming the digital art landscape, and what opportunities do these technologies present for artists? How do you address ethical concerns when incorporating AI and other emerging technologies into your art practice? The Speaker: Tega Brain is an Australian-born artist, environmental engineer, and educator whose work intersects art, technology, and ecology. Her projects often address environmental issues and involve creating experimental systems, installations, and software. She has exhibited her work at various venues, including the Victoria and Albert Museum, the Whitney Museum of American Art, and the Haus der Kulturen der Welt. In addition to her art practice, Tega Brain is an Assistant Professor of Integrated Digital Media at New York University's Tandon School of Engineering. Her research and teaching focus on the creative and critical applications of technology, with an emphasis on sustainability and environmental concerns. Follow Tega Brain's journey. Hosts: Farah Piriye & Elizabeth Zhivkova, ZEITGEIST19 Foundation. For sponsorship enquiries, comments, ideas and collaborations, email us at info@zeitgeist19.com. Follow us on Instagram. Help us to continue our mission and to develop our podcast: Donate
Ever have the feeling that you're being watched? You might not be wrong... Directed by Yaara Bou Melham, the new documentary UNSEEN SKIES follows Trevor Paglen as he travels through the desolate Nevada desert attempting to launch a satellite into orbit. Wrestling with themes of government surveillance and our lack of privacy, the film attempts to pull back the layers on the nature of language and how we identify ourselves and others. In this 1on1, we speak to Yaara Bou Melham about giving up our freedom online, 'making meaning' and what defines the 'image of man'.
For episode 15, we have a quick take on artist Trevor Paglen’s recent thoughts on truthfulness of NASA images from …
To start off season 3 of PhotoWork with Sasha Wolf, Sasha and the photographer, writer, curator, and educator Aaron Schuman discuss Aaron's monographs, including SLANT and his latest, SONATA, both published by MACK. Aaron reveals how he was approached to create a Masters program at the University of the West of England and how the idea of research is more than just a singular conscious effort to pursue an idea; it is a lifelong endeavor that permeates your work. https://www.aaronschuman.com/index.html AARON SCHUMAN is an American photographer, writer, curator and educator based in the United Kingdom. He received a BFA in Photography and History of Art from New York University's Tisch School of the Arts in 1999, and an MA in Humanities and Cultural Studies from the University of London: London Consortium at Birkbeck College in 2003. Schuman is the author of several critically acclaimed monographs: SONATA, published by MACK in the summer of 2022; SLANT, published by MACK, which was cited as one of 2019's "Best Photobooks" by numerous photographers, critics and publications, including The Guardian, Internazionale, American Suburb X, Photoeye (Jason Fulford / Rebecca Norris Webb), Photobookstore (Vanessa Winship / Mark Power / Robin Titchener), and Deadbeat Club Press (Raymond Meeks / Brad Feuerhelm); and FOLK, published by NB Books, which was cited as one of 2016's "Best Photobooks" by Alec Soth (Photoeye), Sean O'Hagan (The Guardian), and Jason Fulford (TIME), and was long-listed for the Deutsche Börse Photography Prize 2017. His work has been exhibited internationally, at institutions such as Tate Modern, Hauser & Wirth, Christie's London, Christie's New York, the Institute of Contemporary Arts London, the Ethnographic Museum Krakow, Format Festival and elsewhere, and is held in many public and private collections, including the Museum of Fine Arts Houston, The British Library, the National Art Library, and the Museum of Modern Art Library. In addition to his own photographic work, Schuman has contributed essays, interviews, texts and photographs to many other books and monographs, including Matteo Giovanni: I Had to Shed My Skin (Artphilein, 2022), OK No Response (Twin Palms, 2021), Keeper of the Hearth: Picturing Roland Barthes' Unseen Photograph (Schilt, 2021), Photo No-Nos: Meditations on What Not to Shoot (Aperture, 2021), Amak Mahmoodian: Zanjir (RRB, 2019), Aperture Conversations: 1985 to the Present (Aperture, 2018), Another Kind of Life: Photography on the Margins (Prestel / Barbican, 2018), George Rodger: Nuba & Latuka, The Colour Photographs (Prestel, 2017), Alec Soth: Gathered Leaves (MACK, 2015), Visions Anew: The Lens and Screen Arts (University of California Press, 2015), Storyteller: The Photographs of Duane Michals (Prestel / Carnegie Museum, 2014) and The Photographer's Playbook (Aperture, 2014), amongst many others. He has also written and photographed for a wide variety of journals, magazines and publications, such as Aperture, Foam, ArtReview, Frieze, Magnum Online, Hotshoe, The British Journal of Photography and more.
Schuman has curated several major international festivals and exhibitions, including: In Progress: Laia Abril, Hoda Afshar, Widline Cadet, Adama Jalloh, Alba Zari (Royal Photographic Society, 2021), Indivisible: New American Documents (FOMU Antwerp, 2016), In Appropriation (Houston Center of Photography, 2012), Other I: Alec Soth, WassinkLundgren, Viviane Sassen (Hotshoe London, 2011), and Whatever Was Splendid: New American Photographs (FotoFest, 2010). In 2018, he served as co-Curator of JaipurPhoto Festival 2018. In 2014, he served as Guest Curator of Krakow Photomonth. Entitled Re:Search, its main programme featured solo exhibitions by Taryn Simon, Trevor Paglen, David Campany / Walker Evans, Clare Strand, Forensic Architecture, Jason Fulford and more. Schuman was the founder and editor of the online photography journal SeeSaw Magazine (2004-2014). He is Associate Professor in Photography and Visual Culture, and the founder and Programme Leader of the MA/Masters in Photography programme, at the University of the West of England (UWE Bristol). Find out more at https://photowork.pinecast.co
MacArthur Genius Grant winner, artist, and experimental geographer Trevor Paglen and host Dylan Thuras discuss documenting government surveillance, AI, and place.
Trevor Paglen is an artist whose work sits on the border between theory and investigative journalism. He tackles the most current technology-related topics: so-called artificial intelligence, the transformation of photography in the digital age, and the dangers of automated surveillance, whether by the state or by technology companies. In our conversation, Edward Snowden, trucks, and systems for harvesting personal data all come together. You can listen to all episodes of the Otevřené hlavy podcast in the mujRozhlas mobile app for Android and iOS or on the web at mujRozhlas.cz.
This week, Shane Harris speaks to artist Trevor Paglen, who explores themes of surveillance, security, and secrecy. Shane first got to know Trevor's work through his delightful and spooky book I Could Tell You But Then You Would Have to Be Destroyed By Me: Emblems from the Pentagon's Black World. It's a collection of patches and insignia used by secretive military units and classified programs. They're interesting works of art in their own right. But they also use images and numbers to communicate a hidden meaning. Trevor has spent much of his time decoding those messages and turning them into visual art. Trevor has turned secret code names into visual installations, tracked and photographed spy satellites as they crossed the night sky, and taken rare photographs of intelligence agency complexes and military installations. He told Shane about his career, as well as his days as a punk musician and his work in journalism. Chatter is a production of Lawfare and Goat Rodeo. This episode was produced and edited by Cara Shillenn of Goat Rodeo with engineering assistance from Ian Enright.Podcast theme by David Priess, featuring music created using Groovepad.See acast.com/privacy for privacy and opt-out information. Trevor Paglen's website: https://paglen.studio/ I Could Tell You But Then You Would Have to Be Destroyed By Me: https://www.amazon.com/Could-Tell-Then-Would-Destroyed/dp/193555414X From the Archives of Peter Merlin, Aviation Archaeologist https://www.amazon.com/Trevor-Paglen-Archives-Aviation-Archaelogist/dp/173209862X ImageNet Roulette: https://excavating.ai/ Trevor's photos of intelligence agencies https://theintercept.com/2014/02/10/new-photos-of-nsa-and-others/ Our GDPR privacy policy was updated on August 8, 2022. Visit acast.com/privacy for more information.
Kate Crawford, author of "Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence," is a leading international scholar of the social and political implications of artificial intelligence. This program looks at the intersection of art, activism and artificial intelligence. Series: "UC Berkeley Graduate Lectures" [Humanities] [Science] [Show ID: 37967]
We discussed: artistic discipline in the 21st century, curating photography, contemporary photography, interdisciplinary art, subjectivity, the evolution of photography, beauty in art, the importance of text with your artwork, artist statements, the responsibilities of artists https://archipelagoprojects.com People + Places mentioned: Trevor Paglen - https://www.pacegallery.com/artists/trevor-paglen/ Hito Steyerl - https://www.moma.org/artists/43752 Gerhard Richter - https://www.gerhard-richter.com Audio engineering by Mickey at CushAudio Services Music by Peat Biby Supported in part by: EEA Grants from Iceland, Liechtenstein + Norway – https://eeagrants.org And we appreciate the assistance of our partners in this project: Hunt Kastner – https://huntkastner.com Kunstsentrene i Norge – https://www.kunstsentrene.no
Nato Thompson is a curator and the founder of the Alternative Art School. Before setting up this experimental education project, Nato was the artistic director of Philadelphia Contemporary and a key figure at Creative Time, New York's influential organization focusing on art in public space.

You will hear Nato reflecting on that shift, from working within institutions to setting up one's own. His insights into the inner workings of the art industry are thoroughly thought-provoking. And it's the first time we are talking about NFTs at Ahali! This conversation really shows the many blind spots, or things we tend to overlook, about the status quo.

EPISODE NOTES & LINKS

Nato Thompson is an author, curator, and self-proclaimed "cultural infrastructure builder". www.natothompson.com/about

The Alternative Art School (TAAS) is an affordable learning program run by a stellar faculty offering intimate class sizes. TAAS emphasizes group work, community building, and dynamic modes of socializing and art-making. www.thealternativeartschool.net/how-it-works-1

Philadelphia Contemporary is an independent and free-standing venue that celebrates the abundance of genre-bending, multidisciplinary practices that make up the field of contemporary art. www.philadelphiacontemporary.org

Operating since 1974, Creative Time is an influential public arts organization in New York. creativetime.org/about/

Growing out of a major exhibition that had occurred at Creative Time, Living as Form contains commissioned essays from noted critics and theorists who look at Socially Engaged Art practiced between the years of 1991-2011. mitpress.mit.edu/books/living-form

MASS MoCA is a contemporary art museum located in North Adams, Massachusetts. massmoca.org/about/

Seeing Power: Art and Activism in the 21st Century (2015). www.penguinrandomhouse.com/books/214258/seeing-power-by-nato-thompson/9781612190440/

Culture as Weapon: The Art of Influence in Everyday Life. www.penguinrandomhouse.com/books/545444/culture-as-weapon-by-nato-thompson/

Part of the TAAS faculty, Paul Chan is an artist, writer, and publisher. en.wikipedia.org/wiki/Paul_Chan_(artist)

Trevor Paglen is an artist, geographer, and author whose work critically deals with mass surveillance and data collection. en.wikipedia.org/wiki/Trevor_Paglen

A fierce political figure of our time who operates within (but is not limited to) the field of contemporary art, Tania Bruguera makes work that pivots around issues of power and control; several of her works interrogate and re-present events in Cuban history. en.wikipedia.org/wiki/Tania_Bruguera

Simone Leigh is an artist who reflects on black female subjectivity through her practice. en.wikipedia.org/wiki/Simone_Leigh

Black Lives Matter (BLM) is a decentralized political and social movement protesting against incidents of police brutality and all racially motivated violence against black people. en.wikipedia.org/wiki/Black_Lives_Matter

Star Trek is a science fiction media franchise created by Gene Roddenberry. en.wikipedia.org/wiki/Star_Trek

Janine Antoni is an artist who focuses on process and the transitions between the making and finished product, often portraying feminist ideals. en.wikipedia.org/wiki/Janine_Antoni

Miguel López is an artist, researcher, and writer. www.bakonline.org/person/miguel-a-lopez/

Yael Bartana is an artist whose work focuses on political and feminist themes. en.wikipedia.org/wiki/Yael_Bartana

Motivated largely by political, cultural, and social circumstances, Mel Chin is an artist. en.wikipedia.org/wiki/Mel_Chin

Hito Steyerl is an artist, theoretician, and educator. en.wikipedia.org/wiki/Hito_Steyerl

Marinella Senatore is an artist. en.wikipedia.org/wiki/Marinella_Senatore

Mia Yu is an independent researcher, curator, and educator. portal.cca.edu/events-calendar/curatorial-practice-lecture-mia-yu/

Mario Ybarra Jr. employs his multi-layered artistic practice to explore various components of Mexican-American identity. www.otis.edu/faculty/mario-ybarra-jr

Kathrin Böhm is an artist whose practice focuses on the collective re-production of public space; on the economy as a public realm; and on the everyday as a starting point for culture. Check out Ahali Conversations Episode 13 to get inspired by Kathrin's way of building diverse economies within, out of, and around the field of culture. www.ahali.space/episodes/episode-13-kathrinbohm

J.K. Gibson-Graham is the pen name of Katherine Gibson and the late Julie Graham. As feminist political economists and economic geographers, they wrote extensively about diverse economies, urbanism, alternative communities, and regional economic development. en.wikipedia.org/wiki/J._K._Gibson-Graham

Smashcut is an online learning platform built for real-time, media-based education. www.smashcut.com/about

Pedagogy of the Oppressed, by Paulo Freire, is considered one of the foundational texts of critical pedagogy; it proposes a pedagogy with a new relationship between teacher, student, and society. en.wikipedia.org/wiki/Pedagogy_of_the_Oppressed

Black Mountain College was an experimental college founded in 1933.

John Cage (1912 – 1992) was a composer, music theorist, artist, and philosopher who was a teacher at BMC.

"Merce" Cunningham (1919 – 2009) was a dancer and choreographer who was a teacher at BMC.

Gezi Park Protests occurred in Turkey in 2013 to contest the urban development plan for Istanbul's Taksim Gezi Park. en.wikipedia.org/wiki/Gezi_Park_protests

The Arab Spring was a series of anti-government protests, uprisings, and armed rebellions that spread across much of the Arab world in the early 2010s. en.wikipedia.org/wiki/Arab_Spring

Occupy Wall Street (OWS) was a protest movement against economic inequality and the influence of money in politics that began in New York City's Wall Street financial district in 2011. It gave rise to the wider Occupy movement in the United States and other countries. en.wikipedia.org/wiki/Occupy_Wall_Street

Sotheby's is a multinational corporation headquartered in New York City. It is one of the world's largest brokers of fine and decorative art, jewelry, and collectibles. www.sothebys.com/en/

Christie's is an auction house like Sotheby's, known for its involvement in high-profile private transactions. www.christies.com/en

SAHA is an association that seeks to support contemporary art from Turkey. www.saha.org.tr

Protocinema is a cross-cultural art organization that commissions and presents works and exhibitions of contemporary art. www.protocinema.org/about

Fırat Engin is an artist based in İstanbul and Ankara. firatengin.com/cv

Vahap Avşar is an artist based in New York and İstanbul. vahapavsar.com/bio/

DC hardcore, sometimes referred to in writing as harDCore, is the hardcore punk scene of Washington, D.C. Emerging in late 1979, it is considered one of the first and most influential punk scenes in the United States. en.wikipedia.org/wiki/Washington,_D.C._hardcore

Bad Brains are a rock band formed in Washington, D.C. in 1977. en.wikipedia.org/wiki/Bad_Brains

Minor Threat was a hardcore punk band, formed in 1980 in Washington, D.C. en.wikipedia.org/wiki/Minor_Threat

Fugazi is a post-hardcore band that formed in Washington, D.C. in 1986. en.wikipedia.org/wiki/Fugazi

The Nation of Ulysses was a punk rock band from Washington, D.C., formed in spring 1988. en.wikipedia.org/wiki/The_Nation_of_Ulysses

Minecraft is an influential sandbox video game with a major impact on popular internet culture. en.wikipedia.org/wiki/Minecraft

Red Dead Redemption is a Western-themed action-adventure game. en.wikipedia.org/wiki/Red_Dead_Redemption

Grand Theft Auto (GTA) is a series of action-adventure games. en.wikipedia.org/wiki/Grand_Theft_Auto

The Sandinista National Liberation Front is a socialist political party in Nicaragua. en.wikipedia.org/wiki/Sandinista_National_Liberation_Front

Just google Google. g.co/kgs/2CdUks
Filmmaker and award-winning journalist Yaara Bou Melhem makes their documentary debut with their impressive new film, Unseen Skies. Following artist-activist-writer-musician Trevor Paglen as he makes art out of the plentiful black op sites around the world, Unseen Skies interrogates what it means to be surveilled, what is happening with the wealth of data being collected from us, and more. It's a visually stunning film with a soaring score by Helena Czajka. Unseen Skies plays at the Sydney Film Festival on November 10th, November 13th, and will be available online via SFF On Demand on November 12th. Purchase tickets here.See omnystudio.com/listener for privacy information. Hosted on Acast. See acast.com/privacy for more information.
When we look up into the night sky - what is looking back at us? Trevor Paglen is an artist whose work looks at mass surveillance, data collection and secret military bases. And he's the subject of a new documentary, Unseen Skies, by award-winning journalist Yaara Bou Melhem.
On episode 202 of The Quarantine Tapes, Paul Holdengräber is joined by Nato Thompson. Nato's most recent project is The Alternative Art School, an online arts program launched in 2020. Paul and Nato talk in depth about Nato's history in the art world and this thrilling new endeavor. Nato explains what excites him about the possibilities this online-only school creates and discusses its mission to approach art as a life project. They discuss some of the artists Nato has worked with, from Trevor Paglen and Tracy K. Smith to Werner Herzog. Paul and Nato dig into the issues with existing structures in the art world, from museums to non-profits, and go back and forth about the problems and potential of virtual space. Nato Thompson is an author, curator, and what he describes as a "cultural infrastructure builder." He has worked as Artistic Director at Philadelphia Contemporary and at Creative Time, and as Curator at MASS MoCA. He is the founder of an online global art school, The Alternative Art School, which began after the summer of 2020. Thompson organized major Creative Time projects including The Creative Time Summit (2009–2015), Pedro Reyes' Doomocracy (2016), Kara Walker's A Subtlety (2014), Living as Form (2011), Trevor Paglen's The Last Pictures (2012), Paul Ramírez Jonas's Key to the City (2010), Jeremy Deller's It is What it is (2009, with New Museum curators Laura Hoptman and Amy Mackie), Democracy in America: The National Campaign (2008), and Paul Chan's Waiting for Godot in New Orleans (2007), among others. He has written two books of cultural criticism, Seeing Power: Art and Activism in the 21st Century (2015) and Culture as Weapon: The Art of Influence in Everyday Life (2017).
Trevor Paglen aspires to see the unseen. The artist explores the act of looking through various angles—such as how artificial-intelligence systems have been trained to "see" and categorize the world, or the disquieting sense of being "watched" by a security camera—and creates scenarios that frequently implicate viewers in the experience. At other times, he'll take pictures of places that are typically kept far out of sight, including the rarely seen headquarters of America's National Security Agency, or the Mojave Desert, home to numerous military facilities, prisons, and a former nuclear testing site. Paglen, who has a Ph.D. in geography from the University of California, Berkeley, also thinks about the relationship between space and time, and how the associations a person makes while looking at something—be it an age-old landscape or a satellite in endless orbit around the Earth—are fleeting and constantly changing. By highlighting invisible frameworks that exist in the world, Paglen invites viewers to think about life's inconspicuous, and often unsettling, realities. Paglen, who is 47 and has studios in New York and Berlin, draws on science, technology, and investigative journalism to make his wide-ranging work. In one of his early projects, "Recording Carceral Landscapes" (1995–2004), he wore a concealed microphone and posed as a criminology student to document the interiors of California penitentiaries. For "The Last Pictures" (2012), he collaborated with materials scientists at M.I.T. to devise an ultra-archival disc, micro-etched with a collection of 100 images, and launched it into space on a communications satellite for aliens to find. More recently, his viral digital art project and app "ImageNet Roulette" (2020), which allowed users to upload photos of their faces to see how A.I. might label them, horrified many users with racist, sexist, or overtly stereotypical results, prompting ImageNet, a leading image database, to remove half a million images. Beyond his art practice, Paglen continues his preoccupation with perception. He studies martial arts, surfs, and composes music—activities that require constant, intense awareness. It all stems from a heightened consciousness of, and interest in, the concept of observation that he's carried for nearly his entire life. "We're all trying to learn different ways of seeing," he says. On this episode, Paglen discusses his deep-seated fascination with perception, talking with Spencer about the impacts of surveillance, deserts as sites of secrecy, and the value of trying to perceive forces that seem impossible to see. Show notes:Full transcript on timesensitive.fm@trevorpaglenpaglen.studio04:54: "The Last Pictures" project (2012)19:51: "Orbital Reflector" (2018)29:48: Robert Smithson's "Spiral Jetty" (1970) 42:53: Paglen's thrash group, Noisegate47:15: "Recording Carceral Landscapes" (1995–2004) 1:05:13: "ImageNet Roulette" (2020) 1:05:13: "Bloom" (2020)
According to the philosopher Bernard Stiegler, for hundreds of thousands of years humanity perceived no distinction between the natural world and the world created by technics, because technics evolved slowly, in step with human evolution. But since the First Industrial Revolution, technological change has become much faster than evolutionary and cultural change: a phenomenon Stiegler calls "permanent innovation." This is why the distinction between the natural world and the world modified through technics keeps growing sharper. It is on this loss of perspective that some of the best contemporary artists focus, from Trevor Paglen to Cao Fei, while museums fill up with circus acts like "robot artists" and a mediocre graphic designer like Beeple becomes a celebrity thanks to a file sold for 69 million dollars. This episode touches on Bernard Stiegler, Mauro Corona, Fernando Botero, Igor Mitoraj, Yuk Hui, Martin Heidegger, Parker Ito, Hieronymus Bosch, Pieter Bruegel the Elder, Pablo Picasso, Massimo Bottura, Beeple (Mike Winkelmann), Trevor Paglen, Charles Manson, Hillary Clinton, Julian Assange, Ai-Da, Jordan Wolfson, Milton Glaser, Pinocchio, Random International, Marco Balich, Cao Fei, Nanni Moretti, Ken Russell, Massimo Gramellini, Mauro Calabresi, Peter Christopherson, David Cronenberg, Jeremy Irons, Kevin Spacey, William Burroughs, Netflix, Hal Ashby, Martin Scorsese, and Alvaro Vitali.
Wassan Al-Khudhairi, Chief Curator of the Contemporary Art Museum St. Louis, stopped by to talk with Nancy about the museum and one of the current exhibitions, Stories of Resistance. Stories of Resistance explores artistic forms of resistance from across the world. Through visual narratives, artists amplify and bring into focus the multitude of conditions that ignite and inspire people to resist. The exhibition activates the entire museum space, inside and out, with video, photography, drawing, sculpture, painting, and installation. Presenting narratives from many social, political, and geographical spaces, the artists include: Bani Abidi, Andrea Bowers, Banu Cennetoğlu, Torkwase Dyson, Emily Jacir, Glenn Kaino, Bouchra Khalili, Candice Lin, Jen Liu, Guadalupe Maravilla, Tuấn Andrew Nguyễn, Trevor Paglen, PSA: (Jen Everett, Aida Hasanović, Simiya Sudduth), Wendy Red Star, Dread Scott, Kemang Wa Lehulere, and Wide Awakes (Maryam Parwana, Combo, Otherward). St. Louis serves as an ideal platform for Stories of Resistance. Resistance movements that have arisen here, most especially the rise of Black Lives Matter in response to the police killing of Michael Brown, have incited global actions against racism and injustice. By looking through a local lens, the exhibition draws connections worldwide, revealing profound influences that traverse borders and cultures. With this in mind, Radio Resistance, an integral component of the exhibition, will broadcast conversations between exhibiting artists and artists, activists, scholars, and others with a deep knowledge and experience of St. Louis. Because of radio's legacy as a tool for dissent, it serves as the medium for dialogue between intersecting local and global agents of change. Alongside the exhibition and radio program, a CAM publication will include images of works in the exhibition and writings that further explore and expand on the ideas and themes of Stories of Resistance.

[Image: Stories of Resistance, installation view, CAM. Photo: Dusty Kessler]

Wassan Al-Khudhairi, Chief Curator at CAM
On this inaugural episode, it's a pleasure to be in conversation with Kate Crawford. I've known Kate for quite a few years; I've invited her to be a part of numerous panels and projects throughout that time and am always captivated by her eloquence in laying out emerging challenges in our relationships to technology. She does so in a way that centers artists as part of the solution. It's unsurprising to discover that she also is a musician, with a background in composition and currently in the band Metric Systems, which I'd highly recommend.Kate is a principal researcher at Microsoft Research, the co-founder and director of research at the AI Now Institute at NYU, a visiting professor at the MIT Center for Civic Media, a senior fellow at the Information Law Institute at NYU, and an associate professor in the Journalism and Media Research Centre at the University of New South Wales.She recently published a book called Atlas of AI. This book really opened my mind about the ways in which our current adoption of AI, at a societal level, is really just the latest moment in which humans have been blinded by science, allowing tools to determine our ethics, rather than the other way around.Atlas of AIAI Now InstituteMetric Systems (Kate's band)Trevor Paglen
hi, i'm lulu romano and this is your daily dose with lulu romano //transition// video slides/ pshpsh hello and welcome to the first daily dose with yours truly, lulu romano. another rainy day here in the Hamptons, eastern long island. idk about you but i kinda like this shitty weather, maybe it's just the hopeless romantic in me. TODAY: TJ Maxx Bridgehampton is closed because of flooding but problems are really just opportunities, and if you like pop art check out Pace Gallery in East Hampton featuring David Hockney. There's also a private showing featuring artists such as Trevor Paglen and Roy Lichtenstein. the busiest season in the hamptons is approaching. whether you're downsizing or moving into a bigger place, Life Storage got your back. Talk to Ty and he will hook you up stress free. if you're listening to this, you can also *watch* this video by searching my name lulu romano or simply click the link in the description box, because all the addresses and hours of places mentioned will be there. and if you're *watching* this, hey, how's it going. i'll see you tomorrow. Subscribe at lulusdailydose.substack.com
On episode 186 of The Quarantine Tapes, Paul Holdengräber is joined by Trevor Paglen. Trevor is an artist, and he talks with Paul about surprising himself with the type of work he found himself interested in this year. They discuss his focus on flowers in his most recent project, “Bloom.” Trevor and Paul dig into how images have been changing in recent years and talk about Trevor’s essay, “Invisible Images.” They discuss the proliferation of algorithms and computer vision and talk about NFTs, Walter Benjamin, and the changing meaning of images under the pandemic. Then, Trevor unpacks the worrisome implications of images being separated from human eyes and his concerns over how these systems monitor and extract value from our lives. Trevor Paglen is an artist whose work spans image-making, sculpture, investigative journalism, writing, engineering, and numerous other disciplines. Paglen’s work has had one-person exhibitions at the Smithsonian Museum of American Art, Washington D.C.; Carnegie Museum of Art, Pittsburgh; Fondazione Prada, Milan; the Barbican Centre, London; Vienna Secession, Vienna; and Protocinema Istanbul, and has been included in group exhibitions at the Metropolitan Museum of Art, the San Francisco Museum of Modern Art, the Tate Modern, and numerous other venues. Paglen has launched an artwork into distant orbit around Earth in collaboration with Creative Time and MIT, contributed research and cinematography to the Academy Award-winning film Citizenfour, and created a radioactive public sculpture for the exclusion zone in Fukushima, Japan. Paglen is the author of several books and numerous articles on subjects including experimental geography, artificial intelligence, state secrecy, military symbology, photography, and visuality. Paglen’s work has been profiled in the New York Times, the New Yorker, the Wall Street Journal, Wired, the Financial Times, Art Forum, and Aperture. In 2014, he received the Electronic Frontier Foundation’s Pioneer Award and in 2016, he won the Deutsche Börse Photography Prize. Paglen was named a MacArthur Fellow in 2017. Paglen holds a B.A. from U.C. Berkeley, an MFA from the Art Institute of Chicago, and a Ph.D. in Geography from U.C. Berkeley.
In fall 2019, a new app called ImageNet Roulette was introduced to the world with what seemed like a simple, fun premise: snap a selfie, upload it to a database, and wait a few seconds for machine learning to tell you what type of person you are. Maybe a "teacher," maybe a "pilot," maybe even just a "woman." Or maybe, as the app's creator warned, the labels the system tagged you with would be shockingly racist, misogynistic, or misanthropic. Frequently, the warning turned out to be prescient, and the app immediately went viral thanks to its penchant for slurs and provocative presumptions. Long since decommissioned, ImageNet Roulette was part of a larger initiative undertaken by artist Trevor Paglen and artificial intelligence researcher Kate Crawford to expose the latent biases coded into the massive data sets informing a growing number of A.I. systems. It was only the latest light that Paglen's work had shined onto the dark underbelly of our image-saturated, technology-mediated world. Even beyond his Ph.D. in geography and his MacArthur "Genius" grant, Paglen's resume is unique among his peers on blue-chip gallery rosters. He's photographically infiltrated CIA black sites, scuba-dived through labyrinths of undersea data cables, launched art into space, and collaborated with NSA whistle-blower Edward Snowden, all as a means of making innovative art that brings into focus the all-but-invisible power structures governing contemporary life. On this week's (re-aired) episode of The Art Angle, Paglen joins Andrew Goldstein by phone to discuss his adventurous career.
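For readers curious about the mechanics behind an app like this, here is a minimal, hypothetical sketch of the generic "upload a photo, get a label" pipeline, written in Python against an off-the-shelf torchvision model. To be clear, this is not ImageNet Roulette's code: the standard pretrained weights cover 1,000 object categories and do not include the "person" synsets that produced the app's offensive labels, and "selfie.jpg" is a placeholder filename.

```python
# Generic image-classification pipeline, for illustration only.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.IMAGENET1K_V1
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()  # resize, crop, normalize

img = Image.open("selfie.jpg").convert("RGB")  # placeholder input photo
batch = preprocess(img).unsqueeze(0)           # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top_prob, top_idx = probs.max(dim=1)
label = weights.meta["categories"][top_idx.item()]
print(f"{label}: {top_prob.item():.1%}")
```

The key point the episode makes is that nothing in this pipeline questions the label vocabulary itself; whatever categories and prejudices were baked into the training data come straight out the other end.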
Since the days of Vietnam, war has left the trenches and poured into television. But if there is no longer any distinction between armies and media, how can we trust the news? Costantino and Francesco explain that on the barbed wire dividing reality from fiction live some of the most interesting voices in contemporary art, but also the figure most beloved by those who understand absolutely nothing about art: Banksy. This episode touches on Buffy Sainte-Marie, Francisco Goya, Ernest Meissonier, Régis Debray, Martha Rosler, Harun Farocki, Robert Capa, Phil Collins, Marina Abramović, Hans Haacke, Hito Steyerl, Marco Travaglio, Trevor Paglen, Walid Raad, Akram Zaatari, Bernard Khoury, David Mamet, Bernard Stiegler, Banksy, Ezio Greggio, Tracey Rose, Fernando Botero, Igor Mitoraj, Lucio Fontana, Pablo Picasso, and Tony Shafrazi.
In this special interview, Monocle’s Augustin Macellari talks to artist Trevor Paglen about his new London exhibition, ‘Bloom’, and reflects on how technology’s complicated systems are shaping our lives.
From photography's beginnings, clouds have been central to aesthetic and technological debates in photographic circles. Moreover, they have featured in discussions about the nature of the medium itself. Through the juxtaposition of nineteenth-century and contemporary works, the exhibition further considers the longstanding metaphorical relationship between clouds and photography. This audio tour features contemporary artists from the exhibition, who each answer the question: why clouds?
In this episode, we host globetrotter artist Ahmet Öğüt. With many trades in his pocket, Ahmet is a tireless people's person as well as a hacker of sorts whose art-making competence allows him to infiltrate institutions from within to create parasitic organizations that instigate change! Ahmet Öğüt is an internationally acclaimed artist. His works often delve into topics such as religion, social norms, and politics using humor. Ahmet is the founder of the "Silent University", a roaming learning platform that aims to empower undocumented people through employing them as tutors. He is also the co-initiator of the "Code of Acquisitions" (with Burak Arikan), a watchdog for the ethical practices of art institutions, galleries, and studios. Links: Ahmet's website is packed with further readings and additional material on his works. http://www.ahmetogut.com Initiated by Ahmet in 2012, The Silent University is a mobile and iterative institution. As a solidarity school run by refugees, asylum seekers, and migrants, it facilitates knowledge exchange for academics who cannot share their knowledge due to bureaucratic or juridical obstacles. https://thesilentuniversity.org Co-founded by Ahmet with Burak Arıkan, Code of Acquisitions is an ongoing investigation into the good and bad practices of art institutions, galleries, and studios. Its inquiries address published policies as well as cases of misconduct and abuse. https://codeofacquisitions.org The Tokyo-based collective Chimpom invited artists such as Ai Weiwei, Taryn Simon, Trevor Paglen, and Ahmet Öğüt to contribute to an exhibition at Fukushima. These works are installed and waiting to be discovered by future settlers for years to come. More information about the "Don't Follow the Wind" exhibition: http://chimpom.jp/project/dfw.html The Guardian's Jonathan Jones was critical of this exhibition. https://www.theguardian.com/artanddesign/jonathanjonesblog/2015/jul/20/fukushima-exclusion-zone-art-politics Find out how one of the curators, Jason Waite, revisited the project. https://www.e-flux.com/architecture/at-the-border/325747/the-entropic-silence-of-fukushima/ Episode recorded on Zoom on April 21st, 2020. Interview by Can Altay. Produced by Asli Altay and Sarp Renk Özer. Music by Grup Ses.
In fall 2019, a new app called ImageNet Roulette was introduced to the world with what seemed like a simple, fun premise: snap a selfie, upload it to a database, and wait a few seconds for machine learning to tell you what type of person you are. Maybe a "teacher," maybe a "pilot," maybe even just a "woman." Or maybe, as the app's creator warned, the labels the system tagged you with would be shockingly racist, misogynistic, or misanthropic. Frequently, the warning turned out to be prescient, and the app immediately went viral thanks to its penchant for slurs and provocative presumptions. Long since decommissioned, ImageNet Roulette was part of a larger initiative undertaken by artist Trevor Paglen and artificial intelligence researcher Kate Crawford to expose the latent biases coded into the massive data sets informing a growing number of A.I. systems. It was only the latest light that Paglen's work had shined onto the dark underbelly of our image-saturated, technology-mediated world. Even beyond his Ph.D. in geography and his MacArthur "Genius" grant, Paglen's resume is unique among his peers on blue-chip gallery rosters. He's photographically infiltrated CIA black sites, scuba-dived through labyrinths of undersea data cables, launched art into space, and collaborated with NSA whistle-blower Edward Snowden, all as a means of making innovative art that brings into focus the all-but-invisible power structures governing contemporary life. On this week's episode of The Art Angle, Paglen joins Andrew Goldstein by phone to discuss his adventurous career. Although the episode was recorded before George Floyd's murder sparked nationwide demonstrations for racial justice, Paglen's work is more timely than ever for its probing of surveillance, authoritarianism, and the ways both are being simultaneously empowered and cloaked by A.I.
This week Brian and Duncan take a trip to the College Art Association conference 2020 and catch up with Natasha Egan from the Museum of Contemporary Photography. This wide-ranging conversation starts with surveillance and the museum's current exhibition "In Real Life", featuring the artists Stephanie Dinkins, Trevor Paglen, Leo Selvaggio, Maija Tammi, José Orlando Villatoro, Xu Bing, and Liam Young, and ends with the question of what photography is in our contemporary world. https://www.mocp.org/index.php https://www.collegeart.org/
This is the fourth and last episode of the mini-series "The Dark Side of AI". I am your host, Francesco, and I'm with Chiara Tonini from London. The title of today's episode is "Bias in the Machine".

C: Francesco, today we are starting with an infuriating discussion. Are you ready to be angry?

F: Yeah, sure. Is this about Brexit?

C: No, I don't talk about that. In 1986, New York City's Rockefeller University conducted a study on breast and uterine cancers and their link to obesity. Like in all clinical trials up to that point, the subjects of the study were all men. So Francesco, do you see a problem with this approach?

F: No problem at all, as long as those men had a perfectly healthy uterus.

C: In medicine, up to the end of the 20th century, medical studies and clinical trials were conducted on men, and medicine dosage and therapy were calculated on men (white men). The female body has historically been considered an exception to, or variation from, the male body.

F: Like Eve coming from Adam's rib. I thought we were past that...

C: When the female body has been under analysis, the focus was on the difference between it and the male body, the so-called "bikini approach": the reproductive organs are different, therefore we study those, and those only. For a long time medicine assumed this was the only difference.

F: Oh good...

C: This has led to a hugely harmful fallout across society. Because women had reproductive organs, they should reproduce, and all else about them was deemed uninteresting. Still today, a woman without children is somehow considered to have betrayed her biological destiny. This somehow does not apply to a man without children, who also has reproductive organs.

F: So this is an example of a very specific type of bias in medicine, regarding clinical trials and medical studies, that is not only harmful for the purposes of these studies, but has ripple effects in all of society.

C: Only in the 2010s did a serious conversation start about the damage caused by not including women in clinical trials. There are many, many examples (which we list in the references for this episode).

F: Give me one.

C: Researchers consider cardiovascular disease a male disease; they even call it "the widow-maker". They conduct studies on male samples. But it turns out the symptoms of a heart attack, especially the ones leading up to one, are different in women. This led to doctors not recognising, or dismissing, the early symptoms in women.

F: I was reading that women are also subject to chronic pain much more than men: for example migraines, and pain related to endometriosis. But there is extensive evidence now of doctors dismissing women's pain as either imaginary or "inevitable", like it is a normal state of being and does not need a cure at all.

C: The failure of the medical community as a whole to recognise this obvious bias up to the 21st century is an example of how insidious the problem of bias is. There are three fundamental types of bias:

One: Stochastic drift. You train your model on a dataset and validate it on a split of the training set. When you apply your model out in the world, you systematically add bias to the predictions, because the training data was too specific.

Two: Model bias, introduced by your choice of the parameters of your model.

Three: Bias in your training sample. People put training samples together, and people have culture, experience, and prejudice. As we will see today, this is the most dangerous and subtle bias. Today we'll talk about this bias.
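To make the first failure mode concrete, here is a small self-contained sketch, with all data synthetic and all names hypothetical: a classifier is trained on one cohort where a proxy feature happens to track the label, validates beautifully on a held-out split of that same cohort, and then degrades badly on a second cohort where the proxy breaks.

```python
# Synthetic demo: validation on a split of skewed training data looks fine,
# while accuracy on the unsampled population collapses toward chance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_cohort(n, proxy_tracks_label):
    """x1 carries a weak real signal; x2 is a proxy that only happens
    to track the label in the cohort we sampled for training."""
    y = rng.integers(0, 2, n)
    x1 = y + rng.normal(0.0, 1.5, n)
    if proxy_tracks_label:
        x2 = y + rng.normal(0.0, 0.1, n)   # near-perfect proxy here
    else:
        x2 = rng.normal(0.5, 1.0, n)       # proxy is pure noise here
    return np.column_stack([x1, x2]), y

# Train and validate only on the cohort where the proxy holds.
X, y = make_cohort(10_000, proxy_tracks_label=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("validation split of training cohort:", round(clf.score(X_val, y_val), 3))

# Deploy on a cohort that was never sampled: the proxy breaks.
X_new, y_new = make_cohort(10_000, proxy_tracks_label=False)
print("unsampled cohort:", round(clf.score(X_new, y_new), 3))
```

The male-only clinical trials above are exactly this pattern: the held-out validation data shares the training sample's skew, so the failure is invisible until the model meets the population it never saw.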
C: Bias is a warping of our understanding of reality. We see reality through the lens of our experience and our culture. Bias can originate in traditions going back centuries, and it is so ingrained in our way of thinking that we don't even see it anymore.
F: And let me add: when it comes to machine learning, we see reality through the lens of data. Bias is everywhere, and we could spend hours and hours talking about it. It's complicated.
C: It's about to become more complicated.
F: Of course, if I know you...
C: Let's throw artificial intelligence into the mix.
F: You know, there was a happier time when this sentence didn't fill me with a sense of dread...
C: ImageNet is an online database of over 14 million photos, compiled more than a decade ago at Stanford University. It was used to train machine learning algorithms for image recognition and computer vision, and it played an important role in the rise of deep learning. We've all played with it, right? The cats-and-dogs classifier when learning TensorFlow? (I am a dog, by the way.)
F: ImageNet has been a critical asset for computer-vision research. There was an annual international competition to create algorithms that could most accurately label subsets of images. In 2012, a team from the University of Toronto used a convolutional neural network to handily win the top prize. That moment is widely considered a turning point in the development of contemporary AI.
C: The final year of the ImageNet competition was 2017, and accuracy in classifying objects in the limited subset had risen from 71% to 97%. But that subset did not include the "Person" category, where the accuracy was much lower. ImageNet contained photos of thousands of people, with labels. These included straightforward tags like "teacher", "dancer", and "plumber", as well as highly charged labels like "failure, loser" and "slut, slovenly woman, trollop".
F: Uh oh.
C: Then "ImageNet Roulette" was created by the artist Trevor Paglen and a Microsoft researcher named Kate Crawford. It was a digital art project where you could upload your photo and let the classifier identify you, based on the labels in the database. Imagine how well that went.
F: I bet it didn't work.
C: Of course it didn't work. Random people were classified as "orphan", "non-smoker", or "alcoholic". Somebody with glasses was a "nerd". Tabong Kima, a 24-year-old African American, was classified as "offender" and "wrongdoer".
F: And there it is.
C: Quote from Trevor Paglen: "We want to show how layers of bias and racism and misogyny move from one system to the next. The point is to let people see the work that is being done behind the scenes, to see how we are being processed and categorized all the time."
F: The ImageNet labels were applied by thousands of unknown people, most likely in the United States, hired by the team from Stanford and working through the crowdsourcing service Amazon Mechanical Turk. They earned pennies for each photo they labeled, churning through hundreds of labels an hour. The labels were not verified in any way: if a labeler thought someone looked "shady", that label is just a product of their prejudice, with no basis in reality. As they worked, biases were baked into the database.
C: Paglen again: "The way we classify images is a product of our worldview. Any kind of classification system is always going to reflect the values of the person doing the classifying." They defined what a "loser" looked like.
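The mechanics are easy to demonstrate. Below is a minimal sketch (our illustration; it is not ImageNet Roulette's actual code or model) using torchvision's pretrained ResNet-50: whatever photo you feed it, the "answer" can only come from the fixed set of categories annotators defined for the training set. The file name is a placeholder.

```python
# A minimal sketch (ours, not ImageNet Roulette's code) showing that a
# classifier can only answer with labels baked into its training set.
# Assumes torchvision >= 0.13 and a local image file "photo.jpg".
import torch
from torchvision import models
from torchvision.io import read_image

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()     # the preprocessing preset for these weights

img = read_image("photo.jpg")         # placeholder path
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))
probs = logits.softmax(dim=1)[0]

# Whatever the photo shows, the prediction is drawn from the 1,000
# categories someone chose for ImageNet: the worldview is in the label set.
top5 = probs.topk(5)
for score, idx in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][idx.item()]}: {score.item():.2%}")
```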
C: And a "slut". And a "wrongdoer".
F: The labels originally came from another sprawling collection of data called WordNet, a kind of conceptual dictionary for machines built by researchers at Princeton University in the 1980s. In importing these labels wholesale, inflammatory ones included, the Stanford researchers may not have realized what they were doing.
C: What is happening here is the transfer of bias from one system to the next. Tech jobs, in past decades and still today, go predominantly to white males from a narrow social class. Inevitably, they imprint the technology with their worldview. So their algorithms learn that a person of color is a criminal, and a woman with a certain look is a slut. I'm not saying they do it on purpose, but the lack of diversity in the tech industry translates into a narrower worldview, which has real consequences for the quality of AI systems.
F: Diversity in tech teams is often framed as an equality issue (which of course it is), but it also carries enormous advantages: it creates the cognitive diversity that is reflected in superior products and services.
C: And this is an ongoing problem. In recent months, researchers have shown that face-recognition services from companies like Amazon, Microsoft, and IBM can be biased against women and people of color. Crawford and Paglen argue this: "In many narratives around AI it is assumed that ongoing technical improvements will resolve all problems and limitations. But what if the opposite is true? What if the challenge of getting computers to 'describe what they see' will always be a problem? The automated interpretation of images is an inherently social and political project, rather than a purely technical one. Understanding the politics within AI systems matters more than ever, as they are quickly moving into the architecture of social institutions: deciding whom to interview for a job, which students are paying attention in class, which suspects to arrest, and much else."
F: You are using the words "interpretation of images" here, as opposed to "description" or "classification". Certain images depict something concrete, with an objective reality. Like an apple. But other images... not so much?
C: ImageNet contains images corresponding only to nouns (not verbs, for example). Noun categories such as "apple" are well defined. But not all nouns are created equal. The linguist George Lakoff points out that the concept of an "apple" is more nouny than the concept of "light", which in turn is more nouny than a concept such as "health". Nouns occupy various places on an axis from concrete to abstract, and from descriptive to judgmental, and the images corresponding to them become more and more ambiguous. These gradients have been erased in the logic of ImageNet: everything is flattened out and pinned to a label. The results can be problematic, illogical, and cruel, especially when it comes to labels applied to people.
F: So when an image is interpreted as Drug Addict, Crazy, Hypocrite, Spinster, Schizophrenic, Mulatto, Redneck... this is not an objective description of reality; it's somebody's worldview coming to the surface.
C: The selection of images for these categories skews their meaning in ways that are gendered, racialized, ableist, and ageist. ImageNet is an object lesson in what happens when people are categorized like objects. And this practice has only become more common in recent years, often inside the big AI companies, where there is no way for outsiders to see how images are being ordered and classified.
C: The bizarre thing about these systems is that they recall early 20th-century criminologists like Lombroso, phrenologists (including Nazi scientists), and physiognomy in general: a discipline founded on the assumption that there is a relationship between the image of a person and the character of that person. If you are a murderer, or a Jew, the shape of your head, for instance, will tell.
F: In reaction to these ideas, René Magritte produced that famous painting of a pipe with the caption "This is not a pipe".
C: You know that famous photograph of the sailor kissing the nurse at the end of the Second World War? The nurse went public about it when she was around 90 years old and told how this total stranger in the street had grabbed her and kissed her. It is a picture of sexual harassment, and knowing that, it does not seem romantic anymore.
F: Not romantic at all, indeed.
C: Images do not describe themselves. This is a feature that artists have explored for centuries. We see images differently when we see how they are labeled. The correspondence between image, label, and referent is fluid. What's more, those relations can change over time as the cultural context of an image shifts, and they can mean different things depending on who looks, and where they are located. Images are open to interpretation and reinterpretation. Entire subfields of philosophy, art history, and media theory are dedicated to teasing out the nuances of the unstable relationship between images and meanings. The common mythos of AI, and of the data it draws on, is that it is objectively and scientifically classifying the world. But it's not true: everywhere there is politics, ideology, prejudice, and all the subjective stuff of history.
F: When we survey the most widely used training sets, we find that this is the rule rather than the exception. Training sets are the foundation on which contemporary machine-learning systems are built; they are central to how AI systems recognize and interpret the world. By looking at the construction of these training sets and their underlying structures, we discover many unquestioned assumptions that are shaky and skewed. These assumptions inform the way AI systems work—and fail—to this day. And the impenetrability of the algorithms, the impossibility of reconstructing the decision-making of a neural network, hides the bias even further from scrutiny. When an algorithm is a black box and you can't look inside, you have no way of analysing its bias.
C: And the skew and bias of these algorithms have real effects in society. The more we use AI in the judicial system, in medicine, in the job market, in security systems based on facial recognition, the list goes on and on. Last year Google unveiled BERT (Bidirectional Encoder Representations from Transformers), an AI system that learns language: a natural-language-processing engine for generating written (or spoken) language.
F: We have an episode in which we explain all that.
C: It was trained on lots and lots of digitized information, as varied as old books, Wikipedia entries, and news articles. Decades, and even centuries, of biases, along with a few new ones, were baked into all that material. So, for instance, BERT is extremely sexist: it associates almost all professions and positive attributes with the male gender (except for "mom"). And BERT is widely used in industry and academia; it can, for example, interpret news headlines automatically. Even Google's search engine uses it.
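These associations are easy to probe for yourself. Here is a minimal sketch (ours; the template sentence and the bert-base-uncased checkpoint are our choices, not the episode's) using the Hugging Face Transformers fill-mask pipeline:

```python
# A minimal probe (our illustration) of the profession/pronoun associations
# described above. Assumes the `transformers` library and the public
# bert-base-uncased checkpoint; the template sentence is our own.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

for profession in ["doctor", "nurse", "engineer", "programmer"]:
    # Restrict predictions to "he"/"she" and compare the probability mass.
    preds = unmasker(
        f"The {profession} said that [MASK] would be back soon.",
        targets=["he", "she"],
    )
    scores = {p["token_str"]: round(p["score"], 4) for p in preds}
    print(profession, scores)
```

If the probability mass shifts toward "he" for some professions and "she" for others, that skew came from the training corpora, not from anything the model knows about people.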
C: Try googling "CEO", and you get a gallery of images of old white men.
F: Such a pervasive and flawed AI system can propagate inequality at scale. And it's super dangerous because it's subtle. Especially in industry, query results will not be tested and examined for bias: AI is a black box, and researchers take its results at face value.
C: There are many cases of algorithm-based discrimination in the job market. Targeting candidates for tech jobs, for instance, may be done by algorithms that do not recognise women as potential candidates, so women will not be shown as many job ads as men. Or automated HR systems will rank them lower (for the same CV) and screen them out. In the US, algorithms are used to calculate bail. The majority of the prison population in the US is composed of people of colour, as a result of a systemic bias that goes back centuries. An algorithm learns that a person of colour is more likely to commit a crime, more likely to be unable to afford bail, more likely to violate parole. As a result, people of colour receive harsher punishments for the same crime, and the algorithm amplifies this inequality at scale.
Conclusion: question everything, and never take the predictions of your models at face value. Always question how your training samples were put together, who put them together, when, and in what context. Always remember that your model produces an interpretation of reality, not a faithful depiction of it. Treat reality responsibly.
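One way to put that conclusion into practice is to never report a single aggregate metric, but to disaggregate every evaluation by group. A minimal sketch (ours; the column names and toy data are hypothetical):

```python
# A minimal sketch (ours; column names and data are hypothetical) of the
# disaggregated evaluation the conclusion calls for: never trust one
# aggregate number, compare error rates across groups.
import pandas as pd

def audit_by_group(df, group_col="group", label_col="label", pred_col="prediction"):
    """Report sample size, accuracy, and false-positive rate per group."""
    rows = []
    for group, sub in df.groupby(group_col):
        acc = (sub[label_col] == sub[pred_col]).mean()
        negatives = sub[sub[label_col] == 0]
        fpr = (negatives[pred_col] == 1).mean() if len(negatives) else float("nan")
        rows.append({"group": group, "n": len(sub), "accuracy": acc, "fpr": fpr})
    return pd.DataFrame(rows)

# Usage with a toy frame; in practice df holds your model's real predictions.
df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "label":      [0, 1, 0, 0, 1, 0],
    "prediction": [0, 1, 0, 1, 0, 1],
})
print(audit_by_group(df))
```

A model can look fine in aggregate while its false-positive rate is several times higher for one group, which is exactly the pattern described above for bail and hiring systems.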
I think it is worth noting that, in my view, the best example of artificial intelligence is the artistic tradition or, as Hegel supposedly once said, "the pyramids of Egypt were already a mystery to the Egyptians themselves." Article on Medium: http://bit.ly/modelarelruido Links to AI Now Institute articles: 1. On the naive empiricism on which computer vision systems are based, see: Kate Crawford and Trevor Paglen, "Excavating AI: The Politics of Training Sets for Machine Learning" (September 19, 2019) https://excavating.ai. 2. On machine learning as a global system of extraction and exploitation, see: Kate Crawford and Vladan Joler, "Anatomy of an AI System: The Amazon Echo As An Anatomical Map of Human Labor, Data and Planetary Resources," AI Now Institute and Share Lab (September 7, 2018) https://anatomyof.ai. Music credits: https://filmmusic.io: "Chill Wave" by Kevin MacLeod (https://incompetech.com) License: CC BY (http://creativecommons.org/licenses/by/4.0/) "Time Ticks Away"; "Pins and Needles"; "End Boss" by My Heart, Your Thunder (https://myheartyourthunder.bandcamp.com/) (https://soundcloud.com/myheartyourthunder/) License: CC BY-NC-ND 3.0 (https://creativecommons.org/licenses/by-nc-nd/3.0/) "3 6 1 5 P A R C O U R S U P"; "Interstellar Capitalism"; "Summer™ Blues" by Anonymous420 (https://loyaltyfreakmusic.com/tagged/anonymous420) License: CC BY 3.0 (https://creativecommons.org/licenses/by/3.0/) "Hug Convoy" by Loyalty Freak Music (https://loyaltyfreakmusic.com/tagged/loyaltyfreakmusic) License: CC0 1.0 Universal (https://creativecommons.org/publicdomain/zero/1.0/) "You're no good but I love you" by Soft and Furious (https://loyaltyfreakmusic.com/tagged/softandfurious) License: CC0 1.0 Universal (https://creativecommons.org/publicdomain/zero/1.0/) All pieces were trimmed to fit the purposes of this podcast. I claim no ownership of any of these works.
We pull back the curtain for a look at the complicated – and concerning – biases and preconceptions being programmed into artificial intelligences, with artist Trevor Paglen. Plus: Metronomy frontman Joe Mount takes us through the band’s latest album and we find out what it takes to put on a permanent exhibition as we meet London’s Wellcome Collection curator, Clare Barlow.
Hosted by Julieta Aranda, the podcasts focus on some of today's most important issues, intersecting art, politics, environmental action, and gender disparity. The podcasts bring together the voices of highly respected experts and thinkers such as Benjamin H. Bratton, Coco Fusco, Stefanie Hessler, the Otolith Group, Otobong Nkanga, Trevor Paglen, and Lucia Pietroiusti, among many others, who jointly reflect on how art addresses issues including how to tackle and reduce carbon emissions, the challenges of being an artist and a mother, the persistence of surveillance states, and the rise of new nationalisms.
Another jampacked Tangent Show! This week we went to see Sites Unseen by Trevor Paglen at the Museum of Contemporary Art in Downtown San Diego and it sent our creativity into overdrive! We talk all about it here. And share some relevant tips for making books like what the Freedom of Information Act is and how it can help you create content. Also, we answer two of the most frequently asked questions we've heard this week - how to make coloring books (that will actually sell) and how to grow a Facebook group once you've got it started (see my previous video to learn how to start a group on the 'down low'). Plus... what we can learn from Tide being repackaged, why LaCroix is losing money and Isaac explains what the deal is with the new iPod Touch. Lastly, we go trendhunting at Google Trends to find an unusual trending niche that is very underserved - and Katharyne shares a bunch of low content book ideas for it. We also show how to use Google Trends to find international niche ideas. Great if you're making books for other countries outside the US! Links: Tangent Templates: https://templates.tangent.rocks KDP Group: https://www.facebook.com/groups/amazoncreatespace Cultivate: https://cultivate.katharyne.com/curriculum Trevor Paglen: http://www.paglen.com Google Trends: https://trends.google.com
e-flux founder, journal editor, and artist Anton Vidokle discusses cosmism with Kaye Cain-Nielsen, editor-in-chief of e-flux journal. You can read more on cosmism in the 88th issue of e-flux journal (February 2018). Featuring texts by Robert Bird, Maria Chehonadskih, Keti Chukhrov, Boris Groys, Trevor Paglen, Alexei Penzin, Marina Simakova, Arseny Zhilyaev, and a Timeline of Russian Cosmism compiled by Anastasia Gacheva, Arseny Zhilyaev, and Anton Vidokle.
Kiwi UFO: a blazing cylinder over New Zealand. Blank Spots on the Map "outs" spy satellites whose operations may trigger UFO reports. API introduces a new team member, chats about some interesting cases, and discusses explanations for common misidentifications. In Episode 12 of Case Files, API Director Paul Carr, Chief of Investigations Marsha Barnhart, and new Lead Investigator Savannah Dollison join in a team round-table discussion about some of their more recent cases. Marsha spotlights her investigation of an unidentified-object case set in New Zealand. She also discusses Blank Spots on the Map, a book by Trevor Paglen containing a chapter on the identifying characteristics of spy satellites, which researchers might find useful when investigating reports of anomalous lights in the night sky. Links: Blank Spots on the Map Trevor Paglen Article on Trevor Paglen Sky Map http://www.sky-map.org/ LiveATC.net https://www.liveatc.net/ US Government FOIA Requests US FAA API Website Report A UFO Credits: Host and Producer: Marsha Barnhart Post Production: Paul Carr Episode 12 Music DrastikMeazures: https://soundcloud.com/draztikmeazures BoxCat Games: https://soundcloud.com/search?q=Box%20Cat%20Games Alien Chronicler: https://soundcloud.com/search?q=Alien%20Chronicler D J Spooky
How can CubeSats—the small, standardized satellites paving the way for the democratization of space—change our sense of the possible? We dive into two projects: the Planetary Society's LightSail 2, with Director of Science and Technology Bruce Betts; and Orbital Reflector, the first satellite launched purely as an artistic gesture, which we discuss with MacArthur "genius" grant awardee Trevor Paglen.
Three shows. Three reviews. The Trevor Paglen exhibition at Metro Pictures is creepy as fuck, and we take issue with New York Magazine critic Jerry Saltz's review of it. Ellie Ga at Bureau sensitively touches on the horror of the Syrian refugee crisis. Omer Fast at James Cohan produces some powerful videos about the role of the artist in times of crisis, but they are overshadowed by protestors who believe his decision to transform the front of the gallery into a waiting room in a Chinese bus station amounts to yellowface. Our thoughts on this and just about everything else.
Concluding discussion of the conference with Hito Steyerl, Anton Vidokle, Arseny Zhilyaev, Trevor Paglen, and Anselm Franke (German translation).
Art without death: Russian Cosmism. Lecture, Sep 2, 2017 (German translation).
Art without death: Russian Cosmism. Lecture, Sep 2, 2017 (original version).
This week we talk to two artists who see themselves as detectives. Trevor Paglen has designed sculptures for the Fukushima Exclusion Zone, as well as art that's been launched into geostationary orbit. His photographs of secret military bases (taken at long range, using equipment made for astronomers) appeared in the Academy Award–winning documentary Citizenfour. We spoke with him about how to care for one's personal digital hygiene in the age of surveillance. To document torture, mass executions, and human-rights abuses at Saydnaya Prison in Syria, Amnesty International enlisted the help of sound artist Lawrence Abu Hamdan. This week we're presenting Abu Hamdan's sound installation Saydnaya (The Missing 19DB)—that's "DB" as in decibels, the standard measure for the volume of sound. It offers an uncompromising depiction of Saydnaya, a notorious military prison in Syria, believed to be the site of up to fifty hangings each day. It's a compound in the mountains just north of Damascus holding up to twenty thousand people in conditions of enforced silence. Abu Hamdan made the piece through interviews conducted in Istanbul with five survivors of the prison. This episode also features a luminous and digressive review of The Organist, ripped from the heaps of listener commentary on our Apple Podcasts page, by the writer Vu Tran—no stranger to art as detective work himself. His work has been anthologized in Best American Mystery Stories, and his novel Dragonfish adapts and carries forward the tradition of writers like Raymond Chandler and Dashiell Hammett. In his reflections on this podcast, Vu finds Conrad's "The Secret Sharer" in the woods of Vermont. To learn more about conditions in Saydnaya Prison, visit Amnesty International's interactive digital model, as reconstructed from interviews by Lawrence Abu Hamdan and Forensic Architecture. Produced by Ross Simonini. Featured photo of Drone Vision courtesy of Trevor Paglen.
Made possible in part by: Orange Cyberdefense. On the legal and political side of this question, our guests are lawyer Christiaan Alberdingk Thijm of Bureau Brandeis and VVD member of parliament Jeroen van Wijngaarden. Opinions differ: the cabinet and, for example, D66 want full encryption for everyone, but the police, the public prosecution service, and, for example, the VVD see it differently. Technical: If you want to apply encryption to your files or your email, how do you do that? Encryption for dummies. How necessary is this, really, for the "ordinary" internet user? Is encryption watertight, and should we trust encryption from third parties such as WhatsApp and Dropbox? Our guest is Herbert Bos, professor of systems and network security at VU Amsterdam. Tech news: Among other things, a topic we already discussed last week: the broken iPhones. Apple is now being sued over them. Also: smart pet toys from Acer, and the Oval Office for YouTubers. Internet cables in the ocean: The internet travels through the air, over land, and also under the sea, via the well-known cables that run along the ocean floor. But almost nobody has seen those cables in real life. Except artist Trevor Paglen: he went looking for them, and The Creators Project followed him. See omnystudio.com/listener for privacy information.
DEEP CABLES: Uncovering the Wiring of the World. The 8th event of the Disruption Network Lab. Keynote: "Network Exposed: Charting the Invisible", with Henrik Moltke (investigative journalist, DK/USA) and Trevor Paglen (artist and geographer, USA), moderated by Tatiana Bazzichelli (artistic director, Disruption Network Lab, IT/DE). Note: Trevor Paglen's presentation is not recorded. In August 2015 Henrik Moltke and a team of journalists from ProPublica and the New York Times revealed intimate details of the National Security Agency's decades-long partnership with the telecom giant AT&T. A seemingly innocuous detail in a random document allowed the team to pin down the elusive collaboration, referred to by codename in the documents leaked by Edward Snowden: a cable severed by the 2011 earthquake in Japan caused an outage, after which the NSA's "collection", or tap, on the cable resumed. The date matched the repair on the northern leg of the Japan-US Cable, one of a handful of main arteries connecting Asia and the US. At the end of the cable is an anonymous-looking industrial building, far off on the Mendocino coast of Northern California. The cable station is operated by AT&T. Under the motto "Follow the cables", Henrik Moltke recounts how he retraces the physical footprint of deep-state secrets. In this presentation, Henrik Moltke and Trevor Paglen trace a link between the imaginary concept of "The Internet" and the present configuration of geopolitical wired structures, where big data, cloud computing, mass surveillance, and the monopolies of big corporations are intertwined. By disclosing through photography the development of transatlantic and undersea fibre-optic cables, and by reconnecting past with present through charting the hidden infrastructure of information technology, this event exposes the inner workings of the modern business of cable infrastructure, showing its global dimension as well as the invisible sites of the physical Internet. http://www.disruptionlab.org/deep-cables Photo: Maria Silvano. Produced by Voice Republic. For more podcasts visit http://voicerepublic.com
Recorded June 19, 2015 at Stockholm Studios in Bushwick, Brooklyn. Ralph Smith went to RIT with Tom, so I'd met him a few times over the years. He was a PDN 30 in 2014, and he's photographed for Sotheby's, Food Network Magazine, Field & Stream, and Condé Nast Traveler, among several others. His still life photographs are meticulously lit, playful, and colorful. He works in a part of photoland that I don't know much about, so I was curious to hear what he had to say about hustling as a photographer. He also happens to be a fellow Minnesotan, so it was nice to relate from that perspective as well. He brought some interesting books, as we've come to expect, and it's always interesting to see the combinations. In this episode we get Duane Michals, Trevor Paglen, Star Wars, and John Pfahl. I have a new appreciation for Michals after viewing his skewering of the photo world. I started to wonder if this podcast was satire. What would Duane Michals say about a photobook podcast? I don't want to know the answer. Enjoy! Photobooks/Photography Discussed
A new collection of Ranter writings from the English Civil War sheds light on their extreme libertarian views. Anne McElvoy is joined by the book's editor Nigel Smith. Plus journalist Rod Liddle and Conservative Party politician Douglas Carswell discuss libertarianism today. New Generation Thinker Naomi Paxton reflects on the Actresses' Franchise League. And a 62 metre photographic installation unveiled at London's Gloucester Road Tube station depicts the US reconnaissance base in North Yorkshire. Anne speaks to the image's creator Trevor Paglen.
MEDIA ROOTS: Trevor Paglen's work deliberately blurs the lines between science, contemporary art, journalism, and other disciplines to construct unfamiliar, yet meticulously researched, ways to see and interpret the world around us. Paglen's visual work has been exhibited at art museums worldwide, and his writing and art have been published in major publications including The New York Times, Wired, Vanity Fair, and Newsweek. Paglen holds a B.A. from UC Berkeley, an M.F.A. from the School of the Art Institute of Chicago, and a Ph.D. in Geography from UC Berkeley, where he remains an affiliated researcher. In the '90s, Paglen was a member of a locally revered Bay Area noise/experimental project called Noise Gate, and co-ran an infamous underground venue in downtown Oakland called the Sandcrawler. His electronic music background hugely inspired co-host Robbie Martin in his own musical evolution. Paglen is also the author of several books: Torture Taxi, the first book to comprehensively cover the CIA's extraordinary rendition program; I Could Tell You But Then You Would Have to be Destroyed by Me, a look at the world of black projects through unit patches and memorabilia created for top-secret programs; and Blank Spots on the Map: The Dark Geography of the Pentagon's Secret World, a broader look at secrecy in the United States. http://www.mediaroots.org/interview-with-experimental-geographer-artist-trevor-paglen.php http://www.MediaRoots.org
See the unseen in the desert with photographer Trevor Paglen.
This Educator Guide corresponds with the "Trevor Paglen: Photographer" video from KQED Spark.
Sight Unseen speaks with artists of all different mediums looking at the underbelly of their work, exploring what drives them to make such public works about private curiosities and how their work reflects the human condition. In this show, the curators of SF MoMA's SECA Awards, Apsara DiQuinzio and Alison Gass speak about Bay Area art and the winners of this year's SECA awards, as well as discussing the Mission School and the process of co-curating.
See the unseen in the desert with photographer Trevor Paglen. Original air date: April 2006.
This week: Patricia and Brian round-table with Apsara DiQuinzio and Alison Gass, assistant curators at SFMOMA, about the 2008 SECA Award. Apsara and Alison let us in on the unique curatorial process of the SECA Award, including leading tour buses of museum patrons through rapid-fire studio visits. SECA, the Society for the Encouragement of Contemporary Art, is an auxiliary group of SFMOMA and has honored Bay Area contemporary artists since 1967. The 2008 winners are Tauba Auerbach, Desirée Holman, Jordan Kantor, and Trevor Paglen, whose work will be on display at SFMOMA beginning February 12, 2009.
This week: Marc and Brian talk to Trevor Paglen. "Trevor Paglen is an artist, writer, and experimental geographer working out of the Department of Geography at the University of California, Berkeley. His work involves deliberately blurring the lines between social science, contemporary art, and a host of even more obscure disciplines in order to construct unfamiliar, yet meticulously researched ways to interpret the world around us. His most recent projects involve close examinations of state secrecy, the California prison system, and the CIA's practice of "extraordinary rendition." Paglen's visual work has been shown in galleries and museums including MASS MoCA (2006), the Warhol Museum (2007), and Diverse Works (2005), in journals and magazines from Wired to The New York Review of Books, and at numerous other arts venues, universities, conferences, and public spaces. He has had one-person shows at Deadtech (2001), the LAB (2005), and Bellwether Gallery (2006). Paglen's first book, Torture Taxi: On the Trail of the CIA's Rendition Flights (co-authored with AC Thompson; Melville House, 2006), was the first book to systematically describe the CIA's "extraordinary rendition" program. His second book, I Could Tell You But Then You Would Have to be Destroyed by Me (Melville House, 2007), an examination of the visual culture of "black" military programs, will be published in November 2007. He is currently completing his third book, entitled Blank Spots on a Map, which will be published by Dutton/NAL/Penguin in late 2008/early 2009. Paglen has received grants and commissions from Rhizome.org, the LEF Foundation, and the Eyebeam Center for Art and Technology. In 2005, he was a Vectors Journal Fellow at the University of Southern California. Paglen holds a BA from UC Berkeley, an MFA from the School of the Art Institute of Chicago, and is currently completing a PhD in the Department of Geography at the University of California at Berkeley." NEXT: Terri and Serena talk to Pate Conaway. "Pate Conaway is an interdisciplinary artist from Chicago, Illinois. Conaway sees the act of art-making as a performance in itself. He has produced art in gallery situations, including a five-week stint at the Museum of Contemporary Art in Chicago during which he knitted a pair of nine-foot-long mittens. The artist, whose background is in performance and paper arts, continues to work in sculpture, installation, and interactive performance. Now learning to sew, Conaway is fascinated by the idea of applying garment construction techniques to bookbinding. Pate Conaway is a graduate of Chicago's Second City Training Center and received his MFA from Columbia College, Chicago. He has exhibited extensively in the Midwest, and his work can be found in the Artist Book Collection at the Museum of Contemporary Art, Chicago." AND Mike B. has a rant to offer.