Podcasts about Paglen

  • 12 PODCASTS
  • 14 EPISODES
  • 44m AVG DURATION
  • INFREQUENT EPISODES
  • Jun 3, 2024 LATEST

POPULARITY

[Popularity chart: 2017–2024]


Best podcasts about Paglen

Latest podcast episodes about Paglen

New Models Podcast
Preview | Psyberspace w/ Trevor Paglen (NM68)

Jun 3, 2024 · 23:03


Full ep released to subscribers: 10 Aug 2023 | To join New Models, find us via patreon.com/newmodels & newmodels.substack.com _ Artist Trevor Paglen speaks with New Models about systems of “influence” past and present – pointing to a transition from a world of surveillance capitalism to one of PSYOPS capitalism. This conversation follows Paglen's parallel 2023 exhibitions “Hide the Real, Show the False” at n.b.k. Berlin and “You've Just Been F*cked by PSYOPS” at Pace Gallery in New York. For more: Tw/X: @trevorpaglen https://paglen.studio/ Jak Ritger, https://www.punctr.art/unlimited-hangout-the-ufo-story

A History of the World in Spy Objects
Trevor Paglen: Skynet

Feb 20, 2024 · 13:28


How dangerous is metadata? According to the artist and author Trevor Paglen, it can be deadly. Paglen joins host Alice Loxton to shine a light on Skynet – a network of all-seeing satellites – and the ominous AI algorithm that farms metadata and gets to decide who lives and who dies. From SPYSCAPE, the home of secrets. A Cup And Nuzzle production. Series produced by Alex Burnard, Morgan Childs, Claire Crofton, Joe Foley, Frank Palmer, Kellie Redmond and Isabel Sutton. Music by Nick Ryan.

Time Sensitive Podcast
Trevor Paglen on Art in the Age of Mass Surveillance and Artificial Intelligence

Sep 22, 2021 · 72:41


Trevor Paglen aspires to see the unseen. The artist explores the act of looking through various angles—such as how artificial-intelligence systems have been trained to “see” and categorize the world, or the disquieting sense of being “watched” by a security camera—and creates scenarios that frequently implicate viewers in the experience. At other times, he'll take pictures of places that are typically kept far out of sight, including the rarely seen headquarters of America's National Security Agency, or the Mojave Desert, home to numerous military facilities, prisons, and a former nuclear testing site. Paglen, who has a Ph.D. in geography from the University of California, Berkeley, also thinks about the relationship between space and time, and how the associations a person makes while looking at something—be it an age-old landscape or a satellite in endless orbit around the Earth—are fleeting and constantly changing. By highlighting invisible frameworks that exist in the world, Paglen invites viewers to think about life's inconspicuous, and often unsettling, realities. Paglen, who is 47 and has studios in New York and Berlin, draws on science, technology, and investigative journalism to make his wide-ranging work. In one of his early projects, “Recording Carceral Landscapes” (1995–2004), he wore a concealed microphone and posed as a criminology student to document the interiors of California penitentiaries. For “The Last Pictures” (2012), he collaborated with materials scientists at M.I.T. to devise an ultra-archival disc, micro-etched with a collection of 100 images, and launched it into space on a communications satellite for aliens to find. More recently, his viral digital art project and app “ImageNet Roulette” (2020), which allowed users to upload photos of their faces to see how A.I. might label them, horrified many users with racist, sexist, or overtly stereotypical results, leading ImageNet, a leading image database, to remove half a million images. Beyond his art practice, Paglen continues his preoccupation with perception. He studies martial arts, surfs, and composes music—activities that require constant, intense awareness. It all stems from a heightened consciousness of, and interest in, the concept of observation that he's carried for nearly his entire life. “We're all trying to learn different ways of seeing,” he says. On this episode, Paglen discusses his deep-seated fascination with perception, talking with Spencer about the impacts of surveillance, deserts as sites of secrecy, and the value of trying to perceive forces that seem impossible to see.

Show notes:
Full transcript on timesensitive.fm
@trevorpaglen
paglen.studio
04:54: “The Last Pictures” project (2012)
19:51: “Orbital Reflector” (2018)
29:48: Robert Smithson's “Spiral Jetty” (1970)
42:53: Paglen's thrash group, Noisegate
47:15: “Recording Carceral Landscapes” (1995–2004)
1:05:13: “ImageNet Roulette” (2020)
1:05:13: “Bloom” (2020)

The Quarantine Tapes
The Quarantine Tapes 186: Trevor Paglen

Apr 22, 2021 · 30:01


On episode 186 of The Quarantine Tapes, Paul Holdengräber is joined by Trevor Paglen. Trevor is an artist, and he talks with Paul about how he surprised himself with the type of work he found himself interested in this year. They discuss his focus on flowers in his most recent project, “Bloom.” Trevor and Paul dig into how images have been changing in recent years and talk about Trevor's essay, “Invisible Images.” They discuss the proliferation of algorithms and computer vision and talk about NFTs, Walter Benjamin, and the changing meaning of images during the pandemic. Then, Trevor unpacks the worrisome implications of images being separated from human eyes and his concerns over how these systems monitor and extract value from our lives.

Trevor Paglen is an artist whose work spans image-making, sculpture, investigative journalism, writing, engineering, and numerous other disciplines. Paglen's work has had one-person exhibitions at the Smithsonian Museum of American Art, Washington D.C.; Carnegie Museum of Art, Pittsburgh; Fondazione Prada, Milan; the Barbican Centre, London; Vienna Secession, Vienna; and Protocinema Istanbul, and has been included in group exhibitions at the Metropolitan Museum of Art, the San Francisco Museum of Modern Art, the Tate Modern, and numerous other venues. Paglen has launched an artwork into distant orbit around Earth in collaboration with Creative Time and MIT, contributed research and cinematography to the Academy Award-winning film Citizenfour, and created a radioactive public sculpture for the exclusion zone in Fukushima, Japan. Paglen is the author of several books and numerous articles on subjects including experimental geography, artificial intelligence, state secrecy, military symbology, photography, and visuality. Paglen's work has been profiled in the New York Times, the New Yorker, the Wall Street Journal, Wired, the Financial Times, Artforum, and Aperture. In 2014, he received the Electronic Frontier Foundation's Pioneer Award, and in 2016 he won the Deutsche Börse Photography Prize. Paglen was named a MacArthur Fellow in 2017. Paglen holds a B.A. from U.C. Berkeley, an MFA from the Art Institute of Chicago, and a Ph.D. in Geography from U.C. Berkeley.

The Art Angle
Re-Air: Why Artist Trevor Paglen Is Doing Everything He Can to Warn Humanity About Artificial Intelligence

Mar 12, 2021 · 36:24


In fall 2019, a new app called ImageNet Roulette was introduced to the world with what seemed like a simple, fun premise: snap a selfie, upload it to a database, and wait a few seconds for machine learning to tell you what type of person you are. Maybe a "teacher," maybe a "pilot," maybe even just a "woman." Or maybe, as the app's creator warned, the labels the system tagged you with would be shockingly racist, misogynistic, or misanthropic. Frequently, the warning turned out to be prescient, and the app immediately went viral thanks to its penchant for slurs and provocative presumptions. Long since decommissioned, ImageNet Roulette was part of a larger initiative undertaken by artist Trevor Paglen and artificial intelligence researcher Kate Crawford to expose the latent biases coded into the massive data sets informing a growing number of A.I. systems. It was only the latest light that Paglen's work had shined onto the dark underbelly of our image-saturated, technology-mediated world. Even beyond his Ph.D. in geography and his MacArthur "Genius" grant, Paglen's resume is unique among his peers on blue-chip gallery rosters. He's photographically infiltrated CIA black sites, scuba-dived through labyrinths of undersea data cables, launched art into space, and collaborated with NSA whistle-blower Edward Snowden, all as a means of making innovative art that brings into focus the all-but-invisible power structures governing contemporary life. On this week's (re-aired) episode of The Art Angle, Paglen joins Andrew Goldstein by phone to discuss his adventurous career. 

Voice of the Arts
Dan Leers - Carnegie Museum of Art

Sep 16, 2020


Dan Leers, Carnegie Museum of Art Photography Curator, discusses "Trevor Paglen: Opposing Geometries" - on view now through March 14th in Gallery One and two locations in the Scaife Galleries. The exhibit received a full page of coverage in the Sunday Arts and Leisure section of the New York Times, and Paglen's work at the Pace Gallery in London has also won praise. A series of podcasts, a discussion with Paglen and Leers, and a catalog are forthcoming. Leers discusses the measures being taken to ensure safety during the pandemic and describes many of the photographs on view, with details about their processing and philosophy.

The Art Angle
Why Artist Trevor Paglen Is Doing Everything He Can to Warn Humanity About Artificial Intelligence

Jun 12, 2020 · 36:24


In fall 2019, a new app called ImageNet Roulette was introduced to the world with what seemed like a simple, fun premise: snap a selfie, upload it to a database, and wait a few seconds for machine learning to tell you what type of person you are. Maybe a "teacher," maybe a "pilot," maybe even just a "woman." Or maybe, as the app's creator warned, the labels the system tagged you with would be shockingly racist, misogynistic, or misanthropic. Frequently, the warning turned out to be prescient, and the app immediately went viral thanks to its penchant for slurs and provocative presumptions. Long since decommissioned, ImageNet Roulette was part of a larger initiative undertaken by artist Trevor Paglen and artificial intelligence researcher Kate Crawford to expose the latent biases coded into the massive data sets informing a growing number of A.I. systems. It was only the latest light that Paglen's work had shined onto the dark underbelly of our image-saturated, technology-mediated world. Even beyond his Ph.D. in geography and his MacArthur "Genius" grant, Paglen's resume is unique among his peers on blue-chip gallery rosters. He's photographically infiltrated CIA black sites, scuba-dived through labyrinths of undersea data cables, launched art into space, and collaborated with NSA whistle-blower Edward Snowden, all as a means of making innovative art that brings into focus the all-but-invisible power structures governing contemporary life. On this week's episode of The Art Angle, Paglen joins Andrew Goldstein by phone to discuss his adventurous career. Although the episode was recorded before George Floyd's murder sparked nationwide demonstrations for racial justice, Paglen's work is more timely than ever for its probing of surveillance, authoritarianism, and the ways both are being simultaneously empowered and cloaked by A.I.

Data Science at Home
The dark side of AI: bias in the machine (Ep. 92)

Dec 28, 2019 · 20:26


This is the fourth and last episode of the mini-series "The Dark Side of AI". I am your host Francesco and I'm with Chiara Tonini from London. The title of today's episode is "Bias in the Machine".

C: Francesco, today we are starting with an infuriating discussion. Are you ready to be angry?

F: Yeah, sure. Is this about Brexit?

No, I don't talk about that. In 1986, New York City's Rockefeller University conducted a study on breast and uterine cancers and their link to obesity. Like in all clinical trials up to that point, the subjects of the study were all men. So Francesco, do you see a problem with this approach?

F: No problem at all, as long as those men had a perfectly healthy uterus.

In medicine, up to the end of the 20th century, medical studies and clinical trials were conducted on men, and medicine dosages and therapies were calculated on men (white men). The female body has historically been considered an exception to, or a variation from, the male body.

F: Like Eve coming from Adam's rib. I thought we were past that...

When the female body has been under analysis, the focus was on the difference between it and the male body, the so-called “bikini approach”: the reproductive organs are different, therefore we study those, and those only. For a long time medicine assumed this was the only difference.

Oh good ...

This has led to a hugely harmful fallout across society. Because women had reproductive organs, they should reproduce, and all else about them was deemed uninteresting. Still today, a woman without children is somehow considered to have betrayed her biological destiny. This somehow does not apply to a man without children, who also has reproductive organs.

F: So this is an example of a very specific type of bias in medicine, regarding clinical trials and medical studies, that is not only harmful for the purposes of those studies, but has ripple effects in all of society.

Only in the 2010s did a serious conversation start about the damage caused by not including women in clinical trials. There are many, many examples (which we list in the references for this episode).

Give me one.

Researchers consider cardiovascular disease a male disease - they even call it “the widower”. They conduct studies on male samples. But it turns out, the symptoms of a heart attack, especially the ones leading up to one, are different in women. This led to doctors not recognising, or dismissing, the early symptoms in women.

F: I was reading that women are also subject to chronic pain much more than men: for example migraines, and pain related to endometriosis.

But there is extensive evidence now of doctors dismissing women's pain as either imaginary or “inevitable”, like it is a normal state of being and does not need a cure at all. The failure of the medical community as a whole to recognise this obvious bias up to the 21st century is an example of how insidious the problem of bias is.

There are three fundamental types of bias:

One: Stochastic drift. You train your model on a dataset and validate it on a split of the training set. When you apply your model out in the world, you systematically add bias to the predictions, because the training data was too specific.

Two: The bias in the model, introduced by your choice of the parameters of your model.

Three: The bias in your training sample: people put training samples together, and people have culture, experience, and prejudice. As we will see today, this is the most dangerous and subtle bias. Today we'll talk about this bias.
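To make the first and third kinds of bias concrete, here is a minimal sketch (ours, not the hosts'; the data is synthetic and scikit-learn is assumed to be available) of a model that looks accurate when validated on a split of its own training data, yet degrades on a population the training sample does not represent:

    # Minimal illustration (hypothetical, synthetic data): a classifier validated
    # on a split of its own training data looks fine, but accuracy drops sharply
    # on a shifted population that the training sample under-represents.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    def sample_population(n, boundary):
        # One feature; the true decision boundary sits at `boundary`.
        x = rng.normal(loc=boundary, scale=1.0, size=(n, 1))
        y = (x[:, 0] > boundary).astype(int)
        return x, y

    X_sampled, y_sampled = sample_population(2000, boundary=0.0)  # the data we collected
    X_deploy, y_deploy = sample_population(2000, boundary=1.5)    # the population we deploy on

    X_train, X_val, y_train, y_val = train_test_split(
        X_sampled, y_sampled, test_size=0.25, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)

    print("validation accuracy (same population):   ", accuracy_score(y_val, model.predict(X_val)))
    print("deployment accuracy (shifted population):", accuracy_score(y_deploy, model.predict(X_deploy)))

Both numbers come from the same model; only the population changed, which is exactly the point about training samples made above.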
Bias is a warping of our understanding of reality. We see reality through the lens of our experience and our culture. The origin of bias can date back to traditions going back centuries, and it is so ingrained in our way of thinking that we don't even see it anymore.

F: And let me add, when it comes to machine learning, we see reality through the lens of data. Bias is everywhere, and we could spend hours and hours talking about it. It's complicated.

It's about to become more complicated.

F: Of course, if I know you…

Let's throw artificial intelligence into the mix.

F: You know, there was a happier time when this sentence didn't fill me with a sense of dread...

ImageNet is an online database of over 14 million photos, compiled more than a decade ago at Stanford University. It has been used to train machine learning algorithms for image recognition and computer vision, and it played an important role in the rise of deep learning. We've all played with it, right? The cats and dogs classifier when learning TensorFlow? (I am a dog, by the way.)

F: ImageNet has been a critical asset for computer-vision research. There was an annual international competition to create algorithms that could most accurately label subsets of images. In 2012, a team from the University of Toronto used a convolutional neural network to handily win the top prize. That moment is widely considered a turning point in the development of contemporary AI. The final year of the ImageNet competition was 2017, and accuracy in classifying objects in the limited subset had risen from 71% to 97%. But that subset did not include the “Person” category, where the accuracy was much lower...

ImageNet contained photos of thousands of people, with labels. These included straightforward tags like “teacher,” “dancer” and “plumber”, as well as highly charged labels like “failure, loser” and “slut, slovenly woman, trollop.”

F: Uh oh.

Then “ImageNet Roulette” was created, by an artist called Trevor Paglen and a Microsoft researcher named Kate Crawford. It was a digital art project where you could upload your photo and let the classifier identify you, based on the labels of the database. Imagine how well that went.

F: I bet it didn't work.

Of course it didn't work. Random people were classified as “orphans” or “non-smoker” or “alcoholic”. Somebody with glasses was a “nerd”. Tabong Kima, a 24-year-old African American, was classified as “offender” and “wrongdoer”.

F: And there it is.

Quote from Trevor Paglen: “We want to show how layers of bias and racism and misogyny move from one system to the next. The point is to let people see the work that is being done behind the scenes, to see how we are being processed and categorized all the time.”

F: The ImageNet labels were applied by thousands of unknown people, most likely in the United States, hired by the team from Stanford and working through the crowdsourcing service Amazon Mechanical Turk. They earned pennies for each photo they labeled, churning through hundreds of labels an hour. The labels were not verified in any way: if a labeler thought someone looked “shady”, that label is just a result of their prejudice and has no basis in reality. As they worked, biases were baked into the database. Paglen again: “The way we classify images is a product of our worldview,” he said. “Any kind of classification system is always going to reflect the values of the person doing the classifying.” They defined what a “loser” looked like.
And a “slut.” And a “wrongdoer.”

F: The labels originally came from another sprawling collection of data called WordNet, a kind of conceptual dictionary for machines built by researchers at Princeton University in the 1980s. But with these inflammatory labels included, the Stanford researchers may not have realized what they were doing. What is happening here is the transfer of bias from one system to the next.

Tech jobs, in past decades and still today, predominantly go to white males from a narrow social class. Inevitably, they imprint the technology with their worldview. So their algorithms learn that a person of color is a criminal, and a woman with a certain look is a slut. I'm not saying they do it on purpose, but the lack of diversity in the tech industry translates into a narrower worldview, which has real consequences for the quality of AI systems.

F: Diversity in tech teams is often framed as an equality issue (which of course it is), but there are enormous advantages in it: it creates the kind of cognitive diversity that is reflected in superior products or services. I believe this is an ongoing problem. In recent months, researchers have shown that face-recognition services from companies like Amazon, Microsoft and IBM can be biased against women and people of color.

Crawford and Paglen argue this: “In many narratives around AI it is assumed that ongoing technical improvements will resolve all problems and limitations. But what if the opposite is true? What if the challenge of getting computers to “describe what they see” will always be a problem? The automated interpretation of images is an inherently social and political project, rather than a purely technical one. Understanding the politics within AI systems matters more than ever, as they are quickly moving into the architecture of social institutions: deciding whom to interview for a job, which students are paying attention in class, which suspects to arrest, and much else.”

F: You are using the words “interpretation of images” here, as opposed to “description” or “classification”. Certain images depict something concrete, with an objective reality. Like an apple. But other images… not so much?

ImageNet contains only images corresponding to nouns (not verbs, for example). Noun categories such as “apple” are well defined. But not all nouns are created equal. The linguist George Lakoff points out that the concept of an “apple” is more nouny than the concept of “light”, which in turn is more nouny than a concept such as “health.” Nouns occupy various places on an axis from concrete to abstract, and from descriptive to judgmental, and the images corresponding to these nouns become more and more ambiguous. These gradients have been erased in the logic of ImageNet. Everything is flattened out and pinned to a label. The results can be problematic, illogical, and cruel, especially when it comes to labels applied to people.

F: So when an image is interpreted as Drug Addict, Crazy, Hypocrite, Spinster, Schizophrenic, Mulatto, Red Neck… this is not an objective description of reality, it's somebody's worldview coming to the surface. The selection of images for these categories skews the meaning in ways that are gendered, racialized, ableist, and ageist. ImageNet is an object lesson in what happens when people are categorized like objects.
And this practice has only become more common in recent years, often inside the big AI companies, where there is no way for outsiders to see how images are being ordered and classified.

The bizarre thing about these systems is that they are reminiscent of early 20th-century criminologists like Lombroso, of phrenologists (including Nazi scientists), and of physiognomy in general. This was a discipline founded on the assumption that there is a relationship between an image of a person and the character of that person. If you are a murderer, or a Jew, the shape of your head, for instance, will tell.

F: In reaction to these ideas, René Magritte produced that famous painting of a pipe with the tag “This is not a pipe”.

You know that famous photograph of the soldier kissing the nurse at the end of the Second World War? The nurse went public about it when she was around 90 years old, and told how this total stranger in the street had grabbed her and kissed her. This is a picture of sexual harassment. And knowing that, it does not seem romantic anymore.

F: Not romantic at all, indeed.

Images do not describe themselves. This is a feature that artists have explored for centuries. We see those images differently when we see how they're labeled. The correspondence between image, label, and referent is fluid. What's more, those relations can change over time as the cultural context of an image shifts, and can mean different things depending on who looks, and where they are located. Images are open to interpretation and reinterpretation. Entire subfields of philosophy, art history, and media theory are dedicated to teasing out all the nuances of the unstable relationship between images and meanings. The common mythos of AI, and of the data it draws on, is that they are objectively and scientifically classifying the world. But it's not true: everywhere there is politics, ideology, prejudice, and all the subjective stuff of history.

F: When we survey the most widely used training sets, we find that this is the rule rather than the exception. Training sets are the foundation on which contemporary machine-learning systems are built. They are central to how AI systems recognize and interpret the world. By looking at the construction of these training sets and their underlying structures, we discover many unquestioned assumptions that are shaky and skewed. These assumptions inform the way AI systems work—and fail—to this day. And the impenetrability of the algorithms, the impossibility of reconstructing the decision-making of a neural network, hides the bias further from scrutiny. When an algorithm is a black box and you can't look inside, you have no way of analysing its bias.

And the skewness and bias of these algorithms have real effects in society: the more you use AI in the judicial system, in medicine, in the job market, in security systems based on facial recognition, the list goes on and on.

Last year Google unveiled BERT (Bidirectional Encoder Representations from Transformers). It's an AI system that learns to talk: a natural language processing engine that generates written (or spoken) language.

F: We have an episode in which we explain all that.

They trained it on lots and lots of digitized information, as varied as old books, Wikipedia entries and news articles, and they baked decades and even centuries of biases — along with a few new ones — into all that material. So, for instance, BERT is extremely sexist: it associates almost all professions and positive attributes with men (except for “mom”).
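The gendered associations described here can be probed directly. A minimal sketch, assuming the Hugging Face transformers library (with a PyTorch backend) is installed and the public bert-base-uncased checkpoint is used; the probe sentences are our own illustration, not from the episode:

    # Hypothetical probe of a pretrained BERT model's gendered associations.
    # Assumes: pip install transformers torch
    from transformers import pipeline

    unmasker = pipeline("fill-mask", model="bert-base-uncased")

    for sentence in [
        "The doctor said that [MASK] would be back soon.",
        "The nurse said that [MASK] would be back soon.",
    ]:
        # Each prediction dict contains the proposed token and its probability.
        print(sentence)
        for p in unmasker(sentence, top_k=5):
            print(f"  {p['token_str']}: {p['score']:.3f}")

Comparing the scores assigned to "he" and "she" across the two sentences is a crude but direct way to see the kind of skew described above.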
BERT is widely used in industry and academia. For example, it can interpret news headlines automatically. Even Google's search engine uses it.

Try googling “CEO”, and you get a gallery of images of old white men.

F: Such a pervasive and flawed AI system can propagate inequality at scale. And it's super dangerous because it's subtle. Especially in industry, query results will not be tested and examined for bias. AI is a black box and researchers take results at face value.

There are many cases of algorithm-based discrimination in the job market. Targeting candidates for tech jobs, for instance, may be done by algorithms that will not recognise women as potential candidates. Therefore, women will not be exposed to as many job ads as men. Or automated HR systems will rank them lower (for the same CV) and screen them out.

In the US, algorithms are used to calculate bail. The majority of the prison population in the US is composed of people of colour, as a result of a systemic bias that goes back centuries. An algorithm learns that a person of colour is more likely to commit a crime, more likely to be unable to afford bail, and more likely to violate parole. Therefore, people of colour will receive harsher punishments for the same crime, and the algorithm amplifies this inequality at scale.

Conclusion: Question everything, and never take the predictions of your models at face value. Always question how your training samples have been put together, who put them together, when, and in what context. Always remember that your model produces an interpretation of reality, not a faithful depiction. Treat reality responsibly.
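One simple way to quantify the kind of disparity raised in the job-market and bail examples above is to compare an automated decision's favourable-outcome rate across groups. A minimal sketch with made-up numbers (the 0.8 threshold is the common "four-fifths rule" heuristic, not something from the episode):

    # Hypothetical disparate-impact check: compare an automated decision's
    # favourable-outcome rate across two groups. All numbers are illustrative.
    from collections import defaultdict

    decisions = [  # (group, decision) where 1 = favourable outcome
        ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
        ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
    ]

    totals = defaultdict(lambda: [0, 0])  # group -> [favourable, total]
    for group, outcome in decisions:
        totals[group][0] += outcome
        totals[group][1] += 1

    rates = {group: fav / total for group, (fav, total) in totals.items()}
    ratio = min(rates.values()) / max(rates.values())

    print("favourable-outcome rates:", rates)
    print("disparate impact ratio:", round(ratio, 2), "(below 0.8 is a common red flag)")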

Explain Me
Making Monstrosity Visible in Three Parts: Paglen, Ga, and Fast

Oct 18, 2017 · 58:23


Three shows. Three reviews. The Trevor Paglen exhibition at Metro Pictures is creepy as fuck. We take issue with New York Magazine critic Jerry Saltz's review on the subject. Ellie Ga at Bureau sensitively touches on the horror of the Syrian refugee crisis. Omer Fast at James Cohan produces some powerful videos about the role of the artist in times of crisis, but they are overshadowed by protestors, who believe his decision to transform the front of the gallery into a waiting room in a Chinese bus station amounts to yellowface. Our thoughts on this and just about everything else.

National Gallery of Art | Audio
Visibility Machines: A Conversation with Harun Farocki and Trevor Paglen

Nov 3, 2014 · 57:47


Media Roots Radio
Media Roots Radio - Interview with Trevor Paglen, Experimental Geographer, Artist & Author

Sep 6, 2011 · 66:34


MEDIA ROOTS - Trevor Paglen's work deliberately blurs the lines between science, contemporary art, journalism, and other disciplines to construct unfamiliar, yet meticulously researched ways to see and interpret the world around us. Paglen's visual work has been exhibited at several art museums worldwide, and his writing and art have been published in major publications including The New York Times, Wired, Vanity Fair and Newsweek. Paglen holds a B.A. from UC Berkeley, an M.F.A. from the School of the Art Institute of Chicago, and a Ph.D. in Geography from UC Berkeley, where he remains an affiliated researcher. In the 90s, Paglen was a member of a locally revered Bay Area noise/experimental project called Noise Gate, and co-ran an infamous underground venue in downtown Oakland called the Sandcrawler. His electronic music background hugely inspired co-host Robbie Martin in his own musical evolution. Paglen is also the author of several books: Torture Taxi, the first book to comprehensively cover the CIA's extraordinary rendition program; I Could Tell You But Then You Would Have to be Destroyed by Me, a book looking at the world of black projects through unit patches and memorabilia created for top-secret programs; and Blank Spots on the Map: The Dark Geography of the Pentagon's Secret World, a book that gives a broader look at secrecy in the United States. http://www.mediaroots.org/interview-with-experimental-geographer-artist-trevor-paglen.php http://www.MediaRoots.org

Bad at Sports
Bad at Sports Episode 112: Trevor Paglen/ Pate Conaway

Oct 20, 2007 · 87:38


This week: Marc and Brian talk to Trevor Paglen. "Trevor Paglen is an artist, writer, and experimental geographer working out of the Department of Geography at the University of California, Berkeley. His work involves deliberately blurring the lines between social science, contemporary art, and a host of even more obscure disciplines in order to construct unfamiliar, yet meticulously researched ways to interpret the world around us. His most recent projects involve close examinations of state secrecy, the California prison system, and the CIA's practice of "extraordinary rendition." Paglen's visual work has been shown in galleries and museums including MASSMOCA (2006), the Warhol Museum (2007), Diverse Works (2005), in journals and magazines from Wired to The New York Review of Books, and at numerous other arts venues, universities, conferences, and public spaces. He has had one-person shows at Deadtech (2001), the LAB (2005), and Bellwether Gallery (2006). Paglen's first book, Torture Taxi: On the Trail of the CIA's Rendition Flights (co-authored with AC Thompson; Melville House, 2006), was the first book to systematically describe the CIA's "extraordinary rendition" program. His second book, I Could Tell You But Then You Would Have to be Destroyed by Me (Melville House, 2007), an examination of the visual culture of "black" military programs, will be published in November 2007. He is currently completing his third book, entitled Blank Spots on a Map, which will be published by Dutton/NAL/Penguin in late 2008/early 2009. Paglen has received grants and commissions from Rhizome.org, the LEF Foundation, and the Eyebeam Center for Art and Technology. In 2005, he was a Vectors Journal Fellow at the University of Southern California. Paglen holds a BA from UC Berkeley, an MFA from the School of the Art Institute of Chicago, and is currently completing a PhD in the Department of Geography at the University of California at Berkeley." NEXT: Terri and Serena talk to Pate Conaway. "Pate Conaway is an interdisciplinary artist from Chicago, Illinois. Conaway sees the act of art-making as a performance in itself. Conaway has produced art in gallery situations, including during a five-week stint at the Museum of Contemporary Art in Chicago where he knitted a pair of nine-foot-long mittens. The artist, whose background is in performance and paper arts, continues to work in sculpture, installation, and interactive performance. Now learning to sew, Conaway is fascinated by the idea of applying garment construction techniques to bookbinding. Pate Conaway is a graduate of Chicago's Second City Training Center and received his MFA from Columbia College, Chicago. He has exhibited extensively in the Midwest and his work can be found in the Artist Book Collection at the Museum of Contemporary Art, Chicago." AND Mike B. has a rant to offer.