Podcast appearances and mentions of Evan Selinger

  • Podcasts: 23
  • Episodes: 27
  • Average episode duration: 1h 2m
  • Episode frequency: infrequent (?)
  • Latest episode: Dec 19, 2023

POPULARITY

[Popularity chart: 2017–2024]


Best podcasts about Evan Selinger

Latest podcast episodes about Evan Selinger

Philosophical Disquisitions
TITE 4 - Behaviour Change and Control

Philosophical Disquisitions

Dec 19, 2023


In this episode, John and Sven talk about the role that technology can play in changing our behaviour. In doing so, they note the long and troubled history of philosophy and self-help. They also ponder whether we can use technology to control our lives or whether technology controls us. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services. Recommendations: Brett Frischmann and Evan Selinger, Reengineering Humanity; Carissa Véliz, Privacy Is Power.

Hardly Working with Brent Orrell
Evan Selinger on Tech, Surveillance, and Obscurity in Work and Society

Hardly Working with Brent Orrell

May 11, 2023 · 45:17


Responses to the sudden emergence of widely available artificial intelligence tend to swing between those who believe these technologies will deliver a utopia of unlimited growth and opportunity and those who fear they will inflict a robot-dominated dystopia of human obsolescence. In the space between those two poles, some are engaged in serious ethical reflection that attempts to weigh the possible impacts of AI in light of preexisting social trends. One of the more thoughtful and fair-minded critics of emerging technologies is Evan Selinger, a professor of philosophy at Rochester Institute of Technology. In his research, Dr. Selinger asks how technology has affected our personal obscurity in society (the right not to be known), and how mass surveillance and optimization affect human work. Mentioned in the episode: Don Ihde; Patrick Grim; Woodrow Hartzog; the social obscurity paper; digital doubles in production and manufacturing; Reengineering Humanity; Taylorism; Frederick Winslow Taylor; Brett Frischmann.

The Convivial Society
"The Face Stares Back" Audio + Links and Resources

The Convivial Society

Apr 8, 2022 · 13:38


Welcome to the Convivial Society, a newsletter about technology, culture, and the moral life. In this installment you’ll find the audio version of the previous essay, “The Face Stares Back.” Along with the audio version you’ll also find an assortment of links and resources. Some of you will remember that such links used to be a regular feature of the newsletter. I’ve prioritized the essays, in part because of the information I have on click rates, but I know the links and resources are useful to more than a few of you. Moving forward, I think it makes sense to put out an occasional installment that contains just links and resources (with varying amounts of commentary from me). As always, thanks for reading and/or listening.

Links and Resources

  • Let’s start with a classic paper from 1965 by philosopher Hubert Dreyfus, “Alchemy and Artificial Intelligence.” The paper, prepared for the RAND Corporation, opens with a long epigraph from the 17th-century polymath Blaise Pascal on the difference between the mathematical mind and the perceptive mind.
  • On “The Tyranny of Time”: “The more we synchronize ourselves with the time in clocks, the more we fall out of sync with our own bodies and the world around us.” More: “The Western separation of clock time from the rhythms of nature helped imperialists establish superiority over other cultures.”
  • Relatedly, a well-documented case against Daylight Saving Time, “Farmers, Physiologists, and Daylight Saving Time”: “Fundamentally, their perspective is that we tend to do well when our body clock and social clock—the official time in our time zone—are in synch. That is, when noon on the social clock coincides with solar noon, the moment when the Sun reaches its highest point in the sky where we are. If the two clocks diverge, trouble ensues. Startling evidence for this has come from recent findings in geographical epidemiology—specifically, from mapping health outcomes within time zones.”
  • Jasmine McNealy on “Framing and Language of Ethics: Technology, Persuasion, and Cultural Context.”
  • Interesting forthcoming book by Kevin Driscoll: The Modem World: A Prehistory of Social Media.
  • Great piece on Jacques Ellul by Samuel Matlack at The New Atlantis, “How Tech Despair Can Set You Free”: “But Ellul rejects it. He refuses to offer a prescription for social reform. He meticulously and often tediously presents a problem — but not a solution of the kind we expect. This is because he believed that the usual approach offers a false picture of human agency. It exaggerates our ability to plan and execute change to our fundamental social structures. It is utopian. To arrive at an honest view of human freedom, responsibility, and action, he believed, we must confront the fact that we are constrained in more ways than we like to think. Technique, says Ellul, is society’s tightest constraint on us, and we must feel the totality of its grip in order to find the freedom to act.”
  • Evan Selinger on “The Gospel of the Metaverse.”
  • Ryan Calo on “Modeling Through”: “The prospect that economic, physical, and even social forces could be modeled by machines confronts policymakers with a paradox. Society may expect policymakers to avail themselves of techniques already usefully deployed in other sectors, especially where statutes or executive orders require the agency to anticipate the impact of new rules on particular values. At the same time, ‘modeling through’ holds novel perils that policymakers may be ill equipped to address. Concerns include privacy, brittleness, and automation bias, all of which law and technology scholars are keenly aware. They also include the extension and deepening of the quantifying turn in governance, a process that obscures normative judgments and recognizes only that which the machines can see. The water may be warm, but there are sharks in it.”
  • “Why Christopher Alexander Still Matters”: “The places we love, the places that are most successful and most alive, have a wholeness about them that is lacking in too many contemporary environments, Alexander observed. This problem stems, he thought, from a deep misconception of what design really is, and what planning is. It is not ‘creating from nothing’—or from our own mental abstractions—but rather, transforming existing wholes into new ones, and using our mental processes and our abstractions to guide this natural life-supporting process.”
  • An interview with philosopher Shannon Vallor, “Re-envisioning Ethics in the Information Age”: “Instead of using the machines to liberate and enlarge our own lives, we are increasingly being asked to twist, to transform, and to constrain ourselves in order to strengthen the reach and power of the machines that we increasingly use to deliver our public services, to make the large-scale decisions that are needed in the financial realm, in health care, or in transportation. We are building a society where the control surfaces are increasingly automated systems and then we are asking humans to restrict their thinking patterns and to reshape their thinking patterns in ways that are amenable to this system. So what I wanted to do was to really reclaim some of the literature that described that process in the 20th century—from folks like Jacques Ellul, for example, or Herbert Marcuse—and then really talk about how this is happening to us today in the era of artificial intelligence and what we can do about it.”
  • From Lance Strate in 2008: “Studying Media AS Media: McLuhan and the Media Ecology Approach.”
  • Japan’s museum of rocks that look like faces.
  • I recently had the pleasure of speaking with Katherine Dee for her podcast, which you can listen to here.
  • I’ll leave you with an arresting line from Simone Weil’s notebooks: “You could not have wished to be born at a better time than this, when everything is lost.”

The Sunday Show
The Perils of Amazon Ring

The Sunday Show

Oct 24, 2021 · 43:46


Earlier this month, Evan Selinger (https://www.rit.edu/directory/emsgsh-evan-selinger), a professor of philosophy at the Rochester Institute of Technology, published a paper (https://www.tandfonline.com/doi/abs/10.1080/09505431.2021.1983797) with co-author Darrin Durant in the journal Science as Culture titled "Amazon's Ring: Surveillance as a Slippery Slope." Last month, a must-read profile (https://www.washingtonpost.com/technology/2021/09/16/chris-gilliard-sees-digital-redlining-in-surveillance-tech/) of Chris Gilliard (https://twitter.com/hypervisible) by Will Oremus for The Washington Post also started out with concerns about Ring, before detailing Gilliard's perspective and background. I invited Evan and Chris to join me to discuss their writings on Ring, and how they fit into their broader views on tech and society.

Philosophical Disquisitions
82 - What should we do about facial recognition technology?

Philosophical Disquisitions

Sep 23, 2020


Facial recognition technology has seen its fair share of both media and popular attention in the past 12 months. The attention runs the gamut from controversial uses by governments and police forces to coordinated campaigns to ban or limit its use. What should we do about it? In this episode, I talk to Brenda Leong about this issue. Brenda is Senior Counsel and Director of Artificial Intelligence and Ethics at Future of Privacy Forum. She manages the FPF portfolio on biometrics, particularly facial recognition. She authored the FPF Privacy Expert’s Guide to AI, and co-authored the paper “Beyond Explainability: A Practical Guide to Managing Risk in Machine Learning Models.” Prior to working at FPF, Brenda served in the U.S. Air Force. You can listen to the episode below or download it here. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show notes. Topics discussed include: What is facial recognition anyway? Are there multiple forms that are confused and conflated? What's the history of facial recognition? What has changed recently? How is the technology used? What are the benefits of facial recognition? What's bad about it? What are the privacy and other risks? Is there something unique about the face that should make us more worried about facial biometrics when compared to other forms? What can we do to address the risks? Should we regulate or ban?

Relevant links: Brenda's homepage; Brenda on Twitter; 'The Privacy Expert's Guide to AI and Machine Learning' by Brenda (at FPF); Brenda's US Congress testimony on facial recognition; 'Facial recognition and the future of privacy: I always feel like … somebody’s watching me' by Brenda; 'The Case for Banning Law Enforcement From Using Facial Recognition Technology' by Evan Selinger and Woodrow Hartzog.

A Newsletter of the Christian Study Center of Gainesville

Please note that this week we are launching a new feature for the newsletter. You may now choose to listen to the opening essay by clicking play above. Next week it will also be possible to find the audio in your favorite podcast apps.

In 1934, T. S. Eliot published Choruses from “The Rock,” a collection of choruses Eliot composed for a play he wrote called “The Rock,” which explored the history of the church and its plight in the modern world. Although the work is relatively obscure compared to many of Eliot's better known works, it yielded some rather well known lines. It is in the first chorus, for example, that we read,

The endless cycle of idea and action,
Endless invention, endless experiment,
Brings knowledge of motion, but not of stillness;
Knowledge of speech, but not of silence;
Knowledge of words, and ignorance of the Word.
All our knowledge brings us nearer to our ignorance,
All our ignorance brings us nearer to death,
But nearness to death no nearer to God.
Where is the Life we have lost in living?
Where is the wisdom we have lost in knowledge?
Where is the knowledge we have lost in information?
The cycles of Heaven in twenty centuries
Bring us farther from God and nearer to the Dust.

These lines, best remembered for the distinctions they make among information, knowledge, and wisdom, would repay our careful attention. But it is to another set of lines that we will turn. In the sixth chorus, Eliot wrote,

Why should men love the Church? Why should they love her laws?
She tells them of Life and Death, and of all that they would forget.
She is tender where they would be hard, and hard where they like to be soft.
She tells them of Evil and Sin, and other unpleasant facts.
They constantly try to escape
From the darkness outside and within
By dreaming of systems so perfect that no one will need to be good.
But the man that is will shadow
The man that pretends to be.

Once again, Eliot gives us much we could reflect upon in these few lines, but let us focus on his claim that, in the modern world, human beings “constantly try to escape / From the darkness outside and within / By dreaming of systems so perfect that no one will need to be good.” These lines aptly capture what we might think of as the technocratic impulse in western society, the idea that it is possible to engineer an ideal society independently of how human beings act. Or, worse yet, that human action itself can and ought to be engineered by the application of social techniques. Such an impulse can take on an obviously totalitarian quality, but it is present in subtler forms as well. Most notably, it is evident in mid-twentieth century theories of behaviorism and in the more recent nudging approach to design and policy popularized by Cass Sunstein and Richard Thaler and championed by many in the tech industry. In this approach, small and often subtle interventions in the form of automated positive reinforcements or periodic reminders are seen as the path toward managing and shaping human behavior. Similarly, in their 2018 book, Reengineering Humanity, philosopher Evan Selinger and legal scholar Brett Frischmann documented the countless ways in which modern digital technology aims at what they called “engineered determinism.”

Historically, the technocratic impulse is evident in the evolution of the rhetoric of progress throughout the course of the 18th and 19th centuries.
The earlier Enlightenment notion of progress viewed technology as a necessary, but not sufficient, cause of progress, which was understood as a movement toward a more just, democratic society. This political vision was gradually replaced by a technocratic notion which measured progress by just one metric: technological innovation. The cultural historian Leo Marx put it this way: “the simple [small-r] republican formula for generating progress by directing improved technical means to societal ends was imperceptibly transformed into a quite different technocratic commitment to improving ‘technology' as the basis and the measure of — as all but constituting — the progress of society.” Accordingly, technological innovation becomes a substitute for genuine political, economic, and social progress. Underlying this view is the accompanying desire for freedom without responsibility, or what, riffing on philosopher Albert Borgmann, we have called regardless freedom. To dream of systems so perfect no one will need to be good, as Eliot put it, is to dream of systems that underwrite irresponsibility. Such systems would function whether or not human beings act virtuously and responsibly, but such systems do not exist. They remain a dream, or, better, a nightmare. Virtue, as we will always re-discover, is an irreducible component of any rightly ordered society. If we are indeed in a moment that affords the possibility of reimagining and reforming our social structures, then we must resist the temptation to offload the necessary intellectual and moral labor to technical systems and solutions.

To be clear, personal virtue is a necessary rather than sufficient cause of a just society. Modern societies do, in fact, require systems, institutions, and bureaucracies of varying scale and power. And it is possible that such systems not only fail due to a lack of virtue, but that they actively sustain and encourage vice and injustice. The well ordered society requires both virtue and a just social infrastructure. The classical or cardinal virtues of temperance, prudence, fortitude, and justice have long offered a foundation for civic order. These virtues encourage restraint, sound judgment, moral courage, and the desire for an equitable social order. To cultivate such virtues is to assume personal responsibility for the functioning of society. Beyond these cardinal virtues, the Church has always recognized the theological virtues: faith, hope, and love. These remain indispensable for the church, and, while they cannot, in their explicitly theological character, be expected or demanded of the wider public, Christians can, by their participation, leaven the civic order with these virtues. But the church can do so only to the degree that she cultivates these virtues in her people.

Study Center Resources

Pascal's is open for both online ordering and dine-in service. Please do feel free to spread the word that we are open and ready to serve.

In this week's Dante reading group, we will be covering cantos 17-19 of the Inferno. If you'd like to connect with the group, please email Mike Sacasas at mike@christianstudycenter.org.

Be sure to check out the archive of resources available online from the study center. Classes and lectures are available at our audio archive. You can also peruse back issues of Reconsiderations here.

Recommended Reading

— Adam Elkus on the emergence of the “omni-crisis”:

When social constraints are weakened, the aggregate predictability of human behavior diminishes. Why? The weakening of constraints generates confusion.
Things have always worked until they suddenly break. Things have always been decided for you until you have to suddenly decide on your own. Another way of thinking about social constraints – with a very long history in social science – posits them as involuntarily assigned expectations about the future. Prolonged and severe disruption of expectations without immediate prospect of relief accordingly should create greater variance in potential outcomes. The simplest way to understand the omni-crisis is as the sustained breaking of expectations and disruption of the ability to simulate the future forward using assumed constraints.

— Taylor Dotson on “Radiation Politics in a Pandemic”:

The inherent uncertainties in the science of impending dangers complicates government officials' ability to achieve public buy-in. Because empirical evidence is almost always incomplete or not totally convincing, officials must rely on trust, on their own legitimacy. The trouble […] is that trust is gained in drops but lost in buckets. Storming in to save the day with science is great — until some of the facts turn out wrong. British radiation scientists could have instead worked alongside sheep farmers in finding the pertinent scientific facts, recognizing that the farmers had something to contribute. Instead of expecting the farmers' deference, this approach would have gone a long way toward earning their trust in the scientists' own areas of expertise.

— Venkatesh Rao on “Pandemic Time: A Distributed Doomsday Clock”:

Whether or not the stars foretold our present condition, we will be living for the foreseeable future in a distorted temporality shaped by the progress of COVID-19 across the globe. Like the distorted time around a supergiant star going supernova and collapsing into a black hole, “pandemic time” is anything but normal.

Exchanges: A Cambridge UP Podcast
Brett Frischmann and Evan Selinger, "Re-Engineering Humanity" (Cambridge UP, 2018)

Exchanges: A Cambridge UP Podcast

Jan 23, 2020 · 89:04


Every day, new warnings emerge about artificial intelligence rebelling against us. All the while, a more immediate dilemma flies under the radar. Have forces been unleashed that are thrusting humanity down an ill-advised path, one that's increasingly making us behave like simple machines? In Re-Engineering Humanity (Cambridge University Press, 2018), Brett Frischmann and Evan Selinger examine what's happening to our lives as society embraces big data, predictive analytics, and smart environments. They explain how the goal of designing programmable worlds goes hand in hand with engineering predictable and programmable people. Detailing new frameworks, provocative case studies, and mind-blowing thought experiments, Frischmann and Selinger reveal hidden connections between fitness trackers, electronic contracts, social media platforms, robotic companions, fake news, autonomous cars, and more. This powerful analysis should be read by anyone interested in understanding exactly how technology threatens the future of our society, and what we can do now to build something better. John Danaher is a lecturer at the National University of Ireland, Galway. He is also the host of the wonderful podcast Philosophical Disquisitions. You can find it here on Apple Podcasts.

New Books in Science, Technology, and Society
Brett Frischmann and Evan Selinger, "Re-Engineering Humanity" (Cambridge UP, 2018)

New Books in Science, Technology, and Society

Jan 23, 2020 · 89:04



New Books Network
Brett Frischmann and Evan Selinger, "Re-Engineering Humanity" (Cambridge UP, 2018)

New Books Network

Jan 23, 2020 · 89:04



New Books in Sociology
Brett Frischmann and Evan Selinger, "Re-Engineering Humanity" (Cambridge UP, 2018)

New Books in Sociology

Jan 23, 2020 · 89:04



New Books in Anthropology
Brett Frischmann and Evan Selinger, "Re-Engineering Humanity" (Cambridge UP, 2018)

New Books in Anthropology

Jan 23, 2020 · 89:04



New Books in Technology
Brett Frischmann and Evan Selinger, "Re-Engineering Humanity" (Cambridge UP, 2018)

New Books in Technology

Jan 23, 2020 · 89:04



Philosophical Disquisitions
Mass Surveillance, Artificial Intelligence and New Legal Challenges

Philosophical Disquisitions

Dec 27, 2019


[This is the text of a talk I gave to the Irish Law Reform Commission Annual Conference in Dublin on the 13th of November 2018. You can listen to an audio version of this lecture here or using the embedded player above.]

In the mid-19th century, a set of laws was created to address the menace that newly-invented automobiles and locomotives posed to other road users. One of the first such laws was the English Locomotive Act 1865, which subsequently became known as the ‘Red Flag Act’. Under this act, any user of a self-propelled vehicle had to ensure that at least two people were employed to manage the vehicle and that one of these persons:

“while any locomotive is in motion, shall precede such locomotive on foot by not less than sixty yards, and shall carry a red flag constantly displayed, and shall warn the riders and drivers of horses of the approach of such locomotives…”

The motive behind this law was commendable. Automobiles did pose a new threat to other, more vulnerable, road users. But to modern eyes the law was also, clearly, ridiculous. To suggest that every car should be preceded by a pedestrian waving a red flag would seem to defeat the point of having a car: the whole idea is that it is faster and more efficient than walking. The ridiculous nature of the law eventually became apparent to its creators and all such laws were repealed in the 1890s, approximately 30 years after their introduction.[1]

The story of the Red Flag laws shows that legal systems often get new and emerging technologies badly wrong. By focusing on the obvious or immediate risks, the law can neglect the long-term benefits and costs.

I mention all this by way of warning. As I understand it, it has been over 20 years since the Law Reform Commission considered the legal challenges around privacy and surveillance. A lot has happened in the intervening decades. My goal in this talk is to give some sense of where we are now and what issues may need to be addressed over the coming years. In doing this, I hope not to forget the lesson of the Red Flag laws.

1. What’s changed?

Let me start with the obvious question. What has changed, technologically speaking, since the LRC last considered issues around privacy and surveillance? Two things stand out.

First, we have entered an era of mass surveillance. The proliferation of digital devices — laptops, computers, tablets, smart phones, smart watches, smart cars, smart fridges, smart thermostats and so forth — combined with increased internet connectivity has resulted in a world in which we are all now monitored and recorded every minute of every day of our lives. The cheapness and ubiquity of data collecting devices means that it is now, in principle, possible to imbue every object, animal and person with some data-monitoring technology. The result is what some scholars refer to as the ‘internet of everything’ and with it the possibility of a perfect ‘digital panopticon’. This era of mass surveillance puts increased pressure on privacy and, at least within the EU, has prompted significant legislative intervention in the form of the GDPR.

Second, we have created technologies that can take advantage of all the data that is being collected. To state the obvious: data alone is not enough. As all lawyers know, it is easy to befuddle the opposition in a complex law suit by ‘dumping’ a lot of data on them during discovery. They drown in the resultant sea of information. It is what we do with the data that really matters.
In this respect, it is the marriage of mass surveillance with new kinds of artificial intelligence that creates the new legal challenges that we must now tackle with some urgency.

Artificial intelligence allows us to do three important things with the vast quantities of data that are now being collected:

(i) It enables new kinds of pattern matching - what I mean here is that AI systems can spot patterns in data that were historically difficult for computer systems to spot (e.g. image or voice recognition), and that may also be difficult, if not impossible, for humans to spot due to their complexity. To put it another way, AI allows us to understand data in new ways.

(ii) It enables the creation of new kinds of informational product - what I mean here is that the AI systems don’t simply rebroadcast dispassionate and objective forms of the data we collect. They actively construct and reshape the data into artifacts that can be more or less useful to humans.

(iii) It enables new kinds of action and behaviour - what I mean here is that the informational products created by these AI systems are not simply inert artifacts that we observe with bemused detachment. They are prompts to change and alter human behaviour and decision-making.

On top of all this, these AI systems do these things with increasing autonomy (or, less controversially, automation). Although humans do assist the AI systems in understanding, constructing and acting on foot of the data being collected, advances in AI and robotics make it increasingly possible for machines to do things without direct human assistance or intervention.

It is these ways of using data, coupled with increasing automation, that I believe give rise to the new legal challenges. It is impossible for me to cover all of these challenges in this talk. So what I will do instead is to discuss three case studies that I think are indicative of the kinds of challenges that need to be addressed, and that correspond to the three things we can now do with the data that we are collecting.

2. Case Study: Facial Recognition Technology

The first case study has to do with facial recognition technology. This is an excellent example of how AI can understand data in new ways. Facial recognition technology is essentially like fingerprinting for the face. From a selection of images, an algorithm can construct a unique mathematical model of your facial features, which can then be used to track and trace your identity across numerous locations.

The potential conveniences of this technology are considerable: faster security clearance at airports; an easy way to record and confirm attendance in schools; an end to complex passwords when accessing and using your digital services; a way for security services to track and identify criminals; a tool for locating missing persons and finding old friends. Little surprise then that many of us have already welcomed the technology into our lives. It is now the default security setting on the current generation of smartphones. It is also being trialled at airports (including Dublin Airport),[2] train stations and public squares around the world. It is cheap and easily plugged into existing CCTV surveillance systems. It can also take advantage of the vast databases of facial images collected by governments and social media engines.

Despite its advantages, facial recognition technology also poses a significant number of risks. It enables and normalises blanket surveillance of individuals across numerous environments.
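To make concrete how a single stored ‘faceprint’ enables this kind of tracking across many camera feeds, here is a minimal sketch of the enrol-then-match loop that systems of this general kind perform. It assumes the open-source Python face_recognition library is installed, and the image filenames are hypothetical placeholders; it illustrates the general technique, not any particular vendor’s system.

```python
# Minimal sketch of face "fingerprinting" and matching.
# Assumes the open-source `face_recognition` package (dlib-based) is installed;
# the image filenames below are hypothetical placeholders.
import face_recognition

# Enrolment: build a numerical "faceprint" (a 128-dimensional encoding)
# from a known reference photo.
known_image = face_recognition.load_image_file("reference_photo.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Identification: encode every face found in a new image (e.g. a CCTV frame)
# and compare each one against the enrolled faceprint.
unknown_image = face_recognition.load_image_file("cctv_frame.jpg")
for candidate in face_recognition.face_encodings(unknown_image):
    distance = face_recognition.face_distance([known_encoding], candidate)[0]
    match = face_recognition.compare_faces([known_encoding], candidate)[0]
    print(f"distance={distance:.3f}, match={match}")
```

Once a faceprint has been enrolled, the same comparison can be run against any number of feeds and databases, which is what turns a convenience feature into an infrastructure for blanket surveillance.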
Such blanket surveillance makes it the perfect tool for oppressive governments and manipulative corporations. Our faces are one of our most unique and important features, central to our sense of who we are and how we relate to each other — think of the Beatles’ immortal line ‘Eleanor Rigby puts on the face that she keeps in the jar by the door’ — facial recognition technology captures this unique feature and turns it into a digital product that can be copied and traded, and used for marketing, intimidation and harassment.

Consider, for example, the unintended consequences of the FindFace app that was released in Russia in 2016. Intended by its creators to be a way of making new friends, the FindFace app matched images on your phone with images in social media databases, thus allowing you to identify people you may have met but whose names you cannot remember. Suppose you met someone at a party, took a picture together with them, but then didn’t get their name. FindFace allows you to use the photo to trace their real identity.[3] What a wonderful idea, right? Now you need never miss out on an opportunity for friendship because of oversight or poor memory. Well, as you might imagine, the app also has a dark side. It turns out to be the perfect technology for stalkers, harassers and doxxers (the internet slang for those who want to out people’s real world identities). Anyone who is trying to hide or obscure their identity can now be traced and tracked by anyone who happens to take a photograph of them.

What’s more, facial recognition technology is not perfect. It has been shown to be less reliable when dealing with non-white faces, and there are several documented cases in which it matches the wrong faces, thus wrongly assuming someone is a criminal when they are not. For example, many US drivers have had their licences cancelled because an algorithm has found two faces on a licence database to be suspiciously similar and has then wrongly assumed the people in question to be using a false identity. In another famous illustration of the problem, 28 members of the US Congress (most of them members of racial minorities) were falsely matched with criminal mugshots using facial recognition technology created by Amazon.[4] As some researchers have put it, the widespread and indiscriminate use of facial recognition means that we are all now part of a perpetual line-up that is both biased and error prone.[5] The conveniences of facial recognition thus come at a price, one that often only becomes apparent when something goes wrong, and is more costly for some social groups than others.

What should be done about this from a legal perspective? The obvious answer is to carefully regulate the technology to manage its risks and opportunities. This is, in a sense, what is already being done under the GDPR. Article 9 of the GDPR stipulates that facial recognition is a kind of biometric data that is subject to special protections. The default position is that it should not be collected, but this is subject to a long list of qualifications and exceptions. It is, for example, permissible to collect it if the data has already been made public, if you get the explicit consent of the person, if it serves some legitimate public interest, if it is medically necessary or necessary for public health reasons, if it is necessary to protect other rights and so on. Clearly the GDPR does restrict facial recognition in some ways.
In one recent Swedish case, a school was fined for the indiscriminate use of facial recognition for attendance monitoring.[6] Nevertheless, the long list of exceptions makes the widespread use of facial recognition not just a possibility but a likelihood. This is something the EU is aware of and, in light of the Swedish case, they have signalled an intention to introduce stricter regulation of facial recognition.

This is something we in Ireland should also be considering. The GDPR allows states to introduce stricter protections against certain kinds of data collection. And, according to some privacy scholars, we need the strictest possible protections to save us from the depredations of facial recognition. Woodrow Hartzog, one of the foremost privacy scholars in the US, and Evan Selinger, a philosopher specialising in the ethics of technology, have recently argued that facial recognition technology must be banned. As they put it (somewhat alarmingly):[7]

“The future of human flourishing depends upon facial recognition technology being banned before the systems become too entrenched in our lives. Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.”

They caution against anyone who thinks that the technology can be procedurally regulated, arguing that governmental and commercial interests will always lobby for expansion of the technology beyond its initially prescribed remit. They also argue that attempts at informed consent will be (and already are) a ‘spectacular failure’ because people don’t understand what they are consenting to when they give away their facial fingerprint.

Some people might find this call for a categorical ban extreme, unnecessary and impractical. Why throw the baby out with the bathwater, and other cliches to that effect? But I would like to suggest that there is something worth taking seriously here, particularly since facial recognition technology is just the tip of the iceberg of data collection. People are already experimenting with emotion recognition technology, which uses facial images to predict future behaviour in real time, and there are many other kinds of sensitive data that are being collected, digitised and traded. Genetic data is perhaps the most obvious other example. Given that data is what fuels the fire of AI, it is possible that we should consider cutting off some of the fuel supply in its entirety.

3. Case Study: Deepfakes

Let me move on to my second case study. This one has to do with how AI is used to create new informational products from data. As an illustration of this I will focus on so-called ‘deepfake’ technology. This is a machine learning technique that allows you to construct realistic synthetic media from databases of images and audio files. The most prevalent use of deepfakes is, perhaps unsurprisingly, in the world of pornography, where the faces of famous actors have been repeatedly grafted onto porn videos. This is disturbing and makes deepfakes an ideal technology for ‘synthetic’ revenge porn.

Perhaps more socially significant than this, however, are the potential political uses of deepfake technology. In 2017, a team of researchers at the University of Washington created a series of deepfake videos of Barack Obama which I will now play for you.[8] The images in these videos are artificial. They haven’t been edited together from different clips. They have been synthetically constructed by an algorithm from a database of audiovisual materials.
Obviously, the video isn’t entirely convincing. If you look and listen closely you can see that there is something stilted and artificial about it. In addition to this it uses pre-recorded audio clips to sync to the synthetic video. Nevertheless, if you weren’t looking too closely, you might be convinced it was real. Furthermore, there are other teams working on using the same basic technique to create synthetic audio too. So, as the technology improves, it could be very difficult for even the most discerning viewers to tell the difference between fiction and reality.Now there is nothing new about synthetic media. With the support of the New Zealand Law Foundation, Tom Barraclough and Curtis Barnes have published one of the most detailed investigations into the legal policy implications of deepfake technology.[9] In their report, they highlight the fact that an awful lot of existing audiovisual media is synthetic: it is all processed, manipulated and edited to some degree. There is also a long history of creating artistic and satirical synthetic representations of political and public figures. Think, for example, of the caricatures in Punch magazine or in the puppet show Spitting Image. Many people who use deepfake technology to create synthetic media will, no doubt, claim a legitimate purpose in doing so. They will say they are engaging in legitimate satire or critique, or producing works of artistic significance.Nevertheless, there does seem to be something worrying about deepfake technology. The highly realistic nature of the audiovisual material being created makes it the ideal vehicle for harassment, manipulation, defamation, forgery and fraud. Furthermore, the realism of the resultant material also poses significant epistemic challenges for society. The philosopher Regina Rini captures this problem well. She argues that deepfake technology poses a threat to our society’s ‘epistemic backstop’. What she means is that as a society we are highly reliant on testimony from others to get by. We rely on it for news and information, we use it to form expectations about the world and build trust in others. But we know that testimony is not always reliable. Sometimes people will lie to us; sometimes they will forget what really happened. Audiovisual recordings provide an important check on potentially misleading forms of testimony. They encourage honesty and competence. As Rini puts it:[10]“The availability of recordings undergirds the norms of testimonial practice…Our awareness of the possibility of being recorded provides a quasi-independent check on reckless testifying, thereby strengthening the reasonability of relying on the words of others. Recordings do this in two distinctive ways: actively correcting errors in past testimony and passively regulating ongoing testimonial practices.”The problem with deepfake technology is that it undermines this function. Audiovisual recordings can no longer provide the epistemic backstop that keeps us honest.What does this mean for the law? I am not overly concerned about the impact of deepfake technology on legal evidence-gathering practices. The legal system, with its insistence on ‘chain of custody’ and testimonial verification of audiovisual materials, is perhaps better placed than most to deal with the threat of deepfakes (though there will be an increased need for forensic experts to identify deepfake recordings in court proceedings). 
What I am more concerned about is how deepfake technologies will be weaponised to harm and intimidate others — particularly members of vulnerable populations. The question is whether anything can be done to provide legal redress for these problems. As Barraclough and Barnes point out in their report, it is exceptionally difficult to legislate in this area. How do you define the difference between real and synthetic media (if at all)? How do you balance the free speech rights against the potential harms to others? Do we need specialised laws to do this or are existing laws on defamation and fraud (say) up to the task? Furthermore, given that deepfakes can be created and distributed by unknown actors, who would the potential cause of action be against?

These are difficult questions to answer. The one concrete suggestion I would make is that any existing or proposed legislation on ‘revenge porn’ should be modified so that it explicitly covers the possibility of synthetic revenge porn. Ireland is currently in the midst of legislating against the nonconsensual sharing of ‘intimate images’ in the Harassment, Harmful Communications and Related Offences Bill. I note that the current wording of the offence in section 4 of the Bill covers images that have been ‘altered’, but someone might argue that synthetically constructed images are not, strictly speaking, altered. There may be plans to change this wording to cover this possibility — I know that consultations and amendments to the Bill are ongoing[11] — but if there aren’t then I suggest that there should be.

To reiterate, I am using deepfake technology as an illustration of a more general problem. There are many other ways in which the combination of data and AI can be used to mess with the distinction between fact and fiction. The algorithmic curation and promotion of fake news, for example, or the use of virtual and augmented reality to manipulate our perception of public and private spaces, both pose significant threats to property rights, privacy rights and political rights. We need to do something to legally manage this brave new (technologically constructed) world.

4. Case Study: Algorithmic Risk Prediction

Let me turn now to my final case study. This one has to do with how data can be used to prompt new actions and behaviours in the world. For this case study, I will look to the world of algorithmic risk prediction. This is where we take a collection of datapoints concerning an individual’s behaviour and lifestyle and feed them into an algorithm that can make predictions about their likely future behaviour. This is a long-standing practice in insurance, and is now being used in making credit decisions, tax auditing, child protection, and criminal justice (to name but a few examples). I’ll focus on its use in criminal justice for illustrative purposes.

Specifically, I will focus on the debate surrounding the COMPAS algorithm, which has been used in a number of US states. The COMPAS algorithm (created by a company called Northpointe, now called Equivant) uses datapoints to generate a recidivism risk score for criminal defendants. The datapoints include things like the person’s age at arrest, their prior arrest/conviction record, the number of family members who have been arrested/convicted, their address, their education and job and so on. These are then weighted together using an algorithm to generate a risk score.
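To give a feel for what ‘weighted together using an algorithm’ can mean in practice, here is a purely hypothetical sketch of a logistic-style risk score built from inputs of the kind just listed. The features, weights, and cutoff are invented for illustration; the actual COMPAS weighting is proprietary and is not reproduced here.

```python
# Hypothetical illustration of a weighted risk score of the general kind
# described above. Features, weights, and the cutoff are invented for
# illustration; they are NOT the COMPAS model.
from math import exp

def risk_score(age_at_arrest, prior_convictions, family_arrests, employed):
    """Toy recidivism-style risk score built from weighted inputs."""
    z = (
        -0.04 * age_at_arrest            # younger defendants score higher
        + 0.35 * prior_convictions       # each prior conviction raises the score
        + 0.20 * family_arrests          # family arrest history raises the score
        - 0.50 * (1 if employed else 0)  # employment lowers the score
        + 0.8                            # intercept
    )
    return 1 / (1 + exp(-z))             # squash into a score between 0 and 1

score = risk_score(age_at_arrest=22, prior_convictions=3, family_arrests=1, employed=False)
print(f"score = {score:.2f} ->", "high risk" if score >= 0.5 else "low risk")
```

In a sketch like this, everything of consequence lives in the chosen weights and the cutoff, which is why their opacity matters in what follows.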
The exact weighting procedure is unclear, since the COMPAS algorithm is a proprietary technology, but the company that created it has released a considerable amount of information about the datapoints it uses into the public domain.

If you know anything about the COMPAS algorithm you will know that it has been controversial. The controversy stems from two features of how the algorithm works. First, the algorithm is relatively opaque. This is a problem because the fair administration of justice requires that legal decision-making be transparent and open to challenge. A defendant has a right to know how a tribunal or court arrived at its decision and to challenge or question its reasoning. If this information isn’t known — either because the algorithm is intrinsically opaque or has been intentionally rendered opaque for reasons of intellectual property — then this principle of fair administration is not being upheld. This was one of the grounds on which the use of the COMPAS algorithm was challenged in the US case of Loomis v Wisconsin.[12] In that case, the defendant, Loomis, challenged his sentencing decision on the basis that the trial court had relied on the COMPAS risk score in reaching its decision. His challenge was ultimately unsuccessful. The Wisconsin Supreme Court reasoned that the trial court had not relied solely on the COMPAS risk score in reaching its decision. The risk score was just one input into the court’s decision-making process, which was itself transparent and open to challenge. That said, the court did agree that courts should be wary when relying on such algorithms and said that warnings should be attached to the scores to highlight their limitations.

The second controversy associated with the COMPAS algorithm has to do with its apparent racial bias. To understand this controversy I need to say a little bit more about how the algorithm works. Very roughly, the COMPAS algorithm is used to sort defendants into two outcome ‘buckets’: a 'high risk' reoffender bucket or a 'low risk' reoffender bucket. A number of years back a group of data journalists based at ProPublica conducted an investigation into which kinds of defendants got sorted into those buckets. They discovered something disturbing. They found that the COMPAS algorithm was more likely to give black defendants a false positive high risk score and more likely to give white defendants a false negative low risk score. The exact figures are given in ProPublica's published analysis. Put another way, the COMPAS algorithm tended to rate black defendants as being higher risk than they actually were and white defendants as being lower risk than they actually were. This was all despite the fact that the algorithm did not explicitly use race as a criterion in its risk scores.

Needless to say, the makers of the COMPAS algorithm were not happy about this finding. They defended their algorithm, arguing that it was in fact fair and non-discriminatory because it was well calibrated. In other words, they argued that it was equally accurate in scoring defendants, irrespective of their race. If it said a black defendant was high risk, it was right about 60% of the time and if it said that a white defendant was high risk, it was right about 60% of the time. This turns out to be true.
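A small numerical sketch, using invented counts rather than ProPublica's actual figures, shows how both claims can be true at once: a score can be equally well calibrated for two groups and still hand out false positives at very different rates when the groups' base rates differ.

```python
# Invented counts (not ProPublica's data) illustrating calibration vs. error rates.
# Buckets are the algorithm's output ("high"/"low"); outcomes are observed
# reoffending ("reoffend"/"desist").
groups = {
    "A": {("high", "reoffend"): 300, ("high", "desist"): 200,
          ("low", "reoffend"): 200,  ("low", "desist"): 300},
    "B": {("high", "reoffend"): 120, ("high", "desist"): 80,
          ("low", "reoffend"): 130,  ("low", "desist"): 670},
}

for name, c in groups.items():
    scored_high = c[("high", "reoffend")] + c[("high", "desist")]
    reoffenders = c[("high", "reoffend")] + c[("low", "reoffend")]
    non_reoffenders = c[("high", "desist")] + c[("low", "desist")]
    calibration = c[("high", "reoffend")] / scored_high      # P(reoffend | scored high)
    false_positive_rate = c[("high", "desist")] / non_reoffenders
    base_rate = reoffenders / (reoffenders + non_reoffenders)
    print(f"group {name}: calibration {calibration:.2f}, "
          f"false positive rate {false_positive_rate:.2f}, base rate {base_rate:.2f}")
```

In this toy example the score is right about 60% of the time for both groups whenever it says 'high risk', yet group A's non-reoffenders are flagged as high risk almost four times as often as group B's, simply because the two groups have different base rates. This is the tension that the impossibility result discussed below makes precise.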
The reason why it doesn't immediately look equally accurate on a first glance at the relevant figures is that there are a lot more black defendants than white defendants — an unfortunate feature of the US criminal justice system that is not caused by the algorithm but is, rather, a feature the algorithm has to work around.

So what is going on here? Is the algorithm fair or not? Here is where things get interesting. Several groups of mathematicians analysed this case and showed that the main problem here is that the makers of COMPAS and the data journalists were working with different conceptions of fairness and that these conceptions were fundamentally incompatible. This is something that can be formally proved. The clearest articulation of this proof can be found in a paper by Jon Kleinberg, Sendhil Mullainathan and Manish Raghavan.[13] To simplify their argument, they said that there are two things you might want a fair decision algorithm to do: (i) you might want it to be well-calibrated (i.e. equally accurate in its scoring irrespective of racial group); (ii) you might want it to achieve an equal representation for all groups in the outcome buckets. They then proved that, except in two unusual cases, it is impossible to satisfy both criteria. The two unusual cases are when the algorithm is a 'perfect predictor' (i.e. it always gets things right) or, alternatively, when the base rates for the relevant populations are the same (e.g. there are the same number of black defendants as there are white defendants). Since no algorithmic decision procedure is a perfect predictor, and since our world is full of base rate inequalities, this means that no plausible real-world use of a predictive algorithm is likely to be perfectly fair and non-discriminatory. What's more, this is generally true for all algorithmic risk predictions and not just true for cases involving recidivism risk. If you would like to see a non-mathematical illustration of the problem, I highly recommend checking out a recent article in the MIT Technology Review which includes a game you can play using the COMPAS algorithm and which illustrates the hard tradeoff between different conceptions of fairness.[14]

What does all this mean for the law? Well, when it comes to the issue of transparency and challengeability, it is worth noting that the GDPR, in articles 13-15 and article 22, contains what some people refer to as a ‘right to explanation’. It states that, when automated decision procedures are used, people have a right to access meaningful information about the logic underlying the procedures. What this meaningful information looks like in practice is open to some interpretation, though there is now an increasing amount of guidance from national data protection units about what is expected.[15] But in some ways this misses the deeper point. Even if we make these procedures perfectly transparent and explainable, there remains the question of how we manage the hard tradeoff between different conceptions of fairness and non-discrimination. Our legal conceptions of fairness are multidimensional and require us to balance competing interests. When we rely on human decision-makers to determine what is fair, we accept that there will be some fudging and compromise involved. Right now, we let this fudging take place inside the minds of the human decision-makers, oftentimes without questioning it too much or making it too explicit.
The problem with algorithmic risk predictions is that they force us to make this fudging explicit and precise. We can no longer pretend that the decision has successfully balanced all the competing interests and demands. We have to pick and choose. Thus, in some ways, the real challenge with these systems is not that they are opaque and non-transparent but, rather, that when they are transparent they force us to make hard choices.

To some, this is the great advantage of algorithmic risk prediction. A paper by Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan and Cass Sunstein entitled 'Discrimination in the Age of the Algorithm' makes this very case.[16] They argue that the real problem at the moment is that decision-making is discriminatory and that its discriminatory nature is often implicit and hidden from view. The widespread use of transparent algorithms will force it into the open, where it can be washed by the great disinfectant of sunlight. But I suspect others will be less sanguine about this new world of algorithmically mediated justice. They will argue that human-led decision-making, with its implicit fudging, is preferable, partly because it allows us to sustain the illusion of justice. Which world do we want to live in? The transparent and explicit world imagined by Kleinberg et al, or the murkier and more implicit world of human decision-making? This is also a key legal challenge for the modern age.

5. Conclusion

It’s time for me to wrap up. One lingering question you might have is whether any of the challenges outlined above are genuinely new. This is a topic worth debating. In one sense, there is nothing completely new about the challenges I have just discussed. We have been dealing with variations of them for as long as humans have lived in complex, literate societies. Nevertheless, there are some differences with the past. There are differences of scope and scale — mass surveillance and AI enable the collection of data at an unprecedented scale and its use on millions of people at the same time. There are differences of speed and individuation — AI systems can update their operating parameters in real time and in highly individualised ways. And finally, there are crucial differences in the degree of autonomy with which these systems operate, which can lead to problems in how we assign legal responsibility and liability.

Endnotes

[1] I am indebted to Jacob Turner for drawing my attention to this story. He discusses it in his book Robot Rules: Regulating Artificial Intelligence (Palgrave Macmillan, 2018). This is probably the best currently available book about AI and law.
[2] See https://www.irishtimes.com/business/technology/airport-facial-scanning-dystopian-nightmare-rebranded-as-travel-perk-1.3986321 and https://www.dublinairport.com/latest-news/2019/05/31/dublin-airport-participates-in-biometrics-trial
[3] https://arstechnica.com/tech-policy/2016/04/facial-recognition-service-becomes-a-weapon-against-russian-porn-actresses/#
[4] This was a stunt conducted by the ACLU. See the press release: https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28
[5] https://www.perpetuallineup.org/
[6] For the story, see https://www.bbc.com/news/technology-49489154
[7] Their original call for this can be found here: https://medium.com/s/story/facial-recognition-is-the-perfect-tool-for-oppression-bc2a08f0fe66
[8] The video can be found here: https://www.youtube.com/watch?v=UCwbJxW-ZRg. For more information on the research, see: https://www.washington.edu/news/2017/07/11/lip-syncing-obama-new-tools-turn-audio-clips-into-realistic-video/ and https://grail.cs.washington.edu/projects/AudioToObama/siggraph17_obama.pdf
[9] The full report can be found here: https://static1.squarespace.com/static/5ca2c7abc2ff614d3d0f74b5/t/5ce26307ad4eec00016e423c/1558340402742/Perception+Inception+Report+EMBARGOED+TILL+21+May+2019.pdf
[10] The paper currently exists in draft form but can be found here: https://philpapers.org/rec/RINDAT
[11] https://www.dccae.gov.ie/en-ie/communications/consultations/Pages/Regulation-of-Harmful-Online-Content-and-the-Implementation-of-the-revised-Audiovisual-Media-Services-Directive.aspx
[12] For a summary of the judgment, see: https://harvardlawreview.org/2017/03/state-v-loomis/
[13] 'Inherent Tradeoffs in the Fair Determination of Risk Scores', available at https://arxiv.org/abs/1609.05807
[14] The article can be found here: https://www.technologyreview.com/s/613508/ai-fairer-than-judge-criminal-risk-assessment-algorithm/
[15] Casey et al, 'Rethinking Explainable Machines', available at https://scholarship.law.berkeley.edu/btlj/vol34/iss1/4/
[16] An open access version of the paper can be downloaded at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3329669

Triangulation (Video LO)
Triangulation 422: Brett Frischmann: Re-Engineering Humanity

Triangulation (Video LO)

Play Episode Listen Later Nov 15, 2019 59:12


Brett Frischmann is the Charles Widger Endowed University Professor in Law, Business and Economics, Villanova University and joins Denise Howell to discuss his book, co-authored with Evan Selinger, 'Re-Engineering Humanity.' They talk about what's happening to our lives as society embraces big data, predictive analytics, and smart environments. Host: Denise Howell Guest: Brett Frischmann Download or subscribe to this show at https://twit.tv/shows/triangulation. Sponsor: capterra.com/triangulation

AGI Podcast
Re-engineering humans and rethinking digital networked tools - Prof. Brett Frischmann

AGI Podcast

Play Episode Listen Later Aug 2, 2019 61:01


Re-engineering humans and rethinking digital networked tools. "We become what we behold. We shape our tools and then our tools shape us." - (John Culkin, 1967)
Introduction
Since Prometheus' gift of fire to humankind, humans have been using it as a tool to adapt to their environment and ultimately adapt the environment to themselves. Yet, from contract law, to media, to the roads we create, human beings have also always been shaped by their very own tools. Ubiquitous tools tend to bring with them a set of foreseen and unforeseen consequences for the way people develop, learn, interact, and build relationships. This is a rather obvious observation, but an important one to make in order to contextualise the way that modern digital networked tools have affected people in the information age. In this month's AGI podcast, we were honored to receive and converse with Professor Brett Frischmann, who recently wrote, along with his colleague Professor Evan Selinger, the book Re-Engineering Humanity. Much of the podcast's discussion touches on subjects that the book covers in depth, and with a refreshing level of optimism despite the harsh reality it unveils. The guest, Brett Frischmann, is the Charles Widger Endowed University Professor in Law, Business and Economics at Villanova University. He is also an Affiliate Scholar of the Center for Internet and Society at Stanford Law School and a Trustee for the Nexa Center for Internet & Society in Torino, Italy. More importantly, Prof. Frischmann has researched extensively on knowledge commons, the Social Value of Shared Resources and techno-social engineering of humans (the relationships between the techno-social world and humanity). These subjects have long been core to the vision of SingularityNET and it was an exciting opportunity to discuss them with such a knowledgeable guest.

Marooned! on Mars with Matt and Hilary
Blue Mars, Part 4: "Green Earth," (Post-)Colonialism, and Uncanny Hallucinations

Marooned! on Mars with Matt and Hilary

Play Episode Listen Later Jan 14, 2019 89:08


On this episode of Marooned!, we're discussing Part 4 of Blue Mars, "Green Earth," a Nirgal chapter. Nirgal, Sax, Maya, and Michel have traveled to Earth as a Martian delegation to attempt to normalize relations to the home planet and help out where they can. Nirgal goes off on a series of disorienting and hallucinatory adventures and comes back sick! Matt and Hilary spend some time chatting about what they've been up to since the last episode. Hilary "moderated" a "panel" at an event co-sponsored by the Chicago Humanities Festival and Humanities Without Walls as part of the MLA conference (or something). N. Katherine Hayles and Evan Selinger had a lot to say! Delightful weirdos who strangely think the humanities are important were in attendance--including the president of the MLA! In our "Mars in Popular Culture Roundup of the Week" segment, which will doubtless be expanded to a weekly extra episode once Tom Hanks gives us a million dollars, Matt watched two Mars-related movies that were bad: Capricorn One and something on Netflix (2036: Origin Unknown). Then we get to the good stuff. This chapter is hallucinatory and impressionistic, anchored in Nirgal's bodily experiences, but also full of subtle references to the history of colonialism, literature, and post-colonial thought, as we discover. Connections we make include C.L.R. James, Frankenstein, Treasure Island, Freud, Agatha Christie, Mr. Belvedere, Jamaica Kincaid, Great Expectations, Moby-Dick, K-19: The Widowmaker, New York 2140. Home at last, Nirgal encounters a planet that wants to kill him, where he feels most at home in zones that are out of reach of earthly life--high in the Alps on a glacier and beneath the sea, polluted and more dangerous than before. We reflect on Nirgal's perennial homelessness as a constitutive lack, which takes his experience of the overwhelming colors, heat, and moisture of Earth from the hallucinatory to the uncanny, or unheimlich in Freudian thinking. This is appropriate because he also keeps running into doppelgängers of his parents, Coyote and Hiroko. All the while, the relation between Earth and Mars is up for debate. Hilary gives a critique of the concept of population and Malthusian logic, and makes a case for faith in people's willingness to figure out the common good in the here-and-now rather than defer decision-making to an investment in an unknowable future. People should get to live good lives while they're alive! Back to our common Arendtian refrain: why put all your faith in the future when you could work to make the present better? Elsewhere, Matt becomes as smart as Jamaica Kincaid when he discovers that you can take the colonies away from the empire, but you can't take colonialism away from the colonizers, and he does a really bad British accent. A very fond farewell to all our listeners across the pond! Things Hilary doesn't like: Tom Hanks, The Family Guy, Avengers: Infinity War (discussed off-mic). Ways Matt can't identify with Nirgal: Scared of scuba diving, does not routinely wake up to find multiple strange women having sex with him. Email us: maroonedonmarspodcast@gmail.com Tweet us (we don't like twitter) @podcastonmars Rate & Review us: iTunes, Google Play, wherever. Voicemail us: Anchor.fm app Music by The Spirit of Space

The Writer Files: Writing, Productivity, Creativity, and Neuroscience
‘The Writer’s Brain’ on Impostor Syndrome: Part Two

The Writer Files: Writing, Productivity, Creativity, and Neuroscience

Play Episode Listen Later Jun 26, 2018 24:49


In Part Two of this special edition of the show we call “The Writer’s Brain,” a guest series with neuroscientist Michael Grybko, we dig into a phenomenon known as “impostor syndrome,” an experience many writers struggle with. The Experience Known as “Impostor Syndrome” The experience known as “impostor syndrome” has been recognized in over 70% of the population across a wide range of demographics. Everyone from bestselling authors, to A-list celebrities, and even genius-level scientists, have all admitted to feeling a kind of isolation from not wanting to be outed as a “fraud,” even though they’re far from it. And it’s not just limited to high-achievers; it’s been found in men and women across a wide variety of groups, including those about to launch a new creative project or career, teachers, students, entrepreneurs, and many others. Across all demographics, success tends to create an even deeper sense of the impostor experience, and although not considered a clinical psychological syndrome, the effects can be debilitating to writers at any level of experience or professional standing. These feelings of self-doubt can snowball if not addressed, and leave you with a sinking depression, anxiety, and a sense of dread at taking on new or challenging tasks. Luckily, research scientist Michael Grybko returned to the podcast to help me find some answers about the origins of anxiety in the human brain, and how to address the impostor experience from both a scientific and layperson’s perspective. If you missed previous episodes of The Writer’s Brain you can find them all in the show notes, in the archives at writerfiles.fm, Apple Podcasts, or wherever you tune in. And if you missed the first half of this show you can find it right here. If you’re a fan of The Writer Files, please click subscribe to automatically see new interviews. In Part Two of this file Michael Grybko and I discuss:
Why the “writer as athlete” trope undervalues the power of the human brain
Small steps you can take to rewire your anxiety
How writers can harness their interactional expertise to beat impostor experience
Why you don’t need a PhD to sound like an expert
Tips and tricks for overcoming your unfounded self-doubt
Why a page a day keeps the impostor syndrome away
The Show Notes:
The Best of ‘The Writer’s Brain’ Part One: Creativity
The Best of ‘The Writer’s Brain’ Part Two: Empathy
The Best of ‘The Writer’s Brain’ Part Three: Storytelling
The Best of ‘The Writer’s Brain’ Part Four: Writer’s Block
The Best of ‘The Writer’s Brain’ Part Five: Fake News
What Happens When We Turn the World’s Most Famous Robot Test on Ourselves? – Evan Selinger for The Atlantic
How a Famous Robot Test Can Help You Beat Impostor Syndrome – Kelton Reid for Copyblogger
Sociologist fools physics judges – Nature (International Journal of Science)
How to Outsmart Writer’s Block with Neuroscience – Kelton Reid for Copyblogger
This Is Your Brain on Writing
The Physics of Productivity – James Clear
Kelton Reid on Twitter

Techtonic with Mark Hurst | WFMU
Brett Frischmann, co-author, "Re-Engineering Humanity" from Jun 11, 2018

Techtonic with Mark Hurst | WFMU

Play Episode Listen Later Jun 11, 2018


Turning humans into robots: Brett Frischmann talks about his new book "Re-Engineering Humanity," co-authored with Evan Selinger. Tomaš Dvořák - "Game Boy Tune" - Machinarium Soundtrack - "Mark's intro" - "Interview with Brett Frischmann" - "Your calls and comments 201-209-9368" Xordox - "Diamonds" - Neospection [this song on bandcamp] http://www.wfmu.org/playlists/shows/79640

Philosophical Disquisitions
Episode #39 - Re-engineering Humanity with Frischmann and Selinger

Philosophical Disquisitions

Play Episode Listen Later Jun 4, 2018


In this episode I talk to Brett Frischmann and Evan Selinger about their book Re-engineering Humanity (Cambridge University Press, 2018). Brett and Evan are both former guests on the podcast. Brett is a Professor of Law, Business and Economics at Villanova University and Evan is Professor of Philosophy at the Rochester Institute of Technology. Their book looks at how modern techno-social engineering is affecting humanity. We have a wide-ranging conversation about the main arguments and ideas from the book. The book features lots of interesting thought experiments and provocative claims. I recommend checking it out. A highlight of this conversation for me was our discussion of the 'Free Will Wager' and how it pertains to debates about technology and social engineering. You can listen to the episode below or download it here. You can also subscribe on Stitcher and iTunes (the RSS feed is here).
Show Notes
0:00 - Introduction
1:33 - What is techno-social engineering?
7:55 - Is techno-social engineering turning us into simple machines?
14:11 - Digital contracting as an example of techno-social engineering
22:17 - The three important ingredients of modern techno-social engineering
29:17 - The Digital Tragedy of the Commons
34:09 - Must we wait for a Leviathan to save us?
44:03 - The Free Will Wager
55:00 - The problem of Engineered Determinism
1:00:03 - What does it mean to be self-determined?
1:12:03 - Solving the problem? The freedom to be off
Relevant Links
Evan Selinger's homepage
Brett Frischmann's homepage
Re-engineering Humanity - website
'Reverse Turing Tests: Are humans becoming more machine-like?' by me
Episode 4 with Evan Selinger on Privacy and Algorithmic Outsourcing
Episode 7 with Brett Frischmann on Human-Focused Turing Tests
Gregg Caruso on 'Free Will Skepticism and Its Implications: An Argument for Optimism'
Derk Pereboom on Relationships and Free Will

Algocracy and Transhumanism Podcast
Episode #39 – Re-engineering Humanity with Frischmann and Selinger

Algocracy and Transhumanism Podcast

Play Episode Listen Later Jun 4, 2018


In this episode I talk to Brett Frischmann and Evan Selinger about their book Re-engineering Humanity (Cambridge University Press, 2018). Brett and Evan are both former guests on the podcast. Brett is a Professor of Law, Business and Economics at Villanova University and Evan is Professor of Philosophy at the Rochester Institute of Technology. Their …

Ahead of Our Time
Own Your Power Over Tech — Evan Selinger

Ahead of Our Time

Play Episode Listen Later Apr 30, 2018 75:12


Evan Selinger is a philosophy professor and author focused on the unique intersection of philosophy, technology, and privacy law. His work, which includes co-authoring the book Re-Engineering Humanity, helps people wrap their minds around the implications of our relationships with technology—a vision of where we are today and where we’re going—and he hopes to create an optimal future for us and for the generations to come. In our conversation, Evan shares the “meta-skill” of world-building, including our fundamental capacity for choice (free will), and a call for us to be more thoughtful about when and how we outsource our humanity to technology. Throughout, he displays a judicious eye for the ways our thoughts and blind spots, actions and inaction, end up shaping our world—and he encourages us to pause. This pause can come in the form of a mindful moment (“breathing room”) or simply the consideration of the full, weighty context of social and technological issues at play. To read the full description and show notes, visit www.aheadofourtime.com/own-your-power-over-tech.

Philosophical Disquisitions
Episode #31 - Hartzog on Robocops and Automated Law Enforcement

Philosophical Disquisitions

Play Episode Listen Later Oct 28, 2017


In this episode I am joined by Woodrow Hartzog. Woodrow is currently a Professor of Law and Computer Science at Northeastern University (he was the Starnes Professor at Samford University’s Cumberland School of Law when this episode was recorded). His research focuses on privacy, human-computer interaction, online communication, and electronic agreements. He holds a Ph.D. in mass communication from the University of North Carolina at Chapel Hill, an LL.M. in intellectual property from the George Washington University Law School, and a J.D. from Samford University. He previously worked as an attorney in private practice and as a trademark attorney for the United States Patent and Trademark Office. He also served as a clerk for the Electronic Privacy Information Center. We talk about the rise of automated law enforcement and the virtue of an inefficient legal system. You can download the episode here or listen below. You can also subscribe to the podcast via iTunes or Stitcher (RSS feed is here).
Show Notes
0:00 - Introduction
2:00 - What is automated law enforcement? The 3 Steps
6:30 - What about the robocops?
10:00 - The importance of hidden forms of automated law enforcement
12:55 - What areas of law enforcement are ripe for automation?
17:53 - The ethics of automated prevention vs automated punishment
23:10 - The three reasons for automated law enforcement
26:00 - The privacy costs of automated law enforcement
32:13 - The virtue of discretion and inefficiency in the application of law
40:10 - An empirical study of automated law enforcement
44:35 - The conservation of inefficiency principle
48:40 - The practicality of conserving inefficiency
51:20 - Should we keep a human in the loop?
55:10 - The rules vs standards debate in automated law enforcement
58:36 - Can we engineer inefficiency into automated systems?
1:01:10 - When is automation desirable in law?
Relevant Links
Woody's homepage
Woody's SSRN page
'Inefficiently Automated Law Enforcement' by Woodrow Hartzog, Gregory Conti, John Nelson and Lisa Shay
'Obscurity and Privacy' by Woodrow Hartzog and Evan Selinger
Episode 4 with Evan Selinger on Algorithmic Outsourcing and Privacy
Knightscope Robots
Robocop joins Dubai police to fight real life crime

Center for Internet and Society
Evan Selinger- Hearsay Culture Show #214 - KZSU-FM (Stanford)

Center for Internet and Society

Play Episode Listen Later May 28, 2014 57:13


I'm pleased to post Show # 214, May 28, my interview with Prof. Evan Selinger of Rochester Institute of Technology on technology and the human experience. Evan's work spans the range of technology, ethics and philosophy, an unusual but critical intersection as we consider the ramifications of algorithms, robotics, drones, 3D printers and social media, among many other innovations, on our lives. In our discussion, we focused on Evan's concern about "outsourcing" our humanity to computers and technology, and how that outsourcing has affected and will continue to affect us. Evan is an insightful and original commentator and scholar, and I greatly enjoyed our discussion! {Hearsay Culture is a talk show on KZSU-FM, Stanford, 90.1 FM, hosted by Center for Internet & Society Resident Fellow David S. Levine. The show includes guests and focuses on the intersection of technology and society. How is our world impacted by the great technological changes taking place? Each week, a different sphere is explored. For more information, please go to http://hearsayculture.com.}

Templeton Research Lectures
Brains, Selves, and Spirituality in the History of Cybernetics

Templeton Research Lectures

Play Episode Listen Later Apr 25, 2008 54:08


Andrew Pickering is internationally known as a leader in the field of science and technology studies. He is the author of Constructing Quarks: A Sociological History of Particle Physics (1984), The Mangle of Practice: Time, Agency and Science (1995) and Chasing Technoscience: Matrix for Materiality (with Don Ihde, Evan Selinger, Donna Jeanne Haraway, and Bruno Latour, 2003). He has written on topics as diverse as post-World War II particle physics; mathematics, science and industry in the 19th-century; and science, technology and warfare in and since WWII. His most recent book, Sketches of Another Future: Cybernetics in Britain, 1940-2000 (forthcoming), analyzes cybernetics as a distinctive form of life—spanning brain science, psychiatry, robotics, the theory of complex systems, management, politics, the arts, education and spirituality. Pickering has held fellowships at MIT, the Institute for Advanced Study at Princeton, the Guggenheim Foundation and the Center for Advanced Study in the Behavioral Sciences at Stanford. He is professor of sociology and philosophy at the University of Exeter, where he moved in 2007 after serving as professor of sociology and director of an interdisciplinary STS graduate program at the University of Illinois, Urbana-Champaign.