Anupam Chander, Scott K. Ginsburg Professor of Law and Technology at Georgetown and Visiting Scholar at Harvard's Institute for Rebooting Social Media, discusses the future of humanity amid a wave of rapid technology innovation. He also shares how AI is impacting digital sovereignty.

Key Takeaways:
- Why Section 230 of the US Communications Decency Act has been so important for people online
- How US internet laws affect the country's dominance as a world innovation leader
- Ways that regulators are having to balance new technologies
- The impact that judges, regulators, and the courts have had on the evolution of new technologies

Guest Bio: Anupam Chander is Scott K. Ginsburg Professor of Law and Technology at Georgetown. A Harvard College and Yale Law School graduate, he is the author of The Electronic Silk Road, published by Yale University Press. He practiced law in New York and Hong Kong with Cleary Gottlieb, and has been a visiting law professor at Yale, Chicago, Stanford, Cornell, and Tsinghua. A recipient of Google Research Awards and an Andrew Mellon grant, he has consulted for the World Bank, the World Economic Forum, and UNCTAD. A non-resident fellow at Yale's Information Society Project, he is a member of the American Law Institute. In 2023-24, he is a Visiting Scholar at the Institute for Rebooting Social Media at Harvard University and Cheng Yu Tung Visiting Professor at the University of Hong Kong.

----------------------------------------------------------------------------------------

About this Show: The Brave Technologist is here to shed light on the opportunities and challenges of emerging tech, making it digestible, less scary, and more approachable for all. Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution.
Whether you're a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together. The Brave Technologist Podcast is hosted by Luke Mulks, VP Business Operations at Brave Software—makers of the privacy-respecting Brave browser and Search engine, and now powering AI everywhere with the Brave Search API. Music by: Ari Dvorin Produced by: Sam Laliberte
In the Information Age, keeping our private lives private is becoming harder and harder to do. For example, our online searches and chats are leaving digital traces, while our phones (and even our cars) are collecting information on where we go. All of this data is vulnerable, and there's growing concern about the risk of our most sensitive personal information being exploited. In today's episode, we're going to focus on ways we can mitigate the risks of data surveillance and keep our personal information more secure. I am joined once again by Albert Fox Cahn. He is founder and executive director of the Surveillance Technology Oversight Project (also known as STOP). He is also a Practitioner-in-Residence at NYU Law School's Information Law Institute and a fellow at the Harvard Kennedy School's Carr Center for Human Rights Policy and Yale Law School's Information Society Project. As a lawyer, technologist, and activist, Albert has become a leading voice on how to govern and build the technologies of the future.

Some of the topics we explore include:
- What are some practical ways individuals can keep their online data more secure?
- How safe (or unsafe) is it to use apps that track our health data, sexually and otherwise?
- For individuals traveling out of state to access legally available care that might not be available in their home state, are there any steps they can take to mitigate data surveillance?
- What steps can providers take to protect patients and clients when it comes to data that might be subject to government surveillance?
- What responsibility do tech companies have to offer more protection over consumer data?

You can connect with Albert online on Twitter and Bluesky. Got a sex question? Send me a podcast voicemail to have it answered on a future episode at speakpipe.com/sexandpsychology *** Thank you to our sponsors! 
The Kinsey Institute at Indiana University has been a trusted source for scientific knowledge and research on critical issues in sexuality, gender, and reproduction for over 75 years. Learn about more research and upcoming events at kinseyinstitute.org or look for them on social media @kinseyinstitute. Expand your sexual horizons with Beducated! Featuring more than 100 online courses taught by the experts, Beducated brings pleasure-based sex ed directly into your bedroom. Enjoy a free trial today and get 60% off their yearly pass by using my last name - LEHMILLER - as the coupon code. Sign up now at: https://beducate.me/pd2349-lehmiller *** Want to learn more about Sex and Psychology? Click here for previous articles or follow the blog on Facebook, Twitter, or Reddit to receive updates. You can also follow Dr. Lehmiller on YouTube and Instagram. Listen and stream all episodes on Apple, Spotify, Google, or Amazon. Subscribe to automatically receive new episodes and please rate and review the podcast! Credits: Precision Podcasting (Podcast editing) and Shutterstock/Florian (Music). Image created with Canva; photos used with permission of guest.
In 2022, the United States Supreme Court reversed a half-century-old ruling that had legalized abortion in this country. Since then, reproductive healthcare has become more difficult to access in certain areas, prompting concern about technology being leveraged as a form of abortion surveillance and tracking. Text messages, Google searches, emails, and period-tracking apps all have the potential to be used against people seeking abortions. In today's show, we're going to explore how online surveillance of abortion is increasing and the effects this is having, as well as the broader risks of living in a time when there's unparalleled surveillance of our private lives. My guest is Albert Fox Cahn. He is founder and executive director of the Surveillance Technology Oversight Project (also known as STOP). He is also a Practitioner-in-Residence at NYU Law School's Information Law Institute and a fellow at the Harvard Kennedy School's Carr Center for Human Rights Policy and Yale Law School's Information Society Project. As a lawyer, technologist, and activist, Albert has become a leading voice on how to govern and build the technologies of the future.

Some of the topics we explore include:
- How has the landscape for abortion care in the United States shifted since Roe v. Wade was overturned?
- In what ways is abortion care subject to surveillance?
- Are new surveillance practices having an effect on pregnant women's decision making and feelings of agency?
- What are the implications of surveillance for accessing gender-affirming care?
- Can you be tracked when you travel to another state to access healthcare?

You can connect with Albert online on Twitter and Bluesky. Got a sex question? Send me a podcast voicemail to have it answered on a future episode at speakpipe.com/sexandpsychology *** Thank you to our sponsors! Passionate about building a career in sexuality? Check out the Sexual Health Alliance. 
With SHA, you'll connect with world-class experts and join an engaged community of sexuality professionals from around the world. Visit SexualHealthAlliance.com and start building the sexuality career of your dreams today. Ever questioned, wondered, or fantasized about trying something different? Feeld is the place where you freely explore your desires. For a limited time, receive a free month of Majestic Membership when you download the app as a new member. Download the Feeld app at feeld.co/justin to access your free month of Majestic Membership. Please allow up to 24 hours for your free Majestic trial to activate. *** Want to learn more about Sex and Psychology? Click here for previous articles or follow the blog on Facebook, Twitter, or Reddit to receive updates. You can also follow Dr. Lehmiller on YouTube and Instagram. Listen and stream all episodes on Apple, Spotify, Google, or Amazon. Subscribe to automatically receive new episodes and please rate and review the podcast! Credits: Precision Podcasting (Podcast editing) and Shutterstock/Florian (Music). Image created with Canva; photos used with permission of guest.
Welcome back to the “Tech Policy Grind” podcast by the Internet Law & Policy Foundry! This week, Reema chats with Marcela Mattiuzzo and Nathalie Fragoso, partners at VMCA Advogados, about Brazilian privacy law and regulatory policy updates. They dive into recent activity from Brazil's data protection authority (the ANPD) and the history of the LGPD. They also cover how regulation of artificial intelligence is shaping up. Marcela Mattiuzzo is a partner at VMCA in the areas of data protection and competition law. She holds a PhD and a master's degree from the University of São Paulo. Previously, she was a Visiting Fellow at the Information Society Project at Yale University, and worked as advisor and chief of staff to the President of the Brazilian competition authority. Nathalie Fragoso is also a partner at VMCA, working in data protection and technology with a special focus on internet regulation. She is also a professor at Insper. Previously, she was Head of Research on Privacy and Surveillance at InternetLab, and held roles at the Brazilian Institute for the Defense of the Right to Defense, the Criminal Justice Network, and the Luiz Gama Human Rights Clinic at the University of São Paulo Law School. Check out the Foundry on Instagram, Twitter, or LinkedIn and subscribe to our newsletter! If you'd like to support the show, donate to the Foundry here or reach out to us at foundrypodcasts@ilpfoundry.us. Thanks for listening, and stay tuned for our next episode! DISCLAIMER: Reema engaged with this episode by the Internet Law & Policy Foundry voluntarily and in her personal capacity. The views and opinions expressed on this show do not reflect those of the organizations and institutions she is affiliated with.
At the September G20 summit in Delhi, the government of Prime Minister Narendra Modi promoted the country's digital public infrastructure (DPI) as a model for how countries can develop digital systems that deliver social services and give residents access to infrastructure and economic opportunities. Other world leaders were enthusiastic about the pitch, endorsing a common framework for DPI systems. But even as an Indian vision for DPI appears to be attractive beyond that country's borders, what are the ideas and events that shaped India's approach? Today's guest is Mila Samdub, a researcher at the Information Society Project at Yale Law School who recently published an essay titled “The Bangalore Ideology: How an amoral technocracy powers Modi's India,” looking at the history of technocratic ideas in India and how they have combined with Modi's particular brand of populism.
Internet blackouts — when internet service is shut down in a country or region — have become much more common over the last decade. But who gets to decide when these disruptions are necessary? From thwarting political protests to preventing cheating on school exams, we're diving into the who, what, and why of internet blackouts around the world. And we're asking… what exactly are the rules here in the US?

GUESTS:
Mazin Riyadh, student at the University of Mosul
Dr. Patricia Vargas, Fellow for the Information Society Project and Fellow for the Internet Society
Zuha Siddiqui, journalist

ADDITIONAL RESOURCES:
Internet Shutdowns During Exams, Access Now
Political Factors that Enable an Internet Kill Switch in Democratic and Non-Democratic Regimes, Yale Information Society Project
Pakistan's 4-day internet shutdown was the final straw for its tech workers, Rest of World
Libel laws and the First Amendment in the United States are meant to hit a sweet spot — protecting reputations and facts while also affording journalists the freedom to publish unflattering information about powerful people that the public needs to know. But disinformation is increasingly threatening that balance. In this episode, law professor RonNell Andersen Jones explains what could be at risk. “If it's too easy for somebody to sue for defamation over a falsehood, then powerful people will hold that over everybody's head and threaten to sue their critics and will silence a lot of conversation that we ought to be having," she said. Jones is a Distinguished Professor and Teitelbaum Chair in Law at the University of Utah and an Affiliated Fellow at Yale Law School's Information Society Project. A former newspaper reporter and editor, Jones is a First Amendment scholar who now teaches, researches and writes on legal issues affecting the press and on the intersection between media and the courts. Listen to the conversation to learn more.

Additional Reading:
The "Actual Malice" Standard Explained, Protect Democracy
Supreme Court Puts First Amendment Limits on Laws Banning Online Threats, The New York Times
The Multibillion Dollar Defamation Lawsuits Against Fox News, Explained, Vox
Dominion CEO Predicts 'Business Ultimately Goes to Zero' Because of 2020 Election Lies, TIME.com

Is that a fact? is a production of the News Literacy Project, a nonpartisan education nonprofit building a national movement to create a more news-literate America. Our host is Darragh Worland, our producer is Mike Webb, our editor is Timothy Kramer, and our theme music is by Eryn Busch.
Hosted by Andrew Keen, Keen On features conversations with some of the world's leading thinkers and writers about the economic, political, and technological issues being discussed in the news, right now. In this episode, Andrew is joined by Albert Fox Cahn, founder and director of the Surveillance Technology Oversight Project. Albert Fox Cahn is the Surveillance Technology Oversight Project's (S.T.O.P.'s) founder and executive director. He is also a Practitioner-in-Residence at NYU Law School's Information Law Institute and a fellow at Yale Law School's Information Society Project, Ashoka, and TED. Albert started S.T.O.P. with the belief that local surveillance is an unprecedented threat to public safety, equity, and democracy. Learn more about your ad choices. Visit megaphone.fm/adchoices
On this episode Gus is joined by Professor Rob Heverly, a law professor at Albany Law School, where he researches and writes about technology, law, and society, covering topics such as the internet, drones, robots, AI, and human augmentation. He is also a 2022 Fellow with the Center for Quantum Networks, where he is researching policy-making surrounding the development and implementation of the Quantum Internet. And that is our topic on this episode: quantum networking, law, and innovation. Hang on, it's a wild ride.

Episode Notes: Robert Heverly is a tenured associate professor of law at Albany Law School, where he has taught since 2010. Professor Heverly has also taught at Michigan State University College of Law and at the University of East Anglia in Norwich, England, and was a Resident Fellow with the Information Society Project at Yale Law School (where he retains an affiliation). Prof. Heverly is a 2022 Fellow with the Center for Quantum Networks, where he is researching policy-making surrounding the development and implementation of the Quantum Internet. Prof. Heverly researches and writes about technology, law, and society, covering topics such as the internet, drones, robots, AI, and human augmentation. He teaches classes in torts, cyberspace law, copyright law, and unmanned aerial vehicles. His article, “More is Different: Liability of Compromised Systems in Internet Denial of Service Attacks,” was recently published in the Florida State University Law Review. Prof. Heverly was Chair of the AALS Internet and Computer Law Section and was the Reporter for the Uniform Law Commission's “Uniform Tort Law Relating to Drones Act.” He holds a J.D. from Albany Law School and an LL.M. from Yale Law School.

Links:
Center for Quantum Networks (CQN): https://cqn-erc.org/
About quantum networks: https://cqn-erc.org/about/
The reversal of Roe v. Wade would make it difficult or impossible for millions of people to obtain abortions, but would also open the doors to criminally prosecute people who seek or obtain an abortion. And in our technological age, that criminalization brings new, frightening opportunities for digital surveillance by law enforcement agencies or anti-abortion vigilantes. In this panel from Aspen Digital, “Digital Surveillance and the Fight for Reproductive Rights,” three experts in digital privacy and civil rights walk us through the risks and existing practices, and share what can be done: Wafa Ben-Hassine from the Omidyar Network, Tiffany Li from University of New Hampshire School of Law and Yale Law School's Information Society Project, and Cynthia Conti-Cook from the Ford Foundation. The panelists are also joined by U.S. Senator Ron Wyden of Oregon, a longtime advocate for digital privacy, and Vivian Schiller, the Executive Director of Aspen Digital, moderates.
Women in the United States are being warned against using menstrual tracking apps or even Googling "abortion" in the wake of a leaked opinion suggesting a landmark legal protection could be overturned. Last week an initial draft majority opinion belonging to Justice Samuel Alito was leaked, indicating the Supreme Court had voted to overturn Roe v. Wade, the legal precedent guaranteeing American women the right to an abortion. The decision is not final until it is published, likely in June. But it's prompted fears about what such a fundamental legal change would mean, not just in terms of abortion access, but for how technology may come into play. Kathryn speaks to Nikolas Guggenberger, executive director of the Information Society Project at Yale Law School, and Daly Barnett, a technologist at the Electronic Frontier Foundation.
Join Allyson in a discussion on Ukraine with a Russian media expert. Benjamin Peters is the Hazel Rogers Associate Professor and chair of Media Studies and affiliated faculty with the School of Cyber Studies at the University of Tulsa, former Director of Russian Studies there, and an affiliated fellow at the Information Society Project at Yale Law School.
You may not know it, but our world has already been basically taken over by free and open source software, or FOSS - specifically, the Linux operating system. Just about every single electronic appliance or device today, from your smartphone to your smart toaster, is running some flavor of the Linux operating system. Furthermore, open source software projects are the bedrock of many for-profit software applications, operating systems, mobile apps and web apps. It's everywhere, and yet you probably know very little about it. Today, Sean O'Brien will give us a little FOSS history lesson, explain why supporting this movement is so important, and even tell us how we might replace some pricey and user-hostile popular software with top-notch free and open alternatives. Sean O'Brien is a lecturer in Cybersecurity at Yale Law School and Chief Security Officer at Panquake.com. He is a Visiting Fellow at the Information Society Project at Yale Law School, where he founded and leads the Privacy Lab initiative. He has been involved in Free and Open-Source Software (FOSS) for approximately two decades, including volunteer work for the Free Software Foundation and FreedomBox Foundation.

Show Links:
Panquake: https://panquake.com/
Yale Privacy Lab: https://privacylab.yale.edu/
It's FOSS website: https://itsfoss.com/
Free Software Foundation: https://www.fsf.org/
Intro to Linux classes: https://itsfoss.com/free-linux-training-courses/
Windows Subsystem for Linux: https://docs.microsoft.com/en-us/windows/wsl/about
System 76: https://system76.com/
Purism: https://puri.sm/
Lineage OS: https://lineageos.org/
Graphene OS: https://grapheneos.org/
Calyx OS: https://calyxos.org/
F-Droid: https://f-droid.org/
LibreOffice: https://www.libreoffice.org/
VLC Media Player: https://www.videolan.org/vlc/
Audacity audio editor: https://www.audacityteam.org/
GIMP photo editor: https://www.gimp.org/
Inkscape illustrator: https://inkscape.org/
CryptPad: https://cryptpad.fr/

Further Info:
Become a Patron! https://www.patreon.com/FirewallsDontStopDragons
Subscribe to the newsletter: https://firewallsdontstopdragons.com/newsletter/new-newsletter/
Would you like me to speak to your group about security and/or privacy? http://bit.ly/Firewalls-Speaker
Generate secure passphrases! https://d20key.com/#/
As part of his Blueprint to End Gun Violence, Mayor Adams wants to expand the use of surveillance technologies, like facial recognition software, to track down potential suspects. Albert Fox Cahn, founder and executive director of the Surveillance Technology Oversight Project (S.T.O.P.), practitioner-in-residence at NYU Law School's Information Law Institute, and a fellow at Yale Law School's Information Society Project and Ashoka, talks about what the mayor has said, and the known pitfalls and controversies around such tech.
Right now in India, there's a legal battle that could portend the future of the internet. In this episode of Arbiters of Truth, Lawfare's miniseries on disinformation and misinformation, Evelyn Douek and Quinta Jurecic spoke with Chinmayi Arun, a resident fellow at the Information Society Project at Yale Law School and an affiliate of the Berkman Klein Center for Internet & Society at Harvard University. She discussed one of the biggest stories about freedom of expression online today—the battle between Twitter and the Indian government, which has demanded that Twitter geoblock a large number of accounts, including the account of a prominent investigative magazine, in response to protests by tens of thousands of farmers across India. Chinmayi walked us through the political context of the farmers' protests, how the clash between Twitter and the Indian government is part of an increasingly constrained environment for freedom of expression in India, and where this battle might end up. See acast.com/privacy for privacy and opt-out information.
In this episode of the DarshanTalks Podcast, host Darshan Kulkarni, Pharm.D, MS, Esq., welcomes back guest Sean O'Brien to talk about data and privacy. Sean O'Brien is a lecturer at Yale Law School and Chief Security Officer at Panquake. He is a Visiting Fellow at the Information Society Project at Yale Law School, where he founded and leads the Privacy Lab initiative. At the start of the podcast, he explains his journey at Yale and the Privacy Lab. Privacy Lab focuses on hands-on tech implementation, showing people how to make their data more secure, private, and potentially anonymous. Its digital self-defense workshops, once called "crypto parties," emphasize hands-on practice rather than just teaching the basics and principles of a subject. Sean describes teaching students ranging from professionals, paralegals, and law graduates to undergraduates seeking to learn about cyber tech. In these cyber security exercises, students came up with their own hacking concepts; for example, they hacked other students' microphones in the same room, which was a fun and engaging way to learn. The conversation turns to social networking and the privacy implications of big platforms like Facebook and Twitter, which leave many people feeling insecure about their data. Sean talks about Panquake, a decentralized short-messaging platform built on blockchain that aims to be a game-changer for social networking by protecting people's privacy. He describes three factors that determine whether a privacy-respecting social network succeeds: critical mass, users' hunger and willingness to pay for privacy, and the history of alternative and replacement social networks. The Panquake blockchain model will be based on Byzantine fault tolerance. 
Connect with Sean O'Brien:
Email: sean.obrien@yale.edu
Twitter: @seanodiggity
Websites: TalkLiberation.com and Panquake.com

----

Disclaimers:
1. This discussion is merely an oral discussion and should not be relied upon solely on its own to support any conclusion of law or fact.
2. The discussion does not and should not reflect any individual product's status as safe, efficacious, adulterated, or misbranded, or as meeting or not meeting the expectations of a local, state, federal, or international agency or organization.
3. The discussion should not be construed to be complete advice that is right for you and may not necessarily represent a specific product.
4. This discussion is provided for general educational purposes and should not be construed as legal advice, regulatory advice, or medical advice.
5. This does not create an attorney-client relationship.

Learn more about your ad choices. Visit megaphone.fm/adchoices
A whistleblower, a system crash and the United States Congress on its case; Facebook goes under the microscope, yet again.

Contributors:
Pranesh Prakash - Co-founder, Centre for Internet and Society; affiliated fellow, Information Society Project, Yale Law School
Siva Vaidhyanathan - Professor, University of Virginia; author, Antisocial Media
Marianne Franklin - Professor of global media and politics, Goldsmiths, University of London
Mahsa Alimardani - Researcher, Oxford Internet Institute

On our radar: The Pandora Papers - the largest investigation in journalism history - are reverberating through the financial world of the rich and powerful. Producer Flo Phillips tells Richard Gizbert about the biggest ever leaks of offshore data and who they have exposed.

The case of Egypt's jailed TikTok stars: The Egyptian government has been progressively tightening its grip on cyberspace, and female social media influencers are the new targets.

Contributors:
Yasmin Omar - Egypt legal associate, The Tahrir Institute for Middle East Policy; human rights lawyer
Joey Shea - Non-resident scholar, Middle East Institute
Dalia Fahmy - Associate professor, Long Island University, Brooklyn
Paris Marx is joined by Michael Kwet to discuss how digital technologies are used to entrench the power of the United States and its dominant corporations at the expense of the Global South. Michael Kwet is a Visiting Fellow of the Information Society Project at Yale Law School. He got his PhD in Sociology at Rhodes University in South Africa. Follow Michael on Twitter at @Michael_Kwet.
The landmark 1964 Supreme Court decision New York Times Company v. Sullivan shaped libel and defamation law and established constitutional principles that still govern the scope of press protections in America today. The “actual malice” standard established in the decision requires a public official suing for defamation to prove that the newspaper published a false statement “with knowledge that it was false or with reckless disregard of whether it was false or not.” This made it harder for news publications to be sued for libel; yet it also made it more difficult for those defamed to seek redress. Recently, Supreme Court Justices Gorsuch and Thomas in separate opinions have each called for Sullivan to be revisited. Host Jeffrey Rosen moderated a debate over the importance of the Sullivan case and whether or not it should be reconsidered—featuring experts RonNell Andersen Jones, professor of law at the University of Utah and an Affiliated Fellow at Yale Law School's Information Society Project, and David A. Logan, professor of law and former dean at Roger Williams University and author of an article cited by Justice Gorsuch in his opinion questioning Sullivan. In this episode you'll also hear audio from the Supreme Court oral argument of New York Times v. Sullivan, courtesy of Oyez. Additional resources and transcript available in our Media Library at constitutioncenter.org/constitution. Questions or comments about the show? Email us at podcast@constitutioncenter.org.
How does the U.S. Supreme Court talk about the press? RonNell Andersen Jones, professor of law at the University of Utah, takes a look. Professor RonNell Andersen Jones is an Affiliated Fellow at Yale Law School's Information Society Project and the Teitelbaum Chair and Professor of Law at the University of Utah S.J. Quinney College […]
In part 2, expanding upon the impact of algorithms designed as instruments of racism, we discuss the implications of Big Tech and Big Data on the global south with a focus on South Africa. We examine the notion of digital colonialism, how controlled opposition to it mutes our understanding of western tech hegemony, and how a People's Tech (Digital Socialism) can help counter it. We are joined by guests: Michael Kwet, a Visiting Fellow of the Information Society Project at Yale Law School. He is the author of Digital colonialism: US empire and the new imperialism in the Global South, and hosts the Tech Empire podcast. His work has been published at Motherboard, Wired, BBC World News Radio, and Counterpunch. He received his PhD in Sociology from Rhodes University, South Africa. Tshiamo Malatji, an organizer in Bloemfontein, South Africa, focusing on climate change, food sovereignty, and post-natural building as modes of responding to ecological crises. References: https://docs.google.com/document/d/17fylaOq1hOXSxdThFhUK5ZDb3-L_upYaun8MhBU1-sY/edit Twitter - @blackmythspod Instagram - blackmythspod Facebook - The Black Myths Podcast Black Power Media - https://www.youtube.com/playlist?list=PL7_X-VeroWRvx6b9iD0BOZrvAOieHbb8p
In The Cycles of Constitutional Time, Jack Balkin takes an overarching look at the dynamics of constitutional government over the history of the United States. To understand what is happening today, he argues, “we have to think in terms of political cycles that interact with each other and create remarkable — and dark — times.” Single-term presidents, Balkin notes, often coincide with the ends of these cycles, moments where an existing approach to political life has run its course. Since Reagan's ascendency in 1980, Balkin contends, the dominating approach has been characterized by a lack of trust in politicians and big government. But, he suggests, the single-term presidency of Donald Trump could be another iteration in the pattern, and we may be moving towards a more progressive era. Balkin is the founder and director of Yale's Information Society Project, an interdisciplinary center that studies law and new information technologies. He recently spoke with Governing Editor-at-Large Clay Jenkinson.
This episode welcomes Robert Pollin and Jason Hickel to discuss the climate crisis and degrowth. The show is divided into four main parts. First, it lays out key concepts and information about the climate crisis. We then discuss solutions, including a global Green New Deal and post-growth, redistributive solutions for society. Next, we explore the concept of “degrowth”, as understood within the context of colonialism and global inequality. This section includes a conversation between Pollin and Hickel about planetary boundaries and the growing degrowth current within the environmental movement. Finally, we discuss the Biden administration and European policy, as well as workers' movements and international activism from below. Robert Pollin is Co-Director and Distinguished Professor of Economics of the Political Economy Research Institute at the University of Massachusetts Amherst. His most recent book is called Climate Crisis and the Global Green New Deal: The Political Economy of Saving the Planet, co-authored with Noam Chomsky. Jason Hickel is a Visiting Senior Fellow at the International Inequalities Institute at the London School of Economics, and Senior Lecturer at Goldsmiths, University of London. His most recent book is called Less is More: How Degrowth Will Save the World. You can follow Jason on Twitter at @jasonhickel. Michael Kwet is a Visiting Fellow of the Information Society Project at Yale Law School and received his PhD in Sociology from Rhodes University in South Africa. You can follow Michael on Twitter at @michael_kwet. Robert Pollin at UMass-Amherst: https://www.umass.edu/economics/pollin Jason Hickel at Twitter: https://twitter.com/jasonhickel Michael Kwet at Twitter: https://twitter.com/Michael_Kwet Tech Empire at Twitter: https://twitter.com/techempirecast
Today we are joined by Samantha Godwin, a Resident Fellow for the Information Society Project at Yale Law School, to discuss COVID-19 health policy from a legal and philosophical perspective. Back us on Patreon! www.patreon.com/plenarysession Check out our YouTube channel: www.youtube.com/channel/UCUibd0E2kdF9N9e-EmIbUew
This episode features a discussion on the challenges of content moderation at scale with four great experts on the key issues: Tarleton Gillespie, a Principal Researcher at Microsoft Research New England and an Adjunct Associate Professor in the Department of Communication at Cornell University; Kate Klonick, Assistant Professor of Law at St. John's University Law School and an Affiliate Fellow at the Information Society Project at Yale Law School; Jameel Jaffer, the executive director of the Knight First Amendment Institute at Columbia University; and Sarah T. Roberts, Assistant Professor of Information Studies at UCLA. Tech Policy Press fellow Romi Geller and cofounder Bryan Jones discuss news of the day.
Right now in India, there’s a legal battle that could portend the future of the internet. In this episode of Arbiters of Truth, Lawfare’s miniseries on disinformation and misinformation, Evelyn Douek and Quinta Jurecic talked to Chinmayi Arun, a resident fellow at the Information Society Project at Yale Law School and an affiliate of the Berkman Klein Center for Internet & Society at Harvard University. She discussed one of the biggest stories about freedom of expression online today—the battle between Twitter and the Indian government, which has demanded that Twitter geoblock a large number of accounts, including the account of a prominent investigative magazine, in response to protests by tens of thousands of farmers across India. Chinmayi walked us through the political context of the farmers’ protests, how the clash between Twitter and the Indian government is part of an increasingly constrained environment for freedom of expression in India, and where this battle might end up.
On the Gist, importing walkie-talkies into Myanmar. In the Interview, we’re coming into a new constitutional cycle, and it means Democrats could take quite a bit of power. Well, if they don’t blow it. We’ve been in the same Republican-dominated cycle for decades, according to Jack Balkin, author of The Cycles of Constitutional Time. Balkin and Mike talk about the extreme political polarization not seen since the Civil War, constitutional rot, and, just for fun, delve into conservative complaints about de-platforming and free speech. Balkin is Knight Professor of Constitutional Law and the First Amendment and Director of the Information Society Project at Yale Law School. In the spiel, delving into the Trump files ahead of the Impeachment trial. Email us at thegist@slate.com Podcast production by Margaret Kelley and Cheyna Roth. Slate Plus members get bonus segments and ad-free podcast feeds. Sign up now. Learn more about your ad choices. Visit megaphone.fm/adchoices
Data & Society and Stanford PACS host a special book launch: One of the most far-reaching transformations in our era is the wave of digital technologies rolling over—and upending—nearly every aspect of life. Work and leisure, family and friendship, community and citizenship have all been modified by now-ubiquitous digital tools and platforms. Digital Technology and Democratic Theory looks closely at one significant facet of our rapidly evolving digital lives: how technology is radically changing our lives as citizens and participants in democratic governments. To understand these transformations, this book brings together contributions by scholars from multiple disciplines to wrestle with the question of how digital technologies shape, reshape, and affect fundamental questions about democracy and democratic theory. As expectations have whiplashed—from Twitter optimism in the wake of the Arab Spring to Facebook pessimism in the wake of the 2016 US election—the time is ripe for a more sober and long-term assessment. How should we take stock of digital technologies and their promise and peril for reshaping democratic societies and institutions? To answer, this volume broaches the most pressing technological changes and issues facing democracy as a philosophy and an institution.

Speaker Bios

Robyn Caplan | @robyncaplan
Robyn Caplan is a Researcher at Data & Society, and a PhD Candidate at Rutgers University (ABD, advisor Philip M. Napoli) in the School of Communication and Information Studies. She conducts research on issues related to platform governance and content standards.
Her most recent work investigates the extent to which organizational dynamics at major platform companies impact the development and enforcement of policy geared towards limiting disinformation and hate speech, and the role that regulation, industry coordination, and advocacy can play in changing platform policies. Her work has been published in journals such as First Monday, Big Data & Society, and Feminist Media Studies. She has had editorials featured in The New York Times, and her work has been featured by NBC News THINK and Al Jazeera. She has conducted research on a variety of issues regarding data-centric technological development in society, including government data policies, media manipulation, and the use of data in policing.

Lucy Bernholz | @p2173
Lucy Bernholz is a Senior Research Scholar at Stanford University's Center on Philanthropy and Civil Society and Director of the Digital Civil Society Lab. She has been a Visiting Scholar at The David and Lucile Packard Foundation, and a Fellow at the Rockefeller Foundation's Bellagio Center, the Hybrid Reality Institute, and the New America Foundation. She is the author of numerous articles and books, including the annual Blueprint Series on Philanthropy and the Social Economy, the 2010 publication Disrupting Philanthropy, and her 2004 book Creating Philanthropic Capital Markets: The Deliberate Evolution. She is a co-editor of Philanthropy in Democratic Societies (2016, Chicago University Press) and of the forthcoming volume Digital Technology and Democratic Theory. She writes extensively on philanthropy, technology, and policy on her award-winning blog, philanthropy2173.com. She studied history and has a B.A. from Yale University, where she played field hockey and captained the lacrosse team, and an M.A. and Ph.D. from Stanford University.

Rob Reich | @robreich
Rob Reich is professor of political science and, by courtesy, professor of philosophy at the Graduate School of Education, at Stanford University.
He is the director of the Center for Ethics in Society and co-director of the Center on Philanthropy and Civil Society (publisher of the Stanford Social Innovation Review), both at Stanford University. He is the author most recently of Just Giving: Why Philanthropy is Failing Democracy and How It Can Do Better (Princeton University Press, 2018) and Philanthropy in Democratic Societies: History, Institutions, Values (edited with Chiara Cordelli and Lucy Bernholz, University of Chicago Press, 2016). He is also the author of several books on education: Bridging Liberalism and Multiculturalism in American Education (University of Chicago Press, 2002) and Education, Justice, and Democracy (edited with Danielle Allen, University of Chicago Press, 2013). His current work focuses on ethics, public policy, and technology, and he serves as associate director of the Human-Centered Artificial Intelligence initiative at Stanford. Rob is the recipient of multiple teaching awards, including the Walter J. Gores award, Stanford's highest honor for teaching. Reich was a sixth grade teacher at Rusk Elementary School in Houston, Texas before attending graduate school. He is a board member of the magazine Boston Review, of Giving Tuesday, and of the Spencer Foundation. More details at his personal webpage: http://robreich.stanford.edu

Seeta Peña Gangadharan
Dr Seeta Peña Gangadharan is Associate Professor in the Department of Media and Communications at the London School of Economics and Political Science. Her work focuses on inclusion, exclusion, and marginalization, as well as questions around democracy, social justice, and technological governance. She currently co-leads two projects: Our Data Bodies, which examines the impact of data collection and data-driven technologies on members of marginalized communities in the United States, and Justice, Equity, and Technology, which explores the impacts of data-driven technologies and infrastructures on European civil society.
She is also a visiting scholar in the School of Media Studies at The New School, Affiliated Fellow of Yale Law School's Information Society Project, and Affiliate Fellow of Data & Society Research Institute. Before joining the Department in 2015, Seeta was Senior Research Fellow at New America's Open Technology Institute, addressing policies and practices related to digital inclusion, privacy, and “big data.” Before OTI, she was a Postdoctoral Associate in Law and MacArthur Fellow at Yale Law School's Information Society Project. She received her PhD from Stanford University and holds an MSc from the Department of Media and Communications at the London School of Economics and Political Science. Seeta's research has been supported by grants from the Digital Trust Foundation, Institute of Museum and Library Services, Ford Foundation, Open Society Foundations, Stanford University's Center on Philanthropy and Civil Society, and the U.S. Department of Commerce's Broadband Technology Opportunities Program.

Archon Fung | @Arfung
Archon Fung is the Winthrop Laflin McCormack Professor of Citizenship and Self-Government at the Harvard Kennedy School. His research explores policies, practices, and institutional designs that deepen the quality of democratic governance. He focuses upon public participation, deliberation, and transparency. He co-directs the Transparency Policy Project and leads democratic governance programs of the Ash Center for Democratic Governance and Innovation at the Kennedy School. His books include Full Disclosure: The Perils and Promise of Transparency (Cambridge University Press, with Mary Graham and David Weil) and Empowered Participation: Reinventing Urban Democracy (Princeton University Press). He has authored five books, four edited collections, and over fifty articles appearing in professional journals. He received two S.B.s — in philosophy and physics — and his Ph.D. in political science from MIT.
Presidents have a unique way of shaping the polity, though government strength is also tested in other ways. Party polarization, money in politics, economic inequality, and other forces have all hindered the ability to govern. American legal scholar Jack Balkin joins Julian Zelizer and Sam Wang in this week's episode to help walk through these “constitutional cycles.” He explains how the rise and fall of parties, polarization and de-polarization, and episodes of republican "decay" and renewal have all shaped the American polity over time. These are all subjects in his latest book, “The Cycles of Constitutional Time.” Balkin is Knight Professor of Constitutional Law and the First Amendment at Yale Law School. He is the founder and director of Yale’s Information Society Project, an interdisciplinary center that studies law and new information technologies. He also directs the Abrams Institute for Freedom of Expression and the Knight Law and Media Program at Yale. *This episode was recorded in September 2020.
Panelists look back on some of the major stories of the 2020 election to unpack the impact of disinformation on voters. Many polls miss the mark during election season, but their role on election night remains influential. Social media informs many voters, but seems to cause further polarization. Is there any room for civility in American politics? RonNell Andersen Jones, Affiliated Fellow at Yale Law School's Information Society Project and the Lee E. Teitelbaum Endowed Chair and Professor of Law at the University of Utah S.J. Quinney College of Law; Boyd Matheson, opinion editor for the Deseret News; and Max Roth, anchor with Fox13 News, join host Jason Perry on this week’s Hinckley Report.
On November 2, 2020, the Federalist Society's Illinois Student Chapter hosted Logan Beirne for a discussion of the history of presidential power. Logan Beirne is a Clinical Lecturer in Law and a Faculty Fellow at the Information Society Project at Yale Law School. He is also the Chief Executive Officer of Matterhorn Transactions, Inc., a legal information services company that provides transaction term language and market trend analytics across the US, UK, and Canada.

Featuring:
- Logan Beirne, Clinical Lecturer in Law and Faculty Fellow, Information Society Project, Yale Law School
Laura DeNardis is the author of The Internet in Everything: Freedom and Security in a World with No Off Switch (Yale University Press). We discuss what this means for our everyday lives, and she gives us practical and simple tips to protect ourselves. “As always, with any internet issue. It's a tension between the great conveniences and the great economic benefit and the human aspects that are you know, it's just so convenient to use them and it's helping us to live better lives. There's a tension between that and the danger of insecure devices.” The Internet of Things, or IoT, refers to all the physical devices in the world that are connected to the Internet, collecting data every second of every day. Laura E. DeNardis is Professor and Interim Dean of the School of Communication at American University in Washington, DC, where she also serves as Faculty Director of the Internet Governance Lab. She has authored six books and numerous articles on the political implications of Internet technical architecture and governance. Dr. DeNardis is currently an Adjunct Senior Research Scholar in the Faculty of International and Public Affairs at Columbia University and an affiliated fellow at the Information Society Project at Yale Law School. “This brilliant and essential book does nothing less than alter our paradigm for thinking about the internet.”—Anupam Chander, author of The Electronic Silk Road: How the Web Binds the World Together in Commerce “This is a must-read.”—Vint Cerf, Internet Pioneer “With more things than people connected to the Internet, we enter a cyber-physical world of opportunities and threats. Laura DeNardis is the perfect guide to this strange new world.”—Joseph S. Nye, Jr., Harvard University “A crucial read for understanding the unseen but powerful mechanisms and standards which shape security and policy issues impacting everyone.”—Marietje Schaake, Member of European Parliament 2009-2019

WEBSITE: Information on The Internet in Everything: Freedom and Security in a World with No Off Switch
Her recent article on The Conversation: 'Internet of things' could be an unseen threat to elections
Today, Kate Klonick is back as the guest host. She is an assistant professor at St. John’s Law School, a fellow at the Information Society Project at Yale Law School, and a researcher of the intersection between law and tech. She’s also co-host of a daily YouTube series called In Lieu of Fun. On the Gist, in 2020, every online company has community standards and manually reviews user content. In the interview, Kate talks to Harvard law professor Noah Feldman about his idea for Facebook to create a Supreme Court to adjudicate disputes over speech. They discuss how he came up with the idea and pitched it to Sheryl Sandberg and Mark Zuckerberg, the influences it draws from political systems, and the size of the case it should choose as its first. Feldman hosts the podcast Deep Background. In the spiel, Facebook’s oversight board could be the start of something revolutionary within big tech. Email us at thegist@slate.com Podcast production by Daniel Schroeder and Margaret Kelley. Slate Plus members get bonus segments and ad-free podcast feeds. Sign up now. Learn more about your ad choices. Visit megaphone.fm/adchoices
We are halfway through our guest hosting week. Today, Kate Klonick takes the mic. She is an assistant professor at St. John’s Law School, a fellow at the Information Society Project at Yale Law School, and a researcher of the intersection between law and tech. She’s also co-host of a daily YouTube series called In Lieu of Fun. On the Gist, how Kate found herself on a daily live show during quarantine. In the interview, Kate talks to Ben Wittes, senior fellow at the Brookings Institution, co-founder and editor-in-chief of Lawfare, and Kate’s co-host on In Lieu of Fun. They discuss how the global pandemic spurred a need for intelligent discussion in a less than lateral way. Guests on their show have ranged from apiarists to the former president of Estonia. Wittes wanted to build something that welcomed a community, allowing for audience input around guest selection and conversation topics. Along the way, he found an avenue that continues to forge new friendships without the stale in-person meet and greets we so often find at happy hours. In the spiel, what it’s like to build an online community during this crisis. Email us at thegist@slate.com Podcast production by Daniel Schroeder and Margaret Kelley. Slate Plus members get bonus segments and ad-free podcast feeds. Sign up now. Learn more about your ad choices. Visit megaphone.fm/adchoices
“Internet Speech Will Never Go Back to Normal,” declared the headline of a recent Atlantic article by law professors Jack Goldsmith and Andrew Keane Woods. The piece argues that the U.S. must learn from China in regulating the internet. “[S]ignificant monitoring and speech control are inevitable components of a mature and flourishing internet,” the authors write, “and governments must play a large role in these practices to ensure that the internet is compatible with a society’s norms and values.” But is this conclusion the only one available from the fallout of the coronavirus crisis? Or are there other ways to ensure a mature and flourishing internet in which free speech and public health can coexist? And could Facebook’s new Oversight Board be one of the answers? Here to discuss the issue are two of the biggest experts on the subject of internet law and platform regulation: Daphne Keller, Platform Regulation Director at the Stanford Cyber Policy Center (formerly an Associate General Counsel at Google); and Kate Klonick, assistant professor at St. John’s University teaching internet law and information privacy, and a fellow at Yale Law School’s Information Society Project.

In this episode we discuss:
Whether social media platforms like Facebook, Twitter, and YouTube have been paragons of responsibility or the lapdogs of censorious governments when it comes to content moderation?
If the current crisis justifies lowering the threshold for when content is deemed “harmful”, or if we should be even more vigilant about what stays online?
Are there specific problems in the policies and guidelines laid out by health authorities like WHO and CDC, which have changed their position on issues like facemasks and included inaccurate information about the nature of COVID-19?
What is the impact and outcome of automated content moderation, based on its performance during the pandemic?
Whether democracies — particularly European ones — have weakened online freedom by choosing to respond to legitimate concerns about hate speech, disinformation, and terrorist content with illiberal laws?

Show notes:
- Jack Goldsmith and Andrew Keane Woods in The Atlantic: “Internet Speech Will Never Go Back to Normal”
- Samuel Walker: “Hate Speech: The History of an American Controversy”
- Daphne Keller’s Hoover Institution essay: “Who Do You Sue?”

Why have kings, emperors, and governments killed and imprisoned people to shut them up? And why have countless people risked death and imprisonment to express their beliefs? Jacob Mchangama guides you through the history of free speech from the trial of Socrates to the Great Firewall. You can subscribe and listen to Clear and Present Danger on Apple Podcasts, Google Play, YouTube, TuneIn, and Stitcher, or download episodes directly from SoundCloud. Stay up to date with Clear and Present Danger on the show’s Facebook and Twitter pages, or visit the podcast’s website at freespeechhistory.com. Email us feedback at freespeechhistory@gmail.com.
After years of controversial content moderation decisions, from deepfakes to deplatforming, Facebook is trying something new. In January, the social network announced that its new Oversight Board, which will act as a sort of supreme court for controversial content, will begin hearing cases this summer. Could this independent board change the way we govern speech online? Guest: Kate Klonick, assistant professor at St. John’s University School of Law, and fellow at the Information Society Project at Yale. Learn more about your ad choices. Visit megaphone.fm/adchoices
If Then | News on technology, Silicon Valley, politics, and tech policy
In this episode from Lawfare's Arbiters of Truth series on disinformation in the run-up to the 2020 election, Quinta Jurecic, Evelyn Douek, and Alina Polyakova spoke with Tiffany Li, a visiting professor at Boston University and a fellow at the Information Society Project at Yale Law School. Tiffany writes on all the issues discussed on this podcast—disinformation, misinformation, and platform governance—but with an additional twist. She’s also a privacy scholar. They talked about how privacy law can inform platform governance, and how prioritizing privacy might help tackle disinformation—as well as what tensions there might be between those two goals.
Michael Kwet is a Visiting Fellow of the Information Society Project at Yale Law School. His perspective on big data and corporate involvement in education invites us to reconsider our assumptions about analytics and automated education. This episode will get you thinking! Interview: https://episodes.castos.com/onlinelearninglegends/032-Mike-Kwet-Final.mp3 | recorded September 2019. Mike's profile: https://law.yale.edu/mike-kwet

Nominated papers (free to access):
- Kwet, M. (2018). Digital colonialism: US empire and the New Imperialism in the Global South. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3232297
- Kwet, M. (2017). Operation Phakisa Education: Why a secret? https://firstmonday.org/ojs/index.php/fm/article/view/8054/6585
- Kwet, M. (2019). Smart CCTV networks are driving an AI-powered apartheid in South Africa. https://www.vice.com/en_uk/article/pa7nek/smart-cctv-networks-are-driving-an-ai-powered-apartheid-in-south-africa
- Kwet, M. (2019). Digital colonialism: South Africa’s education transformation in the shadow of Silicon Valley. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3496049
- Kwet, M. (2019). In stores, secret surveillance tracks your every move. https://www.nytimes.com/interactive/2019/06/14/opinion/bluetooth-wireless-tracking-privacy.html

Nominated paper (may require payment):
- Kwet, M. (2019). Digital colonialism: US empire and the new imperialism in the Global South. Race & Class, 60(4). https://doi.org/10.1177/0306396818823172

Twitter: @Michael_Kwet
Podcast Description "Everyone wanted diversity for the board, but it was really unclear as to how on Earth you have diversity over an entire globe with only 40 people on the entire board." Kate Klonick is an Assistant Professor of Law at St. John's University Law School and an Affiliate Fellow at the Information Society Project at Yale Law School. Her current research focuses on the development of Facebook's new Oversight Board, an independent body that will hear appeals from Facebook users and advise the platform on its online speech policies. Thanks to individual grants from the Knight Foundation, Charles Koch Institute, and MacArthur Foundation, she has amassed over 100 hours of interviews and embedded research with the Governance Team at Facebook that is creating the Board. The results of this research will be published in a law review article in the Yale Law Journal in 2020, but will be available in draft form online in late 2019. Kate holds a JD from Georgetown University Law Center, where she was a Senior Editor at The Georgetown Law Journal and the Founding Editor of The Georgetown Law Journal Online, and a PhD from Yale Law School, where she studied under Jack Balkin, Tom Tyler, and Josh Knobe. Between law school and her time at Yale, she clerked for the Hon. Richard C. Wesley of the Second Circuit and the Hon. Eric N. Vitaliano of the Eastern District of New York. She has a background in cognitive psychology, which she applies to the study of emerging issues in law and technology. Specifically, this has included research and work on the Internet's effect on freedom of expression and private platform governance. Kate also writes and works on issues related to online shaming, artificial intelligence, robotics, content moderation, algorithms, privacy, and intellectual property.
Her work on these topics has appeared in the Harvard Law Review, the Southern California Law Review, the Maryland Law Review, The New Yorker, The New York Times, The Atlantic, Slate, Lawfare, Vox, The Guardian, and numerous other publications.
Additional Resources:
Relevant Twitter Thread
Establishing Structure and Governance for an Independent Oversight Board
Inside the Team at Facebook That Dealt with the Christchurch Shooting
Twitter: Kate Klonick
Become a #causeascene Podcast sponsor, because disruption and innovation are products of individuals who take bold steps in order to shift the collective and challenge the status quo. Learn more > All music for the #causeascene podcast is composed and produced by Chaos, Chao Pack. Listen on SoundCloud. Listen to more great #causeascene podcasts: full podcast list >
In this episode Aaron Mak learns about all the ways China is using cyber warfare to disrupt the efforts of protesters in Hong Kong. His guest is Nick Frisch, a fellow at Yale’s Information Society Project and a scholar of media and technology in the Chinese speaking world. Frisch was recently in Hong Kong as a fellow at the Journalism and Media Studies Center at Hong Kong University. After the interview, Shannon Palus joins the show for this week’s edition of Don’t Close My Tabs. Learn more about your ad choices. Visit megaphone.fm/adchoices
If Then | News on technology, Silicon Valley, politics, and tech policy
The reaction against the ever-growing amount of information collected by tech giants has led to proposals ranging from self-regulation to strict GDPR-style privacy rules, and even the potential break-up of larger companies. But could treating tech companies as information fiduciaries, creating a legal obligation to be trustworthy in their use of our data, help solve this privacy problem? Ash is joined by Jack Balkin, Knight Professor of Constitutional Law and the First Amendment at Yale Law School and founder of Yale's Information Society Project, and Mike Godwin, senior fellow of technology and innovation at the R Street Institute. For more, see Balkin's work on the subject (law review article, website, Balkinization blog), and Godwin's book, The Splinters of our Discontent.
In the first of two episodes recorded at SXSW in Austin, Texas, Jonathan talks all things artificial intelligence with Azeem Azhar, editor of the Exponential View newsletter; Tiffany Li, Resident Fellow at the Information Society Project at Yale Law School; and Meredith Broussard, data journalism professor at NYU. We've also been working on another podcast, exploring the future of digital identity with a range of global experts. It's part of the Good ID project, and the podcast is called Inside Good ID. It's available wherever you listen to Government vs the Robots, so please do check it out and let us know what you think! See acast.com/privacy for privacy and opt-out information.
This episode features Jennifer Ange, STLR Staffer, talking with Dr. Moran Yemini about freedom of speech in the new digital age. In his recent article published in STLR, Dr. Yemini argues that the digital age presents a new irony of free speech. The popular notion that the Internet promotes freedom of expression may be too simplistic. In his view, the Internet, while it strengthens our capacity for expression, also limits the liberty aspect of expression. Dr. Yemini received his Ph.D. in Law from the University of Haifa, where he is also a Senior Fellow at the Center for Cyber, Law & Policy. He is currently a Visiting Fellow at the Information Society Project at Yale Law School and at the Digital Life Initiative at Cornell Tech. Dr. Yemini's article, THE NEW IRONY OF FREE SPEECH, can be found in our most recent issue. To find that article, and all our content examining the intersection of science, technology, and the law, please visit our website, STLR.org. We'd love your help in making this podcast better. If you like what we're doing, please subscribe, rate, and leave a review on iTunes, Google Play, Spotify, or wherever you get your podcasts. We'd also love to hear from you. Please send us an email at STLRpodcast@gmail.com. Nothing in this podcast should be considered legal advice. If you think you need legal assistance, consult a lawyer, not a podcast. Opening and Closing Themes by Jonathan Coulton ("The Future Soon" and "Ikea")
Tiffany C. Li is an attorney and Resident Fellow at Yale Law School's Information Society Project. She frequently writes and speaks on the privacy implications of artificial intelligence, virtual reality, and other technologies. Our discussion is based on her recent paper on the difficulties of getting AI to forget. In this second part, we continue our discussion of GDPR and privacy, and then explore some cutting-edge areas of law and technology. Can AI algorithms own their creative efforts? Listen and learn. Guidance for GDPR Right to be Forgotten Cindy Ng We continue our discussion with Tiffany Li, an attorney and Resident Fellow at Yale Law School's Information Society Project. In part two, we discuss non-human creators of intellectual property and how they could potentially impact the right to be forgotten, as well as the benefits of multi-disciplinary training, where developers take a law class and lawyers take a tech class. Andy Green So do you think the regulators will have some more guidance specifically for the GDPR right to be forgotten? Tiffany Li The European regulators have typically been fairly good about providing external guidance outside of regulations and outside of decisions. Non-binding guidance documents have been very helpful in understanding different aspects of regulation, and I think that we will have more research done. What I would really love to see, though, is more interdisciplinary research. One problem I think we have in law generally, and in technology law, is the habit of operating in a law-and-policy-only silo. So we have the lawyers, we have the policymakers, we have the lobbyists, everyone there in a room talking about, for example, how we should protect privacy. And that's wonderful, and I've been in that room many times. But what's often missing is someone who actually knows what that means on the technical end.
For example, all the issues that I just brought up are not in that room with the lawyers and policymakers, unless you bring in someone with a tech background, someone who works on these issues and actually knows what's going on. So this is something that's not just an issue with the right to be forgotten or with EU privacy law, but really with any technology law or policy issue. I think that we definitely need to bridge that gap between technologists and policymakers. AI and Intellectual Property Cindy Ng Speaking of interdisciplinary, you recently wrote a really interesting paper on AI and intellectual property, and you describe the future dilemmas that might arise in IP law, specifically involving works by non-human creators. And I was wondering if you can introduce to our listeners the significance of your inquiry. Tiffany Li So this is a draft paper that I've been writing about AI and intellectual property. Specifically, I'm looking at the copyrightability of works that are created by non-human authors, which could include AI, but could also include animals, for example, or other non-human actors. This gets back to that same distinction I mentioned earlier: on one hand, AI that is simply machine learning and super-advanced statistics, and on the other, AI that may be something close to a new type of intelligence. So my paper looks at this from two angles. First, we look at what current scholarship says about who should own creative works that are created by AI or non-humans. And here we have an interesting issue. For example, if you devise an AI system to compose music, which we've seen in a few different cases, the question then is who should own the copyright, or the IP rights generally, over the music that's created. One option is giving it to the designer of the AI system, on the theory that they created the system, which is the main impetus for the work being generated in the first place.
Another theory is that the person actually running the system, the person who literally flipped the switch and hit run, should own the rights, because they provided the creative spark behind the art or the creative work. Other theories exist as well. Some people say that there should be no rights to any of the work, because it doesn't make sense to provide rights to those who are not the actual creators of the work. Others say that we should try to figure out a system for giving the AI rights to the work. And this, of course, is problematic, because AI can't own anything. And even if it could, even if we got to the world where AI is a sentient being, we don't really know what AIs want. We can't pay them. We don't know how they would prefer to be incentivized for their creations, and so on. So a lot of these different theories don't perfectly match up with reality. But I think the prevailing ideas right now are either to create a contractual basis for figuring this out (for example, when you design your system, you sign a contract with whoever you sell it to that lays out all the rights neatly, so you bypass the legal issue entirely), or to think of it as a work-for-hire model: think of the AI system as just an employee who is simply following the instructions of an employer. In that sense, for example, if you are an employee of Google and you develop a really great product, you don't own the product; Google owns that product, right? It's the work-for-hire model. So that's one theory. And what my research is finding is that none of these theories really makes sense, because we're missing one crucial thing. And I think the crucial point they're missing goes back to the very beginning of why we have copyright in the first place, or why we have intellectual property, which is that we want to incentivize the creation of more useful work. We want more artists, we want more musicians, and so on.
So the key question, then, if you look at works created by non-humans, isn't, you know, whether we can contractually get around this issue; the key question is what we want to incentivize. Whether we want to incentivize work in general, art in general, or whether for some reason we think there's something unique about human creation, that we want humans to continually be creating things. And those two different paradigms, I think, should be the way we look at this issue in the future. It's a little high-level, but I think that's an interesting distinction we haven't paid enough attention to yet when we think about the question of who should own intellectual property for works that are created by AI and non-humans generally. Andy Green If we give AIs some of these rights, then it almost conflicts with the right to be forgotten, because now you would need the consent of the AI? Tiffany Li Sure. That's definitely possible. We don't know. I mean, we don't have AI citizens yet, except in Saudi Arabia. Andy Green I've heard about that, yeah. Cindy Ng So since we're talking about AI citizens, if we do extend intellectual property rights to AI citizens, does it mean that they get other kinds of rights, such as freedom of speech and the right to vote? Or is that not the proper way to think about it? Are we treading into the territory of science-fiction movies we've seen, where humans are superior to machines? I know we're just kind of playing around with ideas, but it would be really interesting to hear your insights, especially... it's your specialty. Tiffany Li No problem. I mean, I'm in this field because I love playing around with these ideas. Even though I do continually mention that there is a division between the AI we have now and that futuristic sentient AI, I do think that eventually we will get there. There will be a point where we have AI that can think, for a certain definition of thinking, at least at the level of human beings.
And because those intelligent systems can design themselves, it's fairly easy to assume that they will then design even more intelligent systems, and we'll get to the point where there are super-intelligent AIs that are more intelligent than humans. So the question you ask, then, I think is really interesting. It's the question of whether we should be giving these potential future beings the same rights that we give human beings. And I think that's interesting because it gets down to really a philosophical question, right? It's not a question about privacy or security or even law. It's a question of what we believe is important on a moral level, and of who we believe to be capable of either having morals or being part of a moral calculus. So in my personal opinion, I believe that if we do get to that point, if there are artificially intelligent beings who are as intelligent as humans, who we believe to be almost exactly the same as humans in every way in terms of having intelligence, being able to mimic or feel emotion, and so on, we should definitely look into expanding our definitions of citizenship and fundamental rights. There is, of course, the opposite view, which is that there is something inherently unique about humanity and about life as we see it right now, biological, carbon-based life. But I think that's a limited view, and one that doesn't serve us well if you consider the universe as a whole and the large expanse of time outside of just these few millennia that humans have been on this earth.
Multidisciplinary Training Cindy Ng And to wrap up and bring all our topics together, I want to bring it back to regulations, technology, and training, and I'd like to continue playing with ideas: for the developers who create technology, should we require training so that they take on principles such as the right to be forgotten and privacy by design? You even mentioned the moral obligation for developers to consider all of these elements, because what they'll be creating will ultimately impact humans. And I wonder if they could get the kind of training that we require of doctors and lawyers, so that everyone is working from the same knowledge base. Could you see that happening? I wanted to know what your opinions are on this. Tiffany Li I love that mode of thought. I think that in addition to lawyers and policymakers needing to understand more from technologists, people working in tech definitely should think more about these ethical issues. And I think it's starting; we're starting to see a trend of people in the technology community thinking about how their actions can affect the world at large. That may be partially in the mainstream news right now because of the reaction to the last election and to ideas such as fake news and disinformation. But we see the tech industry changing, and we're accepting, somewhat, the idea that maybe there should be responsibility or ethical considerations built into the role of being a technologist. The way I like to think about it is that regardless of whether you are a product developer, a privacy officer, or a lawyer at a tech company, regardless of what role you have, every action that you take has an impact on the world at large. And this is something that, you know, maybe is giving too much moral responsibility to the day-to-day actions of most people.
But if you consider that any small action within a company can affect the product, and any product can then affect all the users it reaches, you see this easy scaling up from your one action to an effect on the people around you, which can then affect maybe even larger areas and possibly the world. Which is not to say, of course, that we should live in fear of having to decide every single aspect of our lives based on its greater impact on the world. But I do think it's important to remember, especially if you are in a role in which you're dealing with things that might have a really direct impact on things that matter, like privacy, like free speech, like global human rights values, and so on. I think it's important to consider ethics in technology, definitely. And if we can provide training, if we can make this part of the product design process, if we can make this part of what we expect when hiring people, sure, I think it would be great. Adding a tech or information ethics course to the general computer science curriculum, for example, would be great. I also think it would be great to have a tech course in the law school curriculum as well. Definitely, both sides can learn from each other. We do in general just need to bridge that gap. Cindy Ng So I just wanted to ask if you had anything else that you wanted to share that we didn't cover? We covered so many different topics. Tiffany Li So I'd love to take a moment to introduce the work that I'm currently doing. I'm a Resident Fellow at Yale Law School's Information Society Project, which is a research center dedicated to different legal issues involving the information society as we know it. I'm currently leading a new initiative called the Wikimedia and Yale Law School Initiative on Intermediaries and Information. This initiative is funded by a generous grant from the Wikimedia Foundation, the nonprofit that runs Wikipedia.
And we're doing some really interesting research right now on exactly what we just discussed on the role of tech companies, but particularly these information intermediaries or these social media platforms and so on. These tech companies and their responsibilities or their duties, towards users, towards movements, towards governments, and possibly towards the world and larger ideals. So it's a really interesting new initiative and I would definitely welcome different feedback and ideas on these topics. So if people want to check out more information, you can head to our website. It's law.yale.edu/isp. And you can also follow me on twitter @Tiffany, T-I-F-F-A-N-Y-C-L-I. So I would love to hear from any of your listeners and love to chat more about all of these fascinating issues.
Tiffany C. Li is an attorney and Resident Fellow at Yale Law School's Information Society Project. She frequently writes and speaks on the privacy implications of artificial intelligence, virtual reality, and other technologies. Our discussion is based on her recent paper on the difficulties of getting AI to forget. In this first part, we talk about the GDPR's "right to be forgotten" rule and the gap between technology and the law. Consumer Versus Business Interests Cindy Ng Tiffany Li is an attorney and Resident Fellow at the Yale Law School Information Society Project. She is also an expert on privacy, intellectual property, law, and policy. In our interview, we discuss the legal background of the GDPR's right to be forgotten, the hype and promise of artificial intelligence, as well as her paper, "Humans Forget, Machines Remember." The right to be forgotten is a core principle in the GDPR, under which a consumer can request to have their personal data removed from the internet. And I was wondering if you can speak to the tension between an individual's right to privacy and a company's business interests. Tiffany Li So the tension between the consumer's right to privacy and a company's business interests really happens in many different spaces. Specifically, here we're talking about the right to be forgotten, which is the concept that an individual should be able to request that data or information about them be deleted from a website or a search engine, for example. Now, there's an obvious tension between a consumer's right or desire to have their privacy maintained and the company's business interest in having information out there, and also in decreasing the cost of compliance.
With the right to be forgotten in particular, there is that interesting question about whether we should be protecting the personal privacy rights of whoever is requesting that their information be deleted, or whether we should protect the concept that the company should be able to control the information it provides on its service, as well as the larger conceptual ideal of having free speech, free expression, and knowledge out there on the internet. So one argument, outside of this consumer-versus-business tension, is simply that the right to be forgotten goes against the values of speech and expression, because by requesting that information about you be taken down, you are in some ways silencing someone else's speech. AI and the Right to Be Forgotten Andy Green Right. So, Tiffany, I wanted to follow up a little bit. I was wondering if you can give some of the legal background behind the GDPR's right to be forgotten, specifically the Spain v. Google case that you mentioned in your paper on AI and the right to be forgotten. Tiffany Li The most important case in which we discuss the right to be forgotten is the Spanish case that started in 2010. In that year, a Spanish citizen, along with the Spanish DPA, the Data Protection Agency, sued both a Spanish newspaper and Google, the American internet company that is now part of Alphabet. The Spanish citizen argued that Google infringed on his right to privacy, because Google's search results included information related to things that he no longer wanted in the public realm. That's the basic legal framework. Eventually, the case went up to the ECJ, which in 2014 ruled in favor of the Spanish citizen and against Google. Essentially, the court ruled that the right to be forgotten was something that could be enforced against search engine operators. Now, this wasn't a blanket rule.
A few conditions have to be met in order for search engine operators to be forced to comply with the right to be forgotten, and there are various exceptions that apply as well. And I think what's interesting is that even then, people were already discussing the tensions we mentioned before: the tension between consumer rights and business interests, but also the tension between privacy in general and expression and transparency. So it goes all the way back to 2010, and we're still dealing with the ramifications of that decision now. Andy Green Right. So one thing about that decision that maybe a lot of people don't understand is that the Spanish newspaper that originally ran this story still has that content. The court decided, and correct me if I'm wrong, that it had to remain available; it's just that Google's search results could not show it. Tiffany Li Yes. There have been a few other cases with similar fact patterns, and there has been discussion of, you know, whether we can actually force newspapers to delete their archives. I know one person mentioned this, in what is to me a kind of frightening framing: the right to be forgotten, taken to its ultimate endpoint, would essentially mean burning newspaper archives. Especially coming from an American point of view, you know, I'm in the U.S., where free speech is a sacrosanct thing, that is incredibly frightening to think about. The idea that any individual could control what's kept as part of the news media and what's kept as part of our history is a little worrisome. And of course, the right to be forgotten has many conditions on it, and it's not an absolute right without, you know, anything protecting all the other values we discussed. But I think it should be mentioned that there are consequences, and if we take anything to an extreme, the consequences become, well, extreme. Andy Green Extreme, right.
So I'm wondering if you can just explain a little bit about what the right to be forgotten specifically requires of companies. Tiffany Li An interesting point that my coauthors and I discussed in our paper on the right to be forgotten and artificial intelligence is that neither the law back in 2010 nor the upcoming law, the GDPR in 2018, really defines what it means to comply with the right to be forgotten. The laws mention removing records and erasing records, but this isn't clearly defined in technical terms, you know, how to actually comply. And it's especially an issue with current databases and with artificial intelligence and big data in general. We don't know if the law means that you have to delete a record, override a record, replace the record with a null value, or take the data point out of the record entirely. We don't know what this means. Companies aren't told how to comply; they're just told that they absolutely have to, which is problematic. Cindy Ng So deleting is not just as simple as dragging a file to the trash can or clicking delete. I'd like to pivot to artificial intelligence. There's a lot of excitement about the promise of artificial intelligence, and I'm wondering if you can set the stage by highlighting a few benefits and risks and then linking it back to your specific interest in artificial intelligence and the right to be forgotten. Tiffany Li Broadly speaking, I think that artificial intelligence definitely is the way of the future. And I don't wanna over-hype it too much, because I know that right now AI is such a buzzword; it's included in almost any discussion that anyone has about the future, right? On the other hand, I also don't believe that AI is this, you know, horrible monster that will eventually lead to the end of humanity, as some people have put it. I think right now we're dealing with two things.
We're dealing with maybe a soft AI. So, advanced machine learning, or what I call AI that is really just very advanced statistics, right? We have that kind of artificial intelligence that can train itself, that can learn, that can create better algorithms based on the algorithms it's programmed with and the data that we give it. We have that form of artificial intelligence. We do not yet have the form of super-intelligent AI. We don't have, you know, the Terminator AI. That doesn't exist yet, and we're not anywhere close to it. So let's take a step back a little bit. Get away from that idea of the super-intelligent, sentient AI that is either a god or a monster, and get back to what AI is right now. Andy Green So Tiffany, in your recent paper on AI and the right to be forgotten, you talk about AI applications as they are now, and you describe how it's not so easy to erase something from their memory. Tiffany Li In our paper, we look at a few different case scenarios. I think the first issue to bring up is what I already mentioned, which is simply that there is no definition of deletion. It's difficult to understand what it means to delete something. In the case of the right to be forgotten, it seems like legislators are treating this as analogous to a human brain, right? We want to be forgotten from the public eye and from the minds of people around us. Translating that to machine intelligence, though, doesn't quite make sense, because machines don't remember or forget in the same way that people do. If you forget something, you can't find a record of it in your brain; you can't think of it in the future. If you want a machine or an artificial intelligence system to forget something, you can do a number of things, as I mentioned: you can override the specific data point, replace it with a null value, delete it from the record, delete it from your system index, and so on. So that's one issue, right?
There's no definition of what deletion means, so we don't really know what forgetting means. I think another issue, if we take a step back and think about machine learning algorithms and artificial intelligence, is what happens when personal information is part of the training data used to train an AI system. For example, say you committed a crime, and the fact of that crime and your personal information linked to it are put into an algorithm that determines the likelihood of any person committing a crime. After adding in your data, that AI system has a slight bias towards believing that people who are similar to your various data points may be more likely to commit a crime, a very slight bias. So when that happens, if you afterwards request for your data to be removed from the system, we get into kind of a quandary. If we just remove the data record, there's a possibility of affecting the entire system, because the training data that the algorithm was trained on is crucial to the development of the algorithm and the AI system. Andy Green Yep. Tiffany Li So there's that first question of: can we even do this? Is it possible? Will it negatively affect these AI systems? Will it actually protect privacy? Because if you delete your data from a system that has already been trained on your data, there may still be a negative effect on you, and the basic goal of the right to be forgotten might not be accomplished through these means. I know that's a long list of questions, but these are a few of the issues we're thinking of when we consider the problem of artificial intelligence in contrast with the right to be forgotten and with privacy in general. There's a lot that hasn't been figured out, which makes it a little problematic that we're legislating before we really know the technical ways to comply with the legislation.
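The compliance ambiguity described in this exchange can be made concrete with a small sketch. Below is a minimal example in Python; the record store, user IDs, and the trivial "model" (just the mean of a score column) are entirely hypothetical stand-ins, not any real system or any method proposed in Li's paper. It shows that "delete," "null out," and "de-index" are three inequivalent operations, and that a model trained before the deletion still reflects the removed data unless it is retrained.

```python
# Hypothetical toy data: user_id -> some personal score.
records = {1: 72, 2: 85, 3: 91}
index = {1, 2, 3}  # a search index over user ids

# "Train" a trivial model on the data; here the model is just the mean.
model_before = sum(records.values()) / len(records)

# Three inequivalent ways to "forget" user 2:
nulled = dict(records)
nulled[2] = None                                        # override with a null value
deleted = {k: v for k, v in records.items() if k != 2}  # remove the record itself
deindexed = index - {2}                                 # hide from the index only

# Only retraining on the remaining data actually removes
# user 2's influence on the derived model.
model_after = sum(deleted.values()) / len(deleted)

print(model_before)  # still encodes user 2's data (~82.67)
print(model_after)   # retrained without user 2 (81.5)
```

The point of the sketch is the last two lines: whichever of the three "deletion" operations a company picks, the previously trained model is untouched, which is exactly the gap between the legal command ("erase") and the technical reality Li describes.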
Andy Green: That's really fascinating, how the long-term memory that's embedded in these rules is not so easy to erase once you...
This is the final episode in a special series in partnership with The Federation, exploring ethics in tech. In this episode, we talk about 'Functional Sovereignty' with Frank Pasquale. He talks to us about the concepts in his book The Black Box Society: The Secret Algorithms That Control Money and Information. About Frank: Frank Pasquale's scholarship and public speaking translate complex law and policy into accessible writing and presentations. His research agenda focuses on challenges posed to information law by rapidly changing technology. He is presently researching a book on automation and the professions. Frank has testified before the Judiciary Committee of the House of Representatives, appearing with the General Counsels of Google, Microsoft, and Yahoo. He has also presented before a Department of Health & Human Services/Federal Trade Commission Roundtable (on personal health records) and panels of the National Academy of Sciences (on ubiquitous sensor networks and the IoT). Frank has been a Visiting Fellow at Princeton's Center for Information Technology Policy, and a Visiting Professor at Yale Law School and Cardozo Law School. He served as the Schering-Plough Professor in Health Care Regulation and Enforcement at Seton Hall University. Frank is an Affiliate Fellow of Yale Law School's Information Society Project. Frank has been named to the Advisory Boards of the Electronic Privacy Information Center, the Data Competition Institute, Patient Privacy Rights, and the Journal of Legal Education. He has blogged at Concurring Opinions since 2006. His work has been published by the New York Times, Los Angeles Times, Chronicle of Higher Education, Boston Review, and many other media outlets. Hosted by Rebecca Rae-Evans (@rebeccawho), featuring pod regular Greg Ashton (@grgashton). Produced by @paul_yakabovski. Get in touch: Twitter: @techforgoodlive / Instagram: techforgoodlive / Email: hello@techforgood.live
Data & Society welcomes Mike Ananny and Tarleton Gillespie for a conversation with Kate Klonick about the underlying decisions that impact the public's access to media systems and internet platforms. In "Networked Press Freedom: Creating Infrastructures for a Public Right to Hear," Mike Ananny offers a new way to think about freedom of the press in a time when media systems are in fundamental flux. Seeing press freedom as essential for democratic self-governance, Ananny explores what publics need, what kind of free press they should demand, and how today's press freedom emerges from intertwined collections of humans and machines. His book proposes what robust, self-governing publics need to demand of technologists and journalists alike. Tarleton Gillespie's "Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media" investigates how social media platforms police what we post online—and the way these decisions shape public discourse, cultural production, and the fabric of society. Gillespie provides an overview of current social media practices and explains the underlying rationales for how, when, and why “content moderators” censor or promote user-posted content. The book then flips the way we think about moderation, to argue that content moderation is not ancillary to what platforms do, it is essential, definitional, constitutional. And given that, the very fact of moderation should change how we understand what platforms are. Mike Ananny is an associate professor of communication and journalism in the Annenberg School at the University of Southern California (USC), a faculty affiliate with USC's Science, Technology, and Society initiative, and a 2018-19 Berggruen Fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford University. Tarleton Gillespie is a principal researcher at Microsoft Research New England and an affiliated associate professor at Cornell University. 
He co-founded the blog Culture Digitally. His previous book is the award-winning "Wired Shut: Copyright and the Shape of Digital Culture." Kate Klonick is an assistant professor of law at St. John's University Law School and an affiliate at the Information Society Project at Yale Law School, Data & Society, and New America. Her work on networked technologies' effect on the areas of social norm enforcement, torts, property, intellectual property, artificial intelligence, robotics, freedom of expression, and governance has appeared in the Harvard Law Review, Maryland Law Review, New York Times, The Atlantic, Slate, The Guardian, and numerous other publications.
Claudia Haupt discusses competing frameworks for regulating speech on the web. Claudia Haupt is a 2017-18 Data & Society Fellow and a resident fellow with the Information Society Project at Yale Law School. She previously taught at Columbia Law School and George Washington University Law School. Prior to that, she clerked at the Regional Court of Appeals of Cologne and practiced law at the Cologne office of the law firm of Graf von Westphalen, with a focus on information technology law.
On International Women's Day, we're highlighting the stories of several incredibly talented women in tech policy. They discuss what brought them to tech policy, and what drives them on this career path. Featured in this episode are: Gigi Sohn, a Distinguished Fellow at the Georgetown Law Institute for Technology Law & Policy and Mozilla Policy Fellow; Michelle Richardson, Deputy Director of the Center for Democracy and Technology's Freedom, Security, and Technology Project; Dr. Betsy Cooper, executive director of the Berkeley Center for Long-Term Cybersecurity; Cathy Gellis, lawyer with a focus on Internet issues; Jennifer Granick, surveillance and cybersecurity counsel for the ACLU; Carrie Wade, Director of Harm Reduction Policy and Senior Fellow at the R Street Institute; and Tiffany Li, resident fellow at Yale Law School's Information Society Project.
Berkman Klein Center for Internet and Society: Audio Fishbowl
After a photographer left his camera equipment out for a group of wild macaques to explore, the monkeys took a series of photos, including selfies. Once the photos were posted publicly, legal disputes arose around who should own the copyrights — the human photographer who engineered the situation, or the macaques who snapped the photos. This unique case raises the increasingly pertinent question as to whether non-humans — whether they be monkeys or artificial intelligence machines — can claim copyrights to their creations. Jon Lovvorn, Lecturer on Law and the Policy Director of Harvard Law School's Animal Law & Policy Program, hosts a discussion panel featuring Jeff Kerr, the General Counsel of PETA, which sued on behalf of the monkey, and experts on copyright, cyber law, and intermediary liability issues, as well as Tiffany C. Li of Yale Law School’s Information Society Project, and Christopher T. Bavitz and Kendra Albert of Harvard Law School’s Cyberlaw Clinic. More info on this event here: https://cyber.harvard.edu/events/2018/luncheon/01/monkeyselfie
Tiffany Li, who heads the Wikimedia/Yale Law School Initiative on Intermediaries and Information at Yale Law's Information Society Project, joins the crew to discuss algorithms, artificial intelligence, and how they challenge the European Union's so-called Right to Be Forgotten. She also talks about her recent transition from working as an in-house attorney to academia. Listeners can […]
Our guest today is Tiffany Li. She's an attorney and Resident Fellow at Yale Law School's Information Society Project. She's an expert on privacy, intellectual property, and law and policy, and her research includes legal issues involving online speech, access to information, and Internet freedom. She's coauthor of the paper, Humans Forget, Machines Remember: Artificial Intelligence and the Right to Be Forgotten, which will be published soon in the Computer Law & Security Review.
Tiffany C. Li (@tiffanycli) is an attorney and Resident Fellow at Yale Law School's Information Society Project. She is an expert on privacy, intellectual property, and law and policy at the forefront of new technological innovations. Li leads the Wikimedia/Yale Law School Initiative on Intermediaries and Information, where she researches cutting-edge legal issues involving online speech, access to information, and Internet freedom. Additionally, Li is also an Affiliate Scholar at Princeton's Center for Information Technology Policy.
The US Presidential election is less than two months away, and you’ve undoubtedly heard questions about cybersecurity in the lead up to the election (DNC, RNC hacks, etc.). Our resident voting and election cybersecurity expert covers all the issues that could come up this November, and what we should expect. Brian also chatted with journalist Kate Klonick about her recent article with Slate on what really governs online speech (hint – not the 1st Amendment). She breaks it down for us. Additional info from Joe on online voting cybersecurity: *CNN – http://bit.ly/JoePutinVoting *NPR – http://bit.ly/JoeProtectVote Kate is a Resident Fellow at the Information Society Project at Yale and a PhD candidate at Yale Law. Her recent article can be found here: http://bit.ly/KKonlinespeech. Attribution: sounds used from Psykophobia, Taira Komori, BenKoning, Zabuhailo, bloomypetal, guitarguy1985, bmusic92, and offthesky of freesound.org.
Daniel Kreiss is back on the podcast with his new book Prototype Politics: Technology-Intensive Campaigning and the Data of Democracy (Oxford University Press, 2016). Kreiss is associate professor in the School of Media and Journalism at the University of North Carolina at Chapel Hill and an affiliated fellow of the Information Society Project at Yale Law School. Why did it take more than 20 people to write a tweet for the Romney campaign? Why did dozens of new companies emerge from recent Democratic campaigns? Prototype Politics argues that each party has adopted digital technologies in some very different ways and that these differences have had major consequences. Democrats and Republicans have had varied approaches to investing in technology and in technology expertise. Once the technology leaders, Republicans have lagged behind Democrats in recent cycles, investing smaller amounts of money overall and placing much less organizational emphasis on digital strategy. It remains to be seen how these differences will shape the 2016 election, but Prototype Politics offers a fascinating account of how technology has moved to the center of campaign politics. Learn more about your ad choices. Visit megaphone.fm/adchoices
Hidden algorithms make many of the decisions that affect significant areas of society: the economy, personal and organizational reputation, the promotion of information, etc. These complex formulas, or processes, are thought by many to be unbiased and impartial and, therefore, good for automated decision-making. Yet, recent scandals, as well as information uncovered by researchers and investigative reporters have uncovered that these algorithms may not be as neutral as believed. At the same time, there is no mechanism, legal or otherwise, that would force organizations to make these hidden processes transparent for evaluation. In his new book, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015), Frank Pasquale, a professor of law at the University of Maryland, and affiliate fellow at Yale Law School’s Information Society Project, explores the significant influence that hidden processes have on finance, search, and reputation. Pasquale examines the increasing corporate, and government, surveillance of consumers, and the incongruity between the secrecy allowed to corporations in comparison to that allowed to regular citizens. In so doing, he calls for greater oversight, transparency, and enforcement to help restore organizational trust and to combat the possible deleterious effects that technical secrecy may have. Learn more about your ad choices. Visit megaphone.fm/adchoices
U.S. v. Jones, decided by the United States Supreme Court on January 23, 2012, held that a police department’s attachment of a GPS device to an unknowing suspect’s vehicle and the subsequent monitoring of that device constituted a search under the Fourth Amendment. Topics for discussion include the third party doctrine; viable ways for law enforcement agencies to structure investigative processes involving digital technology; an overview of the cases that have come out since Jones that involve GPS tracking; the mosaic theory; and how technology impacts the “poverty exception” to the Fourth Amendment. Panelists Professor Susan Freiwald, Professor of Law, University of San Francisco Professor Stephen E. Henderson, Professor of Law, University of Oklahoma Priscilla (Cilla) Smith, Senior Fellow at the Information Society Project at the Yale Law School Moderator Professor Joseph E. Kennedy, Professor of Law, UNC
On February 14, 2012, Yale Law School’s Information Society Project hosted a panel discussion about gene patents at Yale Law School. Panelists included: Chris Hansen, attorney for the ACLU; Richard Marsh, General Counsel of Myriad Genetics; Rochelle Dreyfuss, Pauline Newman Professor of Law at NYU; and Dr. Allen Bale, Director of the DNA Diagnostic Lab and Professor of Genetics at the Yale School of Medicine. The panelists discussed whether human genes should be patentable. Is isolated DNA a "product of nature" or a "man-made invention?" Do gene patents on balance promote innovation or harm it?
In this Kosmos podcast, I'm joined by Anthony Deardurff, Deputy Director of the Federalist Society Faculty Division, recapping the Federalist Society Conference that was held in Washington, D.C. We talk about the various panels that were held and some of the major ideas that were presented at the conference. For short interviews with some of the paper and panel presenters at the conference, see: J.W. Verret, George Mason School of Law, on The Sovereign Shareholder: Government Ownership and Corporate Law Post-Bailout; Christina Mulligan, Information Society Project at Yale Law School, on her paper regarding Numerus Clausus; William Baude, Stanford Constitutional Law Center, on his paper about the Defense of Marriage Act
A talk show on KZSU-FM, Stanford, 90.1 FM, hosted by Center for Internet & Society Resident Fellow David S. Levine. The show includes guests and focuses on the intersection of technology and society. How is our world impacted by the great technological changes taking place? Each week, a different sphere is explored. This week, David interviews Laura DeNardis, Executive Director of the Information Society Project at Yale Law School and author of Protocol Politics: The Globalization of Internet Governance. For more information, please go to http://hearsayculture.com.
James Grimmelmann is Associate Professor at New York Law School and a member of its Institute for Information Law and Policy. He received his J.D. from Yale Law School, where he was Editor-in-Chief of LawMeme and a member of the Yale Law Journal. Prior to law school, he received an A.B. in computer science from Harvard College and worked as a programmer for Microsoft. He has served as a Resident Fellow of the Information Society Project at Yale, as a legal intern for Creative Commons and the Electronic Frontier Foundation, and as a law clerk to the Honorable Maryanne Trump Barry of the United States Court of Appeals for the Third Circuit. He studies how the law governing the creation and use of computer software affects individual freedom and the distribution of wealth and power in society. As a lawyer and technologist, he aims to help these two groups speak intelligibly to each other. He writes about intellectual property, virtual worlds, search engines, online privacy, and other topics in computer and Internet law. Recent publications include The Ethical Visions of Copyright Law, 77 Fordham L. Rev. 2005 (2009), How to Fix the Google Book Search Settlement, J. Internet L., Apr. 2009, at 1, and The Structure of Search Engine Law, 93 Iowa L. Rev. 1 (2007). He has been blogging since 2000 at the Laboratorium (http://laboratorium.net/). His home page is at http://james.grimmelmann.net/.