Podcast appearances and mentions of Tiffany Li

  • 19 podcasts
  • 28 episodes
  • 31m avg duration
  • 1 monthly new episode
  • Latest: May 6, 2025

POPULARITY

(chart: 2017–2024)



Latest podcast episodes about Tiffany Li

The Chicago Maroon
The Arts Podcast, E7: Tate McRae Got So Close

May 6, 2025 · 33:44


On this episode, your favorite podcast hosts review Tate McRae's most recent album, So Close to What. They are joined by special guest Justin (resident Tate McRae stan and expert) as they spar over whether or not this album was able to achieve what seemed to be Tate's vision. Tune in to hear what we love and hate from Tate! Hosted by: Elizabeth Eck, podcast editor; Tiffany Li, editor-in-chief; Nolan Shaffer, arts editor. Edited by: Tiffany Li, editor-in-chief.

The Chicago Maroon
The Arts Podcast, E6: What We're Listening To Recently

Apr 25, 2025 · 40:07


For this episode, each of your favorite arts podcast hosts picked three songs that they have been listening to recently and played them for everyone. There was some fierce debate over who has the better music taste (and deep disagreement on how to pronounce Bladee's name). Tune in to find out what's been playing in our ears! Hosted by: Elizabeth Eck, podcast editor; Tiffany Li, editor-in-chief; Nolan Shaffer, arts editor. Edited by: Elizabeth Eck, podcast editor; Nolan Shaffer, arts editor.

The Chicago Maroon
The Arts Podcast, E5: Looking Back At the Grammys

Apr 3, 2025 · 27:10


Your hosts are back with an episode on the Grammy Awards! Who deserved it? Who didn't? Find out with Elizabeth, Nolan, and Tiffany as they discuss, disagree, and dissect. Hosted by: Elizabeth Eck, podcast editor; Tiffany Li, editor-in-chief; Nolan Shaffer, arts editor.

Murder In The Black
The Shadows of Betrayal | Keith Green

Dec 12, 2024 · 29:33


This episode explores the heart-wrenching case of Keith Green, a man who vanished under mysterious circumstances in 2016, only for his remains to be discovered later. The investigation into his disappearance uncovers a complicated tapestry of interpersonal relationships, financial conflicts, and profound betrayal, prominently featuring his ex-girlfriend Tiffany Li and her partner Kaveh Bayat. The intricate dynamics between these individuals suggest that they may have played significant roles in the events leading to Keith's tragic fate. Despite substantial evidence implying their involvement in Keith's disappearance and subsequent death, both Tiffany Li and Kaveh Bayat were acquitted at a trial that raised serious concerns about the efficacy of the legal system. The proceedings exposed numerous flaws in the investigative and judicial processes, prompting critical discussions about how such cases are handled by law enforcement and the courts. As the trial unfolded, it became clear that the complexities of the case extended beyond mere facts, highlighting the often ambiguous nature of evidence and the challenges of achieving justice.

Sources for this episode:

News articles
  • ABC7 News, "The disappearance of Millbrae resident Keith Green," December 20, 2019
  • ABC7 News, "EXCLUSIVE: What Hillsborough heiress Tiffany Li is paying in lawsuit," January 11, 2024
  • SFGATE, coverage detailing the trial and subsequent acquittal of Tiffany Li and Kaveh Bayat, focusing on the complexities of the legal proceedings
  • KRON4, articles discussing the emotional aftermath and impact of the case on Keith Green's family and the community

Court documents
  • San Mateo County Superior Court, records from the trial concerning the murder of Keith Green, detailing evidence and testimonies presented in court

Keep up with us:
  • Instagram: @murderintheblack
  • Facebook: Murder In The Black Podcast
  • Website: www.murderintheblackpodcast.com

Chapters:
  00:00 Introduction and Case Overview
  01:38 The Disappearance of Keith Green
  05:03 Investigation and Discovery of Keith's Body
  10:57 Tiffany Li's Involvement and Relationship Dynamics
  18:48 The Arrests and Legal Proceedings
  24:35 Trial Outcomes and Public Reaction
  26:35 Key Takeaways and Reflections
  28:16 New Chapter
  29:02 trueCrime-outro-high-long.wav

The Generation Why Podcast
Keith Green - 597

Nov 18, 2024 · 64:49


April 29th, 2016. San Mateo County, California. Keith Green, a young father of two, was reported missing. He was last seen with his former partner Tiffany Li. Tiffany told the police that she and Keith parted ways at the Millbrae Pancake House, but cell phone data showed that Keith followed Tiffany to her mansion in Hillsborough on the night of his disappearance. For bonus episodes and outtakes visit: patreon.com/generationwhy. Listen ad-free with Wondery+. Join Wondery+ for exclusives, binges, early access, and ad-free listening, available in the Wondery App: https://wondery.app.link/generationwhy. See the Privacy Policy at https://art19.com/privacy and the California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The Chicago Maroon
The Arts Podcast, E3: The Future of Pop

Oct 28, 2024 · 21:33


We're back with episode three of the Arts podcast! This week your hosts Elizabeth, Nolan, and Tiffany discuss Chappell Roan and Sabrina Carpenter and their respective visions for the future of pop music. The hosts learned that art can get political! Tune in to hear their thoughts on why both artists have suddenly blown up after years of being in the music industry, the rise and fall of Chappell Roan's image, and whether Sabrina Carpenter's hyper-sexual presentation is healthy or setting women back. Hosted by: Elizabeth Eck, Nolan Shaffer, Tiffany Li. Edited by: Tiffany Li.

The Chicago Maroon
The Arts Podcast, E2: BRAT Summer

Oct 15, 2024 · 30:04


Tune in to hear the second episode of the Arts podcast, where Elizabeth and Tiffany talk all things Brat (Charli xcx's newest album) and the Sweat tour featuring Charli xcx and Troye Sivan, with Shygirl as the opener. On this episode, they'll also share their thoughts on whether Sam Bankman-Fried's ex-girlfriend is brat, why Kamala Harris's campaign might have picked up and run with Brat's branding, and what tracks they're excited for from the remix album of Brat released last Friday. Hosted by: Elizabeth Eck and Tiffany Li. Edited by: Tiffany Li.

The Chicago Maroon
The Arts Podcast, E1: Taylor Swift and Her Tortured Poets

Jun 8, 2024 · 35:10


Arts is starting a new podcast! On Episode 1 of the Arts Podcast, Elizabeth, Nolan, and Tiffany discuss Taylor Swift's latest (and arguably greatest) album, The Tortured Poets Department. Tune in to hear their reactions, spicy takes, and top track picks! Hosted by: Elizabeth Eck, Nolan Shaffer, and Tiffany Li. Edited by: Tiffany Li.

The Chicago Maroon
The Maroon Weekly, E118

Apr 16, 2024 · 10:28


On Episode 118, Greg and Jake are joined in the studio by Tiffany, a second-year who is joining the section. She tells us about the new contract that Facilities Services workers reached with the University. We also discuss the venture funding won by event-planning app Lynkr and the Microsoft Outlook outage that affected university students. Featuring: Gregory Caesar, Tiffany Li, Jake Zucker. Edited by: Jake Zucker.

Chicago Maroon: News
The Maroon Weekly, E118

Apr 16, 2024 · 10:28


On Episode 118, Greg and Jake are joined in the studio by Tiffany, a second-year who is joining the section. She tells us about the new contract that Facilities Services workers reached with the University. We also discuss the venture funding won by event-planning app Lynkr and the Microsoft Outlook outage that affected university students. Featuring: Gregory Caesar, Tiffany Li, Jake Zucker. Edited by: Jake Zucker.

The Lawfare Podcast
Rational Security: The “Covered in Lyes” Edition

Oct 29, 2023 · 64:03


This week on Rational Security, Alan, Quinta, and Scott came together in the virtual studio to talk over the week's big national security news, including:

  • “Stuck in the Middle (East) with You.” The Biden administration is finding itself increasingly pilloried from both sides for its handling of the Oct. 7 massacre perpetrated by Hamas and Israel's ensuing military response in the Gaza Strip, as the right urges stronger support for Israel while some on the left are becoming more vocal in calling for a ceasefire. How far can the Biden administration walk this tightrope?
  • “Et Tu, Jenna?” Four co-defendants of former President Trump, including Rudy Giuliani's right-hand woman Jenna Ellis, have now pled out and promised to cooperate in the Fulton County prosecution addressing alleged election interference, and media reports indicate that his former Chief of Staff Mark Meadows has accepted an immunity deal to testify before a federal grand jury. What does this all mean for Trump's legal prospects moving forward?
  • “Exit, Stage Far Right.” Former President Trump is reportedly once again planning to exit or diminish NATO if he returns to the White House, a position his contender for the Republican nomination, Vivek Ramaswamy, has endorsed. What is the future of U.S. participation in the NATO alliance?

For object lessons, Alan recommended Tiffany Li's brilliant contribution to McSweeney's Internet Tendency, “Statement from the University on Current Tensions in the Place You're Probably Thinking About When You Read This,” which satirizes exactly what you're thinking about. Quinta lightened the mood by talking about serial killers in recommending Robert Kolker's new piece, “The Botched Hunt for the Gilgo Beach Killer,” in the New York Times Magazine. And Scott directed D.C. locals to his favorite amaro distillery, Don Ciccio & Figli, which is brewing up botanicals right here in the city's own Ivy City neighborhood.

Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

Rational Security
The “Covered in Lyes” Edition

Oct 26, 2023 · 63:16


This week, Alan, Quinta, and Scott came together in the virtual studio to talk over the week's big national security news, including:

  • “Stuck in the Middle (East) with You.” The Biden administration is finding itself increasingly pilloried from both sides for its handling of the Oct. 7 massacre perpetrated by Hamas and Israel's ensuing military response in the Gaza Strip, as the right urges stronger support for Israel while some on the left are becoming more vocal in calling for a ceasefire. How far can the Biden administration walk this tightrope?
  • “Et Tu, Jenna?” Four co-defendants of former President Trump, including Rudy Giuliani's right-hand woman Jenna Ellis, have now pled out and promised to cooperate in the Fulton County prosecution addressing alleged election interference, and media reports indicate that his former Chief of Staff Mark Meadows has accepted an immunity deal to testify before a federal grand jury. What does this all mean for Trump's legal prospects moving forward?
  • “Exit, Stage Far Right.” Former President Trump is reportedly once again planning to exit or diminish NATO if he returns to the White House, a position his contender for the Republican nomination, Vivek Ramaswamy, has endorsed. What is the future of U.S. participation in the NATO alliance?

For object lessons, Alan recommended Tiffany Li's brilliant contribution to McSweeney's Internet Tendency, “Statement from the University on Current Tensions in the Place You're Probably Thinking About When You Read This,” which satirizes exactly what you're thinking about. Quinta lightened the mood by talking about serial killers in recommending Robert Kolker's new piece, “The Botched Hunt for the Gilgo Beach Killer,” in the New York Times Magazine. And Scott directed D.C. locals to his favorite amaro distillery, Don Ciccio & Figli, which is brewing up botanicals right here in the city's own Ivy City neighborhood. Hosted on Acast. See acast.com/privacy for more information.

Voices of VR Podcast – Designing for Virtual Reality
#1258: Using XR & AI to Reclaim and Preserve Indigenous Languages with Michael Running Wolf

Aug 27, 2023 · 38:33


Michael Running Wolf is a Northern Cheyenne/Lakota/Blackfeet indigenous man who grew up in Montana. He worked for Amazon, but eventually left in order to pursue his lifelong goal of building XR experiences that integrate with AI for language education, and to reclaim and preserve indigenous languages. The biggest blocker is that most natural language processing approaches have a hard time dealing with the effectively infinite word forms of polysynthetic languages, like many North American indigenous languages. I had a chance to catch up with Running Wolf at Augmented World Expo, where he talked about his aspirations for researching solutions to these open problems and eventually creating immersive experiences that create a dynamic relational context that alters how indigenous languages are spoken. Also be sure to check out Running Wolf in a panel discussion about "New Technology, Old Property Laws" at the Existing Law and Extended Reality Symposium at the Stanford Cyber Policy Center, along with fellow panelists Mark Lemley, Tiffany Li, and Micaela Mantegna. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

Aspen Ideas to Go
Digital Surveillance and the Fight for Reproductive Rights

May 17, 2022 · 60:12


The reversal of Roe v. Wade would make it difficult or impossible for millions of people to obtain abortions, but it would also open the door to criminal prosecution of people who seek or obtain an abortion. And in our technological age, that criminalization brings new, frightening opportunities for digital surveillance by law enforcement agencies or anti-abortion vigilantes. In this panel from Aspen Digital, "Digital Surveillance and the Fight for Reproductive Rights," three experts in digital privacy and civil rights walk us through the risks and existing practices, and share what can be done: Wafa Ben-Hassine from the Omidyar Network, Tiffany Li from the University of New Hampshire School of Law and Yale Law School's Information Society Project, and Cynthia Conti-Cook from the Ford Foundation. The panelists are also joined by U.S. Senator Ron Wyden of Oregon, a longtime advocate for digital privacy, and Vivian Schiller, the Executive Director of Aspen Digital, moderates.

Marketplace Tech
An old cybersecurity law gets an update (sort of)

Apr 22, 2022 · 7:42


Scraping data from public websites is legal. That’s the upshot of a decision by the Ninth Circuit Court of Appeals earlier this week. LinkedIn had taken data analytics company hiQ to court, arguing that under the federal Computer Fraud and Abuse Act (CFAA) it was illegal for hiQ to “scrape” users’ profile data to analyze employee turnover rates. Tiffany Li, a technology attorney and professor of law at the University of New Hampshire, joins our host Meghan McCarty Carino to talk about how the CFAA fits into today’s world.

Marketplace All-in-One
An old cybersecurity law gets an update (sort of)

Apr 22, 2022 · 7:42


Scraping data from public websites is legal. That’s the upshot of a decision by the Ninth Circuit Court of Appeals earlier this week. LinkedIn had taken data analytics company hiQ to court, arguing that under the federal Computer Fraud and Abuse Act (CFAA) it was illegal for hiQ to “scrape” users’ profile data to analyze employee turnover rates. Tiffany Li, a technology attorney and professor of law at the University of New Hampshire, joins our host Meghan McCarty Carino to talk about how the CFAA fits into today’s world.

Techdirt
Algorithmic Destruction

Apr 12, 2022 · 49:52


People often talk about some kind of "right to deletion" as an approach to fixing online privacy issues. This construct can create problems, as we've seen with Europe's version, but newer proposals don't seem to consider these lessons. A recent paper by law professor Tiffany Li looks at another angle on the issue: how data deletion impacts algorithms and AI-trained models. This week, Tiffany joins us on the podcast to discuss this concept of "algorithmic destruction", and how policy makers are ignoring it. "Algorithmic Destruction" paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4066845
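
The paper's core observation is easy to demonstrate in miniature: deleting a record from a stored dataset does nothing to a model that was already trained on it; only retraining without the record (the "algorithmic destruction" the episode discusses) makes the deletion meaningful. Below is a minimal illustrative sketch of that idea, assuming scikit-learn and NumPy are available; it is a toy illustration of the concept, not code from the paper:

```python
# Sketch: deleting training data does not change an already-trained model.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

model = LogisticRegression().fit(X, y)
before = model.predict_proba([[1.5]])

# Honor a deletion request by dropping the last record from the stored dataset.
X_kept, y_kept = X[:-1], y[:-1]

after = model.predict_proba([[1.5]])
assert np.array_equal(before, after)  # the trained model is untouched

# "Algorithmic destruction": retrain without the deleted record.
retrained = LogisticRegression().fit(X_kept, y_kept)
print(retrained.predict_proba([[1.5]]))  # may now differ
```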

Based on the Evidence
Tiffany Li

Sep 24, 2020 · 40:37


He did it! No, she did it! No, they did it! Can the jury sort out the evidence and figure out whodunnit? Which of three likely suspects will pay for the death of a loving father of two? 

Ipse Dixit
Tiffany Li on Privacy in the Pandemic

Sep 3, 2020 · 38:57


In this episode, Tiffany C. Li, Visiting Clinical Assistant Professor at Boston University School of Law and a fellow at the Yale Information Society Project, discusses her article "Privacy in Pandemic: Law, Technology, and Public Health in the Covid-19 Crisis." Li begins by identifying the many ways in which the current pandemic implicates privacy law, from testing and contact tracing to distance learning. She discusses the ways in which the law protects privacy and the ways in which many privacy values aren't fully realized. She explains why AI and other automated approaches may introduce bias issues. And she reflects on why privacy is essential to public health. Li is on Twitter at @tiffanycli. This episode was hosted by Brian L. Frye, Spears-Gilbert Professor of Law at the University of Kentucky College of Law. Frye is on Twitter at @brianlfrye. See acast.com/privacy for privacy and opt-out information.

The Lawfare Podcast
Tiffany Li on Privacy and Disinformation

Dec 12, 2019 · 38:44


In this episode from Lawfare's Arbiters of Truth series on disinformation in the run-up to the 2020 election, Quinta Jurecic, Evelyn Douek, and Alina Polyakova spoke with Tiffany Li, a visiting professor at Boston University and a fellow at the Information Society Project at Yale Law School. Tiffany writes on all the issues discussed on this podcast—disinformation, misinformation, and platform governance—but with an additional twist. She’s also a privacy scholar. They talked about how privacy law can inform platform governance, and how prioritizing privacy might help tackle disinformation—as well as what tensions there might be between those two goals.

Your Legal Rights
Your Legal Rights: Bay Area's "Trial of the Century" Phenomenon

Dec 4, 2019 · 58:49


Your Legal Rights looks back at the Bay Area's "Trial of the Century" phenomenon, pausing to reflect on the year just passed, with not one but two trials that have been labeled "Trial of the Century" in both media and legal circles. We welcome three guests this Wednesday who recently and successfully defended one such case, that of Tiffany Li, and who can recount their shared experience: Geoff Carr, Carr Yeley & Associates, Redwood City, CA; May Mar, Law Office of May J. Mar, Redwood City, CA; and Lauren Potter, Law Office of Lauren Potter, Redwood City, CA. Co-host: Dean Johnson. Your own questions or comments? Please call toll-free 866-798-8255.

WBAL News Now With Bryan Nehman Podcast
Tiffany Li On FaceApp Privacy Laws

Jul 26, 2019 · 5:41


Tiffany Li, a fellow at Yale Law School's Information Society Project, joins me on-air to discuss the ever-questionable FaceApp privacy laws...

Government vs The Robots
SXSW Part 1: AI

Apr 24, 2019 · 31:35


In the first of two episodes recorded at SXSW in Austin, Texas, Jonathan talks all things artificial intelligence with Azeem Azhar, editor of the Exponential View newsletter; Tiffany Li, Resident Fellow at Yale Law School's Information Society Project; and Meredith Broussard, data journalism professor at NYU. We've also been working on another podcast, exploring the future of digital identity with a range of global experts. It's part of the Good ID project, and the podcast is called Inside Good ID. It's available wherever you listen to Government vs the Robots, so please do check it out and let us know what you think! See acast.com/privacy for privacy and opt-out information.

Inside Out Security
Privacy Attorney Tiffany Li and AI Memory, Part II

Jan 28, 2019 · 14:10


Tiffany C. Li is an attorney and Resident Fellow at Yale Law School's Information Society Project. She frequently writes and speaks on the privacy implications of artificial intelligence, virtual reality, and other technologies. Our discussion is based on her recent paper on the difficulties with getting AI to forget. In this second part, we continue our discussion of GDPR and privacy, and then explore some cutting-edge areas of law and technology. Can AI algorithms own their creative efforts? Listen and learn.

Guidance for the GDPR Right to Be Forgotten

Cindy Ng: We continue our discussion with Tiffany Li, an attorney and Resident Fellow at Yale Law School's Information Society Project. In part two, we discuss non-human creators of intellectual property and how they could potentially impact the right to be forgotten, as well as the benefits of multidisciplinary training, where developers take a law class and lawyers take a tech class.

Andy Green: So do you think the regulators will have some more guidance specifically for the GDPR right to be forgotten?

Tiffany Li: The European regulators typically have been fairly good about providing external guidance outside of regulations and outside of decisions. Guidance documents that are non-binding have been very helpful in understanding different aspects of regulation. And I think that we will have more research done. What I would really love to see, though, is more interdisciplinary research. One problem I think we have in law generally, in technology law, is the sort of habit of operating in a law-and-policy-only silo. So we have the lawyers, we have the policymakers, we have the lobbyists, everyone there in a room talking about, for example, how we should protect privacy. And that's wonderful, and I've been in that room many times. But what's missing often is someone who actually knows what that means on the technical end. For example, all the issues that I just brought up are not in that room with the lawyers and policymakers, really, unless you bring in someone with a tech background, someone who works on these issues and actually knows what's going on. So this is something that's not just an issue with the right to be forgotten or just with EU privacy law, but really any technology law or policy issue. I think that we definitely need to bridge that gap between technologists and policymakers.

AI and Intellectual Property

Cindy Ng: Speaking of interdisciplinary, you recently wrote a really interesting paper on AI and intellectual property, and you describe the future dilemmas that might arise in IP law specifically involving works by non-human creators. I was wondering if you can introduce to our listeners the significance of your inquiry.

Tiffany Li: So this is a draft paper that I've been writing about AI and intellectual property. Specifically, I'm looking at the copyrightability of works that are created by non-human authors, which could include AI, but could also include animals, for example, or other non-human actors. This gets back to that same distinction I mentioned earlier, between one form of AI that is simply machine learning and super-advanced statistics, and another form of AI that may be something close to a new type of intelligence. So my paper looks at this from two angles. First, we look at what current scholarship says about who should own creative works that are created by AI or non-humans. And here we have an interesting issue. For example, if you devise an AI system to compose music, which we've seen in a few different cases, the question then is who should own the copyright, or the IP rights generally, over the music that's created. One option is giving it to the designer of the AI system, on the theory that they created a system which is the main impetus for the work being generated in the first place. Another theory is that the person actually running the system, the person who literally flipped the switch and hit run, should own the rights, because they provided the creative spark behind the art or the creative work. Other theories exist as well. Some people say that there should be no rights to any of the work, because it doesn't make sense to provide rights to those who are not the actual creators of the work. Others say that we should try to figure out a system for giving the AI the rights to the work. And this, of course, is problematic because AI can't own anything. And even if it could, even if we get to the world where AI is a sentient being, we don't really know what they want. We can't pay them. We don't know how they would prefer to be incentivized for their creation, and so on. So a lot of these different theories don't perfectly match up with reality. But I think the prevailing ideas right now are either to create a contractual basis for figuring this out (for example, when you design your system, you sign a contract with whoever you sell it to that lays out all the rights neatly, so you bypass the legal issue entirely) or to think of it as a work-for-hire model: think of the AI system as just an employee who is simply following the instructions of an employer. In that sense, for example, if you are an employee of Google and you develop a really great product, you don't own the product; Google owns that product, right? That's the work-for-hire model. So that's one theory. And what my research is finding is that none of these theories really makes sense, because we're missing one crucial thing, and I think the crucial point they're missing goes back to the very beginnings of why we have copyright in the first place, or why we have intellectual property: we want to incentivize the creation of more useful work. We want more artists, we want more musicians, and so on. So the key question, when you look at works created by non-humans, isn't whether we can contractually get around this issue. The key question is what we want to incentivize: whether we want to incentivize work in general, art in general, or whether for some reason we think that there's something unique about human creation, that we want humans to continually be creating things. And those two different paradigms, I think, should be the way we look at this issue in the future. It's a little high level, but I think that's an interesting distinction that we haven't paid enough attention to yet when we think about the question of who should own intellectual property for works created by AI and non-humans generally.

Andy Green: If we give AIs some of these rights, then it almost conflicts with the right to be forgotten, because now you would need the consent of the AI?

Tiffany Li: Sure. That's definitely possible. We don't know. I mean, we don't have AI citizens yet, except in Saudi Arabia.

Andy Green: I've heard about that, yeah.

Cindy Ng: So since we're talking about AI citizens, if we do extend intellectual property rights to AI citizens, does it mean that they get other kinds of rights, such as freedom of speech and the right to vote? Or is that not the proper way to think about it? Are we treading into the territory of the science fiction movies we've seen, where humans are superior to machines? I know we're just playing around with ideas, but it would be really interesting to hear your insights. Especially since it's your specialty.

Tiffany Li: No problem. I mean, I'm in this field because I love playing around with those ideas. Even though I do continually mention that there is that division between the AI we have now and that futuristic sentient AI, I do think that eventually we will get there. There will be a point where we have AI that can think, for a certain definition of thinking, at least at the level of human beings. And because those intelligent systems can design themselves, it's fairly easy to assume that they will then design even more intelligent systems, and we'll get to that point where there will be superintelligent AIs who are more intelligent than humans. So the question you ask, I think, is really interesting. It's the question of whether we should be giving these potential future beings the same rights that we give human beings. And I think that's interesting because it gets down to a really philosophical question, right? It's not a question about privacy or security or even law. It's the question of what we believe is important on a moral level, and who we believe to be capable of either having morals or being part of a moral calculus. In my personal opinion, if we do get to that point, if there are artificially intelligent beings who are as intelligent as humans, who we believe to be almost exactly the same as humans in every way in terms of having intelligence, being able to mimic or feel emotion, and so on, we should definitely look into expanding our definition of citizenship and fundamental rights. There is, of course, the opposite view, which is that there is something inherently unique about humanity, something unique about life as we see it right now, biological, carbon-based life. But I think that's a limited view, and one that doesn't really serve us well if you consider the universe as a whole and the large expanse of time outside of just these few millennia that humans have been on this earth.

Multidisciplinary Training

Cindy Ng: To wrap up and bring all our topics together, I want to bring it back to regulations, technology, and training, and continue our playful thinking with this idea: should we require training for developers who create technology, so that they internalize principles such as the right to be forgotten and privacy by design? You even mentioned the moral obligation for developers to consider all of these elements, because what they'll be creating will ultimately impact humans. And I wonder if they could get the kind of training that we require of doctors and lawyers, so that everyone is working from the same knowledge base. Could you see that happening? I wanted to know what your opinions are on this.

Tiffany Li: I love that mode of thought. In addition to lawyers and policymakers needing to understand more from technologists, I think that people working in tech definitely should think more about these ethical issues. And I think it's starting; we're starting to see a trend of people in the technology community thinking about how their actions can affect the world at large. That may be partially in the mainstream news right now because of the reaction to the last election and to ideas such as fake news and disinformation. But we see the tech industry changing, and we're accepting, somewhat, the idea that maybe there should be responsibility or ethical considerations built into the role of being a technologist. The way I like to think about it is that regardless of whether you are a product developer, a privacy officer, or a lawyer at a tech company, regardless of what role you have, every action that you take has an impact on the world at large. And this is something that maybe assigns too much moral responsibility to the day-to-day actions of most people. But if you consider that any small action within a company can affect the product, and any product can then affect all the users it reaches, you see this easy scaling up of your one action to an effect on the people around you, which can then affect maybe even larger areas and possibly the world. Which is not to say, of course, that we should live in fear, having to weigh every single aspect of our lives by its greater impact on the world. But I do think it's important to remember, especially if you are in a role in which you're dealing with things that have a really direct impact on things that matter, like privacy, like free speech, like global human rights values. I think it's important to consider ethics in technology, definitely. And if we can provide training, if we can make this part of the product design process, if we can make this part of what we expect when hiring people, sure, I think it would be great. Adding a tech or information ethics course to the general computer science curriculum, for example, would be great. I also think it would be great to have a tech course in the law school curriculum. Definitely, both sides can learn from each other. We do, in general, just need to bridge that gap.

Cindy Ng: I just wanted to ask if you had anything else that you wanted to share that we didn't cover? We covered so many different topics.

Tiffany Li: I'd love to take a moment to introduce the work that I'm currently doing. I'm a Resident Fellow at Yale Law School's Information Society Project, which is a research center dedicated to legal issues involving the information society as we know it. I'm currently leading a new initiative called the Wikimedia and Yale Law School Initiative on Intermediaries and Information. This initiative is funded by a generous grant from the Wikimedia Foundation, the nonprofit that runs Wikipedia. And we're doing some really interesting research right now on exactly what we just discussed: the role of tech companies, particularly information intermediaries such as social media platforms, and their responsibilities or duties towards users, towards movements, towards governments, and possibly towards the world and larger ideals. It's a really interesting new initiative, and I would definitely welcome feedback and ideas on these topics. If people want more information, you can head to our website: law.yale.edu/isp. And you can also follow me on Twitter at @tiffanycli (T-I-F-F-A-N-Y-C-L-I). I would love to hear from any of your listeners and to chat more about all of these fascinating issues.

Inside Out Security
Privacy Attorney Tiffany Li and AI Memory, Part I

Jan 1, 2019 · 11:31


Tiffany C. Li is an attorney and Resident Fellow at Yale Law School's Information Society Project. She frequently writes and speaks on the privacy implications of artificial intelligence, virtual reality, and other technologies. Our discussion is based on her recent paper on the difficulties with getting AI to forget. In this first part, we talk about the GDPR's "right to be forgotten" rule and the gap between technology and the law.

Consumer Versus Business Interests

Cindy Ng: Tiffany Li is an attorney and Resident Fellow at the Yale Law School Information Society Project. She is also an expert on privacy, intellectual property, and law and policy. In our interview we discuss the legal background of the GDPR's right to be forgotten, the hype and promise of artificial intelligence, and her paper, "Humans Forget, Machines Remember." The right to be forgotten is a core principle in the GDPR, under which a consumer can request to have their personal data removed from the internet. I was wondering if you can speak to the tension between an individual's right to privacy and a company's business interest.

Tiffany Li: The tension between the consumer's right to privacy and a company's business interest really plays out in many different spaces. Specifically, here we're talking about the right to be forgotten, which is the concept that an individual should be able to request that data or information about them be deleted from a website or a search engine, for example. Now, there's an obvious tension there between a consumer's right or desire to have their privacy maintained and the company's business interest in having information out there, and also in decreasing the cost of compliance. For the right to be forgotten in particular, there is that interesting question of whether we should protect the personal privacy rights of whoever is requesting that their information be deleted, or whether we should protect the idea that the company should be able to control the information that it provides on its service, as well as the larger conceptual ideal of having free speech, free expression, and knowledge out there on the internet. So one argument, outside of this consumer-versus-business tension, is simply that the right to be forgotten goes against the values of speech and expression, because by requesting that information about you be taken down, you are in some ways silencing someone else's speech.

AI and the Right to Be Forgotten

Andy Green: Right. So, Tiffany, I wanted to follow up a little bit. I was wondering if you can give some of the legal background behind the GDPR's right to be forgotten, specifically the Spain versus Google case that you mentioned in your paper on AI and the right to be forgotten.

Tiffany Li: The main case in which we see the right to be forgotten is the Spanish case that started in 2010. In that year, a Spanish citizen, along with the Spanish DPA, the Data Protection Agency, sued both a Spanish newspaper and Google, the American internet company that is now part of Alphabet. The Spanish citizen argued that Google infringed on his right to privacy, because Google's search results included information related to things that he didn't want in the public realm any longer. That's the basic legal framework. Eventually, the case went up to the ECJ, which in 2014 ruled in favor of the Spanish citizen and against Google. Essentially, the court ruled that the right to be forgotten was something that could be enforced against search engine operators. Now, this wasn't a blanket rule: a few conditions have to be met in order for search engine operators to be forced to comply with the right to be forgotten, and there are various exceptions that apply as well. And I think what's really interesting is that even then, people were already discussing the tension we mentioned before, both the tension between consumer rights and business interests and the tension between privacy in general and expression and transparency. So it goes all the way back to 2010, and we're still dealing with the ramifications of that decision now.

Andy Green: Right. One thing about that decision that maybe a lot of people don't understand is that the Spanish newspaper that originally ran the story still has that content. The court decided, and correct me if I'm wrong, that it had to remain available. It's just that Google's search results could not show it.

Tiffany Li: Yes. There have been instances in a few other cases with similar fact patterns, and there has been discussion of, you know, whether we can actually force newspapers to delete their archives. I know one person mentioned this in what to me is a rather frightening framing: the right to be forgotten, taken to its ultimate endpoint, would essentially mean burning newspaper archives. Especially coming from an American point of view (I'm in the U.S., where free speech is sacrosanct), that is incredibly frightening to think about. The idea that any individual could control what's kept as part of the news media and what's kept as part of our history is a little worrisome. Of course, the right to be forgotten has many conditions on it; it's not an absolute right with nothing protecting all the values we discussed. But it should be mentioned that there are consequences, and if we take anything to an extreme, the consequences become, well, extreme.

Andy Green: Extreme, right. So I'm wondering if you can explain a little bit about what the right to be forgotten specifically requires of companies.

Tiffany Li: An interesting point that my coauthors and I discussed in our paper on the right to be forgotten and artificial intelligence is that the law back in 2010, as well as the upcoming law, the GDPR in 2018, does not really define what it means to comply with the right to be forgotten. The laws mention removing records and erasing records, but this isn't clearly defined in technical terms, that is, how to actually comply. And it's especially an issue with current databases and with artificial intelligence and big data in general. We don't know if the law means that you have to delete a record, overwrite a record, replace the record with a null value, or take the data file or data point out of the record entirely. We don't know what this means. Companies aren't told how to comply. They're just told that they absolutely have to, which is problematic.

Cindy Ng: So deleting is not just as simple as dragging a file to the trash can or clicking delete. I'd like to pivot to artificial intelligence. There's a lot of excitement around the promise of artificial intelligence, and I'm wondering if you can set the stage by highlighting a few benefits and risks and then linking it back to your specific interest in artificial intelligence and the right to be forgotten.

Tiffany Li: Broadly speaking, I think that artificial intelligence definitely is the way of the future. And I don't wanna over-hype it too much, because I know that right now AI is such a buzzword; it's included in really any discussion that anyone has about the future, right? On the other hand, I also don't believe that AI is this, you know, horrible monster that will eventually lead to the end of humanity, as some people have put it. I think right now we're dealing with two things. We have what you might call soft AI: advanced machine learning, or what I describe as just very advanced statistics. We have that kind of artificial intelligence, which can train itself, learn, and create better algorithms based on the algorithms it's programmed with and the data that we give it. We have that form of artificial intelligence. We do not yet have the form of superintelligent AI. We don't have, you know, the Terminator AI. That doesn't exist yet, and we're not anywhere close to it. So take a step back: get away from that idea of the superintelligent, sentient AI who is either a god or a monster, and get back to what AI is right now.

Andy Green: So Tiffany, in your recent paper on AI and the right to be forgotten, you talk about AI apps as they are now, and you describe how it's not so easy to erase something from their memory.

Tiffany Li: In our paper, we look at a few different scenarios. The first issue to bring up is what I already mentioned: there is simply no definition of deletion. It's difficult to understand what it means to delete something, which means that in the case of the right to be forgotten, it seems like legislators are treating this as analogous to a human brain, right? We want to be forgotten from the public eye and from the minds of people around us. Translating that to machine intelligence, though, doesn't quite make sense, because machines don't remember or forget in the same way that people do. If you forget something, you can't find a record of it in your brain; you can't think of it in the future. If you want a machine, or an artificial intelligence system, to forget something, you can do a number of things, as I mentioned. You can overwrite the specific data point, replace it with a null value, delete it from the record, delete it from your system index, and so on. So that's one issue: there's no definition of what deletion means, so we don't really know what forgetting means. Another issue, if we take a step back and think about machine learning algorithms and artificial intelligence, is that personal information can be part of the training data used to train an AI system. Suppose, for example, that you committed a crime, and the fact of that crime, linked to your personal information, is fed into an algorithm that determines the likelihood of any person becoming a criminal. After your data is added, that AI system has a slight bias towards believing that people similar to your various data points may be more likely to commit a crime. A very slight bias. When that happens, if you then request that your data be removed from the system, we get into kind of a quandary. If we just remove the data record, there's a possibility of affecting the entire system, because the training data that the algorithm was trained on is crucial to the development of the algorithm and of the AI system.

Andy Green: Yep.

Tiffany Li: So there's that first question: can we even do this? Is it possible? Will it negatively affect these AI systems? And will it actually protect privacy? Because if you delete your data from a system that has already been trained on your data, there may still be a negative effect on you, and the basic goal of the right to be forgotten might not be accomplished through these means. I know that's a long list of questions, but those are a few of the issues we're thinking about when we consider the problem of artificial intelligence in contrast with the right to be forgotten and with privacy in general. There's a lot that hasn't been figured out, which makes it a little problematic that we're legislating before we really know the technical ways to comply with the legislation.

Andy Green: That's really fascinating, how the long-term memory that's embedded in these rules, that it's not so easy to erase once you...
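
The ambiguity Li describes, that "erasure" could mean a hard delete, an overwrite, a null value, or mere de-indexing, is easy to see in miniature. Below is a minimal illustrative sketch of those four operations on a toy record store; the record layout and index are hypothetical, invented for this example rather than taken from the paper or the episode:

```python
# Four things "erasure" could plausibly mean for one stored record.
# Toy data; each option below is an alternative, shown in sequence.

records = [
    {"id": 1, "name": "Alice", "note": "subject of a deletion request"},
    {"id": 2, "name": "Bob", "note": "unrelated record"},
]
search_index = {"Alice": 1, "Bob": 2}  # name -> record id

# Option 1: de-index only. The record survives; it just stops being searchable.
search_index.pop("Alice", None)

# Option 2: replace fields with null values. The row remains, emptied of content.
records[0] = {"id": 1, "name": None, "note": None}

# Option 3: overwrite in place. The old values are gone, but a row still exists.
records[0] = {"id": 1, "name": "REDACTED", "note": "REDACTED"}

# Option 4: hard delete. The record itself is removed.
records = [r for r in records if r["id"] != 1]

print(records)       # [{'id': 2, 'name': 'Bob', 'note': 'unrelated record'}]
print(search_index)  # {'Bob': 2}
```

Each option leaves different residue, which is exactly why "just delete it" is underspecified as a legal command; and, as the interview notes, none of them touches a model that was already trained on the record.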

Inside Out Security
Cyber & Tech Attorney Camille Stewart: Discerning One's Appetite for Risk (Part Two)

Jun 17, 2018 · 11:32


We continue our conversation with cyber and tech attorney Camille Stewart on discerning one's appetite for risk. In other words, how much information are you willing to share online in exchange for something free? It's a loaded question, and Camille takes us through the lines of questioning one would follow when taking a fun quiz or survey online. As always, there are no easy answers or shortcuts to achieving the state of privacy-savvy nirvana. Also risky: assuming that laws made for the physical world map cleanly onto cyberspace. Camille warns that if we start making comparisons because the connection appears similar at face value but in reality isn't, we may set ourselves up to truly stifle innovation.

Choosing Convenience over Privacy

Camille Stewart: Hi, I'm Camille Stewart. I'm a cyber and technology attorney. I am currently at Deloitte working on cyber risk and innovation issues, so identifying emerging technologies for the firm to work with. Prior to that, I was a senior policy advisor at the Department of Homeland Security, working on cyber infrastructure issues as they relate to foreign policy in the Office of Policy. I was an appointee in the Obama Administration. And then prior to that I was in-house at a cybersecurity company. So I've worked in both the public sector and the private sector on cyber issues.

Cindy Ng: Thanks, Camille. Can you talk a little bit about privacy conceptually? Everybody wants privacy; it seems like a good thing. But why aren't people picking privacy over convenience? Convenience, yes, it's easy, but what about privacy is not getting through to people?

Camille Stewart: I don't think people are looking at the long-term ramifications, right? I know very recently we had the genetic testing case that helped lead to a killer, which is wonderful in that specific instance. But I doubt that anybody who sent in their genetic information, had it tested, and figured out their heritage has thought about how that data might otherwise be used, has read the disclaimer that tells you how your data will be used, whether it's for research, whether it will be used by the police, whether it will be used to create new things. And if anybody remembers Henrietta Lacks, her data was used to create all of these things that are very wonderful, but she never got any compensation for it. Not knowing how your information is used takes away all of your control, right? In a world where your data is commoditized and has a value, you should be in control of the value of your data. Whether it's as simple as giving away our right to choose how and when we disburse our information, or a loss of privacy that leads to security implications, those things are important. For example, you don't care that there's information pooled and aggregated about you from a number of different places, because you've posted it freely or because you traded it for a service that's very convenient, until the moment you realize that because you took the quiz and let this information out, or because you didn't care that your address was posted on a Spokeo-like site or somewhere else, all of the answers to your banking security questions are now easily searched on the internet and probably being aggregated by some random organization. So somebody could easily say, "Oh, what's your mother's maiden name? Okay. And what city do you live in? Okay. And what high school did you go to? Okay." And those are three pieces of information that maybe you didn't post in the same place, but you posted them and didn't care, because you traded them for something or you didn't think it through. And now they can be aggregated, because you use those same answers for everything, and someone has access to your bank account, access to your email, access to all of these things that are really important to you, and your privacy has now translated into your security.

Cindy Ng: I was just talking to my coworkers about this, that it doesn't come naturally to know not to answer these questions. You can be online somewhere, say in a community you trust, and you answer these innocuous questions, and you won't necessarily have the foresight to know that it's gonna come back and hurt you. How did you come up with the reasoning behind, "Oh, I probably shouldn't answer those questions"? Because you kinda have to be a little skillful and have a bit of foresight or some knowledge to even think in the way that you do.

Camille Stewart: No, you're right, there is a level of savvy that has to happen for you to think that way and a level of, like you said, foresight, or a level of reaction, right? Most people aren't thinking that way because they knew it before it happened, but now that the information's out there, they're taking action. And I think there are a lot of people who are neglecting that. So we all, just like organizations, have to assess and decide what our appetite for risk is. And so if you are willing to take the risk because you think either, "They won't look for me," or, "I'm willing to take the hit because my bank will reimburse me," or whatever the decision is that you're making, I want you to be informed. I'm not telling you what your risk calculus is, but I wanna encourage people to understand how information can be used, understand what they're putting out there, and make decisions accordingly. So your answer to that might be, "Look, I don't wanna give up Facebook quizzes or sharing information in a community that I trust on some social site, but what I will do is have a set of answers to those standard password-reset questions that I don't share with anyone, answers that are wrong, where only I know the fake answers I'm using." So instead of your actual mother's maiden name, you're using something else, and you've decided that that's one of the ways you will protect yourself, because you really wanna keep using these other tools. And that might be the way you protect yourself. So I challenge people not to give up the things that they love, but to assess whether or not certain things are worth the risk, right? A quiz on Facebook that makes you provide data to an external third party whose use of it you're not really sure of: not likely worth it. But the quizzes where you can just kinda take them, that might be worth it. I mean, the answers you provide for those questions are still revealing about you, but maybe not in a way that's super impactful, maybe in a way that's likely just for marketing. And if you're okay with that, then take that risk, or go the other way.

Artificial Intelligence and Legal Protections

Cindy Ng: I wanna talk about an article that an attorney, Tiffany Li, wrote about how AI will someday eclipse the intelligence of humans and whether or not AI will have legal protections. She juxtaposed it with the case of the monkey that took a photographer's camera and took a selfie, and how the monkey's lawsuit might be used as precedent for future cases such as AI. Recently, the monkey lost the lawsuit (not the monkey, but PETA). I just wanna hear, from your perspective as a lawyer, how to think about it moving forward.

Camille Stewart: I mean, it remains to be seen how things like AI will translate, especially in creative spaces. It will be hard to determine ownership if a machine creates a work. And it'll come down to a final decision. We'll have to decide how to treat things that are created by a machine and solely by a machine; if there is human input, we might make one decision, versus if it's solely created by a machine, we might say that it is in the public sphere and anybody can use it, that it isn't anything that has any kind of attributable protection. Whereas if there is human input, we would decide that that is something they can then own the production of, right, because they contributed to the making of whatever the end product is. It's hard to speculate, but there will have to be a line drawn, and it's likely somewhere in there, right? The sense that there is enough human interjection, whether that is in the input to whatever creative process is happening by the machine, or in the creation of the process or program or software that is being used and then spits out some creation at the end. There will have to be a law, or I guess at least case law, that kinda dictates where that line is drawn. But those will be the fun things, right? For Tiffany and other lawyers like myself, I think what we enjoy most about the space is that that stuff is unclear. And as these things roll out, you get to make connections between the monkey case and AI, and between other things that have already happened and new processes, new tech, new innovations, and try to help draw those lines.

Cindy Ng: Is there anything we need to look out for that we're not aware of? Or certain connections in the legal space that people in the tech space aren't aware of?

Camille Stewart: So I was gonna say, I don't actually think it is safe, on a broad scale and without some level of assessment, to connect laws made for the physical world to cyberspace. I think it's dangerous, because usually they're not one-for-one. It is the place where most people start, because it's the easiest proposition to compare something that we've seen before with something in cyber. But they don't always compare, or don't always compare in the way that we would think they would. And so it's dangerous to make those comparisons without some level of assessment. I would tell people to challenge those assessments when you hear them and try to poke holes in them, because bad facts make for bad law. And if we take the easy route and just start making comparisons because on their face they seem similar, we may set ourselves up to truly stifle innovation, which is exactly what we're trying to prevent.

Cindy Ng: Can you provide us with an example of why it's dangerous? Because it feels like the natural thing to do.

Camille Stewart: No, you're right, it does feel natural. I'm trying to think of something... I'm thinking more along the lines of likening something physical to something cyber. So let's think about borders, right? Borders in a physical sense are very clear limitations of authority and operation. You can't cross a physical border without being able to use a passport, a visa, things like that, and a country can control physical entry and exit at a border. That is not the same in cyberspace. And to liken the two in the way that you apply rules is not smart, right? Your first inclination is to wanna try to stop data flow at the edge of a country, at the edge of some imaginary border, but it is not realistic, because the internet by its very nature is global and interconnected and, you know, traverses the world freely, and you can't really stop things at that line. Which is why things like GDPR are important for organizations across the world: as a company with a global reach because you're on the internet, you will be affected by how laws are created in different localities. So that's a very big example, but it happens in very discrete ways too when it comes to technology, cyberspace, and physical laws, or the physical space and the laws that operate in it. And so I would challenge people, when you hear someone make a one-for-one connection very easily without some level of assessment, to question it, to make sure it really is the best way to adapt something to the given situation. Take, for example, Tiffany's likening of AI to this monkey case. It's an easy connection to make, because in your head you think, "Well, the monkey is not human, it made a thing, and if it can't own the thing, then when a machine makes a thing online, it can't own the thing either." But it very well may not be the same analysis that needs to be made in that setting, right? The lines may end up being very different, because none of us could create a monkey. So if I can't create a monkey, then it's harder for me to claim the output of that monkey. But I could very well create a machine that could then create an output, and shouldn't I be the owner of that output if I created the machine that created it?

Cindy Ng: Mm-hmm.

Camille Stewart: But that was my point: with things that on their face seem the same, the lines therein might be different, or the analyses might be different altogether, because cyberspace and the physical space are not a one-for-one.

Tech Policy Grind
Episode 2: Nobody Deletes Tiffany Li

Nov 27, 2017 · 25:45


Tiffany Li, who heads the Wikimedia/Yale Law School Initiative on Intermediaries and Information at Yale Law’s Information Society Project, joins the crew to discuss algorithms, artificial intelligence, and how they challenge the European Union’s so-called Right to Be Forgotten. She also talks about her recent transition from working as an in-house attorney to academia. Listeners can […]

The CyberWire
The Right to Be Forgotten with Yale Law School's Tiffany Li

Nov 22, 2017 · 18:31


Our guest today is Tiffany Li. She’s an attorney and Resident Fellow at Yale Law School’s Information Society Project. She's an expert on privacy, intellectual property, and law and policy, and her research includes legal issues involving online speech, access to information, and Internet freedom. She’s coauthor of the paper "Humans Forget, Machines Remember: Artificial Intelligence and the Right to Be Forgotten," which will be published soon in the Computer Law & Security Review.