POPULARITY
There was a time in American public life when “the news” was a nightly ritual, a 30-minute glimpse into the wider world, a way to stay informed. But Dr. Alice E. Marwick says that world is long gone, washed away in recent years by the decline of traditional news outlets and the rise of social media influencers. Marwick is the director of research at Data & Society. She is a qualitative social scientist who researches the social, political, and cultural implications of popular social media technologies. Her most recent book, “The Private is Political: Networked Privacy on Social Media,” examines how the networked nature of online privacy disproportionately impacts members of marginalized communities. Marwick has several other publications in the realm of communications and mass media. She is currently writing her third book, on online radicalization, supported by an Andrew Carnegie fellowship. Marwick was previously an associate professor of communication and a principal researcher and co-founder of the Center for Information, Technology, and Public Life at the University of North Carolina. She was also a Microsoft visiting professor at the Center for Information Technology Policy at Princeton University. See omnystudio.com/listener for privacy information.
Where is the line between fact and fiction in the capabilities of AI? Which predictions or promises about the future of AI are reasonable, and which are the creations of hype for the benefit of the industry or the company making outsized claims?

Arvind Narayanan is a professor of computer science at Princeton University, the director of the Center for Information Technology Policy, and an author. His latest book is AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference.

Greg and Arvind discuss the misconceptions about AI technology, emphasizing the overestimation of AI's capabilities and the importance of understanding predictive versus generative AI. Arvind also points out the ethical and practical issues of deploying AI in fields like criminal justice and HR. Arvind and Greg explore the challenges of regulation, the historical context of technological hype, and how academia can play a role in shaping AI's future. Arvind also reflects on his previous work on Bitcoin and cryptocurrency technologies and shares insights into the complexities and future of AI and blockchain.

*unSILOed Podcast is produced by University FM.*

Show Links:

Recommended Resources:
Deep Learning
Generative Artificial Intelligence
AISnakeOil.com | Newsletter
Bitcoin and Cryptocurrency Technologies | Princeton/Coursera Course

Guest Profile:
Faculty Profile at Princeton University
LinkedIn Profile
Wikipedia Page

His Work:
Amazon Author Page
AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference
Bitcoin and Cryptocurrency Technologies: A Comprehensive Introduction
Fairness and Machine Learning: Limitations and Opportunities
Google Scholar Page

Episode Quotes:

What can the AI community learn from medicine about testing?
28:51: Let's talk about what we can learn from medicine and what maybe we shouldn't take from them. I think that the community internalized a long time ago that the hard part of innovation is not the building, but the testing. And the AI community needs to learn that. Traditionally, in machine learning, the building was the hard part, and everybody would evaluate on the same few sets of benchmarks. And that was okay because they were mostly solving toy problems as they were building up the complexities of these technologies. Now, we're building AI systems that need to do things in the real world. And the building, especially with foundation models, you build once and apply it to a lot of different things. Right? That has gotten a lot easier—not necessarily easier in terms of technical skills, but in terms of the relative amount of investment you need to put into that, as opposed to the testing—because now you have to test foundation models in a legal setting, medical setting, [and] hundreds of other settings. So that, I think, is one big lesson.

Replacing broken systems with AI can escalate the problem
08:36: Just because one system is broken doesn't mean that we should replace it with another broken system instead of trying to do the hard work of thinking about how to fix the system. And fixing it with AI is not even working because, in the hiring scenario, what's happening is that candidates are now turning to AI to apply to hundreds of positions at once. And it's clearly not solving the problem; it's only escalating the arms race. And it might be true that human decision-makers are biased; they're not very accurate. But at least, when you have a human in the loop, you're forced to confront this shittiness of the situation, right? You can't put this moral distance between yourself and what's going on, and I think that's one way in which AI could make it worse, because it's got this veneer of objectivity and accuracy.

Foundation models lower costs and could shift AI research back to academia
27:22: The rise of foundation models has meant that they've kind of now become a layer on top of which you can build other things, and that is much, much less expensive than building foundation models themselves. Especially if it's going to be the case that scaling is going to run out, we don't need to look for AI advances by building 1 billion models and 10 billion models; we can take the existing foundation models for granted and build on top of them. Then I would expect that a lot of research might move back to academia, especially the kind of research that might involve offbeat ideas.
In AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference, Sayash Kapoor and his co-author Arvind Narayanan provide an essential understanding of how AI works and why some applications remain fundamentally beyond its capabilities.

Kapoor was included in TIME's inaugural list of the 100 most influential people in AI. As a researcher at Princeton University's Center for Information Technology Policy, he examines the societal impacts of AI, with a focus on reproducibility, transparency, and accountability in AI systems. In his new book, he cuts through the hype to help readers discriminate between legitimate and bogus claims for AI technologies and applications.

In his conversation with Martin Reeves, chair of the BCG Henderson Institute, Kapoor discusses historical patterns of technology hype, differentiates between the powers and limitations of predictive versus generative AI, and outlines how managers can balance healthy skepticism with embracing the potential of new technologies.

Key topics discussed:
01:05 | Examples of AI “snake oil”
04:42 | Historical patterns of technology hype and how AI is different
07:26 | Capabilities and exaggerations of predictive AI
11:42 | Powers and limitations of generative AI
17:11 | Drivers of inflated expectations
20:18 | Implications for regulation
23:26 | How managers can balance scepticism and embracing new tech
24:58 | Future of AI research

Additional inspirations from Sayash Kapoor:
AI Snake Oil (Substack)
A Checklist of Eighteen Pitfalls in AI Journalism (UNESCO article, 2022)

This podcast uses the following third-party services for analysis: Chartable - https://chartable.com/privacy
Hi, I'm Charley, and this is Untangled, a newsletter about our sociotechnical world, and how to change it.

* Come work with me! The initiative I lead at Data & Society is hiring for a Community Manager. Learn more here.
* Check out my new course, Sociotechnical Systems Change in Practice. The first cohort will take place on January 11 and 12, and you can sign up here.
* Last week I interviewed Mozilla's Jasmine Sun and Nik Marda on the potential of public AI, and the week prior I shared my conversation with AI reporter Karen Hao on OpenAI's mythology, Meta's secret, and Microsoft's hypocrisy.
How many photographs have been taken worldwide in the history of photography? And how many just this year? These are a few of the fascinating facts that begin this episode, facts I know you'll end up repeating at upcoming holiday parties to make yourself sound extra interesting! Source: John Mitchinson, author of 1227 Quite Interesting Facts to Blow Your Socks Off (https://amzn.to/4fP4vaX).

To hear some tell it, artificial intelligence is either the greatest thing in the world or the beginning of the end of civilization. So, what's the truth about AI? What can it do and what will it never do? That is what Arvind Narayanan is going to tell you, and he is someone to listen to. Arvind is a professor of computer science at Princeton University and director of its Center for Information Technology Policy. He was named one of Time magazine's 100 most influential people in AI, and he is co-author of the book AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference (https://amzn.to/3Z9RBiv).

What did they eat at the first Thanksgiving? No doubt you've heard stories about the first Thanksgiving, but a lot of what we were told just isn't true. In fact, many of the foods and traditions of Thanksgiving came much later. Here to set the record straight on that famous dinner held by the Pilgrims and Native Americans is Leslie Landrigan. She has been writing about New England history for over 10 years – and she is author of the book Historic Thanksgiving Foods: And the People Who Cooked Them, 1607 to 1955 (https://amzn.to/40NW23s).

Anyone who owns a printer has wondered why the ink cartridges cost so much to replace. The answer is a bit complicated and kind of interesting. Listen as I explain. https://www.consumerreports.org/electronics-computers/printers/why-is-printer-ink-so-expensive-a2101590645/

Learn more about your ad choices. Visit megaphone.fm/adchoices
Dr Lukasz Olejnik (@lukOlejnik), LL.M, is an independent cybersecurity, privacy and data protection researcher and consultant, and a Senior Visiting Research Fellow of the Department of War Studies, King's College London. He holds a Computer Science PhD from INRIA (the French Institute for Research in Digital Science and Technology) and an LL.M. from the University of Edinburgh. He worked at CERN (the European Organisation for Nuclear Research) and was a research associate at University College London. He was associated with Princeton's Center for Information Technology Policy and Oxford's Centre for Technology and Global Affairs, and was a member of the W3C Technical Architecture Group. He is a former cyberwarfare advisor at the International Committee of the Red Cross in Geneva, where he worked on the humanitarian consequences of cyber operations. He is the author of scientific articles, op-eds, analyses, and the books Philosophy of Cybersecurity and Propaganda, and he contributes public commentary to international media.

References:
Full interview transcript (on Medium)
Propaganda, by Lukasz Olejnik
Lukasz Olejnik on Cyber, Privacy and Tech Policy Critique (Newsletter)
Lukasz Olejnik on Mastodon
Lukasz Olejnik on X
EU Digital Services Act (DSA)
Section 230 (“Protection for private blocking and screening of offensive material”) of the Communications Decency Act (1996)
Cubby, Inc. v. CompuServe Inc. and Stratton Oakmont, Inc. v. Prodigy Services Co. as precursors to Section 230
Doppelganger in action: Sanctions for Russian disinformation linked to Kate rumours
EU takes shot at Musk over Trump interview — and misses (Politico)
The story of Pavel Rubtsov (“Journalist or Russian spy? The strange case of Pablo González”), The Guardian
Silicon Valley, The New Lobbying Monster (mentioning Chris Lehane's campaigns), The New Yorker
Financial Times: Clip purporting to show a Haitian voting in Georgia is among ‘Moscow's broader efforts' to sway the race
“Pseudo-media”: Spain proposes tightening rules on media to tackle fake news
There is a lot of hype around generative artificial intelligence, but is the tech truly transformational? Arvind Narayanan, a Princeton University professor, director of the school's Center for Information Technology Policy, and author of the book “AI Snake Oil,” says it's overhyped, while Box co-founder and CEO Aaron Levie advocates for the use of AI in a functional enterprise capacity. The two spoke with WSJ tech columnist Christopher Mims at the WSJ CIO Network Summit. Zoe Thomas hosts. Sign up for the WSJ's free Technology newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices
Dr. Jennifer Rexford is the Provost, the Gordon Y. S. Wu Professor in Engineering, Professor of Computer Science, and formerly the Chair of the Department of Computer Science at Princeton University. She is an accomplished computer scientist whose research has focused on the analysis of computer networks, in particular network routing, performance measurement, and network management. She is an affiliated faculty member in electrical and computer engineering, operations research and financial engineering, applied and computational mathematics, and gender and sexuality studies, as well as the Center for Information Technology Policy, the High Meadows Environmental Institute, and the Princeton Institute for Computational Science and Engineering.
Arvind Narayanan and Sayash Kapoor are well-regarded computer scientists at Princeton University and have just published a book with a provocative title, AI Snake Oil. Here I've interviewed Sayash and challenged him on this dismal title, for which he provides solid examples of predictive AI's failures. Then we get into the promise of generative AI.

Full videos of all Ground Truths podcasts can be seen on YouTube here. The audios are also available on Apple and Spotify.

Transcript with links to audio and external links to key publications

Eric Topol (00:06): Hello, it's Eric Topol with Ground Truths, and I'm delighted to welcome the co-author of a new book, AI SNAKE OIL, and it's Sayash Kapoor, who has written this book with Arvind Narayanan of Princeton. And so welcome, Sayash. It's wonderful to have you on Ground Truths.

Sayash Kapoor (00:28): Thank you so much. It's a pleasure to be here.

Eric Topol (00:31): Well, congratulations on this book. What's interesting is how much you've achieved at such a young age. Here you are named in TIME100 AI's inaugural edition as one of those eminent contributors to the field. And you're currently a PhD candidate at Princeton, is that right?

Sayash Kapoor (00:54): That's correct, yes. I work at the Center for Information Technology Policy, which is a joint program between the computer science department and the school of public and international affairs.

Eric Topol (01:05): So before you started working on your PhD in computer science, you already were doing this stuff, I guess, right?

Sayash Kapoor (01:14): That's right. So before I started my PhD, I used to work at Facebook as a machine learning engineer.

Eric Topol (01:20): Yeah, well you're taking it to a more formal level here. Before I get into the book itself, what was the background? I mean you did describe it in the book, why you decided to write a book, especially one that was entitled AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference.

Background to Writing the Book

Sayash Kapoor (01:44): Yeah, absolutely. So I think for the longest time both Arvind and I had been sort of looking at how AI works and how it doesn't work, what are cases where people are somewhat fooled by the potential for this technology and fail to apply it in meaningful ways in their life. As an engineer at Facebook, I had seen how easy it is to slip up or make mistakes when deploying machine learning and AI tools in the real world. And I had also seen that, especially when it comes to research, it's really easy to make mistakes, even unknowingly, that inflate the accuracy of a machine learning model. So as an example, one of the first research projects I did when I started my PhD was to look at the field of political science, in the subfield of civil war prediction. This is a field which tries to predict where the next civil war will happen, in order to be better prepared for civil conflict.

(02:39): And what we found was that there were a number of papers that claimed almost perfect accuracy at predicting when a civil war will take place. At first this seemed sort of astounding. If AI can really help us predict when a civil war will start, sometimes years in advance, it could be game changing. But when we dug in, it turned out that every single one of these claims, where people claimed that AI was better than two-decades-old logistic regression models, was not reproducible.
And so, that sort of set the alarm bells ringing for both of us, and we dug in a little bit deeper and we found that this is pervasive. So this was a pervasive issue across fields that were quickly adopting AI and machine learning. We found, I think, over 300 papers, and the last time I compiled this list, I think it was over 600 papers, that suffer from data leakage. That is when you can sort of train on the sets that you're evaluating your models on. It's sort of like teaching to the test. And so, a machine learning model seems like it does much better when you evaluate it on your data compared to how it would really work out in the real world.

Eric Topol (03:48): Right. You say in the book, “the goal of this book is to identify AI snake oil - and to distinguish it from AI that can work well if used in the right ways.” Now I have to tell you, it's kind of a downer book if you're an AI enthusiast because there's not a whole lot of positive here. We'll get to that in a minute. But you break down the types of AI, which I'm going to challenge a bit, into three discrete areas: the predictive AI, which you take a really harsh stance on, say it will never work. Then there's generative AI, obviously the large language models that took the world by storm, although they were incubating for several years when ChatGPT came along, and then content moderation AI. So maybe you could tell us about your breakdown to these three different domains of AI.

Three Types of AI: Predictive, Generative, Content Moderation

Sayash Kapoor (04:49): Absolutely. I think one of our main messages across the book is that when we are talking about AI, often what we are really interested in are deeper questions about society. And so, our breakdown of predictive, generative, and content moderation AI sort of reflects how these tools are being used in the real world today. So for predictive AI, one of the motivations for including this in the book as a separate category was that we found that it often has nothing to do with modern machine learning methods. In some cases it can be as simple as decades-old linear regression tools or logistic regression tools. And yet these tools are sold under the package of AI. Advances that are being made in generative AI are sold as if they apply to predictive AI as well. Perhaps as a result, what we are seeing is, across dozens of different domains, including insurance, healthcare, education, criminal justice, you name it, companies have been selling predictive AI with the promise that we can use it to replace human decision making.

(05:51): And I think that last part is where a lot of our issues really come down to, because these tools are being sold as far more than they're actually capable of. These tools are being sold as if they can enable better decision making for criminal justice. And at the same time, when people have tried to interrogate these tools, what we found is these tools essentially often work no better than random, especially when it comes to some consequential decisions such as job automation. So basically deciding who gets called on to the next level of, say, a job interview, or who is rejected right as soon as they submit the CV.
And so, these are very, very consequential decisions, and we felt like there is a lot of snake oil in part because people don't distinguish between applications that have worked really well, or where we have seen tremendous advances, such as generative AI, and applications where essentially we've stalled for a number of decades and these tools don't really work as claimed by the developers.

Eric Topol (06:55): I mean the way you partition that, the snake oil, which is a tough metaphor, and you even show the ad from 1905 of snake oil in the book. You're really getting at predictive AI and how it is using old tools and selling itself as some kind of breakthrough. Before I challenge that, are we going to be able to predict things? By the way, using generative AI, not as you described. But I would like to go through a few examples of how bad this has been, and since a lot of our listeners and readers are in the medical world or biomedical world, I'll try to get to those. So one of the first ones you mentioned, which I completely agree with, is the prediction of Covid from the chest x-ray, and there were thousands of these studies that came throughout the pandemic. Maybe you could comment about that one.

Some Flagrant Examples

Sayash Kapoor (08:04): Absolutely. Yeah, so this is one of my favorite examples as well. So essentially Michael Roberts and his team at the University of Cambridge, a year or so after the pandemic, looked back at what had happened. I think at the time there were around 500 studies that they included in the sample. And they looked back to see how many of these would be useful in a clinical setting beyond just the scope of writing a research paper. And they started out by using a simple checklist to see, okay, are these tools well validated? The training and the testing data, are they separate? And so on. So they ran through the simple checklist and that excluded all but 60 of these studies from consideration. So apart from 60 studies, none of these other studies even passed a very, very basic criterion for being included in the analysis. Now for these 60, it turns out that if you take a guess about how many were useful, I'm pretty confident most guesses would be wrong.

(09:03): There were exactly zero studies that were useful in a clinically relevant setting. And the reasons for this, I mean in some cases the reasons were as bizarre as training a machine learning model to predict Covid where all of the positive samples, of people who had Covid, were from adults, but all of the negative samples, of people who didn't have Covid, were from children. And so, essentially claiming that the resulting classifier can predict who has Covid is bizarre, because all the classifier is doing is looking at the chest x-ray and basically predicting which x-ray belongs to a child versus an adult. And that's just one sort of error; in some cases we saw duplicates in the training and test set. So you have the same person that is being used for training the model and that is also used for evaluating the model. So simply memorizing a given sample of x-rays would be enough to achieve a very high performance. And so, for issues like these, I think all 60 of these studies proved to be not useful in a clinically relevant setting. And I think this is sort of the type of pattern that we've seen over and over again.

Eric Topol (10:14): Yeah, and I agree with you on that point. I mean that was really a flagrant example, and that would fulfill the title of your book, which as I said is a very tough title.
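The duplicate-sample and adult-versus-child failures Kapoor describes above are both forms of train/test leakage, and the effect is easy to see in a toy experiment. Below is a minimal, hypothetical Python sketch (scikit-learn with synthetic data; the dataset, model, and numbers are illustrative only and are not drawn from any of the studies discussed) in which letting evaluation samples slip into the training set inflates the measured accuracy well beyond what the model achieves on genuinely unseen data.

```python
# A minimal, hypothetical illustration of train/test leakage ("teaching to the test").
# Synthetic data and an off-the-shelf scikit-learn classifier; numbers are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# A deliberately noisy problem so that honest accuracy sits well below 100%.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           flip_y=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# Honest evaluation: the model never sees the test rows during training.
honest = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", honest.score(X_test, y_test))

# Leaky evaluation: the test rows are (accidentally) duplicated into the training set,
# so the model can partly memorize the examples it will later be graded on.
X_leaky = np.vstack([X_train, X_test])
y_leaky = np.concatenate([y_train, y_test])
leaky = RandomForestClassifier(random_state=0).fit(X_leaky, y_leaky)
print("leaky 'held-out' accuracy:", leaky.score(X_test, y_test))
```

The second, leaky score looks impressive only because the model has partly memorized the very examples it is being graded on, which is the same mechanism behind the irreproducible results described above.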
But on page 29, and we'll have this in the post, you have a figure, the landscape of AI snake oil, hype, and harm. And the problem is there is nothing good in this landscape. So on the y-axis you have works, hype, snake oil going up, and on the x-axis you have benign and harmful. So the only thing you have that works and that's benign is autocomplete. I wouldn't say that works. And then you have, under works, facial recognition for surveillance, which is harmful. This is a pretty sobering view of AI. Obviously, there's many things that are working that aren't on this landscape. So I just would like to challenge, are you a bit skewed here and only fixating on bad things? Because this diagram is really rough. I mean, there's so much progress in AI, and you have in here, you mentioned, the predicting of civil wars, and obviously we have the cheating detection, criminal risk prediction, I mean a lot of problems, video interviews that are deep fakes, but you don't present any good things.

Optimism on Generative AI

Sayash Kapoor (11:51): So to be clear, I think both Arvind and I are somewhat paradoxically optimistic about the future of generative AI. And so, the decision to focus on snake oil was a very intentional one from our end. So in particular, I think at various places in the book we outline why we're optimistic, and what types of applications we're optimistic about as well. And the reason we don't focus on them basically comes down to the fact that no one wants to read a book that has 300 pages about the virtues of spellcheck or AI for code generation or something like that. But I completely agree and acknowledge that there are lots of positive applications that didn't make the cut for the book as well. That was because we wanted people to come to this from a place of skepticism so that they're not fooled by the hype.

(12:43): Because essentially we see even these positive uses of AI losing out if people have unrealistic expectations of what an AI tool should do. And so, pointing out snake oil is almost a prerequisite for being able to use AI productively in your work environment. I can give a couple of examples of where or how we've sort of manifested this optimism. One is AI for coding. I think writing code is an application where, at least, I use AI a lot. I think almost half of the code I write these days is generated, at least the first draft is generated, using AI. And yet if I did not know how to program, it would be a completely different question, right? Because for me, pointing out that, oh, this syntax looks incorrect, or this is not handling the data in the correct way, is as simple as looking at a piece of code, because I've done this a few times. But if I weren't an expert on programming, it would be completely disastrous, because even if the error rate is like 5%, I would have dozens of errors in my code if I'm using AI to generate it.

(13:51): Another example of how we've been using it in our daily lives is that Arvind has two little kids and he's built a number of applications for his kids using AI. So I think he's a big proponent of incorporating AI into children's lives as a force for good rather than having a completely hands-off approach. And these are just two examples, but I would say a large amount of our work these days occurs with the assistance of AI. So we are very much optimistic.
And at the same time, I think one of the biggest hindrances to actually adopting AI in the real world is not understanding its limitations.

Eric Topol (14:31): Right. Yeah, you say in the book, quote, “the two of us are enthusiastic users of generative AI, both in our work and our personal lives.” It just doesn't come through as far as the examples. But before I leave the troubles of predictive AI, I'd like to get into a few more examples, because that's where your book shines in convincing that we've got some trouble here and we need to be completely aware. So one of the most famous, well, there's a couple we're going to get into, but one I'd like to review with you, it's in the book, is the prediction of sepsis in the Epic model. So as you know very well, Epic is the most used electronic health record system across health systems, and they launched, without ever having published it, an algorithm that would tell, when a patient was hospitalized, if they actually had sepsis or a risk of sepsis. Maybe you could take us through that, what you do in the book, and it truly was a fiasco.

The Sepsis Debacle

Sayash Kapoor (15:43): Absolutely. So I think back in 2016/2017, Epic came up with a system that would help healthcare providers predict which patients are most at risk of sepsis. And I think, again, this is a very important problem. I think sepsis is one of the leading causes of death worldwide and even in the US. And so, if we could fix that, I think it would be a game changer. The problem was that there were no external validations of this algorithm for the next four years. So for four years, between 2017 and 2021, the algorithm was used by hundreds of hospitals in the US. And in 2021, a team from the University of Michigan did a study in their own hospital to see what the efficacy of the sepsis prediction model is. They found out that Epic had claimed an AUC of between 0.76 and 0.83, and the actual AUC was closer to 0.6, and an AUC of 0.5 is making guesses at random.

(16:42): So this was much, much worse than the company's claims. And I think even after that, it still took a year for Epic to roll back this algorithm. So at first, Epic's claims were that this model works well and that's why hospitals are adopting it. But then it turned out that Epic was actually incentivizing hospitals to adopt sepsis prediction models. I think they were giving credits of hundreds of thousands of dollars in some cases if a hospital satisfied a certain set of conditions, and one of these conditions was using a sepsis prediction model. And so, we couldn't really take their claims at face value. And finally in October 2022, Epic essentially rolled back this algorithm. So they went from this one-size-fits-all sepsis prediction model to a model that each hospital has to train on its own data, an approach which I think is more likely to work, because each hospital's data is different. But it's also more time consuming and expensive for the hospitals, because all of a sudden you now need your own data analysts to be able to roll out this model and to monitor it.

(17:47): I think this study also highlights many of the more general issues with predictive AI. These tools are often sold as if they're replacements for an existing system, but then when things go bad, essentially they're replaced with tools that do far less. And companies often go back to the fine print, saying that, oh, we should always deploy it with the human in the loop, or oh, it needs to have these extra protections that are not our responsibility, by the way.
And I think that gap between what developers claim and how the tool actually works is what is most problematic.

Eric Topol (18:21): Yeah, no, I mean it's an egregious example, and again, it fulfills what we discussed with statistics, but even worse, because it was marketed and it was incentivized financially, and there's no doubt that some patients were completely miscategorized and potentially hurt. The other one, that's a classic example that went south, is the Optum UnitedHealth algorithm. Maybe you could take us through that one as well, because that is yet another just horrible case of how people were discriminated against.

The Infamous Optum Algorithm

Sayash Kapoor (18:59): Absolutely. So Optum, another health tech company, created an algorithm to prioritize high risk patients for preemptive care. So I think it was around when Obamacare was being introduced that insurance networks started looking into how they could reduce costs. And one of the main ways they identified to reduce costs is basically preemptively caring for patients who are extremely high risk. So in this case, they decided to keep 3% of the patients in the high risk category, and they built a classifier to decide who's the highest risk, because potentially once you have these patients, you can proactively treat them. There might be fewer emergency room visits, there might be fewer hospitalizations and so on. So that's all fine and good. But what happened when they implemented the algorithm was that every machine learning model needs a target variable, what is being predicted at the end of the day. What they decided to predict was how much a patient would pay, how much they would be charged, what cost the hospital would incur if they admitted this patient.

(20:07): And they essentially used that to predict who should be prioritized for healthcare. Now unsurprisingly, it turned out that white patients often pay a lot more, or are able to pay a lot more, when it comes to hospital visits. Maybe it's because of better insurance or better conditions at work that allow them to take leave and so on. But whatever the mechanism is, what ended up happening with this algorithm was that Black patients with the same level of healthcare prognosis were about half as likely as white patients to get enrolled in this high risk program. So they were much less likely to get this proactive care. And this was a fantastic study by Obermeyer, et al. It was published in Science in 2019. Now, what I think is the most disappointing part of this is that Optum did not stop using this algorithm after this study was released. And that was because, in some sense, the algorithm was working precisely as expected. It was an algorithm that was meant to lower healthcare costs. It wasn't an algorithm that was meant to provide better care for patients who need it most. And so, even after this study came out, I think Optum continued using this algorithm as is. And as far as I know, even today this algorithm, or some version of it, is still in use across the network of hospitals that Optum serves.

Eric Topol (21:31): No, it's horrible, the fact that it was exposed by Ziad Obermeyer's paper in Science and that nothing has been done to change it, it's extraordinary. I mean, it's just hard to imagine. Now you do summarize the five reasons predictive AI fails in a nice table, we'll put that up on the post as well. And I think you've kind of reviewed that as these case examples.
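The external check that caught the Epic model is, mechanically, a simple one: score the vendor's risk predictions against outcomes observed at your own institution and compare the resulting AUC with the 0.5 chance baseline and with the vendor's advertised figure. Here is a minimal, hypothetical sketch in Python with scikit-learn; the outcomes and scores are simulated stand-ins, not Epic's or the Michigan team's data.

```python
# Hypothetical sketch: externally validating a vendor's risk score on locally observed outcomes.
# An AUC near 0.5 means the score ranks patients no better than chance; the data here are simulated.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Stand-in for locally collected ground truth: 1 = the patient actually developed the condition.
outcomes = rng.integers(0, 2, size=n)

# Stand-in for the vendor's risk scores: only weakly related to the true outcome.
vendor_scores = 0.2 * outcomes + rng.normal(size=n)

auc = roc_auc_score(outcomes, vendor_scores)
print(f"External-validation AUC: {auc:.2f} (0.5 = chance; compare with the vendor's claimed figure)")
```

A measured AUC that sits far below the advertised range, and not far above 0.5, is the kind of gap the Michigan study reported.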
So now I get to challenge you about predictive AI, because I don't know that there's such a fine line between that and generative AI or large language models. So as you know, the group at DeepMind and now others have done weather forecasting with multimodal large language models and have come up with some of the most accurate weather forecasting we've ever seen. And I've written a piece in Science about medical forecasting. Again, taking all the layers of a person's data and trying to predict if they're high risk for a particular condition, including not just their electronic record, but their genomics, proteomics, their scans and labs, and on and on, and environmental exposures.

Multimodal A.I. in Medicine

(22:44): So I want to get your sense about that, because this is now a coalescence of where you took down predictive AI, for good reasons, and now these much more sophisticated models that are integrating not just large data sets, but truly multimodal data. Now, some people think multimodal means only text, audio, speech and video images, but here we're talking about multimodal layers of data, as for the weather forecasting model or earthquake prediction or other things. So let's get your views on that, because they weren't really presented in the book. I think they're a positive step, but I want to see what you think.

Sayash Kapoor (23:37): No, absolutely. I think maybe the two questions are sort of slightly separate in my view. So for things like weather forecasting, I think weather forecasting is a problem that's extremely tenable for generative AI, or for making predictions about the future. And I think one of the key differences there is that we don't have the problem of feedback loops with humans. We are not making predictions about individual human beings. We are rather making predictions about what happens with geological outcomes. We have good differential equations that we've used to predict them in the past, and those are already pretty good. But I do think deep learning has taken us one step further. So in that sense, I think that's an extremely good example of what doesn't really fit within the context of the chapter, because we are thinking about decisions about individual human beings. And you rightly point out that that's not really covered within the chapter.

(24:36): For the second part, about incorporating multimodal data, genomics data, everything about an individual, I think that approach is promising. What I will say, though, is that so far we haven't seen it used for making individual decisions, and especially consequential decisions about human beings, because oftentimes what ends up happening is we can make very good predictions. That's not in question at all. But even with these good predictions about what will happen to a person, sometimes intervening on the decision is hard, because oftentimes we treat prediction as a problem of correlations, but making decisions is a problem of causal estimation. And that's where those two approaches disentangle a little bit. So one of my favorite examples of this is a model that was used to predict who should be released before screening when someone comes in with symptoms of pneumonia. So let's say a patient comes in with symptoms of pneumonia, should you release them on the day of?

(25:39): Should you keep them in the hospital or should you transfer them to the ICU? And these ML researchers were basically trying to solve this problem.
They found out that the neural network model they developed (this was two decades ago, by the way) was extremely accurate at predicting who would basically have a high risk of having complications once they get pneumonia. But it turned out that the model was saying, essentially, that anyone who has asthma and who comes in with symptoms of pneumonia is the lowest risk patient. Now, why was this? This was because, in the past training data, when such patients would come into the hospital, these patients would be transferred directly to the ICU, because the healthcare professionals realized that could be a serious condition. And so, it turned out that patients who had asthma who came in with symptoms of pneumonia were actually the lowest risk amongst the population, because they were taken such good care of.

(26:38): But now if you use this prediction, that a patient comes in with symptoms of pneumonia and they have asthma and so they're low risk, if you use this to make a decision to send them back home, that could be catastrophic. And I think that's the danger with using predictive models to make decisions about people. Now, again, I think the scope and consequences of decisions also vary. So you could think of using this to surface interesting patterns in the data, especially at a slightly larger statistical level, to see how certain subpopulations behave or how certain groups of people are likely to develop symptoms or whatever. But I think as soon as it comes to making decisions about people, the paradigm of problem solving changes, because as long as we are using correlational models, I think it's very hard to say what will happen if we change the conditions, what will happen if the decision making mechanism is very different from the one where the data was collected.

Eric Topol (27:37): Right. No, I mean where we agree on this is that at the individual level, using multimodal AI with all these layers of data that have now recently become available, or should be available, has to be compared, ideally in a randomized trial, with standard of care today, which doesn't use any of that. And to see whether or not that decision's made, does it change the natural history and is it an advantage, that's yet to be done. And I agree, it's a very promising pathway for the future. Now, I think you have done what is a very comprehensive sweep of the predictive AI failures. You've mentioned, here in our discussion and in the book, your enthusiasm about generative AI's positive features, and hope and excitement perhaps even. But we haven't discussed much on the content moderation AI that you have discretely categorized. Maybe you could just give us the skinny on your sense of that.

Content Moderation AI

Sayash Kapoor (28:46): Absolutely. So content moderation AI is AI that's used to sort of clean up social media feeds. Social media platforms have a number of policies about what's allowed and not allowed on the platforms. Simple things such as spam are obviously not allowed, because let's say people start spamming the platform, it becomes useless for everyone. But then there are other things like hate speech or nudity or pornography and things like that, which are also disallowed on most if not all social media platforms today. And a lot of the ways in which these policies are enforced today is using AI. So you might have an AI model that runs every single time you upload a photo to Facebook, for instance.
And not just one, perhaps hundreds of such models, to detect if it has nudity or hate speech or any of these other things that might violate the platform's terms of service.

(29:40): So content moderation AI is AI that's used to make these decisions. And very often in the last few years we've seen that when something gets taken down, for instance, Facebook deletes a post, people often blame the AI for having a poor understanding, let's say of satire, or not understanding what's in the image, to basically say that their post was taken down because of bad AI. Now, there have been many claims that content moderation AI will solve social media's problems. In particular, we've heard claims from Mark Zuckerberg, who in a Senate testimony, I think back in 2018, said that AI is going to solve most if not all of their content moderation problems. So our take on content moderation AI is basically this. AI is very, very useful for solving the simple parts of content moderation. What is a simple part? So basically the simple parts of content moderation are, let's say, ones where you have large training data of the same type of policy violation on a platform like Facebook.

(30:44): If you have large data sets, and if these data sets have a clear line in the sand, for instance with nudity or pornography, it's very easy to create classifiers that will automate this. On the other hand, the hard part of content moderation is not actually just creating these AI models. The hard part is drawing the line. So when it comes to what is allowed and not allowed on platforms, these platforms are essentially making decisions about speech. And that is a topic that's extremely fraught. It's fraught in the US, it's also fraught globally. And essentially these platforms are trying to solve this really hard problem at scale. So they're trying to come up with rules that apply to every single user of the platform, like over 3 billion users in the case of Facebook. And this inevitably has these trade-offs about what speech is allowed versus disallowed that are hard to settle one way or the other.

(31:42): They're not black and white. And what we think is that AI has no place in this hard part of content moderation, which is essentially human. It's essentially about adjudicating between competing interests. And so, when people claim that AI will solve these many problems of content moderation, I think what they're often missing is that there's this extremely large number of things you need to do to get content moderation right. AI solves one of these dozen or so things, which is detecting and taking down content automatically, but all of the rest of it involves essentially human decisions. And so, this is sort of the brief gist of it. There are also other problems. For example, AI doesn't really work so well for low resource languages. It doesn't really work so well when it comes to nuances, and so on, as we discuss in the book. But we think some of these challenges are solvable in the medium to long term. But these questions around competing interests of power, I think, are beyond the domain of AI even in the medium to long term.

Age 28! and Career Advice

Eric Topol (32:50): No, I think you nailed that. I think this is an area where you've really aptly characterized and shown the shortcomings of AI and how the human factor is so critically important. So what's extraordinary here is you're just 28 and you are rocking it here with publications all over the place on reproducibility, transparency, evaluating generative AI, AI safety.
You have a website on AI snake oil where you're collecting more things, writing more things, and of course you have the experience of having worked in the IT world with Facebook and, I guess, also Columbia. So you're kind of off to the races here as one of the really young leaders in the field. And I am struck by that, and maybe you could comment about the inspiration you might provide to other young people. You're the youngest person I've interviewed for Ground Truths, by the way, by a pretty substantial margin, I would say. And this is a field that attracts so many young people. So maybe you could just talk a bit about your career path and your advice for people. They may be the kids of some of our listeners, but they also may be some of the people listening as well.

Sayash Kapoor (34:16): Absolutely. First, thank you so much for the kind words. I think a lot of this work is with collaborators, without whom of course I would never be able to do this. I think Arvind is a great co-author and supporter. In terms of my career path, it was sort of like a zigzag, I would say. It wasn't clear to me when I was an undergrad if I wanted to do grad school or go into the industry, and I sort of on a whim went to work at Facebook, and it was because I'd been working on machine learning for a little bit of time, and I just thought it's worth seeing what the other side has to offer beyond academia. And I think that experience was very, very helpful. I talk to a lot of undergrads here at Princeton, and one of the things I've seen people be very concerned about is which grad school they're going to get into right after undergrad.

(35:04): And I think it's not really a question you need to answer now. I mean, in some cases I would say it's even very helpful to have a few years of industry experience before getting into grad school. That has definitely, at least, been my experience. Beyond that, I think working in a field like AI, it's very easy to be caught up with all of the new things that are happening each day. So I'm not sure if you know, but AI has, I think, over 500 to 1,000 new arXiv papers every single day. And with this rush, I think there's this expectation you might put on yourself that being successful requires a certain number of publications or a certain threshold of things. And I think more often than not, that is counterproductive. So it has been very helpful for me, for example, to have collaborators who are thinking long term. So this book, for instance, is not something that would be very valued within the CS community, I would say. I think the CS community values peer-reviewed papers a lot more than they do books, and yet we chose to write it, because I think the staying power of a book, or the longevity of a book, is much more than any single paper could have. So the other concrete thing I found very helpful is optimizing for a different metric compared to what the rest of the community seems to be doing, especially when it comes to fast moving fields like AI.

Eric Topol (36:29): Well, that last piece of advice is important, because I think too often people, whether it's computer scientists, life scientists, whoever, don't realize that their audience is much broader, and that reaching the public means things like a book or op-eds or essays, varied ways that are intended for public consumption, not, in this case, for computer scientists. So that's why I think the book is a nice contribution.
I don't like the title because it's so skewed. And also the content is really trying to hammer it home. I hope you write a sequel book on the positive sides of AI. I did want to ask you, when I read the book, I thought I heard your voice. I thought you had written the book, and Arvind maybe did some editing. You wrote about Arvind this and Arvind that. Did you write the first draft of the book and then he kind of came along?

Sayash Kapoor (37:28): No, absolutely not. So the way we wrote the book was we basically started writing it in parallel, and I wrote the first draft of half the chapters and he wrote the first draft of the other half, and that was essentially the case all the way through. So we would sort of write a draft, pass it to the other person, and then keep doing this until we sent it to our publishers.

Eric Topol (37:51): Okay. So I guess I was thinking of the chapters you wrote, where it came through. I'm glad that it was a shared piece of work, because that's good, because that's what co-authoring is all about, right? Well, Sayash, it's really been a joy to meet you, and congratulations on this book. I obviously have expressed my objections and my disagreements, but that's okay, because this book will feed the skeptics of AI. They'll love this. And I hope that the positive side, which I think is under-expressed, will not be lost, and that you'll continue to work on this and be a conscience. You may know I've interviewed a few other people in the AI space who, like you, are trying to assure its safety, its transparency, the ethical issues. And I think we need folks like you. I mean, this is what helps get it on track, keeping it from getting off the rails or doing what it shouldn't be doing. So keep up the great work and thanks so much for joining.

Sayash Kapoor (39:09): Thank you so much. It was a real pleasure.

************************************************

Thanks for listening, reading or watching!

The Ground Truths newsletters and podcasts are all free, open-access, without ads.

Please share this post/podcast with your friends and network if you found it informative!

Voluntary paid subscriptions all go to support Scripps Research. Many thanks for that—they greatly helped fund our summer internship programs for 2023 and 2024.

Thanks to my producer Jessica Nguyen and to Sinjun Balabanoff for audio and video support at Scripps Research.

Note: you can select preferences to receive emails about newsletters, podcasts, or all of these; I don't want to bother you with an email about content that you're not interested in.

Get full access to Ground Truths at erictopol.substack.com/subscribe
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Arvind Narayanan is a professor of Computer Science at Princeton and the director of the Center for Information Technology Policy. He is a co-author of the book AI Snake Oil and a prominent critic of the AI scaling myths, the idea that simply adding more compute will keep delivering proportional improvements in model performance. He is also the lead author of a textbook on the computer science of cryptocurrencies, which has been used in over 150 courses around the world, and an accompanying Coursera course that has had over 700,000 learners.

In Today's Episode with Arvind Narayanan We Discuss:

1. Compute, Data, Algorithms: What is the Bottleneck:
Why does Arvind disagree with the commonly held notion that more compute will result in an equal and continuous level of model performance improvement?
Will we continue to see players move into the compute layer in the need to internalise the margin? What does that mean for Nvidia?
Why does Arvind not believe that data is the bottleneck? How does Arvind analyse the future of synthetic data? Where is it useful? Where is it not?

2. The Future of Models:
Does Arvind agree that this is the fastest commoditization of a technology he has seen?
How does Arvind analyse the future of the model landscape? Will we see a world of few very large models or a world of many unbundled and verticalised models?
Where does Arvind believe the most value will accrue in the model layer?
Is it possible for smaller companies or university research institutions to even play in the model space given the intense cash needed to fund model development?

3. Education, Healthcare and Misinformation: When AI Goes Wrong:
What are the single biggest dangers that AI poses to society today?
To what extent does Arvind believe misinformation through generative AI is going to be a massive problem in democracies?
How does Arvind analyse AI impacting the future of education? What does he believe everyone gets wrong about AI and education?
Does Arvind agree that AI will be able to put a doctor in everyone's pocket? Where does he believe this theory is weak and falls down?
My guest is Arvind Narayanan, a Professor of Computer Science at Princeton University, and the director of the Center for Information Technology Policy, also at Princeton. Arvind is renowned for his work on the societal impacts of digital technologies, including his textbook on fairness and machine learning, his online course on cryptocurrencies, his research on data de-anonymization, dark patterns, and more. He has already amassed over 30,000 citations on Google Scholar. In just a few days, in late September 2024, Arvind will release a book co-authored with Sayash Kapoor titled “AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference.” Having had the privilege of reading an early version, our conversation delves into some of the book's key arguments. We also explore what Arvind calls AI scaling myths, the reality of artificial general intelligence, how governments can scale effective AI policies, the importance of transparency, the role that antitrust can, and cannot play, the societal impacts of scaling automation, and more. I hope you enjoy our conversation. Find me on X at @ProfSchrepel. Also, be sure to subscribe. ** References: ➝ AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference (2024) ➝ AI scaling myths (2024) ➝ AI existential risk probabilities are too unreliable to inform policy (2024) ➝ Foundation Model Transparency Reports (2024)
How seriously should governments take the threat of existential risk from AI, given the lack of consensus among researchers? On the one hand, existential risks (x-risks) are necessarily somewhat speculative: by the time there is concrete evidence, it may be too late. On the other hand, governments must prioritize — after all, they don't worry too much about x-risk from alien invasions.

MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at brave.com/api.

Sayash Kapoor is a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. His research focuses on the societal impact of AI. Kapoor has previously worked on AI in both industry and academia, with experience at Facebook, Columbia University, and EPFL Switzerland. He is a recipient of a best paper award at ACM FAccT and an impact recognition award at ACM CSCW. Notably, Kapoor was included in TIME's inaugural list of the 100 most influential people in AI.

Sayash Kapoor
https://x.com/sayashk
https://www.cs.princeton.edu/~sayashk/

Arvind Narayanan (other half of the AI Snake Oil duo)
https://x.com/random_walker

AI existential risk probabilities are too unreliable to inform policy
https://www.aisnakeoil.com/p/ai-existential-risk-probabilities

Pre-order AI Snake Oil Book
https://amzn.to/4fq2HGb

AI Snake Oil blog
https://www.aisnakeoil.com/

AI Agents That Matter
https://arxiv.org/abs/2407.01502

Shortcut learning in deep neural networks
https://www.semanticscholar.org/paper/Shortcut-learning-in-deep-neural-networks-Geirhos-Jacobsen/1b04936c2599e59b120f743fbb30df2eed3fd782

77% Of Employees Report AI Has Increased Workloads And Hampered Productivity, Study Finds
https://www.forbes.com/sites/bryanrobinson/2024/07/23/employees-report-ai-increased-workload/

TOC:
00:00:00 Intro
00:01:57 How seriously should we take Xrisk threat?
00:02:55 Risk too unreliable to inform policy
00:10:20 Overinflated risks
00:12:05 Perils of utility maximisation
00:13:55 Scaling vs airplane speeds
00:17:31 Shift to smaller models?
00:19:08 Commercial LLM ecosystem
00:22:10 Synthetic data
00:24:09 Is AI complexifying our jobs?
00:25:50 Does ChatGPT make us dumber or smarter?
00:26:55 Are AI Agents overhyped?
00:28:12 Simple vs complex baselines
00:30:00 Cost tradeoff in agent design
00:32:30 Model eval vs downstream perf
00:36:49 Shortcuts in metrics
00:40:09 Standardisation of agent evals
00:41:21 Humans in the loop
00:43:54 Levels of agent generality
00:47:25 ARC challenge
This episode features two conversations. Both relate to efforts to better understand the impact of technology on society. In the first, we'll hear from Sayash Kapoor, a PhD candidate at the Department of Computer Science and the Center for Information Technology Policy at Princeton University, and Rishi Bommasani, the society lead at the Stanford Center for Research on Foundation Models. They are two of the authors of a recent paper titled On the Societal Impact of Open Foundation Models. And in the second, we'll hear from Politico Chief Technology Correspondent Mark Scott about the US-EU Trade and Technology Council (TTC) meeting, and what he's learned about the question of access to social media platform data by interviewing over 50 stakeholders, including regulators, researchers, and platform executives.
Bill C-27, the government's privacy and artificial intelligence bill, is slowly making its way through the Parliamentary process. One of the emerging issues has been the mounting opposition to the AI portion of the bill, including a recent NDP motion to divide the bill for voting purposes, separating the privacy and AI portions. In fact, several studies have been released which place the spotlight on the concerns with the government's plan for AI regulation, which is widely viewed as vague and ineffective. Christelle Tessono is a tech policy researcher based at Princeton University's Center for Information Technology Policy (CITP). She was one of several authors of a joint report on the AI bill which brought together researchers from the Cybersecure Policy Exchange at Toronto Metropolitan University, McGill University's Centre for Media, Technology and Democracy, and the Center for Information Technology Policy at Princeton University. Christelle joins the Law Bytes podcast to talk about the report and what she thinks needs to change in Bill C-27.
NBFCs are companies that provide various financial and non-financial services. An NBFC is a Private or Public Limited Company registered under the Companies Act, 2013, and it must also obtain a Certificate of Registration from the RBI. NBFCs work differently from commercial banks and cooperatives and do not need to hold a banking license. Eligibility/Requirements for registering an NBFC · Good credit scores with no defaults. · Minimum Net Owned Fund (NOF) of Rs. 2 crores. · Business plan for the next five years. · 1/3rd of the total number of directors must possess finance experience. · Registration is compulsory. · If foreign investment is involved, the company must comply with the FEMA Act. Documents required for registration of an NBFC 1. Proof of identity of all the shareholders and directors, such as Passport, Driving license, Voter ID (self-attested copy). 2. Proof of address of all the directors and shareholders, such as bank statements, electricity bill, water bill, gas bill or telephone bill (not older than 2 months) (self-attested copy). 3. Certificate of Incorporation, MOA, and AOA of the company. 4. Proof of address of the company. 5. Company's GST number and PAN card copy. 6. Net worth certificate of the directors, shareholders, and company. 7. Educational qualification proof of the directors. 8. Resolution from the company's Board members approving its formation. 9. Bank account details of the company with the NOF deposited in it, well audited for the last 3 years. 10. Report from the bank with a no-lien remark on the initial NOF. 11. Plan of the organizational structure and the Information Technology Policy of the company. Advantages of NBFC · A major source of ushering finance into the country. · It can underwrite stocks and shares. · It can carry out wealth management activities. · It can provide loans and credit facilities. · Easy recovery of loans. Disadvantages of NBFC · It cannot accept demand deposits. · It is subject to a stringent regulatory mechanism. So, what is Instade Alogs all about? It is an audio learning and discussion forum that gives our listeners valuable insights into the professional world. You can connect with us on Linkedin | Twitter | Facebook | Instagram We are listed on the platforms mentioned below; you may subscribe to any one of them to get instant updates: Listen on Spotify | Apple Podcasts | Google Podcasts | Castbox | Radio Public | Pocket Casts | Overcast | Breaker
Annette Zimmermann makes the provocative argument that there are times it might be better to take cutting-edge artificial intelligence tools and leave them unused. Annette is a political philosopher working on the ethics of artificial intelligence and machine learning. She’s a technology and human rights fellow at the Carr Center for Human Rights Policy at Harvard University, and an assistant professor in philosophy at the University of York in the United Kingdom. Annette was previously a postdoc at Princeton’s Center for Information Technology Policy as well as at Princeton's University Center for Human Values.
Kevin Lee recently co-wrote a fascinating study about how easy it is for an attacker to gain control of another person’s cell phone. From there, the attacker can use the phone’s multi-factor authentication tool – usually a security code provided over a text message – to do all kinds of damage, including making unauthorized purchases. As part of the study, his research team managed to fool five wireless carriers, including Verizon Wireless, AT&T and T-Mobile, into moving a customer’s account to a different phone’s SIM card without their permission. He’s a doctoral student in computer science at Princeton, affiliated with the Center for Information Technology Policy.
Today’s guests have written a study about the Google Search engine, and the subtle – and not-so-subtle – ways in which it shows its bias, and in many ways perpetuates tired old stereotypes. Orestis Papakyriakopoulos is a postdoctoral research associate at Princeton’s Center for Information Technology Policy. His research showcases political issues and provides ideas, frameworks, and practical solutions towards just, inclusive and participatory algorithms. Arwa Michelle Mboya is a research assistant at the MIT Media Lab. She is a virtual reality programmer and researcher who investigates the socio-economic effects of enhanced imagination.
To kick off our second season, we’re honored to welcome Barton Gellman, Princeton Class of 1982. Bart has won multiple Pulitzer Prizes, including for his groundbreaking work with The Washington Post in 2013 to reveal widespread surveillance by the National Security Agency. The stories showed that even though they weren’t the targets, law-abiding American citizens could still find their private email, social media content, and online activity swept up by our national security apparatus. Privacy has long been a passion of Gellman’s, and today we’ll ask him for tips we can use to make our own digital lives more private, from email to text messaging to apps and the cloud. He talks about tradeoffs he’s willing to make to be a full participant in the digital revolution, as well as one popular service he distrusts so much, he vows to delete his account entirely. And we’ll also talk about his book, “Dark Mirror: Edward Snowden and the American Surveillance State.” Bart Gellman was a visiting fellow at Princeton’s Center for Information Technology Policy a few years back.
Bob talks with Princeton scholar Orestis Papakyriakopoulos about the social media titan's latest assault on transparency, and the all-too-familiar blame-shifting that followed it. That has become standard operating procedure from a company Bob describes as "amoral, except when it's immoral."
TEDDY ROOSEVELT: Surely there never was a fight better worth making than the one which we are in.
BOB GARFIELD: Welcome to Bully Pulpit. That was Teddy Roosevelt, I'm Bob Garfield. Episode 4: It Wasn't Me, It Was My Dog. Last week, Facebook abruptly shut down a research program by scholars at New York University's Ad Observatory who had been monitoring the company's political advertising inventory.
NEWSCASTER: Now, this whole battle started on Tuesday when Facebook disabled the accounts of researchers at the NYU Ad Observatory, Facebook explaining, quote, "NYU's Ad Observatory project studied political ads using unauthorized means to access and collect data from Facebook in violation of our terms of service. We took these actions to stop unauthorized scraping and protect people's privacy in line with our privacy program under the FTC order."
BG: Yes, Facebook's product management director, Mike Clark, claimed in a blog post that the company's hands were tied by the government. You know, just like Son of Sam claimed it was his dog who ordered him to kill. Within 24 hours, Wired magazine and others revealed that the FTC consent order required no such thing. Even the agency's Bureau of Consumer Protection weighed in, with acting director Samuel Levine writing to Facebook founder Mark Zuckerberg, quote, "I am disappointed by how your company has conducted itself in this matter." Please note that Levine didn't say surprised, just disappointed, because the history of Facebook is the history of Facebook conducting itself in disappointing ways, voicing shame and regret from the bottom of its heart, and then returning to deceptive and greedy business as usual.
MARK ZUCKERBERG (MONTAGE): We didn't take a broad enough view of our responsibility, and that was a big mistake and it was my mistake. This was a major breach of trust and, and I'm really sorry that this happened. We have a basic responsibility to protect people's data. And if we can't do that, then we don't deserve to have the opportunity to serve people.
NEWSCASTER: In 2003, Zuckerberg apologized in the Harvard Crimson for any harm done after his website FaceMash asked users to rate people's hotness. Three years later, Zuckerberg said Facebook, quote, "really messed this one up," following user complaints that the newly launched news feed invaded their privacy.
NEWSCASTER: Zuckerberg apologized once again in 2007 for an uproar over the company's Beacon advertising system, saying, "I know we can do better."
BG: That last part courtesy of CBS News. So the FTC wasn't surprised about the latest phony excuse for systematic opacity, and neither was Orestis Papakyriakopoulos, a postdoctoral research associate at Princeton University's Center for Information Technology Policy. He's speaking to me from Athens, Greece. Orestis, welcome to Bully Pulpit.
ORESTIS PAPAKYRIAKOPOULOS: Glad to be here, Bob.
BG: All right, we'll get to your work shortly. But I want to begin with the NYU project. What were they studying?
OP: So, the NYU researchers had an Ad Observatory project.
They were trying to monitor what ads are placed on Facebook and who sees them, like which demographics are targeted and so on — in order to provide additional transparency on how online advertising takes place.
BG: And what was the method? Were they, in fact, scraping content or metadata from the site in some clandestine fashion, as Facebook alleged?
OP: No, actually, they've developed a plugin that you put on your browser, the Ad Observer, and they asked users all over the world to use their plugin, and practically the plugin was recording what the users saw. So in this way, they could see which ads a user was targeted with.
BG: Wait, so when Facebook invoked protecting user privacy, all of the users had proactively downloaded the browser extension and were giving explicit permission to the NYU people to see what ads they were being served.
OP: Exactly, but when Facebook uses the term users, they mean the advertisers who placed the ads. The advertisers did not give their permission to NYU to collect the information about the targeted ads.
BG: [chuckling]
OP: Yeah, exactly.
BG: I see, so the advertisers who pay money to have their ads seen were skittish about having their ads seen.
OP: Exactly.
BG: Now, the whole point of the Facebook algorithm is that consumers get more and more content they have demonstrated interest in by clicking on it or commenting or sharing. That very same algorithm, though, takes the same user behavior data and allows advertisers to micro-target exactly the consumer profile they're most interested in, whether to buy a car or toothpaste or a political worldview.
OP: Yeah, so Facebook's business model until today is to use this data they collect to place personalized advertisements, and they sell the space and they sell the tool they've developed so advertisers can place their ads.
BG: Selling the tools they've developed. This gets to the next sensitive area of privacy, because the FTC order that the company invoked last week came with a five billion dollar fine for violating an earlier 2012 consent decree, after Facebook was caught being not only careless but mercenary with users' personal data. Can you remind me what the specifics were of the original complaint?
OP: Sure. So back in 2012, the FTC claimed that Facebook was violating numerous privacy rules. More specifically, for example, users believed that they had put their accounts on private settings or that some information they had on their profile was not public, but advertisers still had the opportunity to collect this data. Another example of what was violated back then is that although users were deleting their profiles or taking their information down, third party entities were still able to collect this data, although the users had removed their consent on the platform.
BG: So then came the new order in 2019, in which the FTC said Facebook was found to be, quote, "deceiving users about their ability to control the privacy of their personal information." Can you summarize the 2019 case?
OP: Sure. So going back to 2012, because Facebook violated specific rules, the FTC said that Facebook needed to change how it functions, to make clearer representations of what holds in privacy terms and what does not, to inform users, as well as to switch off all the back doors that gave data about users to third party individuals.
And although Facebook started doing that, what happened is that, for example, although new apps were not able to get this data, if you had an older app, you still were able to collect information. And this is the window that was exploited by Cambridge Analytica: the company used an app that was created in the past for a different purpose and started collecting data about users, and the users had not given their consent for that data to go to the company.
BG: And this wasn't like, oops, careless of me. This had to have been done with malice aforethought.
OP: Yeah. So definitely Cambridge Analytica did it because they found an opportunity there to collect all this data. I don't know if Facebook knew about the backdoor or not, but definitely they did not do their job right.
BG: And then sat on the information for two years before the story finally blew up in the media.
OP: And going back now to 2019, the FTC said, hey, Facebook did not conform to our claims. There are still issues with data privacy and Facebook needs to conform to the older rules. Plus, there were some new issues that appeared. For example, Facebook needed to provide more transparency in how they use their face recognition technology on their platform. The FTC implemented stronger accountability mechanisms for cases where Facebook violates the rules, and so on.
BG: So once again, disappointing but unsurprising. And just as was the case with Cambridge Analytica, simply astonishing indifference to the abuse of its targeting algorithm. And this is whether permitting Trump-friendly or Boris Johnson-friendly foreign agents to spread toxic lies in a political campaign, or the Myanmar Buddhist military to incite pogroms with false accusations against the Muslim Rohingya minority. I've often described the company as amoral, except when it is immoral. Would you care to argue against that proposition?
OP: So definitely Facebook, like every company, looks at their self-interest. This is what they were doing in the past and they keep doing now. Their model is to collect as much data as they can and find ways to sell it to get the most profit out of it. That also means not disclosing a lot of things that are going on on the platform, because these might make them accountable and also impose restrictions on their business model.
BG: And in fact, in the Cambridge Analytica affair, there were a number of universities and the United States Senate trying to look into how it could have all taken place. Facebook vowed transparency, but instead actually tried to stymie some researchers by failing to make its API fully available and so on. How cooperative were they even when they were most in the crucible following Cambridge Analytica?
OP: Generally, I think that the transparency efforts of Facebook belong more to the marketing part of the company rather than an actual effort of the company to be more open with scientists and policy makers and so on. So they always try to give minimal data under rules that protect them 100 percent. And also the quality of the data and information they provide usually is not able to answer key questions about the nature of the platform, how it affects society, democracy and so on.
BG: All right. Let's talk about your work at the Center for Information Technology Policy at Princeton.
According to your center's website, your research, quote, "provides ideas, frameworks and practical solutions towards just, inclusive and participatory socioalgorithmic ecosystems through the application of data intensive algorithms and social theories." So what do you do?
OP: So, for example, in the case of Facebook and online platforms in general, we try to understand how the tools and the algorithms they deploy are used politically, to place political ads and influence public opinion. And as part of it, we look at Facebook, Google and YouTube, which belongs to Google, for example — or other platforms like TikTok, which are used a lot for political communication — and we ask who has access to the tools of the platforms, how do the tools of the platforms function and what effects they might have on society. Like, who sees an ad and why, why they don't see an ad, whether there is potential for discrimination, whether there are other issues that may come as a side effect of seeing specific ads, and further research questions.
BG: Now, I want to go back very briefly to the NYU people. Facebook claimed they had offered those researchers an alternative method with its very own FORT researcher platform, which, in the name of science and transparency and societal good, it beneficently makes available to scholars. In fact, FORT stands for Facebook Open Research and Transparency. But you read that Mike Clark blog post about NYU and you were like, yeah, right, because you and your team tried to take the FORT and found it heavily defended.
OP: Exactly. They said, first, they have a political ads library that is open to the public, and they also provide the FORT data set, to which researchers can get access. And to start with the minor thing, the political ads library is too general and does not actually provide information about who placed an ad to whom. You can more or less see some general statistics about ads, like the general demographics and location of who saw it, as well as the content of the ad.
BG: It seems to me as if someone was being investigated for murder and the person of interest says to the cops, here is the evidence you may choose from. I will provide this. You can use this and only this for making your case.
OP: Exactly, that's the one thing. And they also claim that they have the FORT data set. And it's interesting, because back in February, the group I am in tried to get access to that data set, and they provided us with a contract which we had to sign in order to get the data set, without telling us what the data set includes. And this agreement that Facebook gave us said that Facebook actually possesses the power to decide if our research can get published or not. So we could do some research, they could review it, and then they would say, OK, this is publishable or this is not, otherwise you need to remove this or that, and so on. Which we found really problematic. Research needs to be free, otherwise it becomes censored. And we asked them, first, can you tell us more? We cannot sign a contract without knowing what data we are getting, of course. And second, are we going to have the freedom to answer our research question? And Facebook's first answer was: we are not able to negotiate the terms we are proposing, because this is mandated by the FTC and the Cambridge Analytica scandal. Which of course did not hold.
The FTC decrees don't say anything about how researchers can access Facebook data.
BG: When Facebook played the FTC card last week, you were like, oh, I've seen this movie before. They're invoking government regulation that, in fact, doesn't regulate the thing that they're trying to hide.
OP: Exactly. And because we saw how they treated the NYU researchers, and we were frustrated that they used the FTC argument again, we said, OK, we need to speak up and talk about our own experience, because this cannot go on.
BG: So just to reiterate, it's a mystery package that you don't get to unwrap until you've signed an onerous contract, which specifies, among other things, that if Facebook doesn't like what you want to publish based on your access to FORT, then it just censors you. I want to return to the letter that the FTC official wrote to Mark Zuckerberg after the NYU controversy erupted last week. He addressed the subject of Facebook's trustworthiness to keep its word, not only over the long haul, but in any time period whatsoever. He observes, quote, "Only last week, Facebook's general counsel, Jennifer Newstead, committed the company to 'timely, transparent communication to Bureau of Consumer Protection staff about significant developments.' Yet the FTC received no notice that Facebook would be publicly invoking our consent decree to justify terminating academic research earlier this week. Had you honored your commitment to contact us in advance, we would have pointed out to you that the consent decree does not bar Facebook from creating exceptions for good-faith research in the public interest." They broke their promise and they did it absolutely immediately. How is anybody, in academia or elsewhere, supposed to deal with a company that appears to be out of control?
OP: I think that the answer is not company specific, but more general: there need to be regulations that define what data online platforms and tech companies should provide to researchers, as well as how, because it's not only about the data that Facebook holds, it's also the data that Google holds and all the other platforms. And although the focus is usually on Facebook, the other platforms also have a very high degree of opacity. So I do believe that policymakers and politicians need to step up and say, we need to bring regulation that forces Facebook and the other platforms to change how they function, to change what they disclose and what they do not.
BG: All right, so there was a 2012 consent decree in which Facebook promised to make corrections to how it does business. It violated that consent decree, leading to the 2019 update, which expanded the government regulation and also fined them five billion dollars. Now, I know you're a data scientist and an engineer, but I'm going to ask you now to be a lawyer, too, because in the 2019 decree, the FTC said, quote, "It is ordered that respondent (that's Facebook) in connection with any product or service, shall not misrepresent in any manner, expressly or by implication, the extent to which the respondent maintains the privacy or security of covered information" — including, and this skips a few lines, "the extent to which the respondent makes or has made covered information accessible to third parties." Now, I'm not a lawyer either, but it seems to me that what happened last week with NYU is explicitly a violation of that clause. They misrepresented the way they treat covered information (data, that is) under the pretext of privacy or security.
Is there going to be a 2021 update to the 2019 update to the 2012 order?
OP: I'm not a lawyer, but Facebook tries to exploit ambiguity in ways that conform to their interests. And for example, that is to say: we are protecting users' privacy in order to not allow the NYU researchers to understand how their tools are used.
BG: All right. You say ambiguity. This looks pretty expressly stated to me, but I guess this isn't your table. I will ask you what this all means. What are the implications of this dust-up involving a, you know, relatively small research project? What are the implications for the rest of us?
OP: It is an issue for the academic community, because we as academics struggle to understand technological tools and how they affect society with very little help in general. And really, this tool has been invaluable for a lot of researchers and was a useful resource to understand Facebook ads, but generally it also shows how much power we have as academics. And we need to make calls to policymakers to change things, because the research and the knowledge we can extract will be useful for them and the rest of society.
BG: And concerning your work at Princeton, I know you haven't published yet, but I wonder if there's a sneak preview that you can offer of, if not your absolute findings, some interesting tidbits along the way.
OP: First, we find limitations, strong limitations, in what the data they provided can actually say. We find unexplainable moderation practices, like why ads were removed or not removed, although they define specific guidelines about how ads should be. We also find that a lot of ads are related to protected groups, and there are questions to understand how these protected groups were targeted and make political statements about it. But also, it's not only about our research. We are able to access only the data that Facebook gives through their political ads library. So there are thousands or even millions of ads that are placed and researchers cannot get access to them at all. And that's why NYU's project was such a great resource, because there was no other way to get information about these advertisements. I find it personally troubling that there is so much opacity about online ads, when for other ads, like on TV or on radio, you get so much information. And I know there are legal and historical reasons why they are treated differently, but they should not be.
BG: I want to ask you one final thing, Orestis. Like the wildfires that right now are ravaging Greece and California and elsewhere around the world, authoritarianism is raging. Disinformation has become not just an art, but a science. Millions and millions of people are foolishly swallowing lies and disinformation fed to them by cynical politicians. The world is literally in flames. Why do companies like Facebook not rush to provide whatever data they can in support of better academic understanding of what is happening on our screens and in our psyches?
OP: I think they follow the idea of: the less we provide, the safer we are. I do believe that if we had access to data, we could find positive effects of social media as well. So I don't believe that everything is bad.
It's not black and white, but I think they believe that the less they give, the more protected they are, because they are afraid that if a very strong regulation is passed, they will lose the ability to keep having the same business model they have until today, with the same profits.
BG: Orestis, thank you so much for joining me.
OP: Thanks for having me, Bob.
BG: Orestis Papakyriakopoulos is a postdoctoral research associate at Princeton University's Center for Information Technology Policy. Papakyriakopoulos was perhaps admirably circumspect in casting doubt more on capitalist self-interest than on Facebook per se. But whenever these blowups occur, I think back to the first scene of the 2010 movie The Social Network, in which Zuckerberg, played by actor Jesse Eisenberg, is getting dumped by his girlfriend.
GIRLFRIEND: You are probably going to be a very successful computer person. You're gonna go through life thinking that girls don't like you because you're a nerd. And I want you to know from the bottom of my heart that that won't be true. It'll be because you're an a*****e.
BG: OK, we're done here. Before I sign off, though, I must remind you, I must implore you to comment, rate, share what you've heard here today. And not just Bully Pulpit, but the other Booksmart Studios shows like John McWhorter's Lexicon Valley and Amna Khalid's Banished, both of which programs are like, whoa — tell friends, neighbors, family members, stop strangers on the street. The success of Booksmart, the impact of our work, depends as much on you as on us. So please spread the word. Also, if you become a paying subscriber to Booksmart Studios, you will get extended interviews, additional content of other kinds, access to the hosts and, in my case, continued access to my weekly column, which is, for the moment, free to sample. Now then, Bully Pulpit is produced by Mike Vuolo and Matthew Schwartz. Our theme was composed by Julie Miller and the team at Harvest Creative Services in Lansing, Michigan. Chris Mandra and N'Dinga Gaba are our audio engineers. Bully Pulpit is a production of Booksmart Studios. I'm Bob Garfield. Get full access to Bully Pulpit at bullypulpit.substack.com/subscribe
In this episode of "Keen On", Andrew is joined by Barton Gellman, the author of "Dark Mirror: Edward Snowden and the American Surveillance State", to discuss the top secret establishment of the surveillance state. Barton Gellman, a staff writer at The Atlantic, is the author most recently of Dark Mirror: Edward Snowden and the American Surveillance State and the bestselling Angler: The Cheney Vice Presidency. He has held positions as senior fellow at The Century Foundation, Lecturer at Princeton’s Woodrow Wilson School and visiting research collaborator at Princeton’s Center for Information Technology Policy. Before joining The Atlantic, Gellman spent 21 years at The Washington Post, where he served tours as legal, diplomatic, military and Middle East correspondent. Gellman anchored the team that won the 2014 Pulitzer Prize for Public Service for coverage of the National Security Agency and Edward Snowden. He was previously awarded the 2008 Pulitzer Prize for National Reporting for a series on Vice President Dick Cheney. In 2002, he was a member of the team that won the Pulitzer Prize for National Reporting for coverage of the 9/11 attacks and their aftermath. Other professional honors include two George Polk Awards, two Overseas Press Club awards, two Emmy awards for a PBS Frontline documentary, Harvard’s Goldsmith Prize for Investigative Reporting and the Los Angeles Times Book Prize. Gellman graduated with highest honors from Princeton University and earned a master’s degree in politics at University College, Oxford, as a Rhodes Scholar. He lives in New York City. Learn more about your ad choices. Visit megaphone.fm/adchoices
We are pleased to announce our newest session of The Library is Open Podcast! This session features an interview with R. David Lankes, the director of the University of South Carolina's School of Information Science and recipient of the American Library Association's 2016 Ken Haycock Award for Promoting Librarianship. About R. David Lankes: R. David Lankes is a professor and the director of the University of South Carolina's School of Information Science. Lankes has always been interested in combining theory and practice to create active research projects that make a difference. His work has been funded by organizations such as The MacArthur Foundation, The Institute for Library and Museum Services, NASA, The U.S. Department of Education, The U.S. Department of Defense, The National Science Foundation, The U.S. State Department, and The American Library Association. Lankes is a passionate advocate for libraries and their essential role in today's society, earning him the American Library Association's Ken Haycock Award for Promoting Librarianship in 2016. He also seeks to understand how information approaches and technologies can be used to transform industries. In this capacity he has served on advisory boards and study teams in the fields of libraries, telecommunications, education, and transportation, including at the National Academies. He has been a visiting fellow at the National Library of Canada and The Harvard School of Education, and was the first fellow of ALA's Office for Information Technology Policy. His book, The Atlas of New Librarianship, won the 2012 ABC-CLIO/Greenwood Award for the Best Book in Library Literature. Read more about David on his website!
This week, Shane interviews Democratic Congressional Candidate Tracy Mitrano. Mitrano is running in New York's 23rd Congressional district, which includes eleven counties spanning New York's Southern Tier and Finger Lakes regions. The district is currently represented by Republican Tom Reed, who is seeking reelection. Mitrano was born and raised in Rochester, NY and earned a PhD from Binghamton University and a law degree from Cornell University. Mitrano has served as an assigned counsel attorney for many years in New York and has taught American History, Family and Social Policy, and Internet Law. Mitrano has also worked as Director of Information Technology Policy at Cornell and has consulted in information management, security, and privacy. Mitrano has previously run for Congress in the same district in 2018. This interview focuses on Mitrano's fundraising and polling in comparison to Tom Reed, her positions on further COVID stimulus funding, internet regulation, and campaign finance, and why she thinks she is the better choice for Congress. You can find out more about Tracy Mitrano at https://www.mitrano2020.com/ NEW - We now have a Patreon account! Patreon is a platform that will allow you to support us financially. We are a small operation trying to grow our presence and provide necessary policy discussions and informative interviews to the people of upstate NY and the Otsego County area. Check us out on Patreon, and consider supporting us! We appreciate it! - Shane & Ben https://www.patreon.com/upstateregular Make sure you subscribe to the podcast on your favorite podcast app, leave us a review, and follow us on Facebook and Twitter! Support the show (https://www.patreon.com/upstateregular)
Bio: Philip N. Howard (@pnhoward) is the Director of the Oxford Internet Institute and a statutory Professor of Internet Studies at Balliol College at the University of Oxford. Howard investigates the impact of digital media on political life around the world, and he is a frequent commentator on global media and political affairs. Howard’s research has demonstrated how new information technologies are used in both civic engagement and social control in countries around the world. His projects on digital activism, computational propaganda, and modern governance have been supported by the European Research Council, National Science Foundation, US Institute of Peace, and Intel’s People and Practices Group. He has published nine books and over 140 academic articles, book chapters, conference papers, and commentary essays on information technology, international affairs and public life. His articles examine the role of new information and communication technologies in politics and social development, and he has published in peer-reviewed journals such as the American Behavioral Scientist, the Annals of the American Academy of Political and Social Science, and The Journal of Communication. His first book on information technology and elections in the United States is called New Media Campaigns and the Managed Citizen (New York: Cambridge University Press, 2006). It is one of the few books to ever win simultaneous “best book” prizes from the professional associations of multiple disciplines, with awards from the American Political Science Association, the American Sociological Association, and the International Communication Association. His authored books include The Digital Origins of Dictatorship and Democracy (New York, NY: Oxford University Press, 2010), Castells and the Media (London, UK: Polity, 2011), Democracy’s Fourth Wave? Digital Media and the Arab Spring (New York, NY: Oxford University Press, 2012, with Muzammil Hussain), and Pax Technica: How the Internet of Things May Set Us Free or Lock Us Up (New Haven, CT: Yale University Press, 2015). He has edited Society Online: The Internet in Context (Thousand Oaks, CA: Sage, 2004, with Steve Jones), the Handbook of Internet Politics (London, UK: Routledge, 2008, with Andrew Chadwick), State Power 2.0: Authoritarian Entrenchment and Political Engagement Worldwide (Farnham, UK: Ashgate, 2013, with Muzammil Hussain) and Computational Propaganda: Political Parties, Politicians and Manipulation on Social Media (New York, NY: Oxford University Press, 2018, with Samuel Woolley). Howard has had senior teaching, research, and administrative appointments at universities around the world. He has been on the teaching faculty at the Central European University, Columbia University, Northwestern University, the University of Oslo, and the University of Washington. He has had fellowship appointments at the Pew Internet & American Life Project in Washington D.C., the Stanhope Centre for Communications Policy Research at the London School of Economics, the Center for Advanced Study in the Behavioral Sciences at Stanford University, and the Center for Information Technology Policy at Princeton University. From 2013-15 he helped design and launch a new School of Public Policy at Central European University in Budapest, where he was the school’s first Founding Professor and Director of the Center for Media, Data and Society.
He currently serves as Director of the Oxford Internet Institute at Oxford University, the leading center of research and teaching on technology and society. Howard’s research and commentary writing have been featured in the New York Times, Washington Post, and many international media outlets. He was awarded the National Democratic Institute’s 2018 “Democracy Prize” and Foreign Policy magazine named him a “Global Thinker” for pioneering the social science of fake news production. His B.A. is in political science from Innis College at the University of Toronto, his M.Sc. is in economics from the London School of Economics, and his Ph.D. is in sociology from Northwestern University. His website is philhoward.org. Resources: Philip Howard, Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives (2020)
Podcast miniseries on the intersection of artificial intelligence and human rights, framed by a conference on the topic at Princeton University in April 2019. The event was jointly put on by the Princeton Center for Information Technology Policy and the UN Special Rapporteur for Human Rights and Extreme Poverty. This series is meant to build your foundation on the topics of human rights and AI, and to introduce you to the critical issues we face where the topics come together. Integrated into the episodes are current examples and clips from world experts. Together we will get at the key questions at the frontier of AI and human rights, globally. The podcast series was created and produced by Stevie Bergman. More information and resources can be found at tvr2c.com/aihrpod Email: aihrpod@gmail.com Copyright Stevie Bergman 2020 Music is Paco's Dream https://soundcloud.com/aihrpod
HELLO! It's great to be back! In this episode I interview Professor Ruha Benjamin on her book Race After Technology. Tune in to hear about the intersection of race, inequality and technology -- some scary stuff, but Ruha Benjamin breaks it down in a way that will leave you feeling empowered!! If you've ever wondered: "Are Robots Racist?" I've got the answer for you! -- Also, if you wish to borrow my copy of the book, let me know! Thank you all for the patience and love, wash your hands! For comments, questions or concerns (within reason), hit me up on IG @PopularEducationRadio or email me at PopularEducationRadio@gmail.com. "I am an Associate Professor in the Department of African American Studies at Princeton University where I study the social dimensions of science, technology, and medicine, race and citizenship, knowledge and power. I am also the founder of the JUST DATA Lab, and a Faculty Associate in the Center for Information Technology Policy, Program on History of Science, Center for Health and Wellbeing, Program on Gender and Sexuality Studies, and Department of Sociology. I serve on the Executive Committees for the Program in Global Health and Health Policy and Center for Digital Humanities." For more information, please go to RuhaBenjamin.com
Ari Ezra Waldman is a Professor of Law at New York Law School and is currently the Microsoft Visiting Professor of Information Technology Policy at Princeton University. He received his undergraduate and law degrees from Harvard, and his master's and Ph.D. in sociology from Columbia University. Waldman’s research focuses on privacy, technology design, online speech, and marginalized communities. He’s the author of Privacy as Trust, which UC Boulder professor Scott Skinner-Thompson described as “a must read for anyone interested in saving privacy in the digital age.” More at: notesonquotes.com Instagram: @notesonquotes Facebook.com/notesonquotes Twitter: @noteson_on_quotes
Listen to our talk between Nick Hawtin and Professor Deven Desai about law, technology, and the future of law and ethics! Professor Deven Desai joined the Scheller faculty in fall of 2014 in the Law and Ethics Program. Prior to joining Scheller, Professor Desai was an associate professor of law at the Thomas Jefferson School of Law. He was also the first, and to date only, Academic Research Counsel at Google, Inc., and a Visiting Fellow at Princeton University's Center for Information Technology Policy.
(0.5 General California MCLE) Rapid advances in technology are changing how kids are learning in today's schools. Yet with the rise of education tech, what kinds of information are companies collecting from children and how is this sensitive data being stored and protected? Professor Reidenberg sheds light on this murky terrain and suggests a path forward. Professor Joel Reidenberg holds the Stanley D. and Nikki Waxberg Chair in Law at Fordham University School of Law where he is the Founding Academic Director of the Center on Law and Information Policy at Fordham Law School. He served as the inaugural Microsoft Visiting Professor of Information Technology Policy at Princeton University teaching in the computer science department and more recently as faculty teaching cybersecurity policy at Princeton's Woodrow Wilson School.
Joanna J. Bryson is a transdisciplinary researcher on the structure and dynamics of human- and animal-like intelligence. Her research covers topics ranging from artificial intelligence, through autonomy and robot ethics, and on to human cooperation. Her work has appeared in venues ranging from Reddit to Science. Bryson's first degree is in Behavioural Science (non-clinical psychology) from Chicago (1986); she also holds an MSc in Artificial Intelligence and an MPhil in Psychology from Edinburgh (1992, 2000) and a PhD in Artificial Intelligence from MIT (2001). She joined Bath in 2002 in their Department of Computer Science, where she was promoted to Reader (tenured associate professor) in 2010. She founded and for several years led their Intelligent Systems research group, and is affiliated with Bath's Institutes for Policy Research and Mathematical Innovation, as well as their Centres for Networks and Collective Behaviour and for Digital Entertainment. Joanna is currently the Professor of Ethics and Technology at the Hertie School of Governance in Berlin. She has held visiting academic positions with Princeton's Center for Information Technology Policy, the Mannheim Centre for Social Science Research (MZES, 2011-2014), the Department of Anthropology at Oxford (Harvey Whitehouse's Explaining Religion project, 2010-2011), the Methods & Data Institute at Nottingham (agent-based modelling in political science, 2007-2008), and the Konrad Lorenz Institute for Evolution & Cognition Research in Austria (on the biological origins of culture, 2007-2009). Before coming to Bath, she conducted academic research in Edinburgh's Human Communication Research Centre (1999-2000) and Harvard's Department of Psychology (2001-2002). Additionally, she has professional experience in Chicago's financial industry (1986-1991), international organization management consultancy (1993-1994), and industrial AI research (for LEGO, 1995, 1998). Bryson has served on the Senate, Council, and Court of the University of Bath, representing the Academic Assembly. She is presently a member of the College of the British Engineering and Physical Sciences Research Council (EPSRC) and serves as a member of the editorial board for several academic journals, including Adaptive Behaviour, AI & Society, Connection Science, and The International Journal of Synthetic Emotions.
Episode 7: Why Net Neutrality Matters for Nonprofits Despite substantial public opposition, Obama-era regulations securing Net Neutrality – a principle that essentially bars Internet Service Providers (ISPs) from playing favorites with different websites – were rolled back by the FCC on June 11, 2018. These changes could pave the way for a new, highly manipulated user experience: movements, media reports, resources, and more that ISPs – or their investors – don’t like, or that don’t make them any money, could end up on the other side of a slow connection. What could this mean for nonprofits – and for the communities they serve? Katharine Trendacosta, Policy Analyst at the Electronic Frontier Foundation, and Larra Clark, Deputy Director for both the Public Library Association (PLA) and the American Library Association’s (ALA) Office of Information Technology Policy, join the hosts.
The recording took place at the University of Cambridge and was made possible by the Leverhulme Centre for the Future of Intelligence. In the episode, Joanna and I talk about the meaning of intelligence and mind, when AI started being taken seriously by media, Europe’s place in the global AI landscape, and a lot more. Joanna Bryson is a widely recognised academic expert in intelligence, from the perspectives of computer science, psychology, and biology. She is employed at the University of Bath, where she teaches Intelligent Control and Cognitive Systems. Joanna is also a fellow at Princeton’s Center for Information Technology Policy. Joanna’s main focus and expertise lie in researching natural and artificial intelligence. Current projects include building accountable and transparent AI, and understanding cultural variation in human cooperation and economic behaviour. To explore Joanna’s work, please go to http://www.cs.bath.ac.uk/ and @j2bryson on Twitter. --- Send in a voice message: https://anchor.fm/bigfanofhumanrace/message
From smart phones to social media, digital technology has changed the way we live — allowing for new explorations of human behavior. Big data now enables scientists to process data about human behavior on a scale never before imaginable. In this episode, Professors Julian Zelizer and Sam Wang interview Matt Salganik, a professor of sociology at Princeton University. Salganik’s new book, “Bit by Bit: Social Research in the Digital Age,” explores these concepts, detailing how the digital revolution is transforming how social scientists observe behavior, ask questions, run experiments and engage in mass collaborations. Salganik is also affiliated with the Center for Information Technology Policy and the Center for Statistics and Machine Learning at Princeton University. His research has been funded by Microsoft, Facebook, and Google, and has been featured on NPR and in such publications as the New Yorker, The New York Times, and the Wall Street Journal.
Tiffany C. Li (@tiffanycli) is an attorney and Resident Fellow at Yale Law School's Information Society Project. She is an expert on privacy, intellectual property, and law and policy at the forefront of new technological innovations. Li leads the Wikimedia/Yale Law School Initiative on Intermediaries and Information, where she researches cutting-edge legal issues involving online speech, access to information, and Internet freedom. Li is also an Affiliate Scholar at Princeton's Center for Information Technology Policy.
In this episode: LogRhythm CISO James Carder is our featured interview. News from Secure64, LogRhythm, Ping Identity, Vector8 and Managed Methods. Full show notes here: https://www.colorado-security.com/news/2017/7/8/23-710-show-notes 100,000 Cosplay-ers can't be wrong We hope you've recovered from a week of cosplay, fireworks and flag waving, and you're ready to get back to the real world. The news never stops here in Colorado. Besides Comic Con, we got news that a local college placed in a national cybersecurity competition, Secure64 looks to secure DNS for major carriers, LogRhythm and Ping Identity win prizes, and blogs from Vector8 and Managed Methods. Sign up for our mailing list on the main site to receive weekly updates - https://www.colorado-security.com/. We're continually working to improve the show, and appreciate the feedback we get from our listeners. If you discover any audio issues, or have suggestions for our format, let us know. This week's episode is available on Soundcloud, iTunes and the Google Play store. Reach out with any questions or comments to info@colorado-security.com Feature interview: James Carder has had an interesting career on his way to Denver. From Air Force, to Raytheon, IBM, Mandiant, the Mayo Clinic, and finally LogRhythm here in Colorado, James has had a great career. He sat down with Alex to share with you all. Local security news: Denver Comic Con draws 115,000, packs a $10 million economic punch Colorado robotics team takes world championship, eyes next opponent Red Rocks CC wins 2nd in Community College Innovation Challenge Secure64 & Mavenir Team Up to Bring NFV-ready IMS to Carriers LogRhythm Clinches the 2017 Frost & Sullivan Asia-Pacific Enterprise Security Product Line Strategy Leadership Award Ping Identity named in 2017 KuppingerCole Leadership Compass on CIAM Managed Methods blog: Schools Using Google Need a Lesson in Cloud Security Vector8 blog: The Five Dimensions of Hunting Robb & Alex featured on this week's Cybersecurity Interviews podcast Job Openings: Ping Identity - GRC Analyst Trust Company of America - Sr. Manager, Information Technology Policy and Governance Aimco - Information Security Analyst Reed Group - Information Security Officer Trace 3 - Identity Architect Lares Sec Consulting - Application Security Consultant RedShield Security - Cloud Security Engineer Upcoming Events: This Week and Next: CSA - July Meeting - 7/11 Denver ISSA - July Meetings - 7/11-12 COS ISSA - Chapter meetings - 7/12-13 SecureSet - Expert Series, Bryan Becker, Kroenke Sports - 7/13 Colorado Innovation and Technology Experience - 7/13-14 OWASP - July meeting - 7/19 NCC - Cyber Center Charrette 2017 - 7/19-21 DenverSec - Meetup - 7/20 Other Notable Upcoming Events: COS ISSA - CISSP Exam Prep - 7/29 7th Annual Cyber Security Training & Technology Forum - 8/30-31 View our events page for a full list of upcoming events If you have any questions or comments, or any organizations or events we should highlight, contact Alex and Robb at info@colorado-security.com * Thanks to CJ Adams for our intro and exit! If you need any voiceover work, you can contact him here at carrrladams@gmail.com. Check out his other voice work here. * Intro and exit song: "The Language of Blame" by The Agrarians is licensed under CC BY 2.0
We are being told a simple story about the future of work: if a machine can record and imitate what you do, you will be replaced by it. Christened a “fourth industrial revolution,” a narrative of mass unemployment is now gripping policymakers. It envisions human workers rendered superfluous by ever-more-powerful software, robots, and predictive analytics. Substituting robots for workers may seem like an impossibly grandiose ambition. But its main problem is not impracticality—rather, it is not nearly grand enough. It is a vision of society built on a narrow consumerism. We don’t exist simply to be served. We want to serve others, to make a contribution, and to find some meaning in our daily activities. Another approach is possible—indeed, plausible. It is a future of robots and software complementing work, to make it better. I call technology that improves workers’ skills and opportunities “humane automation,” to be distinguished from other forms of technical advance that are indifferent to—or undermine—workers’ skills and wages. Fortunately, forms of humane automation are already taking root in many fields. As consumers and citizens, we can encourage this more inclusive and sustainable path. Enlightened policymakers and professionals can also re-channel the flow of commerce to respect, rather than replace, human initiative. Frank Pasquale, JD, MPhil, is an expert on the law and policy of big data, predictive analytics, artificial intelligence, and algorithms. He has advised government and business leaders on the health care, internet, and finance industries, including the U.S. Department of Health and Human Services, the U.S. House Judiciary Committee, the Federal Trade Commission, the Council of Institutional Investors, the Capitol Forum, and the European Commission. He has spoken on his book, The Black Box Society (Harvard University Press, 2015), at academic and policy venues around the world, including law, computer science, humanities, and social science departments. His work has recently been translated into Chinese, French, German, Korean, Hungarian, and Serbian, and he is routinely quoted in global media outlets. He has been recognised as one of the ten most-cited health law scholars in the United States. His current book project is tentatively titled Laws of Robotics: Revitalizing the Professions in an Era of Automation (under contract to Harvard University Press). He is currently a Visiting Fellow on the Technology and Democracy Project in CRASSH, an affiliate fellow at the Yale Information Society Project, and a fellow at the New Economy Law Center. He has served as a visiting fellow at Princeton’s Center for Information Technology Policy.
The digital revolution is generating massive amounts of information. And while this big data certainly benefits researchers and consumers, it also poses significant privacy concerns. In this WooCast episode, Tim Lee of Vox interviews Princeton professors Prateek Mittal and Matt Salganik about the benefits, risks and concerns related to big data. This episode is part of a series featuring moderators and panelists who will participate in the Princeton-Fung Global Forum: “Society 3.0+: Can Liberty Survive the Digital Age?” The conference, to be held March 20-21 in Berlin, is being organized by the Woodrow Wilson School of Public and International Affairs. Timothy B. Lee is a senior correspondent at Vox.com, where he covers technology and economics. He previously covered technology policy for The Washington Post and Ars Technica. He holds a master's degree in computer science from Princeton. Follow him on Twitter: @binarybits Prateek Mittal is an assistant professor in the Department of Electrical Engineering at Princeton University, where he is also affiliated with the Center for Information Technology Policy. His research aims to build secure and privacy-preserving communication systems. His research interests include the domains of privacy enhancing technologies, trustworthy social systems and internet/network security. His work has influenced the design of several widely used anonymity systems. Matthew Salganik is professor of sociology at Princeton University, and he is affiliated with several of Princeton's interdisciplinary research centers including the Office for Population Research, the Center for Information Technology Policy, the Center for Health and Wellbeing and the Center for Statistics and Machine Learning. His research interests include social networks and computational social science. He is the author of the forthcoming book "Bit by Bit: Social Research in the Digital Age."
Can liberty survive the digital age? In this episode, Princeton University professors Jennifer Rexford and Janet Vertesi discuss internet infrastructure and its effect on how people use the web as a vehicle for communication and information. This episode is part of a series featuring panelists who will participate in the Princeton-Fung Global Forum: “Society 3.0+: Can Liberty Survive the Digital Age?” The conference, to be held March 20-21 in Berlin, is being organized by the Woodrow Wilson School of Public and International Affairs. Jennifer Rexford is the Gordon Y.S. Wu Professor of Engineering, professor of computer science and chair of the Department of Computer Science at Princeton University. Before joining Princeton in 2005, she worked for eight years at AT&T Labs—Research. Jennifer received her bachelor's degree in electrical engineering from Princeton University in 1991 and her Ph.D. degree in electrical engineering and computer science from the University of Michigan in 1996. She is co-author of the book "Web Protocols and Practice" (Addison-Wesley, May 2001). She served as the chair of ACM SIGCOMM from 2003 to 2007. Jennifer was the 2004 winner of ACM's Grace Murray Hopper Award for outstanding young computer professional. She is an ACM fellow (2008) and a member of the American Academy of Arts and Sciences (2013) and the National Academy of Engineering (2014). Janet Vertesi is a sociologist of science and technology at Princeton University, where she is an assistant professor in the Department of Sociology. At Princeton, she teaches classes on the sociology of science and technology, technology in organizations and human-computer interaction. Vertesi has spent the past decade as an ethnographer of spacecraft missions at NASA, and is the author of "Seeing Like a Rover: How Robots, Teams and Images Craft Knowledge of Mars." In addition to her research on complex technical organizations, she nurtures a passion for public understanding about the intersection of technology and society, especially with respect to online privacy. Best known publicly for her “opt out” experiments that reveal underlying assumptions embedded in computing technologies, she is an advisory board member of the Data & Society Institute in New York City, has blogged extensively on the topic at Time.com and is a faculty affiliate of the Center for Information Technology Policy.
Building the Knowledge School

The rise of the information school movement has been seen as both a positive and a negative development in the preparation of librarians. Have undergraduate programs taken away resources and attention from the master's in library science? Has the growth of faculty with little or no understanding of libraries diluted the field? Dr. David Lankes lays out his thoughts for moving past these arguments to define a knowledge school: a school focused on impact in communities and built upon the values of librarians, but serving the needs of a broader information infrastructure.

Dr. David Lankes, Director, School of Library & Information Science, University of South Carolina

R. David Lankes is the director of the University of South Carolina’s School of Library & Information Science and the 2016-2017 Follett Chair at Dominican University’s Graduate School of Library and Information Science. David has always been interested in combining theory and practice to create active research projects that make a difference. His work has been funded by organizations such as The MacArthur Foundation, The Institute for Library and Museum Services, NASA, The U.S. Department of Education, The U.S. Department of Defense, The National Science Foundation, The U.S. State Department, and The American Library Association. David is a passionate advocate for libraries and their essential role in today’s society, earning him the American Library Association’s Ken Haycock Award for Promoting Librarianship in 2016. He also seeks to understand how information approaches and technologies can be used to transform industries. In this capacity he has served on advisory boards and study teams in the fields of libraries, telecommunications, education, and transportation, including at the National Academies. He has been a visiting fellow at the National Library of Canada and The Harvard School of Education, and was the first fellow of ALA’s Office for Information Technology Policy. His book, The Atlas of New Librarianship, won the 2012 ABC-CLIO/Greenwood Award for the Best Book in Library Literature.

www.against-the-grain.com
www.atgthepodcast.com
www.charlestonlibraryconference.com
David Robinson (@dgrobinson) is a Principal at Upturn, a public interest technology and policy consulting firm. Prior to co-founding Upturn, David was the Associate Director of Princeton's Center for Information Technology Policy. He also launched The American, a national magazine of business and economics at the American Enterprise Institute, growing The American's website to more than 1.5 million unique visits in its first year. David holds a JD from Yale, was a Rhodes Scholar, and graduated magna cum laude with a degree in Philosophy from Princeton.

In this episode we discussed: what predictive policing is; how predictive policing technologies fall short of their marketing claims; how predictive policing enables disparities within the criminal justice system (a toy simulation of one commonly cited feedback mechanism follows the news roundup below); and what policymakers should consider as they incorporate predictive policing technologies into their law enforcement activities.

Resources:
Upturn
Daily Rituals: How Artists Work by Mason Currey
Asana

NEWS ROUNDUP

Julia Angwin and Terry Parris Jr. at ProPublica raised a lot of red flags last week when they reported that Facebook allows advertisers to exclude audience segments on the basis of race. Angwin and Parris saw a chilling echo of race-based redlining, the practice that kept African Americans and other minority groups from buying homes in predominantly white neighborhoods. They purchased a Facebook ad targeting users who are house hunting and found that Facebook allowed them to exclude anyone who was African American, Asian-American or Hispanic. But the Fair Housing Act of 1968 makes it illegal “to make, print, or publish, or cause to be made, printed, or published any notice, statement, or advertisement, with respect to the sale or rental of a dwelling that indicates any preference, limitation, or discrimination based on race, color, religion, sex, handicap, familial status, or national origin.” Facebook says it does what it can to prevent discrimination.

----

Did FBI Director James Comey break the law when he announced, 11 days before the election, that his agency was re-opening the investigation into Hillary Clinton's emails? Several leading experts say he did. Passed in 1939, the Hatch Act bars federal employees, with a few exceptions such as the President, from engaging in activities that would impact the outcome of an election. Legal experts such as former Chief White House Ethics attorney Richard Painter, who filed a formal ethics complaint against Comey and the FBI last week, think there was no reason for Comey to make the disclosure other than to impact the outcome of the election. Comey did, however, announce to members of Congress on Sunday that the agency will not recommend charges against Hillary Clinton based on the emails discovered on former Congressman Anthony Weiner's computer. Lauren Hodges has the story for NPR. You should also read Painter's op-ed in The New York Times.

----

1.4 million people “checked in” to Standing Rock on Facebook, even though they weren't actually there, to support opponents of the Dakota Access Pipeline. Rumors had spread that the police were monitoring Facebook to crack down on protesters. But how exactly do the police use social media data to surveil protests? Jeff Landale has the analysis in The Christian Science Monitor.

----

A new University of Washington and Stanford study of 1,500 rides found Uber and Lyft drivers discriminate against black passengers. For example, black passengers waited 30% longer for rides—5 minutes and 15 seconds versus 4 minutes for white passengers—according to the study. The ride cancellation rate was also 6 points higher, at 10.1%, for black-sounding names compared to white-sounding names. Elizabeth Weise has the story at USA Today.

----

Mobile browsing has surpassed desktop browsing for the first time, according to a new report from StatCounter. Mobile browsing now accounts for over 51% of all online browsing activity. Check out Samuel Gibbs' story in The Guardian.

----

Black Lives Matter is opposing the Trans-Pacific Partnership, arguing the trade deal would further disenfranchise blacks by sending capital to nations with lower wages and poorer working conditions, allowing employers to avoid domestic courts, increasing mobility for workers with higher-paying jobs but no one else, and preventing the formation of unions.

----

AT&T had a tough legal and regulatory week. The Dodgers Channel, owned by Time Warner Cable, offered customers exclusive access to live Dodgers games. Even though Time Warner Cable owned the Dodgers Channel, the company attempted to license it to other cable providers, which would have given each licensee's customers access to the games. But, as Brian Fung reports in The Washington Post, the Department of Justice is now suing AT&T and its subsidiary, DirecTV, for colluding with their LA competitors, including Cox and Charter, to make sure none of them agreed to license the Dodgers Channel from Time Warner Cable. This way, the three companies could take comfort in knowing they wouldn't lose subscribers. Charter, of course, has since acquired Time Warner Cable.

----

Finally, the FCC says AT&T incorrectly interpreted FCC rules when it sued the City of Louisville in federal court for granting Google access to utility poles in order to build out its fiber network. AT&T had said the FCC's pole attachment rules pre-empt state rules. However, the FCC submitted a statement of interest to the Department of Justice saying the federal pole attachment rules do not pre-empt state rules at all and, in fact, defer to state regulations where states show they have the situation under control with their own rules. Jon Brodkin has the story in Ars Technica.
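As flagged in the episode notes above, one commonly cited mechanism by which place-based predictive policing can entrench disparities is a feedback loop: models trained on recorded incidents send patrols where past enforcement was heaviest, and patrols generate more recorded incidents in exactly those places. The snippet below is our own toy simulation of that argument, offered purely as an illustration; the district names, rates, and counts are invented and are not taken from the episode or from Upturn's work.

```python
import random

random.seed(0)

# Two districts with the *same* true incident rate, by construction.
TRUE_RATE = {"district_a": 0.10, "district_b": 0.10}
# A historical recording gap: district_a starts with more recorded incidents.
recorded = {"district_a": 12, "district_b": 8}

for day in range(200):
    # The single available patrol goes wherever past recorded incidents are highest.
    patrolled = max(recorded, key=recorded.get)
    for district, rate in TRUE_RATE.items():
        incident = random.random() < rate
        # An incident only enters the data if a patrol is there to record it.
        if incident and district == patrolled:
            recorded[district] += 1

print(recorded)  # district_a pulls further ahead despite identical true rates
```

Even with identical underlying rates, the district that starts with more recorded incidents absorbs all of the patrol attention, so the recorded gap widens rather than closes—the model's "predictions" end up mirroring past enforcement.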
Will Facebook play a decisive role in the 2016 presidential primaries? Should Twitter be blamed for the rise of the Islamic State? Has the Chinese government successfully marginalized political dissent by controlling the companies that run China’s Internet? The fast-evolving power relationships — and clashes — among governments, corporations, and other non-state actors across digital networks pose fundamental challenges to how we think about governance, accountability, security, and human rights. Without new approaches to governance and accountability by public as well as private actors, the Internet of the future will no longer be compatible with the defense and protection of human rights. Nor will its users — or governments — be any more secure. Fortunately, a nascent ecosystem of efforts is now experimenting with new ways to hold governments, companies, and other actors accountable when they exercise power across global networks. One such effort is the Ranking Digital Rights project, which sets forth a framework for measuring information and communication technology (ICT) companies’ commitments, policies, and practices affecting users’ freedom of expression and privacy. In this lecture, Ranking Digital Rights director Rebecca MacKinnon discusses the project’s Corporate Accountability Index as a concrete example of how stakeholders around the globe are working to create new frameworks, mechanisms, and processes for holding power accountable and promoting the protection of human rights in a digitally networked world.

Rebecca MacKinnon is a leading advocate for Internet users’ rights to online freedom of expression and privacy around the world. She is the author of the award-winning book Consent of the Networked: The Worldwide Struggle For Internet Freedom (Basic Books, 2012). Presently based at New America in Washington, D.C., she directs the Ranking Digital Rights project, whose Corporate Accountability Index ranks the world’s most powerful Internet and telecommunications companies on policies and practices affecting users’ freedom of expression and privacy. MacKinnon is co-founder of the citizen media network Global Voices, a borderless community of more than 800 writers, digital media experts, activists, and translators living around the world who give voice to the stories of marginalized and misrepresented communities and who advocate for the free expression rights of Internet users everywhere. She also serves on the board of the Committee to Protect Journalists and is a founding member of the Global Network Initiative, a multi-stakeholder organization focused on upholding principles of freedom of expression and privacy in the ICT sector. Fluent in Mandarin Chinese, MacKinnon was CNN’s Beijing bureau chief from 1998 to 2001 and Tokyo bureau chief from 2001 to 2003. Since leaving CNN in 2004 she has held fellowships at Harvard’s Shorenstein Center on the Press and Public Policy, the Berkman Center for Internet and Society, the Open Society Foundations, and Princeton’s Center for Information Technology Policy. For two years in 2007–08 she served on the faculty of the University of Hong Kong’s Journalism and Media Studies Centre, and taught as an adjunct lecturer at the University of Pennsylvania Law School in Fall 2013. She is also a visiting affiliate at the Annenberg School for Communication’s Center for Global Communications Studies. MacKinnon received her AB magna cum laude from Harvard University and was a Fulbright scholar in Taiwan. She presently lives in Washington, D.C.
Part 1 of 2. This year’s theme is “Cyber Security in a Globally Integrated World.” Cyber security is not something many of us think about every day. But — as recent international “hacking” events show — it is all around us. More and more personal data is being put online, from personal records to medical information to payment details. But the internet respects no country’s boundary. This conference will focus on the geopolitical, legal and technical aspects of cyber security in the modern world.
Speakers:
Rob D’Ovidio, Ph.D. - Associate Professor of Criminology and Justice Studies, Drexel University
Joseph Ingemi - Data Security and Privacy Consultant, IBM
Curtis K.S. Levinson - US Cyber Defense Advisor, NATO
Alan Tilles - Chairman of Telecommunications Entertainment, Shulman Rogers
Andrea M. Matwyshyn, Ph.D. - Microsoft Visiting Professor at the Center for Information Technology Policy at Princeton University
Part 2 of 2. This year’s theme is “Cyber Security in a Globally Integrated World.” Cyber security is not something many of us think about every day. But — as recent international “hacking” events show — it is all around us. More and more personal data is being put online, from personal records to medical information to payment details. But the internet respects no country’s boundary. This conference will focus on the geopolitical, legal and technical aspects of cyber security in the modern world.
Speakers:
Rob D’Ovidio, Ph.D. - Associate Professor of Criminology and Justice Studies, Drexel University
Joseph Ingemi - Data Security and Privacy Consultant, IBM
Curtis K.S. Levinson - US Cyber Defense Advisor, NATO
Alan Tilles - Chairman of Telecommunications Entertainment, Shulman Rogers
Andrea M. Matwyshyn, Ph.D. - Microsoft Visiting Professor at the Center for Information Technology Policy at Princeton University
Happy new year! I'm pleased to post the first show of the winter quarter, Show # 227, January 14, 2015, my interview with Solon Barocas, Postdoctoral Research Associate at Princeton's Center for Information Technology Policy and co-author of the article Big Data's Disparate Impact (with Andrew D. Selbst). Algorithmic computing and decision-making have entered our world far faster than our understanding of them has kept pace. In Solon's article, he takes a close look at the massively under-explored impact of algorithms on traditional forms of employment discrimination under Title VII of the Civil Rights Act (think discrimination on the basis of race or gender). Identifying both the technical and legal issues involved is a challenge, but this article does a wonderful job exposing the risks of algorithms in this space, which often (although not exclusively) include embedding human prejudices in the code itself. We examined these and other ramifications of algorithmic computing and civil rights discrimination in our discussion. I greatly enjoyed it (recorded at Princeton!) and hope that you find it illuminating. {Hearsay Culture is a talk show on KZSU-FM, Stanford, 90.1 FM, hosted by Center for Internet & Society Resident Fellow David S. Levine. The show includes guests and focuses on the intersection of technology and society. How is our world impacted by the great technological changes taking place? Each week, a different sphere is explored. For more information, please go to http://hearsayculture.com.}
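The legal core of the article is disparate impact doctrine, and a quick numerical sketch helps make the idea concrete. One common first-pass screen in the employment context is the EEOC's "four-fifths" rule of thumb: if a selection procedure's pass rate for one group falls below 80% of the rate for the most-favored group, the procedure deserves scrutiny. The figures below are invented for illustration and do not come from Barocas and Selbst's article.

```python
def selection_rate(hired: int, applicants: int) -> float:
    """Fraction of applicants from a group who pass the screen."""
    return hired / applicants

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the most-favored group's rate."""
    return group_rate / reference_rate

# Invented example counts, not data from the article.
rate_a = selection_rate(hired=45, applicants=300)   # 15.0%
rate_b = selection_rate(hired=90, applicants=400)   # 22.5%

ratio = adverse_impact_ratio(rate_a, rate_b)
print(f"adverse impact ratio: {ratio:.2f}")         # 0.67, below the 0.80 threshold
```

The concern the episode explores is that a data-driven hiring screen can produce skewed ratios like this even when no one has written a protected attribute into the model, because the prejudices are already baked into the data and code it relies on.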
This talk will examine several technical questions related to the NSA's program that collects data about a substantial fraction of all domestic phone calls. How effective is such a program likely to be in identifying potential terrorists or clearing up false suspicion? How easily can enemies evade the program? Can the program be redesigned to better protect privacy, without losing effectiveness? About the speaker: Ed Felten is the Robert E. Kahn Professor of Computer Science and Public Affairs at Princeton University, and the Director of Princeton's Center for Information Technology Policy. His research interests include computer security and privacy, network technologies, and public policy issues relating to computing. In 2011-12 he served as the first Chief Technologist at the Federal Trade Commission. He is a Fellow of the ACM and a member of the National Academy of Engineering.
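The abstract stays at the level of questions, but the analytic operation usually associated with bulk telephone-metadata programs is contact chaining: start from a seed number and sweep in every number within a few hops of it in the call graph. The sketch below is a minimal illustration of that idea, with an invented toy graph; it is not a description of the NSA's actual system, only a way to see why the reachable set, and hence the privacy exposure, grows quickly with each additional hop.

```python
from collections import deque

def contact_chain(call_graph, seed, hops):
    """Return every number within `hops` calls of `seed` (breadth-first search)."""
    seen = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        number, depth = frontier.popleft()
        if depth == hops:
            continue
        for contact in call_graph.get(number, ()):
            if contact not in seen:
                seen.add(contact)
                frontier.append((contact, depth + 1))
    return seen

# An invented toy call graph; real call graphs have hub numbers with huge degree.
toy_graph = {
    "555-0001": {"555-0002", "555-0003"},
    "555-0002": {"555-0004", "555-0005"},
    "555-0003": {"555-0006"},
    "555-0004": {"555-0007", "555-0008"},
}

print(len(contact_chain(toy_graph, "555-0001", hops=2)))  # 6 numbers within two hops
```

On realistic call graphs, two or three hops from a single seed can already reach a very large number of people, which bears directly on both the effectiveness and the privacy questions Felten raises.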
Steve Schultze is a busy fellow. He is a fellow at the Berkman Center for Internet and Society. He recently joined the Princeton Center for Information Technology Policy as Associate Director. He also is one of the developers behind RECAP – an ambitious and provocative project that seeks to bring publicly available digital court records out from behind a costly paywall. What is RECAP? Find out on this week’s episode! And why are there fees for court records? Steve also just dropped a great working paper that goes into more detail on the topic. If you’re in Washington, DC next week, catch Steve’s talk on RECAP at the O’Reilly Gov 2.0 conference on Tuesday, September 8. Naturally we think Steve will make a terrific addition to the Princeton team — congrats, Steve! — and, while we’re sad to lose him, we’re looking forward to stronger ties to CITP and opportunities to collaborate and partner in the future.
CC-licensed music this week:
Neurowaxx – Pop Circus
General Fuzz – Acclimate
Electronic voting machines have made our elections less reliable and less secure, but recent developments offer hope of a better system: current research points toward a future voting system that is more reliable and more secure than ever before, at reasonable cost, combining high-tech and low-tech methods so that each can compensate for the weaknesses of the other. This talk will sketch what this future might look like, and will highlight some of the research that may make it possible. About the speaker: Edward W. Felten is a Professor of Computer Science and Public Affairs at Princeton University, and is the founding Director of Princeton's Center for Information Technology Policy. His research interests include computer security and privacy, especially relating to media and consumer products; and technology law and policy. He has published about eighty papers in the research literature, and two books. His research on topics such as web security, copyright and copy protection, and electronic voting has been covered extensively in the popular press. His weblog, at freedom-to-tinker.com, is widely read for its commentary on technology, law, and policy. He was the lead computer science expert witness for the Department of Justice in the Microsoft antitrust case, and he has testified in other important lawsuits. He has testified before the Senate Commerce Committee on digital television technology and regulation, and twice testified about electronic voting before House committees. In 2004, Scientific American magazine named him to its list of fifty worldwide science and technology leaders. He was recently named an ACM Fellow.
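The abstract does not spell out which high-tech and low-tech pieces Felten has in mind, but one widely discussed combination is electronic tallying backed by voter-verified paper records, with post-election audits that hand-count a random sample of precincts and compare it against the machine totals. The sketch below is purely illustrative, with invented tallies and a deliberately naive fixed-size sample; real proposals use more careful statistics, such as risk-limiting audits.

```python
import random

random.seed(1)

# Invented machine tallies for 20 precincts (candidates A and B).
machine_tallies = {f"precinct_{i}": {"A": 400 + i, "B": 380 - i} for i in range(20)}

# The voter-verified paper records agree everywhere except one miscounting machine.
paper_tallies = dict(machine_tallies)
paper_tallies["precinct_7"] = {"A": 350, "B": 430}

def audit(machine, paper, sample_size=5):
    """Hand-count a random sample of precincts and report any machine/paper mismatches."""
    sampled = random.sample(sorted(machine), sample_size)
    return [p for p in sampled if machine[p] != paper[p]]

mismatches = audit(machine_tallies, paper_tallies)
print(mismatches or "sampled precincts match; widen the sample if the margin is tight")
```

The design point is the complementarity the talk describes: the machines provide speed, the paper provides a ground truth the software cannot silently alter, and the audit decides how much of that paper must be examined before the electronic result can be trusted.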