Podcasts about Electronic Privacy Information Center

  • 46 PODCASTS
  • 63 EPISODES
  • 37m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • LATEST: May 13, 2025
Electronic Privacy Information Center

POPULARITY

(Popularity trend chart, 2017-2024)


Best podcasts about Electronic Privacy Information Center

Latest podcast episodes about Electronic Privacy Information Center

The Leading Voices in Food
E271: Grappling with digital food and beverage marketing to youth

The Leading Voices in Food

Play Episode Listen Later May 13, 2025 29:15


So even the people who follow the topic closely are stunned by the digital landscape that engulfs our children, how quickly it evolves, and the potential social cost. Two people in a unique position to explain all this are our guests today, Jeffrey Chester and Kathryn Montgomery, both from the Center for Digital Democracy. Jeff is executive director of the Center, and Kathryn is its research director and senior strategist, as well as professor emerita of communication at American University. Jeff and Kathryn have been pioneers in this work and have been uniquely strong voices for protecting children.

Interview Summary

Let me congratulate the two of you for being way ahead of your time. The two of you, through your research and your advocacy and your organizational work, were onto these things way before most people were. I'm really happy that you're joining us today, and welcome to our podcast. Kathryn, let me begin with you. So why be concerned about this digital landscape?

Kathryn - Well, certainly if we're talking about children and youth, we have to pay attention to the world they live in. And it's a digital world, as I think any parent knows, and everybody knows. In fact, for all of us, we're living in a digital world. So young people are living their lives online. They're using mobile phones and mobile devices all the time. They're doing online video streaming. They form their communications with their peers online. Their entire lives are completely integrated into this digital media landscape, and we must understand it. Certainly, the food and beverage industry understands it very well. And it has figured out enormously powerful ways to reach and engage young people through these digital media.

You know, the extent of the kids' connection to this is really remarkable. I just finished a few minutes ago recording a podcast with two people involved with the Children and Screens organization. Chris Perry, who's the executive director of that organization, and Dimitri Christakis, who was with us as well, were saying that kids sometimes check their digital media 300 times a day. I mean, just unbelievable how much of this there is. There are a lot of reasons to be concerned. Let's turn our attention to how bad it is, what companies are doing, and what might be done about it. So, Jeff, tell us if you would about the work of the Center for Digital Democracy.

Jeff - Well, for more than a quarter of a century, we have tracked the digital marketplace. As you said at the top, we understood in the early 1990s that the internet, broadband, what's become today's digital environment, was going to be the dominant communications system. And it required public interest rules and policies and safeguards. So as a result, one of the things that our Center does is we look at the entire digital landscape as best as we can, especially what the ultra-processed food companies are doing, but including Google and Meta and Amazon and GenAI companies. We are tracking what they're doing, how they're creating the advertising, what their data strategies are, what their political activities are in the United States and in many other places in the world. Because the only way we're going to hold them accountable is if we know what they're doing and what they intend to do. And just to quickly follow up, Kelly: the marketers call today's global generation of young people Generation Alpha, meaning that they are the first generation to be born into this complete digital landscape environment that we have created.
And they have developed a host of strategies to target children at the earliest ages to take advantage of the fact that they're growing up digitally.

Boy, pretty amazing - Generation Alpha. Kathryn, I have kind of a niche question I'd like to ask you, because it pertains to my own career as well. You spent many years as an academic studying and writing about these issues, but you were also a strong advocacy voice. How did you go about balancing the research and the objectivity of an academic with the advocacy you were doing?

Kathryn - I think it really is rooted in my fundamental set of values about what it means to be an academic. And I feel very strongly and believe very strongly that all of us have a moral and ethical responsibility to the public. That the work we do should really, as I have always told my students, try to make the world a better place. It may seem idealistic, but I think it is what our responsibility is. And I've certainly been influenced in my own education by public scholars over the years who have played that very, very important role. It couldn't be more important today than it has been over the years. And I think particularly if you're talking about public health, I don't think you can be neutral. You can have systematic ways of assessing the impact of food marketing, in this case on young people. But I don't think you can be totally objective and neutral about the need to improve the public health of our citizens. And particularly the public health of our young people.

I agree totally with that. Jeff, let's talk about the concept of targeted marketing. We hear that term a lot. And in the context of food, people talk about marketing aimed at children as one form of targeting. Or toward children of color, or people of color in general. But that's, in a way, technological child's play. I understand from you that there's much more precise targeting than a big demographic group like that. Tell us more.

Jeff - Well, certainly the ultra-processed food companies are on the cutting edge of using all the latest tools to target individuals in a highly personalized way. And if I have one message to share with your listeners and viewers, it is that if we don't act soon, we're going to make an already vulnerable group even more exposed to this kind of direct, targeted, and personalized marketing. Because what artificial intelligence allows the food and beverage companies and their advertising agencies and platform partners to do is to really understand who we are, what we do, where we are, how we react, behave, think, and then target us accordingly, using all those elements in a system that can create this kind of advertising and marketing in minutes, if not eventually milliseconds. So, all of marketing, in essence, will be targeted, because they know so much about us. You have an endless chain of relationships between companies like Meta, companies like Kellogg's, the advertising agencies, the data brokers, the marketing clouds, et cetera. Young people especially, and communities of color and other vulnerable groups, have never been more exposed to this kind of invasive, pervasive advertising.

Tell us how targeted it can be. I mean, let's take an 11-year-old girl who lives in Wichita and a 13-year-old boy who lives in Denver. How much do the companies know about those two people as individuals? And how does the targeted marketing reach them? Not because they belong to a big demographic group, but because of who they are as individuals.
Jeff - Well, they certainly are identified in various ways. The marketers know that there are young people in the household. They know that there are young people, parts of families, who have various media behaviors. They're watching these kinds of television shows, especially through streaming, or listening to music, or on social media. Those profiles are put together. And even when the companies say they don't exactly know who the child is, or are not collecting information from someone under 13 because of the privacy law that we helped get enacted, they know where they are and how to reach them. So, what you've had is an unlimited amassing of data power developed by the food and beverage companies in the United States over the last 25 years. Because really very little has been put in their way to stop them from what they do and plan to do.

So presumably you could get some act of Congress put in to forbid the companies from targeting African American children or something like that. But it doesn't sound like that would matter, because they're so much more precise in their marketing.

Jeff - Yes. I mean, in the first place you couldn't get Congress to pass that. And I think this is the other thing to think about when you think about the food and beverage companies deploying Generative AI and the latest tools. They've already established vast, what they call insights divisions, market research divisions, to understand our behavior. But now they're able to put all that on a fast-forward basis because of data processing, because of data clouds, let's say, provided by Amazon, and other kinds of tools. They're able to really generate how to sell to us individually, what new products will appeal to us individually, and even create the packaging and the promotion to be personalized. So, what you're talking about is the need for a whole set of policy safeguards. But I certainly think that people concerned about public health need to think about regulating the role of Generative AI, especially when it comes to young people, to ensure that they're not marketed to in the ways that, in fact, it is doing and will continue to do.

Kathryn, what about the argument that it's a parent's responsibility to protect their children and that government doesn't need to be involved in this space?

Kathryn - Well, as a parent, I have to say it is extremely challenging. We all do our best to try to protect our children from unhealthy influences, whether it's food or something that affects their mental health. That's a parent's obligation. That's what a parent spends a lot of time thinking about and trying to do. But this is an environment that is overwhelming. It is intrusive. It reaches into young people's lives in ways that make it virtually impossible for parents to intervene. These are powerful companies, and I'm including the tech companies. I'm including the retailers. I'm including the ad agencies as well as these global food and beverage companies. They're extremely powerful. As Jeff has been saying, they have engaged and continue to engage in enormous amounts of technological innovation and research to figure out precisely how to reach and engage our children. And it's too much for parents. And I've been saying this for years. I've been telling legislators this. I've been telling the companies this. It's not fair. It's a very unfair situation for parents.

That makes perfect sense. Well, Jeff, your Center produces some very helpful and impressive reports.
And an example of that is the work you've done on the vast surveillance of television viewers. Tell us more about that, if you would.

Jeff - Well, you know, you have to keep up with this, Kelly. The advocates in the United States and the academics, with some exceptions, have largely failed to address the contemporary business practices of the food and beverage companies. What's going on now is not a secret. I mean, the Generative AI stuff and the advanced data use, you know, is recent. But it is a continuum. And the fact is that we've been one of the few groups following it, because we care about our society, our democracy, our media system, et cetera. But so much more could be done here to track what the companies are doing, to identify the problematic practices, to think about counter strategies to try to bring change. So yes, we did this report on video streaming because, in fact, it's the way television has now changed. It's now part of the commercial surveillance advertising and marketing complex. Food and beverage companies are using the interactivity and the data collection of streaming television. And we're sounding the alarm, as we've been sounding now for too long. But hopefully your listeners will, in fact, start looking more closely at this digital environment, because if we don't intervene in the next few years, it'll be impossible to go back and protect young people.

So, when people watch television, they don't generally realize or appreciate the fact that information is being collected on them.

Jeff - The television is watching you now. The streaming companies are watching you now. The device that brings you streaming television is watching you now and is collecting all kinds of data. The streaming device can deliver personalized ads to you. They'll soon be selling you products in real time. And they're sharing that data with companies like Meta (Facebook) and your local retailers like Albertsons, Kroger, et cetera. It's one big, huge digital data marketing machine that has been created. And the industry has been successful in blocking legislation, except for the one law we were able to get through in 1998. And now under the Trump administration, they have free rein to do whatever they want. It's going to be an uphill battle. But I do think the companies are in a precarious position politically if we could get more people focused on what they're doing.

Alright, we'll come back to that. My guess is that very few people realize the kind of thing that you just talked about. That so much information is being collected on them while they're watching television. The fact that you and your center are out there making people more aware, I think, is likely to be very helpful.

Jeff - Well, I appreciate that, Kelly, but I have to say, and I don't want to denigrate our work, but, you know, I just follow the trades. There's so much evidence if you care about the media, and if you care about advertising and marketing, or if you care, let's say, about Coca-Cola or Pepsi or Mondelez. Pick one; you can't miss all this stuff. It's all there every day. And the problem is that there has not been the focus, and I blame the funders in part. There's not been the focus on this marketplace in its contemporary dimensions.

I'd like to ask you both about the legislative landscape and whether there are laws protecting people, especially children, from this marketing.
And Kathryn, both you and Jeff were heavily involved in advocacy for a landmark piece of legislation that Jeff referred to from 1998, the Children's Online Privacy Protection Act. What did this act involve? And now that we're some years in, how has it worked?

Kathryn - Well, I always say I was studying advertising in digital media before people even knew there was going to be advertising in digital media. Because we're really talking about the earliest days of the internet, when it was being commercialized. But there was a public perception, promoted by the government and the industry and a lot of other institutions and individuals, that this was going to be a whole new democratic system of technology. And that basically it would solve all of our problems in terms of access to information. In terms of education. It would open up worlds to young people. In many ways it has, but they didn't talk really that much about advertising. Jeff and I, working together at the Center for Media Education, were already tracking what was going on in that marketplace in the mid-1990s, when it was very, very new. At which point children were already a prime target. They were digital kids. They were considered highly lucrative. 'Cyber Tots' was one of the terms used by the industry. What we believed was that we needed to get some public debate and some legislation in place, some kinds of rules, to guide the development of this new commercialized media system. And so, we launched a campaign that ultimately resulted in the passage of the Children's Online Privacy Protection Act. Now, it only governs commercial online and digital media that target children under the age of 13, which was the most vulnerable demographic group of young people. We believe protections are really, really very important for teenagers. There's a lot of evidence for that now, much more research actually, that's showing their vulnerabilities. And it has required companies to take young people into account when developing their operations. It's had an impact internationally in a lot of other countries. It is just the barest minimum of what we need in terms of protections for young people. And we've worked with the Federal Trade Commission over the years to ensure that those rules were updated and strengthened so that they would apply to this evolving digital media system. But now, I believe, what we need is a more global advocacy strategy. And we are already working with advocates in other countries to develop a strategy to address the practices of this global industry. And there are some areas where we see some promising movement. The UK, for example, passed a law that bans online advertising of unhealthy foods. It has not yet taken effect, but now, after some delays, it will. And there are also other things going on for ultra-processed foods, for unhealthy foods and beverages.

So, Kathryn has partly answered this already, Jeff, but let me ask you. That act that we've talked about goes back a number of years now. What's being done more recently on the legislative front? And perhaps more important than that, what needs to be done?

Jeff - Well, I have to say, Kelly, that when Joe Biden came in and we had a public interest chair at the Federal Trade Commission, Lina Khan, I urged advocates in the United States who are concerned about unhealthy eating to approach the Federal Trade Commission and begin a campaign to see what we could do. Because this was going to be the most progressive Federal Trade Commission we've had in decades.
And groups failed to do so, for a variety of reasons. So that window, where we might have been able to get the Federal Trade Commission to do something, has ended. There are people in the United States Congress, most notably Ed Markey, who sponsored our children's privacy law 25 years ago, who are trying to get legislation passed. But I think we have to look outside of the United States, as Kathryn said. Beyond the law in the United Kingdom. In the European Union there are rules governing digital platforms, called the Digital Services Act. There are new European Union-wide policy safeguards on Generative AI. Brazil has something similar. There are design codes, like the UK design code for young people. What we need to do is to put together a package of strategies at the federal and perhaps even state level. And there's been some activity at the state level. You know, the industry has been opposed to that and has gone to court to fight any rules protecting young people online. But we can create a kind of cutting-edge set of practices that could then be implemented here in the United States as part of a campaign. There are models.

And how do the political parties break down on these issues?

Kathryn - I was going to say: they break down.

Jeff - The industry is so powerful still. You have bipartisan support for regulating social media when it comes to young people, because there have been so many incidents of suicide and stalking and other kinds of emotional and psychological harms to young people. You have a lot of Republicans who have joined with Democrats in Congress wanting to pass legislation. And there's some bipartisan support to expand the privacy rules and even to regulate online advertising for teens in our Congress. But it's been stymied, in part because the industry has such an effective lobbying operation. And I have to say that in the United States, the community of advocates and their supporters who would want to see such legislation are marginalized. They're underfunded. They're not organized. They don't have the research. It's a problem. Now, all these things can be addressed, and we should try to address them. But right now it's unlikely anything will pass in the next few months, certainly.

Kathryn - Can I just add something? Because I think what's important now, in this really difficult period, is to begin building a broader set of stakeholders in a coalition. And as I said, I think it does need to be global. But I also want to talk about the research front. There's been a lot of really important research on digital food marketing. On the marketing of unhealthy foods and beverages to young people, in a number of different countries. In the UK, in Australia, and other places around the world. And these scholars have been working together, and a lot of them are working with scholars here in the US, where we've seen an increase in that kind of research. And then advocates need to work together as well to build a movement. It could be a resurgence that begins outside of our country but comes back in at the appropriate time, when we're able to garner the kind of support from our policymakers that we need to make something happen.

That makes good sense, especially a global approach when it's hard to get things done here. Jeff, you alluded to the fact that you've done work specifically on ultra-processed foods. Tell us what you're up to on that front.
Jeff - As part of our industry analysis, we have been tracking what all the leading food and beverage companies are doing in terms of what they would call their digital transformation. I mean, Coca-Cola and Pepsi and Mondelez and Hershey and all the leading transnational processed food companies are really now at the end of an intense period of restructuring to take advantage of the capabilities provided by digital data and analytics, for further data collection, machine learning, and Generative AI. And they are much more powerful, much more effective, much more adept. In addition, the industry structure has changed in the last few years, also because of digital data. New collaborations have been created between the platforms, let's say like Facebook and YouTube, the food advertisers, their marketing agencies, which are now also data companies, but most notably the retailers and the grocery stores and the supermarkets. They're all working together to share data, to collaborate on marketing and advertising strategies. So as part of our work we've kept abreast of all these things and we're tracking them. And now we are sharing them with a group of advocates outside of the United States, supported by Bloomberg Philanthropies, to support their efforts. And they've already made tremendous progress in a lot of areas around healthy eating in countries like Mexico and Argentina and Brazil, et cetera.

And I'm assuming all these technological advances and the marketing muscle the companies have are not being used to market broccoli and carrots and Brussels sprouts. Is that right?

Jeff - The large companies are aware of changing attitudes and the need for healthy foods. But one quick takeaway I have is this: because the large ultra-processed food companies understand that there are political pressures promoting healthier eating in North America and in Europe, they are focused on expanding their unhealthy eating portfolio in new regions, specifically Asia Pacific, Africa, and Latin America. And China is a big market for all this. This is why it has to be a global approach here, Kelly. In the first place, these are transnational corporations. They are creating their marketing strategies at the global level and then transmitting them down to be tailored at the national or regional level. They're coming up with a single set of strategies that will affect every country and every child in those countries. We need to keep track of that and figure out ways to go after it. And there are global tools we might be able to use to try to protect young people. Because if you can protect a young person in China, you might also be able to protect them here in North Carolina.

This all sounds potentially pretty scary, but is there reason to be optimistic? Let's see if we can end on a positive note. What do you think? Do you have reason to be optimistic?

Kathryn - I've always been an optimist. I've always tried to be an optimist. And again, what I would say is: if we look at this globally, and if we identify partners and allies all around the world who are doing good work, and there are many, many, many of them, and if we work together and continue to develop strategies for holding this powerful industry and these powerful industries accountable, I think we will have success. And I think we should also shine the spotlight on areas where important work has already taken place. Where laws have been enacted.
Where companies have been made to change their practices. We should highlight those and build on those successes from around the world.

Thanks. Jeff, what about you? Is there reason to be optimistic?

Jeff - Well, I don't think we can stop trying, although we're at a particularly difficult moment here in our country and worldwide. Because unless we try to intervene, the largest corporations, who are working and will work closely with our government and other governments, will be able to impact our lives in so many ways through their ability to collect data. And to use that data to target us and to change our behaviors. They can change our health behaviors. They can try to change our political behaviors. What the ultra-processed food companies are now able to do, every company is able to do, and governments are able to do. We have to expose what they're doing, and we have to challenge what they're doing, so we can try to leave our kids a better world.

It makes sense. Do you see that the general public is more aware of these issues, and is there reason to be optimistic on that front? That awareness might lead to pressure on politicians to change things?

Jeff - You know, under the Biden administration, the Federal Trade Commission identified how digital advertising and marketing works, and it made that understanding popular among many, many more people than previously. It's called commercial surveillance advertising: the idea that data collected about you is used to advertise and market to you. And today there are thousands of people, and certainly many more advocacy groups, concerned about commercial surveillance advertising than there were prior to 2020. And all over the world, as Kathryn said, in countries like Brazil and South Africa and Mexico, advocates are calling attention to all these techniques and practices. More and more people are becoming aware. And then, you know, we need obviously leaders like you, Kelly, who can reach out to other scholars and get us together working in some kind of larger collaborative to ensure that these techniques and capabilities are exposed to the public and we hold them accountable.

Bios

Kathryn Montgomery, PhD, is Research Director and Senior Strategist for the Center for Digital Democracy (CDD). In the early 90s, she and Jeff Chester co-founded the Center for Media Education (CME), where she served as President until 2003, and which was the predecessor organization to CDD. CME spearheaded the national campaign that led to passage of the 1998 Children's Online Privacy Protection Act (COPPA), the first federal legislation to protect children's privacy on the Internet. From 2003 until 2018, Dr. Montgomery was Professor of Communication at American University in Washington, D.C., where she founded and directed the 3-year interdisciplinary PhD program in Communication. She served as a consultant to CDD for a number of years and joined the full-time staff in July 2018. Throughout her career, Dr. Montgomery has written and published extensively about the role of media in society, addressing a variety of topics, including the politics of entertainment television; youth engagement with digital media; and contemporary advertising and marketing practices. Montgomery's research, writing, and testimony have helped frame the national public policy debate on a range of critical media issues.
In addition to numerous journal articles, chapters, and reports, she is the author of two books: Target: Prime Time – Advocacy Groups and the Struggle over Entertainment Television (Oxford University Press, 1989); and Generation Digital: Politics, Commerce, and Childhood in the Age of the Internet (MIT Press, 2007). Montgomery's current research focuses on the major technology, economic, and policy trends shaping the future of digital media in the Big Data era. She earned her doctorate in Film and Television from the University of California, Los Angeles.

Jeff Chester is Executive Director of the Center for Digital Democracy (CDD), a Washington, DC non-profit organization. CDD is one of the leading U.S. NGOs advocating for citizens, consumers, and other stakeholders on digital privacy and consumer protections online. Founded in 1991, CDD (then known as the Center for Media Education) led the campaign for the enactment of the Children's Online Privacy Protection Act (COPPA, 1998). During the 1990s it also played a prominent role in such issues as open access/network neutrality, diversity of media ownership, public interest policies for children and television, as well as the development of the FCC's “E-Rate” funding to ensure that schools and libraries had the resources to offer Internet services. Since 2003, CDD has been spearheading initiatives designed to ensure that digital media in the broadband era fulfill their democratic potential. A former investigative reporter, filmmaker, and Jungian-oriented psychotherapist, Jeff Chester received his M.S.W. in Community Mental Health from U.C. Berkeley. He is the author of Digital Destiny: New Media and the Future of Democracy (The New Press, 2007), as well as articles in both the scholarly and popular press. During the 1980s, Jeff co-directed the campaign that led to the Congressional creation of the Independent Television Service (ITVS) for public TV. He also co-founded the National Campaign for Freedom of Expression, the artist advocacy group that supported federal funding for artists. In 1996, Newsweek magazine named Jeff Chester one of the Internet's fifty most influential people. He was named a Stern Foundation “Public Interest Pioneer” in 2001, and a “Domestic Privacy Champion” by the Electronic Privacy Information Center in 2011. CDD is a member of the Transatlantic Consumer Dialogue (TACD). Until January 2019, Jeff was the U.S. co-chair of TACD's Information Society (Infosoc) group, helping direct the organization's Transatlantic work on data protection, privacy, and digital rights.

Parallax Views w/ J.G. Michael
An Inside View of the Luna Hearing on the JFK Assassination w/ Jim DiEugenio

Parallax Views w/ J.G. Michael

Play Episode Listen Later Apr 10, 2025 63:17


In this episode of Parallax Views, returning guest and veteran JFK assassination researcher Jim DiEugenio joins us to unpack the recent developments in the long-running battle over JFK assassination records. We dive into the legacy of President Trump's January 2025 executive order concerning the JFK files and discuss the implications of the recent congressional hearing led by Rep. Anna Paulina Luna — at which Jim himself testified! Jim, who was in attendance at the Luna hearing, shares his firsthand impressions of the proceedings and what they may signal for the future of the JFK Records Act. But this conversation goes beyond the question of conspiracy: we explore the broader stakes of transparency, institutional trust, and the historical significance of declassified documents — including revelations like Operation Northwoods, which had nothing to do with the assassination itself but emerged from the Assassination Records Review Board process and reshaped how we understand Cold War covert operations. Whether or not you believe there was a conspiracy to kill JFK, the questions raised by these files and the government's resistance to disclosure remain urgent. Tune in for a timely conversation on truth, secrecy, and why history still matters — no matter how long ago it happened. And yes, we'll address conservative pundit Ben Shapiro's angry dismissal of anyone interested in the assassination because it happened "a long time ago".

Additional topics covered include:
- Oliver Stone, the attacks on his movie JFK months before the film was even released, and the role of the Vietnam War in shaping Oliver Stone as a filmmaker and a human being.
- CIA spymaster James Jesus Angleton's chilling quote about the CIA: “If you were in a room with them, you were in a room full of people that you had to believe would deservedly end up in hell. I guess I will see them there soon.”
- The release of the unredacted Schlesinger Memo and what it tells us about covert operations during the Cold War. Learn what it is and how it enhances our understanding of a moment in history and politics.
- The surprise appearance of John Davisson, a senior counsel at the Electronic Privacy Information Center in Washington, who raised privacy concerns related to information in the recently released documents.
- Jim's thoughts on Democrats' lack of interest in the hearing and why he thinks Democrats are passing up a big opportunity with it.
- And much more!

Federal Drive with Tom Temin
Inside one of the lawsuits against Trump administration actions

Federal Drive with Tom Temin

Play Episode Listen Later Mar 21, 2025 10:32


Many groups have launched lawsuits against the Trump administration in recent weeks. Among them, the Electronic Privacy Information Center and others have sued over DOGE access to the Treasury Department payment apparatus. To explain why it brought suit, EPIC senior counsel John Davisson joined the show.

POLITICO Dispatch
This expert says DOGE is a privacy nightmare. Here's why

POLITICO Dispatch

Play Episode Listen Later Feb 5, 2025 17:23


Elon Musk is moving fast to dismantle and downsize federal agencies. But his Department of Government Efficiency is drawing scrutiny from privacy and cybersecurity advocates, not to mention Democrats, who fear its employees may be mishandling government data. On POLITICO Tech, Alan Butler, president and executive director of the Electronic Privacy Information Center, breaks down his concerns about DOGE and whether it could be breaking privacy laws.

Business of Tech
AI Adoption, Outdated Tech, Office Worker Outlook, and AI Legislation in Tech Industry

Business of Tech

Play Episode Listen Later Jun 26, 2024 11:05


In this podcast episode, Dave Sobel discusses three key topics related to AI adoption and technology trends in the business world. The episode begins by highlighting the increasing embrace of AI by small and medium-sized businesses (SMBs), with 94% of SMBs planning to incorporate AI into their operations in the current year. Additionally, NTT Data's Infrastructure Lifecycle Management Report reveals that 80% of global organizations view outdated technology as a hindrance to innovation, emphasizing the importance of consistent technology infrastructure management.

The episode delves into the challenges faced by organizations in maintaining their technology infrastructure, with a focus on project management, workflow automation, system integrations, and talent acquisition. Despite these challenges, a study by Jitterbit shows that office workers have a positive outlook on working with AI, viewing it as an opportunity for new skills and professional growth. The report indicates that 85% of office workers expect AI to enhance their roles and professional skills, dispelling fears of AI replacing human efforts.

Furthermore, the episode discusses the barriers to wider adoption of AI tools by U.S. companies, such as uncertainty about their value and a lack of internal expertise. It emphasizes the need for CEOs to take ownership of AI implementation and understand its potential contributions to revenue and productivity. The podcast also highlights recent developments by tech giants like HPE, Apple, and Microsoft in the AI space, including the launch of new AI channel programs, core machine learning models, and vision foundation models.

Lastly, the episode touches upon the Electronic Privacy Information Center's AI Legislation Scorecard, which evaluates the strength of state and federal AI legislation in promoting responsible AI use. The scorecard emphasizes the importance of strong privacy laws and enforcement mechanisms to mitigate AI risks. It suggests that organizations can use the scorecard to assess their AI practices and policies, aligning them with responsible AI standards. Overall, the episode underscores the significance of AI adoption, technology infrastructure management, and regulatory compliance in shaping the future of business technology.

Three things to know today:
00:00 SMBs Embrace AI, NTT DATA on Outdated Tech, and Office Workers' Positive AI Outlook
05:08 From AI Channel Programs to Advanced Vision Models: Key Developments by HPE, Apple, and Microsoft
07:30 EPIC's AI Legislation Scorecard: A Tool for Evaluating Responsible AI Standards

Marketplace Tech
What a privacy organization and Big Tech’s lead lobbying group think about internet regulation

Marketplace Tech

Play Episode Listen Later Mar 27, 2024 15:54


When you look at the lawsuits aimed at blocking attempts to regulate tech, it’s usually not companies like Meta or Snap doing the suing. Oftentimes, it’s a group called NetChoice, which has emerged as Big Tech’s top lobbying force from Capitol Hill to the courts. Today, a conversation with NetChoice General Counsel Carl Szabo and Megan Iorio, senior counsel at the Electronic Privacy Information Center, a nonprofit focused on privacy. They occasionally agree, but very often they do not. Case in point: the California Age-Appropriate Design Code Act, which requires websites that children are likely to visit to provide privacy protections by default. It was set to take effect in July, but so far, Szabo’s group has successfully blocked it in court. Marketplace’s Lily Jamali sat down with Szabo and Iorio and asked about how their groups interact.

Marketplace All-in-One
What a privacy organization and Big Tech’s lead lobbying group think about internet regulation

Marketplace All-in-One

Play Episode Listen Later Mar 27, 2024 15:54


When you look at the lawsuits aimed at blocking attempts to regulate tech, it’s usually not companies like Meta or Snap doing the suing. Oftentimes, it’s a group called NetChoice, which has emerged as Big Tech’s top lobbying force from Capitol Hill to the courts. Today, a conversation with NetChoice General Counsel Carl Szabo and Megan Iorio, senior counsel at the Electronic Privacy Information Center, a nonprofit focused on privacy. They occasionally agree, but very often they do not. Case in point: the California Age-Appropriate Design Code Act, which requires websites that children are likely to visit to provide privacy protections by default. It was set to take effect in July, but so far, Szabo’s group has successfully blocked it in court. Marketplace’s Lily Jamali sat down with Szabo and Iorio and asked about how their groups interact.

Really? no, Really?
Understanding TikTok – Should it Stay, or Should it Go?

Really? no, Really?

Play Episode Listen Later Mar 19, 2024 33:57 Transcription Available


This episode is all about trying to figure out if TikTok - the video hosting platform owned by the Chinese company ByteDance - should be banned or not. Over 70% of young Americans use TikTok, which critics claim is spying on, manipulating, and spreading misinformation to its users. But is TikTok any different than Meta, Google, or X? Our guest's answer may surprise you… Really, no really!

Bruce Schneier is a cryptographer, computer security professional, and privacy specialist who has been writing about security issues since 2004. He is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation and AccessNow; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.

IN THIS EPISODE:
Surveillance as a business model is 100% legal.
Breaking down the types of information companies are collecting, and what they're doing with it.
Why does China want to know about our affinity for puppy videos?
Social media's ability to manipulate is greater than you might think.
Russia and China's preferred platform for transmitting misinformation is…? (Spoiler Alert: It's not TikTok)
Facebook has data on you… even if you aren't on Facebook-RnR!
Data privacy is a national security issue, but not treated like one.
The European Union leads the world in protecting data privacy.
The unsettling future of A.I. and personal data surveillance.
Cougar hemorrhoids?
Googleheim: What you are missing… if you're NOT on TikTok!

***
FOLLOW BRUCE:
Bruce's Blog: Schneier.com
X: @schneierblog
Facebook: Bruce Schneier

Bioethics in the Margins
Deliberative democracy, social justice and the Black Opticon: a discussion with Dr. Anita Allen

Bioethics in the Margins

Play Episode Listen Later Mar 1, 2024 55:21


Kirk and Amelia had the pleasure of speaking with Dr. Anita L. Allen, the Henry R. Silverman Professor of Law and Professor of Philosophy at the University of Pennsylvania. In this episode, they discuss Dr. Allen's experiences working on President Obama's Presidential Commission for the Study of Bioethical Issues where she engaged in deliberative democracy approaches to explore challenges with advances in biomedicine, technology and synthetic biology. A highlight of her time there included a report titled “Ethically Impossible” that documented and acknowledged gross human research subject abuses that occurred in Guatemala from 1946-1948, overseen by the US Public Health Service. (https://bioethicsarchive.georgetown.edu/pcsbi/sites/default/files/Ethically%20Impossible%20(with%20linked%20historical%20documents)%202.7.13.pdf). Other aspects of Dr. Allen's prolific career that they discuss include her work on the concept of privacy, reproductive justice and racial justice concerns in what Dr. Allen has termed “The Black Opticon” (https://www.yalelawjournal.org/forum/dismantling-the-black-opticon). Dr. Allen is an internationally renowned philosopher with over 120 articles and chapters published at the intersection of bioethics, privacy and data protection law, women's rights, and diversity in higher education. She is a graduate of Harvard Law, currently serving on the Board of the National Constitution Center, the Future of Privacy Forum and the Electronic Privacy Information Center.

PGP* (Pretty Good Policy) for Crypto Podcast
Zcon4 Special Edition featuring Jay Stanley (ACLU), Kurt Opsahl (Filecoin), and Jake Wiener (EPIC)

PGP* (Pretty Good Policy) for Crypto Podcast

Play Episode Listen Later Aug 9, 2023 40:23


Join Paul Brigner, Head of US Policy and Strategic Advocacy for the Electric Coin Co., as he dives deep into the world of cryptocurrency policy and regulation. Recorded live from ZCon4 in Barcelona, Spain, this episode features a panel of esteemed guests: Kurt Opsahl, Associate General Counsel for Cyber Security and Civil Liberties Policy at the Filecoin Foundation; Jay Stanley, Senior Policy Analyst at the ACLU; and Jake Wiener, Counsel at the Electronic Privacy Information Center. Together, they discuss the importance of privacy in the digital age, the potential of zero-knowledge proof technology, and the challenges and opportunities of decentralizing the internet. Plus, hear insights from the Zcash community's annual meeting and a surprise visit from a special guest!

EPISODE LINKS
ACLU Website: https://www.aclu.org/
Jay Stanley's LinkedIn: https://www.linkedin.com/in/jay-c-stanley/
Jay Stanley's Twitter: https://twitter.com/JayCStanley
Electronic Privacy Information Center (EPIC) Website: https://epic.org/
Jake Wiener's LinkedIn: https://www.linkedin.com/in/jacob-wiener-70a4baa8/
Jake Wiener's Twitter: https://twitter.com/RealJakeWiener
Filecoin Foundation Website: https://fil.org/
Kurt Opsahl's LinkedIn: https://www.linkedin.com/in/kurt-opsahl-0858654/
Kurt Opsahl's Twitter: https://twitter.com/kurtopsahl

TIMESTAMPS
00:00 Welcome
00:42 Introduction
01:49 Kurt Opsahl on Privacy and ZCon4
02:46 Jay Stanley on Cryptocurrency and Privacy
05:27 Jake Wiener on Policing, Digital Identity, and Privacy
07:43 The Promise of ZK Proof Technology
09:51 Jay's Insights on Cryptography and Digital Identity
12:41 The Challenge of Achieving True Privacy
14:59 Kurt's Transition to Filecoin
16:55 The Debate on Decentralization
19:56 Further Decentralizing Zcash
22:21 The Future of Zcash and User Experience and a Surprise Guest
27:49 Recent Developments in U.S. Policy
36:32 Concluding Remarks

CFR On the Record
Higher Education Webinar: Implications of Artificial Intelligence in Higher Education

CFR On the Record

Play Episode Listen Later Jun 27, 2023


Pablo Molina, associate vice president of information technology and chief information security officer at Drexel University and adjunct professor at Georgetown University, leads the conversation on the implications of artificial intelligence in higher education.

FASKIANOS: Welcome to CFR's Higher Education Webinar. I'm Irina Faskianos, vice president of the National Program and Outreach here at CFR. Thank you for joining us. Today's discussion is on the record, and the video and transcript will be available on our website, CFR.org/Academic, if you would like to share it with your colleagues. As always, CFR takes no institutional positions on matters of policy. We are delighted to have Pablo Molina with us to discuss implications of artificial intelligence in higher education. Dr. Molina is chief information security officer and associate vice president at Drexel University. He is also an adjunct professor at Georgetown University. Dr. Molina is the founder and executive director of the International Applied Ethics in Technology Association, which aims to raise awareness of ethical issues in technology. He regularly comments on stories about privacy, the ethics of tech companies, and laws related to technology and information management. And he's received numerous awards relating to technology and serves on the board of the Electronic Privacy Information Center and the Center for AI and Digital Policy. So Dr. P, welcome. Thank you very much for being with us today. Obviously, AI is on the top of everyone's mind, with ChatGPT coming out and being in the news, and so many other stories about what AI is going to—how it's going to change the world. So I thought you could focus in specifically on how artificial intelligence will change and is influencing higher education, and what you're seeing, the trends in your community.

MOLINA: Irina, thank you very much for the opportunity, to the Council on Foreign Relations, to be here and express my views. Thank you, everybody, for taking time out of your busy schedules to listen to this. And hopefully, I'll have the opportunity to learn much from your questions and answer some of them to the best of my ability. Well, since I'm a professor too, I like to start by giving you homework. And the homework is this: I do not know how much people know about artificial intelligence. In my opinion, anybody who has ever used ChatGPT considers herself or himself an expert. To some extent, you are, because you have used one of the first publicly available artificial intelligence tools out there and you know more than those who haven't. So if you have used ChatGPT, or Google Bard, or other services, you already have a leg up to understand at least one aspect of artificial intelligence, known as generative artificial intelligence. Now, if you want to learn more about this, there's a big textbook, about this big. I'm not endorsing it. All I'm saying, for those people who are very curious, is that there are two great academics, Russell and Norvig. They're in their fourth edition of a wonderful book that covers every aspect of—technical aspect of artificial intelligence, called Artificial Intelligence: A Modern Approach. And if you're really interested in how artificial intelligence can impact higher education, I recommend a report by the U.S. Department of Education that was released earlier this year in Washington, DC by the Office of Educational Technology. It's called Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations.
So if you do all these things and you read all these things, you will hopefully transition from being whatever expert you were before—a pandemic and Ukrainian war expert—to an artificial intelligence expert. So how do I think all these wonderful things in artificial intelligence are going to affect us? Well, as human beings, we tend to overestimate the impact of technology in the short run and really underestimate the impact of technology in the long run. And I believe this is also the case with artificial intelligence. We're in a moment where there's a lot of hype about artificial intelligence. It will solve every problem under the sky. But it will also create the most catastrophic future and dystopia that we can imagine. And possibly neither one of these two is true, particularly if we regulate and use these technologies and develop them following some standard guidelines that we have followed in the past, for better or worse.

So how is artificial intelligence affecting higher education? Well, number one, there is a great lack of regulation and legislation. You know, for example, OpenAI released ChatGPT. People started trying it. And all of a sudden there were people like here, where I'm speaking to you from, in Italy. I'm in Rome on vacation right now. And the Italian data protection agency said: Listen, we're concerned about the privacy of this tool for citizens of Italy. So the company agreed to establish some rules, some guidelines and guardrails on the tool. And then it reopened to the Italian public, after being closed for a while. The same thing happened with the Canadian data protection authorities. In the United States, well, not much has happened, except that one of the organizations on whose board I serve, the Center for Artificial Intelligence and Digital Policy, earlier this year, in March of 2023, filed a sixty-four-page complaint with the Federal Trade Commission. In it, we're basically asking the Federal Trade Commission: You do have the authority to investigate how these tools can affect U.S. consumers. Please do so, because this is your purview, and this is your responsibility. And we're still waiting on the agency to declare what the next steps are going to be.

If you look at other bodies of legislation or regulation on artificial intelligence that can help us guide artificial intelligence, well, you can certainly pay attention to the U.S. Congress. And what is the U.S. Congress doing? Yeah, pretty much that, not much, to be honest. They listen to Sam Altman, the CEO of OpenAI, the company behind ChatGPT, who recently testified before Congress, urging Congress to regulate artificial intelligence. Which is quite clever on his part. It was on May 17 that he testified that we could be facing catastrophic damage ahead if artificial intelligence technology is not regulated in time. He also sounded the alarm about counterfeit humans, meaning that these machines could replace what we think a person is, at least virtually. And he warned about the end of factual evidence, because with artificial intelligence anything can be fabricated. Not only that, but he pointed out that artificial intelligence could start wars and destroy democracy. Certainly very, very grim predictions. And before this, many of the companies were self-regulating for artificial intelligence. If you look at Google, Microsoft, Facebook, now Meta: all of them have their own artificial intelligence self-guiding principles. Most of them were very aspirational.
Those could help us in higher education because, at the very least, they can help us create our own policies and guidelines for our community members—faculty, staff, students, researchers, administrators, partners, vendors, alumni—anybody who happens to interact with our institutions of higher learning. Now, what else is happening out there? Well, we have tons, tons of laws that have to do with technology and regulations. Things like the Gramm-Leach-Bliley Act, the Securities and Exchange Commission rules, Sarbanes-Oxley. Federal regulations like FISMA, the Cybersecurity Maturity Model Certification, and Payment Card Industry standards. There is the Computer Fraud and Abuse Act. There is the Budapest Convention. Cybersecurity insurance providers will tell us what to do and what not to do about technology. We have state laws and many privacy laws. But, to be honest, very few artificial intelligence laws. And it's groundbreaking that in Europe the European parliamentarians have agreed to discuss the Artificial Intelligence Act, which could be the first one really to be passed at this level in the world, after some efforts by China and other countries. And, if adopted, it could be a landmark change in the adoption of artificial intelligence.

In the United States, even though Congress is not doing much, the White House is trying to position itself in the realm of artificial intelligence. So there's an executive order from February of 2023—that many of us in higher education read because, once again, we're trying to find inspiration for our own rules and regulations—that tells federal agencies that they have to root out bias in the design and use of new technologies, including artificial intelligence, because they have to protect the public from algorithmic discrimination. And we all believe this. In higher education, we believe in being fair and transparent and accountable. I would be surprised if any of us were not concerned about making sure that our technology use, our artificial intelligence use, follows these particular principles as proposed by the Organization for Economic Cooperation and Development and many other bodies of ethics and expertise.

Now, the White House also announced new research and development centers, with some new national artificial intelligence research institutes. Many of us will collaborate with those in our research projects. A call for public assessments of existing generative artificial intelligence systems, like ChatGPT. And it is also enacting policies to ensure that the U.S. government, the executive branch, is leading by example when mitigating artificial intelligence risks and harnessing artificial intelligence opportunities. Because, in spite of all the concerns about this, it's all about the opportunities that we hope to achieve with artificial intelligence.

And when we look at how specifically we can benefit from artificial intelligence in higher education, well, certainly we can start with new and modified academic offerings. I would be surprised if most of us will not have degrees—certainly, we already have graduate degrees—on artificial intelligence, and machine learning, and many others. But I would be surprised if we don't even add some bachelor's degrees in this field, or if we don't modify significantly some of our existing academic offerings to incorporate artificial intelligence in various specialties, our courses, or components of the courses that we teach our students.
We're looking at amazing research opportunities, things that we'll be able to do with artificial intelligence that we couldn't even think about before, that are going to expand our ability to generate new knowledge to contribute to society, with federal funding, with private funding. We're looking at improved knowledge management, something that librarians are always very concerned about: the preservation and distribution of knowledge. The idea would be that artificial intelligence will help us better find the things that we're looking for, the things that we need in order to conduct our academic work. We're certainly looking at new and modified pedagogical approaches, new ways of learning and teaching, including the promise of adaptive learning, something that really can tell students: Hey, you're not getting this particular concept. Why don't you go back and study it in a different way, with a different virtual avatar, using simulations or virtual assistants? In almost every discipline and academic endeavor. We're also looking at efficiencies, because we're concerned about offering, you know, good value for the money when it comes to education. So we're hoping to achieve real efficiencies: better ways to run admissions, better ways to guide students through their academic careers, better ways to coach them into professional opportunities. And much of this will be possible thanks to artificial intelligence. And also, let's not forget this, but we still have many underserved students, and they're underserved because they either cannot afford education or maybe they have physical or cognitive disabilities. And artificial intelligence can really help us reach those students and offer them new opportunities to advance their education and fulfill their academic and professional goals. And I think this is a good introduction. And I'd love to talk about all the things that can go wrong. I'd love to talk about all the things that we should be doing so that things don't go as wrong as predicted. But I think this is a good way to set the stage for the discussion. FASKIANOS: Fantastic. Thank you so much. So we're going to go to all of you now for your questions and comments, and to share best practices. (Gives queuing instructions.) All right. So I'm going first to a written question from Gabriel Doncel, adjunct faculty at the University of Delaware: How do we incentivize students to approach generative AI tools like ChatGPT for text in ways that emphasize critical thinking and analysis? MOLINA: I always like to start with a difficult question, so thank you very much, Gabriel Doncel, for that particular question. And, as you know, there are several approaches to the adoption of tools like ChatGPT on campus by students. One of them is to say: No, over my dead body. If you use ChatGPT, you're cheating. Even if you cite ChatGPT, we can consider you to be cheating. And not only that, but some institutions have invested in tools that can detect whether or not something was written with ChatGPT or similar tools. There are other faculty members and other academic institutions that are realizing these tools will be available when these students join the workforce. So our job is to help them do the best that they can by using these particular tools, and to make sure they avoid some of the mishaps that have already happened. There are a number of lawyers who have used ChatGPT to file legal briefs.
And when the judges received those briefs, and read through them, and looked at the citations, they realized that some of the citations were completely made up and were not real cases. Hence, the lawyers faced professional disciplinary action because they used the tool without the professional review that is required. So hopefully we're going to educate our students, and we're going to set policy and guideline boundaries for them to use these, as well as, sometimes, the necessary technical controls for those students who may not be that ethically inclined to follow our guidelines and policies. But I think that to hide our heads in the sand and pretend that these tools are not out there for students to use would be a disservice to our institutions, to our students, and to the mission we have of training the next generation of knowledge workers. FASKIANOS: Thank you. I'm going to go next to Meena Bose, who has a raised hand. Meena, if you can unmute yourself and identify yourself. Q: Thank you, Irina. Thank you for this very important talk. And my question is a little—(laughs)—it's formative, but really—I have been thinking about what you were saying about the role of AI in academic life, particularly for undergraduates: for admissions, advisement, guidance on curriculum. And I don't want to have my head in the sand about this, as you just said—(laughs)—but it seems to me that any kind of meaningful interaction with students, particularly students who have not had any exposure to college before, depends upon multiple rounds of feedback with faculty members and the development of mentors, to excel in college and to consider opportunities after. So I'm struggling a little bit to see how AI can be instructive for that part of college life, beyond providing information, I guess. But I guess the web does that already. So I welcome your thoughts. Thank you. FASKIANOS: And Meena's at Hofstra University. MOLINA: Thank you. You know, it's a great question. And the idea that everybody is proposing right here is that artificial intelligence companies are not trying (at least at first; we'll see in the future, because, you know, it depends on how it's regulated), or so they claim, to replace doctors, or architects, or professors, or mentors, or administrators. They're trying to help precisely those people in those professions, and the people they serve, gain access to more information. And you're right, in a sense, that that information is already on the web. But we've always had a problem finding that information reliably on the web. And you may remember that when Google came along, I mean, it swept through every other search engine out there, AltaVista, Yahoo, and many others, because, you know, it had a very good search algorithm. And now we're going to the next level. The next level is where you ask ChatGPT in natural human language. You're not trying to come up with the three keywords that say, OK, is the economics class required? No, no, you're telling ChatGPT: Hey, listen, I'm in the master's in business administration at Drexel University and I'm trying to take more economics classes. What recommendations do you have for me? And this is where you can get a preliminary answer, with a caveat there, as most of these generative AI engines already include, that tells you: We're not here to replace the experts. Make sure you discuss your questions with the experts. We will not give you medical advice. We will not give you educational advice.
We're just here, to some extent, for guiding purposes and, even now, for experimental and entertainment purposes. So I think you are absolutely right that we have to be very judicious about how we use these tools to support the students. Now, that said, I had the privilege of working for public universities in the state of Connecticut when I was the CIO. I also had the opportunity early in my career to attend a public university in Europe, in Spain, where we were hundreds of students in a class. We couldn't get any attention from the faculty. There were no mentors, there were no counselors, or anybody else. Is it better to have nobody to help you, or is it better to have at least some technology guidance that can help you find the information that otherwise is spread throughout many different systems that are like ivory towers—admissions on one side, economics on another, academic advising on another, and everything else? So thank you for a wonderful question and reflection. FASKIANOS: I'm going to take the next written question, from Dr. Russell Thomas, a senior lecturer in the Department of International Relations and Diplomatic Studies at Cavendish University in Uganda: What are the skills and competencies that higher education students and faculty need to develop to think in an AI-driven world? MOLINA: So we could argue here that something very similar has happened already with many information and communication technologies. At first, faculty members did not want to use email, or the web, or many other tools, because they were too busy with their disciplines. And rightly so. They were brilliant economists, or philosophers, or biologists. They didn't have enough time to learn all these new technologies to interact with the students. But eventually they did learn, because they realized that it was the only way to meet the students where they were and to communicate with them in efficient ways. Now, I have to be honest: when it comes to the use of technology—and we'll unpack the numbers—it was part of my doctoral dissertation, where I expanded the technology adoption models that tell you about early adopters, and mainstream adopters, and late adopters, and laggards. But I uncovered a new category at some of the institutions where I worked, called the over-my-dead-body adopters. And these were some of the faculty members who would say: I will never switch word processors. I will never use this technology. It's only forty years until I retire, probably eighty more until I die. I don't have to do this. And, to be honest, we have a responsibility to understand that those artificial intelligence tools are out there, and to guide the students as to what is the acceptable use of those technologies within the disciplines and the courses that we teach them in. Because they will find those tools available in a very competitive labor market, and they can derive some benefit from them. But also, we don't want to shortchange their educational attainment just because they go behind our backs to copy and paste from ChatGPT, learning nothing. Going back to the question by Gabriel Doncel: not learning to exercise critical thinking, using citations and material that are unverified, borrowed from the internet without any authority, without any attention to the different points of view.
I mean, if you've used ChatGPT for a while—and I have personally, even to prepare some basic thank-you speeches, which are all very formal; even to contest a traffic ticket in Washington, DC, when I was speeding but didn't want to pay the ticket anyway; even just for research purposes—you will realize that most of the writing from ChatGPT has a very, very common style. Which is: oh, on the one hand people say this, on the other hand people say that. Well, critical thinking will tell you: sure, there are two different opinions, but this is what I think myself, and this is why I think so. And these are some of the skills, the critical thinking skills, that we must continue to teach the students, and not, you know, put blinders on them and say: oh, continue focusing only on the textbook and the website. No, no. Look at the other tools, but use them judiciously. FASKIANOS: Thank you. I'm going to go next to Clemente Abrokwaa. Raised hand, if you can identify yourself, please. Q: Hi. Thanks so much for your talk. I'm from Penn State University. And this is a very important topic, I think. And some of the earlier speakers have already asked the questions I was going to ask. (Laughs.) But one thing that I would like to say is that, as you said, we cannot bury our heads in the sand. No matter what we think, the technology is already here. So we cannot avoid it. My question, though, is what do you think about the use of artificial intelligence by, say, for example, graduate students to write dissertations? You did mention the lawyers that used it to write their briefs, and they were caught. But in dissertations and also in class—for example, you have about forty students. You give a written assignment. When you start grading, you have grading fatigue. And so at some point you lose interest in actually checking. And so I'm kind of concerned about how it will affect the students' desire to actually go and research without resorting to the use of AI. MOLINA: Well, Clemente, fellow colleague from the state of Pennsylvania, thank you for that, once again, both a question and a reflection here. Listen, many of us wrote our doctoral dissertations—mine at Georgetown. At one point in time, I was so tired of writing about the same topics, following the wonderful advice, but also the whims, of my dissertation committee, that I was this close to outsourcing my thesis to China. I didn't, but I thought about it. And now graduate students are thinking: OK, why am I going through the difficulties of writing this when ChatGPT can do it for me and the deadline is tomorrow? Well, this is what will distinguish the good students and the good professionals from the other ones. And the interesting part is, as you know, when we teach graduate students we're teaching them critical thinking skills, but we're also teaching them how to express themselves, you know, either orally or in writing. And writing effectively is fundamental in the professions, but also absolutely critical in academic settings. And anybody who's just copying and pasting from ChatGPT into these documents cannot do that level of writing. But you're absolutely right. Let's say that we have an adjunct faculty member who's teaching a hundred students. Will that person go through every single essay to find out whether students were cheating with ChatGPT? Probably not.
And this is why there are also enterprising people who are using artificial intelligence to find out and tell you whether a paper was written using artificial intelligence. So it's a little bit of an escalating fight between different tools, and a business opportunity for all of them. And we've done this before. We've used antiplagiarism tools in the past because we knew that students were copying and pasting from Google Scholar and many other sources. And now oftentimes we run the antiplagiarism tools ourselves. Or we tell the students: you run it yourself and you give it to me, and make sure you are not accidentally failing to cite things that could end up jeopardizing your ability to get a graduate degree because your work was not up to snuff with the requirements of our stringent academic programs. So I would argue that these antiplagiarism tools that we're using will, more often than not, and sooner than expected, incorporate the detection of artificial intelligence writeups. And the other interesting part is to tell the students: well, if you do choose to use any of these tools, what are the rules of engagement? Can you ask it to write a paragraph and then cite it, and mention that ChatGPT wrote it? Not to mention, in addition to that, all the issues about artificial intelligence which the courts are deciding now, regarding the intellectual property of those productions. If a song, a poem, a book is written by an artificial intelligence entity, who owns the intellectual property for those works produced by an artificial intelligence machine? FASKIANOS: Good question. We have a lot of written questions. And I'm sure you don't want to just listen to my voice, so please do raise your hands. But we do have a question from one of your colleagues, Pablo, Pepe Barcega, who's the IT director at Drexel: Considering the potential biases and limitations of AI models, like ChatGPT, do you think relying on such technology in the educational domain can perpetuate existing inequalities and reinforce systemic biases, particularly in terms of access, representation, and fair evaluation of students? And Pepe's question got seven upvotes, so we advanced it to the top of the line. MOLINA: All right, well, first I have to wonder whether he used ChatGPT to write the question. But I'm going to leave it at that. Thank you. (Laughter.) It's a wonderful question. This has been one of the greatest concerns for those of us who have been working on artificial intelligence and digital policy for years—not just this year, when ChatGPT was released; for years we've been thinking about this. And even before artificial intelligence, with algorithmic transparency in general. And the idea is the following: two things are happening here. One is that we're programming the algorithms using instructions, instructions created by programmers, with all their biases, and their misunderstandings, and their shortcomings, and their lack of context, and everything else. But with artificial intelligence we're doing something even more concerning than that, which is that we have some basic algorithms but then we're feeding a lot of information, a corpus of information, to those algorithms. And the algorithms are fine-tuning the rules based on that.
So it's very, very difficult for experts to explain how an artificial intelligence system actually makes decisions, because we know the engine and we know the data that we fed to the engine, but we don't really know how those decisions are being made through neural networks, through all of the different systems and methods that we have for artificial intelligence. Very, very few people understand how those work. And those people are so busy they don't have time to explain how the algorithms work to others, including the regulators. Let's remember some of the failed cases. Amazon tried this early on, for selecting employees. They fed in all the resumes. And guess what? It turned out that most of the recommendations were to hire young white people who had gone to Ivy League schools. Why? Because the resumes of their first employees, who had done extremely well at Amazon, were feeding those profiles. Hence, by feeding in the information of past successful employees, only candidates like those came out. And so that pushes away the diversity that we need for different institutions, large and small, public and private, from different countries, from different genders, from different ages, from different ethnicities. All those things went away because the algorithm was promoting one particular profile. Recently I had the opportunity to moderate a panel in Washington, DC, and we had representatives from the Equal Employment Opportunity Commission. And they told us how they investigated a hiring algorithm from a company that was disproportionately recommending that they hire people whose first name was Brian and who had played lacrosse in high school, because, once again, a disproportionate number of people in that company had done that. And the algorithm decided, oh, these must be important characteristics for hiring people at this company. Let's not forget, for example, Amazon Rekognition, the facial recognition software: the American Civil Liberties Union decided, OK, we're going to submit the pictures of all the congressmen to this particular facial recognition engine. And it turned out that it misidentified many of them, particularly African Americans, as felons who had been convicted. So all these biases could have really, really bad consequences. Imagine that you're using this to decide whom you admit to your universities, and the algorithm is wrong. You know, you are making really biased decisions that will affect the livelihoods of many people, but will also transform society, possibly for the worse, if we don't address this. So this is why the OECD, the European Union, even the White House, everybody is saying: We want this technology. We want to derive the benefits of this technology, while curtailing the abuses. And it's fundamental that we achieve transparency and make sure that these algorithms are not biased against the people who use them. FASKIANOS: Thank you. So I'm going to go next to Emily Edmonds-Poli, who is a professor at the University of San Diego: We hear a lot about providing clear guidelines for students, but for those of us who have not had a lot of experience using ChatGPT it is difficult to know what clear guidelines look like. Can you recommend some sources we might consult as a starting point, or where we might find some sample language? MOLINA: Hmm. Well, certainly, this is what we do in higher education.
We compete for the best students and the best faculty members. And we sometimes compete a little bit to be first with groundbreaking research. But we tend to collaborate on everything else, particularly when it comes to policy, and guidance, and rules. So there are many institutions, like mine, that have already assembled—and I'm sure that yours has done the same—committees, because assembling committees and subcommittees is something we do very well in higher education, with faculty members, with administrators, even with student representation, to figure out: OK, what should we do about the use of artificial intelligence on our campus? I mentioned before that taking a look at the big aspirational declarations by Meta, and Google, and IBM, and Microsoft could be helpful for these committees. But also, I'm a very active member of an organization known as EDUCAUSE. And EDUCAUSE is for educators—predominantly higher education administrators, staff members, and faculty members—to think about the adoption of information technology. And EDUCAUSE has done good work on this front and continues to do good work on this front. So once again, EDUCAUSE and some of the institutions have already published their guidelines on how to use artificial intelligence and incorporate it within their academic lives. Now, that said, we also know that even though all higher education institutions are the same, they're all different. We all have different values. We all believe in different uses of technology. We trust the students more or less. Hence, it's very important that, whatever inspiration you take, you work internally on campus—as you have done with many other issues in the past—to make sure it really reflects the values of your institution. FASKIANOS: So, Pablo, would you point to a specific college or university that has developed a code of ethics that addresses the use of AI for their academic community beyond your own, but that is publicly available? MOLINA: Yeah, I'm going to be honest, I don't want to put anybody on the spot. FASKIANOS: OK. MOLINA: Because, once again, there are many reasons. But, once again, let me repeat a couple of resources. One of them is from the U.S. Department of Education, from the Office of Educational Technology. And the article is Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations, published earlier this year. The other source really is educause.edu. And if you search educause.edu for artificial intelligence, you'll find links to articles, you'll find links to universities. It would be presumptuous of me to evaluate whose policies are better than others, but I would argue that the general principles of nonbias, transparency, and accountability, and also the integration of these tools within the academic life of the institution in a morally responsible way—with concepts like privacy by design, security by design, and responsible computing—all of those are good words to have in there. Now, the other problem with policies and guidelines is that, let's be honest, many of those have no teeth in our institutions. You know, we promulgate them. They're very nice. They look beautiful. They are beautifully written. But oftentimes when people don't follow them, there's not a big penalty. And this is why, in addition to having the policies, educating the campus community is important. But it's difficult to do, because we need to educate them about so many things.
About cybersecurity threats, about sexual harassment, about nondiscrimination policies, about responsible behavior on campus regarding drugs and alcohol, about crime. So many things that they have to learn about. It's hard to add yet another topic for them to spend their time on, instead of researching the core subject matter that they chose to pursue for their lives. FASKIANOS: Thank you. And we will be sending out a link to this video, the transcript, as well as the resources that you have mentioned. So if you didn't get them, we'll include them in the follow-up email. So I'm going to go to Dorian Brown Crosby, who has a raised hand. Q: Yes. Thank you so much. I put one question in the chat, but I have another question that I would like to go ahead and ask now. So thank you so much for this presentation. You mentioned algorithmic biases against individuals. And I appreciate you pointing that out, especially when we talk about facial recognition, and also in terms of forced migration, which is my area of research. But could you also talk about the challenges that some institutions in higher education would have in terms of support for some of the things that you mentioned: potential curricula, or certificates, or other ways that AI would be woven into the new offerings of institutions of higher education? How would that look specifically for institutions that might be challenged to access those resources, such as Historically Black Colleges and Universities? Thank you. MOLINA: Well, a very interesting question, and a really fascinating point of view. Because we all tend to look at things from our own perspective and perhaps not consider the perspective of others: those who have much more money and resources than us, and those who have fewer resources and less funding available. So this is a very interesting line of thinking. What is it that we do in higher education when we have these problems? Well, as I mentioned before, we build committees and subcommittees. Usually we also do campus surveys. I don't know why we love doing campus surveys and asking everybody what they think about this. Those are useful tools for discussion. And oftentimes the other thing that we do, that we've done for many other topics: well, we hire people and we create new offices, either academic or administrative offices. With all of those, you know, there are certain limitations to how useful and functional they can be. And they also continue to require resources. Resources that, in the end, are paid for by students with, you know, federal financing. But this is the truth of the matter. So if you start creating offices of artificial intelligence on our campuses, however important their guidance may be, and however much extra work can be assigned to them instead of distributed to every faculty and staff member out there, the truth of the matter is that these are not perfect solutions. So what is it that we do? Oftentimes, we work with partners. And our partners love to take—(inaudible)—vendors. But the truth of the matter is that sometimes they have much more expertise on some of these topics.
So, for example, if you're thinking about incorporating artificial intelligence into some of the academic materials that you use in class, well, I'm going to take a guess that if you already work with McGraw Hill in economics, or accounting, or some of the other books and websites that they publish, that you recommend to your students or make mandatory for your students, then you start discussing with them: Hey, listen, are you going to use artificial intelligence? How? Are you going to tell me ahead of time? Because, as a faculty member, you may have a choice to decide: I want to work with this publisher and not that particular publisher because of the way they approach this. And let's be honest, we've seen a number of these vendors with major information security problems. McGraw Hill recently left a repository of data misconfigured out there on the internet, and almost anybody could access it. But many others before them, like Chegg and others, were notorious for their information security breaches. Can we imagine that these people are going to adopt artificial intelligence and not do such a good job of securing the information, the privacy, and the unbiased approaches that we hold dear for students? I think they require a lot of supervision. But in the end, these publishers have the economies of scale for you to recommend those educational materials instead of developing your own for every course, for every class, and for every institution. So perhaps we're going to have to continue to work together, as we've done in higher education, in consortia, which could be local or regional, based on institutions with the same interests, or on student population, to try to do this. And, you know, hopefully we'll get grants, grants from the federal government, that can be used to develop some of the materials and guidelines that are going to help us embrace this, not only to operate better as institutions and fulfill our mission, but also to make sure that our students are better prepared to join society and compete globally, which is what we have to do. FASKIANOS: So I'm going to combine questions. Dr. Lance Hunter, who is an associate professor at Augusta University: There's been a lot of debate regarding whether plagiarism detection software tools like Turnitin can accurately detect AI-generated text. What is your opinion regarding the accuracy of tools that detect AI-generated text? And then Rama Lohani-Chase, at Union County College, wants to know what plagiarism checkers, or, you know, AI-detection tools, you would recommend. MOLINA: Sure. So, number one, I'm not going to endorse any particular company, because if I did that I would have to ask them for money, or the other way around; I'm not sure how it works. I could be seen as biased, particularly here. But there are many out there, and your institutions are using them. Sometimes they are integrated with your learning management system. And, as I mentioned, sometimes we ask the students to use them themselves and then either produce the plagiarism report for us or simply learn from it themselves. I'm going to be honest: when I teach ethics and technology, I tell the students about the antiplagiarism tools at the universities. But I also tell them, listen, if you're cheating in an ethics and technology class, I have failed miserably. So please don't. Take extra time if you have to, but, you know, if you want, use the antiplagiarism tool yourself.
But the question stands and is critical, which is that right now those tools are trying to improve the recognition of text written by artificial intelligence, but they're not as good as they could be. So, like every other technology and what I'm going to call antitechnology (technology used to control the damage of the first technology), it's an escalation, where we keep trying to identify this. And I think they will continue to do this, and they will be successful in doing this. There are people who have written ad hoc tools using ChatGPT to identify things written by ChatGPT. I tried them. They're remarkably good for the handful of papers that I tried myself, but I haven't conducted enough research myself to tell you if they're really effective tools for this. So I would argue that for the time being you must assume that those tools, as we assume all the time, will not catch all of the cases, only some of the most obvious ones. FASKIANOS: So a question from John Dedie, who is an assistant professor at the Community College of Baltimore County: To combat AI issues, shouldn't we rethink assignments? Instead of papers, have students do PowerPoints, ask students to offer their opinions and defend them? And then there was an interesting comment from Mark Habeeb at Georgetown University School of Foreign Service: Knowledge has been cheap for many years now because it is so readily available. With AI, we have a tool that can aggregate the knowledge and create written products. So, you know, what needs to be the focus now is critical thinking and assessing values. We need to teach our students how to assess and use that knowledge rather than how to find the knowledge and aggregate that knowledge. So maybe you could react to those two, the question and the comment. MOLINA: So let me start with the Georgetown one, not only because he's a colleague of mine; I also teach at Georgetown, which is where I obtained my doctoral degree a number of years ago. I completely agree. I completely agree with the point that we have to teach new skills. And one of the programs in which I teach at Georgetown is our master's in analysis, which is basically for people who want to work in the intelligence community. And these people have to find the information and they have to draw inferences, and try to figure out whether it is a nation-state that is threatening the United States, or another actor, or a corporation, or something like that. And they apply all of that critical thinking, and intuition, and all the tools that we have developed in the intelligence community for many, many years. If they suspend their judgment and only use artificial intelligence, they will miss very important information that is critical for national security. And the same is true for something like our flagship school, the School of Foreign Service at Georgetown, one of the best in the world in that particular field, where you want to train the diplomats, and the heads of state, and the great strategic thinkers on policy and politics in the international arena to think precisely not in the mechanical way that a machine can think, but to connect those dots. And, sure, they should be using those tools in order to, you know, get the most favorable starting position. But they should also always use their critical thinking and their capabilities of analysis in order to produce good outcomes and good conclusions. Regarding redoing the assignments, absolutely true. But that is hard. It is a lot of work.
We're very busy faculty members. We have to grade. We have to be on committees. We have to do research. And now they ask us to redo our entire assessment strategy, with new assignments that we need to grade again and that account for artificial intelligence. And I don't think that any provost out there is saying: You know what? You can take two semesters off to work on this and retool all your courses. That doesn't happen in the institutions that I know of. If you get time off because you're entitled to it, you want to devote that time to research, because that is really what you signed up for when you pursued an academic career, in many cases. I can tell you one thing: here in Europe, where oftentimes they look at these problems with fewer resources than we have in the United States, a lot of faculty members at the high school level and at the college level are moving to oral examinations, because it's much harder to cheat with ChatGPT in an oral examination. They will ask you interactive, adaptive questions—like the ones we suffered through when we were defending our doctoral dissertations. And the faculty members will realize whether or not you know the material and understand the material. Now, imagine oral examinations for a class of one hundred, two hundred, four hundred. Do you do one at the end of the semester, on one chosen topic? Or do you do several throughout the semester? Do you end up using a ChatGPT-style virtual assistant to conduct your oral examinations? I think these are complex questions. But certainly redoing our assignments, and redoing the way we teach and the way we evaluate our students, is perhaps a necessary consequence of the advent of artificial intelligence. FASKIANOS: So, the next question from Damian Odunze, who is an assistant professor at Delta State University in Cleveland, Mississippi: Who should safeguard against ethical lapses and the misuse of AI by criminals? Should the onus fall on the creators and companies like Apple, Google, and Microsoft to ensure security, and not pass it on to the end users of the product? And I think you mentioned at the top of your remarks, Pablo, how the head of OpenAI, the company behind ChatGPT, was urging Congress to put some regulation into place. What is the onus on OpenAI to protect against some of this as well? MOLINA: Well, I'm going to recycle more of the material from my doctoral dissertation. In this case it was the Molina cycle of innovation and regulation. It goes like this: basically, there are engineers and scientists who create new information technologies. And then there are entrepreneurs and businesspeople and executives who figure out: OK, I know how to package this so that people are going to use it, buy it, subscribe to it, or look at it, so that I can sell the advertising to others. And, you know, this begins, and very, very soon the abuses start. And the abuses are that criminals are using these platforms for reasons that were not envisioned before. Even the executives, as we've seen with Google, and Facebook, and others, decide to invade the privacy of the people, because they only have to pay a big fine, but they make much more money than the fines, or they expect not to be caught. And what happens in this cycle is that eventually there is so much noise in the media, so many congressional hearings, that regulators step in and try to pass new laws, or the regulatory agencies try to investigate using the powers given to them.
And then all of these new rules have to be tested in courts of law, which could take years, sometimes going all the way to the Supreme Court. Some of them are even knocked down on the way to the Supreme Court, when the courts realize that a rule is not constitutional, or that there's a conflict of laws, and things like that. Now, by the time we regulate these new technologies, not only have many years gone by, but the technologies have changed. The marketing products and services have changed, the abuses have changed, and the criminals have changed. So this is why we're always living in a loosely regulated space when it comes to information technology. And this is an issue of accountability. We're finding this, for example, with information security. If my phone is hacked, or my computer, or my email, is it the fault of Microsoft, and Apple, and Dell, and everybody else? Why am I the one paying the consequences and not any of these companies? Because it's unregulated. So, morally speaking, yes, these companies are accountable. Morally speaking, the users are also accountable, because we're using these tools and incorporating them professionally. Legally speaking, so far, nobody is accountable except the lawyers who submitted briefs that were not correct in a court of law and were disciplined for that. But other than that, right now, it is a very gray space. So in my mind, it requires everybody. It takes a village to do the morally correct thing. It starts with the companies and the inventors. It involves the regulators, who should do their job and make sure that there's no unnecessary harm created by these tools. But it also involves every company executive, every professional, every student, and every professor who decides to use these tools. FASKIANOS: OK. I'm going to combine a couple of questions from Dorothy Marinucci and Venky Venkatachalam about the effect of AI on jobs. Dorothy—she's from Fordham University—read something about Germany's best-selling newspaper Bild reportedly adopting artificial intelligence to replace certain editorial roles in an effort to cut costs. Does this mean that the field of journalism and communication will change? And Venky's question is: One of AI's impacts is in the area of automation, leading to the elimination of certain types of jobs. Can you talk about both the elimination of jobs and what new types of jobs you think will be created as AI matures into the business world with more value-added applications? MOLINA: Well, what I like about predicting the future, and I've done this before in conferences and papers, is that, you know, when the future comes ten years from now, people will either not remember what I said, or, you know, maybe I was lucky and my prediction was correct. In the specific field of journalism, we've seen it: the journalism and communications field has been decimated because the money that it used to make with advertising has gone away. Certainly a big part of that money was in the form of corporate profits, but much of it went to hiring good journalists and funding investigative journalism: people who could spend six months writing a story, when right now they have six hours to write a story, because there are no resources. And all the advertising money went instead to Facebook, and Google, and many others, because they work very well for advertisements. But now the lifeblood of journalism organizations has been really, you know, undermined.
And there's good journalism in other places, in newspapers, but sadly there is a great temptation to replace some of the journalists with more artificial intelligence, particularly on the least important pieces. I would argue that editorial pieces are the most important in newspapers, the ones requiring ideology, and critical thinking, and many other skills. Whereas there are other pieces, ones that tell you about traffic changes, or weather patterns (without offending any meteorologists), that maybe require a more mechanical approach. I would argue that a lot of professions are going to be transformed because, well, if ChatGPT can write real estate announcements that work very well, well, you may need fewer people doing this. And yet, I think that what we're going to find is the same thing we found when technology arrived. We all thought that the arrival of computers would mean that everybody would be without a job. Guess what? It meant something different. It meant that in order to do our jobs, we had to learn how to use computers. So I would argue that this is going to be the same case. To be a good doctor, to be a good lawyer, to be a good economist, to be a good knowledge worker, you're going to have to also learn how to use whatever artificial intelligence tools are available out there, and use them professionally within the moral and deontological constraints that apply to your particular profession. Those are the kinds of jobs that I think are going to be very important. And, of course, all the technical jobs, as I mentioned. There are tons of people who consider themselves artificial intelligence experts. Only a few at the very top understand these systems. But there are many others in the pyramid who help with preparing these systems, with the support, the maintenance, the marketing, preparing the datasets to go into these particular models, working with regulators and legislators and compliance organizations to make sure that the algorithms and the tools are not running afoul of existing regulations. All of those, I think, are going to be interesting jobs that will be part of the arrival of artificial intelligence. FASKIANOS: Great. We have so many questions left, and we just couldn't get to them all. I'm just going to ask you to maybe reflect on how the use of artificial intelligence in higher education will affect U.S. foreign policy and international relations. I know you touched upon it a little bit in reacting to the comment from our Georgetown University colleague, but any additional thoughts you might want to add before we close? MOLINA: Well, let's be honest: one particular issue that applies to education and to everything else is that there is a race, a worldwide race, for artificial intelligence progress. The big companies are fighting it out; you know, Google, and Meta, and Amazon, and many others are really putting resources into that, trying to be first in this particular race. But it's also a national race. For example, it's very clear that there are executive orders from the United States, as well as regulations and declarations from China, that basically indicate these two big nations are trying to be first in dominating the use of artificial intelligence. And let's be honest, in order to do well in artificial intelligence you need not only the scientists who are going to create those models and refine them, but you also need the bodies of data to feed these algorithms in order to have good algorithms.
So the barriers to entry for other nations, and the barriers to entry for all but the biggest technology companies, are going to be very, very high. It's not going to be easy for any small company to say: Oh, now I'm a huge player in artificial intelligence. Because even if you have created an interesting new algorithmic procedure, you don't have the datasets that the huge companies have been able to amass and work on for the longest time. Every time you submit a question to ChatGPT, the ChatGPT experts are using those questions to refine the tool. The same way that, when we were using voice recognition with Apple or Android or other companies, they were using our voices and our accents and our mistakes in order to refine their voice recognition technologies. So this is the power. The early bird gets the worm: those who are investing, those who are aggressively going for it, and those who are also judiciously regulating this can really do very well in the international arena when it comes to artificial intelligence. And so will their universities, because they will be able to really train those knowledge workers, they'll be able to get the money generated from artificial intelligence, and the two will be able to, you know, feed back into one another. The advances in the technology will result in more need for students, and more students graduating will propel the industry. And there will also be—we'll always have a fight for talent, where companies and countries will attract those people who really know about these wonderful things. Now, keep in mind that artificial intelligence was the core of this discussion, but there are so many other emerging issues in information technology. And some of them are critical to higher education. There's still, you know, lots of hype, but we think that virtual reality will have an amazing impact on the way we teach and conduct research and train for certain skills. We think that quantum computing has the ability to revolutionize the way we conduct research, allowing us to do computations that are not even thinkable today. We'll look at things like robotics. And if you ask me what is going to take many jobs away, I would say that robotics can take a lot of jobs away. Now, we thought that there would be no factory workers left because of robots, but that hasn't happened. But keep adding robots with artificial intelligence to serve you a cappuccino, or your meal, or take care of your laundry, or many other things, or maybe clean your hotel room, and you realize, oh, there are lots of jobs out there that will no longer be there. Think about artificial intelligence for self-driving vehicles, boats, planes, cargo ships, commercial airplanes. Think about the thousands of taxi drivers and truck drivers who may end up being out of jobs because, listen, the machines drive more safely, and they don't get tired, and they can be driving twenty-four by seven, and they don't require health benefits or retirement. They don't get depressed. They never miss work. Think about many of the technologies out there that have an impact on what we do. But artificial intelligence is a multiplier of technologies, a contributor to many other fields and many other technologies. And this is why we're spending so much time and so much energy thinking about these particular issues. FASKIANOS: Well, thank you, Pablo Molina. We really appreciate it.
Again, my apologies that we couldn't get to all of the questions and comments in the chat, but we appreciate all of you for your questions and, of course, your insights were really terrific, Dr. P. So we will, again, be sending out the link to this video and transcript, as well as the resources that you mentioned during this discussion. I hope you all enjoy the Fourth of July. And I encourage you to follow @CFR_Academic on Twitter and visit CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for research and analysis on global issues. Again, you can send us comments, feedback, and suggestions to CFRacademic@CFR.org. And, again, thank you all for joining us. We look forward to your continued participation in CFR Academic programming. Have a great day. MOLINA: Adios. (END)

The Sunday Show
Responsible Release and Accountability for Generative AI Systems

The Sunday Show

Play Episode Listen Later May 28, 2023 48:36


Today's show has two segments both focused on generative AI. In the first segment, Justin Hendrix speaks with Irene Solaiman, a researcher who has put a lot of thought into evaluating the release strategies for generative AI systems. Organizations big and small have pursued different methods for release of these systems, some holding their models and details about them very close, and some pursuing a more open approach. And in the second segment, Justin Hendrix speaks with Calli Schroeder and Ben Winters at the Electronic Privacy Information Center about a new report they helped write about the harms of generative AI, and what to do about them.

The Irish Tech News Podcast
Is the Future Secure? Insights with Bruce Schneier

The Irish Tech News Podcast

Play Episode Listen Later Apr 16, 2023 56:37


Is the Future Secure? This week on The Futurists we get into the future of cybercrime and personal security in the smart world with renowned “security guru” Bruce Schneier: the author of over a dozen books (his latest bestseller being “A Hacker's Mind”), Lecturer in Public Policy at the Harvard Kennedy School, congressional advisor, and media personality. Will AI and quantum kill passwords? How secure will your DNA records be? The answers might surprise you. About Bruce Schneier: Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of over one dozen books—including his latest, A Hacker's Mind—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation and AccessNow; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc. Brett King is a world-renowned entrepreneur, futurist, speaker, international bestselling author, and media personality. China's President Xi Jinping cited his book Augmented: Life in the Smart Lane on the topic of artificial intelligence in his 2018 national address; the same book was listed as a Top 10 non-fiction book in North America. In 2019 his book Bank 4.0 was awarded the Top Book by a Foreign Author in Russia for that year. His co-host, Robert Tercek, is an authority on digital innovation. In his professional capacity, Mr. Tercek is a seasoned business executive with deep expertise in digital media and internet services. He is a prolific creator of interactive programs and products. He has designed and launched successful consumer experiences on every digital platform, including digital television, game consoles, broadband Internet, and mobile networks. In 2021, Mr. Tercek was recognized as the Humanitarian of the Year by the Media Excellence Awards for his leadership in designing and launching COVID SMART™, an interactive training program designed to keep workers safe on the job during the pandemic.

No Name Podcast
No Name Podcast with Bruce Schneier

No Name Podcast

Play Episode Listen Later Mar 27, 2023 66:55


Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of over one dozen books — including his latest, A Hacker's Mind — as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation and AccessNow; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.

SCOTUStalk
Section 230 and the internet

SCOTUStalk

Play Episode Listen Later Feb 28, 2023 23:20


In the first week of the February session, the justices heard oral arguments in two cases about the scope of liability tech companies may face for user content. Amy is joined by Megan Iorio of the Electronic Privacy Information Center to break down those arguments in Gonzalez v. Google and Twitter v. Taamneh. EPIC filed an amicus brief in Gonzalez in support of neither party. Send us a question about the court at scotustalk@scotusblog.com or leave us a voicemail at (202) 596-2906. Please tell us your first name and where you're calling from. (Music by Keys of Moon Music via Soundcloud) Hosted on Acast. See acast.com/privacy for more information.

Inside The War Room
A Hacker's Mind: How the Powerful Bend Society's Rules, and How to Bend Them Back

Inside The War Room

Play Episode Listen Later Feb 18, 2023 28:36


Links from the show:
* A Hacker's Mind: How the Powerful Bend Society's Rules, and How to Bend Them Back
* Visit Bruce's site
* Follow Bruce on Twitter
* Follow Ryan on Twitter
* Support the show
About my guest: Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of over one dozen books—including his latest, We Have Root—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation and AccessNow; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc. Get full access to Dispatches from the War Room at dispatchesfromthewarroom.substack.com/subscribe

Keen On Democracy
A Hacker's Mind: Bruce Schneier on how the powerful bend society's rules and how to bend them back

Keen On Democracy

Play Episode Listen Later Feb 7, 2023 28:42


In this KEEN ON episode, Andrew talks to A HACKER'S MIND author Bruce Schneier about how the powerful have learnt to hack society's rules and why we need to learn to outhack the hackers. Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of over one dozen books—including his latest, A HACKER'S MIND—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation and AccessNow; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc. Named one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best known broadcasters and commentators. In addition to presenting KEEN ON, he is the host of the long-running How To Fix Democracy show. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children. Learn more about your ad choices. Visit megaphone.fm/adchoices

The Healthcare Policy Podcast ®  Produced by David Introcaso
Dr. Susan Linn Discusses Her Just-Published Book, "Who's Raising the Kids: Big Tech, Big Business and the Lives of Children" (December 16th)

The Healthcare Policy Podcast ® Produced by David Introcaso

Play Episode Listen Later Dec 17, 2022 35:42


(This interview is dedicated to my friend Randy Lee, a dedicated public health nurse, who passed away earlier this week.) In his book published earlier this year, The Internet Is Not What You Think It Is, Justin Smith argued that the internet is addictive, that its use of algorithms leaves our lives warped and impoverished, and that despite these problems there is little or no federal regulatory oversight. Concerning the internet's effects on children, Dr. Linn argues in her recently published work, Who's Raising the Kids, that our digital landscape essentially invades children's privacy in order to use their personal information to drive endless consumerism. Children's screen use, which amounts to upwards of 7.5 hours on average per day - substantially longer for poor and minority children - is having profound negative effects on children of every age: generally, by threatening childhood development, and more specifically, Dr. Linn notes, by driving childhood anxiety, conflictual relationships with parents and family stress, depression, diminished language development, eating disorders, erosion of creative play, materialistic values, obesity, precocious sexuality, sleep disturbances, underachievement in school and youth violence. During this 35-minute interview Dr. Linn begins by explaining how Mattel's Aristotle (never commercially launched) and Epic's Fortnite are designed to drive revenue. She discusses how digital games erode or undermine children's creative play, and how various marketing tools and approaches drive every digital experience toward a purchase, for example, by creating "frictionless" online experiences. She also discusses the influence corporations have in formal education programming via Sponsored Education Materials (SEMs), what parents can do to monitor children's screen use, what federal policies have been proposed to protect children's privacy and regulate digital game design, and what action the American Psychological Association has taken. Dr. Susan Linn is currently a Research Associate at Boston Children's Hospital and Lecturer on Psychiatry at Harvard Medical School. She has lectured on the importance of creative play, the impact of media and marketing on children, and the use of puppetry as a therapeutic tool in venues throughout North and South America, Europe, Asia, and the Middle East. From 2000 to 2015 Dr. Linn was the Founding Director of the Campaign for a Commercial-Free Childhood. Dr. Linn and her puppets appeared in several episodes of Mister Rogers' Neighborhood. She has written and appeared in a number of video programs designed to help children cope with issues ranging from mental illness to death and loss. This Secret Should Be Told, a syndicated TV program for children about sexual abuse, won a coveted Action for Children's Television award and earned Dr. Linn a New England Emmy Award. With Fred Rogers' production company, Dr. Linn created the acclaimed educational video series Different and the Same: Helping Children Identify and Prevent Prejudice. Different and the Same has been used in classrooms in all 50 states and won numerous awards, including the two top prizes from the International Communication Film and Video Competition and the Media Award from the Association of Multicultural Educators. Her book Consuming Kids helped launch the movement to reclaim childhood from corporate marketers.
Her work has been featured on Good Morning America, Today, Sixty Minutes, Dateline, The Colbert Report, and the acclaimed documentary The Corporation. Among other honors, Dr. Linn received an UNIMA-USA citation for excellence; a special award for puppet therapy from Puppeteers of America; a Champion of Freedom Award from the Electronic Privacy Information Center; the Creative Leadership Award from the Puppet Showplace Theater; and a Presidential Citation from the American Psychological Association for her work on behalf of children. Information on Who's Raising the Kids is at: https://thenewpress.com/books/whos-raising-kids.

Marketplace Tech
Can your workplace store your fingerprint or facial scan data?

Marketplace Tech

Play Episode Listen Later Oct 19, 2022 7:20


They have our Social Security numbers and probably our Amazon shopping lists. But should employers be collecting data on, say, our fingerprints, voices or retinas? After all, these things are unique physical identifiers. And if they were somehow stolen in a data breach, they’re not easily changed like a password. A legal case in Illinois provided one of the first tests of a state law that protects this type of data — the Biometric Information Privacy Act, or BIPA. Last week, 45,000 truck drivers won their suit against BNSF Railway for collecting their fingerprints without consent. Marketplace’s Meghan McCarty Carino recently spoke with Alan Butler, executive director and president of the Electronic Privacy Information Center, a nonprofit research organization. He said Illinois is at the forefront of a growing movement to regulate biometric data.

Marketplace All-in-One
Can your workplace store your fingerprint or facial scan data?

Marketplace All-in-One

Play Episode Listen Later Oct 19, 2022 7:20


They have our Social Security numbers and probably our Amazon shopping lists. But should employers be collecting data on, say, our fingerprints, voices or retinas? After all, these things are unique physical identifiers. And if they were somehow stolen in a data breach, they’re not easily changed like a password. A legal case in Illinois provided one of the first tests of a state law that protects this type of data — the Biometric Information Privacy Act, or BIPA. Last week, 45,000 truck drivers won their suit against BNSF Railway for collecting their fingerprints without consent. Marketplace’s Meghan McCarty Carino recently spoke with Alan Butler, executive director and president of the Electronic Privacy Information Center, a nonprofit research organization. He said Illinois is at the forefront of a growing movement to regulate biometric data.

The Sunday Show
Young Advocates for California's Age Appropriate Design Code

The Sunday Show

Play Episode Listen Later Aug 7, 2022 33:33


Earlier this year in California, two State Assembly members, Democrat Buffy Wicks and Republican Jordan Cunningham, introduced the California Age Appropriate Design Code Bill. The California Age Appropriate Design Code would place limitations on what companies can do with youth data, including tracking location and profiling. It puts limitations on manipulative design, and includes transparency measures so users are aware of and consent to the use of their information. The bill makes the California attorney general responsible for enforcement of the state's rules, opening up the possibility of litigation or fines against companies that do not follow the Code. It would also require the California Privacy Protection Agency to create a Children's Data Protection Task Force that would formulate recommendations on best practices. A coalition of civil society and tech policy groups supports the Code, including organizations such as Common Sense Media, Accountable Tech, the Electronic Privacy Information Center, the Sesame Workshop, the Consumer Federation of California, and the National Hispanic Media Coalition. Industry groups, such as TechNet and the California Chamber of Commerce, oppose the bill, and other experts have raised concerns in particular about requirements for age verification. The California State Assembly voted 72-0 to pass the bill, and it is now with the California Senate. For this podcast, Tech Policy Press spoke to three people, all college students and activists, who support it, in part due to their own experiences: Aliza Kopans, a rising sophomore at Brown University, cofounder of Technic(ally) Politics and an intern at Accountable Tech; Emma Lembke, a rising sophomore at Washington University in St. Louis, founder of the Log Off Movement, cofounder of Technic(ally) Politics and an intern at Accountable Tech; and Khoa-Nathan Ngo, a rising college sophomore and a youth collaborator at GoodforMedia.

The Gazette Daily News Podcast
Gazette Daily News Briefing, August 3

The Gazette Daily News Podcast

Play Episode Listen Later Aug 3, 2022 3:24


This is Zack Kucharski with The Gazette digital news desk and I'm here with your update for Wednesday, August 3rd. A quick check of the weather from the National Weather Service: showers and thunderstorms are likely, especially midday. Otherwise, partly sunny with a high of 88. Winds from the west at 10 to 15 mph, with gusts to 20 mph. 60 percent chance of rain. For tonight, a 20 percent chance of showers and thunderstorms before 9 p.m. Otherwise, gradual clearing with a low around 63 degrees. West winds will swing to the north after midnight. Now to what's making news: Facing increasing pressure from Democrats and veterans advocacy organizations, the U.S. Senate passed a bill late Tuesday expanding health care and disability benefits for millions of veterans exposed to toxic burn pits. The Senate passed the measure, 86-11, with Iowa Republican U.S. Sens. Joni Ernst and Chuck Grassley voting in favor. Ernst, a combat veteran, had previously objected with some other Republicans to a budget provision in the bill, which briefly blocked its passage. Democratic President Joe Biden has pushed for the legislation and is expected to sign the bill into law. Tuesday's vote ended a brief stalemate that began after some Senate Republicans, including Ernst, made a late attempt last week to change a budget provision in the bill and blocked it from advancing, infuriating veterans advocacy groups. With veterans a key voting bloc in the Nov. midterm elections, the advocacy groups ramped up the political pressure on GOP lawmakers over the last five days. +++ A Cedar Rapids man faces four counts of attempted murder and other charges following an alleged road rage incident Tuesday afternoon that resulted in shots being fired as the cars traveled southbound on Interstate 380. Deryan Sims, 33, of Cedar Rapids, is accused of four counts of attempted murder, four counts of assault while displaying a dangerous weapon, intimidation with a weapon, reckless use of a firearm and carrying weapons in the commission of a crime in connection with the incident. Police received a call around 2:10 p.m. Tuesday from a passenger in a car involved in a road rage incident on southbound Interstate 380. The incident began in the area of Center Point Rd. NE and 32nd St. NE. Investigators say Sims pointed a gun at a vehicle with two adults and two kids inside and fired it twice during the incident. One of the bullets struck the front end of the car, police said. No one was injured. Police recovered the gun and ammo during a traffic stop. The driver of the car Sims was riding in, Kelly Theisen, 40, of Cedar Rapids, was charged with driving on a suspended license. The investigation remains ongoing, and Sims will make an initial appearance Wednesday in Linn County District Court. +++ And finally, Iowa Attorney General Tom Miller announced Tuesday his office has joined a nationwide task force of state attorneys general working to cut down on illegal and unwanted robocalls used to scam seniors and defraud consumers nationwide. The bipartisan group has issued 20 civil investigative demands to 20 gateway providers and other entities that are allegedly responsible for a majority of foreign robocall traffic in the United States, according to a news release. The task force will focus on “the bad actors” throughout the telecommunications industry to help reduce the number of illegal and unwanted robocalls that Iowans and other Americans receive, according to the release.
More than 33 million robocalls are made to Americans every day, according to the National Consumer Law Center and Electronic Privacy Information Center. And an estimated $29.8 billion was stolen through scam calls in 2021, with most of the robocall traffic originating overseas.

Marketplace Tech
Clearview AI settlement limits company’s sale of facial recognition tools

Marketplace Tech

Play Episode Listen Later May 11, 2022 8:26


This week, facial recognition software company Clearview AI settled a lawsuit with the American Civil Liberties Union. The group sued Clearview in 2020 for allegedly violating the Illinois Biometric Information Privacy Act. While the case deals with a state law, the settlement has national implications, including limiting who can access the company's faceprint database. Clearview AI says that database contains some 20 billion facial images. Marketplace’s Kimberly Adams speaks with Calli Schroeder, global privacy counsel at the Electronic Privacy Information Center, who said the suit focused on the use of biometric markers, including faces.

Marketplace All-in-One
Clearview AI settlement limits company’s sale of facial recognition tools

Marketplace All-in-One

Play Episode Listen Later May 11, 2022 8:26


This week, facial recognition software company Clearview AI settled a lawsuit with the American Civil Liberties Union. The group sued Clearview in 2020 for allegedly violating the Illinois Biometric Information Privacy Act. While the case deals with a state law, the settlement has national implications, including limiting who can access the company's faceprint database. Clearview AI says that database contains some 20 billion facial images. Marketplace’s Kimberly Adams speaks with Calli Schroeder, global privacy counsel at the Electronic Privacy Information Center, who said the suit focused on the use of biometric markers, including faces.

CERIAS Security Seminar Podcast
Richard Clarke, Now That We're in a Hybrid War…

CERIAS Security Seminar Podcast

Play Episode Listen Later Mar 30, 2022 65:15


The war Russia has waged on Ukraine has seemed largely kinetic, but the most effective weapons wielded thus far have been consumer technologies. Putin has pursued a hybrid warfare strategy, yet the Ukrainians have fought his mendacious claims with the realities of conflict captured by mobile devices and social media. Bogged down by guerrilla fighting in Ukrainian cities, Putin will view offensive cyber action and aggressive information warfare as increasingly appealing options. Richard Clarke, former U.S. National Coordinator for Security, Infrastructure Protection and Counterterrorism, and the nation's first "Cyber Czar," will give us an inside look at what the world has learned about the spillover of warfare in an era of advanced cyber threats. In his latest book, The Fifth Domain: Defending Our Country, Our Companies, and Ourselves in the Age of Cyber Threats, Clarke delves deep into the political and economic calculations of cyber conflict. He also provides concrete steps that can be taken to achieve cyber resilience, during peacetime and amidst international conflict, including building more resistant systems and raising the costs for escalations in cyberspace. About the speaker: Richard A. Clarke served for thirty years in national security policy roles in the US Government, first in the Pentagon, then the State Department, and finally for an unprecedented decade of continuous service for three Presidents in the White House. In the White House National Security Council for Presidents Bush (41), Clinton, and Bush (43), he served as Special Assistant to the President for Global Affairs, National Coordinator for Security and Counter-terrorism ("Terrorism Czar"), and Special Advisor for Cyberspace (the first "Cyber Czar"). Earlier, in the State Department he had been appointed as Deputy Assistant Secretary for Intelligence in the Reagan Administration and was confirmed by the Senate as Assistant Secretary of State for Politico-Military Affairs in the Bush (41) Administration. In the latter capacity, he coordinated elements in support of the First Gulf War. In the Pentagon and the State Department, he participated in a series of multilateral and bilateral nuclear arms control negotiations. Following his government career, Dick Clarke was an on-air analyst for ABC News for fifteen years, taught at the Harvard Kennedy School for five years, led a security risk management consulting firm (Good Harbor), and served on corporate advisory boards and Boards of Directors, as well as chairing or serving on state and federal advisory boards on cybersecurity (including President Obama's Advisory Group on Technology and Intelligence, the "NSA Review Group"). He is the Chair of the Board of Governors of the Middle East Institute, an eight-decade-old educational institution based in Washington. He was a member of President Obama's 2008 Transition team. Clarke's books include both non-fiction (5) and fiction (4). His seminal work on terrorism and al Qaeda, Against All Enemies, was a number one New York Times best seller. His 2010 volume Cyber War, co-authored with Rob Knake, was a Washington Post bestseller and acknowledged as an early and foundational book in the field. Its sequel, The Fifth Domain: Defending Our Country, Our Companies, and Ourselves in the Age of Cyber Threats, also written with Knake, was published in 2019. Clarke is the host of the FUTURE STATE podcast.
He is a member of the Cybersecurity Hall of Fame and a recipient of the RSA Lifetime Achievement Award and the Champion of Freedom Award from the Electronic Privacy Information Center. A graduate of Boston Latin School, the University of Pennsylvania, and MIT, Dick Clarke is a resident of Virginia.

Supreme Court of the United States
Case: 20-297 TransUnion LLC v. Ramirez (2021-March-30)

Supreme Court of the United States

Play Episode Listen Later Apr 12, 2021 90:27


QUESTION PRESENTED: Whether either Article III or Federal Rule of Civil Procedure 23 permits a damages class action when the vast majority of the class suffered no actual injury, let alone an injury anything like what the class representative suffered.
Date | Proceedings and Orders
Sep 02 2020 | Petition for a writ of certiorari filed. (Response due October 8, 2020)
Sep 30 2020 | Motion to extend the time to file a response from October 8, 2020 to November 6, 2020, submitted to The Clerk.
Oct 02 2020 | Motion to extend the time to file a response is granted and the time is extended to and including November 6, 2020.
Oct 08 2020 | Brief amicus curiae of Consumer Data Industry Association filed.
Oct 08 2020 | Brief amicus curiae of The Chamber of Commerce of the United States of America filed.
Nov 06 2020 | Brief of respondent Sergio L. Ramirez in opposition filed.
Nov 24 2020 | DISTRIBUTED for Conference of 12/11/2020.
Nov 24 2020 | Reply of petitioner TransUnion LLC filed. (Distributed)
Dec 16 2020 | Petition GRANTED limited to Question 1 presented by the petition.
Dec 23 2020 | Blanket Consent filed by Respondent, Sergio L. Ramirez
Dec 23 2020 | Blanket Consent filed by Petitioner, TransUnion LLC
Feb 01 2021 | SET FOR ARGUMENT on Tuesday, March 30, 2021.
Feb 01 2021 | Brief of petitioner TransUnion LLC filed.
Feb 01 2021 | Joint appendix (Volumes I, II, & III) filed.
Feb 04 2021 | Record requested from the U.S.C.A. 9th Circuit.
Feb 04 2021 | Record from the U.S.C.A. 9th Circuit is electronic and located on Pacer.
Feb 08 2021 | Brief amicus curiae of Retail Litigation Center, Inc. filed.
Feb 08 2021 | Brief amicus curiae of Consumer Data Industry Association filed.
Feb 08 2021 | Brief amicus curiae of Washington Legal Foundation filed.
Feb 08 2021 | Brief amici curiae of eBay Inc., et al. filed.
Feb 08 2021 | Brief amici curiae of The Home Depot, Inc. et al. filed.
Feb 08 2021 | Brief amici curiae of National Association of Manufacturers, et al. filed.
Feb 08 2021 | Brief amicus curiae of Professional Background Screening Association filed.
Feb 08 2021 | Brief amici curiae of The Chamber of Commerce of the United States of America, et al. filed.
Feb 08 2021 | Brief amicus curiae of National Consumer Reporting Association, Inc. filed.
Feb 08 2021 | Brief amicus curiae of The Product Liability Advisory Council, Inc. filed.
Feb 08 2021 | Brief amicus curiae of ACA International filed.
Feb 08 2021 | Brief amicus curiae of the United States in support of neither party filed.
Feb 22 2021 | CIRCULATED.
Mar 03 2021 | Brief of Sergio L. Ramirez not accepted for filing. (March 05, 2021) (Duplicate submission)
Mar 03 2021 | Brief of respondent Sergio L. Ramirez filed. (Distributed)
Mar 05 2021 | Motion of the Acting Solicitor General for leave to participate in oral argument as amicus curiae and for divided argument filed.
Mar 08 2021 | Brief amicus curiae of Electronic Privacy Information Center filed. (Distributed)
Mar 09 2021 | Brief amicus curiae of Public Justice filed. (Distributed)
Mar 10 2021 | Brief amicus curiae of National Association of Consumer Advocates filed. (Distributed)
Mar 10 2021 | Brief amicus curiae of American Association for Justice filed. (Distributed)
Mar 10 2021 | Brief amicus curiae of Electronic Frontier Foundation filed. (Distributed)
Mar 10 2021 | Brief amici curiae of Legal Scholars filed. (Distributed)
Mar 10 2021 | Brief amici curiae of Complex Litigation Law Professors filed. (Distributed)
Mar 10 2021 | Brief amici curiae of Impact Fund, NAACP Legal Defense & Educational Fund, Inc., and 24 Civil Rights Organizations filed. (Distributed)
Mar 10 2021 | Brief amicus curiae of Constitutional Accountability Center filed. (Distributed)
Mar 10 2021 | Brief amicus curiae of Owner-Operator Independent Drivers Association, Inc. filed. (Distributed)
Mar 10 2021 | Brief amici curiae of National Consumer Law Center, et al. filed. (Distributed)
Mar 10 2021 | Brief amici curiae of Public Citizen and Public Citizen Foundation filed. (Distributed)
Mar 10 2021 | Brief amicus curiae of The Committee to Support the Antitrust Laws filed. (Distributed)
Mar 10 2021 | Brief amici curiae of UC Berkeley Center for Consumer Law and Economic Justice, et al. filed. (Distributed)
Mar 19 2021 | Motion of the Acting Solicitor General for leave to participate in oral argument as amicus curiae and for divided argument GRANTED.
Mar 19 2021 | Reply of petitioner TransUnion LLC filed. (Distributed)
Mar 30 2021 | Argued. For petitioner: Paul D. Clement, Washington, D.C. For United States, as amicus curiae: Nicole F. Reaves, Assistant to the Solicitor General, Department of Justice, Washington, D.C. For respondent: Samuel Issacharoff, New York, N.Y.

Firewalls Don't Stop Dragons Podcast
Computers Interviewing Humans (Part 1)

Firewalls Don't Stop Dragons Podcast

Play Episode Listen Later Mar 15, 2021 38:04


Convincing a human to hire you is hard enough. Can you imagine trying to convince a computer? Artificial intelligence is now being used to automate the screening of job candidates, evaluating cognitive ability, vocabulary, and even emotional intelligence. This new "hiretech" promises to weed out the bad applicants and flag the good ones by analyzing not just the substance of answers to interview questions, but also the manner in which you respond - your cadence, your word choices, your tone, your speech patterns, and perhaps even your facial expressions and body language. What could possibly go wrong? We'll discuss this and more today with John Davisson from the Electronic Privacy Information Center. John Davisson is Senior Counsel at EPIC. John works on a variety of appellate litigation and Freedom of Information Act cases. John first came to EPIC in 2015 as a clerk in the Internet Public Interest Opportunities Program. He has previously clerked at Levine Sullivan Koch & Schulz, served as a student attorney in the Civil Rights Section of Georgetown's Institute for Public Representation, and interned at the Appignani Humanist Legal Center. John is a 2016 magna cum laude graduate of Georgetown University Law Center, where he was managing editor of the Georgetown Journal on Poverty Law & Policy, a Georgetown Law Fellow, and an NGO observer to the 9/11 military commission at Naval Station Guantanamo Bay. He worked as a journalist before entering the law and earned his B.A. at Columbia University. John is a member of the New York and District of Columbia bars. Further Info: Electronic Privacy Information Center: https://epic.org/ Become a patron! https://www.patreon.com/FirewallsDontStopDragons Weapons of Math Destruction: https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815

CERIAS Security Seminar Podcast
Global Challenges in Security and Privacy Policy: elections, pandemics, and biometric technologies

CERIAS Security Seminar Podcast

Play Episode Listen Later Sep 30, 2020 56:42


CERIAS 2020 Annual Security Symposium Virtual Event https://ceri.as/symp Closing Keynote Panel Discussion - "Global Challenges in Security and Privacy Policy: elections, pandemics, and biometric technologies" Panelists: - Michel Beaudouin-Lafon, Vice Chair, ACM Technology Policy Council; Member, ACM Europe Council; Professor of Computer Science, Université Paris-Sud - James Hendler, Chair, US Technology Policy Committee; Professor of Computer, Web and Cognitive Sciences, Rensselaer Polytechnic Institute - Barbara Simons, Past President, ACM and ACM 2019 Policy Award Winner; Board of Advisors, US Election Assistance Commission; Chair, Board of Directors, Verified Voting. Moderated by: Lorraine Kisselburgh, Chair, ACM Technology Policy Council; Advisory Board and 2018 Resident Scholar, Electronic Privacy Information Center; Purdue University. About the speaker: Lorraine Kisselburgh is the inaugural Chair of ACM's new global Technology Policy Council, where she oversees technology policy engagement in the US, Europe, and other global regions. At Purdue University, she is a fellow in the Center for Education and Research in Information Security (CERIAS), lecturer in the Discovery Park Center for Entrepreneurship, and former professor of media, technology, and society. Her research focuses on the social implications of emerging technologies, including privacy, ethics, and collaboration; social interaction in technological contexts; and gender and leadership in STEM careers. She has been funded by the National Science Foundation and the Department of Homeland Security, and with colleagues developed platforms for virtual creative collaboration, and a framework to enhance ethical reasoning skills of STEM researchers (recognized by the National Academy of Engineering). In 2018 she was the Scholar-in-Residence at the Electronic Privacy Information Center (EPIC) in Washington, D.C., coordinating the development of the Universal Guidelines for Artificial Intelligence, a framework grounded in human rights protection. She served on the ACM Task Force on Code of Ethics and Professional Conduct from 2017-2018 and ACM's US Technology Policy Committee (USTPC) from 2006-2019, and is a member of the Advisory Board for the Electronic Privacy Information Center. At Purdue she has also been recognized as the inaugural Faculty Scholar in the Butler Center for Leadership, CERIAS Faculty Fellow, a Service Learning Faculty Fellow and Diversity Faculty Fellow, and was the recipient of the Violet Haas Award for her efforts on behalf of women.

The Shift with Doug McKenty
The Thursday Morning Report 2: EPIC

The Shift with Doug McKenty

Play Episode Listen Later Sep 18, 2020 55:36


Enjoy this episode of the Thursday Morning Report as host Doug McKenty discusses internet and electronic privacy with Ginger McCall of the Electronic Privacy Information Center. Find out more at https://epic.org/.

Catalyst Talks
Speaking Truth to Power with Nadine Strossen

Catalyst Talks

Play Episode Listen Later Sep 3, 2020 61:00


Nadine Strossen, the John Marshall Harlan II Professor of Law Emerita at New York Law School and the immediate past President of the American Civil Liberties Union (1991-2008), is a leading expert and frequent speaker/media commentator on constitutional law and civil liberties, who has testified before Congress on multiple occasions. She serves on the advisory boards of the ACLU, Electronic Privacy Information Center, Foundation for Individual Rights in Education (FIRE), Heterodox Academy, and National Coalition Against Censorship. The National Law Journal has named Strossen one of America’s "100 Most Influential Lawyers," and several other publications have named her one of the country’s most influential women. Her many honorary degrees and awards include the American Bar Association’s prestigious Margaret Brent Women Lawyers of Achievement Award (2017). At NYLS’s 2019 commencement, Strossen made history by receiving both the award for outstanding teaching and the award for the best book. When Strossen stepped down as ACLU President, three (ideologically diverse) Supreme Court Justices participated in her farewell/tribute luncheon: Ruth Bader Ginsburg, Antonin Scalia, and David Souter. Strossen’s 2018 book HATE: Why We Should Resist It with Free Speech, Not Censorship has earned praise from ideologically diverse experts, including progressive Harvard University Professor Cornel West and conservative Princeton University Professor Robert George. HATE was selected by Washington University as its 2019 “Common Read” for all incoming students. Her earlier book, Defending Pornography: Free Speech, Sex, and the Fight for Women's Rights, was named a New York Times "notable book" of 1995. Strossen has made thousands of public presentations before diverse audiences around the world, including on more than 500 different campuses and in many foreign countries, and she has appeared on virtually every national TV news program. Her hundreds of publications have appeared in many scholarly and general interest publications. Strossen graduated Phi Beta Kappa from Harvard College and magna cum laude from Harvard Law School. Before becoming a law professor, she practiced law in Minneapolis (her hometown) and New York City. She is a member of the Council on Foreign Relations. Learn more and connect with Nadine here: https://www.nyls.edu/faculty/nadine-strossen/ Nadine’s Book: Hate: Why We Should Resist it With Free Speech, Not Censorship here https://www.barnesandnoble.com/w/hate-nadine-strossen/1127152077

Oral Arguments for the Court of Appeals for the D.C. Circuit
Electronic Privacy Information Center v. Drone Advisory Committee

Oral Arguments for the Court of Appeals for the D.C. Circuit

Play Episode Listen Later May 12, 2020 34:44


Electronic Privacy Information Center v. Drone Advisory Committee

Top of Mind with Julie Rose
Boeing, Off-Facebook Activity, Car Chases

Top of Mind with Julie Rose

Play Episode Listen Later Feb 19, 2020 100:16


Boeing's Year-Long Struggle to Get 737 Max Back in the Air May Soon Be Over (0:33). Guest: Natalie Kitroeff, Reporter, New York Times. It's been nearly a year since Boeing grounded its 737 Max because of two crashes that happened within 5 months and killed 346 people. Since then, a Congressional investigation surfaced internal Boeing emails that showed disdain for federal regulators and efforts to avoid some requirements to certify the 737 Max as safe for flight.
Should Felons Be Allowed to Vote? (18:02). Guest: Christopher Uggen, Regents Professor and Distinguished McKnight Professor of Sociology and Law, University of Minnesota; Fellow of the American Society of Criminology. More than six million Americans won't be able to vote in the presidential election this year. These are people who have been convicted of a felony – they're either in prison or out on parole or probation. But some felons are banned from voting for life. It's all up to the states, and only two states in the country allow all prisoners to vote.
Facebook Has So Much More Info About You Than You Think (34:10). Guest: Alan Butler, General Counsel, Electronic Privacy Information Center in Washington, D.C. Have you noticed that ads on your Facebook feed are perfectly tailored to other stuff you've been doing online? Julie gets that Facebook knows what she is posting and liking on Facebook. How on earth would it know what she recently bought on Amazon? Or what podcast she listened to on the TuneIn app? Or movies she looked at on RottenTomatoes.com? She learned all this recently – and you can too – from a new feature Facebook just debuted on their site that lets you see all the information it's collecting about you when you're off the Facebook page or app. It's a shocking amount of info.
The Apple Seed (50:39). Guest: Sam Payne, Host, The Apple Seed, BYUradio. Sam talks about car chases and when he was involved with one in college.
Why Can't Los Angeles Stop Watching Car Chases? (1:00:49). Guest: Mary Melton, Editor and Writer, Principal, Mary Melton Media. Car chases have become so common and entertaining in LA that watching them has almost become a spectator sport. Friends will tell each other to turn on the TV when the news helicopter is following an intense chase. There's even a Facebook group solely dedicated to posting about live pursuits. But these aren't car chases in movies – these are real people in real danger. And some may be gunning it just because they know they'll get on TV. So should car chases be televised at all?
Ethiopia Is Building a Dam That Could Have Dramatic Political Repercussions (1:19:29). Guest: Richard Kyle Paisley, Director, International Waters Governance Initiative, University of British Columbia. This summer, Ethiopia will begin filling a new reservoir on the Nile River that will ultimately be the size of London. They've built a giant dam on the river that can generate enough electricity to bring the country into the modern era, plus extra to sell to neighboring countries. But there are nearly a dozen countries along the Nile River and many – including Egypt – get virtually all their water from it. Egypt and Ethiopia are right now at such odds over the dam that the US has been involved trying to help the countries settle their dispute without coming to arms.

Top of Mind with Julie Rose
Boeing, Off-Facebook Activity, Car Chases

Top of Mind with Julie Rose

Play Episode Listen Later Feb 19, 2020 100:15


Natalie Kitroeff of The New York Times on Boeing. Alan Butler, Electronic Privacy Information Center, on Off-Facebook Activity. Christopher Uggen, Univ of Minnesota, on felony disenfranchisement. Sam Payne of The Apple Seed on car chases. Mary Melton of Mary Melton Media on LA's obsession with car chases. Richard Paisley of Univ of British Columbia on Ethiopia's Nile Dam.

On the Ground w Esther Iverem
‘ON THE GROUND’ SHOW FOR JUNE 7, 2019: Thom Hartmann on the U.S. and Guns…Lawmakers Question the FBI about White Supremacists…And Are Relations with China Turning Dangerous?

On the Ground w Esther Iverem

Play Episode Listen Later Jun 7, 2019


'ON THE GROUND' FOR JUNE 7, 2019: PART TWO OF THOM HARTMANN ON THE U.S. AND GUNS. At the second in a series of congressional hearings on white supremacist violence in the United States, lawmakers this week questioned the priorities and placement of resources by the FBI and other federal agencies. And as the United States faces the horror of the latest mass shooting in Virginia Beach, a new book traces America's deadly relationship to guns to this country's founding on genocide and slavery. We speak to journalist and author Thom Hartmann. Plus Headlines:
- The Grand Canyon Centennial Protection Act is introduced to ban uranium mining at the world heritage site.
- Impacting the drinking water for more than 5 million people along the Ohio River, the Ohio River Valley Water Sanitation Commission, known as ORSANCO, voted to overturn water pollution control standards, which it has overseen for more than 60 years.
- The Democratic National Committee will not host a climate-specific presidential primary debate and will punish candidates who attend a debate hosted by any other organization.
- At a hearing of the House Oversight and Reform Committee, the FBI and the Department of Homeland Security testified that, even though flawed, facial recognition software is being utilized throughout federal law enforcement and safety agencies.
- The Electronic Privacy Information Center held a conference on AI (artificial intelligence) and human rights.
- Members of the DC Coalition for the Defense of the Cuban Revolution are denouncing the Trump Administration's decision this week to ban most American tourism to Cuba.
- Proposed legislation will decriminalize sex work in the District of Columbia.
- On Tuesday, June 11, a service will be held to celebrate the life of Robert L. White, who was shot by police one year ago in Montgomery County, MD.
- A DC Council hearing was held on Monday to support the survival of the Sankofa Books and Video store, owned by filmmakers Shirikiana and Haile Gerima.
- The DC Caribbean Filmfest is underway until June 12.
- The 27th Annual International Commemoration for African Ancestors who perished in the Middle Passage -- the Maafa -- and those who survived, is June 8 and June 9.
- Gerald Horne on U.S.-China relations, Trump's trip to the UK, Mexico and Sudan.
If you enjoy our grassroots news show, which we provide free online, on podcast and on Pacifica stations and affiliates, please click here or click on the Support-Donate tab on this website to subscribe for as little as $3 a month. We are so grateful for this small but growing amount of monthly crowdsource funding on Patreon. You can also give a one-time donation on PayPal. The show is made possible only by our volunteer energy, our resolve to keep the people's voices on the air, and by support from our listeners. In this new era of fake corporate news, we have to be and support our own media! Thank you!

Boston Calling
Heads Up

Boston Calling

Play Episode Listen Later Apr 26, 2019 26:49


The US Department of Homeland Security is turning to facial recognition technology to keep track of people leaving and entering the US, but privacy advocates have serious concerns. Now, the Electronic Privacy Information Center has obtained documents from US Customs and Border Protection that back up their fears. Also, we visit a hair salon in Boston strictly for women who wear hijab; Thando Hopa makes history by being the first model with albinism on the cover of Vogue magazine; the phenomenon of blackface persists around the globe; and Orthodox Jewish women in New York observe an old tradition in a very modern way. (Image: A facial recognition program is demonstrated during the 2004 Biometrics exhibition and conference in London. Photo by Ian Waldie/Getty Images)

Perspectives
Jerry Kang, Vice Chancellor of Equity, Diversity and Inclusion at UCLA

Perspectives

Play Episode Listen Later Mar 29, 2019 41:20


A transcript of this episode is available here. On this episode of Perspectives, Goodwin's Chairman David Hashmall interviews Jerry Kang, UCLA's first Vice Chancellor for Equity, Diversity and Inclusion. Professor Kang explains the concept of implicit bias and shares his thoughts on ways to improve diversity and inclusion in the legal industry. In addition to his role as the Vice Chancellor for Equity, Diversity and Inclusion, Professor Kang is Distinguished Professor of Law at UCLA School of Law, Distinguished Professor of Asian American Studies (by courtesy), and the inaugural Korea Times — Hankook Ilbo Chair in Korean American Studies and Law. Professor Jerry Kang’s teaching and research interests include civil procedure, race, and communications. On race, he has focused on the nexus between implicit bias and the law, with the goal of advancing a “behavioral realism” that imports new scientific findings from the mind sciences into legal discourse and policymaking. He is also an expert on Asian American communities, and has written about hate crimes, affirmative action, the Japanese American internment, and its lessons for the “War on Terror.” He is a co-author of Race, Rights, and Reparation: The Law and the Japanese American Internment (2d ed. Wolters Kluwer 2013). On communications, Professor Kang has published on the topics of privacy, pervasive computing, mass media policy, and cyber-race (the techno-social construction of race in cyberspace). He is also the author of Communications Law & Policy: Cases and Materials (4th edition Foundation 2012), a leading casebook in the field. During law school, Professor Kang was a supervising editor of the Harvard Law Review and Special Assistant to Harvard University’s Advisory Committee on Free Speech. After graduation, he clerked for Judge William A. Norris of the Ninth Circuit Court of Appeals, then worked at the National Telecommunications and Information Administration on cyberspace policy. He joined UCLA in Fall 1995 and has been recognized for his teaching by being elected Professor of the Year in 1998; receiving the law school’s Rutter Award for Excellence in Teaching in 2007; and being chosen for the highest university-wide distinction, the University Distinguished Teaching Award (The Eby Award for the Art of Teaching) in 2010. At UCLA, he was founding co-Director of the Concentration for Critical Race Studies, the first program of its kind in American legal education. He is also founding co-Director of PULSE: Program on Understanding Law, Science, and Evidence. During 2003-05, Prof. Kang was Visiting Professor at both Harvard Law School and Georgetown Law Center. During the 2013-14 academic year, he was in residence at the Straus Institute for the Advanced Study of Law & Justice at NYU School of Law as a Straus Fellow as well as the David M. Friedman Fellow. Prof. Kang is a member of the American Law Institute, has chaired the American Association of Law School’s Section on Defamation and Privacy, has served on the Board of Directors of the Electronic Privacy Information Center, and has received numerous awards including the World Technology Award for Law and the Vice President’s “Hammer Award” for Reinventing Government. More information about Prof. Kang is available at http://jerrykang.net

Firewalls Don't Stop Dragons Podcast
Guiding the Development of AI

Firewalls Don't Stop Dragons Podcast

Play Episode Listen Later Feb 24, 2019 77:57


Artificial Intelligence (AI) has been around for decades, but has only recently begun to fulfill the promise of truly replicating human-like decision making. The Information Age has generated enormous quantities of data, and modern technology has given us unprecedented power to ingest and analyze this data. AI systems today control airplanes, run financial and insurance systems, and even generate criminal sentencing recommendations. We can use AI to conduct law enforcement and intelligence-gathering operations. AI has even generated audio, video and photos that are completely fake but nearly impossible for a human to detect. Our guest today, Lorraine Kisselburgh, is working with international organizations to define common-sense guidelines for the creation and use of these AI systems, to maximize potential and minimize abuse. Lorraine Kisselburgh (Ph.D., Purdue University) is a Scholar with the Electronic Privacy Information Center in Washington, D.C., a former professor of media, technology, and society, and a visiting lecturer in the Center for Entrepreneurship at Purdue University. She studies the social implications of emerging technologies, including privacy and ethics. Her research has been awarded funding from the National Science Foundation and the Department of Homeland Security, and recognized by the National Academy of Engineering. She currently serves on the executive committee of the Association for Computing Machinery's (ACM) US Technology Policy Committee (USTPC) and was a member of the ACM Task Force on Code of Ethics. Email: lorraine@purdue.edu Website: www.lkisselburgh.net Twitter: @lkisselburgh, @EPICPrivacy Facebook: EPICPrivacy
Further Information:
Universal Guidelines for AI: https://thepublicvoice.org/AI-universal-guidelines/
Electronic Privacy Information Center (EPIC): https://www.epic.org/
"Deep Fake" Obama PSA: https://www.youtube.com/watch?v=cQ54GDm1eL0
Lyrebird fake Trump and Obama voices: https://soundcloud.com/user-535691776/dialog
OpenAI fake news articles: https://arstechnica.com/information-technology/2019/02/researchers-scared-by-their-own-work-hold-back-deepfakes-for-text-ai/
AI Now Institute: https://ainowinstitute.org/
Berkman Klein Center for Internet and Society: https://cyber.harvard.edu/
Data & Society Intelligence and Autonomy Initiative: https://autonomy.datasociety.net/
WEF's AI and Machine Learning: https://www.weforum.org/communities/artificial-intelligence-and-machine-learning

FindLaw's 5in5 Podcast
Trump, Social Media, Milly Rock

FindLaw's 5in5 Podcast

Play Episode Listen Later Dec 27, 2018 6:29


Discussed on this week's 5in5: Electronic Privacy Information Center v. Internal Revenue Service; Otto v. Hearst Communications, Inc.; Mississippi Rising Coalition v. City of Ocean Springs, Mississippi; In Re: A.A.; Milly Rock. Relevant Articles: Trump SCOTUS Potential Proposes Discovery Ban on Cases Under $500K; Hearst Facing Trial Over Photo of Trump Crashing a Wedding; Appeals Court Upholds Flying Flag With Confederate Emblem; Social Media Travel Alert: There’s No First Amendment in the EU; Rapper 2 Milly Sues Fortnite Makers Over Stolen Milly Rock Dance. Legal Limerick of the Week: Our culture's current great fail; is the importance of posts and email; if followers' needs; are fulfilled by their feeds; the youth are not worried about jail. Please subscribe, rate, and review our show if you enjoyed the content we provided and learn more at FindLaw.com. Contact us at podcasts@thomsonreuters.com.

Tech for Good Live
TFGL at The Federation Episode 6 - Functional Sovereignty with Frank Pasquale

Tech for Good Live

Play Episode Listen Later Dec 10, 2018 38:49


This is the final episode in a special series in partnership with The Federation, exploring ethics in tech. In this episode, we talk about 'Functional Sovereignty' with Frank Pasquale. He talks to us about the concepts in his book The Black Box Society: The Secret Algorithms Behind Money & Information. About Frank: Frank Pasquale's scholarship and public speaking translate complex law and policy into accessible writing and presentations. His research agenda focuses on challenges posed to information law by rapidly changing technology. He is presently researching a book on automation and the professions. Frank has testified before the Judiciary Committee of the House of Representatives, appearing with the General Counsels of Google, Microsoft, and Yahoo. He has also presented before a Department of Health & Human Services/Federal Trade Commission Roundtable (on personal health records) and panels of the National Academy of Sciences (on ubiquitous sensor networks and the IoT). Frank has been a Visiting Fellow at Princeton's Center for Information Technology Policy, and a Visiting Professor at Yale Law School and Cardozo Law School. He served as the Schering-Plough Professor in Health Care Regulation and Enforcement at Seton Hall University. Frank is an Affiliate Fellow of Yale Law School's Information Society Project. Frank has been named to the Advisory Boards of the Electronic Privacy Information Center, the Data Competition Institute, Patient Privacy Rights, and the Journal of Legal Education. He has blogged at Concurring Opinions since 2006. His work has been published by the New York Times, Los Angeles Times, Chronicle of Higher Education, Boston Review, and many other media outlets. Hosted by Rebecca Rae-Evans (@rebeccawho), featuring pod regular Greg Ashton (@grgashton). Produced by @paul_yakabovski. Get in touch: Twitter: @techforgoodlive; Instagram: techforgoodlive; Email: hello@techforgood.live

Daily Dot Podcasts
EPIC privacy and the voice of Don Jr. - We're All Gonna Die - Oct. 28, 2018

Daily Dot Podcasts

Play Episode Listen Later Oct 28, 2018


You probably already know that Brett Kavanaugh was credibly accused of sexual assault during his confirmation hearing for the Supreme Court. But did you also know that he’s decided multiple court cases in favor of government surveillance and communicated repeatedly with John Yoo, the so-called “architect of warrantless wiretapping”? Alan Butler, senior counsel at the Electronic Privacy Information Center, joins the gang to unpack it all for you. Plus, we have Gabe Gundacker, the voice of Donald Trump, Jr. and Stephen Miller on Showtime’s Our Cartoon President!

Federal Drive with Tom Temin
Stakeholders push WH harder for policy on artificial intelligence

Federal Drive with Tom Temin

Play Episode Listen Later Jul 30, 2018 7:21


If you believe its backers, artificial intelligence is the best thing since the Big Bang. But it can have side effects, like adversely affecting privacy, accountability and fairness. That's according to the Electronic Privacy Information Center. EPIC is leading a call on the White House to establish policy for artificial intelligence. Jeramie Scott, EPIC's national security counsel, joined Federal Drive with Tom Temin on Federal News Radio to share some ideas.

Ralph Nader Radio Hour
How Central Banks Rigged The World/Facebook Privacy

Ralph Nader Radio Hour

Play Episode Listen Later May 26, 2018 71:45


Ralph talks to former financial insider Nomi Prins about what central banks are and how much power they have to shape our world. Plus, Marc Rotenberg, president of the Electronic Privacy Information Center, tells us how Facebook uses our information, for good or ill.

Enoch Pratt Free Library Podcast
Writers LIVE: Nadine Strossen, HATE: Why We Should Resist It with Free Speech, Not Censorship

Enoch Pratt Free Library Podcast

Play Episode Listen Later May 16, 2018 90:56


Nadine Strossen's new book, HATE, dispels misunderstandings plaguing our perennial debates about "hate speech vs. free speech," showing that the First Amendment approach promotes free speech and democracy, equality, and societal harmony. U.S. law allows government to punish hateful or discriminatory speech in specific contexts when it directly causes imminent serious harm, but government may not punish such speech solely because its message is disfavored, disturbing, or vaguely feared to possibly contribute to some future harm. When U.S. officials formerly wielded such broad censorship power, they suppressed dissident speech, including equal rights advocacy. Likewise, current politicians have attacked Black Lives Matter protests as "hate speech." "Hate speech" censorship proponents stress the potential harms such speech might further: discrimination, violence, and psychic injuries. However, there has been little analysis of whether censorship effectively counters the feared injuries. Citing evidence from many countries, Strossen shows that "hate speech" laws are at best ineffective and at worst counterproductive. Their inevitably vague terms invest enforcing officials with broad discretion; predictably, regular targets are minority views and speakers. Therefore, prominent social justice advocates in the U.S. and beyond maintain that the best way to resist hate and promote equality is not censorship, but rather, vigorous "counterspeech" and activism. Nadine Strossen is professor of constitutional law at New York Law School and the first woman national President of the American Civil Liberties Union, where she served from 1991 through 2008. A frequent speaker on constitutional and civil liberties issues, her media appearances include 60 Minutes, CBS Sunday Morning, Today, Good Morning America, and The Daily Show. Strossen will be in conversation with Danielle Citron & Dwight Ellis. Danielle Keats Citron is the Morton & Sophia Macht Professor of Law at the University of Maryland Francis King Carey School of Law, where she teaches and writes about information privacy, free expression, and civil rights. Professor Citron is an internationally recognized information privacy expert and the author of the book Hate Crimes in Cyberspace (Harvard University Press) and more than 25 law review articles. Professor Citron is an Affiliate Scholar at the Stanford Center on Internet and Society, an Affiliate Fellow at the Yale Information Society Project, and a Senior Fellow at the Future of Privacy Forum, a privacy think tank. Professor Citron has advised federal and state legislators, law enforcement, and international lawmakers on privacy and free speech issues. Professor Citron works closely with tech companies on issues involving online safety and privacy. She serves on Twitter’s Trust and Safety Council and has presented her research at Twitter, Facebook, Google, and Microsoft. In addition, Professor Citron is the Chair of the Electronic Privacy Information Center’s Board of Directors. She is a member of the American Law Institute and serves as an adviser to the American Law Institute’s Restatement Third Information Privacy Principles Project. An experienced media professional in advancing social equity initiatives and strategies in the realms of government, business and education, Dwight Ellis is in his 11th year as a full-time Lecturer in the Communications department of Bowie State University in Maryland and an occasional consultant to the U.S. Department of State.
Prior to his 25 years as vice president with the National Association of Broadcasters, he served as staff chief to Congresswoman Cardiss Collins (D-IL). A graduate of George Mason University Law School, Ellis has a professional record that includes many affiliations, accomplishments, publications and recognitions. Writers LIVE programs are supported in part by The Miss Howard Hubbard Adult Programming Fund. Recorded On: Tuesday, May 8, 2018

WashingTECH Tech Policy Podcast with Joe Miller
Danielle Citron: How to Fight for Cyberstalking Victims (Ep. 115)

WashingTECH Tech Policy Podcast with Joe Miller

Play Episode Listen Later Nov 21, 2017 23:23


Bio

Danielle Keats Citron (@daniellecitron) is the Morton & Sophia Macht Professor of Law at the University of Maryland Francis King Carey School of Law, where she teaches and writes about information privacy, free expression, and civil rights, and was the recipient of the 2005 "Teacher of the Year" award. Professor Citron is an internationally recognized information privacy expert. Her book Hate Crimes in Cyberspace (Harvard University Press 2014) explored the phenomenon of cyber stalking and how law and companies can and should tackle online abuse consistent with our commitment to free speech. The editors of Cosmopolitan included her book in "20 Best Moments for Women in 2014." Professor Citron has published more than 20 law review articles appearing in the California Law Review, Michigan Law Review, Harvard Law Review Forum, Boston University Law Review, Fordham Law Review, George Washington Law Review, Minnesota Law Review, Notre Dame Law Review, Texas Law Review, Washington University Law Review, Southern California Law Review, Washington & Lee Law Review, Wake Forest Law Review, Washington Law Review, and UC Davis Law Review, among other journals. Her opinion pieces have appeared in media outlets such as The New York Times, The Atlantic, Slate, Time, CNN, The Guardian, New Scientist, Ars Technica, and the New York Daily News. In 2015, the United Kingdom's Prospect Magazine named Professor Citron one of the "Top 50 World Thinkers," and the Daily Record named her one of the "Top 50 Most Influential Marylanders."

Professor Citron is an Affiliate Scholar at the Stanford Center on Internet and Society, an Affiliate Fellow at the Yale Information Society Project, and a Senior Fellow at the Future of Privacy, a privacy think tank. She is a technology contributor for Forbes. Professor Citron has advised federal and state legislators, law enforcement, and international lawmakers on privacy issues. She has testified at congressional briefings on the First Amendment implications of laws regulating cyber stalking, sexual violence, and nonconsensual pornography. From 2014 to December 2016, Professor Citron advised California Attorney General Kamala Harris (elected to the U.S. Senate in 2016) on privacy issues, and served as a member of AG Harris's Task Force to Combat Cyber Exploitation and Violence Against Women. In 2011, Professor Citron testified about online hate speech before the Inter-Parliamentary Committee on Anti-Semitism at the House of Commons.

Professor Citron works closely with tech companies on issues involving online safety and privacy. She serves on Twitter's Trust and Safety Council and has presented her research at Twitter, Facebook, Google, and Microsoft. In addition, Professor Citron is an advisor to civil liberties and privacy organizations. She is the Chair of the Electronic Privacy Information Center's Board of Directors, and is on the Advisory Board of the Cyber Civil Rights Initiative, Without My Consent, Future of Privacy, Teach Privacy, SurvJustice, and the International Association of Privacy Professionals Privacy Bar. She is a member of the American Law Institute and serves as an adviser to the American Law Institute's Restatement Third Information Privacy Principles Project. Professor Citron has presented her research at federal agencies, meetings of the National Association of Attorneys General, the National Holocaust Museum, the Wikimedia Foundation, the Anti-Defamation League, major universities, and think tanks.
Professor Citron has been quoted in hundreds of news stories, including in The New York Times, Washington Post, Wall Street Journal, Los Angeles Times, San Francisco Chronicle, Wired, USA Today, HBO's John Oliver Show, HBO's Vice News, Time, Newsweek, the New Yorker, New York Magazine, Cosmopolitan, Barron's, the Financial Times, The Guardian, Vice News, and the BBC. She is a frequent guest on National Public Radio shows, including All Things Considered, WHYY's Radio Times, WNYC's Public Radio International, Minnesota Public Radio, Wisconsin Public Radio, WYPR's Midday with Dan Rodricks, WAMU's The Diane Rehm Show, and Chicago Public Radio.

Resources
Hate Crimes in Cyberspace by Danielle Keats Citron (Harvard University Press, 2014)
Constitutional Coup: Privatization's Threat to the American Republic by Jon D. Michaels (Harvard University Press, 2017)
University of Maryland Carey School of Law

News Roundup

DOJ sues to block AT&T/Time Warner merger
The Department of Justice has sued to block AT&T's proposed $85 billion acquisition of Time Warner. The complaint states that the merger would violate Section 7 of the Clayton Act. It refers to AT&T's objection to Comcast's previous acquisition of NBC/Universal, back in 2011, which was also a so-called vertical merger; AT&T argued then that a "standard bargaining model" could have been used to show the harmful effect the merger would have had on pricing. If the case reaches the Supreme Court, it will be the first vertical merger case to reach the Court since the Ford-Autolite case in 1972. The Trump administration has been vocal about opposing the AT&T/Time Warner merger, and the president himself has railed repeatedly on Twitter about CNN's coverage of his administration. AT&T says it would not rule out using the judicial process to obtain correspondence between the White House and the DOJ that would help illustrate that the DOJ's lawsuit is politically motivated. Brian Fung reports in the Washington Post.

FCC rolls back media regulations and Lifeline, cracks down on robocalls
At its monthly meeting last week, the Federal Communications Commission killed long-standing media ownership rules, including the Newspaper/Broadcast Cross-Ownership rule, which since 1975 had prevented the owner of a TV station from owning a newspaper in the same market. The Commission also eliminated the so-called eight-voices test, which required at least eight independently owned TV stations to remain in a market before any entity could own two stations there. Critics say the rules were cancelled simply to pave the way for Sinclair Broadcasting, which has proposed to acquire Tribune Media for $4 billion. Two high-ranking Democrats--Frank Pallone and Elijah Cummings--are calling for an investigation into Ajit Pai's relationship with Sinclair. The Commission also restricted Lifeline support--the $9.25 per month subsidy that qualified customers use to help pay their internet bills--on tribal lands, and is seeking comment on a proposed plan to cap Lifeline expenditures. The Commission also voted unanimously to crack down on robocallers by giving phone companies more authority to block annoying phone calls from marketers who play a pre-recorded message when you answer the phone. Also at the November meeting, the Commission voted to expand broadcasters' ability to experiment with the Next Generation Broadcast Standard, which will enable closer targeting of viewers for advertising. The Commission also adopted several other rules and proposed rules ostensibly geared toward stimulating broadband infrastructure investment and deployment. In December, FCC Chair Ajit Pai is expected to overturn the net neutrality rules passed during the Obama administration.

Wall Street Journal: Comcast seeks to acquire 21st Century Fox
Comcast has joined a long list of companies, including Verizon, that are seeking to buy 21st Century Fox, according to the Wall Street Journal. Fox is looking to sell off everything except its news and sports assets. Verizon and Disney are also rumored to be potential suitors.

Federal Elections Commission opens rulemaking on political ads
The Federal Elections Commission put out a rulemaking for public comment on revisions to the political ad disclosure rules that would apply them to internet companies. The rulemaking follows allegations of Russian efforts to sway the election in favor of Donald Trump by placing ads and sponsored content on Facebook and Twitter.

China's supercomputers surpass the U.S.
The U.S. has dropped to second place, behind China, in its total number of supercomputers: the U.S. has 144 compared to China's 202. China's count rose by 43 over just the last six months, while the U.S. count dropped by 25.

Philosophical Disquisitions
Episode #31 - Hartzog on Robocops and Automated Law Enforcement

Philosophical Disquisitions

Play Episode Listen Later Oct 28, 2017


In this episode I am joined by Woodrow Hartzog. Woodrow is currently a Professor of Law and Computer Science at Northeastern University (he was the Starnes Professor at Samford University’s Cumberland School of Law when this episode was recorded). His research focuses on privacy, human-computer interaction, online communication, and electronic agreements. He holds a Ph.D. in mass communication from the University of North Carolina at Chapel Hill, an LL.M. in intellectual property from the George Washington University Law School, and a J.D. from Samford University. He previously worked as an attorney in private practice and as a trademark attorney for the United States Patent and Trademark Office. He also served as a clerk for the Electronic Privacy Information Center.

We talk about the rise of automated law enforcement and the virtue of an inefficient legal system. You can download the episode here or listen below. You can also subscribe to the podcast via iTunes or Stitcher (RSS feed is here).

Show Notes
0:00 - Introduction
2:00 - What is automated law enforcement? The 3 Steps
6:30 - What about the robocops?
10:00 - The importance of hidden forms of automated law enforcement
12:55 - What areas of law enforcement are ripe for automation?
17:53 - The ethics of automated prevention vs automated punishment
23:10 - The three reasons for automated law enforcement
26:00 - The privacy costs of automated law enforcement
32:13 - The virtue of discretion and inefficiency in the application of law
40:10 - An empirical study of automated law enforcement
44:35 - The conservation of inefficiency principle
48:40 - The practicality of conserving inefficiency
51:20 - Should we keep a human in the loop?
55:10 - The rules vs standards debate in automated law enforcement
58:36 - Can we engineer inefficiency into automated systems
1:01:10 - When is automation desirable in law?

Relevant Links
Woody's homepage
Woody's SSRN page
'Inefficiently Automated Law Enforcement' by Woodrow Hartzog, Gregory Conti, John Nelson and Lisa Shay
'Obscurity and Privacy' by Woodrow Hartzog and Evan Selinger
Episode 4 with Evan Selinger on Algorithmic Outsourcing and Privacy
Knightscope Robots
Robocop joins Dubai police to fight real life crime

Twenty Thousand Hertz
#10 | Dolls That Talk... And Some That Listen

Twenty Thousand Hertz

Play Episode Listen Later Mar 7, 2017 19:12


Talking dolls have been around for over a century, but the fascination with making inanimate objects seem human reaches back into our early history. What is it about creating a companion that can interact with us, especially for a child, that fascinates us so? And what will that look like as our technology continues to advance? Featuring Carlene Stephens, Curator for the Smithsonian National Museum of American History, and Claire Gartland of the Electronic Privacy Information Center. Check out Defacto Sound, the studios that produced Twenty Thousand Hertz, hosted by Dallas Taylor. Consider supporting the show at donate.20k.org. Episode transcript, music, and credits can be found here: https://www.20k.org/episodes/dolls

Bloomberg Law
Court Debate Sex Offender Internet Rules (Audio) (Correct)

Bloomberg Law

Play Episode Listen Later Feb 27, 2017 12:46


(Bloomberg) -- Corrects guest name. Marc Rotenberg, president of the Electronic Privacy Information Center, and Melissa Arbus Sherry, deputy managing partner of Latham & Watkins' Washington, D.C. office, discuss the Supreme Court case Packingham v. North Carolina, which will decide whether registered sex offenders can be prevented from using social media. They speak with June Grasso, Greg Stohr, and Michael Best on Bloomberg Radio's "Bloomberg Law."

Note to Self
Tech Under Trump

Note to Self

Play Episode Listen Later Nov 30, 2016 31:44


For Hillary Clinton, that private email server was an Achilles heel. For Donald Trump, late night tweet-storms and the echo chamber of the so-called alt-right were rocket fuel. For American voters, the power of technology was inescapable. We've seen the good, bad and ugly of tech this election cycle. And we all have big feelings about it. So Manoush hosted a good old-fashioned call-in for listeners to share their thoughts and fears about our digital lives under a Trump administration. Joining Manoush were Farhad Manjoo, New York Times technology columnist, and Marc Rotenberg, executive director of the Electronic Privacy Information Center. They looked back at how social media shaped the Presidential race, and forward at privacy in the Trump era. We wish we could tell you it's uplifting. But we don't like to lie. The call-in show was part of the United States of Anxiety, a series from WNYC Studios. If you're having big feelings about what the new administration means for the arts, women, the economy or just in general, they've got you covered. Support Note to Self by becoming a member today at NotetoSelfRadio.org/donate.

The United States of Anxiety
Call-In Special: Where Technology Takes Us

The United States of Anxiety

Play Episode Listen Later Nov 16, 2016 57:59


For Hillary Clinton, that private email server was an Achilles heel. For Donald Trump, late night tweet-storms and the echo chamber of the alt-right were rocket fuel. And for American voters, the power of technology was inescapable. Manoush Zomorodi, host of WNYC's Note to Self, has seen the good, bad and ugly of tech this election cycle. Farhad Manjoo, New York Times technology columnist, joins her to look back on how social media shaped the Presidential race, and how companies like Twitter and Facebook are responding as vitriol and fake news flood our feeds. Plus, Marc Rotenberg, executive director of the Electronic Privacy Information Center, breaks down how digital privacy might look under the Trump administration.

DEF CON 23 [Audio] Speeches from the Hacker Convention
Bruce Schneier - Questions and Answers

DEF CON 23 [Audio] Speeches from the Hacker Convention

Play Episode Listen Later Oct 15, 2015


Bruce Schneier Q&A
Bruce Schneier, CTO, Resilient Systems

Bruce Schneier talks security. Come hear about what's new, what's hot, and what's hype in security. NSA surveillance, airports, voting machines, ID cards, cryptography -- he'll talk about what's in the news and what matters. Always a lively and interesting talk.

Bruce Schneier is an internationally renowned security technologist, called a “security guru” by the Economist. He is the author of 12 books—including the New York Times best-seller Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation’s Open Technology Institute, a board member of the Electronic Frontier Foundation, and an advisory board member of the Electronic Privacy Information Center. He is the CTO of Resilient Systems.

Twitter: @schneierblog

At length with Steve Scher. - The House of Podcasts
The Picture That Emerges is Incredibly Creepy- Marc Rotenberg on Civil Liberties in the Information Age

At length with Steve Scher. - The House of Podcasts

Play Episode Listen Later Nov 21, 2014


Turns out on-call car service Uber compiles a lot of data about its customers. They can get a pretty good idea of what you are up to just by the way you use their service. Are you surprised? How much privacy is there in the age of the internet? Marc Rotenberg is a lawyer specializing in privacy and civil rights. He is president of the Electronic Privacy Information Center, a public interest research group involved in privacy litigation and public policy. He also teaches about these issues at Georgetown University Law School in Washington, D.C. Rotenberg says he doesn't use the word "fear" when discussing civil liberties in the information age; it is disempowering, he feels. Rather, there are challenges to be met and problems to solve. Rotenberg sat down with Steve Scher in late November 2014 to discuss those challenges.

Waking Up Orwell
Waking Up Orwell hibernates as Godzilla .gov drones on

Waking Up Orwell

Play Episode Listen Later Mar 1, 2012 29:16


FAA stokes an EPIC petition over domestic drone use. Guest Amie Stepanovich, National Security Counsel at the Electronic Privacy Information Center, shows us the light. - http://epic.org/privacy/drones/

Boston Herald.com: Pressure builds for civilian drone flights at home - http://hosted.ap.org/dynamic/stories/U/US_DRONES_AT_HOME?SITE=MABOH&SECTION=HOME&TEMPLATE=DEFAULTNEWS

BITS CITED
@BeatTheChip on Twitter - follow me and find out all the crap you've been missing.

DIY Government Task List
White House petition on domestic use of drones - get your 2 cents in! http://digg.com/newsbar/story/require_the_faa_to_conduct_a_rulemaking_on_the_privacy_impact_of_domestic_drone_use_the_white_house

SOLD! To the man completely disgusted with the indefinite detention of American citizens without due process... 1 copy of the Washington State legislation punting the NDAA FY 2012 provisos back to the District of Columbia. SEE HERE: http://apps.leg.wa.gov/billinfo/summary.aspx?bill=2759

SEO Podcast Unknown Secrets of Internet Marketing
Do twitter followers affect your SEO - #seopodcast 133

SEO Podcast Unknown Secrets of Internet Marketing

Play Episode Listen Later Feb 9, 2012 16:31


In this episode we discuss Google+, EPIC (the Electronic Privacy Information Center), and SEO. We also discuss the effect that being logged into your Google account can have on your personal SERPs. We answer the question, "Do Twitter followers affect your SEO?" Finally, we discuss Search Engine Optimization in Houston video. The most popular PPC Houston podcast on iTunes. E-Webstyle provides Houston SEO.

KUCI: Privacy Piracy
Mari Frank Interviews Dan Solove, Law Professor and Author

KUCI: Privacy Piracy

Play Episode Listen Later Mar 5, 2009


Daniel J. Solove is a professor of law at the George Washington University Law School. He received his A.B. in English Literature from Washington University, where he was an early selection for Phi Beta Kappa, and his J.D. from Yale Law School. At Yale, Professor Solove won the university-wide scholarly writing Field Prize and served as symposium editor of the Yale Law Journal and as an editor of the Yale Journal of Law & the Humanities. Following law school, Professor Solove clerked for The Honorable Stanley Sporkin, U.S. District Court for the District of Columbia. After practicing law as an associate at the firm of Arnold & Porter in Washington, D.C., Professor Solove began a second clerkship with The Honorable Pamela Ann Rymer, U.S. Court of Appeals for the Ninth Circuit. He began his law teaching career at Seton Hall Law School in 2000. He joined the George Washington University Law School faculty in 2004.

Professor Solove writes in the areas of information privacy law, cyberspace law, law and literature, jurisprudence, legal pragmatism, and constitutional theory. He teaches information privacy law, criminal procedure, criminal law, and law and literature. An internationally known expert in privacy law, Solove has been interviewed and quoted by the media in several hundred articles and broadcasts, including the New York Times, Washington Post, Wall Street Journal, USA Today, Chicago Tribune, the Associated Press, ABC, CBS, NBC, CNN, and NPR.

Professor Solove is the author of Understanding Privacy (Harvard University Press 2008), The Future of Reputation: Gossip, Rumor, and Privacy on the Internet (Yale University Press 2007), The Digital Person: Technology and Privacy in the Information Age (NYU Press 2004), and Information Privacy Law (Aspen Publishing, 3rd ed. 2009), among other titles. His book The Future of Reputation won the 2007 McGannon Award. He has written more than 25 articles, which have appeared in many of the leading law reviews, including the Stanford Law Review, Yale Law Journal, California Law Review, U. Pennsylvania Law Review, NYU Law Review, Michigan Law Review, and U. Chicago Law Review, among others. He has consulted in high-profile privacy law cases, contributed to amicus briefs before the U.S. Supreme Court, and testified before Congress. He serves on the advisory board of the Electronic Privacy Information Center and is on the board of the Law and Humanities Institute. Professor Solove blogs at Concurring Opinions, a blog covering issues of law, culture, and current events; it was selected by the ABA Journal as among the 100 best law blogs.

KUCI: Privacy Piracy
Mari Frank Interviews Chris Hoofnagle, Esq. Privacy Expert

KUCI: Privacy Piracy

Play Episode Listen Later May 1, 2008


Chris Jay Hoofnagle is senior staff attorney to the Samuelson Law, Technology & Public Policy Clinic and senior fellow with the Berkeley Center for Law & Technology. His focus is consumer privacy law. He is admitted to practice law in California and the District of Columbia. Mr. Hoofnagle is the former director of the Electronic Privacy Information Center's West Coast Office in San Francisco, California. He has testified before Congress on privacy and Social Security numbers, identity theft, the Fair Credit Reporting Act, and security breaches, and before the Judicial Conference of the U.S. on public records and privacy. Chris' past work has focused on financial services privacy, gender and privacy, commercial profiling and telemarketing, commercial data brokers, and the privacy implications of emerging technologies, including invasive advertising and Digital Rights Management. He participated in the Amy Boyer case, in which the New Hampshire Supreme Court held that information brokers and private investigators can be liable for the harms caused by selling personal information. His writings on the First Amendment and privacy have appeared in the San Francisco Chronicle, the Knight Ridder News Service, and in law journals at Columbia Law School, Notre Dame Law School, the University of North Carolina School of Law at Chapel Hill, and the University of Illinois. Chris is a graduate of the University of Georgia School of Law and is a member of the California, Maryland, and District of Columbia Bars. During Summer 1999, Chris was a law fellow for the American Association of University Professors. http://www.law.berkeley.edu/clinics/samuelson/

KUCI: Privacy Piracy
Mari Frank Interviews Prof. Dan Solove, author of The Future of Reputation on the Internet and The Digital Person

KUCI: Privacy Piracy

Play Episode Listen Later Jan 17, 2008


Dan Solove, privacy law professor, author of The Future of Reputation. Professor Solove is an associate professor of law at the George Washington University Law School. He received his A.B. in English Literature from Washington University, where he was an early selection for Phi Beta Kappa, and his J.D. from Yale Law School. At Yale, Professor Solove won the university-wide scholarly writing Field Prize and served as symposium editor of the Yale Law Journal and as an editor of the Yale Journal of Law & the Humanities. Following law school, Professor Solove clerked for The Honorable Stanley Sporkin, U.S. District Court for the District of Columbia. After practicing law as an associate at the firm of Arnold & Porter in Washington, D.C., Professor Solove began a second clerkship with The Honorable Pamela Ann Rymer, U.S. Court of Appeals for the Ninth Circuit.

Professor Solove began teaching at Seton Hall Law School in 2000. In the fall semester of 2003, he was a visiting professor at the George Washington University Law School; he permanently joined its faculty in 2004. Professor Solove writes in the areas of information privacy law, cyberspace law, law and literature, jurisprudence, legal pragmatism, and constitutional theory. He teaches information privacy law, criminal procedure, criminal law, and law and literature. An internationally known expert in privacy law, Solove has been interviewed and quoted by the media in over 100 articles and broadcasts, including the New York Times, Washington Post, Chicago Tribune, Toronto Star, Associated Press, ABC News, CBS News, NBC News, CNN, and National Public Radio.

Professor Solove recently published a new book, THE DIGITAL PERSON: TECHNOLOGY AND PRIVACY IN THE INFORMATION AGE (NYU Press, December 2004), which distinguished Berkeley Law School Professor Pamela Samuelson calls "the best exposition thus far about the threat that computer databases containing personal data about millions of Americans poses for information privacy." The book was recommended in the Wall Street Journal's Recommended Reading column in July 2005. Additionally, Solove authored a casebook, INFORMATION PRIVACY LAW (Aspen, January 2003) (with Marc Rotenberg). Solove has published about 20 articles and essays, which have appeared or are forthcoming in many of the leading law reviews, including the Stanford Law Review, Yale Law Journal, California Law Review, University of Pennsylvania Law Review, Michigan Law Review, Duke Law Journal, Minnesota Law Review, and Southern California Law Review, among others. He serves on the advisory board of the Electronic Privacy Information Center and is the president of the Law and Humanities Institute. He has contributed to several amicus briefs before the U.S. Supreme Court.

Daniel J. Solove
Associate Professor of Law
George Washington University Law School
2000 H Street, NW
Washington, DC 20052
Website: http://www.law.gwu.edu/facweb/dsolove/

KUCI: Privacy Piracy
Mari Frank Interviews Chris Hoofnagle, Privacy ExpertAttorney, Samuelson Law, Technology U. of Ca. Berkeley

KUCI: Privacy Piracy

Play Episode Listen Later Jun 7, 2007


Chris Jay Hoofnagle is senior staff attorney to the Samuelson Law, Technology & Public Policy Clinic and senior fellow with the Berkeley Center for Law & Technology. His focus is consumer privacy law. He is admitted to practice law in California and the District of Columbia. Mr. Hoofnagle is the former director of the Electronic Privacy Information Center's West Coast Office in San Francisco, California. He has testified before Congress on privacy and Social Security numbers, identity theft, the Fair Credit Reporting Act, and security breaches, and before the Judicial Conference of the U.S. on public records and privacy. Chris' past work has focused on financial services privacy, gender and privacy, commercial profiling and telemarketing, commercial data brokers, and the privacy implications of emerging technologies, including invasive advertising and Digital Rights Management. He participated in the Amy Boyer case, in which the New Hampshire Supreme Court held that information brokers and private investigators can be liable for the harms caused by selling personal information. His writings on the First Amendment and privacy have appeared in the San Francisco Chronicle, the Knight Ridder News Service, and in law journals at Columbia Law School, Notre Dame Law School, the University of North Carolina School of Law at Chapel Hill, and the University of Illinois. Chris is a graduate of the University of Georgia School of Law and is a member of the California, Maryland, and District of Columbia Bars. During Summer 1999, Chris was a law fellow for the American Association of University Professors.

KUCI: Privacy Piracy
Mari Frank, Esq. Interviews Chris Hoofnagle, Esq. West Coast Director -Electronic Privacy Information Center 7 19 05

KUCI: Privacy Piracy

Play Episode Listen Later Jul 20, 2006


Chris Hoofnagle, West Coast Director of the Electronic Privacy Information Center, has testified before Congress on privacy issues. EPIC is especially concerned with the widespread sale of Americans' sensitive information by information brokers. Chris reveals this and other secrets of privacy invasion by the government and commercial companies.

KUCI: Privacy Piracy
Mari Frank, Esq. interviews Lillie Coney, Assoc. Director, Electronic Privacy Information Center

KUCI: Privacy Piracy

Play Episode Listen Later May 4, 2006


Ms. Coney is Associate Director with the Electronic Privacy Information Center (EPIC). Her issue areas include nanotechnology, surveillance, children's privacy, civil rights and privacy, coalition development, spectrum, census, and electronic voting. Ms. Coney also serves as Coordinator of the recently established National Committee on Voting Integrity (NCVI), which was created in 2003 in response to growing concerns about the reliability of electronic voting systems. Lillie will share the insidious nature of the lack of transparency in voter registration, voting machines, and elections; our liberty depends on confidentiality and privacy in the voting process. She will also describe the dangers of the myriad types of surveillance in our modern society.