Much has been made of the promise of and concerns about recent AI technical advances, and of the guardrails that might be considered to reduce the downside of opaque, quasi-algorithmic outcomes associated with current large language model approaches. This panel will examine the current AI regulatory debate and explore how current and proposed corporate and governmental AI is being shaped and normed to produce outputs that reinforce “mainstream” economic, ideological, and operational norms, with the risk that vested interests will define those norms. From national security applications, autonomous vehicle safety decisions, economic predictions, Pareto-optimal and social benefit determinations, and health care deployment to how you are entertained and educated, can we control what most of us can’t understand?

Featuring:
Mr. Stewart A. Baker, Of Counsel, Steptoe & Johnson LLP
Mr. Christopher Ekren, Global Technology Counsel, Sony Corporation of America
Ms. Victoria Luxardo Jeffries, Director, United States Public Policy, Meta
Prof. John C. Yoo, Emanuel S. Heller Professor of Law, University of California at Berkeley; Nonresident Senior Fellow, American Enterprise Institute; Visiting Fellow, Hoover Institution
Moderator: Hon. Stephen Alexander Vaden, Judge, United States Court of International Trade
It is hard to find a discussion of artificial intelligence (AI) these days that does not include concerns about AI systems’ potential bias against racial minorities and other identity groups. Facial recognition, lending, and bail determinations are just a few of the domains in which this issue arises. Laws are being proposed and even enacted to address these concerns. But is the problem properly understood? If it is real, do we need new laws beyond the anti-discrimination laws that already govern human decision makers, hiring exams, and the like?

Unlike some humans, AI models have no malevolent biases or intention to discriminate. Are they superior to human decision-making in that sense? Nonetheless, it is well established that AI systems can have a disparate impact on various identity groups. Because AI learns by detecting correlations and other patterns in real-world datasets, are disparate impacts inevitable, short of requiring AI systems to produce proportionate results? Would prohibiting certain kinds of correlations degrade the accuracy of AI models? For example, in a bail determination system, would an AI model that learns that men are more likely to be repeat offenders produce less accurate results if it were prohibited from taking gender into account?

Featuring:
- Stewart A. Baker, Partner, Steptoe & Johnson LLP
- Nicholas Weaver, Researcher, International Computer Science Institute and Lecturer, UC Berkeley
- [Moderator] Curt Levey, President, Committee for Justice

Visit our website – www.RegProject.org – to learn more, view all of our content, and connect with us on social media.
This symposium was co-sponsored by the Regulatory Transparency Project and took place at the Antonin Scalia Law School on February 2, 2018.

Regulators in Cyberia

Authors:
Stewart A. Baker, Partner, Steptoe & Johnson
Justin ‘Gus’ Hurwitz, Assistant Professor of Law and Co-Director, Space, Cyber, and Telecom Law Program, University of Nebraska College of Law

Discussants:
Alan Butler, Senior Privacy Counsel, Electronic Privacy Information Center
Brenda Leong, Senior Counsel and Director of Strategy, Future of Privacy Forum

Moderator:
Sandra Aistars, Senior Scholar and Director, Copyright Research & Policy, Center for the Protection of Intellectual Property; Clinical Professor, George Mason University Antonin Scalia Law School

* * * * *

As always, the Federalist Society takes no position on particular legal or public policy issues; all expressions of opinion are those of the speakers.
This CSIS podcast series, funded by FireEye, interviews cybersecurity experts from different sectors to explore the effectiveness of cyber red lines and the different roles the government and private sector play in cybersecurity policy. Stewart A. Baker is a partner in the Washington office of Steptoe & Johnson LLP. He served as the first Assistant Secretary for Policy at the Department of Homeland Security under President George W. Bush from 2005 to 2009, and before that as General Counsel of the NSA from 1992 to 1994.
How do government officials decide key homeland security questions? How do those decisions affect our day-to-day lives? In Skating on Stilts: Why We Aren’t Stopping Tomorrow’s Terrorism (Hoover Institution, 2010), Stewart Baker, a former senior official at the Department of Homeland Security, takes us behind the scenes of government homeland security decision making. Baker, who was DHS’s first Assistant Secretary for Policy, examines some of the key security threats the US faces and some of our greatest challenges in meeting them. While Baker has a healthy respect for the abilities of outside forces that would do us harm, he also recognizes that some of the greatest challenges to providing security come from our allies, and from ourselves. In addition, while many people tune out when they hear acronyms like CFIUS or VWP, Baker shows what those acronyms mean and their implications for our safety and security. Read all about it, and more, in Baker’s informative new book. Please become a fan of “New Books in Public Policy” on Facebook if you haven’t already.