POPULARITY
The Atlantic published the entire Signal conversation about strikes on Houthi militants in Yemen between multiple administration officials and, mistakenly, Atlantic editor-in-chief Jeffrey Goldberg. Paul Rosenzweig, the former deputy assistant secretary for policy at the Department of Homeland Security under President George W. Bush, joins us to give some context on the scale of the Signalgate scandal and what it would mean under any other president. And in headlines: Trump announced 25% tariffs on imported cars, the Supreme Court upheld requirements to regulate ghost guns, and a Democrat defied the odds and flipped a seat in the Pennsylvania State Senate.

Show Notes:
Check out Paul's story – https://tinyurl.com/3nn8zr3j
Subscribe to the What A Day Newsletter – https://tinyurl.com/3kk4nyz8
Support victims of the fire – votesaveamerica.com/relief
What A Day – YouTube – https://www.youtube.com/@whatadaypodcast
Follow us on Instagram – https://www.instagram.com/crookedmedia/
For a transcript of this episode, please visit crooked.com/whataday
President Trump's executive action granting clemency to all of the January 6th insurrectionists – violent and non-violent alike – has been met with concern by legal experts and people who have been studying and reporting on militia groups like the Oath Keepers and the Proud Boys for years. Kara speaks with Dr. Amy Cooter, director of research at the Center on Terrorism, Extremism, and Counterterrorism at the Middlebury Institute of International Studies and author of Nostalgia, Nationalism and the US Militia Movement; investigative reporter Tess Owen, who has covered violent extremist groups, including the J6 protesters, extensively; and Paul Rosenzweig, former Deputy Assistant Secretary for Policy at the Department of Homeland Security under George W. Bush, who specializes in issues relating to domestic and homeland security. They discuss the message the pardons send to violent militias, the impact of social media (and Elon Musk) on far-right extremism, and whether Trump has the authority to deputize these groups, especially on the border. Questions? Comments? Email us at on@voxmedia.com or find us on Instagram and TikTok @onwithkaraswisher. Learn more about your ad choices. Visit podcastchoices.com/adchoices
With a sweep of his pen, President Trump is issuing executive orders, changing the federal government, fulfilling campaign promises and settling scores. It's only been two days, and Trump has already withdrawn, again, from the Paris Climate Agreement and the World Health Organization. He has also ordered that the Gulf of Mexico be renamed on all government maps and documents, and has threatened tariffs on Mexico, China, and now Russia. John Sawers, who formerly led Britain's spy agency MI6 and served as the UK's Ambassador to the UN, joins Christiane to discuss these security challenges and the inner workings of foreign policy. Also on today's show: CNN Senior Global Affairs Analyst Bianna Golodryga; Husam Zomlot, Head of the Palestinian Mission to the UK; Paul Rosenzweig, Former Deputy Assistant Secretary, Homeland Security Department / Founding member, Federalist Society. Learn more about your ad choices. Visit podcastchoices.com/adchoices
MSNBC's Ari Melber hosts "The Beat" on Tuesday, December 17, and reports on Donald Trump's friends and enemies and the Democratic Party's path forward. Plus, listen to Melber's interview with director and producer Ron Howard. Paul Rosenzweig and Frank Bowman join.
From July 10, 2018: #AbolishICE is the hashtag that has proliferated all over Twitter. Anger over the family separation policy of the Trump administration has many people doubting whether the agency that does interior immigration enforcement is up to a humane performance of its task. Paul Rosenzweig, former policy guru at DHS where he supervised immigration matters, and Carrie Cordero, who has been actively engaged on the subject recently, joined Benjamin Wittes to discuss the substance of our immigration laws. Would abolishing ICE actually make a difference, or would it just be renaming the problem with three other letters? To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
"We have a very real opportunity to bring this conflict to an end" was the assessment of US envoy Amos Hochstein, who is bringing a ceasefire proposal to Lebanon and Israel. Both the Lebanese government and Hezbollah are said to have responded positively. In the meantime, Israel has been intensifying its air strikes, even inside Beirut. Nabih Bulos is Middle East Bureau Chief for the LA Times, based in Lebanon's capital, and he joins the show from there. Also in today's show: former US Ambassador to Israel and Egypt Daniel Kurtzer; Dan Osborn, former independent candidate for US Senate, Nebraska; Paul Rosenzweig, former Deputy Assistant Secretary, Homeland Security Department. Learn more about your ad choices. Visit podcastchoices.com/adchoices
There's a whiff of Auld Lang Syne about episode 500 of the Cyberlaw Podcast, since after this it will be going on hiatus for some time and maybe forever. (Okay, there will be an interview with Dmitri Alperovich about his forthcoming book, but the news commentary is done for now.) Perhaps it's appropriate, then, for our two lead stories to revive a theme from the 90s – who's better, Microsoft or Linux? Sadly for both, the current debate is over who's worse, at least for cybersecurity. Microsoft's sins against cybersecurity are laid bare in a report of the Cyber Safety Review Board, Paul Rosenzweig reports. The Board digs into the disastrous compromise of a Microsoft signing key that gave China access to US government email. The language of the report is sober, and all the more devastating because of its restraint. Microsoft seems to have entirely lost the security focus it so famously pivoted to twenty years ago. Getting it back will require a focus on security at a time when the company feels compelled to focus relentlessly on building AI into its offerings. The signs for improvement are not good. The only people who come out of the report looking good are the State Department security team, whose mad cyber skillz deserve to be celebrated – not least because they've been questioned by the rest of government for decades. With Microsoft down, you might think open source would be up. Think again, Nick Weaver tells us. The strategic vulnerability of open source, as well as its appeal, is that anyone can contribute code to a project they like. And in the case of the XZ backdoor, anybody did just that. A well-organized, well-financed, and knowledgeable group of hackers cajoled and bullied their way into a contributing role on an open source project that implemented various compression algorithms. Once in, they contributed a backdoored feature that used public key encryption to ensure access only to the authors of the feature. 
It was weeks from being in every Linux distro when a Microsoft employee discovered the implant. But the people who almost pulled this off seemed well-practiced and well-resourced. They've likely done this before, and will likely do it again, leaving all open source projects facing the same strategic vulnerability. It wouldn't be the Cyberlaw Podcast without at least one Baker rant about political correctness. The much-touted bipartisan privacy bill threatening to sweep to enactment in this Congress turns out to be a disaster for anyone who opposes identity politics. To get liberals on board with a modest amount of privacy preemption, I charge, the bill would effectively overturn the Supreme Court's Harvard admissions decision and impose race, gender, and other quotas on a host of other activities that have avoided them so far. Adam Hickey and I debate the language of the bill. Why would the Republicans who control the House go along with this? I offer two reasons: first, business lobbyists want both preemption and a way to avoid charges of racial discrimination, even if it means relying on quotas; second, maybe Sen. Alan Simpson was right that the Republican Party really is the Stupid Party. Nick and I turn to a difficult AI story, about how Israel is using algorithms to identify and kill even low-level Hamas operatives in their homes. Far more than killer robots, this use of AI in war is likely to sweep the world. Nick is critical of Israel's approach; I am less so. But there's no doubt that the story forces a sober assessment of just how personal and how ugly war will soon be. Paul takes the next story, in which Microsoft serves up leftover “AI gonna steal yer election” tales that are not much different than all the others we've heard since 2016 (when straight social media was the villain). The bottom line: China is using AI in social media to advance its interests and probe US weaknesses, but it doesn't seem to be having much effect. 
Nick answers the question, “Will AI companies run out of training data?” with a clear viewpoint: “They already have.” He invokes the Habsburgs to explain what's going wrong. We also touch on the likelihood that demand for training data will lead to copyright liability, or that hallucinations will lead to defamation liability. Color me skeptical. Paul comments on two US quasi-agreements, with the UK and the EU, on AI cooperation. And Adam breaks down the FCC's burst of initiatives celebrating the arrival of a Democratic majority on the Commission for the first time since President Biden's inauguration. The commission is now ready to move out on net neutrality, on regulating cars as oddly shaped phones with benefits, and on SS7 security. Faced with a security researcher who responded to a hacking attack by taking down North Korea's internet, Adam acknowledges that maybe my advocacy of hacking back wasn't quite as crazy as he thought when he was in government. In Cyberlaw Podcast alumni news, I note that Paul Rosenzweig has been appointed an advocate at the Data Protection Review Court, where he'll be expected to channel Max Schrems. And Paul offers a summary of what has made the last 500 episodes so much fun for me, for our guests, and for our audience. Thanks to you all for the gift of your time and your tolerance!
Last May, Microsoft announced that a Chinese state-sponsored hacking group, Volt Typhoon, appeared to be targeting U.S. critical infrastructure and entities abroad in part through establishing a presence in a malware-infected network, or botnet, consisting of old devices located in the United States. At the end of January, the Justice Department announced it had removed the botnet from hundreds of American devices. Cybersecurity experts Timothy Edgar and Paul Rosenzweig both wrote articles for Lawfare discussing the Volt Typhoon intrusion and the U.S. response. But the authors take away very different lessons from the intrusion. Edgar argued that although the removal of the botnet was a success in terms of cybersecurity, the legal theory the government relied on for conducting this operation has dangerous privacy implications. Rosenzweig, on the other hand, contended that the Volt Typhoon breach illuminates flawed assumptions at the core of the U.S. cybersecurity strategy, which he says must be reexamined. Lawfare Research Fellow Matt Gluck spoke with Edgar and Rosenzweig about why the Volt Typhoon intrusion and the U.S. response that followed matter for the future of U.S. cybersecurity and privacy, how the government should weigh security and privacy when responding to cyber intrusions, whether nuclear conflict is a good analogy for cyber conflict, and much more. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
From November 10, 2018: With the firing of Jeff Sessions and his replacement with former U.S. attorney Matthew Whitaker, all eyes this week are focused on whether Special Counsel Robert Mueller's investigation of Russian interference in the 2016 election and possible coordination between the Trump campaign and the Russians will get to run its full course. But even before the Sessions firing, Benjamin Wittes and Paul Rosenzweig had inquiries into the presidency on their minds. On Tuesday morning, they sat down to discuss Paul's recent 12-part lecture series on presidential investigations released through the online educational platform The Great Courses. They talked about how Paul structured the lecture series, Paul's own experience on Independent Counsel Ken Starr's team investigating the Clinton White House, and the course's relevance to the Mueller investigation. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
We begin this episode with Paul Rosenzweig describing major progress in teaching AI models to do text-to-speech conversions. Amazon flagged its new model as having “emergent” capabilities in handling what had been serious problems – things like speaking with emotion, or conveying foreign phrases. The key is the size of the training set, but Amazon was able to spot the point at which more data led to unexpected skills. This leads Paul and me to speculate that training AI models to perform certain tasks eventually leads the model to learn “generalization” of its skills. If so, the more we train AI on a variety of tasks – chat, text to speech, text to video, and the like – the better AI will get at learning new tasks, as generalization becomes part of its core skill set. It's lawyers holding forth on the frontiers of technology, so take it with a grain of salt. Cristin Flynn Goodwin and Paul Stephan join Paul Rosenzweig to provide an update on Volt Typhoon, the Chinese APT that is littering Western networks with the equivalent of logical land mines. Actually, it's not so much an update on Volt Typhoon, which seems to be aggressively pursuing its strategy, as on the hyperventilating Western reaction to Volt Typhoon. There's no doubt that China is playing with fire, and that the United States and other cyber powers should be liberally sowing similar weapons in Chinese networks. But the public measures adopted by the West do not seem likely to effectively defeat or deter China's strategy. The group is less impressed by the New York Times' claim that China is pursuing a dangerous electoral influence campaign on U.S. social media platforms. The Russians do it better, Paul Stephan says, and even they don't do it well, I argue. Paul Rosenzweig reviews the House China Committee report alleging a link between U.S. venture capital firms and Chinese human rights abuses. 
We agree that Silicon Valley VCs have paid too little attention to how their investments could undermine the system on which their billions rest, a state of affairs not likely to last much longer. Paul Stephan and Cristin bring us up to date on U.S. efforts to disrupt Chinese and Russian hacking operations. We will be eagerly waiting for resolution of the European fight over Facebook's subscription fee and websites' move to “Pay or Consent” privacy terms. I predict that Eurocrats' hypocrisy will be tested by an effort to rule for elite European media sites, which already embrace “Pay or Consent,” while ruling against Facebook. Paul Rosenzweig is confident that European hypocrisy is up to the task. Cristin and I explore the latest White House enthusiasm for software security liability. Paul Stephan explains the flap over a UN cybercrime treaty, which is and should be stalled in Turtle Bay for the next decade or more. Cristin also covers a detailed new Google TAG report on commercial spyware. And in quick hits:

House Republicans tried and failed to find common ground on renewal of FISA Section 702
I recommend Goody-2, the “World's Most Responsible” AI Chatbot
Dechert has settled a wealthy businessman's lawsuit claiming that the law firm hacked his computer
Imran Khan is using AI to make impressively realistic speeches about his performance in Pakistani elections
The Kids Online Safety Act secured sixty votes in the U.S. Senate, but whether the House will act on the bill remains to be seen

Download 492nd Episode (mp3)

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! 
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Stewart returns in one piece from his Canadian Ski Marathon. Paul Rosenzweig discusses AI text-to-speech advancements and emergent capabilities. Cristin Flynn Goodwin and Paul Stephan evaluate the Western reaction to Volt Typhoon and assess China's influence operations in US elections relative to Russia's. The group discusses digital privacy in Europe, the debate over software liability, and Stewart finds an unlikely ally in the EFF in opposition to a UN Cybercrime Treaty.
Paul Rosenzweig brings us up to date on the debate over renewing Section 702, highlighting the introduction of the first credible “renew and reform” measure by the House Intelligence Committee. I'm hopeful that a similarly responsible bill will come soon from Senate Intelligence and that some version of the two will be adopted. Paul is less sanguine. And we all recognize that the wild card will be House Judiciary, which is drafting a bill that could change the renewal debate dramatically. Jordan Schneider reviews the results of the Xi-Biden meeting in San Francisco and speculates on China's diplomatic strategy in the global debate over AI regulation. No one disagrees that it makes sense for the U.S. and China to talk about the risks of letting AI run nuclear command and control; perhaps more interesting (and puzzling) is China's interest in talking about AI and military drones. Speaking of AI, Paul reports on Sam Altman's defenestration from OpenAI and soft landing at Microsoft. Appropriately, Bing Image Creator provides the artwork for the defenestration but not the soft landing. Nick Weaver covers Meta's not-so-new policy on political ads claiming that past elections were rigged. I cover the flap over TikTok videos promoting Osama bin Laden's letter justifying the 9/11 attack. Jordan and I discuss reports that Applied Materials is facing a criminal probe over shipments to China's SMIC. Nick reports on the most creative ransomware tactic to date: compromising a corporate network and then filing an SEC complaint when the victim doesn't disclose it within four days. This particular gang may have jumped the gun, he reports, but we'll see more such reports in the future, and the SEC will have to decide whether it wants to foster this business model. I cover the effort to disclose a bitcoin wallet security flaw without helping criminals exploit it. 
And Paul recommends the week's long read: The Mirai Confession – a detailed and engaging story of the kids who invented Mirai, foisted it on the world, and then worked for the FBI for years, eventually avoiding jail, probably thanks to an FBI agent with a paternal streak.

Download 482nd Episode (mp3)

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!

The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Today's episode of the Cyberlaw Podcast begins as it must with Saturday's appalling Hamas attack on Israeli civilians. I ask Adam Hickey and Paul Rosenzweig to comment on the attack and what lessons the U.S. should draw from it, whether in terms of revitalized intelligence programs or the need for workable defenses against drone attacks. In other news, Adam covers the disturbing prediction that the U.S. and China have a fifty percent chance of armed conflict in the next five years—and the supply chain consequences of increasing conflict. Meanwhile, Western companies who were hoping to sit the conflict out may not be given the chance. Adam also covers the related EU effort to assess risks posed by four key technologies. Paul and I share our doubts about the Red Cross's effort to impose ethical guidelines on hacktivists in war. Not that we needed to; the hacktivists seem perfectly capable of expressing their doubts on their own. The Fifth Circuit has expanded its injunction against the U.S. government encouraging or coercing social media to suppress “disinformation.” Now the prohibition covers CISA as well as the White House, FBI, and CDC. Adam, who oversaw FBI efforts to counter foreign disinformation, takes a different view of the facts than the Fifth Circuit. In the same vein, we note a recent paper from two Facebook content moderators who say that government jawboning of social media really does work (if you had any doubts). Paul comments on the EU vulnerability disclosure proposal and the hostile reaction it has attracted from some sensible people. Adam and I find value in an op-ed that explains the weirdly warring camps, not over whether to regulate AI but over how and why. And, finally, Paul mourns yet another step in Apple's step-by-step surrender to Chinese censorship and social control. You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. 
Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Former Department of Homeland Security official Paul Rosenzweig joins Niki in the studio to share his thoughts on AI, hacking, and national security. They dish some stories from the past and take a look at the future of big tech in China, talk AI hallucinations, and Paul shares his thoughts on what Congress should be doing to respond to the rapidly evolving national security landscape.
Follow Paul on Twitter
Learn more about Red Branch Consulting
Listen to Paul on The Federalist Society Podcast
Follow Niki on LinkedIn
It's surely fitting that a decision released on July 4 would set off fireworks on the Cyberlaw Podcast. The source of the drama was U.S. District Court Judge Terry Doughty's injunction prohibiting multiple federal agencies from leaning on social media platforms to suppress speech the agencies don't like. Megan Stifel, Paul Rosenzweig, and I could not disagree more about the decision, which seems quite justified to me, given the aggressive White House communications telling the platforms whose speech the government wanted suppressed. Paul and Megan argue that it's not censorship, that the judge got standing law wrong, and that I ought to invite a few content moderation aficionados on for a full hour episode on the topic. That all comes after a much less lively review of recent stories on artificial intelligence. Sultan Meghji downplays OpenAI's claim that they've taken a step forward in preventing the emergence of a “misaligned”—in other words, evil—superintelligence. We note what may be the first real-life “liar's dividend” from a deep-faked voice. Even more interesting is the prospect that large language models will end up poisoning themselves by consuming their own waste—that is, by being trained on recent internet discourse that includes large volumes of text created by earlier models. That might stall progress in AI, Sultan suggests. But not, I predict, before government regulation tries to do the same; as witness, New York City's law requiring companies that use AI in hiring to disclose all the evidence needed to sue them for discrimination. Also vying to load large language models with rent-seeking demands are Big Content lawyers. Sultan and I try to separate the few legitimate intellectual property claims against AI from the many bogus ones. I channel a recent New York gubernatorial candidate in opining that the rent-seeking is too damn high. Paul dissects China's most recent and self-defeating effort to deter the West from decoupling from Chinese supply chains. 
It looks as though China was so eager to punish the West that it rolled out supply chain penalties before it had the leverage to make the punishment stick. Speaking of self-defeating Chinese government policies, it looks as though the government's two-minute hate directed at China's fintech giants is coming to an end. Sultan walks us through the wreckage of the American cryptocurrency industry, pausing to note the executive exodus from Binance and the end of the view that cryptocurrency could be squared with U.S. regulatory authorities. Not in this administration, and maybe not in any; an outcome that will delay financial modernization here for years. I renew my promise to get Gus Coldebella on the podcast to see if he can turn the tide of negativism. In quick hits and updates: There's an effort afoot to amend the National Defense Authorization Act to prevent American government agencies, and only American government agencies, from buying data available to everyone else. We are skeptical that it will pass. The EU and the U.S. have reached a (third) transatlantic data transfer deal, and just in time for Meta, which was facing a new set of competition attacks on its data protection compliance. And Canada, which already looks ineffectual for passing a link tax that led Facebook and Google to simply drop links to Canadian media, now looks ineffectual and petty, announcing it has pulled its paltry advertising budget from Facebook. Oh, and last year's social media villain is this year's social media hero, at least on the left, as Meta launches Threads and threatens Twitter's hopes for a recovery. Download 467th Episode (mp3) You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. 
Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
From April 19, 2019: A redacted version of the 448-page Mueller report dropped yesterday, and there's a lot to say about it. In this Special Edition of the Lawfare Podcast, Bob Bauer, Susan Hennessey, Mary McCord, Paul Rosenzweig, Charlie Savage and Benjamin Wittes discuss what the report says about obstruction and collusion, Mueller's legal theories and what this all means for the president and the presidency.Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
In the past several months, President Biden released a new national cybersecurity strategy. As part of that strategy, the Administration says that it will seek to “Shape Market Forces to Drive Security and Resilience – We will place responsibility on those within our digital ecosystem that are best positioned to reduce risk and shift the consequences of poor cybersecurity away from the most vulnerable in order to make our digital ecosystem more trustworthy, including by: . . . Shifting liability for software products and services to promote secure development practices.” The concept of software liability has been the subject of much debate since it was first suggested more than a decade ago. With the new national strategy, that debate becomes much more salient. In this webinar, cybersecurity experts will debate both sides of the question.
Featuring:
- Prof. Paul Rosenzweig, Professorial Lecturer in Law, The George Washington University
- Prof. Jamil N. Jaffer, Founder and Executive Director of the National Security Institute, Antonin Scalia Law School, George Mason University
- [Moderator] Robert Strayer, Executive Vice President of Policy, Information Technology Industry Council
Visit our website – www.RegProject.org – to learn more, view all of our content, and connect with us on social media.
This episode of the Cyberlaw Podcast kicks off with a spirited debate over AI regulation. Mark MacCarthy dismisses AI researchers' recent call for attention to the existential risks posed by AI; he thinks it's a sci-fi distraction from the real issues that need regulation—copyright, privacy, fraud, and competition. I'm utterly flummoxed by the determination on the left to insist that existential threats are not worth discussing, at least while other, more immediate regulatory proposals have not been addressed. Mark and I cross swords about whether anything on his list really needs new, AI-specific regulation when Big Content is already pursuing copyright claims in court, the FTC is already primed to look at AI-enabled fraud and monopolization, and privacy harms are still speculative. Paul Rosenzweig reminds us that we are apparently recapitulating a debate being held behind closed doors in the Biden administration. Paul also points to potentially promising research from OpenAI on reducing AI hallucination. Gus Hurwitz breaks down the week in FTC news. Amazon settled an FTC claim over children's privacy and another over security failings at Amazon's Ring doorbell operation. The bigger story is the FTC's effort to issue a commercial death sentence on Meta's children's business for what looks to Gus and me more like a misdemeanor. Meta thinks, with some justice, that the FTC is looking for an excuse to rewrite the 2019 consent decree, something Meta says only a court can do. Paul flags a batch of China stories: China's version of Bloomberg has begun quietly limiting the information about China's economy that is available to overseas users. TikTok is accused of storing influencers' sensitive financial information in China, contrary to its promises. Malaysia won't ban Huawei from its 5G network. The former Harvard chair convicted of lying about taking Chinese money has been sentenced to just two days in prison. 
And another professor charged and then exonerated of commercial espionage has won the right to sue the FBI for his arrest. Gus tells us that Microsoft has effectively lost a data protection case in Ireland and will face a fine of more than $400 million. I seize the opportunity to plug my upcoming debate with Max Schrems over the Privacy Framework. Paul is surprised to find even the State Department rising to the defense of Section 702 of the Foreign Intelligence Surveillance Act (“FISA”). Gus asks whether automated tip suggestions should be condemned as “dark patterns” and whether the FTC needs to investigate the New York Times's stubborn refusal to let him cancel his subscription. He also previews California's impending Journalism Preservation Act. Download 461st Episode (mp3) You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Maury Shenk opens this episode with an exploration of three efforts to overcome notable gaps in the performance of large language AI models. OpenAI has developed a tool meant to address the models' lack of explainability. It uses, naturally, another large language model to identify what makes individual neurons fire the way they do. Maury is skeptical that this is a path forward, but it's nice to see someone trying. The other effort, Anthropic's creation of an explicit “constitution” of rules for its models, is more familiar and perhaps more likely to succeed. We also look at the use of “open source” principles to overcome the massive cost of developing new models and then training them. That has proved to be a surprisingly successful fast-follower strategy thanks to a few publicly available models and datasets. The question is whether those resources will continue to be available as competition heats up. The European Union has to hope that open source will succeed, because the entire continent is a desert when it comes to big institutions making the big investments that look necessary to compete in the field. Despite having no AI companies to speak of (or maybe because of that), the EU is moving forward with its AI Act, an attempt to do for AI what the EU did for privacy with GDPR. Maury and I doubt the AI Act will have the same impact, at least outside Europe. Partly that's because Europe doesn't have the same jurisdictional hooks in AI as in data protection. It is essentially regulating what AI can be sold inside the EU, and companies are quite willing to develop their products for the rest of the world and bolt on European use restrictions as an afterthought. In addition, the AI Act, which started life as a coherent if aggressive policy about high-risk models, has collapsed into a welter of half-thought-out improvisations in response to the unanticipated success of ChatGPT. 
Anne-Gabrielle Haie is more friendly to the EU's data protection policies, and she takes us through a group of legal rulings that will shape liability for data protection violations. She also notes the potentially protectionist impact of a recent EU proposal to say that U.S. companies cannot offer secure cloud computing in Europe unless they partner with a European cloud provider. Paul Rosenzweig introduces us to one of the U.S. government's most impressive technical achievements in cyberdefense—tracking down, reverse engineering, and then killing Snake, one of Russia's best hacking tools. Paul and I chew over China's most recent self-inflicted wound in attracting global investment—the raid on Capvision. I agree that it's going to discourage investors who need information before they part with their cash. But I offer a lukewarm justification for China's fear that Capvision's business model encourages leaks. Maury reviews Chinese tech giant Baidu's ChatGPT-like search add-on. I ask whether we can ever trust models like ChatGPT for search, given their love affair with plausible falsehoods. Paul reviews the technology that will be needed to meet what's looking like a national trend to require social media age verification. Maury reviews the ruling upholding the lawfulness of the UK's interception of Encrochat users. And Paul describes the latest crimeware for phones, this time centered in Italy. Finally, in quick hits: I note that both the director and the career deputy director are likely to leave NSA in the next several months. And Maury and I both enthuse over Google's new “passkey” technology. Download the 457th Episode (mp3) You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. 
Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
We do a long take on some of the AI safety reports that have been issued in recent weeks. Jeffery Atik first takes us through the basics of attention-based AI, and then into reports from OpenAI and Stanford on AI safety. Exactly what AI safety covers remains opaque (and toxic, in my view, after the ideological purges committed by Silicon Valley's “trust and safety” bureaucracies) but there's no doubt that a potential existential issue lurks below the surface of the most ambitious efforts. Whether ChatGPT's stochastic parroting will ever pose a threat to humanity or not, it clearly poses a threat to a lot of people's reputations, Nick Weaver reports. One of the biggest intel leaks of the last decade may not have anything to do with cybersecurity. Instead, the disclosure of multiple highly classified documents seems to have depended on the ability to fold, carry, and photograph the documents. While there's some evidence that the Russian government may have piggybacked on the leak to sow disinformation, Nick says, the real puzzle is the leaker's motivation. That leads us to the question of whether being a griefer is grounds for losing your clearance. Paul Rosenzweig educates us about the Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act, which would empower the administration to limit or ban TikTok. He highlights the most prominent argument against the bill, which is, no surprise, the discretion the act would confer on the executive branch. The bill's authors, Sen. Mark Warner (D-Va.) and Sen. John Thune (R-S.D.), have responded to this criticism, but it looks as though they'll be offering substantive limits on executive discretion only in the heat of Congressional action. Nick is impressed by the law enforcement operation to shutter Genesis Market, where credentials were widely sold to hackers. The data seized by the FBI in the operation will pay dividends for years. 
I give a warning to anyone who has left a sensitive intelligence job to work in the private sector: If your new employer has ties to a foreign government, the Director of National Intelligence has issued a new directive that (sort of) puts you on notice that you could be violating federal law. The directive means the intelligence community will do a pretty good job of telling its employees when they take a job that comes with post-employment restrictions, but IC alumni are so far getting very little guidance. Nick exults in the tough tone taken by the Treasury in its report on the illicit finance risk in decentralized finance. Paul and I cover Utah's bill requiring teens to get parental approval to join social media sites. After twenty years of mocking red states for trying to control the internet's impact on kids, it looks to me as though Knowledge Class parents are getting worried for their own kids. When the idea of age-checking internet users gets endorsed by the UK, Utah, and the New Yorker, I suggest, those arguing against the proposal may have a tougher time than they did in the 90s. And in quick hits: Nick comments on the massive 3CX supply-chain hack, which seems to have been a fishing-with-dynamite effort to steal a few people's cryptocurrency. I raise doubts about a much-cited claim that a Florida city's water system was the victim of a cyber attack. Nick unloads on Elon Musk for drawing a German investigation over Twitter's failure to promptly remove hate speech. Paul and I note the UK's most recent paper on how to exercise cyber power responsibly. And Nick and I puzzle over the conflict between the Biden administration and the New York Times about a spyware contract that supposedly undermined the administration's stance on spyware. Download 452nd Episode (mp3) You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. 
Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
The latest episode of The Cyberlaw Podcast gets a bit carried away with the China spy balloon saga. Guest host Brian Fleming, along with guests Gus Hurwitz, Nate Jones, and Paul Rosenzweig, share insights (and bad puns) about the latest reporting on the electronic surveillance capabilities of the first downed balloon, the Biden administration's “shoot first, ask questions later” response to the latest “flying objects,” and whether we should all spend more time worrying about China's hackers and satellites. Gus then shares a few thoughts on the State of the Union address and the brief but pointed calls for antitrust and data privacy reform. Sticking with big tech and antitrust, Gus recaps a significant recent loss for the Federal Trade Commission (FTC) and discusses what may be on the horizon for FTC enforcement later this year. Pivoting back to China, Nate and Paul discuss the latest reporting on a forthcoming (at some point) executive order intended to limit and track U.S. outbound investment in certain key aspects of China's tech sector. They also ponder how industry may continue its efforts to narrow the scope of the restrictions and whether Congress will get involved. Sticking with Congress, Paul takes the opportunity to explain the key takeaways from the not-so-bombshell House Oversight Committee hearing featuring former Twitter executives. Gus next describes his favorite ChatGPT jailbreaks and a costly mistake for an artificial intelligence (AI) chatbot competitor during a demo. Paul recommends a fascinating interview with Sinbad.io, the new Bitcoin mixer of choice for North Korean hackers, and reflects on the substantial portion of the Democratic People's Republic of Korea's gross domestic product attributable to ransomware attacks. Finally, Gus questions whether AI-generated “Nothing, Forever” will need to change its name after becoming sentient and channeling Dave Chappelle. 
To wrap things up in the week's quick hits, Gus briefly highlights where things stand with Chip Wars: Japan edition and Brian covers coordinated U.S./UK sanctions against the Trickbot cybercrime group, confirmation that Twitter's sale will not be investigated by the Committee on Foreign Investment in the United States (CFIUS), and the latest on Securities and Exchange Commission (SEC) v. Covington. Download 442nd Episode (mp3) You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Yesterday afternoon, Attorney General Merrick Garland announced that he has appointed a special counsel to investigate the revelations that documents bearing classification markings had been found in President Biden's private office and residence. The appointment comes after a preliminary investigation that began on November 14, just days before a different special counsel was appointed to investigate documents found at former President Trump's residence. To go through it all, Lawfare executive editor Natalie Orpett sat down with Lawfare contributor Paul Rosenzweig, Lawfare editor-in-chief Benjamin Wittes, and Lawfare senior editor Scott R. Anderson. They talked about why these circumstances triggered the special counsel regulations, what we know about potential criminal exposure, and how this may impact the ongoing special counsel investigation of Donald Trump. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Our first episode for 2023 features Dmitri Alperovitch, Paul Rosenzweig, and Jim Dempsey trying to cover a month's worth of cyberlaw news. Dmitri and I open with an effort to summarize the state of the tech struggle between the U.S. and China. I think recent developments show the U.S. doing better than expected. U.S. companies like Facebook and Dell are engaged in voluntary decoupling as they imagine what their supply chain will look like if the conflict gets worse. China, after pouring billions into an effort to take a lead in high-end chip production, may be pulling back on the throttle. Dmitri is less sanguine, noting that Chinese companies like Huawei have shown that there is life after sanctions, and there may be room for a fast-follower model in which China dominates production of slightly less sophisticated chips, where much of the market volume is concentrated. Meanwhile, any Chinese retreat is likely tactical; where it has a dominant market position, as in rare earths, it remains eager to hobble U.S. companies. Jim lays out the recent medical device security requirements adopted in the omnibus appropriations bill. It is a watershed for cybersecurity regulation of the private sector and overdue for increasingly digitized devices that in some cases can only be updated with another open-heart surgery. How much of a watershed may become clear when the White House cyber strategy, which has been widely leaked, is finally released. Paul explains what it's likely to say, most notably its likely enthusiasm not just for regulation but for liability as a check on bad cybersecurity. Dmitri points out that all of that will be hard to achieve legislatively now that Republicans control the House. We all weigh in on LastPass's problems with hackers, and with candid, timely disclosures. For reasons fair and unfair, two-thirds of the LastPass users on the show have abandoned the service. 
I blame LastPass's acquisition by private equity; Dmitri tells me that's sweeping with too broad a brush. I offer an overview of the Twitter Files stories by Bari Weiss, Matt Taibbi, and others. When I say that the most disturbing revelations concern the massive government campaigns to enforce orthodoxy on COVID-19, all hell breaks loose. Paul in particular thinks I'm egregiously wrong to worry about any of this. No chairs are thrown, mainly because I'm in Virginia and Paul's in Costa Rica. But it's an entertaining and maybe even illuminating debate. In shorter and less contentious segments: Dmitri unpacks the latest effort by Russian hackers to subvert the security of a Ukrainian web-based military information site. He thinks the Ukrainian ability to use the site despite Russian attacks may have lessons for NATO. Dmitri also sheds light (and not a little shade) on Chinese claims to have broken RSA with a quantum computer. Jim updates us on TikTok's travails and the ongoing debate over restricting its use in the United States. I point out that another black man has been arrested because of a facial recognition error—bringing the total of mistaken face-recognition arrests in the entire country over the past decade to four. All of which could have been avoided by police department policy. On the other hand, I also identify a shocking abuse of facial recognition to oppress some of the most loathed people in America: Lawyers. Madison Square Garden, in what must be the dumbest corporate policy of the year, uses facial recognition to identify lawyers working for law firms that have ongoing lawsuits against the company. The apparent purpose, or at least the result, is to prevent lawyers from those firms from bringing Girl Scout troops to see the Rockettes. No problem; I am sure everyone would rather watch the ensuing litigation. I remind listeners that Trump's return to Facebook and Instagram could happen very soon. 
The EU has advanced its transatlantic data deal with the US, though more thrashing about should be expected.
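On the quantum-RSA claim Dmitri discusses: "breaking RSA" means recovering the private key, which is straightforward once you can factor the public modulus. A toy example with deliberately tiny, utterly insecure primes shows the mechanics; this is purely illustrative and not from the episode (requires Python 3.8+ for the modular-inverse form of `pow`):

```python
# Toy RSA with textbook-sized primes: illustrative only, wildly insecure.
p, q = 61, 53
n = p * q                      # public modulus; factoring n breaks the scheme
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e mod phi

msg = 42
cipher = pow(msg, e, n)        # encrypt: c = m^e mod n
plain = pow(cipher, d, n)      # decrypt: m = c^d mod n
print(plain)                   # 42
```

A quantum computer running Shor's algorithm could in principle factor n back into p and q, but the claims at issue fall far short of the 2048-bit-plus moduli used in practice, which is why they drew skepticism.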
Former US Department of Homeland Security official Paul Rosenzweig points out potential concerns the US should have about Microsoft's growing workforce in China. The company has announced plans to have around 10,000 employees working in manufacturing and software development there. The conversation covers both personal information security and national security concerns. https://www.law.gwu.edu/paul-rosenzweig
Former federal prosecutor and Homeland Security official Paul Rosenzweig joins the podcast to talk about the national security implications of misinformation and technology.
Ken Starr, the former federal judge and independent counsel who became famous for his investigation of President Bill Clinton, died this week on September 13 at age 76. Starr was a complex and controversial figure: after running the Whitewater and Lewinsky investigations, he went on to serve as president of Baylor University, only to resign over the mishandling of a sex abuse scandal involving the university's football team, and he would later go on to defend President Trump in Trump's first impeachment. To think through Starr's legacy, Lawfare senior editor Quinta Jurecic spoke with Lawfare editor-in-chief Benjamin Wittes, who published a book on Starr, and Lawfare contributing editor Paul Rosenzweig, who worked with Starr on the Clinton investigation. They took a look back on the Starr investigation and how the probe shaped the culture and practice of presidential investigations in ways that are more relevant than ever in the Trump era. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
This is our return-from-hiatus episode. Jordan Schneider kicks things off by recapping passage of a major U.S. semiconductor-building subsidy bill, while new contributor Brian Fleming talks with Nick Weaver about new regulatory investment restrictions and new export controls on artificial intelligence (AI) chips going to China. Jordan also covers a big corruption scandal arising from China's big chip-building subsidy program, leading me to wonder when we'll have our version. Brian and Nick cover the month's biggest cryptocurrency policy story, the imposition of OFAC sanctions on Tornado Cash. They agree that, while the outer limits of sanctions aren't entirely clear, they are likely to show that sometimes the U.S. Code actually does trump the digital version. Nick points listeners to his bracing essay, OFAC Around and Find Out. Paul Rosenzweig reprises his role as the voice of reason in the debate over location tracking and Dobbs. (Literally. Paul and I did an hour-long panel on the topic last week. It's available here.) I reprise my role as Chief Privacy Skeptic, calling the Dobbs/location fuss an overrated tempest in a teapot. Brian takes on one aspect of the Mudge whistleblower complaint about Twitter security: Twitter's poor record at keeping foreign spies from infiltrating its workforce and getting unaudited access to its customer records. In a coincidence, he notes, a former Twitter employee was just convicted of “spying lite,” proving Twitter is as good at national security as it is at content moderation. Meanwhile, returning to U.S.-China economic relations, Jordan notes the survival of high-level government concerns about TikTok. I note that, since these concerns first surfaced in the Trump era, TikTok's lobbying efforts have only grown more sophisticated. Speaking of which, Klon Kitchen has done a good job of highlighting DJI's increasingly sophisticated lobbying in Washington, D.C. 
The Cloudflare decision to deplatform Kiwi Farms kicks off a donnybrook, with Paul and Nick on one side and me on the other. It's a classic Cyberlaw Podcast debate. In quick hits and updates: Nick and I cover the sad story of the dad who photographed his baby's private parts at a doctor's request and, thanks to Google's lack of human appellate review, lost his email, his phone number, and all of the accounts that used the phone for 2FA. Paul brings us up to speed on the U.S.-EU data fight and teases tomorrow's webinar on the topic. Nick explains the big changes likely to come to the pornography world because of a lawsuit against Visa. And why Twitter narrowly averted its own child sex scandal. I note that Google's bias against GOP fundraising emails has led to an unlikely result: less spam filtering for all such emails. And, after waiting too long, Brian Krebs retracts the post about a Ubiquiti “breach” that led the company to sue him.
A few weeks ago on Arbiters of Truth, our series on the online information system, we brought you a conversation with two emergency room doctors about their efforts to push back against members of their profession spreading falsehoods about the coronavirus. Today, we're going to take a look at another profession that's been struggling to counter lies and falsehoods within its ranks: the law. Recently, lawyers involved in efforts to overturn the 2020 election have faced professional discipline—like Rudy Giuliani, whose law license has been suspended temporarily in New York and D.C. while a New York ethics investigation remains ongoing. Quinta Jurecic sat down with Paul Rosenzweig, a contributing editor at Lawfare and a board member with the 65 Project, an organization that seeks to hold accountable lawyers who worked to help Trump hold onto power in 2020—often by spreading lies. He's also spent many years working on issues related to legal ethics. So what avenues of discipline are available for lawyers who tell lies about elections? How does the legal discipline process work? And how effective can legal discipline be in reasserting the truth? Support this show http://supporter.acast.com/lawfare. See acast.com/privacy for privacy and opt-out information.
President Trump chose not to act – that's the number one takeaway of the latest congressional hearing into the January 6 insurrection. It demonstrated that Donald Trump not only ignored repeated calls to stop the riot, but he also failed to reach out a single time to law enforcement and national security officials. Paul Rosenzweig is a former federal prosecutor and served in the Department of Homeland Security, and he joins the program to discuss. Also on today's show: author Aaron Stark (I Would Have Been a School Shooter...), now a mental health advocate, provides a unique perspective on America's gun violence epidemic; and Tikhon Dzyadko, the editor-in-chief of TV Rain -- Russia's last independent TV station. To learn more about how CNN protects listener privacy, visit cnn.com/privacy
At least that's the lesson that Paul Rosenzweig and I distill from the recent 11th Circuit decision mostly striking down Florida's law regulating social media platforms' content “moderation” rules. We disagree flamboyantly on pretty much everything else—including whether the court will intervene before judgment in a pending 5th Circuit case where the appeals court stayed a district court's injunction and allowed Texas's similar law to remain in effect. When it comes to content moderation, Silicon Valley is a lot tougher on the Libs of TikTok than the Chinese Communist Party (CCP). Instagram just suspended the Libs of TikTok account, I report, while a recent Brookings study shows that the Chinese government's narratives are polluting Google and Bing search results on a regular basis. Google News and YouTube do the worst job of keeping the party line out of searches. Both Google News and YouTube return CCP-influenced links on the first page about a quarter of the time. I ask Sultan Meghji to shed some light on the remarkable TerraUSD cryptocurrency crash. Which leads us, not surprisingly, from massive investor losses to whether financial regulators have jurisdiction over cryptocurrency. The short answer: Whether they have jurisdiction or not, all the incentives favor an assertion of jurisdiction. Nick Weaver is with us in spirit as we flag his rip-roaring attack on the whole field—a don't-miss interview for readers who can't get enough of Nick. It's a big episode for artificial intelligence (AI) news too. Matthew Heiman contrasts the different approaches to AI regulation in three big jurisdictions. China's is pretty focused, Europe's is ambitious and all-pervading, and the United States isn't ready to do anything. Paul thinks DuckDuckGo should be DuckDuckGone after the search engine allowed Microsoft trackers to follow users of its browser. Sultan and I explore ways of biasing AI algorithms. 
It turns out that saving money on datasets makes the algorithm especially sensitive to the order in which the data is presented. Debiasing with synthetic data has its own risks, Sultan avers. But if you're looking for good news, here's some: Self-driving car companies that are late to the party are likely to catch up fast, because they can build on a lot of data that's already been collected as well as new training techniques. Matthew breaks down the $150 million fine paid by Twitter for allowing ad targeting of the phone numbers its users supplied for two-factor authentication (2FA) security purposes. Finally, in quick hits: Matthew recommends that we all get popcorn for Spain's planned investigation of its intelligence services following a phone hacking scandal. Sultan and I call time of death for the Klobuchar bill regulating Silicon Valley self-preferencing. It was the most likely of all the Silicon Valley competition bills to pass, but election year tensions and massive lobbying campaigns by industry have made its path to enactment too steep. And Sultan notes that the Commerce Department has published with relatively little change its rule restricting exports of hacking tools. Download the 409th Episode (mp3) You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Modern life relies on digital technology, but with that reliance comes vulnerability. How can we trust our technology? How can we be sure that it does what we expect it to do? Earlier this month, Lawfare released the results of a long-term research project on those very questions. The report, prepared by the Lawfare Institute's Trusted Hardware and Software Working Group, is titled, “Creating a Framework for Supply Chain Trust in Hardware and Software.” On a recent Lawfare Live, Alan Rozenshtein spoke with three members of the team that wrote the piece: Lawfare editor-in-chief Benjamin Wittes; Lawfare contributing editor Paul Rosenzweig, who served as the report's chief drafter; and Justin Sherman, a fellow at the Atlantic Council.Support this show http://supporter.acast.com/lawfare. See acast.com/privacy for privacy and opt-out information.
With the U.S. and Europe united in opposing Russia's attack on Ukraine, a few tough transatlantic disputes are being swept away—or at least under the rug. Most prominently, the data protection crisis touched off by Schrems 2 has been resolved in principle by a new framework agreement between the U.S. and the EU. Michael Ellis and Paul Rosenzweig trade insights on the deal and its prospects before the European Court of Justice. The most controversial aspect of the agreement is the lack of any change in U.S. legislation. That's simple vote-counting if you're in Washington, but the Court of Justice of the European Union (CJEU) clearly expected that it was dictating legislation for the U.S. Congress to adopt, so Europe's acquiescence may simply kick the can down the road a bit. The lack of legislation will be felt in particular, Michael and Paul aver, when it comes to providing remedies to European citizens who feel their rights have been trampled. Instead of going to court, they'll be going to an administrative body with executive branch guarantees of independence and impartiality. We congratulate several old friends of the podcast who patched this solution together. The Russian invasion of Ukraine, meanwhile, continues to throw off new tech stories. Nick Weaver updates us on the single most likely example of Russia using its cyber weapons effectively for military purposes—the bricking of Ukraine's (and a bunch of other European) Viasat terminals. Alex Stamos and I talk about whether the social media companies recently evicted from Russia, especially Instagram, should be induced or required to provide information about their former subscribers' interests to allow microtargeting of news to break Putin's information management barriers; along the way we examine why it is that tech's response to Chinese aggression has been less vigorous. 
Speaking of microtargeting, Paul gives kudos to the FBI for its microtargeted “talk to us” ads, only visible to Russian speakers within 100 yards of the Russian embassy in Washington. Finally, Nick Weaver and Mike mull the significance of Israel's determination not to sell sophisticated cell phone surveillance malware to Ukraine. Returning to Europe-U.S. tension, Alex and I unpack the European Digital Markets Act, which regulates a handful of U.S. companies because they are “digital gatekeepers.” I think it's a plausible response to network effect monopolization, ruined by anti-Americanism and the persistent illusion that the EU can regulate its way to a viable tech industry. Alex has a similar take, noting that the adoption of end-to-end encryption was a big privacy victory, thanks to WhatsApp, an achievement that the Digital Markets Act will undo in attempting to force standardized interoperable messaging on gatekeepers. Nick walks us through the surprising achievements of the gang of juvenile delinquents known as Lapsus$. Their breach of Okta is the occasion for speculation about how lawyers skew cyber incident response in directions that turn out to be very bad for the breach victim. Alex vividly captures the lawyerly dynamics that hamper effective response. While we're talking ransomware, Michael cites a detailed report on corporate responses to REvil breaches, authored by the minority staff of the Senate Homeland Security Committee. Neither the FBI nor CISA comes out of it looking good. But the bureau comes in for more criticism, which may help explain why no one paid much attention when the FBI demanded changes to the cyber incident reporting bill. Finally, Nick and Michael debate whether the musician and Elon Musk sweetheart Grimes could be prosecuted for computer crimes after confessing to having DDOSed an online publication for an embarrassing photo of her. Just to be on the safe side, we conclude, maybe she shouldn't go back to Canada. 
And Paul and I praise a brilliant WIRED op-ed proposing that Putin's Soviet empire nostalgia deserves a wakeup call; the authors (Rosenzweig and Baker, as it happens) suggest that the least ICANN can do is kill off the Soviet Union's out-of-date .su country code. Download the 400th Episode (mp3) You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
A special reminder that we will be doing episode 400 live on video and with audience participation on March 28, 2022 at noon Eastern daylight time. So mark your calendar and when the time comes, use this link to join the audience: https://riverside.fm/studio/the-cyberlaw-podcast-400 See you there! For the third week in a row, we lead with cyber and Russia's invasion of Ukraine. Paul Rosenzweig comments on the most surprising thing about social media's decoupling from Russia—how enthusiastically the industry is pursuing the separation. Facebook is allowing Ukrainians to threaten violence against Russian leadership and removing or fact checking Russian government and media posts. Not satisfied with this, the EU wants Google to remove Russia Today and Sputnik from search results. I ask why the U.S. can't take over Facebook and Twitter infrastructure to deliver the Voice of America to Facebook and Twitter users who've been cut off by their departure. Nobody likes that idea but me. Meanwhile, Paul notes that The Great Cyberwar that Wasn't could still make an appearance, citing Ciaran Martin's sober Lawfare piece. David Kris tells us that Congress has, after a few false starts, finally passed a cyber incident reporting bill, notwithstanding the Justice Department's over-the-top histrionics in opposition. I wonder if the bill, passed in haste due to the Ukraine conflict, should have had another round of edits, since it seems to lock in a leisurely reg-writing process that the Cybersecurity and Infrastructure Security Agency (CISA) can't cut short. Jane Bambauer and David unpack the first district court opinion considering the legal status of “geofence” warrants—where Google gradually releases more data about people whose phones were found near a crime scene when the crime was committed. It's a long opinion by Judge M. Hannah Lauck, but none of us finds it satisfying. As is often true, Orin Kerr's take is more persuasive than the court's. 
Next, Paul Rosenzweig digs into Biden's cryptocurrency executive order. It's not a nothingburger, he opines, but it is a process-burger, meaning that nothing will happen in the field for many months, but the interagency mill will begin to grind, and sooner or later will likely grind exceeding fine. Jane and I draw lessons from WIRED's “exposé” on three wrongful arrests based on face recognition software, but not the “face recognition is Evil” lesson WIRED wanted us to draw. The arrests do reflect less than perfect policing, and are a wrenching view of what it's like for an innocent man to face charges that aren't true. But it's unpersuasive to blame face recognition for mistakes that could have been avoided with a little more care by the cops. David and I highly recommend Brian Krebs's great series on what we can learn from leaked chat logs belonging to the Conti ransomware gang. My favorite insight was the Conti member who said, when a company resisted paying to keep its files from being published, that “There is a journalist who will help intimidate them for 5 percent of the payout.” I suggest that our listeners crowdsource an effort to find journalists who might fit this description. It might not be hard; after all, how many journalists these days are breaking stories that dive deep into doxxed databases? Paul and I spend a little more time than it deserves on an ICANN paper about ways to block Russia from the network. But I am inspired to suggest that the country code .su—presumably all that's left of the Soviet Union—be permanently retired. I mean, really, does anyone respectable want it back? Jane gives a lick and a promise to the Open App Markets bill coming out of the Senate Judiciary Committee. I alert the American Civil Liberties Union to a shocking porcine privacy invasion. 
I discover that none of the other panelists is surprised that 15 percent of people have already had sex with a robot but all of them find the idea of falling in love with a robot preposterous. Download the 398th Episode (mp3) You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families or pets.
Wherein it was supposed to be Cheese Night, Ben having failed to get a guest—but then Paul Rosenzweig and Jonathan Rauch show up! Our GDPR privacy policy was updated on August 8, 2022. Visit acast.com/privacy for more information.
Wherein we celebrate the nation of France, at the formal request of the French Embassy, with a crocodile shirt, a truly ridiculous set of Twitter exchanges, and a guest—if we can find one. Our GDPR privacy policy was updated on August 8, 2022. Visit acast.com/privacy for more information.