In an era when many financial institutions prioritize profits over people, one bank stands out for its commitment to the labor movement since its founding. Nick Weaver, President and Chief Operating Officer of Amalgamated Bank of Chicago (ABOC), joined the America's Work Force Union Podcast to discuss the bank's unique position in serving union members and organizations.

Pat Gallagher, President of the North Coast Area Labor Federation, joined the America's Work Force Union Podcast to discuss the Social Security Fairness Act, his thoughts on President-elect Trump's proposed policies that could affect the working class, and the ways labor leaders can build a better future for their members.
Tony opens the show by talking with Nick Weaver, the founder of Blue Delta Jeans, about the origin story of his business, and he also talks about having dinner at the Palm with the Socialite. Chuck Todd calls in to make his weekly picks against Reginald the Monkey, Pat Forde calls in to talk a little college basketball as well as the college football playoff rankings, and Tony closes out the show by opening up the Mailbag. Songs: Beth Peabody, "Other Woman"; "Don't Play"
There's a whiff of Auld Lang Syne about episode 500 of the Cyberlaw Podcast, since after this it will be going on hiatus for some time and maybe forever. (Okay, there will be an interview with Dmitri Alperovitch about his forthcoming book, but the news commentary is done for now.) Perhaps it's appropriate, then, for our two lead stories to revive a theme from the 90s – who's better, Microsoft or Linux? Sadly for both, the current debate is over who's worse, at least for cybersecurity. Microsoft's sins against cybersecurity are laid bare in a report of the Cyber Safety Review Board, Paul Rosenzweig reports. The Board digs into the disastrous compromise of a Microsoft signing key that gave China access to US government email. The language of the report is sober, and all the more devastating because of its restraint. Microsoft seems to have entirely lost the security focus it so famously pivoted to twenty years ago. Getting it back will require renewed attention to security at a time when the company feels compelled to focus relentlessly on building AI into its offerings. The signs for improvement are not good. The only people who come out of the report looking good are the State Department security team, whose mad cyber skillz deserve to be celebrated – not least because they've been questioned by the rest of government for decades. With Microsoft down, you might think open source would be up. Think again, Nick Weaver tells us. The strategic vulnerability of open source, as well as its appeal, is that anyone can contribute code to a project they like. And in the case of the XZ backdoor, somebody did just that. A well-organized, well-financed, and knowledgeable group of hackers cajoled and bullied their way into a contributing role on an open source project that provides a widely used compression utility. Once in, they contributed a backdoored feature that used public key encryption to ensure access only to the authors of the feature. It was weeks from being in every Linux distro when a Microsoft employee discovered the implant. But the people who almost pulled this off seemed well-practiced and well-resourced. They've likely done this before and will likely do it again, leaving all open source projects facing their own strategic vulnerability. It wouldn't be the Cyberlaw Podcast without at least one Baker rant about political correctness. The much-touted bipartisan privacy bill threatening to sweep to enactment in this Congress turns out to be a disaster for anyone who opposes identity politics. To get liberals on board with a modest amount of privacy preemption, I charge, the bill would effectively overturn the Supreme Court's Harvard admissions decision and impose race, gender, and other quotas on a host of other activities that have avoided them so far. Adam Hickey and I debate the language of the bill. Why would the Republicans who control the House go along with this? I offer two reasons: first, business lobbyists want both preemption and a way to avoid charges of racial discrimination, even if it means relying on quotas; second, maybe Sen. Alan Simpson was right that the Republican Party really is the Stupid Party. Nick and I turn to a difficult AI story, about how Israel is using algorithms to identify and kill even low-level Hamas operatives in their homes. Far more than killer robots, this use of AI in war is likely to sweep the world. Nick is critical of Israel's approach; I am less so.
But there's no doubt that the story forces a sober assessment of just how personal and how ugly war will soon be. Paul takes the next story, in which Microsoft serves up leftover "AI gonna steal yer election" tales that are not much different than all the others we've heard since 2016 (when straight social media was the villain). The bottom line: China is using AI in social media to advance its interests and probe US weaknesses, but it doesn't seem to be having much effect. Nick answers the question, "Will AI companies run out of training data?" with a clear viewpoint: "They already have." He invokes the Hapsburgs to explain what's going wrong. We also touch on the likelihood that demand for training data will lead to copyright liability, or that hallucinations will lead to defamation liability. Color me skeptical. Paul comments on two US quasi-agreements, with the UK and the EU, on AI cooperation. And Adam breaks down the FCC's burst of initiatives celebrating the arrival of a Democratic majority on the Commission for the first time since President Biden's inauguration. The Commission is now ready to move out on net neutrality, on regulating cars as oddly shaped phones with benefits, and on SS7 security. Faced with a security researcher who responded to a hacking attack by taking down North Korea's internet, Adam acknowledges that maybe my advocacy of hacking back wasn't quite as crazy as he thought when he was in government. In Cyberlaw Podcast alumni news, I note that Paul Rosenzweig has been appointed an advocate at the Data Protection Review Court, where he'll be expected to channel Max Schrems. And Paul offers a summary of what has made the last 500 episodes so much fun for me, for our guests, and for our audience. Thanks to you all for the gift of your time and your tolerance!
Blue jeans?!? Yes, you read that correctly. In this episode, Nick and Don speak with Nick Weaver, Co-Founder and COO of Blue Delta Jeans. Learn how Nick and his team are using $500 custom-tailored blue jeans to thrill slot players and breathe new life into VIP events and continuity programs. Also in this episode, the EVERI / IGT merger. For transcripts of ReelCast episodes, please see https://www.reelmetrics.com/reelcast.
Series: Daily Bible Reading | Service: Sun AM | Type: Sermon | Speaker: Nick Weaver
Paul Rosenzweig brings us up to date on the debate over renewing section 702, highlighting the introduction of the first credible "renew and reform" measure by the House Intelligence Committee. I'm hopeful that a similarly responsible bill will come soon from Senate Intelligence and that some version of the two will be adopted. Paul is less sanguine. And we all recognize that the wild card will be House Judiciary, which is drafting a bill that could change the renewal debate dramatically. Jordan Schneider reviews the results of the Xi-Biden meeting in San Francisco and speculates on China's diplomatic strategy in the global debate over AI regulation. No one disagrees that it makes sense for the U.S. and China to talk about the risks of letting AI run nuclear command and control; perhaps more interesting (and puzzling) is China's interest in talking about AI and military drones. Speaking of AI, Paul reports on Sam Altman's defenestration from OpenAI and soft landing at Microsoft. Appropriately, Bing Image Creator provides the artwork for the defenestration but not the soft landing. Nick Weaver covers Meta's not-so-new policy on political ads claiming that past elections were rigged. I cover the flap over TikTok videos promoting Osama Bin Laden's letter justifying the 9/11 attack. Jordan and I discuss reports that Applied Materials is facing a criminal probe over shipments to China's SMIC. Nick reports on the most creative ransomware tactic to date: compromising a corporate network and then filing an SEC complaint when the victim doesn't disclose it within four days. This particular gang may have jumped the gun, he reports, but we'll see more such reports in the future, and the SEC will have to decide whether it wants to foster this business model. I cover the effort to disclose a bitcoin wallet security flaw without helping criminals exploit it. And Paul recommends the week's long read: The Mirai Confession – a detailed and engaging story of the kids who invented Mirai, foisted it on the world, and then worked for the FBI for years, eventually avoiding jail, probably thanks to an FBI agent with a paternal streak. Download 482nd Episode (mp3)
Wi-Fi 7 devices are now entering the market, bringing insane speeds to the home. We talk about the new features, as well as how our favorite mesh Wi-Fi company, eero, came to prominence, as we are joined by eero founder Nick Weaver! Andru Edwards on YouTube: https://youtube.com/@Andru | Jon Rettinger on YouTube: https://www.youtube.com/user/jon4lakers | eero Max 7: https://geni.us/eeromax7 (Affiliate) | Circuit Breaker: My Weekly Tech Newsletter | Support the show: http://youtube.com/gearlive/join
I take advantage of Scott Shapiro's participation in this episode of the Cyberlaw Podcast to interview him about his book, Fancy Bear Goes Phishing – The Dark History of the Information Age, in Five Extraordinary Hacks. It's a remarkable tutorial on cybersecurity, told through stories that you'll probably think you already know until you see what Scott has found by digging into historical and legal records. We cover the Morris worm, the Paris Hilton hack, and the earliest Bulgarian virus writer's nemesis. Along the way, we share views about the refreshing emergence of a well-paid profession largely free of the credentialism that infects so much of the American economy. In keeping with the rest of the episode, I ask Bing Image Creator to generate alternative artwork for the book. In the news roundup, Michael Ellis walks us through the "sweeping"™ White House executive order on artificial intelligence. The tl;dr: the order may or may not actually have real impact on the field. The same can probably be said of the advice now being dispensed by AI's "godfathers"™ – the keepers of the flame for AI existential risk who have urged that AI companies devote a third of their R&D budgets to AI safety and security and accept liability for serious harm. Scott and I puzzle over how dangerous AI can be when even the most advanced engines can only do multiplication successfully 85% of the time. Along the way, we evaluate methods for poisoning training data and their utility for helping starving artists get paid when their work is repurposed by AI. Speaking of AI regulation, Nick Weaver offers a real-life example: the California DMV's immediate suspension of Cruise's robotaxi permit after a serious accident that the company handled poorly. Michael tells us what's been happening in the Google antitrust trial, to the extent that anyone can tell, thanks to the heavy confidentiality restrictions imposed by Judge Mehta. One number that escaped – $26 billion in payments to maintain Google as everyone's default search engine – draws plenty of commentary. Scott and I try to make sense of CISA's claim that its vulnerability list has produced cybersecurity dividends. We are inclined to agree that there's a pony in there somewhere. Nick explains why it's dangerous to try to spy on Kaspersky. The rewards may be big, but so is the risk that your intelligence service will be pantsed. Nick also notes that using Let's Encrypt as part of your man-in-the-middle attack has risks as well – advice he probably should deliver auf Deutsch. Scott and I cover a great Andy Greenberg story about a team of hackers who discovered how to unlock a vast store of bitcoin on an IronKey but may not see a payoff soon. I reveal my connection to the story. Michael and I share thoughts about the effort to renew section 702 of FISA, which lost momentum during the long battle over choosing a Speaker of the House. I note that USTR has surrendered to reality in global digital trade and point out that last week's story about judicial interest in tort cases against social media turned out to be the first robin in what now looks like a remake of The Birds. Download 479th Episode (mp3)
The Supreme Court has granted certiorari to review two big state laws trying to impose limits on social media censorship (or "curation," if you prefer) of platform content. Paul Stephan and I spar over the right outcome, and the likely vote count, in the two cases. One surprise: we both think that the platforms' claim of a First Amendment right to curate content is in tension with their claim that they, uniquely among speakers, should have an immunity for their "speech." Maury weighs in to note that the EU is now gearing up to bring social media to heel on the "disinformation" front. That fight will be ugly for Big Tech, he points out, because Europe doesn't mind if it puts social media out of business, since it's an American industry. I point out that elites all across the globe have rallied to meet and defeat social media's challenge to their agenda-setting and reality-defining authority. India is aggressively doing the same. Paul covers another big story in law and technology. The FTC has sued Amazon for antitrust violations – essentially price gouging and tying. Whether the conduct alleged in the complaint is even a bad thing will depend on the facts, so the case will be hard fought. And, given the FTC's track record, no one should be betting against Amazon. Nick Weaver explains the dynamic behind the massive MGM and Caesars hacks. As with so many globalized industries, ransomware now has Americans in marketing (or social engineering, if you prefer) and foreign technology suppliers. Nick thinks it's time to OFAC 'em all. Maury explains the latest bulk intercept decision from the European Court of Human Rights. The UK has lost again, but it's not clear how much difference that will make. The ruling says that non-Brits can sue the UK over bulk interception, but the court has already made clear that, with a few legislative tweaks, bulk interception is legal under the European human rights convention. More bad news for 230 maximalists: it turns out that Facebook can be sued for allowing advertisers to target ads based on age and gender. The platform slipped from allowing speech to being liable for speech because it facilitated advertisers' allegedly discriminatory targeting. The UK competition authorities are seeking greater access to AI's inner workings to assess risks, but Maury Shenk is sure this is part of a light touch on AI regulation that is meant to make the UK a safe European harbor for AI companies. In a few quick hits and updates: I explain the splintered PCLOB report that endorses 702 renewal, with widely diverging proposals for reform. Paul tells us that the Biden Administration plans to bring back "net neutrality" rules. Hey, if we get to choose which golden oldie to revive, I actually liked the Macarena more. I flag an issue likely to spark a surprisingly bitter clash between the administration and cloud providers – Know Your Customer rules. The government thinks it's irresponsible from a cybersecurity point of view to let randos spin up virtual machines. The industry doesn't think the market will tolerate any other way of doing business. Speaking of government-industry clashes, it looks like Apple is caught between Chinese demands that it impose tough new controls on apps in its app store and, well, human decency. Maury has the story. And I've got a solution. Apple should just rebrand its totalitarian new controls as "app curation." Seems to be working for everyone else.
The Cyberlaw Podcast is back from August hiatus, and the theme of the episode seems to be the way other countries are using the global success of U.S. technology to impose their priorities on the U.S. Exhibit 1 is the EU's Digital Services Act, which took effect last month. Michael Ellis spells out a few of the act's sweeping changes in how U.S. tech companies must operate – nominally in Europe but as a practical matter in the U.S. as well. The largest platforms will be heavily regulated, with restrictions on their content curation algorithms and a requirement that they promote government content when governments declare a crisis. Other social media will also be subject to heavy content regulation, such as transparency in their decisions to demote or ban content and a requirement that they respond promptly to takedown requests from "trusted flaggers" of Bad Speech. In search of a silver lining, I point out that many of the transparency and due process requirements are things that Texas and Florida have advocated over the objections of Silicon Valley companies. Compliance with the EU Act will undercut the platforms' claims, in the Supreme Court arguments we're likely to hear this term, that such requirements can't be met. Cristin Flynn Goodwin and I note that China's on-again, off-again regulatory enthusiasm is off again. Chinese officials are doing their best to ease Western firms' concerns about China's new data security law requirements. Even more remarkable, China's AI regulatory framework was watered down in August, moving away from the EU model and toward a U.S./U.K. ethical/voluntary approach. For now. Cristin also brings us up to speed on the SEC's rule on breach notification. The short version: The rule will make sense to anyone who's ever stopped putting out a kitchen fire to call their insurer to let them know a claim may be coming. Nick Weaver brings us up to date on cryptocurrency and the law. Short version: Cryptocurrency had one victory, which it probably deserved, in the Grayscale case, and a series of devastating losses over Tornado Cash, as a court rejected Tornado Cash's claim that its coders and lawyers had found a hole in Treasury's Office of Foreign Assets Control ("OFAC") regime, and the Justice Department indicted the prime movers in Tornado Cash for conspiracy to launder North Korea's stolen loot. Here's Nick's view in print. Just to show that the EU isn't the only jurisdiction that can use U.S. legal models to hurt U.S. policy, China managed to kill Intel's acquisition of Tower Semiconductor by stalling its competition authority's review of the deal. I see an eerie parallel between the Chinese aspirations of federal antitrust enforcers and those of the Christian missionaries we sent to China in the 1920s. Michael and I discuss the belated leak of the national security negotiations between CFIUS and TikTok. After a nod to substance (no real surprises in the draft), we turn to the question of who leaked it, and whether the effort to curb TikTok is dead. Nick and I explore the remarkable impact of the war in Ukraine on drone technology. It may change the course of war in Ukraine (or, indeed, a war over Taiwan), Nick thinks, but it also means that Joe Biden may be the last President to see the sky while in office. (And if you've got space in D.C. and want to hear Nick's provocative thoughts on the topic, he will be in town next week, and eager to give his academic talk: "Dr. Strangedrone, or How I Learned to Stop Worrying and Love the Slaughterbots".)
Cristin, Michael and I dig into another August policy initiative, the "outbound Committee on Foreign Investment in the United States (CFIUS)" order. Given the long delays and halting rollout, I suggest that the Treasury's Advance Notice of Proposed Rulemaking (ANPRM) on the topic really stands for "Ambivalent Notice of Proposed Rulemaking." Finally, I suggest that autonomous vehicles may finally have turned the corner to success and rollout, now that they're being used as rolling hookup locations and (perhaps not coincidentally) being approved to offer 24/7 robotaxi service in San Francisco. Nick's not ready to agree, but we do find common ground in criticizing a study. Download 470th Episode (mp3)
This episode of the Cyberlaw Podcast kicks off with a stinging defeat for the Federal Trade Commission (FTC), which could not persuade the courts to suspend the Microsoft-Activision Blizzard acquisition. Mark MacCarthy says that the FTC's loss will pave the way for a complete victory for Microsoft, as other jurisdictions trim their sails. We congratulate Brad Smith, Microsoft's President, whose policy smarts likely helped to construct this win. Meanwhile, the FTC is still doubling down on its determination to pursue aggressive legal theories. Maury Shenk explains the agency's investigation of OpenAI, which raises issues not usually associated with consumer protection. Mark and Maury argue that this is just a variation of the tactic that made the FTC the de facto privacy regulator in the U.S. I ask why policing ChatGPT's hallucinatory libel problem constitutes consumer protection, and they answer, plausibly, that libel is a kind of deception, which the FTC does have authority to police. Mark then helps us drill down on the Associated Press deal licensing its archives to OpenAI, a deal that may turn out to be good for both companies. Nick Weaver and I try to make sense of the district court ruling that Ripple's XRP is a regulated investment contract when provided to sophisticated buyers but not when sold to retail customers in the market. It is hard to say that it makes policy sense, since the securities laws are there to protect the retail customers more than sophisticated buyers. But it does seem to be at least temporary good news for the cryptocurrency exchanges, who now have a basis for offering what the SEC has been calling an unregistered security. And it's clearly bad news for the SEC, which may not be able to litigate its way to the Cryptopocalypse it has been pursuing. Andy Greenberg makes a guest appearance to discuss his WIRED story about the still mysterious mechanism by which Chinese cyberspies acquired the ability to forge Microsoft authentication tokens. Maury tells us why Meta's Twitter-killer, Threads, won't be available soon in Europe. That leads me to reflect on just how disastrously Brussels has managed the EU's economy. Fifteen years ago, the U.S. and EU had roughly similar GDPs, at about $15 trillion each. Now the EU GDP has scarcely grown, while U.S. GDP is close to $25 trillion. It's hard to believe that EU tech policy hasn't contributed to this continental impoverishment, which Maury points out is even making Brexit look good. Maury also explains the French police drive to get explicit authority to conduct surveillance through cell phones. Nick offers his take on FISA section 702 reform. And Maury evaluates Amazon's challenge to new EU content rules, which he thinks have more policy than legal appeal. Not content with his takedown of the Ripple decision, Nick reviews all the criminal cases in which cryptocurrency enthusiasts are embroiled. These include a Chinese bust of Multichain, the sentencing of Variety Jones for his role in the Silk Road crime market, and the arrest of Alex Mashinsky, CEO of the cryptocurrency lender Celsius. Finally, in quick hits, Mark and I duel over the lawsuit claiming that Texas's TikTok ban on government phones will threaten academic freedom. I praise the surprisingly good National Cybersecurity Strategy Implementation Plan and puzzle over the decision not to nominate the acting head of that office to head the office permanently.
And I note that the Allow States and Victims to Fight Online Sex Trafficking Act, also known as FOSTA-SESTA, reviled by the left, has withstood a constitutional challenge in the DC Circuit. Download 468th Episode (mp3)
It was a disastrous week for cryptocurrency in the United States, as the Securities and Exchange Commission (SEC) filed suit against the two biggest exchanges, Binance and Coinbase, on a theory that makes it nearly impossible to run a cryptocurrency exchange that is competitive with overseas exchanges. Nick Weaver lays out the differences between "process crimes" and "crime crimes," and how they help distinguish the two lawsuits. The SEC action marks the end of an uneasy truce, but not the end of the debate. Both exchanges have the funds for a hundred-million-dollar defense and lobbying campaign. So you can expect to hear more about this issue for years (and years) to come. I touch on two AI regulation stories. First, I found Marc Andreessen's post trying to head off AI regulation pretty persuasive until the end, where he said that the risk of bad people using AI for bad things can be addressed by using AI to stop them. Sorry, Marc, it doesn't work that way. We aren't stopping the crimes that modern encryption makes possible by throwing more crypto at the culprits. My nominee for the AI Regulation Hall of Fame, though, goes to Japan, which has decided to address the phony issue of AI copyright infringement by declaring that it's a phony issue and there'll be no copyright liability for their AI industry when they train models on copyrighted content. This is the right answer, but it's also a brilliant way of borrowing and subverting the EU's GDPR model ("We regulate the world, and help EU industry too"). If Japan applies this policy to models built and trained in Japan, it will give Japanese AI companies at least an arguable immunity from copyright claims around the world. Companies will flock to Japan to train their models and build their datasets in relative regulatory certainty. The rest of the world can follow suit or watch their industries set up shop in Japan. It helps, of course, that copyright claims against AI are mostly rent-seeking by Big Content, but this has to be the smartest piece of international AI regulation any jurisdiction has come up with so far. Kurt Sanger, just back from a NATO cyber conference in Estonia, explains why military cyber defenders are stressing their need for access to the private networks they'll be defending. Whether they'll get it, we agree, is another kettle of fish entirely. David Kris turns to public-private cooperation issues in another context. The Cyberspace Solarium Commission has another report out. It calls on the government to refresh and rethink the aging orders that regulate how the government deals with the private sector on cyber matters. Kurt and I consider whether Russia is committing war crimes by DDoSing emergency services in Ukraine at the same time as its bombing of Ukrainian cities. We agree that the evidence isn't there yet. Nick and I dig into two recent exploits that stand out from the crowd. It turns out that Barracuda's security appliance has been so badly compromised that the only remedial measure involves a woodchipper. Nick is confident that the tradecraft here suggests a nation-state attacker. I wonder if it's also a way to move Barracuda's customers to the cloud. The other compromise is an attack on MOVEit Transfer. The attack on the secure file transfer system has allowed ransomware gang Clop to download so much proprietary data that they have resorted to telling their victims to self-identify and pay the ransom rather than wait for Clop to figure out who they've pwned.
Kurt, David, and I talk about the White House effort to sell section 702 of FISA for its cybersecurity value and my effort, with Michael Ellis, to sell 702 (packaged with intelligence reform) to a conservative caucus that is newly skeptical of the intelligence community. David finds himself uncomfortably close to endorsing our efforts. Finally, in quick updates: Nick talks about Tesla's Full Self-Driving and the accidents it has been involved in. I warn listeners that Virginia has joined the ranks of states that require an ID proving age to access Pornhub, and I predict that twenty states will adopt such a requirement in the next year. Download 462nd Episode (mp3)
The latest episode of The Cyberlaw Podcast was not created by chatbots (we swear!). Guest host Brian Fleming, along with guests Jay Healey, Maury Shenk, and Nick Weaver, discuss the latest news on the AI revolution, including Google's efforts to protect its search engine dominance, a fascinating look at the websites that feed tools like ChatGPT (leading some on the panel to argue that quality over quantity should be the goal), and a possible regulatory speed bump for total AI world domination, at least as far as the EU's General Data Protection Regulation is concerned. Next, Jay lends some perspective on where we've been and where we're going with respect to cybersecurity by reflecting on some notable recent and upcoming anniversaries. The panel then discusses recent charges brought by the Justice Department, and two arrests, aimed at China's alleged attempt to harass dissidents living in the U.S. (including with fake social media accounts) and ponders how much of Russia's playbook China is willing to adopt. Nick and Brian then discuss the Securities and Exchange Commission's complaint against Bittrex and what it could portend for others in the crypto space and, more broadly, the future of crypto regulation and enforcement in the U.S. Maury then discusses the new EU-wide crypto regulations, and what the EU's approach to regulating this industry could mean going forward. The panel then takes a hard look at an alarming story out of Taiwan and debates what the recent “invisible blockade” on Matsu means for China's future designs on the island and Taiwan's ability to bolster the resiliency of its communications infrastructure. Finally, Nick covers a recent report on the Mexican government's continued reliance on Pegasus spyware. To wrap things up in the week's quick hits, Jay proposes updating the Insurrection Act to avoid its use as a justification for deploying military cyber capabilities against U.S. citizens, Nick discusses the dangers of computer-generated swatting services, Brian highlights the recent Supreme Court argument that may settle whether online stalking is a “true threat” or protected First Amendment activity, and, last but not least, Nick checks in on Elon Musk's threat to sue Microsoft after Twitter is dropped from its ad platform. Download the 454th Episode (mp3). You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
We do a long take on some of the AI safety reports that have been issued in recent weeks. Jeffery Atik first takes us through the basics of attention-based AI, and then into reports from OpenAI and Stanford on AI safety. Exactly what AI safety covers remains opaque (and toxic, in my view, after the ideological purges committed by Silicon Valley's “trust and safety” bureaucracies), but there's no doubt that a potential existential issue lurks below the surface of the most ambitious efforts. Whether ChatGPT's stochastic parroting will ever pose a threat to humanity or not, it clearly poses a threat to a lot of people's reputations, Nick Weaver reports. One of the biggest intel leaks of the last decade may not have anything to do with cybersecurity. Instead, the disclosure of multiple highly classified documents seems to have depended on the ability to fold, carry, and photograph the documents. While there's some evidence that the Russian government may have piggybacked on the leak to sow disinformation, Nick says, the real puzzle is the leaker's motivation. That leads us to the question of whether being a griefer is grounds for losing your clearance. Paul Rosenzweig educates us about the Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act, which would empower the administration to limit or ban TikTok. He highlights the most prominent argument against the bill, which is, no surprise, the discretion the act would confer on the executive branch. The bill's authors, Sen. Mark Warner (D-Va.) and Sen. John Thune (R-S.D.), have responded to this criticism, but it looks as though they'll be offering substantive limits on executive discretion only in the heat of Congressional action. Nick is impressed by the law enforcement operation to shutter Genesis Market, where credentials were widely sold to hackers. The data seized by the FBI in the operation will pay dividends for years. I give a warning to anyone who has left a sensitive intelligence job to work in the private sector: If your new employer has ties to a foreign government, the Director of National Intelligence has issued a new directive that (sort of) puts you on notice that you could be violating federal law. The directive means the intelligence community will do a pretty good job of telling its employees when they take a job that comes with post-employment restrictions, but IC alumni are so far getting very little guidance. Nick exults in the tough tone taken by the Treasury in its report on the illicit finance risk in decentralized finance. Paul and I cover Utah's bill requiring teens to get parental approval to join social media sites. After twenty years of mocking red states for trying to control the internet's impact on kids, it looks to me as though Knowledge Class parents are getting worried for their own kids. When the idea of age-checking internet users gets endorsed by the UK, Utah, and the New Yorker, I suggest, those arguing against the proposal may have a tougher time than they did in the 90s. And in quick hits: Nick comments on the massive 3CX supply-chain hack, which seems to have been a fishing-with-dynamite effort to steal a few people's cryptocurrency. I raise doubts about a much-cited claim that a Florida city's water system was the victim of a cyber attack. Nick unloads on Elon Musk for drawing a German investigation over Twitter's failure to promptly remove hate speech. Paul and I note the UK's most recent paper on how to exercise cyber power responsibly. 
And Nick and I puzzle over the conflict between the Biden administration and the New York Times about a spyware contract that supposedly undermined the administration's stance on spyware. Download the 452nd Episode (mp3). You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Dmitri Alperovitch joins the Cyberlaw Podcast to discuss the state of semiconductor decoupling between China and the West. It's a broad movement, fed by both sides. China has announced that it's investigating Micron to see if its memory chips should still be allowed into China's supply chain (spoiler: almost certainly not). Japan has tightened up its chip-making export control rules, which will align it with U.S. and Dutch restrictions, all with the aim of slowing China's ability to make the most powerful chips. Meanwhile, South Korea is boosting its chipmakers with new tax breaks, and Huawei is reporting a profit squeeze. The Biden administration spent much of last week on spyware policy, Winnona DeSombre Berners reports. How much it actually accomplished isn't clear. The spyware executive order restricts U.S. government purchases of surveillance tools that threaten U.S. security or that have been misused against civil society targets. And a group of like-minded nations have set forth the principles they think should govern sales of spyware. But it's not as though countries that want spyware are going to have a tough time finding it, I observe, despite all the virtue signaling. Case in point: Iran is getting plenty of new surveillance tech from Russia these days. And spyware campaigns continue to proliferate. Winnona and Dmitri nominate North Korea for the title “Most Innovative Cyber Power,” acknowledging its creative use of social engineering to steal cryptocurrency and gain access to U.S. policy influencers. Dmitri covers the TikTok beat, including the prospects of the Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act, which he still rates high despite some criticism from the right. Winnona and I debate the need for another piece of legislation given the breadth of CFIUS review and International Emergency Economic Powers Act sanctions. Dmitri and I note the arrival of GPT-4 in cybersecurity, as Microsoft introduces “Security Copilot.” We question whether this will turn out to be a game changer, but it does suggest that bespoke AI tools could play a role in cybersecurity (and pretty much everything else). In other AI news, Dmitri and I wonder at Italy's decision to cut itself off from access to ChatGPT by claiming that it violates Italian data protection law. That may turn out to be a hard case to prove, especially since the regulator has no clear jurisdiction over OpenAI, which is now selling nothing in Italy. In the same vein, there may be a safety reason to be worried by how fast AI is proceeding these days, but the letter proposing a six-month pause for more safety review is hardly persuasive—especially in a world where “safety” seems to mostly be about stamping out bad pronouns. In news Nick Weaver will kick himself for missing, Binance is facing a bombshell complaint from the Commodity Futures Trading Commission (CFTC) (the Binance response is here). The CFTC clearly had access to the suicidally candid messages exchanged among Binance's compliance team. I predict criminal indictments in the near future and wonder if the CFTC's taking the lead on the issue has given it a jurisdictional leg up on the SEC in the turf fight over who regulates cryptocurrency. Finally, we close with a review of a book arguing that pretty much anyone who ever uttered the words “China's peaceful rise” was the victim of a well-planned and highly successful Chinese influence operation. 
Download the 451st Episode (mp3). You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
This episode of the Cyberlaw Podcast opens with a look at some genuinely weird behavior by the Bing AI chatbot – dark fantasies, professions of love, and lies on top of lies – plus the factual error that wrecked the rollout of Google's AI search bot. Chinny Sharma and Nick Weaver explain how we ended up with AI that is better at BS'ing than at accurately conveying facts. This leads me to propose a scheme to ensure that China's autocracy never gets its AI capabilities off the ground. One thing that AI is creepily good at is faking people's voices. I try out ElevenLabs' technology in the first advertisement ever to run on the Cyberlaw Podcast. The upcoming fight over renewing section 702 of FISA has focused Congressional attention on FBI searches of 702 data, Jim Dempsey reports. That leads us to the latest compliance assessment on agencies' handling of 702 data. Chinny wonders whether the only way to save 702 will be to cut off the FBI's access – at great cost to our unified approach to terrorism intelligence. I complain that the compliance data is older than dirt. Jim and I come together around the need to provide more safeguards against political bias in the intelligence community. Nick brings us up to date on cyber issues in Ukraine, as summarized in a good Google report. He puzzles over Starlink's effort to keep providing service to Ukraine without assisting offensive military operations. Chinny does a victory lap over reports that the (still not released) national cyber strategy will recommend imposing liability on the companies that distribute tech products – a recommendation she made in a paper released last year. I cannot quite understand why Google thinks this is good for Google. Nick introduces us to modern reputation management. It involves a lot of fake news and bogus legal complaints. The Digital Millennium Copyright Act and European Union (EU) and California privacy law are the censor's favorite tools. What is remarkable to my mind is that a business taking so much legal risk charges so little. Jim and Chinny bring us up to date on the charm offensive being waged in Washington by TikTok's CEO and the broader debate over China's access to the personal data of Americans, including health data. Jim cites a recent Duke study, which I complain is not clear about when the data being sold is individual and when it is aggregated. Nick reminds us all that aggregate data is often easy to individualize. Finally, we make quick work of a few more stories: This week's oral argument in Gonzalez v. Google is a big deal, but we will cover it in detail once the Justices have chewed it over. If you want to know why conservatives think the whole “disinformation” scare is a scam to suppress conservative speech, look no further than the scandal over the State Department's funding of a non-governmental organization (NGO) devoted to cutting off ad revenue for “risky” purveyors of “disinformation” like Reason (presumably including the Volokh Conspiracy), Real Clear Politics, the N.Y. Post, and the Washington Examiner – all outlets that can only look like disinformation to the most biased judge. The National Endowment for Democracy has already cut off funding, but Microsoft's ad agency still seems to be boycotting these conservative outlets. EU lawmakers are refusing to endorse the latest EU-U.S. data deal. But it is all virtue signaling. Leaving Twitter over Elon Musk's ownership turns out to be about as popular as leaving the U.S. over Trump's presidency. 
Chris Inglis has finished his tour of duty as national cyber director. And the Federal Trade Commission's humiliation over its effort to block Meta's acquisition of Within is complete. Meta closed the deal last week. Download the 443rd Episode (mp3). You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Welcome to the latest edition of The Radcast! Have you been dying to hear the hot takes on Super Bowl LVII? Look no further; the Radcast team, led by Ryan Alford with Christina Yasi and Nick Weaver, is here to rock your world with their badass post-Super Bowl analysis and the latest trends for business and marketing. Tune in now for a truly exciting and informative experience!
Key notes from the news episode:
Small talk: All about the Super Bowl (01:35); What is everyone's favorite Super Bowl commercial? (02:25)
Radnews: This week – First Class Women in Business with Heidi Montag, Tracy Duhs, Heidi Cortez, Cristina Ferrare, and Raquel Pennington (09:30); Next week – Chip Leighton (11:10)
Sponsor (14:06): Vaycay – best 3rd-party lab tested CBD, disposables (Delta 8 & THC), and Delta 8 gummies; Good Ranchers
Social Media Holidays: February 17 – Random Acts of Kindness Day #RandomActsOfKindnessDay (15:30); February 18 – National Battery Day #NationalBatteryDay (16:16); February 20 – World Day of Social Justice #SocialJusticeDay (17:34); February 20 – Love Your Pet Day #LoveYourPetDay (18:05); February 21 – Presidents Day (third Monday of February) #PresidentsDay (18:31)
Social Media News: Instagram is killing live shopping in March and will focus on ads instead (18:49) https://techcrunch.com/2023/02/14/instagram-is-killing-live-shopping-in-march-will-focus-on-ads-instead/ ; Google rolls out new features to make in-app browsers better on Android (22:05) https://techcrunch.com/2023/02/14/google-rolls-out-new-features-to-make-in-app-browsers-better-on-android/ ; TikTok is reportedly developing a paywall feature and testing a revamped creator fund (24:08) https://techcrunch.com/2023/02/14/tiktok-paywall-feature-testing-revamped-creator-fund/
Sponsor (28:30): Branded Bills Custom Hats – code Radcast20 = 20% off
Marketing News: Coca-Cola debuts ‘transformation-flavored' soda in its latest limited-time offering (29:30) https://www.marketingdive.com/news/coca-cola-move-creations-rosalia-transformation-flavored/642618/
Learn more by visiting our website at www.theradcast.com. Subscribe to our YouTube channel at https://www.youtube.com/c/RadicalHomeofTheRadcast. If you enjoyed this episode of The Radcast, like, share, and leave us a review!
We kick off a jam-packed episode of the Cyberlaw Podcast by flagging the news that ransomware revenue fell substantially in 2022. There is lots of room for error in that Chainalysis finding, Nick Weaver notes, but the effect is large. Among the reasons to think it might also be real is resistance to paying ransoms on the part of companies and their insurers, who are especially concerned about liability for payments to sanctioned ransomware gangs. I also note a fascinating additional insight from Jon DiMaggio, who infiltrated the Lockbit ransomware gang. He says that Entrust was hit by Lockbit, which threatened to release its internal files, and that the company responded with days of Distributed Denial of Service (DDoS) attacks on Lockbit's infrastructure – and never did pay up. That would be a heartening display of courage. It would also be a felony, at least according to the conventional wisdom that condemns hacking back. So I cannot help thinking there is more to the story. Like, maybe the Canadian Security Intelligence Service is joining the Australian Signals Directorate in releasing the hounds on ransomware gangs. I look forward to more stories on this undercovered disclosure. Gus Hurwitz offers two explanations for the Federal Aviation Administration system outage, which grounded planes across the country. There's the official version and the conspiracy theory, as with everything else these days. Nick breaks down the latest cryptocurrency failure; this time it's Genesis. Nick's not a fan of this prepackaged bankruptcy. And Gus and I puzzle over the Federal Trade Commission's determination to write regulations to outlaw most non-compete clauses. Justin Sherman, a first-timer on the podcast, covers recent research showing that alleged Russian social media interference had no meaningful effect on the 2016 election. That spurs an outburst from me about the cynical scam that was the “Russia, Russia, Russia” narrative—a kind of 2016 election denial for which the press and the left have never apologized. Nick explains the looming impact of Twitter's interest payment obligation. We're going to learn a lot more about Elon Musk's business plans from how he deals with that crisis than from anything he's tweeted in recent months. It does not get more cyberlawyerly than a case the Supreme Court will be taking up this term—Gonzalez v. Google. This case will put Section 230 squarely on the Court's docket, and the amicus briefs can be measured by the shovelful. The issue is whether YouTube's recommendation of terrorist videos can ever lead to liability—or whether any judgment is barred by Section 230. Gus and I are on different sides of that question, but we agree that this is going to be a hot case, a divided Court, and a big deal. And, just to show that our foray into cyberlaw was no fluke, Gus and I also predict that the United States Court of Appeals for the District of Columbia Circuit is going to strike down the Allow States and Victims to Fight Online Sex Trafficking Act, also known as FOSTA-SESTA—the legislative exception to Section 230 that civil society loves to hate. Its prohibition on promotion of prostitution may fall to First Amendment fears on the court, but the practical impact of the law may remain. Next, Justin gives us a quick primer on the national security reasons for regulation of submarine cables. Nick covers the leak of the terror watchlist thanks to a commuter airline's sloppy security. Justin explains TikTok's latest charm offensive in Washington. 
Finally, I provide an update on the UK's online safety bill, which just keeps getting tougher, from criminal penalties, to “ten percent of revenue” fines, to mandating age checks that may fail technically or drive away users, or both. And I review the latest theatrical offering from Madison Square Garden—“The Revenge of the Lawyers.” You may root for the snake or for the scorpions, but you will not want to miss it.
In this bonus episode of the Cyberlaw Podcast, I interview Andy Greenberg, long-time WIRED reporter, about his new book, “Tracers in the Dark: The Global Hunt for the Crime Lords of Cryptocurrency.” This is Andy's second author interview on the Cyberlaw Podcast. He also came on to discuss an earlier book, Sandworm: A New Era of Cyberwar and the Hunt for the Kremlin's Most Dangerous Hackers. They are both excellent cybersecurity stories. “Tracers in the Dark,” I suggest, is a kind of sequel to the Silk Road story, which ends with Ross Ulbricht, the Dread Pirate Roberts, pinioned in a San Francisco library with his laptop open to an administrator's page on the Silk Road digital black market. At that time, cryptocurrency backers believed that Ulbricht's arrest was a fluke, and that properly implemented, bitcoin was anonymous and untraceable. Greenberg's book explains, story by story, how that illusion was trashed by smart cops and techies (including our own Nick Weaver!) who showed that the blockchain's “forever” records make it almost impossible to avoid attribution over time. Among those who fall victim to the illusion of anonymity are two federal officers who helped pursue Ulbricht—and then ripped him off; the administrator of AlphaBay, Silk Road's successor dark market; an alleged Russian hacker who made so much money hacking Mt. Gox that he had to create his own exchange to launder it all; and hundreds of child sex abuse consumers and producers. It is a great story, and Andy brings it up to date in the interview as we dig into two massive, multi-billion-dollar seizures made possible by transaction tracing. In fact, for all the colorful characters in the book, the protagonist is really Chainalysis and its competitors, who have turned tracing into a kind of science. We close the talk by exploring Andy's deeply mixed feelings about both the world envisioned by cryptocurrency's evangelists and the way Chainalysis is saving us from that world.
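For readers who want a concrete sense of why those “forever” records defeat anonymity, here is a minimal, purely illustrative sketch of the idea behind transaction tracing. It is not Chainalysis's actual methodology; the ledger, addresses, and amounts below are invented, and real-world tracing layers clustering heuristics, exchange attribution, and mixer analysis on top of this kind of graph walk.

```python
# A toy, append-only "ledger" of (sender, receiver, amount) records.
# On a real blockchain these records are public and permanent, which is
# exactly what lets investigators link addresses years after the fact.
from collections import deque

LEDGER = [
    ("market_wallet", "mixer_hop_1", 50.0),
    ("mixer_hop_1", "mixer_hop_2", 49.5),
    ("mixer_hop_2", "exchange_deposit_addr", 49.0),
    ("unrelated_a", "unrelated_b", 3.2),  # noise unconnected to the target
]

def trace_funds(start_address):
    """Breadth-first walk of the transaction graph from a seed address."""
    reached = {start_address}
    queue = deque([start_address])
    while queue:
        current = queue.popleft()
        for sender, receiver, _amount in LEDGER:
            if sender == current and receiver not in reached:
                reached.add(receiver)
                queue.append(receiver)
    return reached

if __name__ == "__main__":
    # Every address that eventually receives the tainted coins shows up,
    # no matter how many intermediate hops were used.
    print(trace_funds("market_wallet"))
```

The point of the sketch is simply that the graph never forgets: any downstream address remains reachable from the seed address, however many hops or years later the funds move.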
This episode of the Cyberlaw Podcast delves into the use of location technology in two big events—the surprisingly outspoken lockdown protests in China and the Jan. 6 riot at the U.S. Capitol. Both were seen as big threats to the government, and both produced aggressive police responses that relied heavily on government access to phone location data. Jamil Jaffer and Mark MacCarthy walk us through both stories and respond to the provocative question, what's the difference? Jamil's answer (and mine, for what it's worth) is that the U.S. government gained access to location information from Google only after a multi-stage process meant to protect innocent users' information, and that there is now a court case that will determine whether the government actually did protect users whose privacy should not have been invaded. Whether we should be relying on Google's made-up and self-protective rules for access to location data is a separate question. It becomes more pointed as Silicon Valley has started making up a set of self-protective penalties on companies that assist law enforcement in gaining access to phones that Silicon Valley has made inaccessible. The movement to punish law enforcement access providers has moved from trashing companies like NSO, whose technology has been widely misused, to punishing companies on a lot less evidence. This week, TrustCor lost its certificate authority status mostly for looking suspiciously close to the National Security Agency, and Google outed Variston of Spain for ties to a vulnerability exploitation system. Nick Weaver is there to hose me down. The U.K. is working on an online safety bill, likely to be finalized in January, Mark reports, but this week the government agreed to drop its direct regulation of “lawful but awful” speech on social media. The step was a symbolic victory for free speech advocates, but the details of the bill before and after the change suggest it was more modest than the brouhaha implies. The Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA) has finished taking comments on its proposed cyber incident reporting regulation. Jamil summarizes industry's complaints, which focus on the risk of having to file multiple reports with multiple agencies. Industry has a point, I suggest, and CISA should take the other agencies in hand to agree on a report format that doesn't resemble the State of the Union address. It turns out that the collapse of FTX is going to curtail a lot of artificial intelligence (AI) safety research. Nick explains why, and offers reasons to be skeptical of the “effective altruism” movement that has made AI safety one of its priorities. Today, Jamil notes, the U.S. and EU are getting together for a divisive discussion of the U.S. subsidies for electric vehicles (EVs) made in North America but not Germany. That's very likely a World Trade Organization (WTO) violation, I offer, but one that pales in comparison to thirty years of WTO-violating threats to constrain European data exports to the U.S. When you think of it as retaliation for the use of the General Data Protection Regulation (GDPR) to attack U.S. intelligence programs, the EV subsidy is easy to defend. I ask Nick what we learned this week from Twitter coverage. His answer—that Elon Musk doesn't understand how hard content moderation is—doesn't exactly come as news. 
Nor, really, does most of what we learned from Matt Taibbi's review of Twitter's internal discussion of the Hunter Biden laptop story and whether to suppress it. Twitter doesn't come out of that review looking better. It just looks bad in ways we already suspected were true. One person who does come out of the mess looking good is Rep. Ro Khanna (D-Calif.), who vigorously advocated that Twitter reverse its ban, on both prudential and principled grounds. Good for him. Speaking of San Francisco Dems who surprised us this week, Nick notes that the city council in San Francisco approved the use of remote-controlled bomb “robots” to kill suspects. He does not think the robots are fit for that purpose. Finally, in quick hits: Meta was fined $275 million for allowing the scraping of personal data. Nick and Jamil tell us that Snowden has at last shown his true colors. Jamil has unwonted praise for Apple, which persuaded TSMC to make more advanced chips in Arizona than it originally planned. And I try to explain why the decision of the DHS cyber safety board to look into the Lapsus$ hacks seems to be drawing fire.
This episode of the Cyberlaw Podcast delves into the use of location technology in two big events—the surprisingly outspoken lockdown protests in China and the Jan. 6 riot at the U.S. Capitol. Both were seen as big threats to the government, and both produced aggressive police responses that relied heavily on government access to phone location data. Jamil Jaffer and Mark MacCarthy walk us through both stories and respond to the provocative question, what's the difference? Jamil's answer (and mine, for what it's worth) is that the U.S. government gained access to location information from Google only after a multi-stage process meant to protect innocent users' information, and that there is now a court case that will determine whether the government actually did protect users whose privacy should not have been invaded. Whether we should be relying on Google's made-up and self-protective rules for access to location data is a separate question. It becomes more pointed as Silicon Valley has started making up a set of self-protective penalties on companies that assist law enforcement in gaining access to phones that Silicon Valley has made inaccessible. The movement to punish law enforcement access providers has moved from trashing companies like NSO, whose technology has been widely misused, to punishing companies on a lot less evidence. This week, TrustCor lost its certificate authority status mostly for looking suspiciously close to the National Security Agency and Google outed Variston of Spain for ties to a vulnerability exploitation system. Nick Weaver is there to hose me down. The U.K. is working on an online safety bill, likely to be finalized in January, Mark reports, but this week the government agreed to drop its direct regulation of “lawful but awful” speech on social media. The step was a symbolic victory for free speech advocates, but the details of the bill before and after the change suggest it was more modest than the brouhaha suggests. The Department of Homeland Security's Cyber Security and Infrastructure Security Agency (CISA) has finished taking comments on its proposed cyber incident reporting regulation. Jamil summarizes industry's complaints, which focus on the risk of having to file multiple reports with multiple agencies. Industry has a point, I suggest, and CISA should take the other agencies in hand to agree on a report format that doesn't resemble the State of the Union address. It turns out that the collapse of FTX is going to curtail a lot of artificial intelligence (AI) safety research. Nick explains why, and offers reasons to be skeptical of the “effective altruism” movement that has made AI safety one of its priorities. Today, Jamil notes, the U.S. and EU are getting together for a divisive discussion of the U.S. subsidies for electric vehicles (EV) made in North America but not Germany. That's very likely a World Trade Organziation (WTO) violation, I offer, but one that pales in comparison to thirty years of WTO-violating threats to constrain European data exports to the U.S. When you think of it as retaliation for the use of General Data Protection Regulation (GDPR) to attack U.S. intelligence programs, the EV subsidy is easy to defend. I ask Nick what we learned this week from Twitter coverage. His answer—that Elon Musk doesn't understand how hard content moderation is—doesn't exactly come as news. 
Nor, really, does most of what we learned from Matt Taibbi's review of Twitter's internal discussion of the Hunter Biden laptop story and whether to suppress it. Twitter doesn't come out of that review looking better. It just looks bad in ways we already suspected were true. One person who does come out of the mess looking good is Rep. Ro Khanna (D-Calif.), who vigorously advocated that Twitter reverse its ban, on both prudential and principled grounds. Good for him. Speaking of San Francisco Dems who surprised us this week, Nick notes that the city council in San Francisco approved the use of remote-controlled bomb “robots” to kill suspects. He does not think the robots are fit for that purpose. Finally, in quick hits: Meta was fined $275 million for allowing the scraping of users' personal data. Nick and Jamil tell us that Snowden has at last shown his true colors. Jamil has unwonted praise for Apple, which persuaded TSMC to make more advanced chips in Arizona than it originally planned. And I try to explain why the decision of the DHS Cyber Safety Review Board to look into the Lapsus$ hacks seems to be drawing fire.
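A side note on the multi-stage access process mentioned above: public reporting on geofence warrants describes a staged narrowing, from anonymized hits, to a smaller flagged subset, to identities for only a handful of devices. The Python sketch below is a toy illustration of that idea under assumed data structures; it is not Google's actual system or the terms of any real warrant.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List, Set, Tuple

@dataclass
class LocationPing:
    anon_device_id: str   # pseudonymous ID; real identity is withheld at this stage
    lat: float
    lon: float
    seen_at: datetime

def stage1_anonymized_hits(pings: List[LocationPing],
                           bbox: Tuple[float, float, float, float],
                           window: Tuple[datetime, datetime]) -> List[LocationPing]:
    """Step 1: return only pseudonymous pings inside the warrant's geofence and time window."""
    lat_min, lat_max, lon_min, lon_max = bbox
    start, end = window
    return [p for p in pings
            if lat_min <= p.lat <= lat_max
            and lon_min <= p.lon <= lon_max
            and start <= p.seen_at <= end]

def stage2_narrow(hits: List[LocationPing], flagged_ids: Set[str]) -> Set[str]:
    """Step 2: investigators study movement patterns and flag a smaller set of IDs;
    here we simply filter the stage-1 hits down to that flagged subset."""
    return {p.anon_device_id for p in hits if p.anon_device_id in flagged_ids}

def stage3_unmask(narrowed_ids: Set[str], directory: Dict[str, str]) -> Dict[str, str]:
    """Step 3: identities are attached only for the few IDs that survive narrowing."""
    return {i: directory[i] for i in narrowed_ids if i in directory}
```

The design point is that identity resolution happens last, and only for devices that survive the earlier stages.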
We open this episode of the Cyberlaw Podcast by considering the (still evolving) results of the 2022 midterm election. Adam Klein and I trade thoughts on what Congress will do. Adam sees two years in which the Senate does nominations, the House does investigations, and neither does much legislation—which could leave renewal of the critically important intelligence authority, Section 702 of the Foreign Intelligence Surveillance Act (FISA), out in the cold. As supporters of renewal, we conclude that the best hope for the provision is to package it with trust-building measures to restore Republicans' willingness to give national security agencies broad surveillance authorities. I also note that foreign government cyberattacks on our election, which have been much anticipated in election after election, failed once again to make an appearance. At this point, election interference is somewhere between Y2K and Bigfoot on the “things we should have worried about” scale. In other news, cryptocurrency conglomerate FTX has collapsed into bankruptcy amid stolen funds and criminal investigations. Nick Weaver lays out the gory details. A new panelist on the podcast, Chinny Sharma, explains to a disbelieving U.S. audience the U.K. government's plan to scan all the country's internet-connected devices for vulnerabilities. Adam and I agree that it could never happen here. Nick wonders why the U.K. government does not use a private service for the task. Nick also covers This Week in the Twitter Dogpile. He recognizes that this whole story is turning into a tragedy for all concerned, but he is determined to linger on the comic relief. Dunning-Kruger makes an appearance. Chinny and I speculate on what may emerge from the Biden administration's plan to reconsider the relationship between the Cybersecurity and Infrastructure Security Agency (CISA) and the Sector Risk Management Agencies that otherwise regulate important sectors. I predict turf wars and new authorities for CISA in response. The Obama administration's egregious exemption of Silicon Valley from regulation as critical infrastructure should also be on the chopping block. Finally, if the next two Supreme Court decisions go the way I hope, the Federal Trade Commission will finally have to coordinate its privacy enforcement efforts with CISA's cybersecurity standards and priorities. Adam reviews the European Parliament's report on Europe's spyware problems. He's impressed (as am I) by the report's willingness to acknowledge that this is not a privacy problem made in America. Governments in at least four European countries by our count have recently used spyware to surveil members of the opposition, a problem that was unthinkable for fifty years in the United States. This, we agree, is another reason that Congress needs to put guardrails against such abuse in place quickly. Nick notes the U.S. government's seizure of what was $3 billion in bitcoin. Shrinkflation has brought that value down to around $800 million. But it is still worth noting that an immutable blockchain brought James Zhong to justice ten years after he took the money (a toy illustration of why such records are tamper-evident follows this summary). Disinformation—or the appalling acronym MDM (for mis-, dis-, and mal-information)—has been in the news lately. A recent paper counted the staggering cost of “disinformation” suppression during coronavirus times. And Adam published a recent piece in City Journal explaining just how dangerous the concept has become. 
We end up agreeing that national security agencies need to focus on foreign government dezinformatsiya—falsehoods and propaganda from abroad—and not get into the business of policing domestic speech, even when it sounds a lot like foreign leaders we do not like. Chinny takes us into a new and fascinating dispute between the copyleft movement, GitHub, and Artificial Intelligence (AI) that writes code. The short version is that GitHub has been training an AI engine on all the open source code on the site so that it can “autosuggest” lines of new code as you are writing the boring parts of your program. The upshot is that the AI strips off the license conditions, such as copyleft, that are attached to some of that open source code. Not surprisingly, copyleft advocates are suing on the ground that important information has been left off their code, particularly the provision that turns all code that uses the open source into open source itself. I remind listeners that this is why Microsoft famously likened open source code to cancer. Nick tells me that it is really more like herpes, thus demonstrating that he has a lot more fun coding than I ever had. In updates and quick hits: I note that the peanut butter sandwich nuclear spies have been sentenced. Adam celebrates TSMC's decision to build a 3 nanometer semiconductor fab in Arizona. We cross swords over whether the fab capital of the U.S. will be Phoenix or Austin. I celebrate the Russian government's acknowledgment of the Cyberlaw Podcast's reach when it designated long-time regular Dmitri Alperovitch for Russian sanctions. Occasional guest Chris Krebs also makes the list (www.mid.ru). Adam and I flag the Department of Justice's release of basic rules for what I am calling the Euroappeasement court: the quasi-judicial body that will hear European complaints that the U.S. is not living up to human rights standards that no country in Europe even pretends to live up to.
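One technical aside on the bitcoin seizure above: the "immutable blockchain" point comes down to each block committing to the hash of its predecessor, so editing an old transaction breaks every later link. Here is a minimal Python sketch of that tamper-evidence, using toy data rather than Bitcoin's real block and transaction formats.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash the block's canonical JSON encoding."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, txs: list) -> dict:
    """A toy block: just a pointer to the previous block's hash plus transactions."""
    return {"prev_hash": prev_hash, "txs": txs}

# Build a tiny three-block chain.
genesis = make_block("0" * 64, ["coinbase -> A: 50"])
block2 = make_block(block_hash(genesis), ["A -> B: 10"])
block3 = make_block(block_hash(block2), ["B -> C: 5"])
chain = [genesis, block2, block3]

def chain_is_consistent(chain: list) -> bool:
    """Verify each block commits to the hash of the block before it."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(chain_is_consistent(chain))      # True

# Tamper with an old transaction: every later link now fails verification.
genesis["txs"][0] = "coinbase -> Mallory: 50"
print(chain_is_consistent(chain))      # False
```

That property is why a ten-year-old on-chain record can still be trusted enough to anchor a prosecution.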
This episode features Nick Weaver, Dave Aitel, and me covering a Pro Publica story (and forthcoming book) on the difficulties the FBI has encountered in becoming the nation's principal resource on cybercrime and cybersecurity. We end up concluding that, for all its successes, the bureau's structural weaknesses in addressing cybersecurity are going to haunt it for years to come. Speaking of haunting us for years, the effort to decouple U.S. and Chinese tech sectors continues to generate news. Nick and Dave weigh in on the latest (rumored) initiative: cutting off China's access to U.S. quantum computing and AI technology, and what that could mean for U.S. semiconductor companies, among others. We could not stay away from the Elon Musk-Twitter story, which briefly had a national security dimension, due to news that the Biden Administration was considering a Committee on Foreign Investment in the United States review of the deal. That's not a crazy idea, but in the end, we are skeptical that this will happen. Dave and I exchange views on whether it is logical for the administration to pursue cybersecurity labels for cheap Internet of Things devices. He thinks it makes less sense than I do, but we agree that the end result will be to crowd the cheapest competitors from the market. Nick and I discuss the news that Kanye West is buying Parler. Neither of us thinks much of the deal as an investment. And in updates and quick takes: I see a real risk for Google in the Texas attorney general's lawsuit over the company's use of facial recognition. Nick unpacks the dispute between Facebook and The Wire, India's answer to Pro Publica, over The Wire's claim of bias in favor of incumbent Indian politicians. If you had the impression that Facebook has the better of that argument, you're right. And in another platform v. press story, TikTok's parent ByteDance has been accused by Forbes of planning to use TikTok to monitor the location of specific Americans. TikTok has denied the story. I predict that neither the story nor the denial is enough to bring closure. We'll be hearing more.
It's been a jam-packed week of cyberlaw news, but the big debate of the episode is triggered by the White House blueprint for an AI Bill of Rights. I've just released a long post about the campaign to end “AI bias” in general, and the blueprint in particular. In my view, the bill of rights will end up imposing racial and gender (and intersex!) quotas on a vast swath of American life. Nick Weaver argues that AI is in fact a source of secondhand racism and sexism, something that will not be fixed until we do a better job of forcing the algorithm to explain how it arrives at the outcomes it produces. We do not agree on much, but we do agree that lack of explainability is a big problem for the new technology. President Biden has issued an executive order meant to resolve the U.S.-EU spat over transatlantic data flows. At least for a few years, until the anti-American EU Court of Justice finds it wanting again. Nick and I explore some of the mechanics. I think it's bad for the privacy of U.S. persons and for the comprehensibility of U.S. intelligence reports, but the judicial system the order creates is cleverly designed to discourage litigant grandstanding. Matthew Heiman covers the biggest CISO, or chief information security officer, news of the week, the month, and the year—the criminal conviction of Uber's CSO, Joe Sullivan, for failure to disclose a data breach to the Federal Trade Commission. He is less surprised by the verdict than others, but we agree that it will change the way CISOs do their jobs and relate to their fellow corporate officers. Brian Fleming joins us to cover an earthquake in U.S.-China tech trade—the sweeping new export restrictions on U.S. chips and technology. This will be a big deal for all U.S. tech companies, we agree, and probably a disaster for them in the long run if U.S. allies don't join the party. I go back to dig a little deeper on two cases we covered with just a couple of hours' notice last week—the Supreme Court's grant of review in two cases touching on Big Tech's liability for hosting the content of terror groups. It turns out that only one of the cases is likely to turn on Section 230. That's Google's almost laughable claim that holding YouTube liable for recommending terrorist videos is holding it liable as a publisher. The other case will almost certainly turn on when distribution of terrorist content can be punished as “material assistance” to terror groups. Brian walks us through the endless negotiations between TikTok and the U.S. over a security deal. We are both puzzled over the partisanization of TikTok security, although I suggest a reason why that might be happening. Matthew catches us up on a little-covered Russian hack and leak operation aimed at former MI6 boss Richard Dearlove and British Prime Minister Boris Johnson. Matthew gives Dearlove's security awareness a low grade. Finally, two updates: Nick catches us up on the Elon Musk-Twitter fight. Nick's gloating now, but he is sure he'll be booted off the platform when Musk takes over. And I pass on some very unhappy feedback from a friend at the Election Integrity Partnership (EIP), who feels we were too credulous in commenting on a JustTheNews story that left a strong impression of unseemly cooperation in suppressing election integrity misinformation. The EIP's response makes several good points in its own defense, but I remain troubled that the project as a whole raises real questions about how tightly Silicon Valley embraced the suppression of speech “delegitimizing” election results.
The big news of the week was a Fifth Circuit decision upholding Texas's social media regulation law. It was poorly received by the usual supporters of social media censorship, but I found it both remarkably well written and surprisingly persuasive. That does not mean it will survive the almost inevitable Supreme Court review, but Judge Andy Oldham wrote an opinion that could be a model for a Supreme Court decision upholding the Texas law. The big hacking story of the week was a brutal takedown of Uber, probably by the dreaded Advanced Persistent Teenager. Dave Aitel explains what happened and why no other large corporation should feel smug or certain that it cannot happen to them. Nick Weaver piles on. Maury Shenk explains the recent European court decision upholding sanctions on Google for its restriction of Android phone implementations. Dave points to some of the less well publicized aspects of the Twitter whistleblower's testimony before Congress. We agree on the bottom line—that Twitter is utterly incapable of protecting either U.S. national security or even the security of its users' messages. If there were any doubt about that, it would be laid to rest by Twitter's dependence on Chinese government advertising revenue. Maury and Nick tutor me on The Merge, which moves Ethereum from “proof of work” to “proof of stake,” massively reducing the climate footprint of the cryptocurrency (a toy sketch of the two selection rules follows this summary). They are both surprisingly upbeat about it. Maury also lays out a new European proposal for regulating the internet of things—and, I point out, for massively increasing the cost of all those things. China is getting into the attribution game. It has issued a report blaming the National Security Agency for intruding on Chinese educational institution networks. Dave is not impressed. The Department of Homeland Security, in breaking news from 2003, has been keeping the contents of phones it seizes on the border. Dave predicts that the department will have to further pull back on its current practices. I'm less sure. Now that China is regulating vulnerability disclosures, are Chinese companies reluctant to disclose vulnerabilities outside China? The Atlantic Council has a report on the subject, but Dave thinks the results are ambiguous at best. In quick hits: The Senate has confirmed Nate Fick as the first U.S. cyber ambassador. I offer data confirming my cynical view that Apple is not so much concerned about your privacy as it is eager to take over the role of Google and Facebook in the advertising market. Nick lays out the latest Treasury Department guidance on sanctions and Tornado Cash. Maury explains how the Indian government persuaded 50 million Indians to geotag their homes. And I explain why it is in fact possible that the FBI and Silicon Valley are working together to identify conservatives for potential criminal investigation.
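For listeners who want the gist of what The Merge changed: proof of work picks the next block through a brute-force hashing race, while proof of stake picks a proposer with probability proportional to staked funds. The toy Python sketch below illustrates only the two selection rules; Ethereum's actual consensus protocol is far more involved.

```python
import hashlib
import random

def proof_of_work(block_data: str, difficulty: int = 4) -> int:
    """Grind nonces until the block hash has `difficulty` leading zero hex digits.
    This is the energy-hungry part: expected work grows 16x per extra digit."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def proof_of_stake(stakes: dict) -> str:
    """Pick the next proposer with probability proportional to stake.
    No hashing race, hence the tiny energy footprint."""
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return random.choices(validators, weights=weights, k=1)[0]

print(proof_of_work("block #1: A -> B: 10"))
print(proof_of_stake({"alice": 32, "bob": 64, "carol": 32}))
```

The energy difference falls out of the code: the first function grinds through hashes, the second makes a single weighted draw.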
This is our return-from-hiatus episode. Jordan Schneider kicks things off by recapping passage of a major U.S. semiconductor-building subsidy bill, while new contributor Brian Fleming talks with Nick Weaver about new regulatory investment restrictions and new export controls on artificial intelligence (AI) chips going to China. Jordan also covers a big corruption scandal arising from China's big chip-building subsidy program, leading me to wonder when we'll have our version. Brian and Nick cover the month's biggest cryptocurrency policy story, the imposition of OFAC sanctions on Tornado Cash. They agree that, while the outer limits of the sanctions aren't entirely clear, they are likely to show that sometimes the U.S. Code actually does trump the digital version. Nick points listeners to his bracing essay, OFAC Around and Find Out. Paul Rosenzweig reprises his role as the voice of reason in the debate over location tracking and Dobbs. (Literally. Paul and I did an hour-long panel on the topic last week. It's available here.) I reprise my role as Chief Privacy Skeptic, calling the Dobbs/location fuss an overrated tempest in a teapot. Brian takes on one aspect of the Mudge whistleblower complaint about Twitter security: Twitter's poor record at keeping foreign spies from infiltrating its workforce and getting unaudited access to its customer records. In a coincidence, he notes, a former Twitter employee was just convicted of “spying lite,” proving that Twitter is as good at national security as it is at content moderation. Meanwhile, returning to U.S.-China economic relations, Jordan notes the survival of high-level government concerns about TikTok. I note that, since these concerns first surfaced in the Trump era, TikTok's lobbying efforts have only grown more sophisticated. Speaking of which, Klon Kitchen has done a good job of highlighting DJI's increasingly sophisticated lobbying in Washington, D.C. The Cloudflare decision to deplatform Kiwi Farms kicks off a donnybrook, with Paul and Nick on one side and me on the other. It's a classic Cyberlaw Podcast debate. In quick hits and updates: Nick and I cover the sad story of the Dad who photographed his baby's private parts at a doctor's request and, thanks to Google's lack of human appellate review, lost his email, his phone number, and all of the accounts that used the phone for 2FA. Paul brings us up to speed on the U.S.-EU data fight and teases tomorrow's webinar on the topic. Nick explains the big changes likely to come to the pornography world because of a lawsuit against Visa. And why Twitter narrowly averted its own child sex scandal. I note that Google's bias against GOP fundraising emails has led to an unlikely result: less spam filtering for all such emails. And, after waiting too long, Brian Krebs retracts the post about a Ubiquiti “breach” that led the company to sue him.
As Congress barrels toward an election that could see at least one house change hands, efforts to squeeze big bills into law are mounting. The one with the best chance (and better than I expected) would drop $52 billion in cash and a boatload of tax breaks on the semiconductor industry. Michael Ellis points out that this is industrial policy without apology, and a throwback to the 1980s, when the government organized SEMATECH, a name derived from “Semiconductor Manufacturing Technology,” to shore up U.S. chipmaking. Thanks to a bipartisan consensus on the need to fight a Chinese challenge, and a trimming of provisions that tried to hitch a ride on the bill, there now looks to be a clear path to enactment. And if there were doubt about how serious the Chinese challenge in chips will be, an under-covered story revealed that China's chipmaking champion, SMIC, has been making 7-nanometer chips for months without an announcement. That's a process node that Intel and GlobalFoundries, the main U.S. producers, have yet to reach in commercial production. The national security implications are plain. If commercial products from China are cheap enough to sweep the market, even security-minded agencies will be forced to buy them, as it turns out the FBI and Department of Homeland Security have both been doing with Chinese drones. Nick Weaver points to his Lawfare piece showing just how cheaply the United States (and Ukraine) could be making drones. Responding to the growing political concern about Chinese products, TikTok's owner ByteDance has increased its U.S. lobbying spending to more than $8 million a year, Christina Ayiotis tells us—roughly what Google spends on lobbying. In the same vein, Nick and Michael question why the government hasn't come up with the extra $3 billion to fund “rip and replace” for Chinese telecom gear. That effort will certainly get a boost from reports that Chinese telecom sales were offered on especially favorable terms to carriers who service America's nuclear missile locations. I offer an answer: The Obama administration actually paid these same rural carriers to install Chinese equipment as part of the 2009 stimulus law. I cannot help thinking that the rural carriers ought to bear some of the cost of their imprudent investments and not ask U.S. taxpayers to pay them both for installing and ripping out the same gear. In news not tied to China, Nick tells us about the House Energy and Commerce Committee's serious progress on a compromise federal data privacy bill. It is still a doomed bill, given resistance from Dems and GOP in the Senate. I argue that that's a good thing, given the effort to impose “disparate impact” quotas for race, color, religion, national origin, sex, and disability on every algorithm that processes even a little personal data. This is a transformative social engineering project that just one section (208) of the “privacy” bill will impose without any serious debate. Christina grades Russian information warfare based on its latest exploit: hacking a Ukrainian radio broadcaster to spread fake news about Ukrainian President Volodymyr Zelenskyy's health. As a hack, it gets a passing grade, but as a believable bit of information warfare, it is a bust. Christina, Michael, and I evaluate YouTube's new policy on removing “misinformation” related to abortion, and the risk that this policy, like so many Silicon Valley speech suppression schemes, will start out sounding plausible and end in political correctness. 
Nick and I celebrate the Department of Justice's increasing, if occasional, success in seizing cryptocurrency from hackers and ransomware gangs. It may just be Darwin at work, but it's nice to see. Nick offers the recommended long read of the week—Brian Krebs's takedown of the VPN malware supplier, 911. And in updates and quick hits: That Twitter worker arrested for spying on behalf of Saudi Arabia is going to trial. The United Kingdom's Government Communications Headquarters' cryptoskeptics have returned to ask how we can square end-to-end encryption with child safety. I think the answer is “Not well.” The General Data Protection Regulation has consequences: Turns out that schoolkids in Denmark won't be able to use Chromebooks or Google Workspace. And Nick takes a moment to dunk on the Three Arrows founders, whose cryptocurrency company went under in the bust and who are now giving interviews from an undisclosed location. *An obscure Rhode Island tribute to the Industrial Trust Building that was known to a generation of children as the “Dusty Old Trust” building until a new generation christened it the “Superman Building.”
If you want to keep up with Tom, you can follow him on LinkedIn and Twitter @TomRoach, or check out his website www.tomroach.com. If you enjoyed this episode of The Radcast, let us know by visiting our website www.theradcast.com. Check out www.theradicalformula.com. Like, Share and Subscribe to our YouTube channel, or leave us a review on Apple Podcast. Be sure to keep up with all that's radical from @ryanalford @radical_results @the.rad.cast.
Thanks to our sponsor Branded Bills! Use code Radcast20 at www.brandedbills.com for 20% off your purchase!
Social Holidays:
Friday 6.24 - National Handshake Day, National Take Your Dog to Work Day
Saturday 6.25 - National Catfish Day
Monday 6.27 - National Sunglasses Day
Wednesday 6.28 - National Body Piercing Day
Thursday 6.30 - Social Media Day
Trends in the Metaverse
2023 Chevy Corvette Z06 with Zany Green Paint, Matching NFT to Be Auctioned
Snoop Dogg files for new NFT and Metaverse trademark applications
Macy's takes July Fourth bonanza to Web3 with free NFT drop
Bella Hadid Arrives in the Metaverse With a New Line of NFTs
Weekly Marketing News
Nerf's first mascot Murph draws Gritty comparisons, sparks Twitter debate
Campaign Trail: Liquid Death pranks taste testers with pricey — but disgusting — beverages
Lexus brings marketing to ‘Next Level' on Twitch
Facebook and Instagram won't take a cut from creators' revenues until 2024
Mastercard released its first-ever music album yesterday, according to a press release. “Priceless,” available now on Spotify, features original songs by 10 artists from around the globe that incorporate the melody of Mastercard's brand sound.
If you enjoyed this episode of The Radcast, let us know by visiting our website www.theradcast.com. Check out www.theradicalformula.com. Like, Share and Subscribe to our YouTube channel https://www.youtube.com/c/RadicalHomeofTheRadcast or leave us a review on Apple Podcast. Be sure to keep up with all that's radical from @ryanalford @radicalresults @the.rad.cast @christinaroseyasi @nick_weaver_
This episode of the Cyberlaw Podcast begins by digging into a bill more likely to transform tech regulation than most of the proposals you've actually heard of—a bipartisan effort to repeat U.S. Senator John Cornyn's bipartisan success in transforming the Committee on Foreign Investment in the United States (CFIUS) four years ago. The new bill holds a mirror up to CFIUS, Matthew Heiman reports. Where CFIUS regulates inward investment from adversary nations, the new proposal will regulate outward investment—from the U.S. to adversary nations. The goal is to slow the transfer of technical expertise (and capital) from the U.S. to China. It is opposed by the Chinese government and the same U.S. business alliance that angered Senator Cornyn in 2018. If it passes, I predict, it will be as part of must-pass legislation and will be a big surprise to most technology observers. The cryptocurrency world might as well make Lesley Gore its official chanteuse, because everyone is crying at the end of the crypto party. Well, except for Nick Weaver, who does a Grand Tour of all the overleveraged cryptocurrency firms on or over the verge of collapse as bitcoin values drop to $20,000 and below. Scott Shapiro and I trade views on the spate of claims that Microsoft is downgrading security in its products. It would unfortunately make sense for Microsoft to strip-mine value from its standalone proprietary software by stinting on security, we think, but we can't explain why it would neglect cloud security as it is increasingly accused of doing. That brings us to NickTalk about TikTok, and a behind-the-scenes look at what has happened to the TikTok-CFIUS case in the years since former President Donald Trump left the stage. Turns out that CFIUS has been doggedly pursuing pieces of the deal that were still on the table in 2020: localization in the U.S. for U.S. user data and no Chinese access to the data. The first is moving forward, Nick tells us; the second is turning out to be a morass. Speaking of localization, India's determination to localize credit card data has been rewarded. Matthew reports that cutting off new credit card customers did the trick: Mastercard has localized its data, and India has lifted the ban. Scott reports on Japan's latest contribution to the techlash: a law that makes 'online insults' a crime. Scott also reports on a modest bright spot in NSO Group's litigation with Facebook: The Supreme Court answered the company's plea, calling on the U.S. government to comment on whether NSO could claim sovereign immunity for the hacking tools it sells to governments. Nick puts his grave dancing shoes back on to report the bad news for NSO: the Biden administration is trashing a rumored acquisition by U.S.-based L3Harris Technologies. Scott makes short work of the idea that a Google AI chatbot has achieved sentience. Of course, as a trained philosopher, Scott seems a little reluctant to concede that I've achieved sentience. We do agree that it's a hell of a good chatbot. And in quick hits, I note the appointment of April Doss as General Counsel of the National Security Agency after a long series of acting general counsels.
In such an impersonal world, Andy Paul shares his wisdom on connecting back to the humanness of selling and marketing. Andy gives a synopsis of his 4 Pillars of Selling while discussing the difference between branding and performance marketing. You can learn more about Andy Paul at www.andypaul.com and on Twitter @realAndyPaul. If you enjoyed this episode of The Radcast, let us know by visiting our website www.theradcast.com. Check out www.theradicalformula.com. Like, Share and Subscribe to our YouTube channel, or leave us a review on Apple Podcast. Be sure to keep up with all that's radical from @ryanalford @radical_results @the.rad.cast.
Social Holidays
June 17: National Eat Your Veggies Day
June 18: National Sushi Day, National Splurge Day
June 19: Juneteenth, Father's Day
June 21: First day of Summer!, International Yoga Day
June 22: National Onion Ring Day, National Chocolate Eclair Day
June 23: National Pink Day
Trends in the Metaverse
Chobani launches intergalactic oat milk race on Roblox
Yes, advertising students are working on Web3
Timberland brings boot innovation to the metaverse with Fortnite island
Victoria's Secret's Happy Nation brand launches Roblox hub
Weekly Marketing News
Sour Patch Kids taps Twitch streamers for flavor competition
Edgewell debuts Gen Z-focused skincare brand, Fieldtrip
YouTube Reaches 1.5 Billion Users of Shorts, Its TikTok Rival
Facebook is changing its algorithm to take on TikTok
Amazon gets into AR shopping with launch of ‘Virtual Try-On for Shoes'
Bye Bye Internet Explorer
If you enjoyed this episode of The Radcast, let us know by visiting our website www.theradcast.com. Check out www.theradicalformula.com. Like, Share and Subscribe to our YouTube channel https://www.youtube.com/c/RadicalHomeofTheRadcast or leave us a review on Apple Podcast. Be sure to keep up with all that's radical from @ryanalford @radicalresults @the.rad.cast @christinaroseyasi @nick_weaver_
At least that's the lesson that Paul Rosenzweig and I distill from the recent 11th Circuit decision mostly striking down Florida's law regulating social media platforms' content “moderation” rules. We disagree flamboyantly on pretty much everything else—including whether the court will intervene before judgment in a pending 5th Circuit case where the appeals court stayed a district court's injunction and allowed Texas's similar law to remain in effect. When it comes to content moderation, Silicon Valley is a lot tougher on the Libs of TikTok than the Chinese Communist Party (CCP). Instagram just suspended the Libs of TikTok account, I report, while a recent Brookings study shows that the Chinese government's narratives are polluting Google and Bing search results on a regular basis. Google News and YouTube do the worst job of keeping the party line out of searches. Both Google News and YouTube return CCP-influenced links on the first page about a quarter of the time. I ask Sultan Meghji to shed some light on the remarkable TerraUSD cryptocurrency crash. Which leads us, not surprisingly, from massive investor losses to whether financial regulators have jurisdiction over cryptocurrency. The short answer: Whether they have jurisdiction or not, all the incentives favor an assertion of jurisdiction. Nick Weaver is with us in spirit as we flag his rip-roaring attack on the whole field—a don't-miss interview for readers who can't get enough of Nick. It's a big episode for artificial intelligence (AI) news too. Matthew Heiman contrasts the different approaches to AI regulation in three big jurisdictions. China's is pretty focused, Europe's is ambitious and all-pervading, and the United States isn't ready to do anything. Paul thinks DuckDuckGo should be DuckDuckGone after the search engine allowed Microsoft trackers to follow users of its browser. Sultan and I explore ways of biasing AI algorithms. It turns out that saving money on datasets makes the algorithm especially sensitive to the order in which the data is presented (a toy demonstration of that order sensitivity follows at the end of these notes). Debiasing with synthetic data has its own risks, Sultan avers. But if you're looking for good news, here's some: Self-driving car companies who are late to the party are likely to catch up fast, because they can build on a lot of data that's already been collected as well as new training techniques. Matthew breaks down the $150 million fine paid by Twitter for allowing ad targeting of the phone numbers its users supplied for two-factor authentication (2FA) security purposes. Finally, in quick hits: Matthew recommends that we all get popcorn for Spain's planned investigation of its intelligence services following a phone hacking scandal. Sultan and I call time of death for the Klobuchar bill regulating Silicon Valley self-preferencing. It was the most likely of all the Silicon Valley competition bills to pass, but election year tensions and massive lobbying campaigns by industry have made its path to enactment too steep. And Sultan notes that the Commerce Department has published, with relatively little change, its rule restricting exports of hacking tools. Download the 409th Episode (mp3). You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. 
Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families or pets.
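On Sultan's point about order sensitivity: a mistake-driven online learner trained in a single pass really does end up with different weights depending on the order in which it sees the examples. Here is a minimal Python sketch with made-up data; it illustrates the general phenomenon, not any of the specific systems discussed in the episode.

```python
import random

def train_perceptron(examples, lr=0.1, epochs=1):
    """Single-pass online perceptron on 2-D points labeled +1/-1."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != y:                       # mistake-driven update
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

random.seed(0)
pos = [((random.gauss(2, 1), random.gauss(2, 1)), 1) for _ in range(50)]
neg = [((random.gauss(-2, 1), random.gauss(-2, 1)), -1) for _ in range(50)]

sorted_order = pos + neg                 # all of one class, then the other
shuffled = pos + neg
random.shuffle(shuffled)

print("class-sorted order:", train_perceptron(sorted_order))
print("shuffled order:    ", train_perceptron(shuffled))
```

Presenting all of one class before the other changes which mistakes the learner makes, and therefore which updates it applies; shuffling the data, or training for multiple passes, is the usual cure.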