Podcast appearances and mentions of John P Carlin

  • Podcasts: 9
  • Episodes: 14
  • Average duration: 41m
  • New episodes: 1 per month
  • Latest: Jun 28, 2024

POPULARITY

Yearly popularity chart covering 2017–2024


Best podcasts about John P Carlin

Latest podcast episodes about John P Carlin

ITSPmagazine | Technology. Cybersecurity. Society
Book | Cybersecurity Law Fundamentals | Defining 'Reasonable Cybersecurity': A Legal Perspective | A Conversation with Author, Jim Dempsey | Redefining CyberSecurity and Society with Sean Martin and Marco Ciappelli

ITSPmagazine | Technology. Cybersecurity. Society

Jun 28, 2024 (47:02)


Guest: Jim Dempsey, Senior Policy Advisor, Stanford Program on Geopolitics, Technology and Governance [@FSIStanford]; Lecturer, UC Berkeley Law School [@BerkeleyLaw]
On LinkedIn | https://www.linkedin.com/in/james-dempsey-8a10a623/

Hosts: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]
On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/sean-martin

Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast & Audio Signals Podcast
On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli

Episode Notes

Join Sean Martin and Marco Ciappelli for a dynamic discussion with Jim Dempsey as they unearth critical insights into the rapidly evolving field of cybersecurity law. Dempsey, who teaches cybersecurity law at UC Berkeley Law School and serves as Senior Policy Advisor to the Stanford Program on Geopolitics, Technology, and Governance, shares his extensive knowledge and experience on the subject, providing a wealth of information on the intricacies and developments within this legal domain.

Cybersecurity law is a relatively new but increasingly important area of the legal landscape. As Dempsey pointed out, the field is continually evolving, with significant strides made over the past few years in response to the growing complexity and frequency of cyber threats. One key concept highlighted was 'reasonable cybersecurity': a standard that requires organizations to implement adequate security measures, not necessarily perfect ones, to protect against breaches and other cyber incidents. This concept parallels other industries where safety standards are continually refined and enforced.

The conversation also delved into the historical context of cybersecurity law, referencing the Computer Fraud and Abuse Act of 1986, which initially aimed to combat unauthorized access and exploitation of computer systems. Dempsey provided an enlightening historical perspective on how traditional laws have been adapted to the digital age, emphasizing the role of common law and the evolution of legal principles to meet the challenges posed by technology.

One of the pivotal points of discussion was the shift in liability for cybersecurity failures. The Biden administration's National Cybersecurity Strategy of 2023 marks a significant departure from previous policies by advocating for holding software developers accountable for the security of their products, rather than placing the entire burden on end users. This approach aims to incentivize higher standards of software development and greater accountability within the industry.

The discussion also touched on the importance of corporate governance in cybersecurity. With new regulations from bodies like the Securities and Exchange Commission (SEC), companies are now required to disclose material cybersecurity incidents, underscoring the need for collaboration between cybersecurity teams and legal departments to navigate these requirements effectively.

Overall, the episode underscored the multifaceted nature of cybersecurity law, which implicates not just legal frameworks but also technological standards, corporate policies, and international relations. Dempsey's insights elucidated how cybersecurity law is becoming ever more integral to various aspects of society and governance, marking its transition from a peripheral concern to a central pillar in protecting digital infrastructure and information integrity. This ongoing evolution makes it clear that cybersecurity law will continue to be a critical area of focus for legal professionals, policymakers, and businesses alike.

Top Questions Addressed

  • What is the importance of defining 'reasonable cybersecurity,' and how is this standard evolving?
  • How has the shift in legal liability for cybersecurity incidents, particularly under the Biden administration, impacted the software industry?
  • In what ways are historical legal principles, like those from the Computer Fraud and Abuse Act, being adapted to meet modern cybersecurity challenges?

About the Book

First published in 2021, Cybersecurity Law Fundamentals has been completely revised and updated. U.S. cybersecurity law is rapidly changing. Since 2021, there have been major Supreme Court decisions interpreting the federal computer crime law and deeply affecting the principles of standing in data breach cases. The Securities and Exchange Commission has adopted new rules for publicly traded companies on cyber incident disclosure. The Federal Trade Commission revised its cybersecurity rules under the Gramm-Leach-Bliley Act and set out new expectations for all businesses collecting personal information. Sector by sector, federal regulators have issued binding cybersecurity rules for critical infrastructure, while a majority of states have adopted their own laws requiring reasonable cybersecurity controls. Executive orders have set in motion new requirements for federal contractors.

All these changes and many more are addressed in the second edition of Cybersecurity Law Fundamentals, published in April 2024. The second edition is co-authored by John P. Carlin, partner at Paul Weiss and former long-time senior official of the U.S. Justice Department, where he was one of the architects of current U.S. cybersecurity policy.

Watch this and other videos on ITSPmagazine's YouTube Channel. Redefining CyberSecurity Podcast with Sean Martin, CISSP playlist:

Risky Business
Risky Business #740 -- Midnight Blizzard's Microsoft hack isn't over

Risky Business

Mar 13, 2024 (64:14)


On this week's show Patrick and Adam discuss the week's security news, including:

  • Weather forecast in Redmond is still for blizzards at midnight
  • Maybe Change Healthcare wasn't just crying nation-state wolf
  • Hackers abuse e-prescription systems to sell drugs
  • CISA goes above and beyond to relate to its constituency by getting its Ivantis owned
  • VMware drinks from the Tianfu Cup
  • Much, much more

This week's feature guest is John P Carlin. He was principal associate deputy attorney general under Deputy Attorney General Lisa Monaco for about 18 months in 2021 and 2022, and also served as Robert Mueller's chief of staff when Mueller was FBI director. John joins us this week to talk about all things SEC. He wrote the recent amicus brief arguing that the SEC needs to be careful in its action against SolarWinds, and he'll also talk more generally about the new SEC disclosure requirements, which are in full swing.

Rad founder Jimmy Mesta will be along in this week's sponsor segment to talk about some really interesting work they've done in baselining cloud workloads. It's the sort of thing that sounds simple but really, really isn't.

Show notes

  • Risky Biz News: The aftermath of Microsoft's SVR hack is rearing its ugly head
  • Swindled Blackcat affiliate wants money from Change Healthcare ransom - Blog | Menlo Security
  • BlackCat Ransomware Group Implodes After Apparent $22M Payment by Change Healthcare – Krebs on Security
  • Change Healthcare systems expected to come back online in mid-March | Cybersecurity Dive
  • LockBit takes credit for February shutdown of South African pension fund
  • Ransomware gang claims to have made $3.4 million after attacking children's hospital
  • Jason D. Clinton on X: "Fully automated vulnerability research is changing the cybersecurity landscape. Claude 3 Opus is capable of reading source code and identifying complex security vulnerabilities used by APTs. But scaling is still a challenge. Demo: https://t.co/UfLNGdkLp8 This is beginner-level… https://t.co/mMQb2vYln1"
  • Jason Koebler on X: "Hackers are hacking doctors, then using their digital prescription portals to "legitimately" prescribe themselves & their customers adderall, oxy, and other prescription drugs https://t.co/6elTKQnXSB"
  • How Hackers Dox Doctors to Order Mountains of Oxy and Adderall
  • CISA forced to take two systems offline last month after Ivanti compromise
  • VMware sandbox escape bugs are so critical, patches are released for end-of-life products | Ars Technica
  • A Close Up Look at the Consumer Data Broker Radaris – Krebs on Security
  • Brief of Amici Curiae Former Government Officials, Securities and Exchange Commission v SolarWinds Corp

Amanpour
Amanpour: Joseph Stiglitz, Noah Feldman, John P. Carlin

Amanpour

Jan 21, 2020 (56:21)


Joseph Stiglitz, the Nobel Prize-winning economist, joins Christiane Amanpour from the World Economic Forum in Davos, Switzerland, to explain why President Trump is misleading the public with the metrics he is using to measure the growth of the U.S. economy. He also lays out the case for listening to climate change scientists and creating a green economy that will lead the way to a boom the economy has yet to see. Noah Feldman, constitutional scholar, unpacks the case for impeachment against Trump and how the Senate, led by majority leader Mitch McConnell, is likely going to conduct its partisan trial. Our Hari Sreenivasan sat down with John P. Carlin, former U.S. Assistant Attorney General for National Security, to discuss the ever-rising threat cybercrime poses to democracies around the world.

Steingarts Morning Briefing – Der Podcast
“Alles fängt an zu kippen” (“Everything is starting to tip”)

Steingarts Morning Briefing – Der Podcast

Nov 27, 2019 (30:05)


The big interview: WDR journalist Yvonne Willicks reports on food-industry profits made at the expense of consumers. Hans Jürgen Papier, former President of the Federal Constitutional Court, warns of an erosion of the rule of law. In her podcast “The Americans,” Chelsea Spieker meets cyberspace expert and book author John P. Carlin. Budget week in the Bundestag: the “schwarze Null” (balanced budget) must stand! A gummy bear crisis at HARIBO. The BBC names the 100 best films directed by women.

STEAL THIS SHOW
2: On The Frontline Of The Code War, with John P. Carlin

STEAL THIS SHOW

Nov 14, 2019 (52:12)


In this episode Jamie meets up with John P. Carlin (https://www.mofo.com/people/john-carlin.html), author of Dawn of the Code War (https://www.amazon.com/Dawn-Code-War-Americas-Against-ebook/dp/B079M8813N) and former Assistant Attorney General for the U.S. Department of Justice’s National Security Division, to discuss the ongoing network war with China -- one that's about to ratchet up as 5G connects billions of devices via a technology heavily dependent on China's Huawei. What does it mean to wage war in the era of distributed networks? How do networks change the very idea of 'Command and Control' towards leaderless, non-hierarchical memetic structures? We dig into the 'crowdsourced terrorism' of Al Qaeda (https://en.wikipedia.org/wiki/Al-Qaeda) and look at some similarities with Anonymous and the QAnon (https://en.wikipedia.org/wiki/QAnon) phenomenon. Finally, we discuss the widespread idea that there's a kind of break with authority going on in the online era—what could be described as an 'epistemological crisis' created by our hyper-informational environment—one that's being exploited and amplified by various lords of chaos to create new and unpredictable political realities.

Teleforum
Book Review: Dawn of the Code War

Teleforum

Apr 15, 2019 (48:19)


In Dawn of the Code War, authors John P. Carlin and Garrett M. Graff describe how the Internet has been weaponized by hackers to facilitate election tampering, theft of intelligence files, and many other forms of online attack. The digitization of our economy gives our enemies more avenues to attack us. Carlin and Graff explain the unusual difficulties America has faced in cyber warfare, partly because our adversaries do not abide by the same rules of engagement online. The United States government does not have a developed framework for how to respond to these various attacks, and many of these technological developments are still unfamiliar. Our understanding of the threats we are facing is essential to combatting them, and this book makes it clear how necessary winning the code war is.

Featuring: John P. Carlin, Partner, Morrison & Foerster LLP, and former Assistant Attorney General for the U.S. Department of Justice's (DOJ) National Security Division (NSD)

Teleforum calls are open to all dues-paying members of the Federalist Society. To become a member, sign up on our website. As a member, you should receive email announcements of upcoming Teleforum calls which contain the conference call phone number. If you are not receiving those email announcements, please contact us at 202-822-8138.

Cybersecurity and Technology - Audio
Dawn of the Code War

Cybersecurity and Technology - Audio

Jan 15, 2019 (88:29)


Please join us for an armchair discussion on responses to national security threats in cyberspace from the Department of Justice, featuring John P. Carlin, Former Assistant Attorney General, National Security Division; and John C. Demers, Assistant Attorney General, National Security Division, on Tuesday, January 15th from 5:30 pm - 7:00 pm at the CSIS headquarters.

In his new book, Dawn of the Code War, Mr. Carlin discusses the rise of cyber threats from U.S. adversaries, and the strategies that have been developed to combat them. At this event, Mr. Carlin and Mr. Demers will discuss the nature of these threats and, more importantly, the U.S. response, including indictments of Russian, Chinese, Iranian, and North Korean nationals. The discussion will focus on the actions the Department of Justice is already taking to combat cyber threats and mitigate risks, and how this could change the cyber threat environment.

Speakers:

  • John P. Carlin, Partner, Morrison & Foerster LLP; Former Assistant Attorney General, National Security Division, Department of Justice
  • John C. Demers, Assistant Attorney General, National Security Division, Department of Justice

Moderated by James A. Lewis, Senior Vice President and Director, CSIS Technology Policy Program.

This event is made possible through general support to CSIS.

Inside Out Security
John P. Carlin: Emerging Threats (Part 4)

Inside Out Security

Jun 15, 2017 (12:52)


In this concluding post of John Carlin's Lessons from the DOJ, we cover a few emerging threats: cyber as an entry point, hacking for hire, and cybersecurity in the IoT era. One of the most notable anecdotes is John's description of how easy it was to find hacking-for-hire shops on the dark web. Reviews of the most usable usernames and passwords and the most destructive botnets are widely available to shoppers. Also, expect things to get worse before they get better. With the volume of IoT devices now available that were developed without security by design, we'll need to find a way to mitigate the risks.

Transcript

Cindy Ng: You may have been following our series on John Carlin's work during his tenure as Assistant Attorney General for the U.S. Justice Department. He described cyber as an entry point as one of the threats, using our latest election process as an example. But now, John has a few more emerging threats to bring to your attention: hacking for hire and cybersecurity in the IoT era. One of John's striking descriptions is how easy it is to find hacking-for-hire shops on the dark web. Reviews of the most usable usernames and passwords and the most destructive botnets are widely available to shoppers. Expect things to get worse before they get better. With the volume of IoT devices created without security by design, we'll need to find a way to mitigate the risk.

John Carlin: Let me move to emerging threats. We've talked about cyber as an entry point, a way that an attack can start, even when the cyber event isn't really the critical event in the end. Our electoral system and confidence in it wasn't damaged because there was an actual attack on the voting infrastructure; there was an attack where they stole some information that was relatively easy to steal and then combined it with a whole campaign of essentially weaponizing information, and that caused the harm.

The other trend we're seeing is the hacking for hire. I really worry about this one. I think over the next five years, what we're seeing is, the dark web now, it's so easy to use. Well, I don't recommend this necessarily, but when you go on it, you see sophisticated sales bazaars that look as customer-friendly as Amazon. And when I say that I mean it literally looks like Amazon. I went on one site and it's complete with customer reviews, like, "I gave him four stars, he's always been very reliable, and 15% of the stolen user names and passwords that he gives me work, which is a very high rate." Another one will be like, "This crook's botnet has always been really good at doing denial-of-service attacks, five stars!" So that's the way it looks right now on the dark web, and that's because they're making just so much, so much money they can invest in an infrastructure and it starts to look as corporate as our private companies.

What I worry about is that, because those tools are for rent, to use the botnet example, you know, one of the cases that we did was the Iranian Revolutionary Guard Corps attack on the financial sector. They hit 46 different financial institutions with a distributed denial-of-service attack, taking advantage of a huge botnet of hundreds and hundreds of thousands of compromised computers. They knocked financial institutions, which have a lot of resources, offline, affected hundreds of thousands of customers, and cost tens of millions of dollars. Right now, on the dark web, you can rent the use of an already-made botnet. So the criminal group creates the botnet; they're not the ones who necessarily use it.
Right now they tend to rent it to other criminal groups who will do things like GameOver Zeus, a case that we did. You know, they'll use it for profit, they'll use it for things like injecting malware that will lead to ransomware, or injecting malware for a version of extortion, essentially, where they were turning on people's video cameras and taking naked pictures, and then charging money, or all the other criminal purposes you can put a botnet to. But it doesn't take much imagination to see how a nation state or a terrorist group could just rent what the criminal groups are doing to cause an attack on your companies.

In terms of emerging threats, you're certainly tracking the Internet of Things era. I mean, you think about how far behind we are given where the threat is, just because we moved very, very quickly, putting everything we value from analog to digital space and connecting it to the internet over a roughly 25-year period. We're now on the verge of an even more transformative evolution, where we connect not just information but all the devices that we need, everything from the pacemakers in our hearts. The original versions that were rolled out, and actually this is still an issue, for good medical reasons they wanted to be able to track in real time information coming out of people's hearts, but they rolled it out unencrypted, because they just don't think about it when it comes to the Internet of Things. They were testing whether it worked, which it did, but they weren't testing whether it would work, with security by design, if a bad guy, a crook, a terrorist, or a spy wanted to exploit them. Drones in the sky were rolled out with the same problem; the original commercial drones were not encrypted. So, again, a 12-year-old could kill someone by taking advantage of the early pacemakers, and they could with drones as well. And then there are the automobiles on our roads. Forgetting the self-driving vehicle, estimates are that 70% of the cars on the road by 2020 are essentially gonna be computers on wheels. One of the big cases we dealt with was the proof-of-concept hack where someone got in through the entertainment system to the steering and braking system, which then led to a 1.4 million car recall of Jeep Cherokees. So that's the smart device used to cause new types of harm, from car accidents, to drones in the sky, to killing people on pacemakers. But we also just have the sheer volume, which is exponentially increasing, and we saw the denial-of-service attack that we've all been warning about for a period of time take place this October; it knocked down essentially internet connectivity for a short period of time, because there were just so many devices, from video cameras, etc., that are rolled out with defaults and can be abused. So, hopefully there will be regulatory and public policy focus to try to fix that. In the interim, though, my bottom line is, things are gonna get worse before they get better on the threat side, which is why we need to focus on the risk side.

We won't spend too much time on what government's been doing. We've talked about some of it a little bit already, but the idea is, we need to, one, bring deterrence to bear, make the bad guys feel pain. Because as long as they're getting away completely cost-free, offense is gonna continue to vastly outstrip defense. Number two, we gotta figure out a way to share information better with the private sector.
And I think you're hopefully seeing some of that now, where government agencies, FBI, Justice, Secret Service, are incentivized to try to figure out ways to increase information sharing for information that, for many, many years now, has been kept only on the classified side of the house. And that's a whole new approach for government, and it's just in its early steps. But we've been moving too slowly given where the threat is; we need to do more, faster. You know, just a couple weeks ago we heard the Director of the FBI say, "Okay, they came after us in 2016 in the Presidential election, but I'm telling you they're gonna do it again in 2020," and the head of the National Security Agency agreed. That's in just one sphere, so I think we're definitely in a trend now where we need to move faster in government.

What's law enforcement doing? They're increasing the cooperation. They're doing this new approach on attribution. When I was there, we issued towards the end a new presidential policy directive that tried to clarify who's in charge of threat, assets, and intel support, to make it easier. That said, if any of you guys actually looked at the attachment on that, it had something like 15 different phone numbers that you're supposed to call in the event of an incident. And so, right now, what you need to do is think ahead on your crisis and risk mitigation plan, and know by name and by face who you'd call in law enforcement, by having an incident response plan that you test before the worst happens. And there's reasons...I'm not saying in every case do it, but there are reasons to do it, and it can increase the intelligence you get back. It's a hedge against risk: if what you thought was a low-level act, like a criminal act, the Ferizi example, turns out to be a terrorist, at least you notified somebody. You also want to pick a door, and this sometimes requires getting assistance. You want to pick the right door in government, one that ideally minimizes the regulatory risk to your company, depending on what space you're in, so that the information that you provide them, as a victim, isn't used against you to say that you didn't meet some standard of care. Even with the shift of administration, I know generally there's talk about trying to decrease regulations under this administration, but when it comes to cyber, everyone's so concerned about where the risk is that for a period of time I think we're gonna continue to see a spike, which will hopefully level off at some point as each of the regulators tries to figure out a way they can move into this space.

So, what can you do? One, most importantly, treat this as an inevitability. You know there's no wall high enough, deep enough to keep the dedicated adversary out, and that means changing the mindset. Just like many other areas, this is a risk management, incident response area. Yes, you should focus on the front end on trying to minimize their ability to get in, but you also need to assume that they can, and then plan what's gonna happen when they're inside your perimeter. That means knowing what you've got, knowing where it is, and assuming they can get into your system. If I have crown jewels, I shouldn't put them in a folder that's called "Crown Jewels"; maybe put something else in there that will cause the bad guy to steal the wrong information. There may be a loss of efficiency, which is why it's a risk mitigation exercise.
I mean, you need to bring the business side in to figure out, how can I, assuming they get in, make it hardest for them to damage what I need most to get back to business. Sony, despite all the public attention, their share price was up that spring, and that's because they knew exactly who and how to call someone in the government. They actually had a good internal corporate process in place in terms of who was responsible for handling the crisis and crisis communication.

Second, assuming again that there are sophisticated adversaries that get more sophisticated, and that they can get in if they want to, you need to have a system that's constantly monitoring internally what's going on from a risk standpoint, because the faster you can catch what's going on inside your system, the faster you can have a plan to either kick them out, remediate it, or, if you know the data is already lost, start having a plan to figure out how you can respond to it, whether it's anything from intellectual property to salacious emails inside your system. And that way, you quickly identify and correct anomalies and reduce the loss of information.

Implement access controls; I can't hit this hard enough. This is true in government as well, by the way, along with the private sector. The default was that it's just easier to give everybody access. And I think, when it came to very highly regulated types of information, maybe literally, if you had source code or key intellectual property, people knew to try to limit that. But for all that other type of sensitive peripheral information, pricing discussions, etc., in my experience a majority of companies don't implement internal controls as to who has access and who doesn't, and part of the reason for that is because it's too complicated for the business side, so they don't pay attention to doing it, even though you can limit access to sensitive information. Then you can focus your resources, for those who have access, on how they can use it, and really focus on training them and target your training efforts to those who have access to the highest-risk information. Multi-factor authentication, of course, is becoming standard.

What else can you do? Segment your network. Many of the worst incidents we have are because the networks were essentially flat and we watched bad guys cruise around the network. Supply chain risk: a large majority, Target, Home Depot, etc., are a different version of the supply chain but the same idea. Once you get your better practices in place, the risk can sometimes be down the supply chain or with a third-party vendor, but it's your brand that suffers in the event of a breach. Train employees. We talked about how access controls can help you target that training. And then have an incident response plan and exercise it. Sometimes you'll go in and there will be an incident response plan, but it's like hundreds of pages, and in an actual incident nobody's going to look at it. So it needs to be simple enough that people can use it, accessible both on the IT, technical side of the house and the business side of the house, and then exercised; as you try to do tabletop exercises, you start spotting issues that really are more corporate governance issues inside the company. And we've talked a lot about building relationships with law enforcement, and the idea is to know by name and by face, pre-crisis, who it is that you trust in law enforcement, and have that conversation with them.
This is easier to do if you're a Fortune 500 company with the ability to get their attention. If you're smaller, you may have to do it in groups or through an association, but have a sense of who it is that you'd call, and then you need to understand who in your organization will make that call.
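To make the "constantly monitor internally and quickly identify anomalies" advice in the transcript above a bit more concrete, here is a deliberately minimal sketch, not something discussed in the episode, of one way such a signal can be computed: compare each user's file-access count for the day against that user's own historical baseline and flag large deviations. The data structures, the three-standard-deviation threshold, and the user names are illustrative assumptions; a real monitoring program would rely on much richer telemetry and tuning.

```python
# Illustrative sketch only (not from the talk): flag users whose daily
# file-access count jumps far above their own historical baseline.
from statistics import mean, pstdev

def anomalous_users(history, today, threshold=3.0):
    """Return users whose count today exceeds baseline mean + threshold * stddev.

    history: dict mapping user -> list of past daily access counts
    today:   dict mapping user -> today's access count
    """
    flagged = []
    for user, counts in history.items():
        if len(counts) < 5:
            continue  # not enough baseline data to judge this user
        mu, sigma = mean(counts), pstdev(counts)
        # Guard against a perfectly flat baseline producing sigma == 0.
        if today.get(user, 0) > mu + threshold * max(sigma, 1.0):
            flagged.append(user)
    return flagged

# Hypothetical example: a user who normally touches ~40 files suddenly touches 5,000.
baseline = {"alice": [38, 42, 40, 41, 39, 40], "bob": [120, 130, 125, 118, 122, 127]}
print(anomalous_users(baseline, {"alice": 5000, "bob": 126}))  # -> ['alice']
```

The design choice here is simply per-user baselining rather than a single global threshold, which mirrors the point in the talk that what counts as "anomalous" depends on what normal looks like inside your own environment.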

Inside Out Security
John P. Carlin: Ransomware & Insider Threat (Part 3)

Inside Out Security

Jun 5, 2017 (9:58)


We continue our series with John Carlin, former Assistant Attorney General for the U.S. Department of Justice's National Security Division. This week, we tackle ransomware and insider threat. According to John, ransomware continues to grow, with no signs of slowing down. Not to mention, it is a vastly underreported problem. He also addressed the confusion over whether one should engage law enforcement or pay the ransom. And even though recently the focus has been on ransomware as an outside threat, let's not forget insider threat, because an insider can potentially do even more damage.

Transcript

Cindy Ng: We continue our series with John Carlin, former Assistant Attorney General for the U.S. Justice Department. This week we tackle ransomware and insider threats. According to John, ransomware is a vastly under-reported problem. He also addressed the confusion over whether one should engage law enforcement or pay the ransom. And even though, lately, we've been focused on ransomware as an outside threat, one area that doesn't get as much focus is insider threat. And that's worrisome, because an insider can potentially do even more damage.

John Carlin: Ransomware was skyrocketing when I was in government. In the vast, vast majority of the cases, as I said earlier, we were hearing about them with the caveat that they were asking us not to make it public, and so it is also vastly under-reported. I don't think there's anywhere near enough reporting right now. I think Verizon attempted to do a good job, and there've been other reports that have attempted to get a firm number on how big the problem is. I think the most recent example that's catching people's attention is Netflix.

This is another area where I think too few companies right now are thinking through how they'd engage law enforcement. And I don't think there's an easy answer. I mean, there's a lot of confusion out there as to whether you should or shouldn't pay. And there was such confusion over FBI folks, when I was there, giving guidance saying, "Always pay." The FBI issued guidance, and we have a link to it here, that officially says they do not encourage paying a ransom. That doesn't mean, though, that if you go into law enforcement they're gonna order you not to pay. Just like they have for years in kidnapping cases, I think they may give you advice. They can also give back valuable information. Number one, if it's a group they've been monitoring, they can tell you, as they've tried to move more towards the customer-service model, whether they've seen that group attack other actors before, and if they have, whether, if you pay, they're likely to go away or not. Because some groups just take your money and continue. With some groups, the group who's asking for your money isn't the same group that hacked you, and they can help you on that as well. Secondly, just as risk reduction, as the example I gave earlier of Ferizi shows, or the Syrian Electronic Army, you can end up, number one, violating certain laws when it comes to the Treasury, so-called OFAC, and material support for terrorism laws, by paying a terrorist or other group that's designated as a bad actor. But more importantly, I think for many of you, beyond that potential criminal and regulatory risk, is the brand. You do not want a situation where it becomes clear later that you paid off a terrorist. And so, by telling law enforcement what you're doing, you can hedge against that risk.
The other thing you need to do has nothing to do with law enforcement, but is resilience: trying to figure out, "Okay, what are my critical systems, and what's the critical data that could embarrass us? Is it locked down? What would be the risk?" As the most recent public example, Netflix, has shown, some companies decide that season 5 of "Orange is the New Black" is not worth paying off the bad guy.

We've been focusing a lot on outside actors coming inside, and something I think has gotten, or sometimes gets, too little attention is the insider threat. That's another trend. As we focus on how, when it comes to outsider threats, the approach needs to change, and instead of focusing so much on perimeter defense, we really need to focus on understanding what's inside a company, what the assets are, and what we can do to complicate the life of a bad guy when they get inside your company. Risk mitigation, in other words. A lot of the same expenditures that you would make, or the same processes that you put in place to help mitigate that risk, are also excellent at mitigating the risk from insider threat. And that's where you can get an economy of scale on your implementation.

When I took over the National Security Division, my first week, I think, was the Boston Marathon attack. But then, shortly after that, a fellow named Snowden decided to disclose, in bulk, information that was devastating to certain government agencies across the board. And one of my last acts was indicting another insider and contractor at the National Security Agency who'd similarly taken large amounts of information, in October of last year. So, if I can share one lesson, having lived through it on the government end of the spectrum, it's that sometimes our best agencies, which are very good at erecting barriers and causing complications for those who try to get at them from outside the wall, didn't have the same type of protections in place inside the perimeter, among those that were trusted. And that's something we just see so often in the private sector as well. In terms of the amount of damage they can do, the insider may actually be the most significant threat that you face. There's also the blended threat: the accidental or negligent threat that happens from a human error, and then that's the gap that, no matter how good you are on the IT, the actor exploits.

In order to protect against that, you really need to figure out systems internally for flagging anomalous behavior, knowing where your data is, knowing what's valued inside your system, and then putting access controls in place. There's a recent study that Varonis did, and this is completely consistent with my experience both in government, in terms of government systems, and in terms of providing assistance to the private sector and now giving advice to the private sector: it did not surprise me, although it's disturbing, that nearly half of the respondents indicated that at least 1,000 sensitive files are open to every employee, and that one fifth had 12,000 or more sensitive files exposed to every employee. I can't tell you how many of these I've responded to in crisis mode, where all the lawyers, etc., are trying to figure out how to mitigate risk and who they need to notify because their files may have been stolen, whether it's business customers or their consumer-type customers. And then they realize, too late at that point, that they didn't have any access controls in place.
This ability to put in access controls is vital both when you have an insider and also because it shouldn't matter how the person gained access to your system, whether they came from the outside in or it's an insider. It's the same risk. And so, what I've found, and the OPM hack gave us an example of this, is that what often happens is the IT side knows how to secure the information or put in access controls, but there's not an easy way to plug in your business side of the house. Nearly three-fourths of employees say they know they have access to data they don't need to see. More than half said it's frequent or very frequent. And then, on the other side of the house, on IT, they know that three-quarters of the issues that they're seeing are insider negligence. So, you combine over-access with the fact that people make mistakes, and you get a witches' brew in terms of trying to mitigate risk. So, what you should be looking for there is, "How can I make it as easy as possible to get the business side involved?" They can determine who gets access and who doesn't get access. And the problem right now, I think, with a lot of products out there, is that it's too complicated, and so the business side ignores it and then you have to try to guess at who should or shouldn't have access. All they see is, "Oh, it's easier just to give everybody access than it is to try to think through and implement the product. I don't know who to call or how to do it."

OPM was a major breach inside the government where, according to public reporting, China, and the government has not officially said one way or the other, so I'm just relying on public reporting, breached our systems, our government systems. And one of the problems was they were able to move laterally, in a way, and we didn't have a product in place where we could see easily what the data was. And then it turned out afterwards, as well, that there was too much access when it came to the personally identifiable information. You had hundreds of thousands of government employees who ultimately had to get notice because you just couldn't tell what had or hadn't been breached. When we went to fix OPM, and this is another corporate governance lesson, three times the President tried to get the Cabinet to meet so that the business side would help own this risk and decide what data people should have access to, recognizing that when you're doing risk mitigation there may be a loss of efficiency, but you should try to make a conscious decision over what's connected to the internet, and if it's connected to the internet, who has access to it and what level of protection it has, recognizing, you know, that as you slim access there can be a loss of efficiency. In order to do that, the person who's in charge is not the Chief Information Officer, it is the Cabinet secretary. It is the Attorney General or the Secretary of State. The President tried three times to convene his Cabinet. Twice, I know for Justice we were guilty, because they sent me and our Chief Information Officer; the Cabinet members didn't show up because they figured, "This is too complicated. It's technical. I'm gonna send the cyber IT people." The third time, the Chief of Staff to the President had to send a harsh email that said, "I don't care who you bring with you, but the President is requiring you to show up to the meeting because you own the business here, and you're the only person who can decide who has access, who doesn't, and where they should focus their efforts."
So, for all the advice we were giving private companies at the time, we were good at giving advice from government. We weren't as good, necessarily, at following it. That's simply something we recommend people do.
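As a concrete companion to the over-access statistics discussed in the transcript above (sensitive files open to every employee), here is a minimal, hypothetical sketch of the kind of audit that surfaces over-broad access before you tighten it: walk a shared directory and list files whose POSIX permission bits let any user on the system read them. The directory path is a placeholder, and this is only an illustration of the idea; real environments such as Windows file shares or cloud storage would need ACL- or API-based checks instead.

```python
# Minimal illustrative sketch (not from the talk): list files under a shared
# directory that any local user can read, i.e. the "other" read bit is set.
import os
import stat
import sys

def find_world_readable(root):
    """Yield file paths under `root` whose permissions grant read access to everyone."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if mode & stat.S_IROTH:
                yield path

if __name__ == "__main__":
    share = sys.argv[1] if len(sys.argv) > 1 else "."  # hypothetical share root
    for exposed in find_world_readable(share):
        print(exposed)
```

Run against a file-share mount, the output is a starting list for the business side to review, which is the collaboration between IT and the business that the talk keeps returning to.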

Inside Out Security
John P. Carlin: Economic Espionage & Weaponized Information (Part 2)

Inside Out Security

May 25, 2017 (15:10)


In part two of our series, John Carlin shared with us lessons on economic espionage and weaponized information. As former Assistant Attorney General for the U.S. Department of Justice's National Security Division, he described how nation state actors exfiltrated data from American companies, costing them hundreds of billions of dollars in losses and more than two million jobs. He also reminded us how important it is for organizations to work with the government as he took us down memory lane with the Sony hack. He explained how destructive an attack can be when it uses soft targets, such as email, that do not require sophisticated techniques.

Transcript

Cindy Ng: In part two of John Carlin's talk, we learn more about how nation state actors exfiltrate data from American companies, costing them hundreds of billions of dollars in losses and more than two million jobs. He also took us down memory lane, describing how the Sony hack showed us how successful an attack can be by using soft targets, such as email, that do not require sophisticated techniques.

John Carlin: Let me talk a little bit about economic espionage and how we moved into this new space. When I was a computer-hacking prosecutor prosecuting criminal cases, we were plenty busy. And I worked with an FBI squad, and the squad that I worked with did nothing but criminal cases. There was an intelligence squad across the hall, and they were behind a locked, secured, compartmented door. The whole time I was doing criminal cases, about 10, 15 years ago, we never went to the other side of that door. If an agent switched squads, they just disappeared behind that locked, secured door.

I then went over to the FBI to be Chief of Staff to the director, FBI Director Mueller. And when I was there, that door opened and we started to see day-in, day-out what nation state actors were doing to our country. And what we saw were state actors, and we had a literal jumbotron screen the size of a movie theater where we could watch it through a visual interface in real time. And we were watching state actors hop into places like universities, go from the university into your company, and then we would literally watch the data exfiltrate out. As we were watching this, it was an incredible feat of intelligence, but we also realized, "Hey, this is not success. We're watching billions and billions of dollars of what U.S. research and development, and our allies, have developed, in losses. We're seeing millions of jobs lost." One estimate has it at more than two million jobs. "What can we do to make it clear that the threat isn't about consumer data or IP, the threat is about everything that you value on your system? And how do we make clear that there's an urgent need to address this problem?"

What we did, when I came back to Justice to lead up the National Security Division, is we looked to start sharing information within government. So, for the first time, every criminal prosecutor's office across the country, all 93 U.S. Attorneys' offices, now has someone who's trained on the bits and the bytes and the Electronic Communications Privacy Act on the one hand, and on the other hand on how to handle sensitive sources and methods, and who is encouraged to see, can you bring a case? This only happened in 2013. This approach is still very, very new. The FBI issued an edict that said, "Thou shalt share what was formerly only on the intelligence side of the house with this new, specially-trained cadre." They then were redeployed out to the field.
It's because of that change in approach that we did the first case of its kind, the indictment of five members of the People's Liberation Army, Unit 61398. This was a specialized unit who, as we laid out in the complaint, they were hitting companies like yours and they were doing it for reasons that weren't national security, they weren't nation-state reasons. They were doing things like...Westinghouse was about to do a joint venture with a partner in China, and right before they were gonna into business together, you watched as the Chinese uniformed members of the People's Liberation Army, the second largest military in the world, went in, attacked their system and instead of paying to lease the lead pipe as they were supposed to do the next day, they went in and stole the technical design specifications so they could get it for free. That's one example laid out in the complaint. Or to give another example, and this is why it's not the type of information that is required to be protected by regulation, like consumer data or intellectual property. Instead, for instance, they went in to a solar company, it was a U.S. subsidiary of a German multi-national and they stole the pricing data from that company. Then the Chinese competitor, using this information stolen by the People's Liberation Army, price dumped. They set their product just below where the competitor would be. That forced that competitor into bankruptcy. To add insult to injury, when that company sued them for the illegal practice of price dumping, they went and stole the litigation strategy right out from under them. When people said, "Why are you indicting the People's Liberation Army? It isn't state-to-state type activity. Everybody does it, what's the big deal? Criminal process is the wrong way to do it." The reason why we made it public were a couple. One was to make public what they were doing so that businesses would know what it was to protect themselves. Second, what they were doing was theft and that's never been tolerated. And so, there's a concept in U.S. law of what's called an easement. This is the idea that if you let someone walk across your lawn long enough, in U.S. law, they get what's called an easement. They get the right to walk across your lawn. That's why people put up no trespassing signs. International law, which is primarily a law of customary law, works the same way. And as long as we were continuing to allow them to steal day-in, day-out, the Director of the FBI called them like a drunken gorilla because they were so obvious in terms of who they were. They didn't care if they got caught because they were so confident there'd be no consequence. Then, we are setting international law, we are setting the standard as one where it's okay. So, in some respects, this case was a giant "No trespass" sign, "Get off our lawn." The other thing that we did, though, was we wanted to show the seriousness, that this was their day job. And so, we showed that the activity started at 9 a.m. Beijing time, that it went at a high level from 9:00 to noon Beijing time, it decreased from noon to 1:00, it then increased again from 1:00 to around 6 p.m. Beijing time, decreased on Chinese holidays, weekends. This was the day job of the military, and it's not fair and it can't be expected that a private company alone can defend itself against that type of adversary. This single case had an enormous impact on Chinese behavior, and I wanna move a little bit to the next major cases that occurred. 
So, that's economic espionage, theft for monetary value. We also started seeing some of the first destructive attacks. Everyone remembers Sony, and many people think of it as the first destructive attack on U.S. soil. It really wasn't. The first destructive attack was on Sands Casino, by what the Director of National Intelligence called Iranian-affiliated actors. Those Iranian-affiliated actors attacked Sands because they didn't like what the head of Sands Casino had said about Iran, and the Ayatollah called on people within Iran to attack the company. They carried out a destructive attack that essentially turned computers into bricks. It was only because a quick-thinking member of the IT staff, who was not authorized by their policy, by the way, spotted what was occurring and essentially pulled the plug that they were able to segment the attack and keep it confined to a small area, so it didn't cause more damage.

That didn't get nearly the attention of Sony, so let's talk a little bit about Sony. You know, I spent nearly 20 years in government working on national security and criminal threats. We did innumerable war games where we gamed out, "What's it going to look like if a rogue, nuclear-armed nation decides to attack the United States through cyber-enabled means?" And I don't know about you guys, but we all got it wrong, because not once did we guess that the first major incident was going to be over a movie about a bunch of pot smokers. I remember every morning I'd meet with the Director of the FBI and the Attorney General to go over the threats. That Christmas we'd all watched the movie the day before and shared movie reviews. It's the only time in my career where I've gone into the Situation Room to brief the president on a serious national security incident and had to start by trying to summarize the plot of that movie, which, for those of you unlucky enough to have seen it, not that I'm passing critical judgment, is not an easy plot to summarize.

So, why did we do that? Why were we treating this like a serious national security event that had presidential attention? The attack had multiple parts. One was, just like the attack on Sands Casino, it essentially turned computers into bricks. Secondly, they stole, and this is like the economic espionage threat, they stole intellectual property and they distributed it using a third party, the WikiLeaks-type example. Using third parties, they distributed that stolen intellectual property and tried to cause harm to Sony. Nobody remembers those two. What everybody remembers, and this is the weaponizing-of-information idea, is that by focusing on a soft target like email communications, it was the salacious email communications inside the company between executives that got such massive media attention. That and, of course, the fact that it's a movie company.

That lesson did not go unnoticed, so there's a lot of focus on it, and we'll talk about it later. It was used again, clearly, in the Russian attempt to influence elections, not just here in the United States in our most recent election cycle, but also before that in elections across Europe. You can see them trying to use similar tactics and techniques right now when it comes to the French election. They clearly stumbled on the fact that, "Hey, it's not the information inside a company that people put great safeguards around, like their crown jewel of intellectual property.
It can be the softer parts, like email, like routine communications that, if we gather them in bulk, we can use to weaponize and cause harm to the company."

The reason we treated that as such a serious national security concern in the White House was the reason behind the attack. Just like the attack on Sands Casino, the attack on Sony was fundamentally an attack on our values, an attack on the idea that we have free speech. And similarly, the Russian attempts are fundamentally an attack on the idea of democracy. That's why they're attacking democratic institutions not just here in the United States, but across the world.

For you in the private sector, as you're designing your systems and thinking about defenses, you need to have products inside your system that allow you to monitor broadly what types of attacks are occurring within your perimeter, so you can get ahead of a weaponized-information attack. That means fortifying defenses beyond those required by legislation or regulation. In order to do that, it means figuring out and using products that are business-friendly. By that I mean: you may be the best information technology folks in the world, but if your business side can't understand the tools that you're using or the risks that you're trying to describe to them, then you can't engage them on what could really harm the company most. And that's what you need in order to do your job, to figure out what that is.

Another thing that we can work on now is responding quickly, given how fast these events occur. These days, the best practice is to monitor social media. I know a couple of companies that are monitoring social media, and in part it's not just for a cyber crisis, right? Every crisis moves that quickly. Some are monitoring it because a certain president of the United States right now will occasionally tweet something out in the middle of the night, and if he singles you out, he can cause your share price to plummet by the time the market opens. So certainly, a couple of companies who've actually been through that have rapid communications plans in place, and we have other clients now that, just as a best practice, essentially have a team monitoring that Twitter account from 3 a.m. to 6 a.m. so they can get a communication into the mainstream media before the stock market opens.

That's the same idea when it comes to having systems in place so you're monitoring social media for mentions of your company, and then having a rapid response plan in place. [A minimal sketch of this kind of monitoring appears after this transcript excerpt.] That effort also benefits greatly from your understanding of the system. If you can spot which data was stolen and think through with your business side how it could be used, you can get in front of it suddenly appearing somewhere, through WikiLeaks or some other site, or just through Twitter, so that you're ready with a rapid response that addresses your business risk.

I want to focus a little bit, as we did, on this idea of working together, government and the private sector. I'm going to go back to the economic espionage case for a second, the China case. When we did that PLA case, and for years before when I was doing the criminal cases, I think companies didn't work with law enforcement because they figured, "What's the upside?" And I'll just talk about that China case, but that case, the indictment of the People's Liberation Army, changed Chinese behavior, maybe not forever, but for now.
What caused President Xi to move, I think, was that case, plus the response to Sony, where we used the same type of approach when it came to North Korea. And look, it was incredibly beneficial to Sony when we were able to say that it was North Korea. Until then, all of the attention was on Sony: "What did they do wrong? Why weren't their systems better? Isn't it ridiculous what their executives were saying?" After we could say that it was North Korea, the narrative changed to, "Hey, Government, what are you doing to protect us against nation-state threats?" That is why attribution can matter.

And what did the government do? We applied, now for the second time, the approach that we'd applied for the first time with the People's Liberation Army: number one, figure out who did it. That required working closely with the company to figure out not just what they took, but why they would have taken it, what could have precipitated the event. Number two, collect the information in a way that we can make it public. And number three, use it to cause harm to the adversary. In Sony, unlike in the PLA case, we didn't have a criminal case available to us, so instead of using a criminal case, you saw us publicly announce through the FBI who did it and use that as a basis to sanction North Korea.

We realized, sitting around the Situation Room table, that we were lucky it was North Korea. If it had been some other cyber actor that, unlike North Korea, hadn't done so many other bad things, we wouldn't have been able to sanction them the way you can sanction terrorists or those who proliferate weapons of mass destruction. So, going forward, the president signed a new executive order that allows us to sanction cyber actors. That new executive order significantly allows you, to use the PLA example, to sanction not just those who take the information, but also the companies that make money off of it, those who profit from the stolen information.

I think it was that combination, the new executive order in place, the PLA case, and the realization that we could make things public and cause harm, that caused President Xi, the leader of China, to blink and sign an unprecedented agreement with President Obama. He sent a crew, and we negotiated with them day and night for several days. And they said for the first time, "Hey, we agree, using your military intelligence to target private companies for the benefit of their economic competitors is wrong, and we agree that there should be a norm that you don't do that." That caused the G20 to sign on as well, and since then, in both government and private-group monitoring, we have seen a decrease in how China is targeting private companies. Now, as some of you may be seeing, their definition of what counts as theft for private gain and ours might differ, there are certainly sectors that are still getting hit, and traditional intelligence collection continues.
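Editor's note: the rapid-response practice Carlin describes, monitoring social media for mentions of your company so the communications team can react before the market opens, can be prototyped with very little code. The Python sketch below is a hypothetical illustration only; the company name, the watchlist terms, and the stubbed fetch_recent_posts() feed are invented for the example, and a real deployment would plug in whatever social-listening feeds or platform APIs the organization actually licenses.

```python
import re
import time
from dataclasses import dataclass
from typing import Iterable, List

# Hypothetical watchlist: company name, hints of leaked material, leak-hosting sites.
WATCHLIST = [r"\bAcme Corp\b", r"\bdata (leak|dump)\b", r"\bpastebin\b"]
PATTERNS = [re.compile(p, re.IGNORECASE) for p in WATCHLIST]


@dataclass
class Post:
    source: str   # e.g. "twitter", "forum", "paste site"
    author: str
    text: str


def flag_mentions(posts: Iterable[Post]) -> List[Post]:
    """Return only the posts that match at least one watchlist pattern."""
    return [p for p in posts if any(rx.search(p.text) for rx in PATTERNS)]


def fetch_recent_posts() -> List[Post]:
    """Placeholder: pull recent posts from whatever feeds the organization licenses.
    A real deployment would call a social-listening service or platform API here."""
    return []


def monitor(poll_seconds: int = 60) -> None:
    """Poll the feeds and hand anything suspicious to the rapid-response team."""
    while True:
        for post in flag_mentions(fetch_recent_posts()):
            # In practice: open a ticket, page the on-call comms lead, etc.
            print(f"[ALERT] {post.source} @{post.author}: {post.text[:120]}")
        time.sleep(poll_seconds)


if __name__ == "__main__":
    # Dry run against canned data instead of a live feed.
    sample = [Post("twitter", "leaker123", "Huge data dump from Acme Corp posted tonight")]
    for hit in flag_mentions(sample):
        print("Would alert on:", hit.text)
```

The matching logic is the trivial part; the design choice that matters, consistent with Carlin's point about rapid response plans, is what happens on a hit: who gets paged, and how quickly a pre-approved holding statement can go out.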

Inside Out Security
John P. Carlin: Lessons Learned from the DOJ (Part 1)

Inside Out Security

Play Episode Listen Later May 9, 2017 15:56


Last week, John P. Carlin, former Assistant Attorney General for the U.S. Department of Justice’s (DOJ) National Security Division, spent an afternoon sharing lessons learned from the DOJ. And because the lessons have been so insightful, we’ll be rebroadcasting his talk as podcasts. In part one of our series, John weaves in lessons learned from Ardit Ferizi, hacktivists/WikiLeaks, Russia, and the Syrian Electronic Army. He reminds us that the current threat landscape is undoubtedly complicated, requiring blended defenses, and underscores the significance of collaboration between businesses and law enforcement. John Carlin currently chairs Morrison & Foerster’s global risk and crisis management team.

Transcript

Cindy Ng: John Carlin, Chair of Morrison & Foerster's Global Risk and Crisis Management Group, says the secret to effective crisis management is that you've thought about it before the crisis. We thought we'd put his expertise to good use by having him share with us his experience as Assistant Attorney General for National Security on a wide range of topics. He described the current threat landscape, economic espionage, weaponized information, and what organizations can do to manage their risk. We are re-broadcasting his talk, held last week, as a series, starting with what a blended threat looks like, the particular challenges of insider threats, and the significance of the government working collaboratively with the private sector.

John Carlin: The threat facing our private companies has reached a level we haven't seen before. That's true for two reasons, really. Some of what we're seeing are threats that the national security community has been monitoring for years, but we've had a change of approach. In the past, while we were monitoring it, it would stay in classified systems. We would watch what nation states or terrorist groups were doing, and we didn't have any method to make it public. So one trend has been that governments are starting to make public what they see in cyberspace. The second is that the actual threat itself has increased in both volume and complexity. That's been quite noticeable. In the past year alone, and really the past two years, we've seen cyber incidents that have gotten people's attention at every level.

That has caused a shift in government in terms of the regulatory attention focused on cybersecurity breaches. When I recently left government, there was almost an unholy rush across every regulatory and law enforcement agency as they realized what the scope of the threat was and how their existing regulatory or law enforcement authorities were not covering it. That caused them to do two things. One, try to come up with creative ways to interpret existing regulatory standards so that they can impose liability in the event of a cyber breach. And second, for those who realized that no matter how creative you got there was just no way to bring it within existing regulations, more countries around the world are adopting data breach laws than ever before, most notably Europe coming on board in 2018, but really it's a global phenomenon. And as part of the focus on data breaches, they're also passing laws that are starting to impose certain standards of care or specific security obligations.
I think it's that combination of increased awareness of the threat plus an increasingly complex and potentially punitive regulatory and law enforcement environment that has made this a top-of-mind issue for C-suites in poll after poll, not just here in the United States but in countries throughout the world. It's new, they're not quite sure what the legal and regulatory landscape looks like, and accordingly, it's the type of thing that keeps them up at night. For those of you in the information technology space, that could be good news and bad news. It means more scrutiny on what you're doing, but hopefully, as we explain what it is and what can be done, it will also mean more resources.

There's the old description of traditional cyber threats, and it's not as if any of these have stopped: crooks, nation states, activists, terrorists, everyone who wants to do something bad in the real world moving to cyberspace as we move everything that we value from analog to digital space. The type of activity they carried out ranged from economic espionage to destruction of information to alteration of information, which I think is a trend that we need to watch; this is the idea that the integrity of your data may be at stake. I know it's top of mind for those of us in government responsible for protecting against criminal and national security threats, and fraud. I'm not going to spend too much time on those traditional buckets. I wanted to highlight two new areas of cyber threat that are here, now. One is what I'll call the blended threat, and the second is insider threats.

Let's start with the blended threat. Imagine you're back at your office, you're in your company, and you spot what looks like a relatively low-level, unsophisticated criminal hack of your system. For many of you, as you handle it yourself, it wouldn't even warrant informing anyone in the C-suite. It would never reach that high in the company. Now imagine that you're a trusted brand-name retail company, and that as a result of that relatively unsophisticated hack, the bad guy has managed to steal a relatively small amount of personally identifiable information: some names, some addresses. As you know, this happens, as we speak, to hundreds and thousands of companies across the world. For the vast majority of those companies, faced with an unsophisticated hack where it looked like the IT folks had good control over what had occurred, it would stop there; to the extent it gets reported up to the C-suite, it looks like a simple criminal act and goes unreported.

The case I'm going through with you now, though, is a real case, and what happened next was that several weeks later, this company received through email, it was Gmail, so a commercial provider, a notice that said, "Hey, unless you wanna be embarrassed by the release of these names and addresses, you need to pay us $500 through Bitcoin." As these things go, you can't really think of a dollar figure much lower than $500, and asking for something through Bitcoin in a Gmail threat also does not look particularly sophisticated. You combine that with great confidence that you've been able to find them on your system and kick them off, and again, for the vast majority of companies, this does not go down as a high-risk event and would not be reported.
In the case that I'm discussing, which was a real case, the company did work with law enforcement, and what they found out, which they never would have been able to find out on their own, was that what looked like a criminal act, and don't get me wrong, it was criminal, these guys wanted the $500, was also something else. It turned out that on the other end of that hack, on the other end of that keyboard, was an extremist from Kosovo who had moved to Malaysia. Located in Malaysia, in a conspiracy with a partner who was still in Kosovo, he'd hacked into this U.S.-based, trusted retail company and stolen these names and addresses. And in addition to the $500, he had managed, through Twitter, to befriend one of the most notorious cyber terrorists in the world at the time, a man named Junaid Hussain, from the United Kingdom.

Junaid Hussain had moved from the United Kingdom to Raqqa, Syria, where he was located at the very heart of the Islamic State of Iraq and the Levant. In my old job, I was the top national security lawyer at the Justice Department, responsible for protecting against terrorist and cyber threats, and on the terror side of the arena, this guy Junaid Hussain, along with his cohort in the Islamic State, had mastered a new way of trying to commit terrorist acts. Unlike Al Qaeda, which had trained and vetted operatives, what they were doing was crowdsourcing terror. They were using social media against us. And consistent with that approach, what Junaid Hussain did is he befriended this individual who had moved to Malaysia, named Ferizi, he communicated with him through U.S.-provided technology, Twitter, he got a copy of the stolen names and addresses, and then he compiled those names and addresses into a kill list. He distributed that kill list through Twitter back to the United States and said, totally consistent with their new approach of crowdsourcing terror, "Hey, if you believe in the Islamic State, if you're following me, kill these people," by name, by address, where they live.

That's the face of the new threat, one version of the blended threat. I think for any of you, any company, if you knew when you were dealing with the incident, where you'd seen someone breach your system, that the person who breached your system was looking to kill people with the information that they stole, that would immediately be a C-suite event; your crisis risk plans would go into place, and you would certainly be contacting law enforcement. The problem with the blended threat, these guys who are both crooks on the one hand and working on behalf of a terrorist or a nation state on the other, is that you don't know. Because the company did work with law enforcement, in this case Ferizi, the guy responsible, who was in Malaysia, was arrested pursuant to U.S. charges, extradited after cooperation from Malaysia, pled guilty, and was sentenced this past July to 20 years in federal prison. And Junaid Hussain, who was operating in ungoverned space in Raqqa, Syria, was killed in a military strike acknowledged by Central Command.

This issue is putting your companies on the front lines of national security threats in a way that simply never happened before; there's not another area of threat that has the same effect, and it requires new approaches in terms of security and in the ways that the federal government interacts with private companies. Let me go through a few other examples of this blended threat phenomenon.
If you think about what happened with WikiLeaks, you have WikiLeaks acting as a distributor of information, but it's not necessarily the hacktivist who steals the information. So when you see the breach into your system, you're not quite sure how the information is going to be used. Is it going to be used by someone who wants to make money? Is it going to be used by someone who has very specific intelligence purposes? It used to be the case, certainly the assumption for those of us in government working with the private sector, that if you had information stolen by a nation state, unless you had some economic espionage type issue, you really didn't need to worry about the nation state using it against you, and that's clearly no longer the case. What you see with something like Russia and the DNC is information that is taken in one sphere and then gets leveraged and put out through another. A nation state steals it, and then they have the shield of WikiLeaks for the distribution of the information.

With Russia you also have, in terms of the blended threat, what look like nation-state actors; let's use the most recent Justice Department case against the Russian actors who attacked Yahoo. What you had there were crooks, I mean straight-up crooks, who were Russian and out to make a profit, and there was an attempt at law-enforcement-to-law-enforcement cooperation: U.S. law enforcement authorities passed information to the Russians to try to hold those crooks responsible. What you get instead of cooperation, and this is all laid out in the complaint, is that the Russians then signed up the crooks as intelligence assets and used them to continue to steal information, and to hand over some of the information they'd stolen, so that the same guy was both making a profit on the one hand and providing it for state purposes on the other.

That version of the blended threat has a slight variation, in which the day job is Russian state security service hacker or Chinese state security service hacker, but there's a lot of corruption in both countries. You want to make a buck on the side? Same actor, same system: daytime, working on behalf of the state; nighttime, looking to line their pockets with profits. What you're trying to figure out on the back end of that attack, "Hey, what type of risk am I dealing with?", can be incredibly complicated. Am I in a national security situation or a criminal situation?

And that's combined with deliberate blending. As we've moved toward doing attribution, you'll see that state actors, whether Russian, Chinese, or others, will not use the same sophisticated, identifiable tools that they used in the past to breach your system. You used to be able to tell by the TTPs, the tactics, techniques, and procedures, that you were dealing with a state actor from Russia or China or another sophisticated state actor. Now they're using the same easily available tools that low-level crooks are using, in the first instance looking to see if they can get in through human error or weaknesses in the defenses, and that makes it much harder to do the attribution.

A final version of the blended threat would be the Syrian Electronic Army. Many of you may be familiar with this group. It's in vogue now, everyone's talking about fake news. Well, they're the original fake news case that we did.
When we prosecuted the Syrian Electronic Army, what they had done was spoof a terrorist attack on the White House by defacing a public-facing site. That was very successful and caused the loss of billions of dollars in the stock market until people realized that it was a hoax. That same group, though, was regularly committing ransomware-type offenses; they just weren't calling themselves the Syrian Electronic Army. And so many of your companies would have a policy in place that would flag it as a high area of risk and say, "We're not gonna make a payment if we know we're paying off the Syrian Electronic Army," or, in the case of Ferizi, if we knew we were paying off a terrorist. But the problem is you don't know. And as it was laid out in that complaint, when we arrested one of those individuals in Germany, I don't think even the people operating them, running them from the Syrian Electronic Army, knew that they were using the same tools on the side to make a buck.

So what lessons can you learn, or how can we help protect our systems, recognizing this change in threat? Well, one is that as the criminal groups, and the sophisticated programs and vulnerabilities that you can sell on the dark web, become more and more blended with nation states and terrorist groups taking advantage of them, we need to ask ourselves, "Are our defenses as blended as the threat?" Inside the company, that means making sure that those responsible for preventing and minimizing the risk from a threat cut across the organization, so the thinking doesn't stop at, "Hey, maybe we could build a wall that's high enough or deep enough to keep someone out," because that doesn't exist. Once they're inside and we're dealing with the actual threat: who do I have in my company who has evolved? Is there a way to make that easily available to the business side so we can get their informed views as to what information should be protected and how, to mitigate risk on the front end, and then how to respond? And similarly, are we working together, as companies, and as a government with companies, the way the bad guys are working with the nation states or terrorist groups sponsoring them? That's where there's focus now; figuring out a better way to do cooperation between business and law enforcement is vital.

The division I used to head, the National Security Division, was created as one of the reforms after September 11th, and the idea was that post-September 11th, we've got to get better at sharing information across the law enforcement and intelligence divide. The failure to share that type of information led to the death of thousands of people on September 11th. This challenge, of how to share what the government is seeing on the threat and how to receive information back, is exponentially more complicated, because it's not just about sharing information better within government or within your company; it's how to share information across government to the private sector and back again.