The O'Reilly Security Podcast examines the challenges and opportunities for security practitioners in an increasingly complex and fast-moving world. Through interviews and analysis, we highlight the people who are on the frontlines of security, working to build better defenses.
The O’Reilly Security Podcast: The objectives of agile application security and the vital need for organizations to build a functional security culture.

In this episode of the Security Podcast, I talk with Rich Smith, director of labs at Duo Labs, the research arm of Duo Security. We discuss the goals of agile application security, how to reframe success for security teams, and the short- and long-term implications of your security culture.

Here are some highlights:

Less-disruptive security through agile integration

Better security is certainly one expected outcome of adopting agile application security processes, and I would say less-disruptive security would be an outcome as well. If I put my agile hat on, or could stand in the shoes of an agile developer, I would say they have a lot of areas where they feel security gets in the way and doesn't actually help them or make the product or the company more secure. Their perception is that security creates a lot of busy work, and I think this comes from that lack of understanding of agile from the security camp—and likewise of security from the agile camp. Along those lines, I would also say one of the key outcomes should be less security interference (where it's not necessary) in the agile process. The goal is to create more harmonious working relationships between these two groups. It would be a shame if the agile process was slowed down purely in the name of security, and we weren't getting any tangible security benefits from that.

Changing how security teams measure their success

If you’re measuring the success of your security program by looking at what didn’t happen, the hard work your security team is doing may never really be apparent, and people may not understand the amount of hard work that went in to prevent bad things from happening. And obviously, that's difficult to quantify as well, from a management perspective. This often has had the unfortunate side effect that security teams measure themselves and their success from the perspective of the bad things they stopped from happening. That may well be the case, but it's hard to measure, and it's actually quite a negative message. It can push security teams into the mindset that the way they can stop the bad things from happening is by trying to make sure as few things change as possible. Security teams should measure themselves on what they enable, and what they enable to happen securely. That's a much more tangible and positive way of measuring the worth of that security team and how effective they are. Any old security team, whether it's good or bad, can say no to everything. Good security teams understand the business and understand what the development team is trying to get done. It's really more about what they can enable the business to do securely, and that's going to require some novel problem solving. That means you're not just going to take solutions off the shelf and throw them at every problem.

Evaluating your organization’s security culture

Every company already has a security culture. It may not be the one they want, but they already have one. You need to build a security culture that works well for the larger organization and is in keeping with the larger organization's culture. I think we absolutely can take control of that security culture, and I'll go further and say that we have to. Otherwise, you're just going to end up in a situation where you have a culture that is not serving your organization well.

There are a lot of questions you should be considering when evaluating your culture. What is your current security culture? How does the rest of the company think about security? How does the rest of the company view your security team? Do people go out of their way to include the security team in conversations and decision-making, or do they prefer to chance it, hope the security team doesn't notice, and try to squeak under the radar? That says a lot about your security culture. If people aren't actively engaging with the subject matter experts, well, something's wrong there.
The O’Reilly Security Podcast: Aligning security objectives with business objectives, and how to approach evaluation and development of a security program.

In this episode of the Security Podcast, I talk with Christie Terrill, partner at Bishop Fox. We discuss the importance of educating businesses on the complexities of “being secure,” how to approach building a strong security program, and aligning security goals with the larger processes and goals of the business.

Here are some highlights:

Educating businesses on the complexities of “being secure”

This is a challenge that any CISO or director of security faces, whether they're new to an organization or building out an existing team. Building a security program is not just about the technology and the technical threats. It's how you're going to execute—finding the right people, having the right skill sets on the team, integrating efficiently with the other teams and the organization, and of course the technical aspects. There are a lot of things that have to come together, and one of the challenges about security is that companies like to look at security as its own little bubble. They’ll say, ‘we'll invest in security, we'll find people who are experts in security.’ But once you're in that bubble, you realize there's such a broad range of experience and expertise needed for so many different roles that it's not one size fits all. You can't use the word ‘security’ so simplistically. So, it can be challenging to educate businesses on everything that's involved when they just say a sentence like, ‘We want to be secure or more secure.’

Security can’t (and shouldn’t) interrupt the progress of other teams

The biggest constraint for implementing a better security program for most companies is finding a way to have security coexist with other teams and processes within the organization. Security can’t interrupt the mission of the company or stop the progress and projects other IT teams already have underway. You can’t just halt everything because security teams are coming in with their own agendas. Realistically, you have to rely on other teams and be able to work with them so the security team can make progress either without them or alongside them. Being able to work collaboratively and to support the teams with your security goals is absolutely critical. Typically, teams have their own projects and agendas, and if you can explain how security will actually help those in the end, they'll want to participate in your work as well. You have to rely on each other.

How to approach security program strategy and planning

The assessment of a security program usually starts with the common triad of people, process, and technology. On the people side, there’s reevaluating the organizational structure—how many people should there be? What titles should they have? What should the reporting structure be? What should security take on itself, versus what responsibility should we ask IT to do or let them keep doing? Then, for processes, there can be a lot of pain points. When we develop processes, including the foundational security practices, we start with the ones that would solve immediate problems to show value and illustrate what a process can achieve. A process is not just a piece of paper or a checklist intended to make people's lives more difficult—a process should actually help people understand where something is in the flow, and when it will get done.

So, defining processes is really important to win over the business and the IT teams. Then finally, on the technology side, we try to emphasize that you should first evaluate the tools you already have. There may be nothing wrong with them. Look at how they're being used and whether they're being optimized. The investment involved, not just the upfront investment in security technology but also the cost to replace it (consulting costs, or the churn of having to rip and replace), can be very high and can derail some of your other progress. To start, make sure you’re using every tool to its fullest capacity and fullest advantage before going down the path of considering buying new products.
The O’Reilly Security Podcast: Recruiting and building future open source maintainers, how speed and security aren’t mutually exclusive, and identifying and defining first principles for security.

In this episode of the Security Podcast, O’Reilly’s Mac Slocum talks with Susan Sons, senior systems analyst for the Center for Applied Cybersecurity Research (CACR) at Indiana University. They discuss how she initially got involved with fixing the open source Network Time Protocol (NTP) project, recruiting and training new people to help maintain open source projects like NTP, and how security needn’t be an impediment to organizations moving quickly.

Here are some highlights:

Recruiting to save the internet

The terrifying thing about infrastructure software in particular is that paying your internet service provider (ISP) bill covers all the cabling that runs to your home or business; the people who work at the ISP; and their routing equipment, power, billing systems, and marketing—but it doesn't cover the software that makes the internet work. That is maintained almost entirely by aging volunteers, and we're not seeing a new cadre of people stepping up and taking over their projects. What we're seeing is ones and twos of volunteers who are hanging on but burning out while trying to do this in addition to a full-time job, or are doing it instead of a full-time job and should be retired, or are retired. It's just not meeting the current needs. Early- and mid-career programmers and sysadmins say, 'I'm going to go work on this really cool user application. It feels safer.' They don't work on the core of the internet. Ensuring the future of the internet and infrastructure software is partly a matter of funding (in my O’Reilly Security talk on saving time, I talk about a few places you can donate to help with that, including ICEI and CACR), and partly a matter of recruiting people who are already out there in the programming world to get interested in systems programming and learn to work on this. I'm willing to teach. I have an Internet Relay Chat (IRC) channel set up on freenode called #newguard. Anyone can show up and get mentorship, but we desperately need more people.

Building for both speed and security

Security only slows you down when you have an insecure product, not enough developer resources, not enough testing infrastructure, not enough infrastructure to roll out patches quickly and safely. When your programming teams have the infrastructure and scaffolding around software they need to roll out patches easily and quickly—when security has been built into your software architecture instead of plastered on afterward, and the architecture itself is compartmented and fault tolerant and has minimization taken into account—security doesn't hinder you. But before you build, you have to take a breath and say, 'How am I going to build this in?' or 'I’m going to stop doing what I’m doing, and refactor what I should have built in from the beginning.' That takes a long view rather than short-term planning.

Identifying and defining first principles for security

I worked with colleagues at the Indiana University Center for Applied Cybersecurity Research (CACR) to develop the Information Security Practice Principles (ISPP). In essence, the ISPP project identifies and defines seven rules that create a mental model for securing any technology. Seven may sound like too few, but it dates back to rules of warfare and Sun Tzu and how to protect things and how to make things resilient.
I do a lot of work from first principles. Part of my role is that I’m called in when we don't know what we have yet or when something's a disaster and we need to triage. Best practice lists come from somewhere, but why do we teach people just to check off best practice lists without questioning them? If we teach more people to work from first principles, we can have more mature discussions, we can actually get our C-suite or other leadership involved because we can talk in concepts that they understand. Additionally, we can make decisions about things that don't have best practice checklists.
The O’Reilly Security Podcast: The growing role of data science in security, data literacy outside the technical realm, and practical applications of machine learning.

In this episode of the Security Podcast, I talk with Charles Givre, senior lead data scientist at Orbital Insight. We discuss how data science skills are increasingly important for security professionals, the critical role of data scientists in making the results of their work accessible to even nontechnical stakeholders, and using machine learning as a dynamic filter for vast amounts of data.

Here are some highlights:

Data science skills are becoming requisite for security teams

I expect to see two trends in the next few years. First, I think we’re going to see tools becoming much smarter. Not to suggest they're not smart now, but I think we're going to see the builders of security-related tools integrating more and more data science. We're already seeing a lot of tools claiming they use machine learning to do anomaly detection and similar tasks. We're going to see even more of that. Secondly, I think rudimentary data science skills are going to become a core competency for security professionals. Accordingly, I expect we are going to increasingly see security jobs requiring some understanding of core data science principles like machine learning, big data, and data visualization. Of course, I still think there will be a need for data scientists. Data scientists are going to continue to do important work in security, but I also think basic data science skills are going to proliferate throughout the overall security community.

Data literacy for all

I'm hopeful we're going to start seeing more growth in data literacy training for management and nontechnical staff, because it's going to be increasingly important. In the years to come, management and executive-level professionals will need to understand the basics—maybe not a technical understanding, but at least a conceptual understanding of what these techniques can accomplish. Along those lines, one of the core competencies of a data scientist is, or at least arguably should be, communication skills. I'd include data visualization in that skill set. You can use the most advanced modeling techniques and produce the most amazing results, but if you can't communicate them effectively to a stakeholder, your work is not likely to be accepted, adopted, or trusted. As such, making results accessible is really a vital component of a data scientist’s work.

Machine learning as a dynamic filter for security data

Machine learning and deep learning have definitely become the buzzwords du jour of the security world, but they genuinely bring a lot of value to the table. In my opinion, the biggest value machine learning brings is the ability to learn and identify new patterns and behaviors that represent threats. When I teach machine learning classes, one of the examples I use is domain generation algorithm (DGA) detection. You can do this with a whitelist or a blacklist, but neither one of these is going to be the most effective approach. There's been a lot of success in using machine learning to identify these domains, allowing you to then mitigate the threat. A colleague of mine, Austin Taylor, gave a presentation and wrote a blog post about this as well—about how machine learning fits in the overall schema. He views data science in security as being most useful in building a very dynamic filter for your data.
If you imagine an inverted triangle, you begin examining tons and tons of data, but you can use machine learning to filter out the vast majority of it. From there, a human might still have to look at the remaining portion. By applying several layers of machine learning to that initial ingested data, you can efficiently filter out the stuff that's not of interest.
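To make the DGA-detection example above a little more concrete, here is a minimal sketch of a character-level domain classifier. The tiny inline training set, feature choices, and model are hypothetical placeholders (not the approach Givre or Taylor actually uses); a real system would train on large labeled domain feeds and far richer features.

```python
# Minimal sketch: classify domains as DGA-like vs. benign using character n-grams.
# The inline training data is purely illustrative; a real deployment would use
# large labeled feeds of known-benign and known-DGA domains.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

benign = ["google.com", "wikipedia.org", "github.com", "oreilly.com"]
dga_like = ["xjw9qkzpt.com", "qzv3hph2ma.net", "kd8f2nqls.biz", "t9x0plvqe.info"]

domains = benign + dga_like
labels = [0] * len(benign) + [1] * len(dga_like)  # 0 = benign, 1 = DGA-like

# Character 2- and 3-grams capture the "random-looking" structure of DGA domains.
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 3)),
    LogisticRegression(max_iter=1000),
)
model.fit(domains, labels)

# Score newly observed domains; anything above a threshold goes to an analyst.
for candidate in ["mail.example.com", "zq8xkv2jrp.net"]:
    prob = model.predict_proba([candidate])[0][1]
    print(f"{candidate}: P(DGA) = {prob:.2f}")
```

This is exactly the "dynamic filter" framing described above: the model scores every observed domain, and only the suspicious tail of the inverted triangle is surfaced for human review.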
The O’Reilly Security Podcast: The multidisciplinary nature of defense, making security accessible, and how the current perception of security professionals hinders innovation and hiring.

In this episode of the Security Podcast, I talk with Andrea Limbago, chief social scientist at Endgame. We discuss how the misperception of security as a computer science skillset ultimately restricts innovation, the need to make security easier and accessible for everyone, and how the current branding of security can discourage newcomers.

Here are some highlights:

The multidisciplinary nature of defense

The general perception is that security is a skillset in the computer science domain. As I've been in the industry for several years, I've noticed more and more the need for different disciplines, outside of computer science, within security. For example, we need data scientists to help handle the vast amount of security data and guide the daily collection and analysis of data. Another example is the need to craft accessible user interfaces for security. So many of the existing security tools or best practices just aren't user friendly. Of course, you also need that computer science expertise as well, from the more traditional hackers to defenders. All that insight can come together to help inform a more resilient defense. Beyond that, there’s the consideration of the impact of economics and psychology. This is especially relevant when you think about insider threats. It's really something I wish more people would think about from a broader perspective, and I think that would actually help attract a lot more people into the industry as well, which we desperately need right now.

Making security accessible and easier for all

We need to do a better job of informing the general public about security. Those of us in the security field see information on how to secure our accounts and devices all the time, but I consistently come across people outside of our industry who still don't understand things like two-factor authentication, or why it would be helpful for them. These are very smart people. Part of the challenge is that we, as an industry, haven't done a phenomenal job of branching out and talking in more common language about the various aspects and steps people can take. People know they need to be secure, but they really don't know what the key steps are. This month, for National Cybersecurity Awareness Month, there are going to be hundreds of ‘Here are 10 things you need to do to be secure’-style articles, but these messages are not always making their way to the actual target audience. It needs to become more of a mainstream concern, and it needs to be made easier for people to secure their accounts and devices. We talk a lot about the convenience versus security trade-off, and for a lot of people, convenience is still what matters most. It's really hard to switch the incentive structure for people to help them understand that taking all these steps toward better security truly is worth the investment of their time. For us, as an industry, if we make it as easy as possible, I think that will help.

Security has a branding problem

We need to do a better job of making security appealing to a broader audience. When I talk to students and ask them what they think about security and cybersecurity and hacking, they immediately think of a guy in a dark hoodie. And that alone is limiting people from getting excited about entering the workforce. Obviously, the discipline and the industry are much broader than that.
We, as an industry, need to rework our marketing campaigns to show other kinds of stock photos. If we can do that, we can start getting more and more diverse people interested and coming into the industry. By attracting the interest of a broader range of students and having them bring their diverse skillsets in from other disciplines, we can strengthen our defenses and increase innovation. If we change the branding of security and the perception of what it means to be a security professional, we can help fill the pipeline, which is one of our most crucial missions as an industry at this time.
The O’Reilly Security Podcast: Why tools aren’t always the answer to security problems, and the oft-overlooked impact of user frustration and fatigue.

In this episode of the Security Podcast, I talk with Window Snyder, chief security officer at Fastly. We discuss the fact that many core security best practices aren’t easy to achieve with tools, the importance of not discounting user fatigue and frustration, and the need to personalize security tools and processes to your individual environment.

Here are some highlights:

Many security tasks require a hands-on approach

There are a lot of things that we, as an industry, have known how to do for a very long time but that are still expensive and difficult to achieve. This includes things like staying up to date with patching or moving to more sophisticated authorization models. These types of tasks generally require significant work, and they might also impose a workflow obstacle to users that's expensive. Another proven and measurable way to improve security is to review deployments and identify features or systems that are no longer serving their original purpose but are still enabled. If they're still enabled but no longer serving a purpose, they may leave you unnecessarily open to vulnerabilities. In these cases, a plan to reduce attack surface by eliminating these features or systems is work that humans generally must do, and it actually does increase the security of your environments in a measurable way, because now your attack surface is smaller. These aren’t the sorts of activities that you can throw a tool in front of and feel like you've checked a box.

Frustration and fatigue are often overlooked considerations

Realistically, it's challenging for most organizations to achieve all the things we know we need to do as an industry. Getting the patch window down to a smaller and smaller size is critical for most organizations, but you have to consider this within the context of your organization and its goals. For example, if you’re patching a sensitive system, you may have to balance the need to reduce the patch window with the stability of the production environment. Or if a patch requires you to update users’ workstations, the frustration of having to update their systems and having their machines rebooted might derail productivity. It's an organizational leap to say that it's more important to address potential security problems when you are dealing with the very real obstacle of user frustration or security exhaustion. This is complicated by the fact that there's an infinite parade of things we need to be concerned about.

More is not commensurate with better

It’s reasonable to try to scale security engineering by finding tools you can leverage to help address more of the work that your organization needs. For example, an application security engineer might leverage a source analysis tool. Source analysis tools help scale the number of applications that you can assess in the same amount of time, and that’s reasonable because we all want to make better use of everyone's time. But without someone tuning the source analysis tool to your specific environment, you might end up with a tool that finds a lot of issues, creates a lot of flags, and then is overwhelming for the engineering team to try to address because of the sheer amount of data.
The engineering team might look at the results, realize the tool doesn't understand the mitigations that are already in place or the reasons these issues aren't going to be a problem, and end up disregarding what the tool identifies. Once that fatigue sets in, the tool may well be identifying real problems, but the value it contributes ends up being lost.
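To make the tuning point concrete, here is a minimal sketch of the kind of triage layer a team might put between a source analysis tool and its engineers. The finding fields, suppression file format, and file names are hypothetical; real scanners have their own report schemas and built-in baseline or suppression mechanisms.

```python
# Minimal sketch: filter raw static-analysis findings against a reviewed
# suppression list so engineers only see findings that are new and relevant.
# The finding fields and suppression file format are hypothetical examples.
import json
from pathlib import Path

def load_suppressions(path: str) -> set[tuple[str, str]]:
    """Suppressions are (rule_id, file_path) pairs reviewed by the security team."""
    entries = json.loads(Path(path).read_text())
    return {(e["rule_id"], e["file"]) for e in entries}

def triage(findings: list[dict], suppressions: set[tuple[str, str]]) -> list[dict]:
    """Drop findings already accepted as mitigated or not applicable."""
    return [f for f in findings if (f["rule_id"], f["file"]) not in suppressions]

if __name__ == "__main__":
    findings = json.loads(Path("scan_results.json").read_text())
    suppressions = load_suppressions("accepted_risks.json")
    actionable = triage(findings, suppressions)
    print(f"{len(findings)} raw findings -> {len(actionable)} actionable")
    for f in actionable:
        print(f'{f["severity"]:8} {f["rule_id"]} {f["file"]}:{f["line"]}')
```

The important part is that the suppression list is owned and reviewed by people who understand the existing mitigations, so tuning decisions are deliberate rather than the silent result of alert fatigue.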
The O’Reilly Security Podcast: Shifting secure code responsibility to developers, building secure software quickly, and the importance of changing processes.

In this episode of the Security Podcast, I talk with Chris Wysopal, co-founder and CTO of Veracode. We discuss the increasing role of developers in building secure software, maintaining development speed while injecting security testing, and helping developers identify when they need to contact the security team for help.

Here are some highlights:

The challenges of securing enduring vs. new software

One of the big challenges in securing software is that it’s most often built, maintained, and upgraded over many years. Think of online banking software for a financial services company. They probably started building that 15 years ago, and it's probably gone through two or three major changes, but the tooling and the language and the libraries, and all the things that they're using, are all built from the original code. Fitting security into that style of software development presents challenges because those teams aren't used to the newer tool sets and the newer ways of doing things. It's actually sometimes easier to integrate security into newer software. Even though those teams are moving faster, it's easier to integrate into some of the newer development toolchains.

Changing processes to enable small-batch testing and fixing

There are parallels between where we are with security now and where performance was at the beginning of the Agile movement. With Agile, the thought was, ‘We're going to go fast, but one of the ways we're going to maintain quality is we're going to require unit tests written by every developer for every piece of functionality they do, and these automated unit tests will run on every build and every code change.’ By changing the way you do things, from a manual, back-end-weighted full-system test to smaller-batch incremental tests of pieces of functionality, you're able to speed up the development process without sacrificing quality. That's a change in process. To have a high-performing application, you didn't necessarily need to spend more time building it. You needed better intelligence—so, APM technology put into production to understand performance issues better and more quickly allowed teams to still go fast and not have performance bottlenecks. With security, we're going to see the same thing. There can be some additional technology put into play, but the other key factor is changing your process. We call this ‘shifting left,’ which means: find the security defect as quickly as possible, or as early as possible in the development lifecycle, so that it's cheaper and quicker to fix. For example, if a developer writes a cross-site scripting error as they're coding in JavaScript, and they're able to detect that within minutes of creating that flaw, it will likely only require minutes or seconds to fix. Whereas if that flaw is discovered two weeks later by a manual tester, it's then going to be entered into a defect tracking system. It's going to be triaged. It's going to be put into someone's bug queue. With the delay in identification, it will have to be researched in its original context and will slow down development. Now, you're potentially talking hours of time to fix the same flaw. Maybe 10 or 100 times more time is taken. Shifting left is a way of thinking about, ‘How do I do small-batch testing and fixing?’ That's a process change that enables you to keep going fast and be secure.
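As a small illustration of what a shift-left check can look like in practice, here is a sketch of a unit test that fails the build when user-supplied content reaches HTML output unescaped, catching a cross-site scripting flaw within minutes rather than weeks. The render_comment function and test are hypothetical examples, not Veracode tooling; the transcript's example is JavaScript, but the sketch uses Python to stay consistent with the other examples in this piece.

```python
# Minimal sketch: a unit test that fails the build if user-supplied content
# is rendered into HTML without escaping. render_comment is a hypothetical
# stand-in for whatever templating/rendering code a team actually owns.
import html
import unittest

def render_comment(author: str, body: str) -> str:
    """Render a user comment as an HTML fragment, escaping untrusted input."""
    return f"<p><b>{html.escape(author)}</b>: {html.escape(body)}</p>"

class TestCommentRendering(unittest.TestCase):
    def test_script_injection_is_neutralized(self):
        # A classic XSS probe should never survive rendering as raw markup.
        payload = '<script>alert("xss")</script>'
        rendered = render_comment("mallory", payload)
        self.assertNotIn("<script>", rendered)
        self.assertIn("&lt;script&gt;", rendered)

if __name__ == "__main__":
    unittest.main()
```

Run in CI on every change, a check like this turns the two-week feedback loop described above into minutes.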
Helping developers identify when they need to call for security help

We need to teach developers about application security so they can identify when there’s a problem and when they don't know enough to solve it themselves. One of the problems with application security is that developers often don't know enough to recognize when they need to call in an expert. For example, when an architect is building a structure and knows there’s a problem with the engineering of a component, the architect knows to call in a structural engineer to augment their expertise. We need to have the same dynamic with software developers. They're experts in their field, and they need to know a lot about security. They also need to know when they require help with threat modeling, or to perform a manual code review on a really critical piece of code, like an account recovery mechanism. We need to shift more security expertise into the development organization, but part of that is also helping developers know when to call out to the security team. That's also a way we can help address the challenge of hiring security experts, because they're hard to find.
The O’Reilly Security Podcast: The open-ended nature of incident response, and how threat intelligence and incident response are two pieces of one process. In this episode of the Security Podcast, I talk with Scott Roberts, security operations manager at GitHub. We discuss threat intelligence, incident response, and how they interrelate. Here are some highlights: Threat intelligence should affect how you identify and respond to incidents Threat intelligence doesn't exist on its own. It really can't. If you're collecting threat intelligence without acting upon it, it serves no purpose. Threat intelligence makes sense when you integrate it with the traditional incident response capability. Intelligence should affect how you identify and respond to incidents. The idea is that these aren't really two separate things; they're simply two pieces of one process. If you're doing incident response without using threat intelligence, then you’ll keep getting hit with the same attack time after time. Now, by the same token, if you have threat intelligence without incident response, you're just shouting into the void. No one is taking the information and making it actionable. The open-ended nature of incident response It’s key to think about incidents as ongoing. There are very few times when an attacker will launch an attack once, be rebuffed, and simply go away. In almost all cases, there's a continuous process. I've worked in organizations where we would do the work to identify an incident and promptly forget about it. Then three weeks later, we would suddenly stumble across the exact same thing. Ultimately, intelligence-driven incident response happens in those intervening three weeks. What are you doing in that time between incidents from the same actor, with the same target? And how are you using what you've learned to prepare for the next time? Regardless of the size of your organization, you can implement processes to better your defenses after each incident. It can be as simple as keeping good notes, thinking about root causes, and considering what could better protect your organization from the same or similar attackers in the future. Basically, instead of marking an incident closed as soon as you’ve dealt with the immediate threat, think beyond the current incident and try to understand what the attack is going to look like the next time. Even if you can't identify the next iteration, you don't want to get hit by the same thing again. As your team expands and matures, there are opportunities for more specialized types of analysis and processes, but intelligence-driven incident response is something you can adopt regardless of your size or maturity. Why more threat intelligence data is not always better When a team gets started with threat intelligence, their first impulse is to try collecting the biggest data set imaginable with the idea that there's going to be a magic way to pick out the needle in the haystack. While I understand why that may seem like a logical place to start, that's often a very abstract and time-intensive approach. When I look at intelligence programs, I first want to know what the team is doing with their own investigation data. The mass appeal of gathering a ton of information is all about trying to figure out which IP is most important to me or which piece of information I need to find. Often, I find that information is already available in a team's incident response database or their incident management platform.
I think the first place you should always look is internally. If you want to know what threats are going to be important to an organization, look at the ones you've already experienced. Once you’ve got all those figured out, then go look at what else is out there. The first place to look if you want to be effective, and to truly know you're doing relevant work for your organization's future defense, is your past.
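Roberts' advice about keeping good notes between incidents can start as something very lightweight. The following is a minimal, hypothetical Python sketch (the incident names, indicators, and fields are invented): a small journal that records past incidents and checks new alerts against them, so activity from a previously seen actor is recognized rather than rediscovered three weeks later.

```python
# A minimal, hypothetical sketch of the "keep good notes" advice above:
# record each incident's indicators and root cause, then check new alerts
# against past incidents so a repeat of the same activity is recognized
# instead of being rediscovered from scratch weeks later.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class Incident:
    name: str
    seen_at: datetime
    indicators: set[str]            # e.g., IPs, domains, file hashes
    root_cause: str = "unknown"
    notes: list[str] = field(default_factory=list)


class IncidentJournal:
    def __init__(self) -> None:
        self.incidents: list[Incident] = []

    def record(self, incident: Incident) -> None:
        self.incidents.append(incident)

    def match_alert(self, alert_indicators: set[str]) -> list[Incident]:
        """Return past incidents sharing any indicator with a new alert."""
        return [i for i in self.incidents if i.indicators & alert_indicators]


journal = IncidentJournal()
journal.record(Incident(
    name="phishing-2023-03",
    seen_at=datetime(2023, 3, 1),
    indicators={"198.51.100.7", "login-portal.example.net"},
    root_cause="credential phishing via lookalike domain",
))

# Weeks later, a new alert shares an indicator with the old incident,
# so the analyst can start from what was already learned.
print(journal.match_alert({"login-portal.example.net", "203.0.113.9"}))
```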
The O'Reilly Security Podcast: The role of community, the proliferation of BSides and other InfoSec community events, and celebrating our heroes and heroines. In this episode of the Security Podcast, I talk with Jack Daniel, co-founder of Security BSides. We discuss how each of us (and the industry as a whole) benefits from community building, the importance of historical context, and the inimitable Becky Bace. Here are some highlights: The indispensable role and benefit of community building As I grew in my career, I learned things that I shared. I felt that if you're going to teach me, then as soon as I know something new, I'll teach you. I began to realize that the more I share with people, the more they're willing to share with me. This exchange of information built trust and confidence. When you build that trust, people are more likely to share information beyond what they may feel comfortable saying in a public forum, and that may help you solve problems in your own environment. I realized these opportunities to connect and share information were tremendously beneficial not only to me, but to everyone participating. They build professional and personal relationships, which I've become addicted to. It’s a fantastic resource to be part of a community, and the more effort you put into it, the more you get back. Security is such an amazing community. We’re facing incredible challenges. We need to share ideas if we're going to pull it off. Extolling InfoSec history with the Shoulders of InfoSec I realized a few years ago that despite the fact that I was friends with a lot of trailblazers in the security space, I didn't have much perspective on the history of InfoSec or hacking. I recognized that I have friends like Gene Spafford and the late Becky Bace who have seen or participated in the foundation of our industry and know many of the stories of our community. I decided to do a presentation a few years ago at DerbyCon that introduced the early contributors and pioneers who made our industry what it is today and built the early foundation for our practices. I quickly realized that cataloging this history wasn't a single presentation, but a larger undertaking. This is why I created the Shoulders of InfoSec program, which shines a light on the contributions of those whose shoulders we stand on. The idea is to make it easy to find a quick history of information security and, to a lesser extent, the hacker culture. As Newton himself wrote (paraphrasing earlier thinkers), if he had seen farther, it was by standing on the shoulders of giants, and we all stand on the shoulders of giants. The inimitable Becky Bace Becky was known as the den mother of IDS, for her work fostering and supporting intrusion detection and network behavior analysis. But even beyond her amazing technical expertise and contributions, Becky gave the best hugs in the world. She was just an amazingly warm, friendly, and welcoming person. One of the things that always struck me about Becky is the number of people she mentored through the years, and the number of people whose careers got a start or a boost because of Becky. She was just pure awesome. She would go out of her way to help people, and the more they needed help, the more likely she would be to find them and help them. She came from southern Alabama, and when she came north to the D.C.
area, her dad said, ‘You can go up north and get a job and marry a Yankee, but when you're done doing that, I want you to come home because, remember, we need help down here.’ For those who don't know, when she left her consulting practice, she went to the University of South Alabama—not even the University of Alabama, but the University of South Alabama—and set up a cyber security program. She was bringing cyber security education to people who otherwise wouldn't get it, and she built a fantastic program. She did it because she promised her dad she would.
The O’Reilly Security Podcast: The prevalence of convenient data, first steps toward a security data analytics program, and effective data visualization. In this episode of the Security Podcast, Courtney Nash, former chair of the O’Reilly Security conference, talks with Jay Jacobs, senior data scientist at BitSight. We discuss the constraints of convenient data, the simple first steps toward building a basic security data analytics program, and effective data visualizations. Here are some highlights: The limitations of convenient data In security, we often see the use of convenient data—essentially, the data we can get our hands on. You see that sometimes in medicine, where people studying a specific disease will grab the patients with that disease in the hospital they work in. There are some benefits to doing that. Obviously, the data collection is easy because you get the data that’s readily available. At the same time, there are limitations. The data may not be representative of the larger population. Using multiple studies combats the limitations of convenient data. For example, when I was working on the Verizon Data Breach Investigations Report, we tried to tackle that by diversifying the sources of data. Each individual contributor had their own convenient sample. They're getting the data they can access. Each contributing organization had their own biases and limitations, problems, and areas of focus. There are biases and inherent problems with each data set, but when you combine them, that's when you start to see the strength, because now all of these biases start to level out and even off a little bit. There are still problems, including representativeness, but this is one of the ways to combat it. The simple first steps to building a data analysis program The first step is to just count and collect everything. As I work with organizations on their data, I see a challenge where people will try to collect only the right things, or the things that they think are going to be helpful. When they only collect things they originally think will be handy, they often miss some things that are ultimately really helpful to analysis. Just start out counting and collecting everything, even things you don't think are countable or collectible. At one point, a lot of people didn't think that you could put a breach, which is a series of events, into a format that could be conducive to analysis. I think we've got some areas we could focus on, like pen testing and red team activity. I think these are areas just right for a good data collection effort. If you're collecting all this data, you can do some simple counting and comparison: ‘This month I saw X number and this month I saw Y.’ As you compare, you can see whether there’s change, and then discuss that change. Is it significant, and do we care? The other thing is that a lot of people capture metrics and don’t actually ask the question, ‘Do we care if it goes up or down?’ That's a problem. Considerations for effective data visualization Data visualization is a very popular field right now. It's not just concerned with why pie charts might be bad—there's a lot more nuance and detail. One important factor to consider in data visualization, just like communicating in any other medium, is your audience. You have to be able to understand your audience, their motivations, and experience levels. There are three things you should evaluate when building a data visualization. First, you start with your original research question.
Then you figure out how the data collected answers that question. Then, once you start to develop a data visualization, you ask yourself whether that visualization matches what the data says, and whether it matches and answers the original question being asked. Thinking about those three parts of the equation, and making sure they all line up and explain each other, helps people communicate better.
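Jacobs' "count and collect everything" first step can be as simple as a monthly tally and comparison. The sketch below is hypothetical Python with invented event data and an arbitrary 20% threshold; the point is the workflow he describes: count, compare month over month, and then ask whether the change is significant and whether anyone cares.

```python
# A hypothetical sketch of the "just count and collect everything" advice:
# tally security events per month, compare this month to last month, and
# flag changes large enough to be worth discussing. The event data and the
# 20% threshold are invented for illustration; the real question to settle
# first is whether anyone cares when the number moves.
from collections import Counter

events = [
    # (month, event_type) pairs as they might come out of a ticket system
    ("2023-01", "phishing"), ("2023-01", "phishing"), ("2023-01", "malware"),
    ("2023-02", "phishing"), ("2023-02", "malware"), ("2023-02", "malware"),
    ("2023-02", "malware"),
]

monthly = Counter(month for month, _ in events)
this_month, last_month = monthly["2023-02"], monthly["2023-01"]
change = (this_month - last_month) / last_month

print(f"Last month: {last_month}, this month: {this_month}, change: {change:+.0%}")
if abs(change) > 0.20:  # arbitrary threshold; tune it to what you actually care about
    print("Change is large enough to discuss: is it significant, and do we care?")
```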
The O’Reilly Security Podcast: Why legal responses to bug reports are an unhealthy reflex, thinking through first steps for a vulnerability disclosure policy, and the value of learning by doing. In this episode, O’Reilly’s Courtney Nash talks with Katie Moussouris, founder and CEO of Luta Security. They discuss why many organizations have a knee-jerk legal response to a bug report (and why your organization shouldn’t), the first steps organizations should take in formulating a vulnerability disclosure program, and how learning through experience and sharing knowledge benefits all. Here are some highlights: Why legal responses to bug reports are a faulty reflex The first reaction to a researcher reporting a bug, for many organizations, is to immediately respond with legal action. These organizations aren’t considering that their lawyers typically don't keep their users safe from internet crime or harm. Engineers fix bugs and make a difference in terms of security. Having your lawyer respond doesn't keep users safe and doesn't get the bug fixed. It might do something to temporarily protect your brand, but that's only effective as long as the bug in question remains unknown to the media. Ultimately, when you try to kill the messenger with a bunch of lawsuits, it looks much worse than taking the steps to investigate and fix a security issue. Ideally, organizations recognize that fact quickly. It’s also worth noting that the law tends to be on the side of the organization, not the researcher reporting a vulnerability. In the United States, the Computer Fraud and Abuse Act and the Digital Millennium Copyright Act have typically been used to harass or silence security researchers who are trying to report something along the lines of ‘if you see something, say something.’ Researchers take risks when identifying bugs, because there are laws on the books that can be easily misused and abused to try to kill the messenger. There are similar laws in other countries that likewise discourage well-meaning researchers from coming forward. It’s important to keep perspective and remember that, in most cases, you’re talking to helpful hackers, who have stuck their neck out and potentially risked their own freedom to try to warn you about a security issue. Once organizations realize that, they're often more willing to cautiously trust researchers. First steps toward a basic vulnerability disclosure policy In 2015, market studies showed (and the numbers haven't changed significantly since then) that of the Forbes Global 2000, arguably home to some of the most prepared and proactive security programs, 94% had no published way for researchers to report a security vulnerability. That’s indicative of the fact that these organizations probably have no plan for how they would respond if somebody did reach out and report a vulnerability. They might call in their lawyers. They might just hope the person goes away. At the very basic level, organizations should provide a clear way for someone to report issues. Additionally, organizations should clearly define the scope of issues they’re most interested in hearing about. Defining scope also includes providing the bounds for things that you prefer hackers not do. I've seen a lot of vulnerability disclosure policies published on websites that say, ‘Please don't attempt to do a denial of service against our website, or against our service or products, because with sufficient resources, we know attackers would be able to do that.’
They clearly request that people not test that capability, as doing so would provide no value. Learning by doing and the value of sharing experiences At the Cyber U.K. Conference, the U.K. National Cyber Security Centre’s (NCSC) industry conference, there was an announcement about NCSC’s plans to launch a vulnerability coordination pilot program. They've previously worked on vulnerability coordination through the U.K. Computer Emergency Response Team (CERT U.K.), which has since merged into the NCSC. However, they hadn’t standardized the process. They chose to learn by doing and launch pilot programs. They invited a focused group of security researchers, whom they knew and had worked with in the past, to come and participate, and then they outlined their intention to publicly share what they learned. This approach offers benefits, as it's focused not only on specific bugs, but more so on the process, on the ways they can improve that process and share knowledge with their constituents globally. Of course, bugs will be uncovered, and strengthening the security of targeted websites is obviously one of the goals of the program, but the emphasis on process and learning through experience really differentiates their approach and is particularly exciting.
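One way to make Moussouris' point about scope concrete is to keep the policy's scope somewhere explicit and checkable. The sketch below is a hypothetical Python illustration (the contact address, domains, and rules are invented, and a real policy would of course also be published as prose on your site): it records what is in scope, what is out of scope, and what testing, such as denial of service, is explicitly discouraged.

```python
# A hypothetical sketch of making a disclosure policy's scope explicit and
# machine-checkable: list what is in scope, what is out of scope, and what
# testing is explicitly discouraged (such as denial of service). The domains
# and rules here are invented; the point is that scope lives somewhere
# concrete instead of only in a lawyer's inbox.
from dataclasses import dataclass


@dataclass(frozen=True)
class DisclosurePolicy:
    contact: str
    in_scope: frozenset[str]
    out_of_scope: frozenset[str]
    prohibited_testing: frozenset[str]

    def covers(self, asset: str) -> bool:
        return asset in self.in_scope and asset not in self.out_of_scope


policy = DisclosurePolicy(
    contact="security@example.org",
    in_scope=frozenset({"app.example.org", "api.example.org"}),
    out_of_scope=frozenset({"legacy.example.org"}),
    prohibited_testing=frozenset({"denial of service", "physical intrusion"}),
)

for asset in ("api.example.org", "legacy.example.org"):
    print(asset, "in scope:", policy.covers(asset))
```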
The O’Reilly Security Podcast: Threat hunting’s role in improving security posture, measuring threat hunting success, and the potential for automating threat hunting for the sake of efficiency and consistency. In this episode, I talk with Alex Pinto, chief data scientist at Niddel. We discuss the role of threat hunting in security, the necessity for well-defined process and documentation in threat hunting and other activities, and the potential for automating threat hunting using supervised machine learning. Here are some highlights: Threat hunting’s role in improved detection At the end of the day, threat hunting is proactively searching for malicious activity that your existing security tools and processes missed. In a way, it’s an evolution of the more traditional security monitoring and log analysis that organizations currently use. Experienced workers in security operations center environments or with managed security services providers might say, ‘Well, this is what I've been doing all this time, so maybe I was threat hunting all along.’ The idea behind threat hunting is that you're not entirely confident the tools and processes in place are identifying every single problem you might have. So, you decide to scrutinize your environment and available data, and hopefully grow your detection capability based on what you learn. There are some definitions, which I don't entirely agree with, that say, ‘It's only threat hunting when it's a human activity. So, the definition of threat hunting is when humans are looking for things that the automation missed.’ I personally think that's very self-serving. I think this human-centric qualifier is a little bit beside the point. We should always be striving to automate the work that we're doing as much as we can. Gauging success by measuring dwell time It's still very challenging to manage productivity and success metrics for threat hunting. This is an activity where it’s easy to spin your wheels and never find anything. There's a great metric called dwell time, which admittedly can be hard to measure. Dwell time measures the average time between when a machine was originally infected or compromised and when the incident response team finds the problem. How long did it take for the alert to be generated or for the issue to be found via hunting? We’ve all heard vendor pitches saying something along the lines of, ‘Companies take more than 100 days to find specific malware in their environments.’ You should be measuring dwell time within your own environment. If you start to engage in threat hunting and you see this number decrease, you're finding issues sooner, and that means the threat hunting is working. The environments where I've seen the most success with threat hunting utilized their incident response (IR) team for the task or built a threat hunting offshoot from their IR team. These team members were already very comfortable with handling incidents within the organization. They already understood the environment well, knew what to look for, and knew where they should be looking. IR teams may be able to spend some of their time proactively looking for things and formulating hypotheses about where there could be a blind spot or perhaps poorly configured tools, and then researching those potential problem areas. Documentation is key. By documenting everything, you build organizational knowledge and allow for consistency and measurement of success.
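Pinto's dwell-time metric is straightforward to compute once you record, for each incident, when the machine was likely compromised and when the team found it. The sketch below is a minimal Python illustration with invented incident timestamps; the trend of this average over time is what tells you whether hunting is working.

```python
# A minimal sketch of measuring dwell time in your own environment, as
# described above: the gap between when a machine was compromised and when
# the team found the problem, averaged across incidents. The incident
# records are invented for illustration.
from datetime import datetime, timedelta

incidents = [
    # (estimated compromise time, time the issue was detected or hunted down)
    (datetime(2023, 5, 1, 9, 0), datetime(2023, 5, 20, 16, 30)),
    (datetime(2023, 6, 3, 2, 15), datetime(2023, 6, 10, 11, 0)),
    (datetime(2023, 7, 12, 18, 45), datetime(2023, 7, 13, 9, 5)),
]

dwell_times = [found - compromised for compromised, found in incidents]
average_dwell = sum(dwell_times, timedelta()) / len(dwell_times)

print(f"Average dwell time: {average_dwell.days} days")
# If this number trends down after you start threat hunting, the hunting is working.
```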
The potential for automating threat hunting There are a lot of different factors you can consider in deciding whether something is malicious. The hard part is the actual decision-making process. What really matters is the ability of a human analyst to decide whether an activity is malicious or not and how to proceed. Using human analysts to review every scenario doesn't scale, especially given the complexity and number of factors they have to explore in order to make a decision. I’ve been exploring when and how we can automate that decision-making process, specifically in the case of threat hunting. For people who have some familiarity with machine learning, it appears threat hunting would fit well with a supervised machine learning model. You have vast amounts of data, and you have to make a call whether to classify something as good or bad. In any model that you’re training, you should use previous experience to classify benign activities to reduce noise. When we automate as much of this process as possible, we improve efficiency, the use of our team’s time, and consistency. Of course, it’s important to also consider the difficulties in pursuing this automation, and how we can try to circumvent those difficulties.
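Framed the way Pinto describes, threat-hunting triage looks like a supervised classification problem: past alerts that analysts have already labeled become training data, and the model routes only likely-bad activity to a human. The sketch below is a hypothetical Python illustration assuming scikit-learn is available; the features, data, and labels are invented, and nothing here speaks to how well such a model would perform in a real environment.

```python
# A hypothetical sketch of framing threat hunting as supervised learning,
# as discussed above: past alerts that analysts already labeled benign or
# malicious become training data, and the model triages new events so
# humans review only the likely-bad ones. The features and data are
# invented; this assumes scikit-learn is installed.
from sklearn.ensemble import RandomForestClassifier

# Each row: [events per hour, distinct destinations, off-hours fraction]
X_train = [
    [5, 2, 0.0], [8, 3, 0.1], [4, 1, 0.0],          # labeled benign by analysts
    [120, 40, 0.9], [300, 75, 0.8], [90, 55, 0.7],  # labeled malicious
]
y_train = [0, 0, 0, 1, 1, 1]  # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# New, unlabeled activity: route only the suspicious-looking rows to a human.
X_new = [[6, 2, 0.0], [200, 60, 0.85]]
for row, verdict in zip(X_new, model.predict(X_new)):
    print(row, "-> review" if verdict == 1 else "-> likely benign")
```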
The O’Reilly Security Podcast: How to approach asset management, improve user education, and strengthen your organization’s defensive security with limited time and resources. In this episode, I talk with Amanda Berlin, security architect at Hurricane Labs. We discuss how to assess and develop defensive security policies when you’re new to the task, how to approach core security fundamentals like asset management, and generally how you can successfully improve your organization’s defensive security with limited time and resources. Here are some highlights: The value of ongoing asset management Whether you're one person or you have a large security team, asset management is always a pain point. It’s exceedingly rare to see an organization correctly implementing asset management. In an ideal situation, you know where all of the devices are coming into your network. You have alerts set to sound if a new MAC address shows up. You want to know and be alerted if something plugs in or connects to your wireless network that you've never seen before, or haven't approved. You should never look at asset management as a box to check; it’s an ongoing process. Collaborate with your purchasing department—as they purchase PCs and distribute them, you should be tracking asset management at each step. And follow the same process when your organization gets rid of equipment. All laptops and servers eventually die; be sure to record those changes as well. This is important from a security perspective and may also save on software licensing, so you're not paying for licenses for computers you no longer have. Budget-friendly user education A lot of people have computer-based phishing education once a year; it gets lumped in with things like learning how to use a fire extinguisher. That never sticks. People will click straight through the training, retake the test until they get the passing grade, and quickly forget about it. Instead, you need a repetitive process with multiple levels. The first step is to find which of your organization's email addresses are readily available on the web. Those should be your first targets, because they are the most likely to be attacked by bots and other automated phishing programs. Then move on to people in finance, database administrators, and other individuals with significant power within the organization. Send them a couple of sentences of plain text and an internal link from a Gmail address to see if they give up their username and password. I have found that, before training, 60% to 80% of the employees targeted will click on the link. You should see clear progress over multiple levels of this training. Keep extensive metrics on the percentage of people who clicked the emailed link, and the percentage of people who gave up their passwords, both before and after training. And be careful not to only identify “wrong behavior.” Place emphasis on educating staff about whom to contact if something seems weird, and then provide positive reinforcement when they report suspicious activity quickly and effectively. Empowering your staff in this way provides quick, effective, and budget-friendly reporting. Preparation is key for incident response Incident response plans can be as simple or as complex as fits your organization’s needs. For some organizations, an incident response plan may be to shut everything off and call a third party for help. If you decide to go with a third-party incident response plan, you should have that contract in place beforehand.
If you wait until you need services immediately, you'll have no time or room to negotiate fees or compare providers. You’ll also be facing an emergency situation and will lose time providing background on your systems to the third party. Putting a plan in place in advance, no matter how simple, will be cost effective, save time, and allow you to recover from an incident more efficiently and effectively. Other organizations may be able to manage a full-blown investigation internally, depending on the severity. Some places are advanced enough that they can reverse-engineer malware independently. Many places aren't. Regardless, you must know where to draw the line on handling your incident response internally versus bringing in someone external to help. Once again, determining where that line is for your organization ahead of time is key. You don't want to have to make that decision in the middle of an incident.
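Berlin's ideal of alerting on any device you haven't approved can start as a simple comparison between an approved inventory and what is actually observed on the network. The sketch below is a hypothetical Python illustration (the MAC addresses and device names are invented, and in practice the observed set would come from DHCP leases, switch tables, or a monitoring tool).

```python
# A hypothetical sketch of the asset-management alerting described above:
# compare the MAC addresses currently seen on the network against an
# approved inventory and flag anything new. The inventory and observed
# addresses are invented for illustration.
approved_assets = {
    "aa:bb:cc:00:11:22": "finance-laptop-07",
    "aa:bb:cc:00:11:23": "build-server-01",
}

observed_on_network = {
    "aa:bb:cc:00:11:22",
    "de:ad:be:ef:00:01",   # never seen before
}

unknown = observed_on_network - set(approved_assets)
for mac in sorted(unknown):
    print(f"ALERT: unapproved device {mac} joined the network")

# The same inventory supports the lifecycle point: record devices when
# purchasing distributes them, and remove them when hardware is retired,
# so you are not paying for licenses on machines you no longer have.
```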
The O’Reilly Security Podcast: Key preparation before implementing a vulnerability disclosure policy, the crucial role of setting scope, and the benefits of collaborative relationships. In this episode, I talk with Kimber Dowsett, security architect at 18F. We discuss how to prepare your organization for a vulnerability disclosure policy, the benefits of starting small, and how to apply lessons learned to build better defenses. Here are some highlights: Gauging readiness for a vulnerability policy or a bug bounty program It’s critical to develop a response and remediation plan before you launch a disclosure policy. You should be asking, ‘Are we set up to respond to vulnerabilities as they come in?’ and ‘Do we have a workflow in place for remediation?’ Organizations need to be sure they're not relying on a vulnerability disclosure policy to find bugs, vulnerabilities, or holes in their applications and code. It’s critical to ensure you have a mature, solid product in place before you open it up to the world and invite scrutiny. Additionally, vulnerability disclosure policies and bug bounty programs shouldn't be thought of as low-cost quality assurance. Code that hasn't been tested isn't viable for these programs. If your product hasn't been tested, torn apart, tested again, and put through pen tests, then it’s not ready, particularly for a bug bounty program. Even if you're ready for a vulnerability disclosure policy, there's a good chance you're not yet ready for a bug bounty program. Start small and proceed with caution If you don’t start small, there's a good chance you're going to get hit in ways that you're not prepared to handle, and probably with issues you'd never even considered. When we launched the 18F policy, we launched it with three sites and then rolled out additional sites as they were ready. If a team said to me, ‘Okay, we think we're good to go to be added to the disclosure policy,’ then we would review their pen test results, development, back end, and code reviews. It's a much slower process, but it returns better results. Going all in at the start and declaring that everything is in scope for your policy is shooting yourself in the foot. We have been cautious, and we've had a very successful, slow rollout of vulnerability disclosure. We've proceeded with caution, and that has worked well for us. The benefits of building collaborative relationships When we confirm a vulnerability, our blue team explores how we would have defended against it, or ways we could defend against it until remediation is complete. Then, our pen testers, security engineers, or developers look to add something about the vulnerability to their toolkits so they can test for similar weaknesses as they are building apps. We really shoot for baked-in security, but there's always going to be a ‘gotcha.’ If researchers submit reports in meaningful ways, we are able to use that to save ourselves time and energy in the triage process, and move straight to determining the best defense and how to find and secure similar problems in the future. We’ve built a process that fosters collaborative relationships with researchers. When researchers make high-quality submissions, we ensure their discoveries are welcomed and, of course, responsibly disclosed. In a successful program, researchers have become part of the security process, as they’ve contributed in a meaningful way to the security of one of our applications. When researchers feel welcome, we all win.
The O’Reilly Security Podcast: How adversarial posture affects decision-making, how decision trees can build more dynamic defenses, and the imperative role of UX in security.In this episode, I talk with Kelly Shortridge, detection product manager at BAE Systems Applied Intelligence. We talk about how common cognitive biases apply to security roles, how decision trees can help security practitioners overcome assumptions and build more dynamic defenses, and how combining security and UX could lead to a more secure future.Here are some highlights: How the win-or-lose mindset affects defenders’ decision-making Prospect theory asserts that how we make decisions depends on whether we’re in a domain-of-gains mindset or a domain-of-losses mindset. An appropriate analogy is to compare how gamblers make decisions. When gamblers are in the hole, they're a lot more likely to make risky decisions. They're trying to recoup their losses and reason they can do that by making a big leap, even if it's unlikely to succeed. In reality, it would be better if they either cut their losses or made smaller, safer bets. But gamblers often don’t see things that way because they’re operating in a domain-of-losses mindset, which is also true of many security defenders. Defenders, for the most part, manifest biases that make them willing to make riskier decisions. They're more willing to implement solutions against a 1% likelihood of attack rather than implementing the basics—like two-factor authentication, good server hygiene, and network segmentation. We see a lot more defenders buying those really niche tools because, in my view, they're trying to get back to the status quo. They’re willing to spend millions on incident response, particularly if they've just experienced an acute loss, like a data breach. If they had spent those millions on basic controls, they likely wouldn't have had that breach in the first place. Planning dynamic defenses and overcoming assumptions with decision trees Defenders frequently have static strategies. They aren't necessarily thinking through the next steps: how attackers will respond if they implement two-factor authentication, antivirus software, or whitelisting. Decision trees codify your thinking and encourage you to figure out how an attacker might respond to or try to work around your initial defenses, not just your first step. Different branches show how you think an attacker could move throughout your network to get to their end goal. By including your defensive strategies and the probability of success for each, you're essentially documenting your assumptions about how likely your defensive tools are to work, and how likely attackers are to use certain moves. That means if you have a breach or incident, or if you get new data on attacker groups, you can start to refine your model. You can identify where your assumptions might have fallen through. It keeps you honest with tangible metrics, which is important in addressing cognitive biases. Knowing where you failed improves your defenses. It shows how your assumptions need to be tweaked. (A minimal sketch of such a tree follows these highlights.) Why security needs UX—and vice versa We've done a terrible job as an industry of incorporating UX into security design. People lament all the time, regardless of product, that security warnings aren't worded correctly. Either they scare users or people blindly click through them. No one seems focused on how to effectively incorporate security into product design itself.
Designers or developers often view security as a complete nuisance—necessary but, in many ways, a hindrance. Security professionals often view UX as a waste of time, and blame insecurity on users who click on things they shouldn’t. Security and UX need to meet in the middle. This is an area that is ripe for opportunity and needs to be explored because it could make a meaningful change in the industry. Using UX to encourage users to make better or more secure decisions as they conduct their various IT activities would have a huge impact on security.
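To make the decision-tree approach described above concrete, here is a minimal sketch. The attack steps, defenses, and probabilities are invented for illustration; the point is simply that writing assumptions down as branches with explicit success estimates gives you something tangible to refine when an incident or new threat intelligence contradicts them.

```python
# Minimal decision-tree sketch for defensive planning.
# Attack steps, defenses, and probabilities are illustrative assumptions.

tree = {
    "step": "phish credentials",
    "attacker_success": 0.30,          # assumed chance the attacker gets past this step
    "defense": "two-factor authentication",
    "children": [
        {
            "step": "move laterally to file server",
            "attacker_success": 0.20,  # assumed, given network segmentation
            "defense": "network segmentation",
            "children": [
                {
                    "step": "exfiltrate data",
                    "attacker_success": 0.50,  # assumed, given egress monitoring
                    "defense": "egress monitoring",
                    "children": [],
                }
            ],
        }
    ],
}

def path_probabilities(node, prob=1.0, path=()):
    """Walk the tree and report the estimated probability of each full attack path."""
    prob *= node["attacker_success"]
    path = path + (node["step"],)
    if not node["children"]:
        yield path, prob
    for child in node["children"]:
        yield from path_probabilities(child, prob, path)

for path, prob in path_probabilities(tree):
    print(" -> ".join(path), f"| estimated probability: {prob:.3f}")
# When an incident or new intel contradicts these numbers, update the branch
# estimates rather than the narrative in your head.
```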
The O’Reilly Security Podcast: Compounding security technical debt, the importance of security hygiene, and how the speed of innovation reintroduces vulnerabilities.In this episode, I talk with Dave Lewis, global security advocate at Akamai. We talk about how technical sprawl and employee churn compound security debt, the tenacity of solvable security problems, and how the speed of innovation reintroduces vulnerabilities.Here are some highlights: How technical sprawl and employee churn compound security debt Twenty-plus years ago when I started working in security, we had a defined set of things we had to deal with on a continuous basis. As our environments expand with things like cloud computing, we have taken that core set of worries and multiplied them plus, plus, plus. Things that we should have been doing well 20 years ago—like patching and asset management—have gotten far worse at this point. We have grown our security debt to unmanageable levels in a lot of cases. People who are responsible for patching end up passing that duty down to the next junior person in line as they move forward in their career. And that junior person in turn passes it on to whomever comes up behind them. So, patching tends to be something that is shunted to the wayside. As a result, the problem keeps growing. Reducing attack surface with consistent security hygiene We don't execute on the processes, standards, and guidelines that should exist in every environment for how you're going to do X, Y, and Z. Take SQL injection. If we are making sure we're sanitizing inputs and outputs from our applications, this attack surface by and large goes away. Is it 100%? No, but nothing in security is 100%, sadly. For patching, again, you have to have a proper regimen in place. It's sort of like this: I could build you a house if I have a hammer, but if I don't have the context of the larger plan to build that house, I’m stuck. There are tools available that can help you execute patch management. The tools and the abilities are there, but we need the processes to follow, and we need to execute on them. But the thing is, patching is not something that most people find enjoyable. We need to do a better job of seeing patching as an important part of protecting our environment and take pride in that. Innovation’s role in reintroducing previously solved problems Well, the Internet of Things (IoT) has really devolved into the new bacon. Any device you can get your hands on and slap an internet connection to is now IoT. I've seen kettles, I've seen toasters, I've seen toothbrushes that had internet connectivity. Here’s a question for you: if you have a device with an internet connection and you pull that connection, does your device stop working? I worry about this because we're getting so bogged down in the crush to create IoT devices that we're really, again, bypassing fundamentals. I've seen devices that are out on the internet using deprecated libraries, and in some cases reintroducing Heartbleed. This is abjectly silly. It's a problem we tackled a few years ago, only to see it reemerge in IoT devices that are online. Or, with the Mirai botnet, we saw default usernames and passwords. Programmatically, there's no good reason for that. That is an easily fixed problem.
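Lewis's point that SQL injection largely disappears when input handling is done properly maps to a long-established pattern: parameterized queries. The sketch below uses Python's standard-library sqlite3 module; the table and column names are made up for illustration, and the same idea applies to any database driver that supports bound parameters.

```python
# Parameterized queries keep user input as data, never as SQL syntax.
# Table and column names are illustrative; sqlite3 ships with Python.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('alice@example.com')")

user_input = "alice@example.com' OR '1'='1"   # a classic injection attempt

# Vulnerable pattern: string interpolation lets the input rewrite the query.
# query = f"SELECT * FROM users WHERE email = '{user_input}'"

# Safer pattern: the driver binds the value, so the quote tricks do nothing.
rows = conn.execute(
    "SELECT id, email FROM users WHERE email = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the injection attempt matches nothing
```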
The O’Reilly Security Podcast: Scaling machine learning for security, the evolving nature of security data, and how adversaries can use machine learning against us.In this special episode of the Security Podcast, O’Reilly’s Ben Lorica talks with Parvez Ahammad, who leads the data science and machine learning efforts at Instart Logic. He has applied machine learning in a variety of domains, most recently to computational neuroscience and security. Lorica and Ahammad discuss the challenges of using machine learning in information security.Here are some highlights: Scaling machine learning for security If you look at a day's worth of logs, even for a mid-size company, it's billions of rows of logs. The scale of the problem is actually incredibly large. Typically, people are working to somehow curate a small data set and convince themselves that using only a small subset of the data is reasonable, and then go to work on that small subset—mostly because they’re unsure how to build a scalable system. They’ve perhaps already signed up for doing a particular machine learning method without strategically thinking about what their situation really requires. Within my company, I have a colleague from a hardcore security background and I come from a more traditional machine learning background. We butt heads, and we essentially help each other learn about the other’s paradigm and how to think about it. The evolving nature of security data and the exploitation of machine learning by adversaries Many times, if you take a survey and see that most of the machine learning applications are supervised, what you're assuming is that the data you collected reflects the true underlying distribution. In statistics, this is called the stationarity assumption. You assume that this batch is representative of what you're going to see later. You are going to split your data into two parts; you train on one part and you test on the other part. The issue is, especially in security, there is an adversary. Any time you settle down and build a classifier, there is somebody actively working to break it. There is no assumption of stationarity that is going to hold. Also, there are people and botnets that are actively trying to get around whatever model you constructed. There is an adversarial nature to the problem. These dual-sided problems are typically dealt with in a game-theoretic framework. Basically, you assume there's an adversary. We’ve recently seen research papers on this topic. One approach we’ve seen is that you can poison a machine learning classifier to act maliciously by messing with how the samples are being constructed or by adjusting the distribution that the classifier is looking at. Alternatively, you can try to construct safe machine learning approaches that go in with the assumption that there is going to be an adversary, and then reason through what you can do to thwart that adversary. Building interpretable and accessible machine learning I think companies like Google or Facebook probably have access to large-scale resources, where they can curate and generate really good quality ground truth. In such a scenario, it's probably wise to try deep learning. On a philosophical level, I also feel that deep learning is like proving there is a Nash equilibrium. You know that it can be done. How it’s exactly getting done is a separate problem. As a scientist, I am interested in understanding what, exactly, is making this work.
For example, if you throw deep learning at this problem and the thing comes back, and the classification rates are very poor, then we probably need to look at a different problem, because you just threw the kitchen sink at it. However, if we find that it is doing a good job, then we need to start from there and figure out an explainable model that we can train. We are an enterprise, and in the enterprise industry, it's not sufficient to have an answer; we need to be able to explain why. For that, there are issues in simply applying deep learning as it is. What I'm really interested in these days is the idea of explainable machine learning. It’s not enough that we build machine learning systems that can do a certain classification or segmentation job very well. I'm starting to be really interested in the idea of how to build systems that are interpretable, that are explainable—where you can have faith in the outcome of the system by inspecting something about the system that allows you to say, ‘Hey, this was actually a trustworthy result.’ Related resources: Applying Machine Learning in Security: A recent survey paper co-written by Parvez Ahammad
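The stationarity assumption Ahammad describes is baked into the standard train/test workflow. The small sketch below uses synthetic data and scikit-learn (both illustrative choices, not anything from the interview) to show how a classifier that looks fine on a held-out split can degrade once an adversary shifts the distribution it is scored on.

```python
# Sketch: a held-out test split assumes tomorrow's data looks like today's.
# The data is synthetic; in security, an adversary actively breaks that assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# "Benign" traffic clusters near 0, "malicious" near 3 (purely illustrative features).
benign = rng.normal(0.0, 1.0, size=(500, 2))
malicious = rng.normal(3.0, 1.0, size=(500, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("accuracy on i.i.d. test split:", clf.score(X_test, y_test))

# An adversary adapts: new malicious samples are crafted to sit near the benign cluster.
adapted = rng.normal(0.5, 1.0, size=(500, 2))
print("accuracy on adapted attacks:", clf.score(adapted, np.ones(500, dtype=int)))
```

The i.i.d. score will look reassuring while the score on the adapted samples collapses, which is the gap between a stationary benchmark and an adversarial deployment.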
The O’Reilly Security Podcast: The five stages of vulnerability disclosure grief, hacking the government, and the pros and cons of bug bounty programs.In this episode, I talk with Katie Moussouris, founder and CEO of Luta Security. We discuss the five stages of vulnerability disclosure grief, hacking the government, and the pros and cons of bug bounty programs.Here are some highlights: The five stages of vulnerability disclosure grief There are two kinds of reactions we see from organizations that have never received a bug report before. Some of them are really grateful, and that's ideally where you want people to start, but a lot of them go through what I call the five stages of vulnerability response grief. At first, they are in denial; they say, ‘No, that's not a bug—maybe you're mistaken,’ or they get angry and send the lawyers, or they try to bargain with the bug hunter and say, ‘Maybe if we just do something really stupid and try to mask what this is, maybe you won't talk about it publicly, or tweet about it.’ Then they often get really depressed because they realize this is just one bug report from one bug finder and there might be a ton of bugs they don't know what to do with. Until finally, they get to the acceptance stage. Ideally, we like it when organizations have gotten to that acceptance stage, when they realize there are bugs in everything, and eventually somebody is going to report a security vulnerability to the organization. Even if you've just got a website on the internet, it's possible that somebody will find and report a security issue to you. Hacking the government Hack the Pentagon came about because the U.S. Department of Defense was really interested in hearing about how bug bounty market incentives can be manipulated. The kinds of bugs reported to Microsoft's early beta bounty programs would have fetched six figures on the offense market, yet at the time, Microsoft wasn't paying six figures per bug for beta bugs—in fact, nobody was—so understanding those market behaviors actually helped the Pentagon feel comfortable trying out a bug bounty pilot, which is what happened last year. The results were great for the Pentagon. They got 138 vulnerabilities reported in a 21-day period. They fixed them all within six weeks, I believe. They paid $75,000 in bug bounties to find that many vulnerabilities. It was costing them more than a million dollars a year in federal contracts with their usual security vendors, and they were typically receiving maybe two or three bug reports a month. There was finally a legal channel for security researchers who wanted to help the government to be able to do so without risking their freedom. (Editor’s note: Moussouris just helped launch a similar effort with the UK’s National Cyber Security Centre.) The pros and cons of bug bounties Anyone can offer cash for bugs. Whether or not it turns out well for them depends on a whole lot of things. Bug bounties can be useful as a focus incentive. If an organization has a pretty good handle on their vulnerabilities and has a process for dealing with the ones they already know about, then that might be a good area to focus on, but I typically don't think it's a good way to start.
It has become trendy in the last year or so, as bug bounties have caught on, for company leaders to say, ‘We're not getting good vulnerability reports—let’s pay 10 times the bug bounty amounts for a period of time and attract a whole bunch of researchers.’ You might do that, and yes, you might get a whole swarm of bug reports, but are they really the most valuable bugs—the ones that are actually going to help you secure your users, your customers, your enterprise, or your website? Or, are they just going to be a whole swarm of the same bug reported by multiple sources because it was a little bit of a low-hanging-fruit exercise? I caution people to think through their incentive models. What is it that you really want? Do you want more bug reports? What types of bug reports do you want? How can you structure this so you're not wasting all your resources and money on an outsourced bug bounty service provider, or on triage provider resources, paying them to sift through reports for you? What would you save by finding these bugs more effectively with a decent security testing program and maybe a full-time person in-house? I talk a lot of people off the bug-bounty ledge, especially if they haven't done a whole lot of their own homework and testing. Organizations are always going to have competing needs when it comes to spending their security dollars, and I think from a holistic view, bug bounties are not going to be the 100% perfect answer for making people more secure. You cannot “bounty” your way to being secure, the same way you can't “penetration test” your way to being secure.
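The Hack the Pentagon numbers Moussouris cites allow a rough back-of-the-envelope comparison. The sketch below only rearranges figures quoted above (138 reports for $75,000 versus more than $1 million a year for roughly two to three reports a month); it is not an endorsement of cost-per-report as the only metric, and the vendor figures are treated as loose bounds.

```python
# Back-of-the-envelope comparison using only the figures quoted above.
bounty_paid = 75_000          # Hack the Pentagon pilot payouts
bounty_reports = 138          # vulnerabilities reported in the 21-day pilot

vendor_spend = 1_000_000      # "more than a million dollars a year" (lower bound)
vendor_reports = 2.5 * 12     # "maybe two or three bug reports a month"

print(f"pilot:   ~${bounty_paid / bounty_reports:,.0f} per report")
print(f"vendors: ~${vendor_spend / vendor_reports:,.0f} per report (at best)")
# Roughly $543 per report versus roughly $33,000 per report -- and yet, as the
# interview stresses, you still cannot 'bounty' your way to being secure.
```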
The O’Reilly Security Podcast: Focusing on defense, making security better for everyone, and how it takes a village.In this episode, I talk with Allison Miller, product manager for secure browsing at Google and my co-host of the O’Reilly Security conference, which is returning to New York City this fall. We discuss the importance of having an event focused solely on defense, what we’re looking forward to this year, and some notable ideas and topics from the call for proposals.Here are some highlights: Focusing on defense When we created the O’Reilly Security conference, we took a risk because we said, "We're going to focus on the defenders, the folks who are protecting the users and the systems." I heard from others over and over again, "How are you going to make a whole agenda out of that?" because it's usually one track at a major security event, or a handful of talks on authentication or SIEM technology. At some security events, someone who works on the defense side can feel a little under attack because that's what's being discussed—attacks and how people are not successfully defending against them. This was more like, "Hey, you know what? Let's sit down and talk about how to do this right." That engendered a different spirit of dialog amongst the participants. Learning from mistakes to make things better Thematically, we picked some pretty broad topics for the conference, like the effect security has on people. Additionally, most defensive work in the private sector happens in the context of a business, so understanding how security fits into the larger business unit is critical. And when it comes to technology itself—talking about the tools and also data, metrics, analysis and that side of it—those are broad topics with plenty of room to explore. We’re also making room for more war stories, more discussions about learning from the trenches—big “Oops” moments and how those get turned into lessons learned and concrete improvements. The real emphasis in the discussions we’re looking for is, “Let's make things better.” When something bad happens or a mistake is made, that means you can push off from the wall, like you're doing a kick turn in swimming. It gives you something to push off against, redirecting your effort to allow you to get to the other end of the pool and then to get better. On the horizon for O’Reilly Security conference 2017 For this year’s call for proposals, I would like to hear about what people are doing for end users. That's my personal passion. I am also interested to hear from people who are putting their big data to work for them, who’ve figured out how to quantify impact, or measure or analyze complex systems or situations, and distill those down. Reasonable approaches for small businesses are also a hot topic, because a lot of the techniques that we talk about, and the aspects of security being considered as a part of the design process, are very important here. You're trying to design security into systems for end users, or leverage data in clever ways. Those types of things scale up far more readily than they scale down. It's not even a question of resourcing—when you are an organization with a smaller footprint, some of the techniques that are used at large, high-scale organizations just aren't going to work. It takes a village Security is interdisciplinary because, ultimately, it's not just a technology problem—if it were just a technology problem, we would be done by now.
We would just apply the right technology to the technology problem, and we could all go home. But it's a human problem because the actors are humans, they are motivated, and people are a vector of vulnerability, just as much as the systems and data are. Related resources: 6 ways to hack the O’Reilly Security Conference CFP This is how we do it: Behind the curtain of the O’Reilly Security Conference CFP
The O’Reilly Security Podcast: Building systems that help humans, designing better tools through user studies, and balancing the demands of shipping software with security.In this episode, O’Reilly Media’s Mac Slocum talks with Scout Brody, executive director of Simply Secure. They discuss building systems that help humans, designing better tools through user studies, and balancing the demands of shipping software with security.Here are some highlights: Building systems that help humans We tend to think of security as a technical problem and the user as the impediment to our perfect solution. That's why I try to bring the human perspective to the community. I think of human beings as the real end-goal of the system. Ultimately, if we aren't building systems that are meeting the needs of humans, why are we building systems at all? It's very important for us to get out and talk to people, to engage with users and understand what their concerns are. Designing better tools through user studies A powerful tool you can adopt when talking to users is the cognitive walkthrough. In essence, you ask them to tell you what they're thinking as they're thinking it. So, if you're going to do a cognitive walkthrough for an encryption program, you might say, ‘I'd like you to encrypt this email message. Please tell me what you're doing as you're doing it and all of the thoughts that occur to you.’ You might hear someone say, ‘Oh, wow, okay, so I'm going to encrypt. I don't really know what I'm doing. I'm going to start by pushing this button because that looks good. That's green. I'm going to push that.’ You can really hear the thought process that people are going through. If you're in a more formal user study context, it can be useful to get the user's consent to videotape—not necessarily the person, but the screen—and see what they're doing because then you can play it for your colleagues. This is one of the most convincing ways you can make a case that your tool has problems or needs improvement. Just by videotaping people trying to use a tool and showing the challenges they face, you can identify ways to improve the user experience. Balancing security with shipping software Given my human orientation, I view software as a process, not a product. So, what are the human processes you can build in to make sure the security goals are met? To that end, you should be thinking about your developers and thinking about the people who are trying to get your software out the door. As human beings, what are the psychological components that you, as an engineering manager or a security advocate within your organization, can instrument to try to incentivize them to focus on security? It's a continuous effort, which makes it hard. It's challenging. But just like any kind of technical debt, if you don't chip away at it little by little, over time it will grow until it's a mountain.
The O’Reilly Security Podcast: Speaking other people’s language, security for small businesses, and how shame is a terrible motivator.In this episode, I talk with Jessy Irwin, VP of security and privacy at Mercury Public Affairs. We discuss how to communicate security to non-technical people, what security might look like for small businesses, and moving beyond shame. We also meet her neighborhood gang of grannies who’ve learned how to hack back.Here are some highlights: Speaking other people’s language One of the first things I do when talking to non-technical people is to stop using jargon. The average person doesn't know what encryption is, and if they've heard of the word before, it is probably perceived as something for terrorists, not for them. "Password manager" is not an intuitive phrase to most people, so I could say, "Well, you need a password app," and suddenly the whole world becomes a different place for someone who didn't realize that such a thing exists. It’s important that we communicate with people using their own terms and recognize that the average person is not going to use the word "hacked" the way the security person uses the word "hacked." Accepting that those moments, which to a professional ear sound like nails on a chalkboard, are going to happen—that completely changed the way I do things. Your local law office isn’t Netflix or Google A lot of the people I work with aren't from tech companies. They tend to be with government organizations or in verticals that maybe use technology but don't necessarily ship their own technology. It seems like a lot of people in security think it's completely realistic to expect companies to start security teams, to hire a lot of engineers and run these tools that are five- to six-figure purchases a year. That's not going to work for the average business. These organizations often outsource security services and may not run security tools in-house. They might need security to be managed externally, or they need to focus on configuring tools and processes to allow their small team to build security into the workflow process. Not all of that is going to require engineers, and not every company can or should spend $3 million on security, especially if the organization is a law firm down the street or the mortgage broker around the corner. Making tools work for people As an industry, we need to work really hard to make sure our tools are accessible to the average user. Otherwise, the person who handles these tasks, potentially only as a part-time IT staffer, is not going to be able to use them. If a business has a full-time IT administrator, they might be able to utilize an intern on occasion. Frequently, businesses won’t have a security-minded IT administrator, meaning the person making decisions in a small business won’t necessarily be a security expert. We need more consumer-friendly tools because then they're also small business-friendly, which is basically the same audience. We have to focus and be prepared to look at security and say, ‘How do we make this work in half the time? How do we make it work for one dedicated IT person, and then how do we make it work for an organization with a small IT team?’ Then, from there, where do they even start with security? At what point do they need to actually have a security hire, and how can they help that security hire build programs and think in a way that's going to produce returns for their business?
Moving beyond shame Making people feel bad when they know they have failed or when they're trying to get it together is the number one way we set our average consumer or business up for failure. If someone walks in and says, ‘Hey, I'm having a problem with my router. It's being really weird. I'm not sure what's up.’ And, some security nerd looks at it and says, ‘Oh, my god. You're an idiot. Why would you ever configure it this way?’ That's not just being a bad person; that's being a really bad ambassador for the kind of work we do. We have to work really hard to say, ‘Yes, that’s okay’ in the right way to positively reinforce good decisions. If we don't, I really don't know what the future looks like.
The O’Reilly Security Podcast: The problem with perimeter security, rethinking trust in a networked world, and automation as an enabler.In this episode, I talk with Doug Barth, site reliability engineer at Stripe, and Evan Gilman, Doug’s former colleague from PagerDuty who is now working independently on Zero Trust networking. They are also co-authoring a book for O’Reilly on Zero Trust networks. They discuss the problems with traditional perimeter security models, rethinking trust in a networked world, and automation as an enabler.Here are some highlights: The problem with perimeters Evan: The biggest issue with a perimeter model is that it tends to encourage system administrators to define as few perimeters as possible. You have your firewall, so anything out on the internet is bad, anyone on the inside is trusted, and maybe down the line you'll further segment this and add more firewalls. Maybe if you're really rigorous, you might do per-host firewalls, but in reality, most people say, ‘It's on my trusted network, so it's a trusted interaction. Why should I go through that effort? What's the value?’ The issue with that thought process is that we keep seeing bad people get behind the perimeter, time and time again. Once they get behind it, they can just do whatever they want. Doug: The alternative is proactively figuring out how to manage the trust in your network. Whom do you trust? Why do you trust them? Do you have enough trust for them? When I want to build a secure network, my goal is not to remove people's access; it's to help distribute the problem and get enough eyes on whom I trust and whether I should continue to trust them. It's a trust-but-verify approach. Moving to Zero Trust Evan: Shifting from a perimeter security model to Zero Trust is scary. But the good news is, we know how to do this already. We have internet-facing services, and we know how to serve up resources across the internet and secure them, so the network between you and the resources is transparent from a security perspective. VPNs famously do this. Secure Sockets Layer (SSL) websites and other similar approaches are what we consider "internet security," and we already know how to do this. In a Zero Trust approach, we just apply it across the board, and use automation as a key enabler. Large migrations to a low-trust network, like Google’s recent effort, involve a lot of auditing prep and very careful implementation. For instance, you need to craft policies on a case-by-case basis and turn them on in logging mode only, so you're aware of who will be blocked before you actually block them. Automation as an enabler Doug: Each engineering team in a company should be able to define the security policy that their individual service needs to function. We distribute that problem across many teams, but then we push all those policies into a secure infrastructure that actually implements that policy. This isn't just a crazy idea we had. This is how I understand Google's BeyondCorp initiative works. Google wanted to get rid of their VPNs but still have a lot of secure policies. They call what they built a ‘shared access gateway.’ They give each engineering team a domain-specific language (DSL) for defining their security policies, but the shared access gateway is what actually implements the policy. They layer on top of that the broad-reaching policy for the entire organization.
This type of automation—the ability to programmatically define your policy and your enforcement—allows you to give people a lot more access. Once you start capturing all this policy and how it changes over time in code, you can do much more advanced security policy or security enforcement in your network. Evan: Having that policy definition in code is something you can use to programmatically generate enforcement rules. Those enforcement rules can vary based on the underlying platform or a condition, but the key is that they’re generated by a computer. This allows you to rapidly change the enforcement rules, and paves the way to highly dynamic policy, as opposed to the more static policy we see in perimeter networks. Start small Evan: The first place to start is collecting policy and understanding what should be there. Once you know what should be there, you can understand what is there unexpectedly or what is there but should not be. You can build up this list and slowly move from blacklist to whitelist mode, saying, ‘We'll only allow the things on this list.’ Once you get to this whitelist mode, it becomes largely self-maintaining. At PagerDuty, we started small. First, we put regular IP policy in place, but that IP policy was going to be automated and backed by code. Once we got that in and vetted, we turned the knob up on granularity. We spread that granularity to more places and eventually turned up the encryption. It's totally acceptable to adopt one or two of the principles we're setting forth here and then add the rest when the time is appropriate. Additionally, it doesn't have to be for 100% of your infrastructure. You can start with the parts of your infrastructure that could benefit the most from it. Doug: If you tilt your head the right way, you could argue that Amazon's security groups are a shade of a Zero Trust network, in that AWS users could arrange their network into nicely crafted subnets or just start tagging hosts with certain security groups and use those to define policy. If you're on AWS, do a security group per role, then use that everywhere to define access. That will get you part of the way there, and you leave yourself open to extending out to different providers later. When the inevitable happens Evan: First and foremost, a Zero Trust approach dramatically slows down any potential breach. And breaches are rare in this kind of architecture because the policies are so granular and the movement is so limited. Another benefit of a Zero Trust network is its robust auditing and logging. When policies get changed programmatically, for example, you'll have a record of it. When a breach does happen, not only is the progression of the attack very slow, but you also have very good visibility into exactly what occurred and when and how. Doug: This is a key point in this type of network design. It's not just about enforcing; it's also about continual monitoring of changes of state. You build yourself a way to detect problems and perform forensics. The ultimate benefit is creating a feedback loop where your trust in a system is directly driven by logs of what that system is currently doing, so you can detect anomalies. If someone logs into an organization’s network from a potentially risky region of the world, then their authorization level can be knocked down until they’re seen in person to validate their access. 
Organizations can adopt policies that require staff to visit corporate headquarters regularly; otherwise, their trust level gets knocked down. Simplistic policies like that aren't doing any fancy machine learning, but even that sort of basic additional layer of security can help.
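To make the "policy as code" idea above concrete, here is a minimal, hypothetical sketch; it is not from the podcast and not Google's or PagerDuty's actual tooling. Each service team declares only what its service needs, and a small program compiles those declarations into AWS-security-group-style ingress rules, one security group per role. The policy schema and the render_aws_ingress function are invented for illustration.

```python
# Hypothetical illustration of "policy as code": per-service policy is
# declared as data, and enforcement rules are generated by a program
# rather than edited by hand. The schema and names are invented for
# this sketch; they are not from the podcast or any specific product.

# Each service team declares only what its service needs.
POLICIES = {
    "web": [
        {"from_role": "load-balancer", "port": 443, "proto": "tcp"},
    ],
    "api": [
        {"from_role": "web", "port": 8443, "proto": "tcp"},
    ],
    "db": [
        {"from_role": "api", "port": 5432, "proto": "tcp"},
    ],
}


def render_aws_ingress(role, rules):
    """Compile one role's policy into AWS-style ingress permissions.

    The output roughly mirrors the shape boto3's
    authorize_security_group_ingress expects, assuming one security
    group per role (an approximation of the 'security group per role'
    suggestion above). The sg- identifiers are placeholders.
    """
    return [
        {
            "IpProtocol": r["proto"],
            "FromPort": r["port"],
            "ToPort": r["port"],
            # Reference the calling role's security group, not an IP
            # range, so the rule keeps working as hosts come and go.
            "UserIdGroupPairs": [{"GroupId": f"sg-{r['from_role']}"}],
        }
        for r in rules
    ]


if __name__ == "__main__":
    for role, rules in POLICIES.items():
        print(role, render_aws_ingress(role, rules))
```

Because the rules are generated rather than hand-edited, the same declarations can be re-rendered for a different enforcement point (per-host firewalls, a gateway) and, per Evan's advice, reviewed in logging-only mode before anything is actually blocked.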
The O’Reilly Security Podcast: Saving the Network Time Protocol, recruiting and building future open source maintainers, and how speed and security aren’t at odds with each other. In this episode, O’Reilly’s Mac Slocum talks with Susan Sons, senior systems analyst for the Center for Applied Cybersecurity Research (CACR) at Indiana University. They discuss how she initially got involved with fixing the open source Network Time Protocol (NTP) project, recruiting and training new people to help maintain open source projects like NTP, and how security needn’t be an impediment to organizations moving quickly. Here are some highlights: “Help. I need a sysadmin.” It all started in February of 2015 when the NTP implementation maintainer, Harlan Stenn, came to me. Among NTP's many problems, there was a build box, and the entire build system depended on this one server in Harlan's home continuing to function. Harlan no longer had the root password for the system, couldn't update it, didn't know what scripts were running on it, and no one in the world could build NTP without this server continuing to function. As I was helping him, I was seeing the state of the code and infrastructure, and I found out exactly how deep the rabbit hole went. It was a moment of panic. ‘If I don’t fix this, the internet is going to fall down, finance is going to fall down, and a lot of cryptographic security is going to stop working and be very attackable. We're already having major DDoS problems because no one's fixed this.’ I figured out a long time ago that if there's an emergency you’re seeing and no one else is fixing it, that means you're in charge. Recruiting to save the internet The terrifying thing about infrastructure software in particular is that paying your internet service provider (ISP) bill covers all the cabling that runs to your home or business, the people who work at the ISP and their routing equipment, power, billing systems, and marketing, but it doesn't cover the software that makes the internet work. That is maintained almost entirely by aging volunteers, and we're not seeing a new cadre of people stepping up and taking over their projects. What we're seeing is ones and twos of volunteers who are hanging on but burning out while trying to do this in addition to a full-time job, or are doing it instead of a full-time job and should be retired, or are retired. It's just not meeting the current needs. Early- and mid-career programmers and sysadmins say, ‘I'm going to go work on this really cool user application. It feels safer.’ They don't work on the core of the internet. Ensuring the future of the internet and infrastructure software is partly a matter of funding (in my O’Reilly Security talk on saving time, I talk about a few places you can donate to help with that, including ICEI and CACR) and partly a matter of recruiting people who are already out there in the programming world to get interested in systems programming and learn to work on this. I'm willing to teach. I have an Internet Relay Chat (IRC) channel set up on freenode called #newguard. Anyone can show up and get mentorship, but we desperately need more people. Building for speed and security Security only slows you down when you have an insecure product, not enough developer resources, not enough testing infrastructure, or not enough infrastructure to roll out patches quickly and safely. 
When your programming teams have the infrastructure and scaffolding around software they need to roll out patches easily and quickly—when security has been built into your software architecture instead of plastered on afterward, and the architecture itself is compartmented and fault tolerant and has minimization taken into account—security doesn't hinder you. But before you build, you have to take a breath and say, ‘How am I going to build this in?’ or ‘I’m going to stop doing what I’m doing, and refactor what I should have built in from the beginning.’ That takes a long view rather than short-term planning. Working from first principles The single biggest issue we're facing right now in the security industry is that we are pushing things that make good sound bites over things that are good first-principle security. Whenever you have a situation where not enough people understand the issues and there's a lot at stake and a lot of money moving around, there is a tendency to try to sound cool and be easy to absorb and make people feel safe instead of getting good work done. I hate to break it to you, but really good engineering is rarely sexy. Fixing the pipes is rarely sexy. Often, the best things to do don't make good sound bites. If we teach more people to work from first principles and have more mature discussions, then we can actually get our C-suite or leadership involved because we can talk in concepts that they understand instead of just talking about what firewall rules we need.
The O’Reilly Security Podcast: Human error is not a root cause, studying success along with failure, and how humans make systems more resilient. In this episode, I talk with Steven Shorrock, a human factors and safety science specialist. We discuss the dangers of blaming human error, studying success along with failure, and how humans are critical to making our systems resilient. Here are some highlights: Humans are part of complex sociotechnical systems For several decades now, human error has been blamed as the primary cause of somewhere between 70% and 90% of aircraft accidents. But those statistics don’t really explain anything at all, and they don’t even make sense because all systems are composed of a number of different components. Some of those components are human—people in various positions and roles. Other components are technical—airplanes and computer systems, and so on. Some are procedural, or are soft aspects like the organizational structure. We can never, in a complex sociotechnical system, isolate one of those components as the cause of an accident, and doing so doesn't help us prevent accidents, either. There is no such thing as a root cause We have a long history of using human error as an explanation, partly because the way U.S. accident investigations and statistics are presented at the federal level highlights a primary cause. That is a little naïve (primary and secondary causes don’t really exist; that's an arbitrary line), but if investigators have to choose something, they tend to choose a cause that is closest in time and space to the accident. That is usually a person who operates some kind of control or performs some kind of action, and is at the end of a complex web of actions and decisions that goes way back to the design of the aircraft, the design of the operating procedures, the pressure that's imposed on the operators, the regulations, and so on. All of those are quite complicated and interrelated, so it's very hard to single one out as a primary cause. In fact, we should reject the very notion of a primary cause, never mind pinning the blame on human error. Studying successes along with failures If you only look at accidents or adverse events, then you're assuming that those very rare unwanted events are somehow representative of the system as a whole, but in fact, it's a concatenation of causes that come together to produce a big outcome. There's no big cause; it's just a fairly random bunch of stuff that's happened at the same time and was always there in the system. We should not just be studying when things go wrong, but also how things go well. If we accept that causes of failure are inherent in the system, then we can find them in everyday work and will discover that very often they're also the causes of success. So, we can't simply eliminate them; we've got to look deeper into it. Humans make our systems resilient Richard Cook, an Ohio State University SNAFU catcher, says that most complex sociotechnical systems are constantly in a degraded mode of operation. That means that something in that system (and usually a lot of things) is not working as it was designed. It may be that staffing numbers or competency aren’t at the level they should be, or refresher training's been cut, or the equipment may not be working right. We don't notice that our systems are constantly degraded because people stretch to connect the disparate parts of the systems that don't work right. 
You know that, in your system, this program doesn't work properly and you have to keep a special eye on it; or you know that this system falls down now and then, and you know when it's likely to fall down, so you keep an eye out for that. You know where the traps are in the system and, as a human being, you want the resilience; you want to stop problems from happening in the first place. The source of resilience is primarily human; it's people that make the system work. People can see the purpose in a system, whereas procedures can only look at a prescribed activity. In the end, we have a big gap between at least two types of work—work as imagined (what we think people do), and work as done (what people actually do)—and in that gap is all sorts of risk. We need to look at how work is actually done and be mindful of how far that's drifted from how we think it's done. Related resources:
Behind Human Error
Nine Steps to Move Forward From Error
‘Human Error’: The handicap of human factors, safety and justice
Human error (position paper for NATO conference on human error)
The O’Reilly Security Podcast: Sniffing out fraudulent sleeper cells, incubation in money transfer fraud, and adopting a more proactive stance. In this episode, O’Reilly’s Jenn Webb talks with Fang Yu, cofounder and CTO of DataVisor. They discuss sniffing out fraudulent sleeper cells, incubation in money transfer fraud, and adopting a more proactive stance against fraud. Here are some highlights: Catching fraudsters while they sleep Today's attackers are not using single accounts to conduct fraud; if they have a single account, the fraud they can conduct is very limited. What they usually do is construct an army of fraud accounts and then orchestrate either mass registration or account takeovers. Each of the individual accounts will then conduct small-scale fraud. They can do spamming, phishing, and all different types of malicious activity. But because they use many coordinated individual accounts, the attacks are massive in scale. To detect these, we take what is called an unsupervised machine learning approach. We do not look at individual users anymore—we take a holistic view of all the users and their correlations and linkage, and we use graph analysis and clustering techniques to identify these fraud rings. We can identify them even while they are sleeping. Hence, we call them ‘sleeper cells.’ Distinguishing bad from good is increasingly difficult The biggest threat we are facing right now is that fraudsters have almost unlimited resources and are equipped with advanced technologies. They can access cloud resources in a data center, for example, and they have underground markets with access to people who specialize in creating new accounts, getting stolen credit cards, and taking over users’ existing accounts. In addition, they often have significantly more information than normal users would possess. For example, they can get credit reports and know exactly where a user lived three years ago, five years ago, and where they worked. The information they gather is very accurate, and that makes it easy for fraudsters to effectively impersonate a legitimate person. Accordingly, when online service providers see a request come in online, it's very hard for them to distinguish whether it is coming from a real user or a fraudster. Incubation in money transfer attacks When fraudsters set up different accounts for money transfers, they frequently start by testing small transactions. In the very beginning, it's all legitimate. They send small amounts to different users, and they use legitimate banking information, so there is no chargeback. After that, they incubate for weeks or longer. After that incubation period, they use these accounts to conduct much larger transactions, because they’ve already established a reputation for those accounts. Then, they begin conducting fraudulent transactions. That's one of the typical trends we see in our analysis. More than a quarter of fraudster accounts incubate, and many incubate a long time—more than 30 days before they start attacking. More than 11% attack after incubating for more than 100 days. We saw one extreme case of a group of accounts that aged for more than three years before they started attacking. Moving from reactive to proactive detection At DataVisor, we do not want a point solution that only catches what attackers are already doing. That’s a cat-and-mouse game. We want to stay ahead of the game and know when fraudsters start doing something, or even anticipate when they’ll start before they do anything. 
We use data analytics to look at the behavior of attackers alongside that of normal users, and extract fraudulent activities. Attackers have a lot of advanced techniques right now. They can go through two-factor authentication, and they have access to data centers. So, we use the latest technologies to defend against them and to build systems that they cannot invade—because, in the end, by looking at the attackers’ behavior, we can create a system that can detect and preempt fraud.
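As a rough illustration of the unsupervised, graph-based approach Fang Yu describes, the toy sketch below links accounts that share a registration signal (an IP address or device fingerprint) and treats unusually large connected components as candidate fraud rings. This is not DataVisor's algorithm; the signals, threshold, and data are invented for the example.

```python
# Toy illustration of unsupervised, graph-based fraud-ring detection.
# NOT DataVisor's algorithm; the data and threshold are invented.
import networkx as nx

# (account, shared signal) observations, e.g. registration IP or device ID.
events = [
    ("acct1", "ip:203.0.113.7"),
    ("acct2", "ip:203.0.113.7"),
    ("acct3", "device:abc123"),
    ("acct2", "device:abc123"),
    ("acct9", "ip:198.51.100.4"),   # an ordinary, unlinked account
]

# Link two accounts whenever they share a signal (project the
# account-signal relationships onto an account-to-account graph).
g = nx.Graph()
by_signal = {}
for acct, signal in events:
    by_signal.setdefault(signal, set()).add(acct)
for signal, accts in by_signal.items():
    accts = sorted(accts)
    for i, a in enumerate(accts):
        g.add_node(a)
        for b in accts[i + 1:]:
            g.add_edge(a, b, signal=signal)

# Connected components of correlated accounts are candidate "sleeper
# cells"; a real system would score them on many more features.
MIN_RING_SIZE = 3
for component in nx.connected_components(g):
    if len(component) >= MIN_RING_SIZE:
        print("candidate ring:", sorted(component))
```

A production system would score clusters on many more behavioral features (incubation time, transaction patterns, and so on) before taking action, but the core idea of looking at correlations across accounts rather than at single accounts survives even in this toy form.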
The O’Reilly Security Podcast: DRM in unexpected places, artistic and research hindrances, and ill-anticipated consequences. In this best of 2016 episode, I revisit a conversation from earlier this year with Cory Doctorow, a journalist, activist, and science fiction writer. We discuss the unexpected places where digital rights management (DRM) pops up, how it hinders artistic expression and legitimate security research, and the ill-anticipated (and often dangerous) consequences of copyright exemptions. Early in 2016, Cory and the Electronic Frontier Foundation (EFF) launched a lawsuit against the U.S. government. They are representing two plaintiffs—Matthew Green and Bunnie Huang—in a case that challenges the constitutionality of Section 1201 of the Digital Millennium Copyright Act (DMCA). The DMCA is a notoriously complicated copyright law that was passed in 1998. Section 1201 is the part that relates to bypassing DRM. The law says that it's against the rules to bypass DRM, even for lawful purposes, and it imposes very severe civil and criminal penalties. There's a $500,000 fine and a five-year prison sentence for a first offense provided for in the statute. Here, Cory explains some of the more subtle consequences that arise from DRM in unexpected places. An urgent need to protect individual rights and freedoms Everything has software. Therefore, manufacturers can invoke the DMCA to defend anything they’ve stuck a thin scrim of DRM around, and that defense includes the ability to prevent people from making parts. All they need to do is add a little integrity check, like the ones that have been in printers forever, that asks, ‘Is this part an original manufacturer's part, or is it a third-party part?’ Original manufacturer's parts get used; third-party parts get refused. Because that check restricts access to a copyrighted work, bypassing it is potentially a felony. Car manufacturers use it to lock you into buying original parts. This is a live issue. Apple has deprecated the 3.5-millimeter audio jack on their phones in favor of using a digital interface. If they put DRM on that digital audio interface, they can specify at a minute level—and even invent laws about—how customers and plug-in product manufacturers can engage with it. Congress has never said, ‘You're not allowed to record anything coming off your iPhone,’ but Apple could set a ‘no-record’ flag on audio coming out of that digital interface. Then they could refuse to grant users a license to decrypt the audio, making it illegal to use. Simply by using the device, users would be agreeing to accept and honor that no-record stipulation, and bypassing it would be illegal. DRM hinders legitimate research and artistic expression Matthew Green [one of the plaintiffs in the EFF lawsuit] has a National Science Foundation grant to study a bunch of technologies with DRM on them, and the Copyright Office explicitly said he is not allowed to do research on those technologies. The Copyright Office did grant a limited exemption to the DMCA to research consumer products, but it excludes things like aviation systems or payment systems, which Green wants to research. Bunnie Huang [the other plaintiff] is running up against similar limitations on bypassing DRM to make narrative films with extracts from movies. We have one branch of the government refusing to grant these exemptions. We have the highest court in the land saying that without fair use, copyright is not constitutional. 
And we have two plaintiffs who could be criminal defendants in the future if they continue to engage in the same conduct they've engaged in in the past. This gives us standing to now ask the courts whether it’s constitutional for the DMCA to apply to technologies that enable fair use, and whether the Copyright Office really does have the power to determine what they grant exemptions for. Our winning this case would effectively gut Section 1201 of the DMCA for all of the anticompetitive and security-limiting applications it has found so far. DMCA exemptions can have serious consequences The Copyright Office granted an exemption for tablets and phones so people could jailbreak them and use alternate stores. This exemption allows individuals to write the necessary software to jailbreak their own personal devices but does not allow individuals to share that tool with anyone else, or publish information about how it works or information that would help someone else make that tool. So, now we have this weird situation where people have to engage in illegal activity (trafficking in a tool by sharing information about how to jailbreak a phone) to allow the average user to engage in a legal activity (jailbreaking their device). This is hugely problematic from a security perspective. Anyone can see the danger of seeking out randos to provide binaries that root a mobile device. To avoid prosecution, those randos are anonymous. And because it’s illegal to give advice about how the tool works, people have no recourse if it turns out that the advice they follow is horribly wrong or ends up poisoning their device with malware. This is a disaster from stem to stern—we're talking about the supercomputer in your pocket with a camera and a microphone that knows who all your friends are. It's like Canada’s recent legalization of heroin use without legalizing heroin sales. A whole bunch of people died of an overdose because they got either adulterated heroin or heroin that was more pure than they were used to. If the harm reduction you’re aiming for demands that an activity be legal, then the laws should support safe engagement in that activity. Instead, in both the heroin and device jailbreak examples, we have made these activities as unsafe as possible. It's really terrible. The security implications really matter, because we hear about vulnerabilities and zero-days and breaks against IoT devices every day in ways that are really, frankly, terrifying. Last winter, it was people accessing baby monitors; this week, it was ransomware for IoT thermostats and breaks against closed-circuit televisions in homes.