SecurityTrails Blog


Listen to all the articles we release on our blog while commuting, while working or in bed.

SecurityTrails


    • Aug 9, 2022 LATEST EPISODE
    • monthly NEW EPISODES
    • 9m AVG DURATION
    • 108 EPISODES


    Search for episodes from SecurityTrails Blog with a specific topic:

    Latest episodes from SecurityTrails Blog

    August Product Update: Exposed Admin Panels, Risk Rules API, Risk History by Host, and more!

    Play Episode Listen Later Aug 9, 2022 2:50


    At SecurityTrails we continuously upgrade, improve and enhance the quality of the user experience in our Attack Surface Intelligence platform. Today, we're thrilled to announce several Attack Surface Intelligence updates we've recently been working on: Risk History by Host, the Risk Rules API, Search for Signatures, and other upgrades! Keep reading to learn more.

    Admin Panel detections in Inventory

    A great new feature from our latest release is Admin Panels, located within the Inventory tab. This option helps you locate administrator panels in mere seconds, allowing security teams to find exposed control panels from popular technologies and software, which may be out of compliance with policies and therefore add unnecessary risk to your organization. Among its many highlights, the Admin Panel feature:

    • Works on deep paths.
    • Works on IPs without hostnames.
    • Includes firewalls, enterprise software, developer tools, and CMSs.
    • Adds new signatures frequently and automatically.

    On that interface, you'll find a Counts by Panel summary showing the top exposed panels, along with the number of affected IP addresses and hostnames. Scrolling down, you'll also find the full list of panels we found, along with a description, the port where each was found, the affected service, and a quick target link so you can jump right into each one of them.

    Risk Rules API

    The new Risk Rules API allows users to get immediate data for CVEs, including vulnerability name, description, risk severity (classification), affected hostnames, technical references found on the Internet, and project metadata such as ID, title and snapshot creation date.

    Risk History by Host

    The new Risk History by Host feature is the perfect tool for keeping a historical record of your current vulnerabilities and misconfigurations.
By listing them, you'll know when they appeared for the very first time and, most importantly, when they were cleared (fixed or patched) and no longer showing on the Risk Rules report. As shown in the above screenshot, you can also filter the Risk History by Severity or Event type (added or cleared), and even export the results to a CSV file.

    End-user ability to search signatures

    This new feature gives Attack Surface Intelligence users the ability to search for risk signatures, so customers can determine whether a check for a certain vulnerability or misconfiguration is present among our Attack Surface Intelligence checks, as shown in the following screenshot.

    SecurityTrails periodically releases updates that improve the performance, security, and logic of your experience in Attack Surface Intelligence. By enhancing the usability of the Attack Surface Intelligence interface, we create an environment that allows you to identify and prevent threats much more effortlessly. Why not try it yourself and discover your most thorough and effective means of protection? Book your demo now!
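    The post doesn't document the Risk Rules API's exact endpoints, so the following is only a minimal sketch of how a client for such an API might look. The base URL, path layout, and header name are assumptions for illustration, not the real interface:

    ```python
    import urllib.parse

    # Hypothetical base URL and path layout -- the real Risk Rules API
    # endpoints are not documented in this post.
    API_BASE = "https://api.securitytrails.com/v1"

    def risk_rule_url(project_id: str, cve_id: str) -> str:
        # Build the URL for one CVE's risk-rule data within a project,
        # percent-encoding both path segments.
        project = urllib.parse.quote(project_id, safe="")
        cve = urllib.parse.quote(cve_id, safe="")
        return f"{API_BASE}/asi/projects/{project}/risk-rules/{cve}"

    # Usage (requires the third-party `requests` package and a valid API key):
    #   import requests
    #   resp = requests.get(risk_rule_url("my-project", "CVE-2021-44228"),
    #                       headers={"APIKEY": "<your key>"})
    #   data = resp.json()  # name, description, severity, affected hostnames, ...
    ```

    The response fields listed in the post (name, description, severity, affected hostnames, project metadata) would arrive in the JSON body.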

    The CVE Approach: A Reductionist Way to Handle the Attack Surface

    Play Episode Listen Later Jun 23, 2022 7:34


    As recently as the 1990s, the information security industry lacked a fundamental mechanism for sharing both hardware and software vulnerabilities under any sort of meaningful taxonomy. Previous efforts, largely encumbered by vendor-specific naming convention inconsistencies or by the lack of community consensus around establishing classification primitives, were centered on multidimensional methods of identifying security problems without regard for interoperability; in a seminal progress report, MITRE would later refer to this budding cacophony of naming schemas as the vulnerability "Tower of Babel." Over the years, a community-led effort formally known as the Common Vulnerabilities and Exposures (or CVE) knowledge base would grow to become the vulnerability enumeration product that finally bridged the standardization gap.

    A (very) brief history of CVE

    In 1999, as David E. Mann and Steven M. Christey (The MITRE Corporation) were trying to gather momentum for a publicly disclosed alternative to organizations' early attempts at sharing discovered computer flaws, the internet was already buzzing with a growing number of cybersecurity threats. Consequently, CVE's meteoric rise through corporate networks clearly meant that the industry was ripe for a departure from siloed databases and naming conventions to a more centralized approach involving a unified reference system. Thus, CVE evolved as a practical evaluation tool (a sort of dictionary, if you will) to describe common vulnerabilities across diverse security platforms without incurring the penalty of having a multitude of references attributed to the same exposure. Its subsequent endorsement would come in many forms, including being the point of origin of countless new CVE-compatible products and services originating from the vendor community at large.
In addition, as the CVE initiative grew, so did the number of identifiers (or CVE entries) officially received and processed through several refinement phases and advisory boards: from a modest 321 entries back in 1999 to over 185,000 as of this year, and the list keeps growing. A second major catalyst for integration orients us toward operating systems and their inclusion of CVE-related information to deal with software bugs and the inherent asymmetries that arise between product release and patching, as it is well understood that the presence of any high-impact vulnerability exponentially increases the probability of a serious breach. Finally, CVEs are the cornerstone of threat-informed defense and vulnerability management strategies in a digital world visibly marked by the presence of miscreants in practically every area, with the MITRE ATT&CK® framework combining these under one banner. This sort of objectivity distills and contextualizes the impact of security vulnerabilities together with adversarial tactics against the risk assessment backdrop, providing defenders with a unique opportunity to plan any mitigation responses accordingly.

    But what qualifies as a CVE? In short, a vulnerability becomes a single CVE when the following three criteria are met:

    • The reporting entity, product owner, hardware, or software vendor must acknowledge and/or document the vulnerability as a proven risk and explain how it violates any existing security policies.
    • The security flaw must be independently fixable; that is, its context representation does not involve references or dependence on any additional vulnerabilities.
    • The flaw must affect a discrete codebase; in cases of shared libraries and/or protocols, a single CVE applies only when the flaw cannot be used securely, and otherwise multiple CVEs are required.
After the remainder of the vetting process is complete, every vulnerability that qualifies as a CVE is assigned a unique ID by one of the CVE Numbering Authorities (CNAs) and posted on the CVE website for public distribution.

    CVE and the attack surface

    With the frantic expansion of the attack surface beginning some years ago came the visibility i...
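    As a small illustration of the identifier scheme mentioned above: a CVE ID has the form CVE-YYYY-NNNN, where the sequence number is four or more digits (the format was widened in 2014 to allow more than 9,999 entries per year). A quick syntactic check can be sketched as:

    ```python
    import re

    # CVE-YYYY-NNNN..., with a four-digit year and a sequence number of
    # four or more digits (five-plus digits allowed since the 2014 change).
    CVE_ID_RE = re.compile(r"CVE-\d{4}-\d{4,}")

    def is_valid_cve_id(candidate: str) -> bool:
        # Syntactic check only -- it does not confirm the entry exists
        # in the CVE list.
        return bool(CVE_ID_RE.fullmatch(candidate))

    # Examples:
    #   is_valid_cve_id("CVE-2021-44228")  -> True  (Log4j)
    #   is_valid_cve_id("CVE-99-0001")     -> False (two-digit year)
    ```

    A real pipeline would pair this with a lookup against the CVE list itself, since a well-formed ID can still be unassigned.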

    The Role of Cloud Misconfigurations & the Attack Surface in the 2022 Verizon DBIR

    Play Episode Listen Later May 26, 2022 6:37


    Note: The audio version doesn't include code or commands. Those parts of the post can be seen in the text version.

    This year's 15th installment of the Verizon Data Breach Investigations Report (DBIR) features yet another impressive dataset of corporate breaches and exposures marked by an overriding postulate: attack surfaces matter, and they should dictate a large portion of your risk assessment strategy. First launched in 2008 with a modest 500 cases, the DBIR has been significantly expanded; the 2022 version examines 5,212 breaches and 23,896 incidents through the lens of the VERIS 4A's (Actor, Action, Asset, and Attribute) framework. Its timeline section looks at comprehensive aspects such as discovery time, attacker actions taken pre- and post-breach, and the number of actions per breach. Additionally, there is a pattern-matching initiative to help organizations navigate some of the most concerning incidents while providing a handful of preliminary security controls.

    Industry verticals covered in the 2022 report include Accommodation and Food Services (72), Arts, Entertainment and Recreation (71), Educational Services (61), Financial and Insurance (52), Healthcare (62), Information (51), Manufacturing (31 to 33), Mining, Quarrying, and Oil & Gas Extraction + Utilities (21 + 22), Professional, Scientific and Technical Services (54), Public Administration (92), Retail (44-45), and Very Small Businesses (10 employees or less). The report highlights threats from different regions of the world, such as Asia Pacific, Europe, the Middle East, Africa, Northern America, Latin America, and the Caribbean, with SecurityTrails playing the role of intelligence contributor as in the recent past.
Summary of key findings

    Through a series of carefully selected and correlated investigative scenarios (a collective effort that the DBIR refers to as "creative exploration", albeit without bias), the report's findings continue to highlight several areas of interest from which cybercrime continues to drive profit. For example, identity theft and fraud motivate an important sector of transnational cybercrime, with some of the most explicit cases centered on the use of ransomware; no surprise there. However, in a bustling number of incidents where default or stolen credentials were leveraged, attackers extended their attack paths with relative ease, opportunistic or not; the problem showed evidence of being compounded by a growing lack of adequate visibility into publicly facing assets and any corresponding vulnerabilities.

    At the tail end of the distribution, the vulnerability-to-breach ratios remained particularly significant. To put it in the DBIR's own parlance, this is where attackers are looking (it's a numbers game!): a sustainable environment with enough incentives, as miscreants come hard on the heels of struggling security teams. Important, too, are the enticing circumstances applicable to different industries. In other words, and perhaps not surprisingly, attacks tailored to a specific business model are likely to be more successful in the long run. An observed convergence between the human element and system misconfigurations remained just above the 5th percentile (a decrease from 2020), but it drove an estimated 13% of overall system breaches, with misconfigured cloud storage instances leading the trend.

    How Attack Surface Intelligence helps prevent the DBIR's most popular threats

    As we can see from the key findings of the 2022 DBIR, lack of visibility into public-facing assets is one of the most prominent problems inhibiting security teams from preventing threats to their organizations.
Since we introduced Risk Rules, our main goal has been to help security teams find an easy way to generate a complete and dynamic inventory of all their digital assets, as well as identify CVEs and critical misconfigurations across all their hosts. And when it comes to asset discovery, as you can see from the following screenshot, ASI is particula...

    Prepare, Detect, Respond: Reduce Your Risk of Cyber Attack with Attack Surface Intelligence

    Play Episode Listen Later Apr 28, 2022 7:50


    With the rise in cybersecurity attacks targeting individuals and corporations alike, it's become increasingly important not only to ensure preparedness for cybersecurity attacks but to set up processes for early detection and response as well. The Cybersecurity and Infrastructure Security Agency, commonly known as CISA, is an agency of the United States government that actively watches for cybersecurity threats and provides ways to secure various organizations (including other governmental agencies), families, and individuals. The CISA Shields Up program is a cybersecurity effort aimed at combating state-sponsored and other retaliatory cybersecurity attacks launched against organizations and individuals based in the United States. Shields Up outlines clear cybersecurity procedures for dealing with the most common methods of cybersecurity attack, usually directed at organizations, families, and individuals including, notably, corporate leaders.

    Protection for families and individuals

    It's becoming more and more commonplace for everyone in a household to have their own set of personal devices: mobile phones, tablets, laptops, and desktops. Devices like mobile phones and tablets offer themselves as easy targets for cybersecurity attacks. Their in-app advertisements and other web-based campaigns can lead to malware being downloaded onto a device, making it imperative to follow certain cybersecurity practices to ensure that you and your family members remain safe. With basic mobile phones and tablets being sold with 64GB to 128GB of on-device storage, one can imagine the amount of identifiable, personal, and easily usable information that each device can hold. CISA's Shields Up program outlines a list of steps for individuals and families to follow in the interest of preparing themselves for and staying secure from cybersecurity-related threats.
Protection for corporate leaders

    When it comes to cyberattacks, phishing attacks, and ransomware, corporate leaders like company directors, financial heads and CEOs are among the most targeted members of organizations. CEOs and other company leaders are commonly attacked because their systems and email accounts generally hold more useful information than others in a company. Following the guidelines laid out by CISA's Shields Up program helps corporate leaders and CEOs stay safe and secure in the face of cybersecurity-related threats.

    Protection for organizations

    While protection for organizations is usually handled by cybersecurity experts, the most common cybersecurity attacks on organizations originate from basic points of entry, such as VPN entry points, remote desktops, and other areas typically left unsecured. Fortunately, Shields Up outlines a list of steps that organizations can follow to stay secure against cybersecurity-related threats.

    How can Attack Surface Intelligence help your organization?

    Preparation: The SecurityTrails Attack Surface Intelligence (ASI) platform helps transform your security process from reactive to proactive, and therefore preventive. This allows your organization to be better prepared for any possible cyberattacks and to stay ahead of cybercriminals. With automation being the key strength in heading off attacks, ASI ensures that persistent monitoring, CVE detection, and parsing of your organization's virtual assets are no longer a long and tedious process. ASI platform features and subjects include:

    • Automatic detection and listing of IP addresses belonging to your organization.
    • ASNs and networks on which your organization's assets are hosted.
    • Full domain and subdomain mapping.
    • Detection of dev and staging subdomains.
    • Open ports within your organization, for critical services such as databases.
    • Self-signed SSL certificates issued within your organization.
    • Web server vendors and versions used within your organization.
    • Risk detection, and much more!

    Consider the very first step of any cybersecurity proc...
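    One of the checks above, flagging self-signed certificates, rests on a simple heuristic: a certificate whose issuer equals its subject has signed itself. A minimal sketch follows; the sample dictionary mimics the shape returned by Python's `ssl.SSLSocket.getpeercert()`, but the hostname in it is made up:

    ```python
    def is_self_signed(cert: dict) -> bool:
        # Heuristic: a certificate is self-signed when its issuer and
        # subject distinguished names are identical.
        return cert.get("issuer") == cert.get("subject")

    # Made-up example in the nested-tuple format of ssl.getpeercert():
    sample = {
        "subject": ((("commonName", "internal.example.com"),),),
        "issuer": ((("commonName", "internal.example.com"),),),
    }
    ```

    A real scan would first fetch each host's certificate (for instance with the standard `ssl` and `socket` modules) and then apply this check; ASI handles the discovery side automatically.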

    Monitoring Your Digital Assets for Compliance

    Play Episode Listen Later Feb 24, 2022 7:45


    Following the trends set forth by our post-pandemic world, organizations continue to accelerate digitalization and reliance on technology to improve decision making while increasing the efficiency of their communications, all in an effort to optimize business operations. Additionally, the rise in popularity of remote work has enhanced workforce flexibility and satisfaction as well as business continuity. But nothing great comes without risk. As organizations' IT infrastructures grow to accommodate all of these advancements, digital assets and resources continue to expand too, often not flowing neatly into easily visible and monitored areas. Furthermore, the growth of cyber threats aimed at those digital assets makes fighting various types of cybercrime a priority for every organization.

    The compliance side of the digital transformation coin

    As cybersecurity threats continue to grow, so do data loss prevention trends. This phenomenon is led by government-imposed regulations such as GDPR, HIPAA and PCI DSS, and the growing myriad of new security policies imposed by various agencies for the handling of sensitive assets. The cost associated with lacking an efficient and effective compliance program is growing too. Along with the reputational damage organizations can suffer, studies have shown that organizations can lose an average of $4 million in revenue due to a single non-compliance event.

    In order to properly adhere to these regulations, organizations need to understand the full scope of their IT infrastructure, which includes knowing what assets they have, where they're located and who is responsible for them. And with today's complex IT infrastructure, which includes both on-prem and cloud environments as well as forgotten and shadow infrastructure, this is a challenge. The more assets an organization has, the harder it is to gain a full view of them.
Managing numerous assets makes spotting security misconfigurations or policy violations among them that much more difficult. Persistent monitoring of their infrastructure, however, can provide real-time visibility into an organization's ever-changing digital assets, allowing them to identify any compliance gaps. And rather than relying on various types of disparate tooling (which, when used to identify, inventory, classify and monitor digital assets, can only add to an already complex environment), a single platform providing that kind of unified attack surface monitoring arrives as a solution.

    Leading your compliance efforts with ASR

    Our leading platform, Attack Surface Reduction (ASR), provides organizations with much-needed attack surface monitoring and a comprehensive understanding of all their digital assets as well as their location, ownership, services, and the technologies running on them, all to keep security teams aware of any potential security risks disrupting regulatory compliance. How can ASR guide your compliance efforts?

    Know the location of your every asset

    A large number of organizations employ both an incomplete asset discovery process and an obsolete asset inventory. And as we always say: you can't protect what you can't see. A forgotten or unknown asset is impossible to secure, offering a sure path to a security event, regulatory penalties and fines. With Attack Surface Reduction, you'll be able to gain a complete view across your external infrastructure, allowing you to improve your security posture and lead your compliance program. ASR provides you with a single source of truth regarding the location of each of your internet-connected assets, and reveals any new changes made within your infrastructure, including when and where any new asset is discovered.
This way, any shadow or forgotten infrastructure, easy entry points for malicious actors, and easy risks of failing to comply with government and industry regulations are immediately discovered by ASR.

    Detect immediate risks and out-of-policy assets...
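    The persistent-monitoring idea described above boils down to comparing asset snapshots over time. A minimal sketch, assuming snapshots are plain sets of hostnames (an illustrative simplification, not ASR's actual data model):

    ```python
    def inventory_changes(previous: set, current: set) -> dict:
        # Diff two asset snapshots: newly observed assets may be shadow
        # infrastructure, while vanished assets may indicate a
        # decommissioning the inventory never recorded.
        return {
            "added": sorted(current - previous),
            "removed": sorted(previous - current),
        }

    # Example with made-up hostnames:
    yesterday = {"www.example.com", "mail.example.com"}
    today = {"www.example.com", "mail.example.com", "staging.example.com"}
    changes = inventory_changes(yesterday, today)
    # changes["added"] == ["staging.example.com"]
    ```

    Running such a diff on every scan, and alerting on each "added" entry, is the simplest form of the real-time compliance visibility the post describes.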

    5 Steps to Protect Your Enterprise's Attack Surface

    Play Episode Listen Later Feb 17, 2022 9:12


    With the increase in cyber attacks and vulnerabilities detected every day, it's become even more challenging to stay on top of every aspect of your organization's security. Securing your organization is no longer as simple as it was in the past, thanks to the rise of various types of attack, including targeted attacks on employees in the form of phishing emails and DNS hijacking, and organizations prioritizing application availability by spreading servers and cloud deployments across various cloud providers. Also, the tech stacks and software libraries used in your applications are growing larger, with various dependencies, leading to further complexity in an organization's overall security.

    How the attack surface grows with any organization

    With the advent of the global pandemic, remote working has become the go-to solution for organizations all over the world. This, in turn, has yielded consequences such as the rise in social engineering attacks and other forms of targeted attack, because each employee of an organization has become even more targetable while working in a non-maintained home networking environment. For example, if an employee works from home, the employee is often connected to the internet with an ISP-provided WiFi router and modem. These devices frequently run firmware that is vulnerable or outdated, as they don't receive updates as often as enterprise-based networking gear. By contrast, if the same employee works from an office, where enterprise-grade firewalls and networking gear are used, a certain amount of risk is eliminated: risk that could originate from compromised networking gear. While VPNs provide a great amount of security for accessing a corporation's internal assets, there is always a risk of malware entering the employee's work devices through compromised networking gear at home whenever the VPN connection is disconnected or disrupted.
Looking beyond an organization's employees

    With increasing demands on the reliability and availability of an organization's products, today's organizations have been forced to spread assets over various cloud providers. In the past, a single cloud provider would most likely handle a complete application end-to-end, but spreading an application across multiple cloud vendors has caused a notable increase in the size of the attack surface, with each cloud provider handling ACLs differently, at times even differing in UI or in the way certain tasks are handled. And with multiple cloud vendors, the number of attack vectors increases as well: one cloud vendor getting compromised can lead to the entire application getting compromised.

    Putting the size of an organization aside, the tech stacks and libraries it uses can also lead to security issues. While using popular software libraries is generally considered a good idea, a vulnerability among them can lead to much larger issues. Consider the recent impact of the vulnerability in the Log4j library, a simple yet widely used logging tool. It led to multiple compromises of web applications, all of which needed immediate patching, as a large number of the organizations affected had these applications operating on the public internet. Simply put, your attack surface is as spread out as your organization is, and on all fronts. The more widespread your resources (such as employees, cloud servers, tech stack, libraries, etc.), the larger your attack surface grows.

    How can you safeguard your attack surface?

    To begin using SecurityTrails Attack Surface Reduction (ASR), head over to your account and click on "Access Surface Browser". Next, click on the "Projects" option in the navbar. Once there, click on "Create a New Project". Give your project a name and enter the domain name of your organization, then click on "Create Project".
Now, let's take a look at five ways in which your organization can leverage the power of the SecurityTrails ASR tool: 1. Asset mapp...

    February Product Updates: ASR Technologies & SurfaceBrowser IP-Blocks Downloads

    Play Episode Listen Later Feb 10, 2022 2:19


    We've been hard at work getting our product updates ready and we're thrilled to kick them off for 2022. Today, we're announcing new feature launches and updates for our Attack Surface Reduction (ASR) and Surface Browser platforms.

    Detailed Application View

    In November, we rolled out our beta Explorer tab, bringing with it new and exciting capabilities. One of these was improved visual web app identification using home page screenshots, allowing you to visualize screenshots of all assets. Today, we introduce Detailed Application View, a new feature of the Beta Explorer Screenshots page. Access to Beta Explorer will allow you to test this significant upgrade: clicking on an item brings up a window with even more detailed information on a given application, including screenshots and detected technologies.

    IP-Blocks Downloads

    Over the last year, the IP-Blocks Downloads feature within Surface Browser has been brought back into production. Now we're excited to finally re-release this important asset, complete with improved performance of the Downloads operation. The reworked feature in Surface Browser can be reached via the following path: /app/sb/domain/[domain]/ip-blocks. As always, you'll be able to download IP-Blocks results in both JSON and CSV format.

    Activity Heatmap

    Surface Browser scores even more improvements, including the Activity Heatmap, now added at /app/sb/domain/[domain]/activity. Using this feature, you'll be able to view all newly observed assets created by day for a given domain. This provides a chronological representation of your organization's surface area, letting you visualize exactly when each asset was added to your infrastructure.
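    Because IP-Blocks results export to JSON and CSV, downstream processing is straightforward. A minimal sketch of loading a CSV export follows; the column names here are invented for illustration, and the real export's schema may differ:

    ```python
    import csv
    import io

    def load_ip_blocks(csv_text: str) -> list:
        # Parse a CSV export into a list of row dictionaries,
        # one per IP block.
        return list(csv.DictReader(io.StringIO(csv_text)))

    # Made-up sample rows with documentation-reserved address blocks.
    sample_export = """block,owner,hostname_count
    192.0.2.0/24,Example Corp,12
    198.51.100.0/24,Example Corp,3
    """.replace("    ", "")

    rows = load_ip_blocks(sample_export)
    # rows[0]["block"] == "192.0.2.0/24"
    ```

    The same rows could then be fed into an asset inventory, a spreadsheet, or an alerting pipeline without any custom parsing.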
Conclusion From the Detailed Application View which will provide deeper data points, to the IP Blocks Download which allows export of our data, to the Activity Heatmap which improves visualization and input into your ever-evolving external infrastructure, new features and improvements in ASR and Surface Browser will provide more functionality, more data and a better user experience overall. Take advantage of these new developments, and benefit from a new perspective into all of your digital assets, infrastructure and potential security risks. Book a demo and discover all you can do with Attack Surface Reduction!

    A CISO's Perspective on Attack Surface Reduction: SecurityTrails Fireside Chat with Terence Runge

    Play Episode Listen Later Feb 3, 2022 5:09


    2021 was a tumultuous period for cybersecurity: it was a record year for the number of reported data breaches. And who can forget the Log4j vulnerability, or the Colonial Pipeline and Kaseya ransomware attacks? Combine that with the continuous growth of cloudification and remote worker sprawl, as well as constant supplier diversification and mergers and acquisitions, and you get dynamic attack surfaces in organizations that can be highly challenging to manage. With the ever-changing IT environment organizations must now handle, CISOs and business leaders are turning to new strategies and solutions to help them manage and reduce their organizations' attack surface.

    To learn how modern CISOs are tackling these new challenges, we were joined by Terence Runge, a seasoned CISO and CISSP with over 20 years of experience working with various cybersecurity companies, and one of the early adopters of our Attack Surface Reduction platform.

    "I discovered some private IP addresses being published to public DNS and wanted to know how prevalent it was in the company. I had done some work in this area with open source tools, and had an idea that there were around 1,200 or so exposed systems. Lo and behold, SecurityTrails got involved and discovered that there were several thousand, and that the attack surface is very dynamic, growing each day."

    In the January edition of the SecurityTrails Fireside Chat, our VP of Sales Scott Donnelly sat down with Terence Runge for a session on the CISO's perspective on attack surface reduction. Key topics included:

    • What the attack surface looks like in this ever-changing world.
    • How supplier risk assessments can drive attack surface understanding.
    • Enforcing policies for remote workers in large organizations.
    • Why asset inventory is a must for efficient attack surface reduction.
How a CISO handles the continuously changing attack surface

    Defining an attack surface from the CISO's perspective starts with considering all systems and services that adversarial attackers can discover from their vantage point, then use to infiltrate your network. But going beyond systems and services, Terence also covers other assets associated with the organization: "An attack surface can be made up of any properties associated with the company, and with past acquisitions, as well as any code, public open repositories, keys, passphrases, secrets, both belonging to the company but also to their customers."

    Third-party risk

    All of the properties that make up the attack surface change every day. Furthermore, many events in an organization can change its attack surface, such as M&As, where findings from a security assessment led by analysis of the target organization's attack surface can make or break the deal. Suppliers are also an important third party in an attack surface. Regarding supplier security, Terence puts location, where they're hosting data, and cyber hygiene at the top of the checklist: "When assessing a new supplier, we go further than a regular check and look at their attack surface. We do this for several reasons: one is that we will potentially be entrusting them with either access to our systems or our data, so we need to know if they have exposures."

    Securing the remote workforce

    Another important aspect of modern, dynamic attack surfaces is remote workers and the implementation of policies as a CISO in a remote world. Terence highlights strong authentication as the main story for a remote workforce. Utilizing multi-factor authentication, single sign-on, VPNs and similar processes for authenticating users is key for Terence, but other basics for device authentication should not be forgotten: "Device encryption, policies applied: all of this creates what we call a 'Reltio Authorized Device', a RAD device."
Scanning cloud assets For any modern IT environment, cloud assets are an expected part of the attack surface. And scanning and enumerating these cloud assets has its own set of challenges, depending on both the size of t...

    Attack Surface Management Driving Secure Digital Transformation

    Play Episode Listen Later Jan 26, 2022 6:06


    A recent study by IBM found that nearly six in ten responding organizations accelerated their digital transformation efforts due to the COVID-19 pandemic. The disruption brought on by the global crisis, further exacerbated by the rise in hybrid and remote workforces, has shown organizations just how important it is to be built for change. They need to be both scalable and flexible, and the same goes for their IT infrastructures. Cloud adoption and management is now at the top of priority lists for CISOs and executives, with the same study's organizations planning a 20% increase in their prioritization of cloud technology over the next two years.

    There is clearly no doubt that digital transformation and accelerated cloud adoption can help organizations optimize and streamline their operations, create innovative business offerings and achieve competitive advantage. However, the implication of rapid digital transformation, the adoption of new technologies and the remote/hybrid worker sprawl is that CISOs and security teams can find themselves unable to fully grasp, and thus secure, an ever-growing attack surface.

    The key role of attack surface management in digital transformation

    Because of digital transformation, today's organizations don't keep all of their digital assets secured tightly behind their perimeter. Rather, assets are scattered all over the Internet, sometimes forgotten and often unsecured. With more areas where a threat or cyber attack can take place, organizations need to protect their critical assets. Also because of digital transformation and cloud adoption, many organizations can suffer from issues with vendor migration and legacy tooling left online longer than planned. The best way to react and respond is with a full understanding of their external attack surface and all digital assets. This is why attack surface management plays a key role in the journey to a secure and successful digital transformation.
Attack surface management, or ASM, allows organizations to identify, inventory, classify and monitor all digital assets in their external infrastructure. For organizations with large amounts of cloud instances or hundreds of VPNs, AWS instances, etc., ASM can be particularly important, by helping them identify all of their attack surface components, attack vectors and exposures. With a unified view of its external infrastructure, an organization can better navigate across disparate technology systems and quickly map and resolve vulnerabilities while keeping pace with its dynamic attack surface. It can also arm the organization with insights toward making better-informed decisions regarding digital transformation efforts. Solving digital transformation challenges with ASR Attack Surface Reduction (ASR) is the platform we created to tackle the challenges of digital transformation and the dynamic attack surfaces that come with it. ASR can provide your organization with accurate insight into all digital assets, including their location, ownership and the services and technologies running on them. Essentially, ASR is there to make attack surface management easy. Discover and visualize all of your digital assets Many organizations struggle with keeping track of all their assets, but rushing to adopt new transformation technologies and diversify an IT environment can make staying on top of an already chaotic infrastructure even more challenging. A lack of visibility into its infrastructure can give an organization an incomplete picture of its digital risks, putting it at serious risk of a data breach. Asset discovery helps organizations maintain awareness over all of the assets and services running within their infrastructure. With continuous discovery, you can even find risks in forgotten assets and assets still in development, well before they become threats. 
ASR allows you to visualize and organize all digital assets instantly, providing all information related to your apex domain, subdomains, associated domains, ...

    Pre M&A Security Assessments and Importance of Asset and Risk Discovery

    Play Episode Listen Later Jan 12, 2022 5:23


    In 2021, reports show that global M&A volumes topped $5 trillion. It makes sense: organizations pursue mergers and acquisitions in order to stimulate growth, gain competitive advantage, and increase market share through gaining or consolidating personnel, technology and intellectual property. As part of their due diligence, a critical component of any M&A process, organizations assess potential business impacts and risks of the merger or acquisition, in financial, legal and regulatory areas. And while cybersecurity due diligence preceding an M&A process often comes as an afterthought, the consequences of a lax security assessment can include increased risk of data breach, failure to comply with regulations, and financial and reputational losses. Importance of pre-M&A security assessment While the importance of cybersecurity in M&A processes is widely recognized, innumerable high-level data breaches surrounding mergers and acquisitions are making it very clear that cybersecurity is frequently overlooked. Cybercriminals find the environment surrounding mergers and acquisitions alluring due to the number of companies and individuals involved, meaning that the potential for human error is heightened. Additionally, combining the cyber risk of two different companies increases the risk for both, and can lead to oversights resulting in failure to comply with regulatory requirements. The main areas for pre-M&A security assessment include: Determining the target's compliance posture to support regulatory due diligence. The amount of digital assets and data they possess. How those assets are protected. The target's potential attack surface and the nature of vulnerabilities it may have. While the discovery of cyber threats and even actual data breaches can harm an M&A deal, they don't often lead to outright termination. 
More commonly, they cause delays and add costs, usually due to compliance violations. Yet that can affect the entire outcome of the deal, including the value the acquirer places on the target company. To avoid these consequences, diligence during the pre-M&A process is crucial. But this in itself presents a few challenges. The current state of pre-M&A security assessments involves a lack of repeatable ways to measure internet-facing assets, incomplete asset lists and no information regarding services running on assets that potentially hold risks or vulnerabilities, or are out of policy. Near-real-time pre-M&A security assessment In order to appropriately address the main areas for cybersecurity due diligence preceding an M&A deal, near-real-time assessment of assets and risks is necessary. A thorough understanding of assets can aid in guiding decisions as to which assets can be safely inherited and which technologies should be sunsetted in acquired companies. Furthermore, near-real-time inventory and assessment of risks of all assets further informs efforts toward regulation or policy compliance and the monitoring of vulnerable services. Instantly uncovering the entire external infrastructure of a subsidiary, pinpointing potential risks, and having actionable data on total assets, assets with services that need to be sunsetted, and assets that are out of policy is easy with Attack Surface Reduction (ASR). ASR can aid in pre-M&A security assessment with: Asset discovery Depending on the size of the acquired company, mergers and acquisitions can be a messy process. This is especially true when it comes to asset discovery and understanding where assets are located, asset ownership and the services or technologies running on them. 
With our automated asset analysis, ASR provides you with access to a centralized view into all discovered external infrastructure assets via the Inventory section, including information...

    Resolving Alert Fatigue in SOCs with Asset Context for Incident Evaluation

    Play Episode Listen Later Dec 30, 2021 3:58


    Cyber threats in the modern IT landscape can lead to severe fallout, including compromised data, damage to brand reputation, and loss of customers and revenue. In order to effectively minimize risk, many organizations rely on automated security solutions and software that provide real-time risk analysis and produce alerts whenever an anomaly is detected. These alerts are crucial. They provide security teams with the knowledge of peculiarities necessary to indicate when malicious attackers attempt to breach their network and get their hands on an organization's sensitive data. However, false alerts can and do happen, and over time, this leads to security teams becoming desensitized to them. Dangers of alert fatigue in SOC teams In a security operations center, alerts that originate from countless systems and tools compete to get the attention of security analysts, who battle to defend their organization from cybersecurity threats as effectively as possible. Putting the numbers in perspective, organizations with over 1,000 employees utilize around 70 security products from more than 30 different vendors. And all of those products produce alerts that can cause alert fatigue in SOC teams. Alert fatigue in cybersecurity, also known as operational fatigue, occurs when SOC analysts become desensitized to alerts from their tools because of their frequency. It's a major challenge faced by SOC teams as they bear the immense responsibility of maintaining network and data system security. Even the simplest negligence, caused by alert fatigue, can compromise an entire organization's infrastructure. The fallout from IT alert fatigue in SOC teams can manifest in several ways: Burnout that can lead to a high-stress environment and high turnover of analysts. Lack of financial return to the organization. Security incidents and data breaches being missed by the SOC team. 
Empowering SOC teams with ASR SOC teams waste valuable time manually correlating high-volume alert data from multiple security tools. These alerts lack prioritization and actionable context, leaving the team to do all the heavy lifting, potentially spending time on low-risk alerts while missing out on critical ones. For SOC analysts to respond to questions of incident relevance quickly and combat alert fatigue, having a ready understanding of public-facing internet assets is critical. Access to alert fatigue solutions that provide contextual data is also vital, for SOC analysts to better comprehend the magnitude of an alert and its accompanying threat to a digital asset in their organization's infrastructure. Attack Surface Reduction (ASR) provides your SOC teams with appropriate asset context to effectively prioritize risks and incidents across your entire cloud and on-prem infrastructure. ASR benefits include: Near real-time inventory of all external-facing assets - ASR's Inventory section gives your team a unified view of all discovered infrastructure data, keeping them informed on potential security issues such as IPs pointing to local addresses, remote access points with open ports, exposed VPN endpoints and more. Highlighting of critical exposures on assets - Along with its inventory of all discovered assets, our proprietary automated asset analysis reveals critical security risks such as open database ports, self-signed certificates that can indicate service misconfiguration, and staging and development subdomains that are often left unprotected. Appropriate contextual asset data - To effectively prioritize risks and incidents across your entire cloud and on-prem infrastructure, ASR's Explorer tab allows your team to choose an asset for which they need more context and simply scroll down to uncover relevant data such as open ports, ASN information, redirects and more. 
Proactivity with actionable data - To make the right call on securing critical assets, the Activity tab lets you keep an eye on all new assets automatically discovered by ASR, allowing for p...
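The kind of asset-context prioritization described above can be pictured as a simple scoring function. This is an illustrative sketch only; the risk factors, field names, and weights are assumptions for the example, not ASR's actual scoring model:

```python
# Illustrative sketch: score an alert using external-asset context.
# The risk factors and weights below are hypothetical, not ASR's model.

RISK_WEIGHTS = {
    "open_database_port": 40,
    "exposed_vpn_endpoint": 30,
    "self_signed_cert": 15,
    "staging_subdomain": 10,
}

def score_alert(base_severity, asset_context):
    """Combine a tool's base severity (0-100) with asset risk factors."""
    score = base_severity
    for factor, weight in RISK_WEIGHTS.items():
        if asset_context.get(factor):
            score += weight
    return min(score, 100)

# The same alert outranks its twin when the affected asset is exposed,
# so analysts triage the riskier one first.
hardened = score_alert(50, {})
exposed = score_alert(50, {"open_database_port": True})
```

With context attached, two identical alerts no longer compete equally for an analyst's attention, which is the core idea behind using asset inventory data to fight alert fatigue.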

    SecurityTrails Year in Review 2021

    Play Episode Listen Later Dec 21, 2021 16:59


    After 2020, a year of unprecedented change and revelation, and with the whole world facing a multitude of challenges, we entered 2021 colored in a fresh layer of optimism, confidence and defiance. SecurityTrails has taken in this sense of resilience and purposefulness generated by the perilous nature of 2021. We're celebrating this year's innumerable gains in providing our customers with proprietary solutions to secure their IT infrastructure and mitigate cyber risk. Some of our biggest product releases and updates, partnerships, integrations, acquisitions of some of the best infosec tools available, community-led and -focused efforts and campaigns, exciting executive hires and even sharp SecurityTrails swag marked 2021 as a bright new horizon for our company. So as we edge closer to 2022, let's take a moment to look back on 2021 and our achievements throughout those 365 days. Transforming our talented, remote SecurityTrails team Our vision, mission and culture have crystallized throughout 2021, further influencing the SecurityTrails team, but our singular passion for remaining The Total Internet Inventory has remained intact. We've bolstered our executive team with new hires as well as long-time colleagues growing into new positions, shaping our team to help us achieve this goal. While there weren't that many of our beloved retreats, collaboration remains at the heart of our remote team. We've continued to connect and collaborate through internal projects, courses, our Lunch & Learn series and monthly virtual get-togethers. In Q4 our Leadership team had a meeting in Orlando, Florida, where a new roadmap for 2022 was brought to life. We're excited to show you how it all unfolds! If you're interested in empowering organizations toward thwarting cyber attacks with up-to-date data, custom solutions and proprietary tools, come join our diverse and talented team of experts! 
Head over to our Careers page to learn more about SecurityTrails culture and open positions in our departments. Product launches and updates One of the things we're most proud of in 2021 is the immense amount of work we put into releasing new products and solutions, enhancing our existing tools, and continuously bringing incremental improvements to our pipeline. And it was all thanks to the feedback provided by you, our users and customers. You can check all our product updates and launches on our blog, as well as in our Changelog. With regular improvements, releases and fixes, possibly too many to count, let's highlight 2021's major SecurityTrails launches and updates: Attack Surface Reduction Starting the year with a real bang, we released a new version of our powerful Attack Surface Reduction solution. ASR is your one-stop shop for exploring the entire internet surface area of your organization, gaining full visibility over your digital assets and IT infrastructure, giving you a way to take decisive action to reduce risks and prevent attacks. Notable feature updates include: Design changes that provide easy prioritization of information, with more effective aesthetics to enrich any report that calls for your attack surface data. The Screenshots option, to further improve visualization of digital assets. An expanded Explorer tab, to provide even more detailed information about your organization's digital assets and deeper attack surface data analysis. Technology detection that gives all our ASR customers access to important backend technology data on any tech running on remote hosts. WAF detection, which allows security researchers and any organization performing discovery and software identification to determine whether assets do or do not have WAF protection. SecurityTrails SQL Right in the middle of 2021, after many weeks of developing, testing and perfecting, we finally saw the general release of our SQL-like query language: SecurityTrails SQL. 
This new product allows security researchers and teams to perform massive intelligence collection an...

    Introducing Single Sign-On to SecurityTrails: Secure Authentication with Okta SSO

    Play Episode Listen Later Dec 2, 2021 3:18


    We are excited to announce that we are beginning the implementation of single sign-on (SSO) access across SecurityTrails. Okta SSO is the first provider we're bringing on in this effort to deliver secure authentication and a better user experience to our users. SSO and its security benefits Single sign-on (SSO) is an authentication service offered by various providers that allows for the use of only one set of credentials, usually a username and password, to access multiple applications securely. With the emergence of cloud computing and the accelerated use of software-as-a-service (SaaS), organizations are adopting the centralized authentication of SSO as an efficient way to provide risk-free access to multiple resources. Some of the main security benefits organizations have reported with the implementation of SSO are: Decrease in likelihood of password theft: One of the best security practices is to have strong and unique passwords for each account/app, but that can be difficult to manage on an organizational level. With SSO, users only need one strong passphrase, meaning they're more likely to remember it and less likely to store it carelessly. Prevention of shadow IT: Shadow IT is becoming more prevalent in cloud-centric environments. SSO allows for monitoring which apps are used by and permitted for users, thus preventing further shadow IT. Help with regulatory compliance: Common regulations such as HIPAA require effective authentication of users as well as automatic logoff for all accessed resources, which SSO effectively enables. Our choice: Okta SSO Okta was our first choice, as it's one of the SSO providers best suited for enterprise users. Known for its numerous integrations, Okta SSO provides different directory types and powerful and essential features that allow for easy implementation and a user-friendly interface. 
Okta is compliant with the OAuth 2.0 standard, which controls authorization of access to sensitive resources, and is a certified OpenID Connect provider, a protocol built on OAuth 2.0 that provides user authentication and SSO functionality. How to enable SSO in SecurityTrails To enable SSO authentication in your account, simply contact us requesting to change your default authentication scheme (please note that as a requirement you'll need to first set up an application inside your Okta organization and provide its client_id along with your designated Okta login's domain name). For a detailed procedure on how to set it up, please check our SSO documentation. After SSO is enabled on your account, you'll receive an email containing an invite link to begin the authentication process. The link in the email will then redirect you to a confirmation page to continue. After confirmation, you'll be presented with a login prompt, where you'll need to sign in with your SSO credentials to be authenticated. Once you enter your credentials, user authentication takes place against the chosen SSO provider—currently with Okta SSO. You're all set! For future SSO authentication usage you can validate your account by using a login link that's unique to your organization, which will be in the following format: This is just the start Implementing Okta is the first step in enabling SSO across SecurityTrails and providing centralized authentication to our users. More authentication protocols will be rolled out in the future—stay tuned!
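Under the hood, an OpenID Connect sign-in like the one described begins with a standard OAuth 2.0 authorization request to the identity provider. A minimal sketch of building such a request follows; the Okta domain, client_id, and redirect URI are placeholders, and this shows the generic OIDC authorization-code flow rather than SecurityTrails' exact implementation:

```python
from urllib.parse import urlencode

def build_oidc_auth_url(okta_domain, client_id, redirect_uri, state):
    """Build a standard OIDC authorization request URL (authorization code flow)."""
    params = {
        "client_id": client_id,          # from the app created in your Okta org
        "response_type": "code",         # authorization code flow
        "scope": "openid profile email", # 'openid' is required by OIDC
        "redirect_uri": redirect_uri,
        "state": state,                  # opaque value for CSRF protection
    }
    return f"https://{okta_domain}/oauth2/v1/authorize?{urlencode(params)}"

# Hypothetical values for illustration only.
url = build_oidc_auth_url("example.okta.com", "0oa_example_client_id",
                          "https://app.example.com/callback", "xyz123")
```

The provider authenticates the user at that URL and redirects back with a short-lived code, which the application exchanges for tokens; this is why only the client_id and Okta domain are needed to wire up the integration.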

    SecurityTrails Meets Gigasheet: Taking Your Recon Analysis to a Whole New Level

    Play Episode Listen Later Nov 30, 2021 12:02


    Humans, in most cases, are not built to process and conceptualize data in any significant measure or speed. Notwithstanding, the last several years have seen an unprecedented growth in data collection and ingestion techniques driven by newer forms of network and cloud technologies, arousing a particular (and ever-growing) concern among the cybersecurity community as diminished visibility threatens to grow proportionally to the degree of integration. In other words, organizations should be asking themselves if the logs and data they're collecting are actually telling the whole story and, if they are, whether the human component, namely the incident responders and threat hunters at the crossroads, is able to quickly align itself with what really took place. There is, however, a new tool on the horizon that promises to disrupt the old paradigm of looking endlessly at relational entities, such as spreadsheets, in search of the mythical "Aha!" moment: Gigasheet. Combining the succinct dimensionality of structured data with a powerful analytics engine capable of handling billions of data points at a time, Gigasheet will certainly innovate the prescriptive space where data can be manipulated, aggregated, queried, and analyzed under a single web-based ecosystem that is as broadly intuitive as it is powerful. Incidentally, given this project's characteristics and the demands currently placed on good data quality, we could think of no better tool than our very own SQL Explorer to generate large recon activity that could be easily consumed and analyzed, a collaborative endeavor that surely did not disappoint. Enter Gigasheet The future belongs to big data; there's very little doubt about that. The terminology, in all its rich diversity, dominates just about every aspect of our digital lives, including niche (e.g., non-tech) environments that once exhibited a smattering of it, with the cybersecurity industry being a definitive, representative sample of the ongoing trend. 
For instance, in cyber, data flows in from a multitude of services, often in disparate formats and lattices underscored by the originating application. As the pipeline grows, analysts can be easily caught in a never-ending cat-and-mouse game of chasing after interesting artifacts and traffic, especially if their toolset of choice lacks important filtering, joining, and intersecting capabilities. For the latter, large data dumps can dramatically compound the problem by requiring significant processing times even when pitted against robust hardware specifications. When the early adoption of cloud-based analytic tools became the dominant narrative, many seized the opportunity to integrate the emerging technology into their processes. This, however, entailed aggregating and normalizing, slicing and dicing, and similar operations, just to arrive at a suitable model capable of interoperability. Thus, when presented with these and similar challenges, many chose (and still do) to resort to off-the-shelf applications (think Microsoft Excel here) for quick data representation, while others preferred more programmatic approaches, such as the acclaimed Pandas library, but these precluded many entry-level professionals from expeditiously manipulating the data due to a substantial learning curve. To break down some of these important barriers, Gigasheet's team realized that accelerating analysis meant removing the initial scaffolding, reducing the setup effort to a small number of clicks. This is SaaS at its best, conducting resource-intensive tasks with ease and scalability without worrying about the underlying infrastructure, reliability, and accessibility for all team members, who no longer need to be sidetracked by maintenance windows or hardware issues, resulting in increased collaboration, as well as overall faster response times to critical items in need of immediate attention. 
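The filtering, joining, and intersecting operations mentioned above are easy to picture with a toy example. Plain Python is used here for self-containment, and the hostnames and ports are made up for illustration:

```python
# Toy recon datasets from two hypothetical tools: a DNS enumerator
# and a port scanner. All hostnames and ports are invented.
dns_hosts = {"api.example.com", "dev.example.com", "vpn.example.com"}
port_scan = {
    "api.example.com": [443],
    "dev.example.com": [22, 8080],
    "old.example.com": [3306],
}

# Intersect: hosts observed by both tools.
seen_by_both = dns_hosts & set(port_scan)

# Filter + join: of those, keep hosts exposing a non-HTTPS port,
# joining the hostnames back to their scanned port lists.
risky = {h: ps for h, ps in port_scan.items()
         if h in seen_by_both and any(p != 443 for p in ps)}
```

At a few dozen rows this is trivial; at billions of rows the same set intersections and filters are exactly the operations a spreadsheet chokes on and an engine like Gigasheet is built to handle.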
Best of all, Gigasheet's 24/7 development cycle directly translates to optimizations and fixes that are rol...

    Open and Exposed Databases: Risks and Mitigation Techniques Explained

    Play Episode Listen Later Nov 24, 2021 12:01


    Note: The audio version doesn't include code or commands. Those parts of the post can be seen in the text version. Databases are among the most important parts of a web application. Almost every action performed on your web application involves using a database in some form to determine what to perform next, to store a user's input, or to give a user information. These three user interactions form the most essential functions that a web application performs. Databases often contain tons of valuable information, including usernames, passwords, emails, IDs, addresses, phone numbers, and much more. This treasure trove, however, also makes the database one of the most targeted parts of a web application. Looking at some of the largest database breaches in history further emphasizes just how valuable the information stored within your organization's database can be. And database hacks are not always sophisticated hacks that occur due to software code faults. Frequently, the simple yet fatal misconfiguration of a database's operation is the root cause of data breaches. Hello Elasticsearch Some of the most common database hacks involve Elasticsearch, a popular database that is highly efficient at storing large amounts of data as well as analyzing and visualizing the data it stores. This makes it super popular within organizations that have a lot of logging or other large data to be analyzed. Elasticsearch by default binds to localhost only, which is secure enough, but to make Elasticsearch usable in an organization, database administrators often make the mistake of binding Elasticsearch to the public network interface without firewalling it. While this may seem normal at first, keep in mind that Elasticsearch has no default user authentication set up on it. Manual configuration is required to enable the X-Pack module, which then allows one to set up password-based authentication on Elasticsearch. 
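To make the misconfiguration concrete, here is a sketch of the relevant elasticsearch.yml settings. The setting names (network.host, xpack.security.enabled) are standard Elasticsearch settings; treat the values as an illustration rather than a hardening guide:

```yaml
# Risky: binds Elasticsearch to all interfaces with no authentication.
# network.host: 0.0.0.0

# Safer baseline: keep the default localhost-only binding...
network.host: 127.0.0.1

# ...or, if the node must be reachable over the network, bind to an
# internal interface behind a firewall AND enable X-Pack security so
# password-based authentication is enforced:
# network.host: 10.0.0.5
# xpack.security.enabled: true
```

The dangerous pattern is flipping network.host to a public interface without also enabling security, which is exactly the scenario described above.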
The above misconfiguration allows attackers to simply enter, delete data, steal data, and exit. Again, there is simply no way to determine whether a user is a hacker or not. This simple flaw has caused countless hacks of Elasticsearch over the years and continues to do so even today. Security breaches caused by database compromises can lead to loss of data as well. At times, data is not only stolen but also destroyed by attackers. Read more about data loss prevention here. Now, let's take a closer look at how to identify open and exposed databases. Consequences of a database breach The consequences of a database breach are extensive and often seen as a critical cause of trust issues in any web application. Because databases often contain sensitive information like first and last names, home addresses, personal phone numbers, and other information that is shared in confidence, leaks of such data are perceived as negative and highly harmful, leading to trust issues and customer departures from compromised web applications. Another facet of database breaches is the threat of silent attacks. These involve attackers making minor changes to a database in order to gather or steal data over a long period of time and also to compromise targeted accounts. This was often seen in cryptocurrency exchanges in the past, wherein user accounts would get compromised and funds stolen, as well as in advanced persistent threat (APT) campaigns. With newer laws coming into place such as the GDPR (in Europe), the Data Protection Act enforced by the ICO (in the UK) and various other regional laws, dealing with breaches can also involve legal consequences and financial fines. These laws require companies operating within those regions to report breaches promptly (within 72 hours of becoming aware, under the GDPR). Failure to do so can incur larger fines and other legal consequences. However, beyond laws, legal consequences, and financial losses, your reputation is the most important aspect of your web application. 
It is essentially what drives users to your platform, makes them stay, and prompts them to recommend it ...

    Whois History Update: Get the Full Historical View of a Company's Whois Records

    Play Episode Listen Later Nov 23, 2021 2:09


    Today we're excited to announce several improvements in our Whois historical records that take our data to the next level, so you can analyze any domain name ownership information more efficiently. Enhanced Whois timeline Our improved Whois timeline will now detect a business 'Start Date'. This feature will show the exact date when new companies and individual owners acquire active domain names. At the exact moment of acquisition of the domain, the domain history changes in our database, and you'll be able to visualize the new records easily on the public Whois timeline. As you can see in the screenshot above, you'll see the ownership change of a domain name in history. Additionally, it will be possible to distinguish public and private Whois records thanks to our red-and-blue color scheme. Additional Whois historical records Along with the timeline changes, we're also introducing additional Whois historical records to our database. This will let you obtain even more information than before, as shown in this comparison: This new enhancement goes deeper into the history of a domain name and allows you to extract additional data from its records. This helps security analysts find noteworthy spots to examine while uncovering the full historical behaviour of a company's asset. To go even deeper into the domain history analysis, simply scroll left on the timeline, past any older records that seem interesting, until the desired timeline position is found. As an example, the domain name analyzed above has several dates to be checked against. Once a date is clicked, the Whois information corresponding to that exact date will be displayed below. With additional historical information, you can easily visualize the life of domain names, their owners, contact information, and much more. 
Summary These new enhancements to our Whois data will allow you to gain more visibility over changes throughout the Whois timeline, while providing access to historical records spanning further into the past, not previously available in our database. Whether you're using our SecurityTrails API or any of our other products such as SurfaceBrowser, the new Whois data is already available for you. If you don't have an account with us yet, grab your Prototyper API key to start querying Whois data today!
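As an example of pulling this data programmatically, a request to the Whois history endpoint can be prepared as below. The endpoint path and APIKEY header follow the SecurityTrails API v1 as we understand it; double-check the current API documentation, and note the key and domain are placeholders. The request is only built here, not sent:

```python
import urllib.request

API_KEY = "your-api-key-here"  # placeholder; use your real API key
domain = "example.com"

# Whois history endpoint (path as documented for the SecurityTrails API v1).
url = f"https://api.securitytrails.com/v1/history/{domain}/whois"

# The API authenticates via an APIKEY request header.
req = urllib.request.Request(url, headers={"APIKEY": API_KEY})
# urllib.request.urlopen(req) would return a JSON body containing the
# historical Whois records; we stop short of sending the request here.
```

Each record in the response can then be walked chronologically, which is the programmatic equivalent of scrolling left along the public Whois timeline.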

    Announcing New Features in Attack Surface Reduction

    Play Episode Listen Later Nov 18, 2021 2:46


    Today we are happy to introduce the new Explorer Tab for ASR. This new version redefines the concept of asset exploration, providing even more detailed information about any of your digital assets, to help you perform attack surface data analysis in a much better way. Look for the new 'Explorer' tab Take a look at all the new and exciting capabilities included within this upgraded version of our ASR Explorer. This release contains several improvements, including: Better visual web app identification with the use of home page screenshots. Extended infrastructure detection capabilities such as WAF detection and backend technology mapping, and more! If this summary is as exciting for you as it is for us, please join us in the following sections where we briefly showcase each of the most interesting new features ready for you to test! Technology Detection This new version includes access to Technology Detection, particularly important concerning backend technologies running on the remote host, along with their versions. This new analysis feature helps you build a technology profile, showing you what websites are built with, such as CMSs, application servers, frameworks, e-commerce platforms, JavaScript libraries, and much more, as you can see from the above screenshot. Screenshots In a separate tab, to the right of the host list, you'll find a 'Screenshots' option. This new feature allows you to visualize screenshots of all assets in an extensive way, as shown here: Additionally, it's also possible to see the different screenshots by looking at the Explorer tab's main dashboard and hovering over the listed open ports highlighted with a white sheet. Once that's done, a screenshot snippet will appear next to the position of your pointer, which will provide you with a home page visual preview. 
WAF Detection WAF Detection helps security researchers during the application discovery and software identification phase and serves well to keep an eye on how many of your assets do or do not have any WAF to protect them. Which WAFs can be detected? ASR can detect almost any kind of WAF, and just to mention some of the more popular ones, they include: CloudFront, Cloudflare, AWS Elastic Load Balancer, Cachewall, Incapsula, Kona Site Defender, DOSarrest, Zenedge, BIG-IP Local Traffic Manager, NetScaler App Firewall, Wordfence, and many other commercial and generic WAFs. Summary With these new features in ASR Explorer, organizations can gain even more visibility over the status of their digital assets in a quick and centralized manner, covering previous asset data from our original 'Explorer' version while adding new and critical information about server technologies and software versions, as well as useful crawling details. Take advantage of this bold new infosec feature, get a clear picture of all your assets and begin securing your IT infrastructure as quickly as possible: request access to ASR today.
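WAF detection of this kind generally works by fingerprinting HTTP response headers, cookies, and error pages. A deliberately simplified header-based sketch follows; the signature-to-product mappings are commonly cited public fingerprints used here as assumptions for illustration, not ASR's actual detection logic:

```python
# Minimal, hypothetical map of response-header fingerprints to WAF/CDN
# products, for illustration only.
WAF_SIGNATURES = {
    "cf-ray": "Cloudflare",
    "x-amz-cf-id": "Amazon CloudFront",
    "x-iinfo": "Imperva Incapsula",
    "x-sucuri-id": "Sucuri",
}

def detect_waf(headers):
    """Return the first WAF whose fingerprint header appears, else None."""
    lowered = {k.lower() for k in headers}
    for fingerprint, waf in WAF_SIGNATURES.items():
        if fingerprint in lowered:
            return waf
    return None
```

Real detectors combine many such signals (cookies, status codes, blocked-request bodies) because headers alone can be spoofed or stripped, but the lookup above captures the basic idea.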

    AutoRecon: A Multi-Threaded Network Reconnaissance Tool

    Play Episode Listen Later Nov 17, 2021 4:47


    Note: The audio version doesn't include code or commands. Those parts of the post can be seen in the text version. With organizations' digital footprints growing larger and larger, network recon and the enumeration of services available over the public internet has become a critical area in the security of an organization. And given the increased number of vulnerabilities and threats targeting web applications, performing automated recon and service enumeration is ever more important. Fortunately, using open-source and free-to-use tools such as Recon-ng has streamlined this process to the point of near-automation. Today we'll take a look at AutoRecon, aimed at doing just that: automating your network recon and service enumeration methods. What is AutoRecon? AutoRecon is an open-source project built to perform network reconnaissance with automated service enumeration. The advantage that AutoRecon provides over other information gathering and internet scanning tools is that it allows one to further process—and further act upon—information gathered directly within AutoRecon. This includes running scans with tools like Nmap as well as running the gathered data through other scanning tools, such as feroxbuster, sslscan, nbtscan, Nikto and more. Installing AutoRecon Note: As its dependencies are easily available on Kali Linux, we suggest using AutoRecon on that distribution. To begin with, ensure you have python3 and pip available. Next, use Python pip to grab the latest version of AutoRecon and install it. Next, you'll need to install certain dependencies. Run the command to determine whether it's been successfully installed, which should then give you the following output containing the various options available in AutoRecon. Usage Getting started with AutoRecon is super simple—one can even run AutoRecon without any flags or options: Replace domain.com with a domain name that you wish to scan. 
Once the command has finished executing, it should return the following output: Analyzing the results After a scan completes, AutoRecon saves the scan results in the "results" directory, inside of which a new subdirectory is created for every target scanned. The results structure created by AutoRecon is shown below: The exploit directory is used to store any exploit code you run against the target being scanned. The loot directory is intended to store any hashes or notable files you find on the target you're scanning. The report directory contains reports of the scan performed by AutoRecon; files are generated as follows: local.txt can be used to store the local.txt flag found on targets. notes.txt contains a basic template where you can write notes for each service discovered. proof.txt can be used to store the proof.txt flag found on the target. The screenshots directory is used to store any screenshots you use to document the exploitation of the target. The scans directory is where all results from scans performed by AutoRecon go. This includes all commands executed by AutoRecon, and whether each command failed or succeeded. The scans/xml directory stores scan results in XML format (from Nmap, etc.), which can be used to easily import scan result data into other software for further processing or storage. Further understanding the results Finding the webserver version From the output gathered by AutoRecon, one can also find the version of the webserver running on the target system. Most web servers expose their name and version by default; for example, from the Nmap output: This tells us the webserver running on the target being scanned is nginx 1.14.0. Detecting operating systems Looking further at the output we've gathered above, the webserver often exposes the operating system or operating system family, too. As shown above, we can see the target being scanned runs on Ubuntu. 
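For illustration, here's how the webserver product and version can be pulled out of a saved Nmap service line like the one described above (the file name and line contents are hypothetical stand-ins for AutoRecon's real scan output):

```shell
# A service-detection line as Nmap typically reports it (illustrative)
printf '80/tcp open  http  nginx 1.14.0 (Ubuntu)\n' > nmap_http.txt

# Extract just the product and version of the webserver
grep -oE 'nginx [0-9.]+' nmap_http.txt   # → nginx 1.14.0
```

The same one-liner works against any file under the scans directory; swap the pattern for whichever server software you expect.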
Gathering screenshots along the way Often, web application screenshots can tell a l...

    Introducing Associated Domains v2

    Play Episode Listen Later Nov 11, 2021 2:55


    Today at SecurityTrails we're announcing an upgrade to our Associated Domains API endpoint and its functionality inside SurfaceBrowser and Attack Surface Reduction. Associated Domains was originally introduced a few years ago. Its purpose is to footprint a company's infrastructure by finding all domains associated with that company. The primary vectors involved a lot of heuristics around Whois data. While Whois is not dead by any means, it has left a lot of gaps after GDPR and privacy-guard enablement. We've heard your feedback and have been working on a wonderful new set of features that utilize many other vectors of association and allow us to expand in the future. Based on your feedback, we now provide the provenance of each association, so that you can understand how one domain is related to another. This is currently available inside SurfaceBrowser. What's new in Associated Domains v2? Major improvements to the algorithm to find false-negative domains that may have been missed by other methods. A keen attention to mergers, acquisitions, and subsidiaries. Provenance at a glance, detailing why an association was made. 10+ additional signals for associations. Enhanced Whois, SSL, hosting, nameserver and other infrastructure analysis. From the previous screenshot, you can also notice that ADv2 now shows why a domain name was associated. Common association reasons you'll find include: SSL organization. SSL organization name. Whois email. Whois organization. Parent's organization name. Parent's organization legal name. Comparing results of ADv1 vs ADv2 for Netflix.com associations To see these improvements in action, let's first see how many domains, organizations and TLDs can be found with both versions: That's an 81% increase in the number of discovered associated domains! Now let's try using SurfaceBrowser and filtering by 'Creation by year' and 'Expiration by year'. 
With v1 we got 184 domains in the summary 'by Creation Year', starting in 1995 as the first registration date. And for the summary 'by Expiration Year', we got 183 results, from 2019 through 2026. With v2 we got 882 domains, almost 4 times as many, starting in 1992. And reviewing the 'Summary by Expiration Year' we got 877, ranging from 2018 to 2026. Summary As you can see, the new version of Associated Domains, with all of its improvements and features, evidently provides more domain associations than the previous one. This will make organizations' intelligence collection about hostnames easier than before. All new accounts created after Tuesday, November 16th will have ADv2 enabled by default. Users who had AD enabled on their account prior to this date can contact us to get ADv2 access. Stay tuned for more product updates in the following weeks.
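As a hedged sketch, pulling associated domains programmatically might look like this with the SecurityTrails API (the `/v1/domain/{domain}/associated` path and `APIKEY` header follow the public API documentation at the time of writing; verify both, and the response shape, before relying on them):

```shell
# Query domains associated with netflix.com (replace <your_api_key>)
curl -s "https://api.securitytrails.com/v1/domain/netflix.com/associated" \
     -H "APIKEY: <your_api_key>" \
     -H "Accept: application/json"
```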

    Uniscan: An RFI, LFI, and RCE Vulnerability Scanner

    Play Episode Listen Later Nov 9, 2021 6:10


    Note: The audio version doesn't include code or commands. Those parts of the post can be seen in the text version. When scanning remote hosts and web applications, the danger of file inclusion attacks is an important consideration, particularly when dealing with web applications that support plugins, such as WordPress. An RFI, or remote file inclusion attack, targets web applications that make use of includes via external scripts (commonly known as application plugins), hooks, themes, or anything else that is dynamically included in the web application during runtime. If these includes contain vulnerabilities, it's highly likely that exploiting them can leave the main web application exploitable. That's why today we'll take a look at the Uniscan project. In the project's own words, Uniscan is a simple Remote File Include, Local File Include, and Remote Command Execution vulnerability scanner. Installation We recommend using Kali Linux for Uniscan, as it is available there for easy installation via the package manager. Installing Uniscan on Kali Linux is relatively straightforward, as it can be installed directly via the APT package manager and does not need compiling from source. First, we update our APT package manager information with the following command. Next, we proceed to install Uniscan. To verify a successful Uniscan installation, let's run the following command. This should then return the following output, which displays the options/flags Uniscan has available. Configuration Uniscan can run with minimal configuration, but it does allow for a good amount of customization: -h — The -h flag shows us all the options available under Uniscan. -u — The -u flag is used to specify the URL being scanned, for example: www.example.com. -f — If you wish to scan a list of URLs, you can input them into a text file and reference it with the -f flag. -b — Scans can take a while to complete if you have multiple URLs to scan. 
Using the -b flag pushes Uniscan to run in the background; alternatively, you can run Uniscan under a "screen" session on Linux. -q — The -q flag enables directory-based checks for the target being scanned. -w — The -w flag enables Uniscan to check for files present on the remote host being scanned. -e — The -e flag enables Uniscan to check for robots.txt and sitemap.xml, which can further help identify the type of script/web application running on the target host. -d — The -d flag enables dynamic checks within Uniscan to check for any dynamic file includes. -s — The -s flag enables static checks within Uniscan to check for any static file includes. -r — The -r flag enables stress checks to be run on the target being scanned. -i and -o — These flags perform Bing and Google searches for dorks related to the target being scanned. -g — The -g flag is used for web fingerprinting; this helps identify what web application is running on the web server, what plugins are enabled (for example, in WordPress), what version of WordPress is running on the server, and more. -j — The -j flag enables the server fingerprint check/listing, which allows identification of the server software. This performs actions such as ping, Nmap, traceroute, and listing of the web server and operating system running. Testing and results To run a basic scan on a web app, we use the flags "qweds", which instruct Uniscan to perform the following: Directory checks (q). File checks (w). Robots/sitemap checks (e). Dynamic file include checks (d). Static file include checks (s). The checks performed by the flags "qweds" can all be performed in the same run with a single command. Note: Replace the URL with the actual one you wish to scan. This then returns the following output. As seen above, when directory and file checks are performed, Uniscan will find and list directories as well as files seen on the target being scanned. 
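Putting the steps above together, a minimal Uniscan session on Kali might look like this (the `uniscan` package name and the flag set reflect Kali's repositories and the tool's help output; verify both on your system):

```shell
# Refresh APT package information, then install Uniscan
sudo apt update
sudo apt install -y uniscan

# Verify the installation; -h lists all available options/flags
uniscan -h

# Basic scan: directory (q), file (w), robots/sitemap (e),
# dynamic include (d) and static include (s) checks in one run
uniscan -u http://www.example.com/ -qweds
```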
    Next, Uniscan performs checks on robots.txt and the sitemap, and begins enumerat...

    Aquatone: An HTTP-Based Attack Surface Visual Inspection Tool

    Play Episode Listen Later Oct 28, 2021 6:38


    Note: The audio version doesn't include code or commands. Those parts of the post can be seen in the text version. Attack surface management has become one of the most critical aspects of any website on the public internet. Simply knowing your attack surface is no longer enough, and effectively managing it with tools like Aquatone has become the norm. Combining Aquatone with popular tools like OWASP Amass helps improve and streamline website attack surface management even further. What is Aquatone? Aquatone is a free-to-use, open-source project aimed at making visual inspection of websites an easy task. This valuable tool also supports looking up websites in bulk, which can make the task of information gathering for your website's attack surface surprisingly easy. Aquatone works with the help of a web browser like Chrome or Chromium to perform the visual inspection of any website being looked up. Aquatone can be further combined with tools like Nmap to gain even more insight into a website's attack surface. Installation To install Aquatone, grab the latest release from the project's GitHub page for the operating system you run. Aquatone has release builds for Linux (amd64 and arm64), macOS, and Windows, which makes it a very handy tool no matter what platform you're on. In our example, we'll take a look at both the Linux and Windows options. For Linux, grab the amd64 build or arm64 build. If in doubt, grab the amd64 build: And then unzip the archive. Now let's run the command for the first time. The help command will show a list of command arguments, features and flags supported by Aquatone. Next, for Aquatone to perform visual lookups of websites, you'll need Chromium or Google Chrome installed on your system. If you are running any Debian-based distro, you can install this package by just running the following command. Similarly, for Windows, download the "windows_amd64.zip" build and extract the archive. This should result in the following files. 
Fire up the command prompt with WIN + R, then enter CMD. Navigate to the folder where you extracted the files and run the executable, which should result in the following output. As with Linux, you'll need either Google Chrome or Chromium installed on your system so Aquatone can perform the website visual lookups. Aquatone phases and usage examples Basic usage To begin using Aquatone, let's look at scanning websites with the basic flags/options available. First, create a text file called "websites.txt" inside the same folder as the Aquatone executable, and inside it add the websites you wish to scan, ensuring you have only one website per line. Run the command, which should net you the following output. From the output above, we're able to gather a few important facts. Aquatone is FAST! Using this tool, we were able to gather information about two websites in only five seconds. As for the output returned, Aquatone gives us an HTML report, an HTTP code and a screenshot of each website. Aquatone targets ports 80, 443, 8000, 8080 and 8443 by default if no arguments or specific ports are passed into the command. Scanning specific ports At times you may need to scan only specific ports, or the most commonly used ports (such as 80 and 443). This can be done using the ports flag, which should return the following output. Using Aquatone with OWASP Amass Another excellent feature of Aquatone is that it can be combined with other tools like OWASP Amass, which extends what Aquatone can achieve even further. Amass is a great tool for DNS enumeration, as it helps find and list subdomains belonging to a domain. With larger organizations having hundreds, if not thousands, of subdomains active at any time, using Amass helps speed up the process, gathering information from multiple third-party sources. Amass offers builds for Linux, Windows and macOS, as well as FreeBSD. To begin, grab the latest release of Amass from its GitHub Releases page by executing the following command. 
And then unzip ...
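The workflow above can be sketched as follows (the `-ports` flag and stdin-driven input follow Aquatone's README, and the `amass enum` flags follow Amass's documentation; treat exact flag names as assumptions to verify against each tool's help output):

```shell
# Basic run: Aquatone reads targets from standard input
cat websites.txt | ./aquatone

# Limit probing to specific ports only
cat websites.txt | ./aquatone -ports 80,443

# Chain OWASP Amass subdomain enumeration straight into Aquatone
amass enum -passive -d example.com | ./aquatone
```

On Windows the same pipelines work from CMD, using `aquatone.exe` in place of `./aquatone`.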

    Nmap on Windows: Installation and Usage Guide for Windows Users

    Play Episode Listen Later Oct 26, 2021 8:10


    Note: The audio version doesn't include code or commands. Those parts of the post can be seen in the text version. Available for Windows, Linux, macOS, and a range of other operating systems, Nmap is widely used to perform network scans, conduct security auditing, and find vulnerabilities in networks. As the project's official page explains, at the most basic level, Nmap allows you to quickly map the ports on your network, and to do so without being detected. This functionality is accessed through a well-structured set of Nmap commands, which will be familiar to anyone who has worked with command-line network tools before. Commands can also be built into Nmap scripts to extend your capabilities even further. Installing Nmap on Windows, and for that matter using it on Windows, is fairly straightforward. We'll show you how to download Nmap, and how to install it. We'll then take you through the most common use cases for Nmap, before showing you the official GUI alternative called Zenmap. Installing Nmap on Windows Installing Nmap on Windows is straightforward: The first step is to go to the official download page and download the latest stable version of Nmap. NOTE: There are typically a number of different versions of Nmap available: the latest stable version, in addition to early-release betas that offer extra features at the cost of some stability. Download the version you feel most comfortable with, which for most beginners will be the latest stable version. Next, navigate to the location where the file was downloaded. If your Windows installation is fairly standard, this will be your "Downloads" folder. You will see a file there called "nmap-X.XX-setup", or similar. If you can't find the file, do a quick search for it. This file is an EXE, an executable. In order to use it, you will have to run it with administrator privileges. To do that, right-click the file and then click "Run as administrator". The installer will now run. 
A window will appear asking you to accept the end-user agreement. Click "I Agree" to do so. Next, the installer will ask which components of Nmap you'd like to install. All of the components are selected and installed by default. Unless you are experienced with the program and don't need some of these components, go ahead and accept the proposed installation. Then the installer will ask where you want to install Nmap. It will default to C:\Program Files (x86)\Nmap, but you can change this if you'd like. The important thing is that you know where Nmap is installed, because (as we'll see shortly) you'll need that information in order to call it from the command line. Click "Install", and Nmap will start to install. This should be a pretty quick process, even on old hardware; Nmap is a small program, despite being so useful! You'll then get confirmation that Nmap is installed. If all went smoothly, you should now have a working version of Nmap on your computer. Depending on your level of experience, however, you might be a little confused at this point. By default, Nmap is a command-line tool, and as such it doesn't have an icon that appears in your programs menu. If you are familiar with using the command line (or want to give it a go), you can proceed to the next section. If you want a graphical program (a GUI) for Nmap, take a look at the section on Zenmap below; this program provides a more familiar interface for novice users. Running Nmap on Windows 10 - Usage Examples Using Nmap on the Windows command line As we've mentioned above, by default Nmap is used completely through the Windows command line, and this is how most people will use it. If you are not familiar with this, you can either download Zenmap (see below) or even use Nmap as an introduction to the Windows command line; it makes a great place to learn. Here are the most common use cases for Nmap: 1. 
Detecting the version A simple use case for Nmap is to check the version of Windows (or any other OS) y...
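The version and OS detection use cases can be sketched with standard Nmap flags, which work the same from the Windows command prompt (targets are placeholders; OS detection needs an elevated/administrator prompt):

```shell
# Probe open ports and identify service versions on a host
nmap -sV 192.168.1.10

# Attempt OS detection (requires elevated privileges)
nmap -O 192.168.1.10

# Aggressive scan: OS detection, versions, default scripts, traceroute
nmap -A scanme.nmap.org
```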

    SecurityTrails Bolsters Executive Team

    Play Episode Listen Later Oct 21, 2021 2:25


    ORLANDO, Florida, October 19, 2021 — SecurityTrails, the Total Internet Inventory, is adding to its executive team by bringing on Scott Donnelly as Vice President of Sales and Taylor Dondich as Chief Technology Officer. Long-term team members Courtney Couch and Kris Lopez take on new roles to round out the team. "Taylor and Scott are joining at the perfect time for SecurityTrails. With the launch of our Attack Surface Reduction product, we're taking on bigger challenges for larger organizations. Their experience will help ensure that both SecurityTrails and its customers are successful," said Chris Ueland, CEO of SecurityTrails. Scott Donnelly brings a wealth of industry knowledge, having held senior sales leadership roles at both Expanse and Recorded Future. During his time as Vice President of Technical Solutions at Recorded Future, Scott drove the integration of their security intelligence with dozens of leading IT and security products. His arrival at SecurityTrails will ensure that customers realize the value of the Total Internet Inventory across their entire organization. Taylor Dondich takes over as the new Chief Technology Officer. Co-Founder Courtney Couch will now be focused on finding better ways to identify customer infrastructure and risks in his new role as Chief Innovation Officer. Dondich has worked in the technology space for over two decades with companies such as Yahoo and Splunk, and served as VP of Engineering for MaxCDN. His experience in the roles of tech executive, advisor and engineer will guide SecurityTrails to develop innovative technology and ensure customers are never blindsided by unknown risks. Having previously worked as Strategic Partnerships Manager at SecurityTrails, Kris Lopez's expansive business background, along with her outstanding interpersonal and project management skills, makes her a perfect choice as the new Chief of Staff. 
SecurityTrails is bringing forth these instrumental changes to propel it to the next level and to ensure it continues to provide the best quality of data available to help companies protect themselves. About SecurityTrails SecurityTrails is a Total Internet Inventory that curates comprehensive domain and IP address data for users and applications that demand clarity. By combining current and historical data on all internet assets, SecurityTrails is the proven solution for third-party risk assessment, attack surface reduction and threat hunting. From knowing an organization's attack surface and shadow infrastructure to spotting new domains, SecurityTrails makes sure there's nothing left undiscovered. Learn more at securitytrails.com.

    Information Security Policy: Overview, Key Elements and Best Practices

    Play Episode Listen Later Oct 19, 2021 16:06


    Organizational policies act as the foundation for many programs, rules and guidelines by providing a framework to ensure clarity and consistency around an organization's operations. The importance of information security can't be overstated. If compromised, customer and employee data, intellectual property, trade secrets and other highly sensitive and valuable information can mean the downfall of an organization, which makes keeping it secure one of the most critical operations to maintain. Therefore, a policy accounting for information security becomes an expected progression. With so many different types of data, systems that handle and store it, users that access it, and risks that threaten its safety, it becomes increasingly important to have a documented information security policy. Furthermore, compliance requirements regulate the ways in which organizations need to keep this information private and secure, further promoting the need for a document that will ensure those requirements are met. Regardless of size or industry, every organization needs a documented information security policy to help protect its data and valuable assets. But where to begin? What is an information security policy? An information security policy (ISP) is a high-level policy that enforces a set of rules, guidelines and procedures adopted by an organization to ensure all information technology assets and resources are used and managed in a way that protects their confidentiality, integrity and availability. Typically, an ISP applies to all of an organization's users and IT data, as well as its infrastructure, networks, systems, and third and fourth parties. Information security policies help organizations ensure that all users understand and apply the rules and guidelines, practice acceptable use of an organization's IT resources, and know how to act. Ultimately, the ISP's goal is to provide valuable direction to users with regard to security. 
The way an effective policy is shaped and customized is based on how an organization and its members operate and approach information. The ISP sets the tone for the implementation of security controls that will address an organization's relevant cybersecurity risks, the procedures to mitigate them, and the responsibilities needed to manage security properly. Furthermore, it's implemented in a way that supports the organization's business objectives while adhering to industry standards and regulatory requirements. Organizations across industries design and implement security policies for many reasons. These include establishing a foundational approach to information security; documenting measures, procedures and expected behaviors that support and dictate the direction of overall security management; protecting customer and user data; complying with industry and regulatory requirements; and ultimately protecting their reputation. The CIA triad As mentioned, the main goal of an IT security policy is to maintain the confidentiality, integrity and availability of an organization's systems and information. Those three principles—confidentiality, integrity and availability—make up what is known as the CIA triad, a somewhat dated but still well-known model that remains at the foundation of many organizations' security infrastructure and security programs. Confidentiality refers to an organization's efforts to keep sensitive data private. Personally identifiable information (PII), credit card data, intellectual property, trade secrets and other sensitive information need to remain private and accessible only to authorized users. This is generally accomplished by controlling access to data, often seen in the form of two-factor authentication when logging into accounts or accessing systems, apps, and the like. Integrity in this context describes data that can be trusted. 
This means that data needs to be kept accurate and reliable during its entire lifecycle, so that it can't be tampered with or altered by unauthorized users. In...

    Understanding the MITRE ATT&CK Framework

    Play Episode Listen Later Oct 12, 2021 17:45


    Before an organization can develop and maintain a successful and relevant threat detection and defense strategy, it must first gain a solid understanding of common adversary techniques. The organization needs to know the various activities that can pose a threat, and how to detect and mitigate them. With the current threat landscape featuring innumerable volumes of attack tactics and techniques, it proves challenging, if not nearly impossible, for every organization to monitor, document and communicate each of them. Cybersecurity frameworks provide a comprehensive plan of standards, guidelines and common language that can anticipate many of the challenges organizations face in protecting critical data and infrastructure in their efforts to better manage cybersecurity risks. Organizations commonly rely on these frameworks to alleviate guesswork and provide a baseline structure that's further modified to meet the specific organization's needs and goals. After delving into the NIST Cybersecurity Framework, we now turn to another cybersecurity framework often used as a foundation for organizations developing customized threat models. MITRE has developed the ATT&CK framework, which systematically defines and organizes common behavior observed to be carried out by malicious attackers in the wild. It provides a common language that security teams can use to communicate these activities. The ATT&CK framework is globally recognized as an authority on understanding the behavior models and techniques that adversaries use against organizations. It gives industry professionals a way to discuss, collaborate on and share intelligence regarding adversary methods, and provides practical applications of detection, mitigation and common attributes. What is the MITRE ATT&CK framework? 
MITRE ATT&CK, an abbreviation of MITRE's Adversarial Tactics, Techniques, and Common Knowledge, is a comprehensive knowledge base and framework for understanding and categorizing adversary behavior based on real-world observations of various phases of the attack lifecycle. Created in 2013 by the MITRE Corporation, a not-for-profit organization that works across government agencies and various industry and academic institutions, the framework is a globally available collection documenting malicious behaviors carried out by advanced persistent threat (APT) groups. While the information found in ATT&CK does represent APT behaviors, those malicious behaviors occur every day in organizations of all sizes. Consequently, various public and private sector organizations, no matter their size, have adopted the framework. Importance of the MITRE ATT&CK framework ATT&CK is regularly updated by MITRE experts, industry researchers and contributors, thus providing a relevant resource for organizations to create their own threat models and test in-place cybersecurity controls against threats in the current landscape. The tactics, techniques and procedures (TTPs) documented in the framework provide a standardized way for threat hunters, red teams, security operations centers (SOCs) and defenders to understand the cybersecurity risks of known adversary actions and inform a more vigorous defense strategy. To better grasp the importance of the knowledge MITRE ATT&CK conveys, let's turn to a concept developed by David Bianco called the "Pyramid of Pain". Bianco argues that not all indicators of compromise (IoCs) are created equal. Just like ATT&CK, the Pyramid of Pain takes the adversary's point of view, arranging the pyramid into levels of pain the adversary will feel when denied a specific indicator. TTPs represent the apex of the pyramid, the highest pain level if denied to adversaries. 
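As a quick sketch of that ordering, the pyramid's six indicator levels can be laid out from cheapest to hardest for an adversary to replace when denied (the level names below follow Bianco's published model; the inline comments are our own paraphrase):

```python
# The six indicator levels of David Bianco's Pyramid of Pain, ordered
# from least to most painful for an adversary to have denied:
PYRAMID_OF_PAIN = [
    "hash values",             # trivial: recompile or pad the binary
    "IP addresses",            # easy: rotate infrastructure
    "domain names",            # simple: register a new domain
    "network/host artifacts",  # annoying: rework tell-tale traces
    "tools",                   # challenging: rebuild or replace tooling
    "TTPs",                    # toughest: change behavior itself
]

print(PYRAMID_OF_PAIN[-1])  # TTPs sit at the apex
```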
When organizations detect and respond to threats at this level, it means they are operating based on adversary behaviors, rather than just their tools or parts of their attack sources. The thing is, tools can be replaced with other existing or newly created tools, but responding directly to adve...

    Most Popular Subdomains and MX Records on the Internet

    Play Episode Listen Later Oct 7, 2021 6:44


    Simply put, today's internet runs on DNS. The concept is laced with hierarchical overtones, attributable to the structured nature of the protocol itself, and its pivotal role when it comes to the proper functioning of the network of networks. After all, a quick survey of today's visible internet rigorously points to a sizable dataset of nearly 364 million domain registrations across the most popular top-level domains. That's a testament to the rapidly growing need for coherent DNS and IP intelligence solutions that can quickly and effectively sort through the resulting complexity. In short, DNS records can reveal a plethora of important information, including perimeter protection mechanisms and technologies in use, inconsistencies in canonical entities pointing to specific security implications, and similar flaws leading to potential DNS takeover scenarios, so the value is definitely there. At the heart of this blog post lies yet another attempt at recognizing the importance of information gathering and asset discovery regarding the efforts of security researchers and bug bounty hunters alike, as they strive for a suitable interplay of passive DNS enumeration capabilities and techniques. Our goal is to showcase the most commonly used subdomains and MX record types as they complement and enrich the asset discovery ecosystem. If you're in the business of network reconnaissance or asset discovery, mastering the above techniques can go a long way in ensuring flexibility when examining potential areas of exposure and validating legitimate targets of opportunity prior to any engagement. Let's take a quick look. Most popular subdomains on the internet In the recent past, we've articulated that finding associated domains linked to a specific target is central to the idea of extending the attack surface. 
This long-standing argument reflects the possibility of both horizontal and vertical domain correlation, where the intent is to search for any available subdomains and siblings corresponding to the apex. Success in this area is always measured in terms of forgotten or mishandled domain records serving as additional targets of opportunity for miscreants to capitalize on. As a refresher, domain names consist of human-readable character strings with a one-to-one correspondence pointing to a specific web resource. In turn, the Domain Name System (DNS) leverages a subordinate arrangement starting with TLDs, or top-level domains, composed of prominent extensions such as .com or .net, followed by second- and third-level domains which consumers can acquire and control at will. This form of domain administration allows for further specialization whereby domains can be scaled to generate the desired aggregates. For instance, third-level domains, or subdomains as they are normally referred to, can identify an FTP server simply by prepending ftp to domain.com; this denotes the collective designation of a resource via a unique identifier such as ftp.domain.com, otherwise known as a fully qualified domain name, or FQDN. Playing a role often attributed to hostnames within organizational boundaries, subdomains typically exhibit the greatest flexibility when it comes to naming conventions. Thus, large-scale DNS intelligence dictates that keeping an eye on the fluidity within domain names offers a critical view of the threat landscape. This is also the case where subdomain knowledge is leveraged at high-level stages of the recon process, targeting institutional privacy via recursive DNS data and any resulting bidirectional activity in the process. So, what are the most popular subdomains, and how can we identify them? 
From a corpus of over 17 billion records of crawled web data for the .com TLD, and associated URLs, hosted at Common Crawl, we set out to investigate the feasibility of pulling a subset of these records using a common programming language like Python, some commodity hardware, and supporting tools like the CDX Toolkit. Wor...
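Once the hostnames are extracted, the counting itself reduces to a frequency tally over leftmost labels. A minimal Python sketch of that step (the sample FQDNs below are made up; a real run would stream millions of records from a Common Crawl CDX index instead):

```python
from collections import Counter

# Toy stand-ins for FQDNs pulled from a CDX index; a real run would
# stream millions of hostnames from Common Crawl.
fqdns = [
    "www.example.com", "mail.example.com", "www.shop.example",
    "ftp.files.example", "www.blog.example", "mail.corp.example",
]

# The subdomain of interest is the leftmost label of each FQDN.
counts = Counter(name.split(".")[0] for name in fqdns)
print(counts.most_common(2))  # [('www', 3), ('mail', 2)]
```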

    DNS Records and Record Types: Some Commonly Used, and Some You Might Not Know About

    Play Episode Listen Later Oct 5, 2021 16:47


    Note: The audio version doesn't include code or commands. Those parts of the post can be seen in the text version. Without DNS and domain names, our experience of browsing the web would be quite different. As users, we would have to actually memorize the IP addresses of websites we want to visit, which doesn't seem like a pleasant user experience at all. DNS is the system that associates domain names with IP addresses, so whenever we type in “securitytrails.com”, DNS uses a series of protocols to connect us with the authoritative DNS server of the domain name and serves us the content we intended to visit. DNS is one of the most popular internet services, and at the same time, is vulnerable to DNS-based attacks. Understanding DNS is an important step toward preventing DNS attacks. DNS servers contain one critical component: the DNS zone file. This file contains a variety of DNS records, each of which contains specific instructions for other servers to follow in order to connect to different services on the domain, such as the web server to visit a website or a mail server. Most domain owners and users are familiar with "standard" DNS record types such as the A record, CNAME, MX and TXT records, as they are responsible for the everyday actions of online users. But beyond those more commonly used is an amazingly large list of DNS record types many users haven't heard of. Let's refresh our knowledge about the most common DNS record types, and go over a list of all other, lesser-known ones in use today. What are DNS records? As mentioned, DNS records are essentially the instructions found on authoritative DNS servers and stored in their zone file. All domains need to have the few necessary records that allow a user access to a website, but there are many different DNS records involved. These include mail records, website records and informational records, among others. 
If you're interested in learning more about the inner workings of DNS, check out our DNS root servers post. Each DNS record has different components: the domain name; time to live (TTL), i.e. the time in seconds in which the client can store the record in cache before the information must be requested again from the DNS server; class, which is set to IN (internet) for common DNS records that involve hostnames, servers or IP addresses; the record type; and the type data, which is the information according to which the domain can be resolved. All of these components of the DNS record are structured in the DNS record syntax, which typically follows the format: Thus, a DNS record for the website.com web server will then look like this: Record types are of high interest, as they indicate the format of the data in the record and instruct on its intended use—for example, the MX record that contains the location of the mail server. Most common DNS record types Since the early days of DNS, the internet has morphed and advanced in such a way that DNS record types have constantly changed right along with it. Many have become obsolete, only to be replaced with newer types. Some of the most common DNS record types are: A record A records are among the simplest and most fundamental DNS record types. The "A" stands for "address" and when you want to visit a website, send an email or, really, do anything on the internet, the domain you enter needs to be connected with the associated IP address. A records indicate the IP address for the given domain. An example of an A record would look like: For example, if you enter "securitytrails.com" the A record will point you to the 151.101.2.132 IP address, essentially connecting your device with our website. Most websites have a single A record, but you can use multiple A records for the same domain to provide redundancy, or use a number of domain names that each have an A record pointing to the same IP address. 
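To make the record syntax concrete, a zone-file line can be split into the five components just described (the name and address below are documentation placeholders, not real infrastructure):

```python
# A hypothetical zone-file line following the syntax described above:
#   <name> <TTL> <class> <type> <data>
record_line = "website.com.  3600  IN  A  192.0.2.1"

# Whitespace-separated fields map directly onto the five components.
name, ttl, rclass, rtype, data = record_line.split()
print(rtype, data)  # A 192.0.2.1
```

Here the type field "A" tells a resolver that the data field holds an IPv4 address for the given name.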
Something important to know about A records is that they only contain IPv4 addresses. AAAA record What A records are to IPv4 addresses...

    Palo Alto Networks Cortex XSOAR now has access to The Total Internet Inventory

    Play Episode Listen Later Sep 30, 2021 2:45


    Note: The audio version doesn't include code or commands. Those parts of the post can be seen in the text version. SecurityTrails' 125,000 users can now integrate with the leading SOAR platform. We're excited to announce the immediate availability of our latest API integration into Palo Alto Networks Cortex XSOAR, enabling users to operationalize our security intelligence with over 750 different products. Palo Alto Networks Cortex XSOAR is one of the most comprehensive security orchestration solutions on the market today, enabling organizations to manage and collect data about security threats and drive responses with reduced human involvement. These automated playbooks are an incredible time-saver for overworked security teams. SecurityTrails' real-time analysis of hostnames, associated domains, IP blocks, SSL certificates, WHOIS, DNS, and historical DNS provides unrivaled context to security investigations. SecurityTrails XSOAR enrichments can support a wide variety of playbooks including phishing, log-in analysis, vulnerability management, IOC enrichment, and endpoint diagnostics. How does it work? To configure SecurityTrails on the Cortex XSOAR platform, you'll need to follow these steps: Navigate to Marketplace. Search for SecurityTrails. Click "Install". Navigate to Settings > Integrations > Servers & Services. Search for SecurityTrails. Click Add instance to create and configure the new integration instance. A few parameter configurations are required: API key: api.key.here. Trust any certificate (not secure): False. Use system proxy settings: False. Fetch indicators: False. Click Test to check if the URLs, token, and connection are working as expected. If you see a "Success" message, then you're ready to start playing with it. Jump into the playground, and start executing the SecurityTrails commands. 
In the footer area, you'll find a CLI where you can execute any supported SecurityTrails commands, as shown here: Supported commands The following is a list of supported commands that can be executed within the Cortex XSOAR CLI, whether as part of an automation or in a playbook (once you execute a command, a DBot message will be displayed in the War Room showing the command details): With this new SecurityTrails API integration for XSOAR, we are helping thousands of users access security data from our API in more ways, providing more clarity for security companies to access subdomain and domain data, DNS and WHOIS historical records, associated domains and IPs, company details, user-agent activity, and much more. Access the SecurityTrails API integration for XSOAR today.

    Digital Forensics: Sleuthing Against Cybercrime

    Play Episode Listen Later Sep 28, 2021 14:39


    While digital forensics may have come from a fairly dubious tradecraft background, it has grown to be a major part of many cyber crime investigations. Developments in the field in terms of research, tools and techniques have brought digital forensics to a whole new level. Whether providing valuable evidence that assists in the investigation and prosecutions of crime perpetrators or proving their innocence or as part of the post-breach investigation and incident response process in organizations of all sizes, digital forensics is a widely used craft by investigators in all sectors. The ever-growing advancements in information technology have potentially proven challenging to the branch of digital forensics, but its tools and techniques are continuously used to collect, process, preserve and analyze evidence from a range of digital devices, help uncover vulnerabilities and threats and ultimately help inform ways to mitigate them. What is digital forensics? Formally, digital forensics is defined as the branch of forensic science that is concerned with the identification, preservation, extraction and documentation of digital evidence using scientifically validated methods, evidence that will ultimately be used in a court of law. The term originated from "computer forensics" which includes the investigation of computers and digital storage media, but it has separated into a discipline focused on handling digital evidence found on all digital devices that store data. Digital evidence can be collected from many sources. These include computers, laptops, mobile phones, digital cameras, hard drives, IoT, CD-ROM, USB sticks, databases, servers, cloud, web pages, and more. Data sources like these are subject to digital forensics investigations, and must be handled with the utmost care to avoid any modification or contamination. 
When it comes to different types of electronic evidence, these include media files (photos, videos, audio), text messages, call logs, social media accounts, emails, internet search history, user account data (usernames, passwords), RAM system files, digital files (PDFs, spreadsheets, text files), network device records, computer backups, and much more. While in the past more commonly known as a practice used in legal cases, today the term "digital forensics" is also used to describe a process of cyber crime investigation in the private sector, even without the involvement of law enforcement or the court. Once a security breach occurs, organizations leverage digital forensics professionals to identify the attack, determine how the attackers gained access to the network, trace the attackers' movement through the network, ascertain whether information has been stolen, and recover compromised data. This can involve decryption, recovering deleted documents and files, cracking passwords, and the like. What is digital forensics used for? Digital forensics tools and techniques are used regularly by analysts and investigators in law enforcement, military, and government organizations as well as organizations in the private sector. Therefore, the two main use cases for digital forensics are criminal cases or public investigations, and private or corporate investigations: Public sector Government agencies and law enforcement use digital forensics to obtain additional evidence when a crime has occurred, whether it's cyber crime or another type of crime, to support allegations against a suspect. In cyber crime investigations, digital forensics investigators are employed by government agencies once an incident is detected, to find evidence for the prosecution of crimes. 
Not only is digital forensics useful for solving different types of cyber crime such as data breaches, ransomware and data theft, but it can also be used to solve physical crimes, such as burglary, assault, fraud and murder. The evidence uncovered can lead an investigation toward motives behind the crime and can even connect the suspect to the crime scene or suppo...

    Security Information and Event Management (SIEM): History, Definition, Capabilities and Limitations

    Play Episode Listen Later Sep 23, 2021 14:14


    What began as a tool for helping organizations achieve and maintain compliance, security information and event management (SIEM) rapidly evolved into an advanced threat detection practice. SIEM has empowered incident response and security operations center (SOC) analysts as well as a myriad of other security teams to detect and respond to security incidents. While there may be talk about SIEM joining the line of legacy technologies that are proclaimed "dead", SIEM has been a core system for many security teams, and in different capacities. Furthermore, SIEM (along with its evolution) has been intertwined with relevant threats in the ecosystem as well as the market in which it is used. Systems and infrastructures that security professionals must secure in 2021 are vastly different from the systems in use when SIEM first came on the scene. But even if many have decided that SIEM is a thing of the past, its underlying principles and technology remain visible in many new systems such as SOAR, XDR, MDR and other solutions that integrate SIEM capabilities. Vendors and reimaginings come and go, but SIEM prevails as a technology that should be recognized. There will always be a need for experienced individuals to work with SIEM and know how to apply it to the appropriate business touchpoints. We've put together an overview of the history, definition and use cases as well as benefits and limitations of SIEM to provide a greater understanding of its continued usefulness in any security team's toolstack. What is SIEM? SIEM stands for security information and event management. It provides organizations with detection, analysis and response capabilities for dealing with security events. Initially evolving from log management, SIEM has now existed for over a decade and combines security event management (SEM) and security information management (SIM) to offer real-time monitoring and analysis of security events as well as logging of data. 
SIEM solutions are basically a single system, a single point that offers teams full visibility into network activity and allows for timely threat response. It collects data from a wide range of sources: user devices, servers, network equipment and security controls such as antivirus, firewalls, IPSs and IDSs. That data is then analyzed to find and alert analysts toward unusual behavior in mere seconds, letting them respond to internal and external threats as quickly as possible. SIEM also stores log data to provide a record of activities in a given IT environment, helping to maintain compliance with industry regulations. In the past, SIEM platforms were mostly used by organizations to achieve and maintain compliance with industry-specific and regulatory requirements. What brought about its adoption across many organizations was the Payment Card Industry Data Security Standard (PCI DSS) and similar regulations (HIPAA). As advanced persistent threats (APTs) became a concern for other, smaller organizations, the adoption of SIEM expanded to include a wide array of infrastructures. Today's SIEM solutions have evolved to address the constantly shifting threat landscape, and SIEM is now one of the core technologies used in security operations centers (SOCs). Advancements in the SIEM field are bringing forward solutions that unify detection, analysis and response; implement and correlate threat intelligence feeds to provide added intelligence to SOCs; and include or converge with user and entity behavior analytics (UEBA) as well as security orchestration, automation and response (SOAR). How does a SIEM solution work? A SIEM solution works by collecting security event-related logs and data from various sources within a network. These include end-user devices; web, mail, proxy and other servers; network devices; security devices such as IDS and IPS, firewalls and antivirus solutions; cloud environments and assets; as well as all applications on devices. 
All of the data is collected and analyzed in a centralized loca...
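As a toy illustration of the kind of correlation rule a SIEM applies to this collected data (the event format, addresses and threshold here are invented for the sketch, not any vendor's schema):

```python
from collections import Counter

# Hypothetical normalized events gathered from several log sources.
events = [
    {"src": "203.0.113.7", "action": "login_failed"},
    {"src": "203.0.113.7", "action": "login_failed"},
    {"src": "198.51.100.2", "action": "login_ok"},
    {"src": "203.0.113.7", "action": "login_failed"},
]

# Rule: alert when a single source IP accumulates 3+ failed logins.
failed = Counter(e["src"] for e in events if e["action"] == "login_failed")
alerts = [ip for ip, n in failed.items() if n >= 3]
print(alerts)  # ['203.0.113.7']
```

A production SIEM applies hundreds of such rules concurrently, usually with time windows and enrichment from threat intelligence feeds, but the core pattern is the same: normalize, aggregate, threshold, alert.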

    SecurityTrails Acquires Asset Monitoring Provider Surface.io

    Play Episode Listen Later Sep 23, 2021 2:29


    ORLANDO, FL, September 14, 2021 - SecurityTrails, the Total Internet Inventory, announced it has invested in the enterprise-ready asset monitoring provider Surface.io in an effort to deliver continuous attack surface monitoring through their Attack Surface Reduction platform. "SecurityTrails' comprehensive inventory combined with Surface.io's rapid identification of risky assets and services will provide our customers complete visibility into their real-time attack surface." Chris Ueland, CEO & Co-Founder, SecurityTrails. While launching their attack surface management offering, SecurityTrails conducted over 50 customer interviews to assess areas of improvement. Surface.io filled those gaps with its continuous asset monitoring capabilities, featuring best-of-breed modern code and expertise to make the full set of assets, applications and endpoints known. Surface.io discovers all assets in a company's external infrastructure and uses best-of-breed analysis to provide targeted data gathering on the assets that really matter. This gives security teams the ability to ask crucial questions and confirm hypotheses about their external infrastructure. With this acquisition, SecurityTrails aims to empower over 125,000 users worldwide to get intel on their internet assets faster and with greater accuracy to eliminate any potential security threats. "As we take this incredible tool and scale it to the entire internet, we will greatly increase our ability to identify critical risks and establish a complete picture of your infrastructure. This is a huge step forward in our mission to ensure you are never blindsided by unknown risks," states Courtney Couch, Co-Founder & Chief Innovation Officer. The SecurityTrails team is excited to welcome the power and features of Surface.io to help further their mission to become the Total Internet Inventory. Integration with Surface.io has already begun and features will be accessible to customers soon. 
About SecurityTrails SecurityTrails is a total inventory that curates comprehensive domain and IP address data for users and applications that demand clarity. By combining current and historical data on all Internet assets, SecurityTrails is the proven solution for third-party risk assessment, attack surface reduction and threat hunting. From mapping an organization's attack surface and shadow infrastructure to spotting new domains, SecurityTrails makes sure there's nothing left to be discovered. Learn more at www.securitytrails.com and follow us on Twitter @securitytrails.

    Nmap Automator: Automating your Nmap Enumeration and Reconnaissance

    Play Episode Listen Later Sep 21, 2021 6:48


    Note: The audio version doesn't include code or commands. Those parts of the post can be seen in the text version. The rise of reconnaissance tools in the last decade has been remarkable. And understandably so; cybersecurity continues to receive significant attention on all fronts, from secretive accounts of cyber espionage to the now rather ubiquitous corporate breach scenarios pressuring organizations across the globe. Better security comes at a price too, and in the absence of significant security measures, anti-patterns quickly evolve to give miscreants ample targets of opportunity. While the existence of indiscriminate internet scanning is largely accepted, automating the information gathering process in a meaningful and productive fashion entails a conscientious effort to arrive at a suitable combination of the best tools and techniques. In the recent past, fine-grained intelligence driven by tools like Nmap, and its supporting Nmap Scripting Engine (NSE) platform, has hinted at the success of open-source tools in dealing with footprinting, the active collection of infrastructure data points, and other interesting aspects beyond simple enumeration, a growing trend in the identification of exposed assets and applications. In this blog post, we'll examine the Nmap Automator project, as it automates and extends the classification and vulnerability assessment stages of targeted infrastructure via the traditional triggers provided by Nmap's most prominent features, which include port scanning and similar methods. Introducing such a tool would not be complete without practical examples and potential use cases, including some instructions to deliver a seamless setup experience. Let's take a peek. What is Nmap Automator? Nmap Automator is essentially a POSIX-compatible shell script that automates the process of target discovery, enumeration, and reconnaissance by leveraging Nmap commands in a unique way. 
Normally, mastering a tool like Nmap requires not only the ability to memorize and apply a myriad of command-line arguments, or flags, but also the capacity to transform a wealth of output into a consumable product; consequently, conducting scanning activities at that level of detail can easily take several days (if not weeks) to complete. Depending on certain host and network conditions, Nmap Automator can deploy a full-range Nmap vulnerability scan and CVE identification sequence in well under 30 minutes. This may seem like a long time, but keep in mind that the scan types are designed to produce as much actionable intelligence about a target as possible. Additionally, Nmap Automator includes running instances of tools such as SSLscan, Nikto, and FFUF, all known throughout the bug bounty and pentesting ecosystems. In all, Nmap Automator supports the following scanning features: Network: Shows all live hosts in the host's network (approximately 15 seconds). Port: Shows all open ports (approximately 15 seconds). Script: Runs a script scan on found ports (approximately 5 minutes). Full: Runs a full-range port scan, then runs a thorough scan on new ports (approximately 5-10 minutes). UDP: Runs a UDP scan (requires sudo) (approximately 5 minutes). Vulns: Runs a CVE scan and Nmap vulnerability scan on all found ports (approximately 5-15 minutes). Recon: Suggests recon commands, then prompts to automatically run them. All: Runs all the scans (approximately 20-30 minutes). For example, the -Network option allows you to provide a single IP address and discover live hosts in the same subnet: Nmap automation on remote hosts via Nmap Automator can be achieved with the help of the -r/--remote flag. Known as Remote Mode, this feature (still under development) was designed to harness POSIX shell commands without relying on any external tools. 
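For a sense of what an invocation looks like, the sketch below assembles a typical Nmap Automator command line. The `-H`/`-t` flag names and the `Full` scan type follow the project's README at the time of writing; verify them against your installed copy, as they may differ between versions:

```python
# Assemble a typical Nmap Automator invocation; flag names are taken
# from the project's README and may vary between versions.
host, scan_type = "10.10.10.10", "Full"
cmd = ["./nmapAutomator.sh", "-H", host, "-t", scan_type]
print(" ".join(cmd))  # ./nmapAutomator.sh -H 10.10.10.10 -t Full
```

Appending `-r` to the same command line would request the Remote Mode behavior described above.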
Installing Nmap Automator Many of the ethical hacking tools required by Nmap Automator should already be part of popular distributions such as Kali Linux and Parrot OS. Besides S...

    Experience Upgrade, SecurityTrails Product Redesign

    Play Episode Listen Later Sep 9, 2021 2:23


    Note: The audio version doesn't include code or commands. Those parts of the post can be seen in the text version. Last week we announced the general release of SecurityTrails SQL. And today we're excited to let you know that we've been working on improving the overall user experience of many of our products with a new, unified design. A new, unified look and feel One of the first changes you'll notice is the updated sign-in interface, distinguished by our brand new light-violet color scheme. As you can see, these before-and-after screenshots indicate some truly helpful improvements. These include significant changes to the colors we use as well as the presentation of our layout design. Surface browser As part of the new Surface browser palette, we're using high-contrast colors for the text, rounded borders and faded-style items on the left menu, and an organized approach for the rest of the pages that makes them easy on the eyes and a breath of fresh air for your research tasks. We're also introducing a new Dark theme that can be turned on using a switch located near the footer: This dark mode will help reduce eye strain by cutting down on direct light exposure. Some of our users already love it, finding interfaces and text easier to visualize in dark mode than on a bright white screen. ASR version 2 When it comes to Attack Surface Reduction, this redesign is a perfect fit for users seeking to locate the most important data in the blink of an eye, as you can see from the main ASR version 2 interface: All ASR version 2 pages also use the same style by default, highlighting information from your organization in a simple and appealing way. And it looks awesome in any screenshot you want to put in your report. Console The Console is also enhanced with rounded borders on corners and buttons, and our light-violet backgrounds can be found on most menu areas as well, resulting in a dashboard that's solid, attractive and easier to navigate than ever before. 
Graphics on the console also look amazing with this new color palette, making interaction with stats, numbers and general text simpler and more straightforward. Our free web app For those who love to enrich their security research by using our free app, we're happy to say that it has also received the same design updates, including the dark mode switch located on the lower left: We would love to hear your feedback regarding the new UX changes for our products, console and free app. Get in touch with us, and most importantly, stay tuned—because we're preparing even more exciting UX improvements!

    Experience Upgrade SecurityTrails Product Redesign

    Play Episode Listen Later Sep 9, 2021 2:23


    Note: The audio version doesn't include code or commands. Those parts of the post can be seen in the text version. Last week we announced the general release of SecurityTrails SQL. And today we're excited to let you know that we've been working on improving the overall user experience of many of our products with a new, unified design. A new, unified look and feel One of the first changes you'll notice is the updated sign-in interface, distinguished by our brand new light-violet color scheme. As you can see, these before-and-after screenshots indicate some truly helpful improvements. These include significant changes to the colors we use as well as the presentation of our layout design. SurfaceBrowser As part of the new SurfaceBrowser palette, we're using high-contrast colors for the text, rounded borders and faded-style items on the left menu, and an organized approach for the rest of the pages that makes them easy on the eyes and a breath of fresh air for your research tasks. We're also introducing a new Dark theme that can be turned on by using a switch located near the footer: This dark mode helps reduce eye strain by cutting down on direct light exposure. Some of our users already love it, finding interfaces and text easier to read in dark mode than on a bright white screen. ASR version 2 When it comes to Attack Surface Reduction, this redesign is a perfect fit for users seeking to locate the most important data in the blink of an eye, as you can see from the main ASR version 2 interface: All ASR version 2 pages also use the same style by default, highlighting information from your organization in a simple and appealing way. And it looks great in any screenshot you want to put in your report. Console The Console is also enhanced with rounded borders on corners and buttons, and our light-violet backgrounds can be found on most menu areas as well, resulting in a dashboard that's solid, attractive and easier to navigate than ever before. 
Graphics on the console also look amazing with this new color palette, making interaction with stats, numbers and general text simpler and more straightforward. Our free web app For those who love to enrich their security research by using our free app, we're happy to say that it has also received the same design updates, including the dark mode switch located on the lower left: We would love to hear your feedback regarding the new UX changes for our products, console and free app. Get in touch with us, and most importantly, stay tuned, because we're preparing even more exciting UX improvements!


    Intrusion Prevention Systems: Definition, Types, IDS vs IPS

    Play Episode Listen Later Sep 7, 2021 12:24


    Every organization with a cybersecurity strategy has the goal of stopping cyber threats before they become real attacks and cause damage. Because of this, most cybersecurity strategies have turned to more proactive approaches, rather than relying only on reactive security measures. Vulnerability assessment, the use of cyber intelligence feeds, attack surface management and other processes are all used to prevent threats from becoming security breaches. Organizations have also turned to solutions that detect and prevent cyberattacks by monitoring early indicators of attack in network traffic. After all, nearly all types of cyber threats use network communications as part of the attack. The concept of monitoring network traffic to detect anomalous activity has been around for decades, with intrusion detection systems (IDS) serving as the go-to solution for this purpose. As networks and their threats advanced, so did the need for a solution that could combine detection with threat response. The technology that resulted is the intrusion prevention system. What are intrusion prevention systems (IPS)? If we go back to the analogy of an IDS being a security system in your house, then an IPS would be the security guard who can actively put a halt to incoming threats. While the security system is important in that it can alert the guard to a potential threat, it can't take any action against it. An intrusion prevention system (IPS) is a network security solution that continuously monitors the traffic going in and out of an organization's network. It looks for potentially malicious activity and takes action against it by alerting on it, stopping it or dropping it. Since exploits can be executed rather quickly after a malicious actor gains initial access to a network, intrusion prevention systems carry out an automated response to a suspected threat, based on pre-established rules. 
IPS is used as one of the measures in an incident response plan, and in terms of technology, organizations use IPS to identify insider threats that can result in internal security policy issues or compliance violations. IPS solutions shine the most, though, when it comes to preventing external cyber threats. Some of the most common network threats an IPS is designed to prevent are: DDoS attacks. Computer viruses. Brute force attacks. Zero-day exploits. Buffer overflow attacks. ARP spoofing. IPS has become one of the foundational building blocks of many organizations' security strategies and infrastructures. Evolution of IPS In the early days of IPS technology, few organizations used it, owing to several concerns. An IPS sat inline between an organization's network and the internet, and because early IPS systems relied on matching observed network traffic against a signature database, the process had the potential to actually slow down network traffic, which certainly isn't ideal. Additionally, there were concerns over IPS blocking potentially harmless traffic; at that time, an IPS would immediately block anomalous traffic whenever it was detected. Organizations would then run the risk of blocking traffic from actual prospects (also not ideal). Advancements in IPS, which led to what is commonly referred to as next-generation IPS, helped close these gaps with faster deep-packet inspection, machine learning for detection, and sandboxing and/or emulation capabilities. Today, we commonly see IPS as part of next-generation firewalls (NGFW). This gives the IPS more advanced abilities to take action: it can block malicious traffic and malware, and reconfigure the firewall itself to block future traffic of the same kind. How does an intrusion prevention system work? 
The main goal of an intrusion prevention system is to quickly identify suspicious activity, log relevant information and attempt to block that activity while reporting it to the security team. The IPS sits at the perimeter of the network and provides active scanning ...

    Best Cybercrime Investigation and Digital Forensics Courses and Certifications

    Play Episode Listen Later Aug 31, 2021 17:06


    Cybercriminals target networks in the private and public sector every day, and their threat is growing. Cyberattacks are becoming more common and more menacing; in the public sector, they can compromise public services and put sensitive data at risk. It happens all the time in the private sector too: companies are attacked for trade secrets, customer information and other confidential details. Individuals aren't spared either and are falling victim to identity theft, fraud and various other types of cybercrime. For the prosecution of such acts, preserving and recovering digital evidence is absolutely critical. In the same way a "traditional" detective or law enforcement agent explores crimes in the physical or material sense, a cybercrime investigator delves into internet-based crimes. A cybercrime investigation is the process of investigating, analyzing and recovering digital forensics data from networks that have been attacked, in order to identify not only the perpetrators but their intentions as well. Cybercrime investigations are conducted by experts in criminal justice, national security and private security agencies, and cybersecurity investigators, experts and blue teams all play an indispensable role in preventing, monitoring, mitigating and investigating all types of cybercrime against networks, servers and data in private organizations, as well as home devices. Top 8 cybercrime investigation and digital forensics courses and certs Cybersecurity investigators are highly knowledgeable in numerous aspects of cybercrime, including their different types, legal aspects, methods of protection, necessary investigation techniques, and digital forensics. In order to deal with cybercrime incidents in the appropriate manner, from incident response to acquisition and preservation of evidence and advanced forensic analysis, cybercrime investigators require a combination of education and experience to be successful. 
For aspiring as well as experienced cybercrime investigators seeking additional knowledge and education, courses and certifications provide a wise option. Security professionals working in other fields can also benefit from acquiring cybercrime investigation and digital forensics skills. Some general information security certifications such as the Certified Information Systems Security Professional (CISSP) and Offensive Security Certified Professional (OSCP) can be highly useful for cybercrime and digital forensics investigators. Other, more specialized certifications and online courses in the field are also recognized in the industry. We rounded up our picks for the best cybercrime investigation courses and certifications, listed in no particular order. The list is more focused on vendor-neutral courses and certs so the well-known AccessData Certified Examiner certification didn't make this list. 1. The IFCI Expert Cybercrime Investigator (CCI) course The Cybercrime Investigator's course (CCI) is the flagship training program of The International Fraternity of Cybercrime Investigators available on Udemy. Also known as the IFCI-CCI, this course provides the foundational knowledge needed to kickstart a cybercrime investigator's career, and then some. The CCI covers every aspect of a cybercrime investigation including intrusion investigations, incident response, attack vector identification, and cybercrime profiling, with hands-on labs emulating real-world scenarios. The main goal of the course is to arm aspiring cybercrime investigators with the knowledge and skills needed to perform their work successfully. The course features 13 sections with more than 100 lectures and 15 labs. The sections include: Core concepts of computer forensics. Incident response and forensic acquisition. File deletion recovery. Email analysis. Internet activity analysis. Malware and network intrusion analysis. Dynamic malware analysis. 
By taking the CCI cybercrime investigation course, students will be able to respond to cybercrime incidents, ...

    Announcing SecurityTrails SQL: a Completely New Way to Access SecurityTrails Data

    Play Episode Listen Later Aug 25, 2021 2:51


    Note: The audio version doesn't include code or commands. Those parts of the post can be seen in the text version. Over the past few months, we've been perfecting our new SQL-like query language, one that will allow security teams to perform massive intelligence collection as well as automate their findings. Today, we're excited to announce the general release of this powerful new product: SecurityTrails SQL. By contacting our Sales team, you will be able to use SecurityTrails SQL integrated as a SecurityTrails API endpoint, inside Attack Surface Reduction, as well as in the SurfaceBrowser SQL Explorer interface. What does SecurityTrails SQL look like? SecurityTrails SQL will empower you to collect data about any host, including domains, DNS records, Whois, SSL, HTTP, and the organization it belongs to, along with detailed IP data. For your convenience, this tool also supports a wide range of SQL operators, and with it we provide full documentation complete with examples and technical assistance. Additionally, SurfaceBrowser SQL Explorer users can enjoy our SQL editor, which allows you to run queries, copy data, format and clear everything, as well as download results in JSON or CSV. How can I use SecurityTrails SQL? You can use SecurityTrails SQL to run different queries to get host, IP and SSL data. And how does SecurityTrails SQL look when used from SQL Explorer's visual editor? See it in action: Now let's look at some query examples you can run from your SecurityTrails API. The following query will expose all subdomains from microsoft.com: In the same way, and by merely changing the SQL query, you can fetch different data. To find all exposed development areas of subdomains ranked by Open PageRank, run: To locate self-signed SSL certificates, using GE.com for this example, run this query: You can find more SSL-based examples in the SQL Explorer: SSL Certificate Scraping Showcase blog post. 
If you want to find domains that redirect to a certain host (here we used SecurityTrails.com), use: Explore even more ways to query our HTTP header data inside SQL Explorer. To find IPs with SSL certificates that contain a specific hostname, like Nike.com for instance, run: There is much more functionality to be discovered once you start playing with SecurityTrails SQL. Security teams can use it to: Automate detection of security issues. Map your entire digital infrastructure. Find critical SSL data. Detect open services. Improve phishing detection. Prevent data breaches. Find vulnerable operating systems and services. Are you ready to explore SecurityTrails SQL? Find out how SecurityTrails SQL can help you find critical data from any organization within seconds, and take your recon and app automation to the next level!
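Since the audio version omits the actual queries, here is a hedged sketch of what driving an SQL-style API endpoint programmatically might look like. The endpoint URL, APIKEY header, and query text below are illustrative assumptions, not the documented SecurityTrails SQL interface; consult the product documentation for the real syntax:

```python
# Build a POST request carrying an SQL-style query to a hypothetical API
# endpoint. The endpoint URL, APIKEY header, and query syntax are assumptions
# for illustration only; only the standard library is used.
import json
import urllib.request

API_URL = "https://api.securitytrails.com/v1/query"  # assumed endpoint

def build_request(query: str, api_key: str) -> urllib.request.Request:
    """Return a ready-to-send request object; nothing is sent here."""
    body = json.dumps({"query": query}).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={"APIKEY": api_key, "Content-Type": "application/json"},
    )

# Example usage (not executed): a query in the spirit of the post's
# "all subdomains from microsoft.com" example.
# req = build_request(
#     "SELECT hostname FROM hosts WHERE hostname LIKE '%.microsoft.com'",
#     "YOUR_KEY")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

Keeping the query as a plain string like this is what makes an SQL-style interface easy to automate: the same request-building code serves every example in the post.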

    Blast Radius: Mapping, Controlling, and Exploiting Dynamic Self-Registration Services

    Play Episode Listen Later Aug 24, 2021 14:18


    Vendors such as Datto, GeoVision, Synology and others leverage and depend on self-registered services for their products. These devices frequently leak critical data, or suffer from insecure design decisions (unintentional or even intentional) and application flaws. Through insecure network design and installation practices by vendors, software makers and integrators, they can be easily mapped, discovered and attacked by cybercriminals. For our new blog series Blast Radius, security professionals, researchers and experts deep dive into different attacks and vulnerabilities, explore how they can impact the entire internet ecosystem, and examine what they mean for organizations of all sizes, across all industries. To talk about the emergent properties of self-registration services bundled with devices provided by major manufacturers and the implications of their insecure design, we are joined by Ken Pyle. Ken Pyle is a partner at CYBIR, specializing in exploit development, penetration testing, reverse engineering, and enterprise risk management. A highly rated and popular lecturer, he's presented groundbreaking research at major industry events such as Defcon, ShmooCon, SecureWorld, HTCIA International, and others. He's also discovered and published numerous critical software vulnerabilities in products from a wide range of companies including Cisco, Dell, Netgear, SonicWall, HP, Datto, Kaseya, and ManageEngine, earning him multiple Hall of Fame acknowledgements for his work. Ken has been publishing DNS work and vulnerability research privately for a number of years. He began showing some of his work in the web application, DNS and IPv4 space at different cybersecurity conferences, with a focus on fixing sets of problems that had already been deemed unfixable. 
For our latest installment of Blast Radius, Ken will share a continuation of his work, disclosing how the PKI, non-repudiation and encryption design of entire vendor ecosystems is flawed, and how you can use popular IoT devices and services to de-anonymize anonymity networks and map internal networks via poorly managed cloud security features. Additionally, he'll reveal how he gained arbitrary control of firewall rules across millions of devices and multiple vendors. The emergent properties of dynamic DNS scraping At Defcon 29, I presented a number of new attacks, reconnaissance types, exploits, and emergent properties of self-registration services that come with devices provided by major manufacturers such as Datto. In the lead-up to Defcon, I had been publishing quietly on the subject, attempting to preempt the issues and alert companies to the exposures. I have been a really big fan of SecurityTrails all the way back to DNS Trails. I find the engine and dataset simple to carve and highly accurate, and many emergent properties can be easily identified using the site and tools. In this write-up, we're going to discuss the emergent properties of passive, historical dynamic DNS registrations and how these can be easily exploited. Mass mapping/arbitrary control of firewall rules One of the many awesome features of SecurityTrails is the ability to quickly and easily search data in weird ways no one has thought of. For example, a search for RFC 1918 addresses via ST will turn up some pretty interesting results: Searching for RFC 1918 addresses, specifically those which MSPs, IT folks, or even your home routers distribute, will allow you to very quickly start identifying internal networks and their firewall rules. You'll notice I've highlighted a few interesting zones: remotewd.com, wd2go.com, duckdns.org, dattolical.net. We'll get back to those. 
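RFC 1918 reserves three private IPv4 blocks (10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16). When carving a dataset for the internal addresses mentioned above, membership is easy to test with Python's standard library; a minimal sketch:

```python
# Test whether an address falls inside one of the three RFC 1918 private
# IPv4 blocks, using only the standard library.
import ipaddress

RFC1918_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return addr.version == 4 and any(addr in net for net in RFC1918_NETS)
```

Filtering historical DNS results through a check like this quickly isolates the records that point at internal infrastructure rather than public hosts.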
In order for many of these devices to register or maintain a record on the manufacturer's dynamic DNS regime, they must consistently beacon or "check-in" every few minutes. This allows the manufacturer (and you) to find the device easily, track it over network changes, and allow it to update and license i...

    Blast Radius: Misconfigured Kubernetes

    Play Episode Listen Later Aug 17, 2021 7:10


    Note: The audio version doesn't include code or commands. Those parts of the post can be seen in the text version. Recognized as a leader in the container market, Kubernetes is an open source microservices cluster manager used by millions of companies worldwide. Bolstering its popularity is its considerable ability to manage container workloads, as it allows for the easy deployment of numerous servers with appropriate scaling as they grow. To show you just how dominant Kubernetes truly is, reports show that of the more than 109 tools used to manage containers, over 89% of companies use various Kubernetes versions. Not a bad statistic for a technology that's only eight years old! And as Kubernetes usage grows, so does interest in, and skepticism about, the security of the platform. Companies of many different types, from small developers to big-name brands, use Kubernetes to help deploy systems both easily and in a uniform fashion. And by far the most common cause of all Kubernetes-related security incidents is a familiar threat in the cybersecurity field: misconfigurations. Roughly seven out of ten companies report having detected a misconfiguration in their Kubernetes environment. For our new blog series Blast Radius, security professionals, researchers and experts deep dive into different attacks and vulnerabilities, explore how they can impact the entire internet ecosystem, and examine what they mean for organizations of all sizes, across all industries. As Kubernetes grows in popularity, so do the security concerns around its usage. To talk more about the blast radius of misconfigured Kubernetes, we are joined by Robert Wiggins, better known as Random Robbie. Robbie was featured on our blog in the past when he showed us all the ProTips on Bug Bounty Hunting that he has up his sleeve. 
Active in the security and bug bounty community, Robbie shares with us his research and techniques for finding misconfigured Kubernetes, and elaborates on the different types of impact he's seen them have on various companies. How many misconfigured Kubernetes are there? On average, there are around 800 misconfigured Kubernetes servers around the world exposing secrets and other fun data. These systems are generally connected to a lot of internal cloud systems, so if they're misconfigured they can handily grant an attacker access to a lot of sensitive information. Security incidents involving misconfigurations in Kubernetes are a serious matter. As cited by DivvyCloud in their 2020 Cloud Misconfigurations Report, 196 separate data breaches were the result of cloud misconfigurations between January 1, 2018 and December 31, 2019. More than 30 billion records were exposed in these data breaches, creating $5 trillion in losses over that period. How to find misconfigured Kubernetes servers Also on average, around 400 systems are exposed via Shodan on port 443, and many more on port 8080. The ones on port 8080, however, generally seem to have been attacked and have an XMR miner on them. Many of the attacked or infected servers have been up for a while, with a large number of them appearing to be located in China. To find exposed Kubernetes systems, you can search via Shodan using the search term http.html:/apis/apiextensions.k8s.io for any HTTP 200 response. That response should give you a list of API endpoints, and you can browse to /api/v1/secrets to uncover all of the server's secrets. Here's an example: By running the following bash command you can see which tokens have permission to gain access to the pods. You should now see an output showing you the pods. Once you've found the pod you wish to access, you can run the following command to gain access to that pod, then explore it. 
To confirm it has access to the pod, it should dump out something like this: Impact of misconfigured Kubernetes While scanning and learning about Kubernetes three years ago, I found a Kubernetes server that belonged to Snapchat. This server was so full of se...
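The /api/v1/secrets endpoint mentioned above returns a standard Kubernetes SecretList object in JSON. Here is a minimal sketch of pulling the namespace/name pairs out of such a response; the sample document is invented, and in a real assessment the JSON would come from the exposed API server over HTTPS:

```python
# Parse a Kubernetes SecretList response (as served at /api/v1/secrets on a
# misconfigured, anonymously readable API server) and list namespace/name pairs.
import json

def list_secret_names(secret_list_json: str) -> list[str]:
    doc = json.loads(secret_list_json)
    return [
        f"{item['metadata']['namespace']}/{item['metadata']['name']}"
        for item in doc.get("items", [])
    ]

# Invented sample response illustrating the shape of a SecretList:
SAMPLE = '''{"kind": "SecretList", "apiVersion": "v1",
 "items": [{"metadata": {"name": "db-creds", "namespace": "default"}}]}'''
```

Each item in a real response also carries a data field of base64-encoded values, which is exactly what makes an anonymously readable secrets endpoint so dangerous.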

    From Chokeslams To Pwnage: Phillip Wylie Shares His Journey From Pro Wrestling To Offensive Security

    Play Episode Listen Later Aug 10, 2021 14:36


    Cybersecurity is a lucrative career, but knowing which path to follow to break into the industry can be daunting for fresh graduates, enthusiasts, and those switching careers. Not to mention, actually taking the plunge and getting into the industry, especially when coming from a non-traditional background, is a discussion in itself. Fortunately, many inspiring cybersecurity professionals break the illusion that you need to follow a specific path to have a career in this industry. Phillip Wylie has one of the more interesting and inspiring stories of going into cybersecurity and becoming a valued professional, mentor, and teacher. He has been part of the industry since the late 1990s but, before then, he was actually a pro wrestler and even wrestled a bear! Today, Phillip wrestles with issues of accessibility of cybersecurity education by teaching ethical hacking and web app pentesting at Dallas College and running The Pwn School Project, in addition to working as a Senior Cloud Penetration Tester. We jumped into the ring with Phillip to hear his backstory, which skills transferred from his pro wrestling career to cybersecurity, the importance of mentorship in the industry, and his advice to people who want to start on his path. SecurityTrails: You've been in offensive security for over a decade now, but you had an interesting career prior to that. We need to ask about your wrestling career, especially bear wrestling! Can you tell us a little about that part of your life? What was it like to wrestle a bear? Phillip Wylie: When I graduated high school, I did not know what I wanted to do for a career. As a powerlifter and a big muscular guy, my friends said I should be a professional wrestler. I liked the idea and pursued a wrestling career. I attended two different wrestling schools and wrestled for a couple of years. I got to wrestle some very well known wrestlers, including Mick Foley, who wrestled in Texas as Cactus Jack. 
I also wrestled two of the three Fabulous Freebirds tag team trio, The Road Warriors, The Rock 'n' Roll Express, The Midnight Express, and the Samoan SWAT Team, who happened to be related to Dwayne "The Rock" Johnson. I did not wrestle often enough to make a living, so my main job was working as a bouncer at a nightclub in my hometown of Denton, TX. The nightclub hosted special events on Sundays, and they decided to bring in a wrestling bear. The nightclub manager asked me to wrestle the bear to help boost attendance, since I was a pro wrestler and known by the nightclub patrons. Wrestling the bear was open to anyone who wanted to try. The bear was named Sampson and was a 750-pound brown bear. People always ask me who won, and the answer is the bear.

    ST: There is an interesting parallel between professional wrestling and offensive security. Are there any lessons you learned from wrestling and applied to your infosec career?

    Phillip: The biggest parallel I can draw between pro wrestling and offensive security is the social engineering part of offensive security. Wrestling has become known as sports entertainment since wrestling federations acknowledged that it was scripted. With social engineering, you become who you portray during pretexting, much like acting in pro wrestling. Real wrestling and martial arts also have parallels with offensive security: discovering an opponent's weaknesses and exploiting them is a great example, much like how you find vulnerabilities and exploit them. Focus on the learning. If you don't learn the subject, the certification or degree is not as useful. The degree or cert is nice to have, but if you don't know what you are doing, you will have a more difficult time.

    ST: How did you discover information security and what did your early days look like?

    Phillip: My first experience with information security was working for Intrusion, Inc.
in the early 2000s, providing technical support for Linux-based firewall and VPN appliances and a vulnerability scannin...

    Blast Radius: DNS Takeovers

    Play Episode Listen Later Aug 4, 2021 8:07


    Subdomain takeover remains a common, and destructive, vulnerability. On one hand, some types have practically disappeared: while plenty of dangling DNS records still exist, proof-of-concept creation for CNAME takeovers is nearly impossible due to restrictions put in place by major cloud providers (mainly AWS). On the other hand, in terms of severity, DNS (NS) takeovers are less common but create the highest impact. An NS subdomain takeover is similar in principle to other types of subdomain takeovers, but due to the major role that NS records play in internet traffic, and the possibility of attackers chaining multiple attack vectors, an NS takeover can lead to severe implications for the target organization.

    For our new blog series Blast Radius, security professionals, researchers and experts deep dive into different attacks and vulnerabilities, explore how they can impact the entire internet ecosystem, and examine what they mean for organizations of all sizes, across all industries. To talk about the growing danger of DNS takeovers, we are joined by Patrik Hudák. Patrik has been a regular on our blog, sharing his latest research on subdomain takeovers, and has been a crucial resource for many in the bug bounty community. He began his research by studying other takeover methods and the different tools used to execute them before discovering the impact of DNS takeovers. While not that common, he has achieved many successes in bug bounty hunting with this particular vulnerability.

    How companies can be affected

    When a company hosts its DNS zones on a third-party DNS provider (such as AWS Route 53), there is a possibility of DNS takeover (also known as NS takeover). Such a takeover happens when the DNS zone is removed from the DNS provider but the DNS delegation link stays in place. If that happens, an attacker can register the same DNS zone on the same provider and host arbitrary records for that zone.
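The removed-zone scenario described above can be sketched as a simple classification step. This is a hypothetical illustration, not a tool from the article: it assumes you have already sent an SOA query for the zone directly to each delegated nameserver and collected the standard DNS response codes.

```python
# Hypothetical sketch: flag delegated nameservers that no longer host a zone.
# A provider that has dropped the zone typically answers REFUSED or SERVFAIL
# even though the parent zone still delegates to it -- the classic window
# for an NS (DNS) takeover.

def classify_delegation(zone: str, rcodes_by_ns: dict) -> list:
    """Return nameservers that the parent delegates `zone` to, but which
    no longer serve it.

    rcodes_by_ns maps nameserver hostname -> rcode string from an SOA
    query sent directly to that nameserver ("NOERROR", "REFUSED",
    "SERVFAIL", "NXDOMAIN", ...).
    """
    suspicious = []
    for ns, rcode in rcodes_by_ns.items():
        if rcode in ("REFUSED", "SERVFAIL"):
            suspicious.append(ns)
    return suspicious

# Example: the parent delegates example.com to two provider nameservers,
# but the zone was deleted from the second one (hostnames are made up).
risky = classify_delegation("example.com", {
    "ns-1.awsdns-00.com": "NOERROR",
    "ns-2.awsdns-01.net": "REFUSED",
})
print(risky)  # ['ns-2.awsdns-01.net']
```

In practice the rcodes would come from a resolver library or from `dig SOA example.com @nameserver`; keeping the classification separate makes the check easy to audit.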
    For more technical information, please refer to the following link:

    Although this seems to be only a theoretical attack, there are numerous cases where it has actually occurred, even at large companies. Everybody who uses a third-party DNS provider is affected if the process for creating and removing DNS zones is incorrect. Thus, companies of any size should audit their internal processes for such events.

    There are, however, trickier scenarios. Since DNS uses multiple nameservers per zone for redundancy, sometimes only a subset of those nameservers is affected by DNS takeover. Let's say that domain "example.com" uses two nameservers: "dns.existingdomain.com" and "dns.nonexistingdomain.com". The latter, as the name suggests, does not exist and thus cannot correctly serve requests for the "example.com" zone. From the usability perspective, there is no downtime: every DNS request made for "example.com" is served by "dns.existingdomain.com", since DNS falls back quietly. In this scenario, an attacker can exploit the non-existing nameserver simply by registering the domain name (if it is available), which would make them an authoritative nameserver for "example.com". During a DNS request, a round-robin mechanism chooses which nameserver to query; in other words, there is a 50% chance that a request for the "example.com" zone hits the malicious nameserver. If it does, the attacker can serve arbitrary DNS results with a high TTL, which would sit quietly in caches for a long time.

    Implications of DNS takeover

    Firstly, DNS takeover is not that different from other types of takeover, such as CNAME takeover. One difference is that a DNS takeover can cover multiple subdomains with different domain names. Since the attacker controls the DNS zone, they can create convincing FQDNs for phishing or other malicious activity. Let's say that "sub.example.com" is affected by the DNS takeover.
An attacker might take it further and create a new subdomain called "login.sub.e...
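The "non-existing nameserver" scenario above can also be sketched in code. This is an illustrative sketch only: the `resolve` callback is an assumption (injected so the example stays self-contained), and in practice you might back it with dnspython or `dig` lookups against each nameserver's hostname.

```python
# Hedged sketch: flag delegated nameservers whose own hostname fails to
# resolve (NXDOMAIN). A failing nameserver that the parent still delegates
# to is a candidate for takeover by registering its domain, as described
# in the article.

def find_dangling_nameservers(nameservers, resolve):
    """Return nameservers whose hostname does not resolve.

    `resolve(hostname)` should return True if the name resolves and
    False on NXDOMAIN; it is injectable here so the sketch runs without
    network access.
    """
    return [ns for ns in nameservers if not resolve(ns)]

# Toy resolver mirroring the article's example zone.
EXISTING = {"dns.existingdomain.com"}
dangling = find_dangling_nameservers(
    ["dns.existingdomain.com", "dns.nonexistingdomain.com"],
    resolve=lambda host: host in EXISTING,
)
print(dangling)  # ['dns.nonexistingdomain.com']
```

A real audit would also check whether the registrable domain behind each dangling nameserver is actually available for purchase, since that availability is what turns a quiet misconfiguration into a takeover.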

    How I Lost the Securitytrails #ReconMaster Contest, and How You Can Win: Edge-Case Recon Ideas

    Play Episode Listen Later Jul 29, 2021 12:34


    Blast Radius: Apache Airflow Vulnerabilities

    Play Episode Listen Later Jul 28, 2021 9:09



    AssetFinder: A Handy Subdomain and Domain Discovery Tool

    Play Episode Listen Later Jul 22, 2021 5:52

