Each webinar features an SEI researcher discussing their research on software and cybersecurity problems of considerable complexity. The webinar series is a way for the SEI to accomplish its core purpose of improving the state of the art in software engineering and cybersecurity and transitioning this work to the broader community.
In this webcast, Carol Woody presents the scope of a cybersecurity engineering strategy for DevSecOps and explains why sharing information with direct and indirect stakeholders is critical.
In this webcast, Brett Tucker, Ryan Zanin, and Abid Adam discuss the critical factors risk executives need to succeed, not only in protecting critical assets but also in taking advantage of new opportunities created by the pandemic.
Zero Trust Architecture adoption is a challenge for many organizations. It isn't a specific technology to adopt; instead, it's a security initiative that an enterprise must understand, interpret, and implement. Enterprise security initiatives are never simple, and their goal to improve the enterprise's cybersecurity posture requires the alignment of multiple stakeholders, systems, acquisitions, and exponentially changing technology. This alignment is always a complex undertaking and requires cybersecurity strategy and engineering to succeed. What attendees will learn: • The purpose of a Zero Trust Architecture • Zero Trust Architecture components • How to think about Zero Trust Architecture transition
In its 2021 report, the National Security Commission on AI (NSCAI) wrote, "The impact of artificial intelligence (AI) on the world will extend far beyond narrow national security applications." How do we move beyond those narrow AI applications to gain strategic advantage? Join Dr. Matt Gaston, Director of the SEI AI Division, Dr. Steve Chien, NSCAI Commissioner and Technical Group Supervisor of the Artificial Intelligence Group and Senior Research Scientist in the Mission Planning and Execution Section at the Jet Propulsion Laboratory, California Institute of Technology, and Dr. Jane Pinelis, Chief of Test and Evaluation of AI/ML at the DoD Joint AI Center (JAIC) for a discussion on scaling AI. Carnegie Mellon University is proud to partner with NSCAI in this discussion, part of an ongoing series of virtual panel discussions to realize the future of AI. What attendees will learn: • NSCAI recommendations for scaling AI • How AI Engineering can scale the impact of mission capabilities • Where to find leading AI Engineering practices • Challenges and opportunities for the future of AI
Self-driving cars are being tested in our cities, bespoke movie and product recommendations populate our apps, and we can count on our phones to route us around highway traffic... Why, then, do most AI deployments fail? What is needed to create, deploy, and maintain AI systems we can trust to meet our mission needs, particularly for defense and national security? The SEI recently launched an AI Division to ensure that our researchers are working to address these hard questions. In this question and answer session, Dr. Rachel Dzombak and Dr. Matt Gaston share their points of view on what AI engineering is today and where the field is going. Learn about building AI better with the nascent discipline of AI Engineering and how the SEI plans to leverage the new AI Division to advance human-centered, robust and secure, and scalable AI systems. What attendees will learn: • How to find AI Engineering lessons in your own AI practices • What's needed to build an AI Engineering mindset on your team • Leading AI Engineering practices • How to engage with a national initiative dedicated to advancing the discipline of AI Engineering • How the SEI is growing our portfolio of work in the AI Division
Misuse of authorized access to an organization's critical assets is a significant concern for organizations of all sizes, missions, and industries. We at the CERT National Insider Threat Center have been collecting and analyzing data on incidents involving malicious and unintentional insiders since 2001, and have worked with numerous organizations across government, industry, and academia to develop and validate controls and best practices to address these concerns. In this webcast, as a part of National Insider Threat Awareness Month, our experts provide an overview of the ongoing research in this area, and answer questions about how the threat landscape continues to evolve, and what organizations can and should do to address insider threats. What Attendees Will Learn: • The complexities of insider risk management and strategies for effectively balancing insider risk management program operations across the dimensions of people, organization, and management • The latest findings from the CERT National Insider Threat Center's research into the different types of insider incidents – motivations, vulnerabilities, and common attack paths • The changing landscape of insider threat and a look into the future • The newest best practices and other resources that are available through the CERT National Insider Threat Center
The software development lifecycle has changed a lot and continues to evolve. Almost every company now is a software company. Meeting business needs and adapting to the speed of the market for new features requires an agility mindset and continuous-delivery techniques throughout application-development lifecycles. You have software development and deployment questions, such as: Where do I start? How do I establish good continuous integration/deployment practices? What about security? Hasan has the answers! SEI's Hasan Yasar hosts a software development question and answer session. What attendees will learn: • how DevSecOps and Agile are generating more and more questions in DoD environments • where software development is heading • continuous-delivery techniques throughout application-development lifecycles • why constant interaction between developers and information security teams is needed throughout the entire SDLC
In a DevSecOps world, the software supply chain extends beyond the libraries on which developed software depends. In this webinar, we will look at the SolarWinds incident as a worst case that exemplifies the breadth of the software supply chain issues confronting complex DevSecOps programs. We will explore the architectural aspects of DevSecOps that the software supply chain affects, the areas that require attention, and potential mitigations to detect and respond to potential incidents. What attendees will learn: • The software supply chain issue is broad and impacts multiple aspects of DevSecOps • Programs need to be aware of how the software they leverage presents risks • Mitigation strategies must be put in place to address potential issues at the architectural level
How do you teach cybersecurity to a middle school student? To a soldier? To some of the best hackers in the country? How do you evaluate all of these audiences’ skills? Cybersecurity training has been an ongoing challenge for decades. The key to making the best use of your training dollar is to craft training that matches your audience’s needs and engages them in a meaningful manner. When you create an experience so enthralling that your audience is logging in on nights and weekends just to continue participating, the value of immersive training truly shines. Join us during this webinar as Rotem Guttman shares the lessons he’s learned over a decade of developing engaging, immersive training and evaluation environments for a variety of audiences. What attendees will learn: • How to make cybersecurity training engaging • What motivates different types of learners • The history of enhanced cybersecurity training at the SEI
Managing third-party relationships, such as those with public cloud service providers, requires a set of skills often unfamiliar to many technologists. These relationships are constructed on a foundation of verifiable trust. This requires managing the cybersecurity performance of third parties via contractual mechanisms rather than the traditional line-of-sight practices used within an organization. Chief among these mechanisms are service-level agreements (SLAs). Cybersecurity SLAs are vital to the success of third-party relationships and a core component of sound governance. What Attendees Will Learn • How to design and implement meaningful SLAs • How best to use SLAs to drive third-party cybersecurity performance • The limits of SLAs as a third-party risk management tool
The IEEE 2675 standard specifies technical principles and practices to build, package, and deploy systems and applications in a reliable and secure way. The standard focuses on establishing effective compliance and IT controls. It presents principles of DevOps, including mission first, customer focus, shift-left, continuous everything, and systems thinking. It also describes how stakeholders, including developers and operations staff, can collaborate and communicate effectively. The co-authors will discuss their personal experience applying the principles and practices in organizations. What attendees will learn: • DevOps for systems of systems • What the DevOps standard means • How to read the DevOps standard and apply it to your organization • Key DevOps principles and practices
According to recent estimates, around 85% of AI projects fail to move from conceptualization to implementation. Why are these failures happening, and how can we prevent them? AI engineering is an emergent discipline focused on developing tools, systems, and processes to enable the application of artificial intelligence in real-world contexts. The SEI is leading the national initiative to create an AI engineering discipline to operationalize human-centered, robust and secure, and scalable AI.
Privacy protection isn't just a compliance activity. It's also a key area of organizational risk that requires enterprise-wide support and participation; careful planning; and forward-leaning, data-driven controls. In this webcast, we highlight best practices for privacy program planning and implementation. We present strategies for leveraging existing capabilities within your organization to further advance privacy program building, and look ahead to emerging research and operational needs for modernizing privacy programs. What Attendees Will Learn • The state of the practice for privacy program planning and development • How to align privacy program planning and development activities with related efforts within your organization • Areas of ongoing and future research into privacy frameworks, privacy risk management, and privacy controls efficacy
There is some confusion about how the paradigms of DevOps and Digital Engineering fit together. In the case of software-intensive systems, we believe DevOps practices are an enabler for Digital Engineering in many forms. During this webcast, we introduced the relatively new concept of Digital Engineering and how we believe DevOps complements and enables many of the goals of Digital Engineering. What attendees will learn: • What Digital Engineering is • Who is using Digital Engineering • How implementing DevOps can enable expansion into Digital Engineering Speakers: Hasan Yasar and David Shepard
Many organizations struggle to apply DevSecOps practices and principles in a cybersecurity-constrained environment because programs lack a consistent basis for managing software-intensive development, cybersecurity, and operations in a high-speed lifecycle. We will discuss how an authoritative reference, or Platform Independent Model (PIM), is needed to fully design and execute an integrated DevSecOps strategy in which all stakeholder needs are addressed, such as engineering security into all aspects of the DevSecOps pipeline, including both the pipeline and the deployed system. We will discuss how a PIM of a DevSecOps system can be used to 1) specify the DevSecOps requirements to the lead system integrators who need to develop a platform-specific solution that includes the system and CI/CD pipeline, 2) assess and analyze alternative pipeline functionality and feature changes as the system evolves, 3) apply DevSecOps methods to complex systems that do not follow well-established software architectural patterns used in industry, and 4) provide a basis for threat and attack surface analysis to build a cyber assurance case that demonstrates that the software system and DevSecOps pipeline are sufficiently free from vulnerabilities and function only as intended.
The recent SolarWinds incident demonstrated the challenges of securing systems when they are the product of complex supply chains. Responding effectively to breaches and hacks requires a cross-section of technical skills and process insights. In this webcast, we explored the lifecycle of the SolarWinds activity and discussed both technical details and risk assessment to prepare organizations to defend against this type of incident. What attendees will learn: • Technical details regarding the SolarWinds vulnerabilities and exploits • Supply chain risk management principles required to reduce the risk of future incidents • Advice on the core operational capabilities required to respond to and recover from the SolarWinds hack Speakers: Matthew Butkovic and Art Manion
In this webcast, Grace Lewis and Ipek Ozkaya discuss the perspectives involved in the development and operation of ML systems. What attendees will learn: • Perspectives involved in the development and operation of ML systems • Types of mismatch that occur in the development of ML systems • Future work in software engineering for ML systems
Are the great programmers really 10 times faster than the rest? What does this difference in productivity even mean? What productivity distribution should we expect between professionals? How can we use this knowledge? In this webcast, we make the most of a large set of programmer training data using repeated measures to explore these questions. What attendees will learn: • For routine tasks, professional programmers have a narrower range of productivity than we first supposed, but almost half of the variation in individual productivity is noise, making programmer rankings suspect. • Rather than finding the “fastest” programmers, we should find competent people and give them the training and environment they need to succeed.
In this webcast, Carol Woody and Rita Creel discuss how cybersecurity engineering knowledge, methods, and tools throughout the lifecycle of software-intensive systems will reduce their inherent cyber risk and increase their operational cyber resilience.
This webcast illustrated where machine learning applications can be attacked, the means for carrying out such attacks, and some mitigations that can be employed. The elements in building and deploying a machine learning application are reviewed, considering both data and processes. The impact of attacks on each element is considered in turn. Special attention is given to transfer learning, a popular way to quickly construct a machine learning application. Mitigations to these attacks are discussed, along with the engineering tradeoffs between security and accuracy. Finally, the methods by which an attacker could get access to the machine learning system were reviewed. Speaker: Dr. Mark Sherman
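Because transfer learning reuses weights published by an external party, very little code separates a downstream application from that external artifact. The sketch below is a minimal, hedged illustration in Python using TensorFlow/Keras; the pretrained MobileNetV2 base and the ten-class head are illustrative choices, not taken from the webcast, but they show why the provenance of pretrained weights belongs in an attack-surface analysis.

```python
import tensorflow as tf

# Illustrative transfer-learning sketch (assumes TensorFlow is installed).
# The pretrained base below is downloaded from an external source; any
# tampering with those published weights flows directly into this model.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # reuse the external feature extractor unchanged

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # new task-specific head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```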
One of the primary drivers of the Department of Defense (DoD) Cybersecurity Maturity Model Certification (CMMC) is the congressional mandate to reduce the risk of accidental disclosure of controlled unclassified information (CUI). However, a full CMMC assessment can seem daunting to organizations in the Defense Industrial Base (DIB), and many might not know where to start. In this webcast, model architects Gavin Jurecko and Matt Trevors reviewed several steps for identifying an organization's CUI exposure in terms of its critical services and the assets that support them. This approach can help DIB organizations properly scope a CMMC assessment and contain the costs of protecting CUI.
Risk managers must often sift through the cacophony of demands for resources and advocacy to identify a diverse set of risks to include in their organization's risk register. Managers of cyber risk face this problem when trying to prioritize risks within the scope of their function, only to then turn to executives and justify the need for resources. OCTAVE FORTE, a new Enterprise Risk Management (ERM) process model developed by the CERT Division of Carnegie Mellon University's SEI, provides a scalable and standardized process that gives managers the policy guidelines and tools necessary for identifying risks and justifying the resources needed for the organization's proper response to them. Attendees of the OCTAVE FORTE webcast learn more about the new OCTAVE FORTE process and about a report, Advancing Risk Management Capability Using the OCTAVE FORTE Process, due this fall. More specifically, webcast attendees can expect to learn about the fundamental steps of the process and how they might apply them in their own organization.
Bringing computation and data storage closer to the edge, such as in disaster and tactical environments, introduces challenging quality attribute requirements. These include improving response time, saving bandwidth, and implementing security in resource-constrained nodes. In this webcast, we review characteristics of edge environments with a focus on architectural qualities. The characteristics and quality attribute concerns that we present are generalized from and informed by multiple customer engagements that we have undertaken in recent years. We present an overview of edge environments, in both military and civilian contexts, and discuss edge-specific challenges and how they can differ based on the context. We discuss architectural quality attributes that are well suited to address the edge-specific challenges, and provide examples of how each applies. A microservices architecture provides an opportunity to address several of the quality attribute concerns at the edge. Through a final consolidated scenario as an exemplar, we discuss how the presented qualities can be addressed using microservices. This webcast should be useful for anyone interested in better understanding the challenges of edge environments and learning about representative scenarios of work currently being done.
This webcast provided practical insights into how a Government Program Office can productively engage with a contractor using Agile and Lean methods. By reorienting the Agile Manifesto for a system acquisition context, we consider the distinction between oversight and insight, then briefly share examples of the impact of continuous delivery on technical review, requirements, testing, and system engineering.
Disruptive events and crises have the potential to irreparably harm your organization. The key to thriving, not simply surviving, in uncertain times is analysis of posture and preplanning. An organization can demonstrate operational resilience, when faced with both cyber and physical disruptions, if it focuses on the fundamentals and makes data-driven risk decisions.
The chasm between what academia researches and what industry uses in cyber is wide. By building mutually beneficial collaborations between the two, we can improve algorithms, datasets, and techniques so that they are applicable to real-world problems. Collaboration is also one of the best ways for industry to direct academic research toward solving current problems. Students and researchers should build solid partnerships with professionals early in their careers to be exposed to, and ground their work in, current industry challenges. This ultimately results in more research being transformed into practical solutions.
The concept of software architecture as a distinct discipline in software engineering started to emerge in 1990, although the idea had been around for much longer. Throughout my career in industry, and then in academia, I have witnessed the growth of software architecture and its evolution in leaps and bounds. I have also had the privilege to meet and work with many of the key contributors who, over 30 years, have shaped it into what we know today: a mature discipline. It has its theories, its standards, its processes and tools, and its place in schools' curricula. Industry and academia, although often on different tracks, and often ignoring each other, have been making more incremental progress every year and even branching out into subdisciplines or different schools of thought. But the obvious question is: are we done? What's next? Plateau, obsolescence, retirement? Not quite. New problems have arisen, driven by new technologies, and some old problems were never really fully solved, or their context has significantly evolved. In this brief talk, I'll reflect on these 30 years and, pulling out my crystal ball, speculate on potential developments ahead, from 4+1 different viewpoints.
SEI Chief Technology Officer Tom Longstaff interviewed Jeff Boleng, a senior advisor to the U.S. Department of Defense, on recent DoD software advances and accomplishments. They discussed how the DoD is implementing recommendations from the Defense Science Board and the Defense Innovation Board on continuous development of best practices for software, source selection for evaluating software factories, risk reduction and metrics for new programs, developing workforce competency, and other advancements. Boleng and Longstaff also discussed how the SEI, the DoD’s research and development center for software engineering, will adapt and build on this work to accomplish major changes at the DoD.
In an increasingly cloud-native world, application containers and microservice architectures are the next go-to for system architecture modernization. Like many technology choices, they come with trade-offs that have to be carefully considered. Will containers solve my business problems? How will certain responsibilities shift between my software teams? How do I maximize my cybersecurity posture? Will I need to re-train staff? What is my budget for infrastructure and prototyping? In this webcast, David Shepard and Aaron Volkmann discussed some of the potential pitfalls of using containers and provided some food for thought to software teams considering embarking on a journey to containers.
You may have a secure application today, but you cannot guarantee that it will still be secure tomorrow. Application security is a living process that must be constantly addressed throughout the application lifecycle. This requires continuous security assessments at every phase of the software development lifecycle (SDLC). The SEI has researched a continuous authorization concept—DevSecOps—that allows for constant interaction between developers and information security teams throughout the entire SDLC. This allows any authorizing officials, such as personnel on information security teams, to be in constant contact with developers as changes are made to existing code and as new features are added. From project conception, a system security plan should be integrated into the development platform as well as other environments, where both developers and information assurance (IA) staff can see the same artifacts for every development and deployment activity. This allows any changes to the system's security posture to be immediately identified and reported to the IA staff to evaluate and ensure that all security controls are adequately addressed. As a result, all security features can be verified and authorized, and eventually the organization will build a trusted culture among all stakeholders. Hasan Yasar and Eric Bram discussed how the continuous aspect of communication and collaboration among developers and information security teams reinforces core DevOps principles and allows developers to write code with a "secure" development mindset. Giving developers and DevOps engineers alike the tools and knowledge to excel in their roles not only leads to enhanced productivity but also to a more robust and secure application and environment.
In this webcast, CMMC architects Gavin Jurecko and Matt Trevors provide insight on how to evaluate and assess your organization’s readiness for meeting the practice requirements of CMMC Level 1. Learn more about the DIB CS Program at: https://dibnet.dod.mil/ Or email: osd.ncr.dod-cio.mbx.dib-cs-ia-program-registration@mail.mil CISA CRR Resources: https://www.us-cert.gov/resources CMMC Accreditation Body: https://www.cmmcab.org NIST SP 800-171A: https://csrc.nist.gov/publications/detail/sp/800-171a/final
Andrew Hoover and Katie Stewart discussed the DoD’s new CMMC program. They gave a brief overview of CMMC followed by a deep dive into the Process Maturity aspect of the model. The webcast provided insight into how organizations can prepare for CMMC.
Using the new OCTAVE FORTE approach, this webcast will help professionals and executives communicate risk concerns despite the cacophony and distraction posed by technical details and other organizational demands. Practical tips for developing and applying a risk appetite will also be discussed.
This webcast covered the implementation of an automated, continuous risk pipeline that demonstrates how cyber-resiliency and compliance risk can be traced to and from DevSecOps teams working in the SDLC at the program and project levels. It included the integration of asset management, DevSecOps tooling, a policy-to-procedure platform, and a risk management platform.
For more than two decades, Carnegie Mellon University’s Software Engineering Institute (SEI) has been instrumental in the creation and development of the field of software architecture. In our past webcasts, What Makes a Good Software Architect? (https://www.youtube.com/watch?v=CbLJC...) and What Makes a Good Software Architect (2019 Edition)? (https://www.youtube.com/watch?v=UFqys...), we have discussed what makes a good software architect. The range of knowledge and skills involved can be daunting, particularly given the pace of change in technologies and practices. In this session, a panel of architects will discuss their personal paths to becoming software architects and how they have helped others on that journey.
Artificial intelligence (AI) holds great promise to empower us with knowledge and scaled effectiveness. To harness the power of AI systems, we can—and must—ensure that we keep humans safe and in control. This session will introduce a new user experience (UX) framework to guide the creation of AI systems that are accountable, de-risked, respectful, secure, honest and usable.
In this webcast, as a part of National Cybersecurity Awareness Month, our experts will provide an overview of the concept of cyber hygiene, which bears an analogy to the concept of hygiene in the medical profession. Like the practice of washing hands to prevent infections, cyber hygiene addresses simple sets of actions that users can take to help reduce cybersecurity risks. Matt Butkovic, Randy Trzeciak, and Matt Trevors will discuss what some of those practices are, such as implementing password security protocols and determining which other practices an organization should implement. Finally, they discuss the special case of phishing—which is a form of attack that can bypass technical safeguards and exploit people’s weaknesses—and how changes in behavior, understanding, and technology might address this issue. What attendees will learn • Key findings from the CERT Division of the SEI, and the CERT-RMM team, in identifying commonalities among cyber practices and aligning them to CERT-RMM practices • The CERT Division’s 11 cyber hygiene areas, comprising 41 CERT-RMM practices that are paramount to every organization’s success • What organizations can do to change behavior, understanding, and technology to implement good cyber hygiene
Misuse of authorized access to an organization’s critical assets is a significant concern for organizations of all sizes, missions, and industries. We at the CERT National Insider Threat Center have been collecting and analyzing data on incidents involving malicious and unintentional insiders since 2001, and have worked with numerous organizations across government, industry, and academia to develop and validate controls and best practices to address these concerns. In this webcast, as a part of National Insider Threat Awareness Month, our experts provided an overview of the ongoing research in this area, and answered questions about how the threat landscape continues to evolve, and what organizations can and should do to address insider threats. What attendees will learn: • Key findings from the CERT National Insider Threat Center’s research into the different types of insider incidents – motivations, vulnerabilities, and common attack paths • How the insider threat landscape has changed over time, and what’s to come in the future • What organizations can do to deter, detect, and mitigate insider threats from employees and trusted business partners
Ritwik Gupta and Elli Kanal explain what ransomware is, what it can do to your computer, and how you can help prevent infections using the concept of cyber hygiene. Ransomware is a type of malware that encrypts the files on a computer, preventing the user from accessing them. The attacker then extorts the user by requesting a ransom in exchange for the key that unlocks the files. In this Cyber Talk episode, Ritwik Gupta and Elli Kanal explain how ransomware can infect a computer, and they discuss examples of how criminals have targeted single computers as well as large systems to explain what can happen when ransomware infects a system. To prevent ransomware attacks, Gupta and Kanal explain the concept of “cyber hygiene,” which refers to a set of basic practices that users can perform to decrease the risk of getting infected by malware. They stress the importance of developing an awareness for cyber hygiene, especially after the advent of the Internet of things, which has increased the number of devices that are susceptible to infection, including phones, cars, refrigerators, and more.
Rotem Guttman and Zach Kurtz explain what deepfakes are, how they work, and what kind of content it’s possible to create with current techniques and technology. The term “deepfake” refers to the use of machine learning to produce content for essays or to modify photos and videos. When it comes to photos and videos, the images are often so realistic that viewers are not able to tell that they are fake. In this Cyber Talk episode, Rotem Guttman and Zach Kurtz explain the kinds of machine learning that people use to create deepfakes, how they work, and what kind of content it’s possible to produce with current technology. Rotem and Zach also cover the techniques people use to create fraudulent content. Such techniques include using an actor to film a video and then replacing the actor’s face with someone else’s, as well as more advanced methods that can reproduce a person’s body movements, voice, speech, and facial expressions to make that person appear to say or do something that he or she did not actually say or do. Finally, they discuss the current limitations of these technologies and techniques, and they forecast advances that might occur in the coming years.
Rotem Guttman and April Galyardt describe how machine learning (ML) fits into the bigger picture of artificial intelligence (AI) and discuss the current state of AI. Currently, there is enormous interest in machine learning and artificial intelligence and in what these new technologies can create for the present and future. In this SEI Cyber Talk episode, Rotem Guttman and April Galyardt discuss how machine learning fits into the bigger picture of artificial intelligence. They describe some of the current applications for machine learning as well as some of its limitations, including examples of machines reaching unexpected results, producing miscalculations because of contextual changes in the data they analyze, and introducing bias into their calculations. The participants also discuss possible use cases for and changes to machine learning that could occur in the near- to mid-term future, including how machine learning might describe and explain its analyses so that users can take appropriate action or learn why the machine made certain decisions.
Recently, the Department of Homeland Security (DHS) released a warning about DNS hijacking and how website owners can protect themselves against it. To explain what DNS hijacking is and how adversaries use it to steal sensitive information, Elli Kanal and Daniel Ruef give a high-level overview of how DNS and network traffic work. They discuss how servers communicate with each other, what kind of information servers send to each other and why, and how adversaries can hijack that information. Finally, Elli and Daniel give some advice about what website owners might do to monitor their websites to make sure that adversaries have not hijacked their DNS.
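To make the monitoring advice concrete, here is a minimal, hedged sketch in Python (standard library only) that compares the addresses a resolver currently returns for a domain against a set of expected addresses. The domain name and addresses shown are hypothetical placeholders, not values from the episode, and a production monitor would query authoritative name servers and registrar records as well.

```python
import socket

# Expected A records for the monitored site (hypothetical placeholder values).
EXPECTED = {"203.0.113.10", "203.0.113.11"}

def check_dns(hostname: str) -> None:
    # Resolve the hostname and compare the returned addresses to the expected set.
    _, _, addresses = socket.gethostbyname_ex(hostname)
    unexpected = set(addresses) - EXPECTED
    if unexpected:
        print(f"WARNING: {hostname} resolves to unexpected addresses: {unexpected}")
    else:
        print(f"{hostname} resolves only to expected addresses.")

check_dns("www.example.com")  # hypothetical domain
```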
In 2011, the Office of Management and Budget (OMB) issued the “Cloud First” policy to reform federal information technology management, which required agencies to evaluate cloud computing options. In 2012, the DoD Cloud Computing Strategy evolved to identify the most effective ways for the department to capitalize on opportunities and take advantage of cloud computing benefits that accelerate IT delivery, efficiency, and innovation as an enterprise. In the years since, many cloud transition efforts in both federal agencies and the DoD have experienced significant issues. This webinar will address a few of the causes of these transition issues and identify some practices that will assist organizations as they plan to transition assets and capabilities to the cloud. The webinar will wrap up with a brief discussion of the 2019 Federal Cloud Computing Strategy, Cloud Smart, an updated cloud policy developed by OMB to improve cloud adoption for federal agencies.
As every software engineer knows, writing secure software is an incredibly difficult task. There are many techniques available to assist developers in finding bugs hiding in their code, but none are perfect, and an adversary only needs one to cause problems. In this talk, we’ll discuss how a branch of artificial intelligence called Natural Language Processing, or NLP, is being applied to computer code. Using NLP, we can find bugs that aren’t visible to existing techniques, and we can start to understand better what our computers are creating. While this field is still young, advances are coming rapidly, and we talk about the current state of the art and what we expect to see in the near future.
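As a small, hedged illustration of what treating code as a language means in practice, the Python sketch below breaks a code snippet into the kind of token stream that NLP-style models typically consume. It uses only the standard library and is not the technique described in the talk; the sample function is purely illustrative.

```python
import io
import tokenize

# Treat a small piece of source code as a sequence of tokens, the same
# first step many NLP-on-code approaches take before any modeling.
source = "def add(a, b):\n    return a + b\n"

for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    print(f"{tokenize.tok_name[tok.type]:<10} {tok.string!r}")
```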
Today's DoD software development and deployment is not responsive to warfighter needs. As a result, the DoD's ability to keep pace with potential adversaries is falling behind. In this webcast, panelists discuss potential enablers of and barriers to using modern software development techniques and processes in the DoD or similar segregated environments. These software development techniques and processes are commonly known as DevSecOps.
In 2017, the Software Engineering Institute (SEI) Webcast, What Makes a Good Software Architect? (https://www.youtube.com/watch?v=CbLJC...) explored the skills and knowledge needed by successful software architects. The architect’s role continues to evolve; in this webcast we revisited the question in the context of today’s role and responsibilities. We explored the challenges of working in an environment with rapidly evolving technology options, such as the serverless architecture style, and the role of the architect in Agile organizations using DevSecOps and Agile architecture practices to shorten iterations and deliver software faster.
Cybersecurity operators have to keep up with a world that's constantly changing, and they may lack the tools, time, and access to learn how to face actual threats. Simulated environments may not appear or behave the way they do in real life, and classroom-based approaches don’t provide the big picture. Throughout this talk, our team of researchers and engineers discuss the solutions we developed to help achieve a new level of realism in simulated cyber environments. Specific solutions include better Internet emulation, improved live network traffic, and human-like behavior of host systems. This set of tools recreates the real world in a controlled environment, providing the platform where cyber operators can enhance their security skills. Attendees will learn how to • enhance a bare-bones, cyber-emulation environment using open source tools • provide the best training possible by simulating your own networks so that employees can learn how to respond to real-world threats • get help from the SEI to implement these tools in your own environment
In this webcast, Lori Flynn, a CERT senior software security researcher, describes the new features in SCALe v3, a research prototype tool. SCALe v2, available on GitHub, offers a subset of features available in SCALe v3. Over the last three years, as part of alert classification and prioritization research projects she has led, her team has added new features to the (privately released) 2015 version of SCALe (v1) that are intended to assist with automated static analysis alert classification and advanced alert prioritization. Flynn invites people in other organizations to collaborate with her team, including testing SCALe v3 and providing sanitized audit archives. Collaborators also might have an opportunity to become involved in developing a version of SCALe that would be usable in production, not just as a research prototype tool.
In this first webcast in a two-part series, April Galyardt and Carson Sestili described what metadata is and what information can be gleaned from it. Social networks have become part of our daily lives. We browse, share, “like,” and generally communicate with friends using these tools every day. In the midst of all this, we rarely stop to consider how much information about ourselves we are freely handing over to the social network companies. This information, called “metadata,” contains an incredibly rich—and often frighteningly detailed—view of some of the most personal aspects of our lives. Specifically, the webcast discussed: • How metadata gets generated • How it can be used to uncover extensive personal information • Steps you can take to protect your privacy
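As a tiny, hedged illustration of the kind of inference metadata enables, the Python sketch below uses nothing but post timestamps (hypothetical values, not data from the webcast) and charts activity by hour of day; the quiet hours alone hint at a poster's sleep schedule and likely time zone.

```python
from collections import Counter
from datetime import datetime

# Hypothetical post timestamps gathered from a public profile (UTC).
post_times = [
    "2019-03-01T02:14:00", "2019-03-01T03:40:00", "2019-03-01T16:05:00",
    "2019-03-02T01:55:00", "2019-03-02T15:30:00", "2019-03-02T23:10:00",
]

# Tally activity by hour of day: long quiet stretches suggest sleep hours,
# which in turn hint at the poster's time zone and daily routine.
activity = Counter(datetime.fromisoformat(t).hour for t in post_times)
for hour in range(24):
    print(f"{hour:02d}:00 {'#' * activity.get(hour, 0)}")
```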
In this webcast, we explain how the technology works and what makes it fundamentally different from its predecessors. We discuss where it fits (and where it doesn't fit) and set a rubric to help you determine whether you need this technology.
In this webinar, Randy Trzeciak discusses a study to develop insights and risk indicators related to malicious insider activity in the banking and finance sector.