Russia's purpose in hacking and its concept of cyber war are conceptually and practically different from Western strategies. This talk will focus on understanding why Russia uses cyber tools to further strategic interests, how it does so (by examining two cases: the 2016 interference in the U.S. presidential election and NotPetya), and who does it. About the speaker: Dr. Richard Love is currently a professor at NDU's College of Information and Cyberspace and recently served as a professor of strategic studies at the U.S. Army War College's (USAWC) School of Strategic Landpower and as assistant director of the Peacekeeping and Stability Operations Institute from 2016 to 2021. From 2002 to 2016, Dr. Love served as a professor and senior research fellow at NDU's Institute for National Strategic Studies / WMD Center. He is an adjunct professor teaching law, international relations, and public policy at Catholic University and has taught law and policy courses at Georgetown, the Army Command and General Staff College, the Marshall Center, and the Naval Academy, among others. He holds a Ph.D. in International Relations and Security Studies from the University of New South Wales in Australia (2017), an LLM from American University School of Law (2002), and a Juris Doctor in Corporate and Security Law from George Mason University School of Law. His graduate studies in East-West relations were conducted at the Jagiellonian University in Krakow, Poland, and the University of Munich in Germany. His undergraduate degree is from the University of Virginia.
This talk explores how the principles and practices of the American public health system can inform and enhance modern cybersecurity strategies. Drawing on insights from our recent CRA Quad Paper, we examine the parallels between public health methodologies and the challenges faced in today's digital landscape. By analyzing historical responses to public health crises, we identify strategies for improving situational awareness, inter-organizational collaboration, and adaptive risk management in cybersecurity. The discussion highlights how lessons from public health can bridge the gap between technical cybersecurity teams and policymakers, fostering a more holistic and effective defense against emerging cyber threats. About the speaker: Josiah Dykstra is the Director of Strategic Initiatives at Trail of Bits. He previously served for 19 years as a senior technical leader at the National Security Agency (NSA). Dr. Dykstra is an experienced cyber practitioner and researcher whose focus has included the psychology and economics of cybersecurity. He received the CyberCorps® Scholarship for Service (SFS) fellowship and is one of ten people in the SFS Hall of Fame. In 2017, he received the Presidential Early Career Award for Scientists and Engineers (PECASE) from then President Barack Obama. Dr. Dykstra is a Fellow of the American Academy of Forensic Sciences (AAFS) and a Distinguished Member of the Association for Computing Machinery (ACM). He is the author of numerous research papers, the book Essential Cybersecurity Science (O'Reilly Media, 2016), and co-author of Cybersecurity Myths and Misconceptions (Pearson, 2023). Dr. Dykstra holds a Ph.D. in computer science from the University of Maryland, Baltimore County.
In today's rapidly evolving digital landscape, the lines between Information Technology (IT), Operational Technology (OT), and the Internet of Things (IoT) have become increasingly blurred. While these domains were once distinct, they now converge into a single, interconnected technology ecosystem—one that presents both unprecedented opportunities and critical security challenges. In this keynote, Michael Clothier, Chief Information Security Officer at Northrop Grumman, brings 30 years of global cybersecurity leadership to explore how organizations can rethink their approach to securing "technology" as a whole, rather than as separate silos. Drawing on his extensive experience across the U.S., Australia, Asia, and beyond—including securing mission-critical defense and aerospace systems, leading enterprise IT transformations, and integrating cybersecurity across diverse industries—Michael will examine the evolution of security challenges from historical, international, and cross-industry perspectives. Key discussion points include:
- From Air-Gapped to Always Connected – A historical view of how IT, OT, and IoT security challenges have evolved and what we can learn from past approaches.
- The Global Cybersecurity Landscape – Insights from securing critical infrastructure across Asia, Australia, and the U.S., and the lessons we can apply to today's interconnected world.
- Breaking Down the Silos – Why treating IT, OT, and IoT as distinct domains is outdated and how a unified security strategy strengthens resilience.
- National Security Meets Enterprise Security – Perspectives from both military and private-sector leadership on protecting sensitive data, intellectual property, and critical systems.

As cybersecurity professionals, we must shift our mindset from securing individual components to securing the entire technology ecosystem.
Whether you are safeguarding an industrial control system, an aircraft, or a corporate network, the fundamental security principles remain the same. By applying an integrated approach, we can better protect the critical systems that power modern society. Join Michael for this thought-provoking keynote as he challenges conventional thinking, shares real-world case studies, and provides actionable strategies to redefine cybersecurity in an era where everything is just "T." About the speaker: Chief Information Security Officer at Northrop Grumman
As companies expand AI adoption to accelerate business growth, they face an evolving landscape of security risks and regulatory uncertainty. With guidelines and policies still taking shape, organizations must balance innovation with responsibility, ensuring AI is both secure and aligned with emerging standards. This session will explore the challenges and risks organizations encounter on their AI journey, along with new approaches to mitigating threats and strengthening governance. We'll discuss how companies can navigate this shifting environment and implement guardrails that enable AI to drive business success—safely and responsibly. About the speaker: Tim Benedict is a seasoned technology executive with over two decades of experience spanning IT, cybersecurity, AI governance, and digital transformation. As the Chief Technology Officer at COMPLiQ, he leads the development of AI-driven compliance and security solutions, helping organizations navigate regulatory requirements, mitigate risks, and adopt AI securely. His work focuses on building resilient, scalable platforms that empower enterprises to integrate AI while maintaining transparency, security, and operational control. With a strong background in enterprise IT, cloud computing, and security architecture, Tim has worked across multiple industries, including finance, government, and technology. He has led large-scale cloud and cybersecurity initiatives, developed enterprise compliance strategies, and driven business-focused technology solutions that bridge innovation with regulatory and operational needs. Tim's expertise spans strategic leadership, technical innovation, and cross-functional collaboration. He has shaped security-first approaches for AI governance, developed scalable frameworks for risk mitigation, and helped businesses align technology investments with long-term growth strategies.
Based in Indiana, he remains actively engaged in fostering industry advancements and driving innovation in AI security and compliance.
In February 2024, Gladstone AI produced a report for the Department of State, which opens by stating that "The recent explosion of progress in advanced artificial intelligence … is creating entirely new categories of weapons of mass destruction-like and weapons of mass destruction-enabling catastrophic risk." To clarify further, they define catastrophic risk as "catastrophic events up to and including events that would lead to human extinction." This strong yet controversial statement has caused much debate in the AI research community and in public discourse. One can imagine scenarios in which this may be true, perhaps in some national security-related scenarios, but how can we judge the merit of these types of statements? It is clear that to do so, it is essential to first truly understand the different risks AI adoption poses and how those risks are novel. That is, when we talk about AI safety and security, do we truly have clarity about the meaning of these terms? In this talk, we will examine the characteristics that make AI vulnerable to attacks and misuse in different ways and how they introduce novel risks. These risks may be to the system in which AI is employed, the environment around it, or even to society as a whole. Gaining a better understanding of AI characteristics and vulnerabilities will allow us to evaluate how realistic and pressing the different AI risks are, and better realize the current state of AI, its limitations, and what breakthroughs are still needed to advance its capabilities and safety. About the speaker: Dr. Sadovnik is a senior research scientist and the Research Lead for the Center for AI Security Research (CAISER) at Oak Ridge National Lab. As part of this role, Dr. Sadovnik leads multiple research projects related to AI risk, adversarial AI, and large language model vulnerabilities.
As one of the founders of CAISER, he's helping to shape its strategy and operations through program leadership, partnership development, workshop organization, teaching, and outreach. Prior to joining the lab, he served as an assistant professor in the Department of Electrical Engineering and Computer Science at the University of Tennessee, Knoxville, and as an assistant professor in the Department of Computer Science at Lafayette College. He received his PhD from the School of Electrical and Computer Engineering at Cornell University, advised by Prof. Tsuhan Chen as a member of the Advanced Multimedia Processing Lab. Prior to arriving at Cornell, he received his bachelor's degree in electrical and computer engineering from The Cooper Union. In addition to his work and publications in AI and AI security, Dr. Sadovnik has a deep interest in workforce development and computer science education. He continues to teach graduate courses related to machine learning and artificial intelligence at the University of Tennessee, Knoxville.
Professional certifications have become a defining feature of the cybersecurity industry, promising enhanced career prospects, higher salaries, and professional credibility. But do they truly deliver on these promises, or are there hidden drawbacks to pursuing them? This presentation takes a deep dive into the dual-edged nature of certifications like CISSP, CISM, CEH, and CompTIA Security+, analyzing their benefits and potential limitations. Drawing on data-driven research, industry insights, and real-world case studies, we explore how certifications influence hiring trends, professional growth, and skills development in cybersecurity. Attendees will gain a balanced perspective on the role of certifications, uncovering whether they are a gateway to career success or an overrated credential. Whether you are an aspiring professional or a seasoned practitioner, this session equips you with the knowledge to decide if certifications are the key to unlocking your cybersecurity potential—or if other paths may hold the answers. About the speaker: Hisham Zahid is a seasoned cybersecurity professional and researcher with over 15 years of combined technical and leadership experience. Currently serving under the CISO as a Security Compliance Manager at a FinTech startup, he has held roles spanning engineering, risk management, audit, and compliance. This breadth of experience gives him unique insight into the complex security challenges organizations face and the strategies needed to overcome them. Hisham holds an MBA and an MS, as well as industry-leading certifications including CISSP, CCSP, CISM, and CDPSE. He is also an active member of the National Society of Leadership and Success (NSLS) and the Open Web Application Security Project (OWASP), reflecting his commitment to professional development and community engagement.
As the co-author of The Phantom CISO, Hisham remains dedicated to advancing cybersecurity knowledge, strengthening security awareness, and guiding organizations through an ever-evolving threat landscape.

David Haddad is a technology enthusiast and optimist committed to making technology and data more secure and resilient. David serves as an Assistant Director in EY's Technology Risk Management practice, focusing on helping EY member firms comply with internal and external security, data, and regulatory requirements. In this role, David supports firms in enhancing technology governance and oversight through technical reviews, consultations, and assessments. Additionally, David contributes to global AI governance, risk, and control initiatives, ensuring AI products and services align with the firm's strategic technology risk management processes. David is in the fourth year of doctoral studies at Purdue University, specializing in AI and information security. David's experience includes various technology and cybersecurity roles at the Federal Reserve Bank of Chicago and other organizations. David also served as an adjunct instructor and lecturer, teaching undergraduate courses at Purdue University Northwest. A strong advocate for continuous learning, David actively pursues professional growth in cybersecurity and IT through academic degrees, certifications, and speaking engagements worldwide. He holds an MBA with a concentration in Management Information Systems from Purdue University and multiple industry-recognized certifications, including Certified Information Systems Security Professional (CISSP), Certified Information Security Manager (CISM), Certified Data Privacy Solutions Engineer (CDPSE), and Certified Information Systems Auditor (CISA). His research interests include AI security and risk management, information management security controls, emerging technologies, cybersecurity compliance, and data protection.
This session explores the foundational concepts and practical applications of Zero Trust Architectures (ZTA) and Digital Trust Frameworks (DTF), two paradigms gaining traction in cybersecurity. While Zero Trust challenges the traditional notion of trust by enforcing strict access controls and authentication measures, Digital Trust seeks to build confidence through data integrity, privacy, and ethical considerations. Through this talk, we will investigate whether these approaches intersect, complement, or diverge, and what this means for the future of cybersecurity. Attendees will gain insights into implementing these frameworks to enhance both security and user confidence in digital environments. In addition to a practical overview, this talk will highlight emerging research areas in both domains. About the speaker: Dr. Ali Al-Haj received his undergraduate degree in Electrical Engineering from Yarmouk University, Jordan, in 1985, followed by an M.Sc. degree in Electronics Engineering from Tottori University, Japan, in 1988 and a Ph.D. degree in Computer Engineering from Osaka University, Japan, in 1993. He then worked as a research associate at ATR Advanced Telecommunications Research Laboratories in Kyoto, Japan, until 1995. Prof. Al-Haj joined Princess Sumaya University for Technology, Jordan, in October 1995, where he currently serves as a Full Professor. He has published papers in dataflow computing, information retrieval, VLSI digital signal processing, neural networks, information security, and digital multimedia watermarking.
This talk will look at how systems are secured at a practical engineering level and the science of risk. As we try to engineer secure systems, what are we trying to achieve and how can we do that? Modern threat modeling offers some practical approaches we can apply today. The limits of those approaches are important, and we'll look at how risk management seems to be treated as an axiom, some history of risk as a discipline, and how we might use that history to build better risk management processes. About the speaker: Adam is the author of Threat Modeling: Designing for Security and Threats: What Every Engineer Should Learn from Star Wars. He's a leading expert on threat modeling, a consultant, expert witness, and game designer. He has decades of experience delivering security. His experience ranges across the business world from founding startups to nearly a decade at Microsoft. His accomplishments include:
- Helped create the CVE; now an Emeritus member of the Advisory Board
- Fixed Autorun for hundreds of millions of systems
- Led the design and delivery of the Microsoft SDL Threat Modeling Tool (v3)
- Created the Elevation of Privilege threat modeling game
- Co-authored The New School of Information Security

Beyond consulting and training, Shostack serves as a member of the Blackhat Review Board, an advisor to a variety of companies and academic institutions, and an Affiliate Professor at the Paul G. Allen School of Computer Science and Engineering at the University of Washington.
Facing increasingly sophisticated attacks from external adversaries, networked systems owners have to judiciously allocate their limited security budgets to reduce their cyber risks. However, when modeling human decision-making, behavioral economics has shown that humans consistently deviate from classical models of decision-making. Most notably, prospect theory, developed by Kahneman and Tversky and for which Kahneman won the 2002 Nobel Memorial Prize in Economics, argues that humans perceive gains, losses, and probabilities in a skewed manner. Furthermore, bounded rationality and imperfect best-response behavior have been frequently observed in human decision-making within the domains of behavioral economics and psychology. While there is a rich literature on these human decision-making factors in economics and psychology, most of the existing work studying the security of networked systems does not take these biases and noise into account. In this talk, we present novel behavioral security game models for the study of human decision-making in networked systems modeled by attack graphs. We show that behavioral biases lead to suboptimal resource allocation patterns. We also analyze the outcomes of protecting multiple isolated assets with heterogeneous valuations via decision- and game-theoretic frameworks. We show that behavioral defenders over-invest in higher-valued assets compared to rational defenders. We then propose different learning-based techniques and adapt two different tax-based mechanisms for guiding behavioral decision-makers towards optimal security investment decisions. In particular, we show the outcomes of such learning and mechanisms on different realistic networked systems. In total, our research establishes rigorous frameworks to analyze the security of both large-scale networked systems and heterogeneous isolated assets managed by human decision-makers and provides new and important insights into security vulnerabilities that arise in such settings.
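The skewed probability perception at the heart of prospect theory can be illustrated with a short sketch. This uses the one-parameter Tversky-Kahneman weighting function; the functional form and the gamma value are illustrative assumptions for background, not material from the talk:

```python
# Sketch of prospect theory's probability weighting (assumed one-parameter
# Tversky-Kahneman form; gamma = 0.61 is their published estimate for gains,
# used here only for illustration).

def weight(p: float, gamma: float = 0.61) -> float:
    """Perceived probability w(p) = p^g / (p^g + (1-p)^g)^(1/g)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# The characteristic skew: small probabilities are over-weighted,
# large probabilities are under-weighted.
print(weight(0.01))  # noticeably larger than 0.01
print(weight(0.99))  # noticeably smaller than 0.99
```

In a security-investment setting, this skew is one reason a behavioral defender's perceived risk of a low-probability, high-valued attack path diverges from the objective risk a rational defender would compute.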
About the speaker: Dr. Mustafa Abdallah is a tenure-track Assistant Professor in the Computer and Information Technology (CIT) Department at Purdue University in Indianapolis, with a courtesy appointment at Purdue Polytechnic Institute. He earned his Ph.D. from the Elmore Family School of Electrical and Computer Engineering at Purdue University in 2022 and previously served as a tenure-track faculty member at IUPUI. His research focuses on game theory, behavioral decision-making, explainable AI, meta-learning, and deep learning, with applications in proactive security of networked systems, IoT anomaly detection, and intrusion detection. His work has been published in top security and AI venues, including IEEE S&P, ACM AsiaCCS, IEEE TCNS, IEEE IoT-J, Computers & Security, and ACM TKDD. He has received the Bilsland Fellowship, multiple IEEE travel grants, and internal research funding from IUPUI. Dr. Abdallah has extensive industrial research experience, including internships at Adobe Research (meta-learning for time-series forecasting), Principal Financial Group (Kalman filter modeling for financial predictions), and RDI (deep learning for speech technology applications), which led to a U.S. patent and multiple publications. He holds B.Sc. and M.Sc. degrees from Cairo University, with a focus on electrical engineering and engineering mathematics, respectively.
Safety and security-critical systems require extensive test and evaluation, but existing high assurance test methods are based on structural coverage criteria that do not apply to many black box AI and machine learning components. AI/ML systems make decisions based on training data rather than conventionally programmed functions. Autonomous systems that rely on these components therefore require assurance methods that evaluate input data to ensure that they can function correctly in their environments with inputs they will encounter. Combinatorial test methods can provide added assurance for these systems and complement conventional verification and test for AI/ML. This talk reviews some combinatorial methods that can be used to provide assured autonomy, including:
- Background on combinatorial test methods
- Why conventional test methods are not sufficient for many or most autonomous systems
- Where combinatorial methods apply
- Assurance based on input space coverage
- Explainable AI as part of validation

About the speaker: Rick Kuhn is a computer scientist in the Computer Security Division at NIST, and is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE). He co-developed the role based access control (RBAC) model that is the dominant form of access control today. His current research focuses on combinatorial methods for assured autonomy and hardware security/functional verification. He has authored three books and more than 200 conference or journal publications on cybersecurity, software failure, and software verification and testing.
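As background for the input-space coverage idea above, a minimal sketch of combinatorial coverage measurement follows: given a test suite over discrete parameters, it computes what fraction of all 2-way (pairwise) parameter-value combinations the suite exercises. The parameter names and values are illustrative assumptions, not from the talk:

```python
# Minimal sketch of 2-way (pairwise) input-space coverage, the core metric
# behind combinatorial testing. Parameters/values below are hypothetical.
from itertools import combinations, product

params = {
    "sensor": ["lidar", "radar", "camera"],
    "weather": ["clear", "rain"],
    "speed": ["low", "high"],
}

def pairwise_coverage(tests):
    names = list(params)
    # Every value pair that could occur across any two parameters.
    needed = {
        (a, va, b, vb)
        for a, b in combinations(names, 2)
        for va, vb in product(params[a], params[b])
    }
    # Value pairs actually exercised by the test suite.
    covered = {
        (a, t[a], b, t[b]) for t in tests for a, b in combinations(names, 2)
    }
    return len(covered & needed) / len(needed)

tests = [
    {"sensor": "lidar", "weather": "clear", "speed": "low"},
    {"sensor": "radar", "weather": "rain", "speed": "high"},
    {"sensor": "camera", "weather": "rain", "speed": "low"},
]
print(f"{pairwise_coverage(tests):.0%} of pairwise combinations covered")
```

Tools such as NIST's ACTS go further and generate covering arrays that reach 100% t-way coverage with far fewer tests than exhaustive enumeration; the sketch only measures coverage of a given suite.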
Information virality is an increasingly important topic in modern media environments, but it often remains overlooked in discussions about information security. This presentation will explain why information virality is a cybersecurity concern and how it can be exploited to manipulate public discourse. By utilizing theories from prominent cultural psychologists and employing natural language processing techniques, we will demonstrate methods for capturing viral discourse and identifying additional features linked to behavioral patterns that may motivate participation in discussions. This talk will focus solely on the methodology and our preliminary findings, as the research is still ongoing. About the speaker: Nick Harrell has served in the military for 18 years. Currently, he works as a data systems engineer, where he designs, builds, and maintains complex data systems that help Army leaders make informed decisions. He is on a fellowship at Purdue University, pursuing a Ph.D. in Information Security. Nick is a member of the International Information System Security Certification Consortium (ISC2) and the Project Management Institute (PMI). His research interests focus on Natural Language Processing (NLP) for Information Assurance, specifically on mechanisms that enhance user engagement in online public discourse.
Private Information Retrieval (PIR) is a cryptographic primitive that enables a client to retrieve a record from a database hosted by one or more untrusted servers without revealing which record was accessed. It has a wide range of applications, including private web search, private DNS, lightweight cryptocurrency clients, and more. While many existing PIR protocols assume that servers are honest-but-curious, we explore the scenario where dishonest servers provide incorrect answers to mislead clients into retrieving the wrong results. We begin by presenting a unified classification of protocols that address incorrect server behavior, focusing on the lowest level of resistance—verifiability—which allows the client to detect if the retrieved file is incorrect. Despite this relaxed security notion, verifiability is sufficient for several practical applications, such as private media browsing. Later on, we propose a unified framework for polynomial PIR protocols, encompassing various existing protocols that optimize download rate or total communication cost. We introduce a method to transform a polynomial PIR into a verifiable one without increasing the number of servers. This is achieved by doubling the queries and linking the responses using a secret parameter held by the client. About the speaker: Stanislav Kruglik has been a Research Fellow at the School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore, since April 2022. He earned a Ph.D. in the theoretical foundations of computer science from the Moscow Institute of Physics and Technology, Russia, in February 2022. He is an IEEE Senior Member and a recipient of the Simons Foundation Scholarship. With over 40 scientific publications, his work has appeared in top-tier venues, including IEEE Transactions on Information Forensics and Security and the European Symposium on Research in Computer Security.
His research interests focus on information theory and its applications, particularly in data storage and security.
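As background for the PIR primitive described in this abstract, the classic two-server information-theoretic scheme (in the style of Chor, Goldreich, Kushilevitz, and Sudan) can be sketched for one-bit records: each server sees a uniformly random query vector and learns nothing about the desired index, yet XORing the two answers recovers the record. This is a textbook illustration only, not the speaker's verifiable protocol:

```python
# Sketch of classic two-server XOR-based PIR for a database of bits.
# Neither server alone learns which index i the client wants.
import secrets

def query(n: int, i: int):
    """Client: random subset S for server 1, and S with index i flipped for server 2."""
    s1 = [secrets.randbelow(2) for _ in range(n)]  # uniformly random selection vector
    s2 = s1.copy()
    s2[i] ^= 1  # the two queries differ only at the desired index
    return s1, s2

def answer(db, sel):
    """Server: XOR of the records selected by the query vector."""
    acc = 0
    for bit, chosen in zip(db, sel):
        if chosen:
            acc ^= bit
    return acc

db = [1, 0, 1, 1, 0, 0, 1, 0]
i = 5
q1, q2 = query(len(db), i)
# All selected bits except db[i] cancel in the XOR of the two answers.
assert answer(db, q1) ^ answer(db, q2) == db[i]
```

The honest-but-curious assumption is visible here: a dishonest server could return a wrong XOR and the client would silently accept an incorrect bit, which is exactly the failure mode that verifiable PIR is designed to detect.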
Although existing work draws attention to a range of obstacles in realizing fair AI, the field lacks an account that emphasizes how these worries hang together in a systematic way. Furthermore, a review of the fair AI and philosophical literature demonstrates the unsuitability of 'treat like cases alike' and other intuitive notions as conceptions of fairness. That review then generates three desiderata for a replacement conception of fairness valuable to AI research: (1) It must provide a metatheory for understanding tradeoffs, entailing that it must be flexible enough to capture diverse species of objection to decisions. (2) It must not appeal to an impartial perspective (neutral data, objective data, or a final arbiter). (3) It must foreground the way in which judgments of fairness are sensitive to context, i.e., to historical and institutional states of affairs. We argue that a conception of fairness as appropriate concession in the historical iteration of institutional decisions meets these three desiderata. About the speaker: Dr. Chris Yeomans is Professor and Head of the Department of Philosophy at Purdue University. He earned his PhD at the University of California, Riverside in 2005 before joining the Purdue faculty in 2009. He is the author of three monographs, Freedom and Reflection: Hegel and the Logic of Agency, The Expansion of Autonomy: Hegel's Pluralistic Philosophy of Action, and The Politics of German Idealism: Law & Social Change at the Turn of the 19th Century (all from Oxford University Press). His work has been supported by the Purdue Provost's Faculty Fellowship for Study in a Second Discipline (history), the Alexander von Humboldt Foundation, and the National Science Foundation.
This presentation outlines adversarial command and control attacks in OT networks. Focusing on the electrical grid, this presentation highlights current gaps in critical infrastructure protection research. After discussing real-world examples, a fictional electrical grid is used to explore cyber-physical threats and mitigations to OT systems. About the speaker: Dr. Mason Rice is the director of the Cyber Resilience and Intelligence Division at Oak Ridge National Laboratory. In this role, he is responsible for an R&D portfolio focused on advanced intelligent systems and resilient cyber-physical systems, including research into (1) AI for national security, (2) cybersecurity for critical systems, (3) machine-augmented intelligence, (4) vulnerability science, and (5) identity science.Following retirement from the Army, Dr. Rice joined ORNL in 2017 as the Cyber-Physical R&D Manager and was soon appointed as the first Group Leader for Resilient Cyber-Physical Systems at ORNL. He ultimately grew the group into four focused research groups, at which point he was selected to be the first Section Head of the new Resilient Cyber-Physical Systems Section.
Oblivious Message Retrieval is designed to protect the privacy of users who retrieve messages from a bulletin board. Our work, HomeRun, stands out by providing unlinkability across multiple requests for the same recipient's address. Moreover, it does not impose a limit on the number of pertinent messages that can be received by a recipient, which thwarts "message balance exhaustion" attacks and enhances system usability. HomeRun also empowers servers to regularly delete the retrieved messages and the associated auxiliary data, which mitigates the constantly increasing computation costs and storage costs incurred by servers. Remarkably, none of the existing solutions offer all of these features collectively. About the speaker: Yanxue Jia is currently a post-doctoral researcher in the Department of Computer Science at Purdue University. In 2022, she obtained her Ph.D. in Computer Science from Shanghai Jiao Tong University. Her research mainly focuses on applied cryptography, especially secure computation, blockchain, and provable security. She is dedicated to designing efficient and secure cryptographic protocols that enhance collaboration while ensuring privacy protection. Her work has been published at top-tier conferences, such as USENIX Security, CCS, and Asiacrypt. For more detailed information about her academic and research background, please refer to her homepage https://yanxue820.github.io/
Students: this is a hybrid event. You are strongly encouraged to attend in-person. Location: STEW G52 (Suite 050B) WL Campus. Everyone knows that multi-factor authentication (MFA) is more secure than a simple login name and password, but too many people think that MFA is a perfect, unhackable solution. It isn't! I can send you a regular phishing email and completely take control of your account even if you use a super-duper MFA token or smartphone app. I can hack ANY MFA solution at least a handful of different ways, although some forms of MFA are more resilient than others. Attend this presentation and learn the 12+ ways hackers can and do get around your favorite MFA solution. The presentation will include a (pre-filmed) hacking demo and real-life successful examples of every attack type. It will end by telling you how to better defend your MFA solution so that you get maximum benefit and security. About the speaker: Roger A. Grimes, CPA, CISSP, CEH, MCSE, CISA, CISM, CNE, yada, yada, Data-Driven Defense Evangelist for KnowBe4, Inc., is the author of 14 books and over 1400 articles on computer security, specializing in host security and preventing hacker and malware attacks. Roger is a frequent speaker at national computer security conferences and was the weekly security columnist at InfoWorld and CSO magazines between 2005 and 2019. He has worked at some of the world's largest computer security companies, including Foundstone, McAfee, and Microsoft. Roger is frequently interviewed and quoted in the media including Newsweek, CNN, NPR, and WSJ. His presentations are fast-paced and filled with useful facts and recommendations.
Online behavioral advertising has raised privacy concerns due to its dependence on extensive tracking of individuals' behaviors and its potential to influence them. Those concerns have been often juxtaposed with the economic value consumers are expected to gain from receiving behaviorally targeted ads. Those purported economic benefits, however, have been more frequently hypothesized than empirically demonstrated. We present the results of two online experiments designed to assess some of the consumer welfare implications of behaviorally targeted advertising using a counterfactual approach. Study 1 finds that products in ads targeted to a sample of online participants were more relevant to them than randomly picked products but were also more likely to be associated with lower quality vendors and higher product prices compared to competing alternatives found among search results. Study 2 replicates the results of Study 1. Additionally, Study 2 finds the higher product relevance of products in targeted ads relative to randomly picked products to be driven by participants having previously searched for the advertised products. The results help evaluate claims about the direct economic benefits consumers may gain from behavioral advertising. About the speaker: Alessandro Acquisti is the Trustees Professor of Information Technology and Public Policy at Carnegie Mellon University's Heinz College. His research combines economics, behavioral research, and data mining to investigate the role of privacy in a digital society. 
His studies have promoted the revival of the economics of privacy, advanced the application of behavioral economics to the understanding of consumer privacy valuations and decision-making, and spearheaded the investigation of privacy and disclosures in social media. Alessandro has been the recipient of the PET Award for Outstanding Research in Privacy Enhancing Technologies, the IBM Best Academic Privacy Faculty Award, the IEEE Cybersecurity Award for Innovation, the Heinz College School of Information's Teaching Excellence Award, and numerous Best Paper awards. His studies have been published in journals across multiple disciplines, including Science, Proceedings of the National Academy of Sciences, Journal of Economic Literature, Management Science, Marketing Science, and Journal of Consumer Research. His research has been featured in global media outlets including the Economist, the New York Times, the Wall Street Journal, NPR, CNN, and 60 Minutes. His TED talks on privacy and human behavior have been viewed over 1.5 million times. Alessandro is the director of the Privacy Economics Experiments (PeeX) Lab, the Chair of the CMU Institutional Review Board (IRB), and the former faculty director of the CMU Digital Transformation and Innovation Center. He is an Andrew Carnegie Fellow (inaugural class), and has been a member of the Board of Regents of the National Library of Medicine and a member of the National Academies' Committee on public response to alerts and warnings using social media and associated privacy considerations. He has testified before the U.S.
Senate and House committees and has consulted on issues related to privacy policy and consumer behavior with numerous agencies and organizations, including the White House's Office of Science and Technology Policy (OSTP), the US Federal Trade Commission (FTC), and the European Commission. He received a PhD from UC Berkeley and Master's degrees from UC Berkeley, the London School of Economics, and Trinity College Dublin. He has held visiting positions at the Universities of Rome, Paris, and Freiburg (visiting professor); Harvard University (visiting scholar); the University of Chicago (visiting fellow); Microsoft Research (visiting researcher); and Google (visiting scientist). His research interests include privacy, artificial intelligence, and Nutella. In a previous life, he was a soundtrack composer and a motorcycle racer (USGPRU).
Despite decades of mitigation efforts, SYN flooding attacks continue to increase in frequency and scale, and adaptive adversaries continue to evolve. In this talk, I will briefly introduce some background on the SYN flooding attack and existing defenses via SYN cookies, discuss the challenges of scaling them to very high line rates (100Gbps+), and then present our latest work, SmartCookie (USENIX Security '24). SmartCookie's innovative split-proxy defense design leverages high-speed programmable switches for fast and secure SYN cookie generation and verification, while implementing a server-side agent using eBPF to enable scalability for serving benign traffic. SmartCookie can defend against attack rates of up to 130+ million packets per second with no packet loss, while also achieving 2x-6.5x lower end-to-end latency for benign traffic compared to existing switch-based hardware defenses. About the speaker: Xiaoqi Chen recently joined the School of Electrical and Computer Engineering at Purdue University as an assistant professor. His research focuses on algorithm design for high-speed network data planes to improve network measurement and telemetry, implement closed-loop optimization for intelligent resource allocation and congestion control, and enable novel approaches for enhancing network security and privacy.
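The core idea behind a SYN cookie defense is that the responder encodes a verifiable, keyed fingerprint of the connection into the SYN-ACK sequence number instead of allocating per-connection state. A minimal sketch of that mechanism follows (an illustration only, not SmartCookie's actual P4/eBPF implementation; the secret key, the HMAC-SHA256 construction, and the 64-second validity window are assumptions for this example):

```python
import hmac, hashlib, time

# Per-device secret key (an assumption for this sketch).
SECRET = b"switch-local-secret"

def syn_cookie(src_ip, src_port, dst_ip, dst_port, window=None):
    """Derive a 32-bit cookie from the connection 4-tuple and a coarse time window."""
    window = int(time.time() // 64) if window is None else window  # ~64 s validity
    msg = f"{src_ip}:{src_port}>{dst_ip}:{dst_port}|{window}".encode()
    return int.from_bytes(hmac.new(SECRET, msg, hashlib.sha256).digest()[:4], "big")

def verify_cookie(src_ip, src_port, dst_ip, dst_port, cookie):
    """Accept the ACK's cookie if it matches the current or previous window;
    no state is kept for the connection until the handshake completes."""
    now = int(time.time() // 64)
    return any(cookie == syn_cookie(src_ip, src_port, dst_ip, dst_port, w)
               for w in (now, now - 1))
```

Because verification is a pure recomputation, a flood of spoofed SYNs consumes no memory; only ACKs carrying a valid cookie cause state to be allocated, which is what makes the scheme attractive to offload onto a programmable switch.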
Graph learning has gained prominent traction in academia and industry as a solution for detecting complex cyber-attack campaigns. By constructing a graph that connects various network/host entities and modeling benign/malicious patterns, threat-hunting tasks like data provenance and entity classification can be automated. We term the systems under this theme Graph-based Security Analytics (GSAs). In this talk, we first provide a cursory view of GSA research in the recent decade, focusing on the academic side. Then, we elaborate on a few GSAs developed in our lab, which are designed for edge-level intrusion detection (Argus), subgraph-level attack reconstruction (ProGrapher), and storage reduction (SEAL). At the end of the talk, we will review the progress and pitfalls in the development of GSA research and highlight some research opportunities. About the speaker: Zhou Li is an Assistant Professor in the EECS department at UC Irvine, leading the Data-driven Security and Privacy Lab. Before joining UC Irvine, he worked as a Principal Research Scientist at RSA Labs from 2014 to 2018. His research interests include Internet security, organizational network security, privacy-enhancing technologies, and security and privacy for machine learning. He has received the NSF CAREER award, an Amazon Research Award, a Microsoft Security AI award, and the IRTF Applied Networking Research Prize.
Recent years have been pivotal in the field of Industrial Control Systems (ICS) security, with a large number of high-profile attacks exposing the lack of a design-for-security initiative in ICS. The evolution of ICS, abstracting the control logic to a purely software level hosted on a generic OS, combined with hyperconnectivity and the integration of popular open source libraries providing advanced features, has expanded the ICS attack surface by increasing the entry points and by allowing traditional software vulnerabilities to be repurposed to the ICS domain. In this seminar, we will shed light on the security landscape of modern ICS, dissecting firmware from the dominant vendors and motivating the need to employ appropriate vulnerability assessment tools. We will present methodologies for blackbox fuzzing of modern ICS, both directly using the device and by using the development software. We will then proceed with methodologies for hotpatching, since ICS cannot be easily restarted in order to patch any discovered vulnerabilities. We will demonstrate our proposed methodologies on various critical infrastructure testbeds. About the speaker: Michail (Mihalis) Maniatakos is an Associate Professor of Electrical and Computer Engineering at New York University (NYU) Abu Dhabi, UAE, and a Research Associate Professor at the NYU Tandon School of Engineering, New York, USA. He is the Director of the MoMA Laboratory (nyuad.nyu.edu/momalab), NYU Abu Dhabi. He received his Ph.D. in Electrical Engineering, as well as M.Sc. and M.Phil. degrees, from Yale University. He also received B.Sc. and M.Sc. degrees in Computer Science and Embedded Systems, respectively, from the University of Piraeus, Greece. His research interests, funded by industrial partners, the US government, and the UAE government, include privacy-preserving computation and industrial control systems security.
In the past 30 years, the world has experienced a booming IoT market, advances in automation and OT systems, and an ever-increasing dependence on cyber in every aspect of modern life. This target-rich environment is ideal for cyber adversaries seeking access to systems and devices for financial gain, espionage, digital harassment, or outright cyber-warfare. Naturally, this leads to expanded attack surfaces, increased risk, and a complex and costly cyber arms race. By combining consequences, threats, and vulnerabilities and mapping them to mission risk, Shamrock Cyber significantly reduces the effort to prioritize, communicate, and mitigate risk. The Shamrock approach enables defenders to focus on their domains and yet understand and operate based on the domains of others. Through four kinds of analysis (Consequence, Threat, Vulnerability, and Risk), it offers multiple approaches to suit the needs of many missions. Shamrock Cyber uniquely blends traditionally effective activities with innovative mission-focused analyses that unite the equities of executives, managers, cyber practitioners, and system developers. Shamrock Cyber does not depend on leprechauns and luck to find cybersecurity gold at the end of the rainbow. Instead, it focuses on combining consequences, threats, and vulnerabilities to communicate and reduce mission risk, along with explaining the WHY to all involved. About the speaker: Born in Indiana and raised in Butte, Montana from the age of 4, Chance received a BS in Computer Science at Montana Tech in Butte in 1988. He then pursued an MS in computer science concentrating on visualization at Montana State in Bozeman, Montana. Following graduation at MSU, Chance joined Pacific Northwest National Laboratory in July of 1991.
He has been there ever since, working as a software developer, architect, project manager, and task lead on projects ranging from Air Force cockpit software to molecular visualization, atmospheric science, text visualization, data quality, and, for the last 15 years, cybersecurity. Chance leads software and system security analysis projects spanning building technology, nuclear, and radiation monitoring systems. He is passionate about building bridges between researchers, engineers, and operations in the cybersecurity domain.
Recorded: 09/18/2024 CERIAS Security Seminar at Purdue University. Exploiting Vulnerabilities in AI-Enabled UAV: Attacks and Defense Mechanisms. Ashok Vardhan Raja, Purdue University Northwest. In recent years, UAVs have seen significant growth in both military and civilian applications, thanks to their high mobility and advanced sensing capabilities. This expansion has been further accelerated by rapid advancements in AI algorithms and hardware. While AI integration enhances the intelligence and efficiency of UAVs, it also introduces new security and safety concerns due to potential vulnerabilities in the underlying AI models. These vulnerabilities can be exploited by malicious actors, leading to severe security risks and operational failures. This talk will focus on securing the integration of AI into UAVs to ensure their resilience in adversarial environments. We will begin by analyzing the data sensing and processing pipeline of key sensors used in AI-enabled UAV operations, identifying areas where vulnerabilities may exist. Following this, we will explore how to develop defense mechanisms to strengthen the robustness of these AI-driven UAV systems against potential threats. AI-enabled anomaly detection and AI-enabled UAV infrastructure inspection will be leveraged as case studies in this talk. The talk will also cover the use of Large Language Models to improve the security of this integration. About the speaker: Ashok Vardhan Raja is an Assistant Professor of Cybersecurity in the Department of Computer Information Technology and Graphics in the College of Technology at Purdue University Northwest. His research is on the secure integration of Artificial Intelligence (AI) and Cyber-Physical Systems (CPS) such as UAVs for robust operations. He is expanding his current work by using swarms of UAVs to address security issues and by extending it to other domains in the integration of AI and CPS.
The Information Design Assurance Red Team (IDART) methodology is optimized to evaluate system designs and identify vulnerabilities by adopting, in detail, the varying perspectives of a system's most likely adversaries. The results provide system owners with an attacker's-eye view of their system's strengths and weaknesses. IDART can be applied to a diversity of complex networks, systems, and applications, including those that mix cyber technology with industrial machinery or other equipment. The methodology can be used throughout a system's lifecycle, but the assessments are less expensive and more beneficial during design and development, when weaknesses can be found and mitigated more easily. Developed at Sandia National Laboratories in the mid-1990s and updated frequently, the IDART framework is NIST-recognized and designed for repeatability and measurable results. A typical assessment includes the following high-level activities: characterizing the target system and its architecture; identifying nightmare consequences; analyzing the system for security strengths and weaknesses; identifying potential vulnerabilities that could lead to nightmare consequences; and documenting results and providing prioritized mitigation strategies. IDART assessors think like adversaries. To do this, they first develop a range of categorical profiles or "models" of a system's most likely attackers. Factors include an adversary's specific capabilities (i.e., domain knowledge, access, resources) as well as intangibles such as motivation and risk tolerance. The assessment team then uses this adversarial lens to measure the risks posed by system weaknesses and to prioritize mitigations. For efficiency and thoroughness, IDART relies on a free exchange of information. System personnel share documentation and participate in discussions that help assessors efficiently find as many attack paths as possible.
In turn, the IDART team is transparent in conducting its assessment activities, giving system owners greater confidence in the work and the resulting analysis. All of these traits combine to make IDART a highly flexible tool. The methodology helps system owners identify critical vulnerabilities, understand adversary threats, and weigh appropriate strategies for delivering components, systems, and plans that are both effective and secure. About the speaker: Russel Waymire is a manager at Sandia National Laboratories in the area of Cyber-Physical Security. Mr. Waymire has over 25 years of experience in the design, implementation, testing, reverse engineering, and securing of software and hardware systems in IT and OT environments. Mr. Waymire began his career as a software developer at Honeywell Defense Avionic Systems in Albuquerque, New Mexico, where he developed the requirements, design, implementation, and testing of software for a variety of platforms, including the F-15, C-27J, KC-10, C-130, and C-5 aircraft. He then went on to Sandia National Laboratories in Albuquerque, New Mexico, where he has had the opportunity to work on a wide range of projects, including algorithms in combinatorial optimization, software development for mod-sim force-on-force interactions and cognition/AI development, satellite software for operational systems in orbit, cyber vulnerability assessments for various US government agencies, and cyber-physical assessments for numerous foreign partners, including physical and cyber upgrades at nuclear power plants and research reactors worldwide. Russel currently uses his experience and insights to lead a team researching innovative ways to protect critical infrastructure, space systems, and other high-consequence operational technologies.
At Purdue University, Ms. Kubecka will discuss how technologists, especially the next generation of digital defenders, can be empowered to consider ethics in cybersecurity, privacy, and emerging technologies, and how they can use their power for good in tech. About the speaker: Ms. Chris Kubecka is a globally recognized cybersecurity expert with over two decades of experience, known for her pivotal role in digital defense and her commitment to ethical technology practices. She has established a formidable reputation for protecting both national and international cybersecurity interests, often at the highest levels of government and industry. Ms. Kubecka's career began with a strong technical foundation, rapidly advancing into leadership roles that demand both tactical acumen and strategic foresight. Her expertise spans cyber warfare, digital intelligence, artificial intelligence, and the development of robust cybersecurity frameworks, including those addressing the challenges of post-quantum computing. A thought leader in cybersecurity, Ms. Kubecka frequently contributes to international conferences, policy discussions, and academic forums. She is the author of several influential books, including Hack The World With OSINT, and has published numerous research papers on platforms like ResearchGate. Her work often explores the ethical implications of emerging technologies and the critical role of privacy in cybersecurity. Ms. Kubecka serves as the CEO and Founder of HypaSec NL, Senior Cybersecurity Advisor for Elemental Concept, and Chief Hacktress for Unit6 Technologies. Her significant contributions to the field have been recognized with numerous awards, including The Order of Thor. She is also a former Distinguished Chair for the Middle East Institute's Cyber Security and Emerging Technology Program. Throughout her career, Ms. Kubecka has led critical operations that highlight the intersection of cybersecurity and human rights.
During the conflict in Ukraine, she used her expertise to facilitate the evacuation of civilians, applying digital intelligence to support these missions. In Venezuela, her investigations uncovered the weaponization of government-backed applications, such as the Ven App and Patria App, which are used for surveillance and repression of dissent. Her research revealed how these apps are being exploited to target citizens, leading to arrests, disappearances, and even deaths, underscoring the dire consequences of unethical technology use. Ms. Kubecka's background as a USAF aviator and former member of the USAF Space Command highlights her extensive commitment to defense in both the physical and digital realms. Her journey began at a young age, with her early technical skills leading to her first major hacking achievement at age ten.
Students: this is a hybrid event. You are strongly encouraged to attend in-person. Location: STEW G52 (Suite 050B) WL Campus. The rapid commercialization of GenAI products and services has significantly broadened the landscape of potential attack vectors targeting enterprise infrastructure, operations, and processes. This evolution poses substantial risks to enterprise assets and operations, requiring continuous risk, attack, and threat surface analysis. This exploratory study delineates critical findings across three key dimensions: an analysis of current market trends related to AI-driven cyber and information security risks; an overview of emerging regulatory requirements and compliance efforts specific to AI technologies; and strategic initiatives for identifying and mitigating these risks, informed by insights from both industry and academia. The presentation provides a roadmap for technology practitioners navigating the complex intersection of AI innovation and cybersecurity. About the speaker: David is an Assistant Director in Ernst & Young's Americas Technology Risk Management practice. He focuses on Americas and Global technology risk assessments, supports IT and data regulatory efforts, and coordinates IT risk management processes for member firms. He brings over eight years of external and internal experience in information security consulting, technology, IT audit, and GRC across public and private industries. He previously served as an adjunct instructor and lecturer for undergraduate programs at Purdue University Northwest. David is pivotal in supporting EY's strategic technology, information security, and compliance projects.
His specialties include continuous risk identification & analysis, GRC strategy development, security control testing and analysis (e.g., NIST, ISO), and solutions development to manage enterprise risks across various IT domains and emerging technologies (e.g., AI). David is a passionate and dedicated professional who embodies the mindset of a continuous learner in IT, information security, emerging technologies, and data privacy. He proactively expands his knowledge and skill sets by pursuing advanced degrees, obtaining professional certifications, and conducting domestic & international speaking engagements.
The increased use of machine learning (ML) technologies on proprietary and sensitive datasets has led to increased privacy breaches in many sectors, including healthcare and personalized medicine. Although federated learning (FL) systems allow multiple parties to train ML models collaboratively without sharing their raw data with third-party entities, security concerns arise from the involvement of potentially malicious FL clients aiming to disrupt the learning process. In this talk, I will present how my research addresses these challenges by developing frameworks to analyze and improve the privacy and security aspects of ML. First, I will talk about model inversion attacks that allow an adversary to infer part of the sensitive training data with only black-box access to a vulnerable classification model. I will then present FLShield, a novel FL framework that utilizes benign data from FL participants to validate the local models before taking them into account for generating the global model. I will conclude with a discussion of challenges in building practical data-driven systems that take into account data privacy and security while keeping the intended functionality of the system unimpaired. About the speaker: Shagufta Mehnaz is an Assistant Professor in the Computer Science and Engineering department at The Pennsylvania State University. She is broadly interested in the areas of privacy, security, and machine learning. Her research focuses on enhancing the privacy and security of machine learning techniques and models themselves, as well as developing novel machine learning techniques to protect data security and privacy. She directs the PRIvacy, Security, and Machine Learning lab (PRISMLab) at Penn State. She obtained her Ph.D. in Computer Science from Purdue University in 2020. She also received the Bilsland Dissertation Fellowship at Purdue.
She was one of the 100 Computer Science Young Researchers selected worldwide for the Heidelberg Laureate Forum (HLF) in 2018.
For the past four years, Sandia National Laboratories has been conducting a focused research effort on Trusted AI for national security problems. The goal is to develop the fundamental insights required to use AI methods in high-consequence national security applications while also improving the practical deployment of AI. This talk looks at key properties of many national security problems along with Sandia's ongoing effort to develop a certification process for AI-based solutions. Along the way, we will examine several recent and ongoing research projects, including how they contribute to the larger goals of Trusted AI. The talk concludes with a forward-looking discussion of remaining research gaps. About the speaker: David manages the Machine Intelligence and Visualization department, which conducts cutting-edge research in machine learning and artificial intelligence for national security applications, including the advanced visualization of data and results. David has been studying machine learning in the broader context of artificial intelligence for over 15 years. His research focuses on applying machine learning methods to a wide variety of domains with an emphasis on estimating the uncertainty in model predictions to support decision making. He also leads the Trusted AI Strategic Initiative at Sandia, which seeks to develop fundamental insights into AI algorithms, their performance and reliability, and how people use them in national security contexts. Prior to joining Sandia, David spent three years as research faculty at Arizona State University and one year as a postdoc at Stanford University developing intelligent agent architectures. He received his doctorate in 2006 and MS in 2002 from the University of Massachusetts at Amherst for his work in machine learning. 
David earned his Bachelor of Science from Clarkson University in 1998. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
In this presentation, I provide a thorough exploration of how dataflow analysis serves as a formidable method for discovering and addressing cybersecurity threats across a wide spectrum of vulnerability types. For instance, I'll illustrate how we can employ dynamic information flow tracking to automatically detect "blind spots"—sections of a program's input that can be changed without influencing its output. These blind spots are almost always indicative of an underlying bug. Furthermore, I will demonstrate how the use of hybrid control- and dataflow information in differential analysis can aid in uncovering variability bugs, commonly known as "heisenbugs." By delving into these practical applications of dataflow analysis and introducing open-source tools designed to implement these strategies, the goal is to present practical steps for pinpointing, debugging, and managing a diverse array of software bugs. About the speaker: Dr. Evan Sultanik is a principal computer security researcher at Trail of Bits. His recent research covers language-theoretic security, program analysis, detecting variability bugs via taint analysis, dependency analysis via program instrumentation, and consensus protocols for distributed ledgers. He is an editor of and frequent contributor to the offensive computer security journal "Proof of Concept or GTFO." Prior to joining Trail of Bits, Dr. Sultanik was the Chief Scientist at Digital Operatives and, prior to that, a Senior Research Scientist at The Johns Hopkins Applied Physics Laboratory. His dissertation was on the discovery of a family of combinatorial optimization problems whose solutions can be approximated to within a constant factor of optimal in polylogarithmic time on a parallel computer or distributed system. This was a surprising result since many of the problems in the family are NP-Hard. In a life prior to academia, Evan was a professional software engineer.
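The blind-spot concept can be illustrated with a toy sketch (this illustrates the idea only, not the speaker's actual open-source tooling; the tag-length-value record format and the parser are invented for this example): by tracking which input offsets a parser actually reads while producing its output, any offset that is never consumed is a byte that can vary freely without changing the output, i.e., a blind spot.

```python
def parse_record(data: bytes):
    """Toy tag-length-value parser that also records which input offsets it read."""
    used = set()

    def read(i):
        used.add(i)       # taint bookkeeping: offset i influenced the output
        return data[i]

    tag = read(0)
    length = read(1)
    value = bytes(read(2 + k) for k in range(length))
    return (tag, value), used

def blind_spots(data: bytes):
    """Offsets whose bytes can change without influencing the parser's output."""
    _, used = parse_record(data)
    return sorted(set(range(len(data))) - used)
```

Running `blind_spots` on a record with trailing junk reports the unread tail; in a real program, such never-consumed input regions often point to a parsing bug (e.g., a length field that silently truncates the input).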
The aim of this discussion is to publicize both the challenge and a potential solution for the integration of secure supply chain risk management content into conventional software engineering programs. The discipline of software engineering typically does not teach students how to ensure that the code produced and sold in commercial off-the-shelf (COTS) products hasn't been compromised during the sourcing process. We propose a comprehensive and standard process based on established best practice principles that can provide the basis to address the secure sourcing of COTS products. About the speaker: Dr. Dan Shoemaker received a doctorate from the University of Michigan in 1978. He taught at Michigan State University and then moved to the Business School at the University of Detroit Mercy to chair their Department of Computer Information Systems (CIS). He attended the organizational roll-out of the discipline of software engineering at the Carnegie-Mellon University Software Engineering Institute in the fall of 1987. From that, he developed and taught an SEI-based software engineering curriculum as a degree program separate from the MBA within the College. During that time, Dr. Shoemaker's specific areas of scholarship, publication, and teaching centered on the processes of the SWEBOK, specifically specification, SQA, and SCM/sustainment. Dr. Shoemaker's transition into cybersecurity came after UDM was designated the 39th Center of Academic Excellence by the NSA/DHS at West Point in 2004. His research concentrated on the strategic architectural aspects of cybersecurity system design and implementation, as well as software assurance. He was the Chair of Workforce Training and Education for the DHS/DoD Software Assurance initiative (2007-2010), and he was one of the three authors of the Common Body of Knowledge to Produce, Acquire, and Sustain Software (2006). He was also a subject matter expert for NICE (2009) and NICE II (2010-11). Dr.
Shoemaker was also an SME for the CSEC 2017 (Human Security). This exposure led to a grant to develop curricula for software assurance and the founding of the Center for Cybersecurity and Intelligence Studies, where he currently resides. Dr. Shoemaker's final significant grant was from the DoD to develop a curriculum and teaching and course materials for Secure Acquisition (in conjunction with the Institute for Defense Analysis and the National Defense University). He has published 14 books in the field, ranging from Cyber Resilience (CRC Press) to the CSSLP All-In-One (McGraw-Hill). His latest book, "Teaching Cyber Security" (Taylor and Francis), is aimed at K-12 teachers.
This talk examines how cybersecurity relates to various fields of business and industry: how it works in those fields and the different risks and vulnerabilities that exist, which explains why building cybersecurity into the design of a product or service is so imperative. In companies today, budget managers, business managers, and engineers are making decisions about their cybersecurity options without including cybersecurity experts in that process. Without input from cybersecurity experts, some cybersecurity decisions are made with cost savings as the primary goal, and cutting corners in cybersecurity can be a bad idea.
Reputation systems are crucial to online platforms' health. They are prevalent across online marketplaces and social media platforms, either visibly (e.g., as star ratings and badges) or invisibly as signals that feed into recommendation engines. In theory, good behavior (e.g., honest, accurate, high-quality) begets high reputation, while poor behavior is deterred and pushed off the platform. In this talk, I will discuss how these systems seem to fulfill this mission only coarsely. On one platform, we were able to predict 2 times more suspensions than the reputation system in place using other public signals. In another study, we found that users with high reputation signals were suspended at significantly lower rates (up to 3 times less) for the same number of offenses and behavior as regular users, which suggests they may be impairing content moderation efforts. I will provide some hypotheses to explain these results and offer preliminary findings from current work. About the speaker: Alejandro is a 5th year PhD student at Carnegie Mellon University in Societal Computing, advised by Prof. Nicolas Christin. He is interested in measuring social influence in online communities adjacent to underground economies. His recent work focuses on how reputation is leveraged in anonymous marketplaces, p2p marketplaces, and cryptocurrency communities. He is a recipient of a CMU CyLab Presidential Fellowship, as well as an IEEE S&P Distinguished Paper Award. Prior to CMU, he obtained a B.S. from The Pennsylvania State University, where he worked with Prof. Peng Liu and Prof. Xinyu Xing on a variety of systems security projects. A Paraguayan native, Alejandro has been invited to talk about his work at the Paraguayan Central Bank and the Paraguayan National Police.
The frequency, materiality, and impact of cybersecurity incidents are at a level that the business world has never seen before. CISOs are at the forefront of this. The speaker has experience with developing cybersecurity products and managing IT infrastructure and security from startup to massive scale. The talk will go through the roles, responsibilities, rewards, and perils of being a CISO in a modern enterprise software company in these turbulent times. We will explore some hard problems that need to be solved for the good guys to continue winning. About the speaker: Sanket Naik is the founder and CEO at Palosade, building modern AI-powered cyber threat intelligence solutions to defend companies from AI-weaponized adversaries. He enjoys giving back to startups through investing and advisory roles. Before Palosade, he was the SVP of engineering for Coupa. In this role, he built the cloud and cybersecurity organization over 12 years, from the ground up through an initial public offering followed by significant global growth. He has also held engineering roles at HP and Qualys. Sanket holds a BS in electronics engineering from the University of Mumbai and an MS in CS from Purdue University, with research at the multi-disciplinary CERIAS cybersecurity center.
In the realm of risk, cybersecurity is a fairly new idea. Most people currently entering the cybersecurity profession do not remember a time when cybersecurity was not a major concern. Yet at the time of this writing, reliance on computers to run business operations is less than a century old. Prior to this time, operational risk was more concerned with natural disasters than man-made ones. Fraud and staff mistakes are also part of operational risk, so as dependency on computers steadily increased from the 1960s through the 1980s, a then-new joke surfaced: To err is human, but if you really want to screw things up, use a computer. Foundational technology risk management concepts have been in place since the 1970s, but the tuning and application of these concepts to cybersecurity were slow to evolve. Yet there is no doubt that cybersecurity risk management tools and techniques have continuously improved. Although the consequences of cybersecurity incidents have become dramatically more profound over the decades, available controls have also become more comprehensive, more ubiquitous, and more effective. This seminar is intended to make the fundamentals of cybersecurity risk management visible to those who are contributing to it, and comprehensible to those looking in from the outside. Like any effort to increase visibility, increasing transparency in cybersecurity requires clearing out some clouds first. That is, in the tradition of Spaf's recent book on the topic*, busting some cybersecurity management myths that currently cloud management thinking about cybersecurity and replacing them with risk management methodologies that work. *Spafford, G., Metcalf, L., and Dykstra, J. (2022). Cybersecurity Myths and Misconceptions: Avoiding the Hazards and Pitfalls that Derail Us. Addison-Wesley. About the speaker: Dr. Jennifer L. Bayuk
is experienced in a wide variety of cybersecurity positions, including Wall Street Chief Information Security Officer, Global Bank Operational Risk Management, Financial Services Internal Audit, Big 4 Information Systems Risk Management, Bell Labs Security Software Engineer, Risk Management Software Company Founder, and Expert Witness. She is the author of multiple textbooks and articles on a variety of cybersecurity topics and is a frequent contributor to cybersecurity conferences, boards, committees, and educational forums. Jennifer has created curriculum on numerous information security, cybersecurity, and technology risk topics for conferences, seminars, corporate training, and graduate-level programs, and is an adjunct professor at Quinnipiac University, Kean University, and Stevens Institute of Technology. She has a BS in Computer Science and Philosophy from Rutgers University, an MS (1992) in Computer Science, and a PhD (2012) in Systems Engineering from Stevens Institute of Technology.
We must be methodical and intentional about how Artificial Intelligence (AI) systems are designed, developed, deployed, and operationalized, particularly in critical infrastructure contexts. CISA, the UK-NCSC, and our partners advocate a secure-by-design approach where security is a core requirement and integral to the development of AI systems from the outset, and throughout their lifecycle, to build wider trust that AI is safe and secure to use. This talk will focus on challenges and opportunities in the secure deployment, operation, and maintenance of AI software systems. The talk will use publications on the practice of coordinated vulnerability disclosure as a motivating example. About the speaker: Dr. Jonathan Spring is a cybersecurity specialist in the Cybersecurity and Infrastructure Security Agency. Working within the Cybersecurity Division's Vulnerability Management Office, he focuses on researching and producing reliable evidence to support effective cybersecurity policies at various levels of vulnerability management, machine learning, and threat intelligence. Prior to joining CISA, Jonathan held positions in the Computer Emergency Response Team (CERT) division of the Software Engineering Institute (SEI) at Carnegie Mellon University and was an adjunct professor at the University of Pittsburgh's School of Information Sciences.
Tensor decomposition is a powerful unsupervised machine learning method used to extract hidden patterns from large datasets. This presentation aims to illuminate the extensive applications and capabilities of tensors within the realm of cybersecurity. We offer a comprehensive overview by encapsulating a diverse array of capabilities, showcasing the cutting-edge employment of tensors in the detection of network and power grid anomalies, identification of SPAM e-mails, mitigation of credit card fraud, and detection of malware. Additionally, we delve into the utility of tensors for classifying malware families, pinpointing novel forms of malware, analyzing user behavior, and utilizing tensors for data privacy through federated learning techniques. About the speaker: Maksim E. Eren is an early career scientist in A-4, Los Alamos National Laboratory's (LANL) Advanced Research in Cyber Systems division. He graduated summa cum laude with a Bachelor's in Computer Science from the University of Maryland, Baltimore County (UMBC) in 2020 and earned his Master's in 2022. He is currently pursuing his Ph.D. at UMBC's DREAM Lab, and he is a Scholarship for Service CyberCorps alumnus. His interdisciplinary research interests lie at the intersection of machine learning and cybersecurity, with a concentration in tensor decomposition. His tensor decomposition-based research projects include large-scale malware detection and characterization, cyber anomaly detection, data privacy, text mining, and high performance computing. Maksim has developed and published state-of-the-art solutions in anomaly detection and malware characterization. He has also worked on various other machine learning research projects, such as detecting malicious hidden code, adversarial analysis of malware classifiers, and federated learning.
At LANL, Maksim was a member of the 2021 R&D 100 award-winning project SmartTensors, where he released fast tensor decomposition and anomaly detection software, contributed to the design and development of various other tensor decomposition libraries, and developed state-of-the-art text mining tools.
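To make the idea concrete, here is a minimal, illustrative sketch of decomposition-based anomaly detection (not the speaker's SmartTensors code): it computes a rank-1 factorization of a small host-by-hour traffic matrix with alternating least squares and flags cells whose reconstruction error is large. The matrix, threshold, and traffic values are invented for the example; real tensor methods generalize this to higher-order data and higher ranks.

```python
# Illustrative rank-1 "tensor" (here: matrix) factorization via
# alternating least squares; cells with large reconstruction error
# are treated as anomalies. Data and threshold are made up.

def rank1_als(M, iters=50):
    rows, cols = len(M), len(M[0])
    u, v = [1.0] * rows, [1.0] * cols
    for _ in range(iters):
        vv = sum(x * x for x in v)
        u = [sum(M[i][j] * v[j] for j in range(cols)) / vv for i in range(rows)]
        uu = sum(x * x for x in u)
        v = [sum(M[i][j] * u[i] for i in range(rows)) / uu for j in range(cols)]
    return u, v

def anomalies(M, threshold):
    u, v = rank1_als(M)
    return [(i, j, abs(M[i][j] - u[i] * v[j]))
            for i in range(len(M)) for j in range(len(M[0]))
            if abs(M[i][j] - u[i] * v[j]) > threshold]

# hosts x hours: all rows share one traffic profile, except one burst
traffic = [
    [10, 20, 10, 20, 10],
    [10, 20, 10, 20, 10],
    [10, 20, 40, 20, 10],   # burst at (2, 2)
    [10, 20, 10, 20, 10],
]
flagged = anomalies(traffic, threshold=10)
print(flagged)  # only the (2, 2) burst exceeds the threshold
```

The rank-1 model captures the shared day/night traffic pattern, so the burst that breaks the pattern shows up as the one large residual.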
In the course of the talk I'll discuss current authentication challenges, the looming problem with cracking public key encryption, and short and medium term recommendations to help folks stay secure. About the speaker: Bill helps clients achieve an effective information security posture spanning endpoints, networks, servers, cloud, and the Internet of Things. This involves technology, policy, and procedures, and impacts acquisition/development through deployment, operations, maintenance, and replacement or retirement. During his five-decade IT career, Bill has worked as an application programmer with the John Hancock Insurance company; an OS developer, tester, and planner with IBM; a research director and manager at Gartner for the Information Security Strategies service and the Application Integration and Middleware service, and served as CTO of Waveset, an identity management vendor acquired by Sun. At Trend Micro, Bill provided research and analysis of the current state and future trends in information security. He participates in the ISO/IEC 62443 standards body and the CISA ICSJWG on ICT security. He runs his own consulting business providing information security, disaster recovery, identity management, and enterprise solution architecture services. Bill has over 180 publications and has spoken at numerous events worldwide. Bill attended MIT, majoring in Mathematics. He is a member of CT InfraGard and ISACA.
Exploitations in cybersecurity continue to increase in sophistication and prevalence. The purpose of this talk is to discuss how the evolution of malware has led to increased exploitation and then discuss ways to enhance the cybersecurity paradigm. About the speaker: Solomon Sonya (@0xSolomonSonya) is a Computer Science Graduate Student at Purdue University. He earned his undergraduate degree in Computer Science and Master's Degrees in Computer Science, Information Systems Engineering, and Operational Art and Strategy. Solomon routinely develops new cyber security tools and presents his research, leads workshops, and delivers keynote addresses at cyber security conferences around the world. Prior to attending Purdue, Solomon was a Distinguished Computer Science Instructor at the United States Air Force Academy and Research Scholar at the University of Southern California, Los Angeles. Solomon's previous keynote and conference engagements include: DEFCON and BlackHat USA in Las Vegas, NV, SecTor Canada, Hack in Paris and LeHack, France, HackCon Norway, ICSIS – Toronto, ICORES Italy, BruCon Belgium, CyberCentral – Prague and Slovakia, Hack.Lu Luxembourg, Shmoocon DC, BotConf - France, CyberSecuritySummit Texas, SANS Digital Forensics Summit, DerbyCon Kentucky, SkyDogCon Tennessee, HackerHalted Georgia, Day-Con Ohio, TakeDownCon Connecticut, Maryland, and Alabama, and AFCEA – Colorado Springs and Indianapolis.
Evil has been lurking on the Internet since its inception. The IETF recognized this, releasing RFC 3514 on the evil bit. Unfortunately, it isn't widely adopted, so we have to find our evil in other ways. Grepping is a time-honored way of finding needles in haystacks, so let's see how much evil we can find in the DNS haystack... and can we answer the question, "Why is it so easy?" About the speaker: Leigh Metcalf is a Senior Network Security Research Analyst at the Carnegie Mellon University Software Engineering Institute's cybersecurity (CERT) division. CERT is composed of a diverse group of researchers, software engineers, and security analysts who are developing cutting-edge information and training to improve the practice of cybersecurity. Before joining CERT, Leigh spent more than 10 years in industry working as a systems engineer, architect, and security specialist.
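As a toy illustration of grepping the DNS haystack (a sketch, not the speaker's actual methodology), the example below scans query logs for domains that match suspicious patterns or whose first label has unusually high character entropy, a common tell of algorithmically generated domains. The patterns, threshold, and sample queries are all invented:

```python
# Toy "grep for evil" over DNS queries: flag domains matching
# suspicious regexes, or whose first label looks random (high
# character entropy). Patterns and threshold are invented.
import math
import re

BAD_PATTERNS = [
    re.compile(r"\.zip\."),           # deceptive TLD-in-the-middle tricks
    re.compile(r"^[0-9a-f]{16,}\."),  # very long hex labels
]

def entropy(label):
    probs = [label.count(c) / len(label) for c in set(label)]
    return -sum(p * math.log2(p) for p in probs)

def looks_evil(domain, entropy_threshold=3.5):
    if any(p.search(domain) for p in BAD_PATTERNS):
        return True
    first = domain.split(".")[0]
    return len(first) >= 12 and entropy(first) > entropy_threshold

queries = ["www.example.com", "xj4k9q2zr8wt0pl3.net", "mail.purdue.edu"]
evil = [q for q in queries if looks_evil(q)]
print(evil)  # the random-looking domain stands out
```

Even this crude filter separates the random-looking domain from ordinary names, which hints at why so much evil is greppable in practice.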
The field of cybersecurity is constantly evolving, and Device Fingerprinting (DFP) has emerged as a crucial technique for identifying network devices based on their unique traffic data. This is necessary to protect against sophisticated cyber-attacks. However, automating device classification is complex, as it involves a vast and diverse feature space derived from various network layers, such as application, transport, and physical. With advances in machine learning and deep learning, DFP has become more accurate and adaptable, integrating multi-layered data and underscoring the need for robust security measures. The study of DFP, especially in the context of emerging protocols like HTTP/2 and HTTP/3, remains a critical area of research in cybersecurity. This talk focuses on enhancing real-time threat detection while navigating the challenges of scalability. About the speaker: Dr. Sandhya Aneja is a researcher, inventor, and computer scientist with a strong passion for teaching. She is an Assistant Professor at Marist College in Poughkeepsie, NY, and was a Visiting Research Scholar at the Department of Computer Science, Purdue University. She has over 15 years of experience teaching computer science to undergraduate and graduate students at the University of Delhi and the University of Brunei. As a researcher, she contributed to developing a mobile application that facilitates the matching of interests across available mobile devices and allows exchanging messages and files. The application broadcasts names and a limited number of keywords representing users' interests, without any connection, to devices in a nearby region; the broadcast region forms a mobile wireless network limited by the Wi-Fi range of around 200 meters. She also received a US patent on this technology. As a computer scientist, she has received project funding from the University of Delhi as PI and the University of Brunei as co-PI.
She has extensively worked on Brunei government-funded projects with IBM Researchers. She is also a contributor to Sandia and DARPA-funded projects at Purdue University.
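To illustrate the DFP idea in miniature (a sketch, not Dr. Aneja's system), the example below represents each device class by simple per-flow features, here mean packet size and mean inter-arrival time, and classifies unseen traffic by nearest centroid. All feature values and device labels are invented; real systems use far richer multi-layer features and learned models.

```python
# Nearest-centroid sketch of device fingerprinting: each device class
# is summarized by a centroid over (mean packet size in bytes,
# mean inter-arrival time in seconds). Numbers are invented.

def centroid(samples):
    dims = len(samples[0])
    return tuple(sum(s[d] for s in samples) / len(samples) for d in range(dims))

def classify(sample, centroids):
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sqdist(sample, centroids[label]))

training = {
    "thermostat": [(64.0, 30.0), (70.0, 28.0)],   # small, periodic packets
    "camera":     [(900.0, 0.2), (1100.0, 0.3)],  # large, frequent packets
}
centroids = {label: centroid(samples) for label, samples in training.items()}
prediction = classify((950.0, 0.25), centroids)
print(prediction)  # nearest to the camera profile
```

The hard part in practice is exactly what the abstract highlights: choosing and normalizing features across layers, and keeping classification fast enough for real-time use.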
Advanced Persistent Threat (APT) attacks are increasingly targeting modern factory floors. Recovery from a cyberattack is a complex task that involves identifying the root causes of the attack in order to thoroughly cleanse the compromised systems and remedy all vulnerabilities. As a result, provenance analysis, which can correlate individual attack footprints and thus "connect the dots", is very much desired. Provenance analysis has been well studied in traditional IT systems, yet the OS-level attack model employed by prior work cannot effectively capture application semantics in physical control systems. Recent efforts have been made to develop custom provenance models that uniquely represent physical attacks in cyber-physical systems. Nevertheless, existing techniques still fall short due to their unreliable semantic recovery, inability to reconstruct process contexts, and lack of cross-domain causality tracking. In this talk, we present ICSTracker, which aims to enable provenance analysis in the new setting of industrial IoT. To recover the physical semantics of controller routines, we utilize data mining to identify function call sequences that align with specific physical actions. To establish process contexts, we resort to the data access patterns in controller code to discover and keep track of critical state variables that are shared among multiple iterations of control logic. To uncover the methods attackers employ in exploiting digital vulnerabilities to cause physical damage, we perform a cross-domain causality analysis, associating controller operations with OS-level events through their mutual access to shared digital assets. We have implemented and tested ICSTracker in a Fischertechnik testbed. Our preliminary results are promising, demonstrating that ICSTracker can precisely capture cross-domain cyber-physical attacks in a semantics- and context-aware fashion.
About the speaker: Mu Zhang is an Assistant Professor with the Kahlert School of Computing at the University of Utah. Zhang works at the unique intersection between systems security and cyber-physical systems. He is the lead PI of the DARPA HACCS project Semantics-Aware Discovery of Advanced Persistent Threats in Cyber-Physical Systems, which aims to detect advanced attacks in CPS settings. He has also been key personnel on the NSF CPS Frontiers project, Software Defined Control for Smart Manufacturing Systems, and has led the technical effort to develop a security vetting system for controller programs. Zhang has extensively published in top-tier security venues (S&P, CCS, NDSS), and received an ACM SIGSOFT Distinguished Paper Award at ISSTA 2023, an ACM SIGPLAN Distinguished Paper Award at OOPSLA 2019, and a Best Paper Honorable Mention at CCS 2022.
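One ingredient of the approach described above, mapping controller call sequences to physical actions, can be caricatured as subsequence matching against mined signatures. The sketch below is purely illustrative: the signature sequences, call names, and trace are invented, and the real system mines such signatures from data rather than hard-coding them.

```python
# Caricature of mapping controller call sequences to physical
# actions: match (invented) signature subsequences against a trace.

SIGNATURES = {
    "open_valve":  ["read_sensor", "check_threshold", "write_coil"],
    "start_motor": ["read_sensor", "ramp_pwm", "write_register"],
}

def is_subsequence(needle, haystack):
    it = iter(haystack)  # each membership test advances the iterator,
    return all(call in it for call in needle)  # so order is enforced

def label_trace(trace):
    return [action for action, sig in SIGNATURES.items()
            if is_subsequence(sig, trace)]

trace = ["init", "read_sensor", "log", "check_threshold", "write_coil", "idle"]
labels = label_trace(trace)
print(labels)  # only the open_valve signature matches in order
```

Labeling traces this way is what lets a provenance graph speak in physical terms ("the valve was opened") rather than only in raw system calls.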
This is a hybrid event. Students are encouraged to attend in person: STEW G52 (Suite 050B). Commercial or defense systems are often developed first to meet a mission or customer need. Security of many of these systems is often developed at a component level by each component's product team. The product teams often maintain robust security for their component within the system, but security gaps begin to form when the complete system is assembled. Adversaries will seek to exploit these gaps in the overall system design as they look for the path of least resistance to achieve their goals. These adversaries do not limit themselves to one exploitation domain and will often pivot across domains in their execution of an attack. To guard against these multi-domain threats, we as security practitioners and researchers need to work together to adjust our world view on the larger system-of-systems security challenge that we face. This presentation begins the process of enumerating some of these gaps, explaining how they came into existence, and providing potential research avenues to address them. About the speaker: Dr. Robert Denz serves as the Director of the Secure and Resilient Systems group at Riverside Research. In this role, he leads a team of researchers who ensure software provenance, security, reliability, and resilience in systems. To achieve these objectives, the Secure and Resilient Systems group conducts innovative research in formal methods, AI-driven secure waveform design, and secure operating system implementations for the Department of Defense (DoD) and Intelligence Community (IC). Dr. Denz has over 15 years of experience working on and leading cybersecurity and anti-tamper research programs for DARPA and the DoD. He was recently the Principal Investigator for DARPA Dispersed Computing, where he oversaw a multi-disciplinary team that delivered distributed resilient mesh routing protocols to the tactical edge. Dr.
Denz also served as a research lead for DARPA Mission Resilient Clouds (MRC), contributed to the DARPA Clean-slate design of Resilient, Adaptive Secure Hosts (CRASH), and was an original designer of the Air Force Cross-Domain Access SecureView Hypervisor. Through these efforts, he gained extensive knowledge of x86 processor internals and secure operating systems. Dr. Denz received his PhD in secure hypervisor and kernel design from the Thayer School of Engineering at Dartmouth College in 2016.
The challenge of building a security program is that there are too many things you could be doing, which makes it hard for security leaders to decide what to do next. All too often, companies pivot from fighting one fire to another. They end up cobbling together a security program with duct tape, baling wire, and a handful of solutions implemented as a reaction to their own incidents and major headlines about other companies' breaches. How should a CISO evaluate building their security program? In this talk, I will explore a mental model that CISOs can use - one I used in my 20 years as a CISO - to evaluate the state of their security program and to identify where there are gaps in coverage. At a high level, the framework is four-dimensional, covering width (asset coverage), height (control comprehensiveness), depth (risk context), and time (maturity continuity). I will use case studies to highlight ways security programs often fail on one of these axes, as a means for participants to connect the programs they work on to the shortcomings others have already experienced. Most ways to evaluate a security program become frameworks with an overly strong focus on detail, which lose the holistic view of the health of a security program; even the "known unknowns" (we're pretty sure there is a risk, but don't have specifics) become forgotten as the focus narrows to the "known knowns" (we've documented the risk).
The "unknown unknowns," of course, almost never get visibility. Combining a mental model for assessing the overall maturity of the program with a high-level risk comparison system (the "Pyramid of Pain") allows a CISO to identify areas for improvement to mitigate risk in the future. Case studies from my time at Akamai will be shared (demonstrating not only how to quickly assess risk, but how to understand risk areas that may take years to mitigate), including the risk areas whose mitigation helped propel Akamai into the security leviathan it is today. About the speaker: Andy Ellis is a seasoned technology and business executive with deep expertise in cybersecurity, managing risk, and leading an inclusive culture. He is the founder and CEO of Duha, a boutique advisory firm focused on providing strategic consulting in the areas of Leadership, Management, Cybersecurity, Technology Risk, and Enterprise Risk Management. He is the author of 1% Leadership, an Operating Partner at YL Ventures, Advisory CISO at Orca Security, and an advisor to cybersecurity startups. Widely respected across the cybersecurity industry for his pragmatic approach to aligning security and business needs, Andy regularly speaks and writes on cybersecurity, leadership, diversity & inclusion, and decision-making. Ellis previously served as the Chief Security Officer of Akamai Technologies, where he was responsible for the company's cybersecurity strategy, including leading its initial forays into the cybersecurity market. In his twenty-year tenure at Akamai, Andy led the information security organization from a single individual to a 90+ person team, over 40% of whom were women. Andy has received a wide variety of accolades, including the CSO Compass Award, Air Force Commendation Medal, Spirit of Disneyland Award, Wine Spectator Award of Excellence (for The Arlington Inn), the SANS DMA Podcast of the Year (for Cloud Security Reinvented), and was the winner of the Sherman Oaks Galleria Spelling Bee.
He was inducted into the CSO Hall of Fame in 2021. After receiving a degree in computer science from MIT, Andy served as an officer in the United States Air Force with the 609th Information Warfare Squadron and the Electronic Systems Center.
This is a hybrid event. Students are encouraged to attend in person: STEW 209. Operational technology (OT) and industrial control systems (ICS) need innovative cybersecurity solutions that go beyond compliance-based security controls in order to be more resilient against increasing cyber threats. This talk describes MITRE Infrastructure Susceptibility Analysis (ISA), which helps ICS/OT organizations effectively assess risk and prioritize mitigations. About the speaker: As a science and technology leader and strategist, Dr. Wen Masters' career has spanned 30+ years with government, academia, R&D centers, and not-for-profit organizations, leading impactful science and technology research and development. Currently, Wen is Vice President for Cyber Technologies at the MITRE Corporation, a not-for-profit organization that manages six federally funded research and development centers with a mission to solve problems for a safer world. In this role, Wen drives MITRE's cybersecurity strategy, champions MITRE's cybersecurity capabilities, and oversees MITRE's innovation centers, with a team of 1,200 professionals developing innovative technologies that address the nation's toughest cyber challenges to deliver capabilities for sponsors and the public. Before joining MITRE, Wen was Deputy Director of Research at Georgia Tech Research Institute, where she oversaw research in data science, information science, communications, computational science and engineering, quantum information science, and cybersecurity. Prior to Georgia Tech, Wen spent more than two decades as a federal government civilian and a member of the Senior Executive Service at the Office of Naval Research (ONR) and the National Science Foundation (NSF). At NSF, she served as the Lead Program Director for the Math Priority Area and a Managing Director for two Mathematical Sciences Institutes.
At ONR, she led the Navy's Integrated Science and Technology research and development portfolio in applied mathematics, computer science and engineering, information science, communications, machine learning and artificial intelligence, electronics, and electrical engineering, as well as their applications for warfighting capabilities and national security. For the impact of her efforts, the Navy honored Wen with many awards, including the Distinguished Civilian Service Medal, the highest honorary award given by the Secretary of the Navy. Before her long career in the federal government, Wen worked at the Jet Propulsion Laboratory in Pasadena, California, where she was responsible for orbit determination for NASA's deep space exploration missions, including Magellan, Galileo, and Cassini. Wen is a member of the National Academy of Sciences Naval Studies Board, the Board of Trustees of the UCLA Institute for Pure and Applied Mathematics, and the External Advisory Board of the Texas A&M University Global Cyber Research Institute.
During the last several years, there has been growing concern that the development of quantum computers could undermine the public-key cryptography that is a fundamental pillar of security on the Internet. Recently, the U.S. Government's National Institute of Standards and Technology has released draft standards for post-quantum encryption algorithms that can replace the existing, and potentially vulnerable, public-key encryption. But while the future of encryption will depend on new algorithms, there are many other factors that will influence security in the decades to come. In 2022, the National Academies of Sciences, Engineering, and Medicine released a report on "The Future of Encryption" that examines factors including technical aspects of cryptography, societal and policy considerations, and product engineering. The report presents a series of findings that apply broadly and paints three alternative scenarios for the future of encryption. This presentation, based largely on the Academies report, will provide researchers, engineers, and policy professionals with context in which to view future developments and concepts for prioritizing future actions. About the speaker: Steve Lipner is the executive director of SAFECode, an industry nonprofit focused on software security assurance. He was previously partner director of software security at Microsoft, where he was the creator and long-time leader of the Security Development Lifecycle (SDL) and was responsible for software integrity policies and government security evaluations. Steve also serves as the chair of the U.S. Government's Information Security and Privacy Advisory Board. He has more than a half century of experience in cybersecurity as a researcher, engineer, and development manager and is named as co-inventor on twelve U.S. patents. He is a member of the National Academy of Engineering and chaired the Academies' Committee on the Future of Encryption. Steve's CV is available at www.stevelipner.org.
Courtney Falk will discuss his ongoing research into Pod People, a long-running search-engine optimization spam campaign. This talk combines threat hunting and threat intelligence with real-world applications, including insights into how cybercriminals work and how organizations can collaborate. All publicly accessible indicators collected by this project are published online to contribute to the good of the commons. About the speaker: Dr. Courtney Falk is an information security professional with over fifteen years of experience in the government, academic, and public sectors. He earned his doctor of philosophy degree from Purdue University in the interdisciplinary information security program. When Courtney is not researching critical infrastructure for Purdue, he enjoys painting miniature figures and playing tabletop war games.
The number of software vulnerabilities found in modern computing systems has been on the rise for some time now. As more and more software is developed, software testing is increasingly becoming an important part of the software development cycle, with the goal of rooting out any and all vulnerabilities before public release. However, finding software vulnerabilities is not a trivial task, especially in complex software systems with thousands of lines of code and complicated system interactions. Just a single vulnerability making its way into a software product or service can have devastating consequences if not discovered and patched in good time. Luckily, there is a plethora of available software testing tools and techniques. One such software testing approach is called fuzzing. Fuzzing is an automated program testing technique introduced in the late 1980s that has become a critical tool in a software tester's toolkit. It is based on the simple idea of feeding software lots of mutated inputs and monitoring the program state for any anomalous behavior. Fuzzers have had a long and successful track record of finding software vulnerabilities. This success brought forth new and innovative approaches to improve the overall fuzzing process in all aspects. However, despite its success and widespread use, fuzzing is not a "one size fits all" approach. Software testers still have to tailor their fuzzing methodology to the software under test. Therefore, understanding the inner workings of fuzzers is vital to determining when and how to use them most effectively. About the speaker: Derek Dervishian works as a cybersecurity research engineer at Lockheed Martin - Advanced Technology Laboratories, an advanced applied R&D division of the Lockheed Martin corporation specializing in cyber, autonomy, data analytics, and much more.
In this role, Derek has worked on several R&D projects across multiple technical areas, including vulnerability research and binary analysis. Derek graduated from Purdue University with a Bachelor's degree in Computer Engineering in December 2020. Derek is currently pursuing a Master's degree in Computer Science from the Georgia Institute of Technology.
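The core loop described above - mutate an input, run the target, watch for anomalous behavior - can be captured in a few lines. The sketch below is a minimal mutation fuzzer against a contrived buggy parser; the target, seed, and crash condition are invented for the demonstration.

```python
# Minimal mutation fuzzer: randomly mutate a seed input, run the
# (contrived) target, and collect inputs that crash it.
import random

def parse(data: bytes):
    # toy target: "crashes" on a malformed length field
    if len(data) > 3 and data[0] == 0x00 and data[3] > 0x7F:
        raise ValueError("length field overflow")

def mutate(seed: bytes, rng):
    data = bytearray(seed)
    for _ in range(rng.randint(1, 4)):   # rewrite 1-4 random bytes
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(seed, trials=2000, rng=None):
    rng = rng or random.Random(1234)     # fixed seed: reproducible run
    crashes = []
    for _ in range(trials):
        candidate = mutate(seed, rng)
        try:
            parse(candidate)
        except ValueError:
            crashes.append(candidate)
    return crashes

crashes = fuzz(b"\x00\x01\x02\x03\x04")
print(f"found {len(crashes)} crashing inputs")
```

Real fuzzers add what this sketch lacks: coverage feedback to guide mutations, corpus management, crash deduplication, and instrumentation to detect subtler anomalies than an exception.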
Tracking technologies are proliferating at an increasingly high rate in apps, IoT devices, websites, and a wide range of files. They are not only impacting privacy in wider and more harmful ways, but have also extended far beyond the digital world to impact physical safety. Such tools can certainly be very beneficial when used responsibly and with informed awareness of the cybersecurity and privacy risks. However, when they are used without establishing technical and non-technical boundaries, and without taking risk mitigation actions, the associated surveillance activities can, and have, brought physical harms. I was an expert witness for a case a couple of years ago involving a stalker's use of his victim's smart car to find and almost fatally assault her. I'm currently an expert witness for two separate cases involving the use of Meta Pixels, Conversion APIs, cookies, and other types of tracking tech for surveillance of online activities. Virtually daily, there are news articles reporting privacy invasions by digital trackers, drones, security cameras, and more. I will present several real-life use cases and discuss the technical and non-technical capabilities that could have been identified through risk assessment activities before such products were made publicly available - assessments that would have informed the security and privacy capabilities needed to support privacy and cybersecurity protections and physical safety. About the speaker: Rebecca Herold has over 30 years of security, privacy, and compliance experience. She is founder of The Privacy Professor Consultancy (2004) and of Privacy & Security Brainiacs SaaS services (2021) and has helped hundreds of clients throughout the years.
Rebecca has been a subject matter expert (SME) for the National Institute of Standards and Technology (NIST) on a wide range of projects since 2009, including: 7 ½ years leading the smart grid privacy standards creation initiative and co-authoring those informative references and standards; 2 years as a co-author and SME member of the team that created the Privacy Framework (PF) and associated documents; 3 years as an SME team member and co-author of the internet of things (IoT) technical and non-technical standards and associated informative references; and, throughout the years, performing proof of concept (PoC) tests for a variety of technologies, such as field electricity solar inverters, PMU reclosers, and associated sensors. Rebecca has served as an expert witness for cases covering HIPAA, privacy compliance, criminals using IoT devices to track their victims, stolen personal data of retirement housing residents, tracking app and website users via Meta Pixels and other tracking tech, and social engineering using AI. Rebecca has authored 22 books and was an adjunct professor for 9 ½ years for the Norwich University MSISA program. Since early 2018, Rebecca has hosted the Voice America podcast/radio show, Data Security & Privacy with the Privacy Professor. Rebecca is based in Des Moines, Iowa, USA. www.privacysecuritybrainiacs.com
With the ever-accelerating computerization of once strictly mechanical systems, information security threats are only expected to increase. This rapidly unfolding process calls into question whether we can promptly cope with the security threats it entails. Unfortunately, a commonly observed trend is for computerization to advance steadily while paying little attention to security until a vulnerability is discovered, often by an external actor. Only then does a quest for a suitable security measure begin. In sum, security is considered only in reaction to manifest breaches. This comes at a high price, as a fix is often not found quickly after the breach. In this talk, I will explain how to take a proactive vulnerability identification and defense construction approach to better secure cyber-physical systems. I will discuss two main themes of my research: 1) vulnerability identification and 2) defense construction, with a focus on the context of Controller Area Network (CAN) systems. About the speaker: Dr. Khaled Serag is a post-doctoral research assistant at Purdue University. He finished his Ph.D. at Purdue in August 2023. His broad research area is information security. Since he joined Purdue, he has been working closely with Dr. Dongyan Xu and Dr. Z. Berkay Celik on several automotive and ICS security projects. He also has industrial research experience from working with Boeing as a cybersecurity researcher, where he was involved in several security research projects pertaining to avionic networks, mesh networks, IoT devices, and other areas.
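The CAN vulnerability classes this line of research targets stem largely from the protocol's design: frames carry an identifier but no sender-authentication field, and bus arbitration always favors the lowest identifier. A minimal Python model (the identifiers and the `arbitration_winner` helper are illustrative, not taken from the talk) sketches why identifier spoofing is possible on a CAN bus:

```python
from dataclasses import dataclass

@dataclass
class CanFrame:
    # Classical CAN frame fields relevant here: note there is no
    # sender-identity or authentication field in the frame at all.
    arbitration_id: int   # 11-bit identifier; also serves as bus priority
    data: bytes           # 0-8 byte payload

def arbitration_winner(frames):
    """Model of CAN bus arbitration: the frame with the lowest
    identifier wins the bus. Because receivers trust identifiers
    rather than senders, any compromised ECU can transmit any ID."""
    return min(frames, key=lambda f: f.arbitration_id)

# A spoofed frame claiming a legitimate ID is indistinguishable to
# receivers, and a lower ID always preempts higher-ID traffic.
legit = CanFrame(arbitration_id=0x244, data=b"\x00\x10")
spoof = CanFrame(arbitration_id=0x100, data=b"\xff\xff")
print(hex(arbitration_winner([legit, spoof]).arbitration_id))  # 0x100
```

This absence of authentication is exactly what makes both proactive vulnerability identification and defense construction (e.g., intrusion detection or sender fingerprinting) active research areas for CAN.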
This is a hybrid event. Students are encouraged to attend in person: STEW G52 (Suite 050B). As the commercial and international space community grows toward a projected $1T global economy, the vast domain of space becomes increasingly congested and contested. In this seminar, the Space Information Sharing and Analysis Center (Space ISAC) and the National Cybersecurity Center (NCC) team up to share their perspectives and insights on the intersection of cyber and space, how the game is changing, and what effect this will have on government, industry, and academia. This talk will discuss technology trends in the industry and threats to space systems, and make recommendations to students and faculty about how to navigate the landscape of space domain cybersecurity over the next five years. About the speaker: Mr. Scott Sage is the Chief Operating Officer of the National Cybersecurity Center, a national-level nonprofit organization that provides collaborative cybersecurity knowledge and services to the United States. He encourages, engages, and equips others to solve worthwhile hard problems, as in his most recent assignment developing a new space cybersecurity market for Peraton Inc. He also recently took a complex IR sensor from a blank sheet of paper to launch and operation in under 24 months, and earlier conceived and executed an Insider Threat and Information Warfare behavior-based analytics R&D project that generated two patents and drew increased interest from DoD and Intelligence Community customers. 
Past accomplishments include:
· Automated Mission Impact Assessment of Network Disruptions - Patent 8347145
· Concept to Low Earth Orbit IR sensor for the Space Development Agency in under 2 years
· Northrop Grumman sector cyber and information operations strategy development
· Industry-leading technology development for scalability in satellite C2 automation
· Increased worldwide frequency access for Low Earth Orbit satellite communications
· House Armed Services Committee praise for a highly classified space advocacy plan
· Conceptualized, researched, and constructed a unique DoD Space Order of Battle annex
· Highly praised Master of Science thesis addressing satellite radiation effects
Before devoting his work full time to visionary growth development for Peraton, Scott managed counter-hypersonics development for Northrop Grumman, advanced cyber defense systems development for AT&T, and advanced space operations programs for aerospace companies and the US Navy. Scott has published international export material on cybersecurity issues associated with virtualization and cloud computing and developed a nationwide R&D network for Northrop Grumman that allowed critical technologies to be brought online for use on high-priority captures worth over $8.6B in future revenue. Scott has been a Certified Information Systems Security Professional (CISSP) and Homeland Security Expert since entering industry after completing 15 years of US Navy service as a Commander. Scott volunteered as co-chair of the Space ISAC Information Sharing Working Group and co-chair of the DHS CISA Future of Space Working Group, and has volunteered at Penrose Hospital and the Colorado Springs Rescue Mission, along with being a leader at his church. Formal degrees include an M.S. in Space Systems Electrical Engineering from the Naval Postgraduate School in Monterey, and a B.S. in Nuclear Engineering and a B.A. in Journalism & Mass Communication from Iowa State University, Ames, IA. Ms. Erin M. 
Miller is the Executive Director of the Space Information Sharing and Analysis Center (Space ISAC). Space ISAC serves as the primary focal point for the global space industry for "all threats and all hazards." After Space ISAC was stood up at the direction of the White House in 2019, Erin led it to open its operational Watch Center, alongside its Cyber Malware Analysis and Vulnerability Laboratory, in Colorado Springs, CO, USA. Under Erin's leadership, Space ISAC's headquarters facility is already serving several countries to achieve its mission of security and resilience for the global space industry. Each year Space ISAC puts on the Value of Space Summit (VOSS), co-hosted with The Aerospace Corporation at the University of Colorado Colorado Springs. Erin has over a decade of experience building meaningful tech collaborations and has formed hundreds of formal partnerships between government, industry, and academia to solve problems for war fighters and national security. As a serial entrepreneur in the non-profit space, she thrives in launching new programs and new organizations from stand-up through building and scaling operations. Erin was the Managing Director of the Center for Technology, Research and Commercialization (C-TRAC) and brought three USAF-funded programs to bear at the Catalyst Campus for Technology & Innovation (www.catalystcampus.org). Her expertise in brokering unique partnerships using non-FAR type agreements led to the stand-up of the Air Force's first cyber-focused (#securebydesign) design studio, AFCyberWorx, at the USAF Academy, and the first space accelerator, Catalyst Accelerator, at Catalyst Campus in Colorado Springs, in partnership with the Air Force Research Laboratory and AFWERX. In 2020 Erin was a recipient of the Woman of Influence award. In 2018 Erin was recognized by the Mayor of Colorado Springs with the Mayor's Young Leader (MYL) of the Year Award for Technology. 
She is also the recipient of the Southern Colorado Women's Chamber of Commerce Award for Young Female Leader in 2018. In her previous roles she developed and managed intellectual property portfolios, technology transfer strategies, export control/ITAR, secure facilities, and rapid prototyping collaborations. Erin serves on the advisory boards of CyberSatGov and CyberLEO and is a board member for the Colorado Springs Chamber of Commerce & EDC. She has guest lectured at Georgetown University, the United States Air Force Academy, the University of Colorado at Boulder, and Johns Hopkins University. She frequently speaks at notable events such as Defense Security Institute summits, CyberSatGov, State of the Space Industrial Base, and other forums focused on security, space resiliency, and critical infrastructure.
Recorded: 09/20/2023. CERIAS Security Seminar at Purdue University. Enhancing Software Supply Chain Security in Distributed Systems. Christopher Nuland, Red Hat. In the aftermath of the transformative 2020 SolarWinds breach, securing software supply chains has surged to the forefront of modern software development concerns. This incident underscored the imperative for innovative approaches to ensure software artifacts' integrity and authenticity. The Supply-chain Levels for Software Artifacts (SLSA) framework emerged in response, emphasizing secure software development processes for supply chains. As compliance standards, notably those enforced by the National Institute of Standards and Technology (NIST), intensify the call for robust security measures, the convergence of open-source technologies presents a compelling solution. In the contemporary landscape of distributed systems such as Kubernetes, the significance of signing critical artifacts, such as container images and builds, cannot be overstated. These signatures substantiate the origin and unaltered state of the artifacts, rendering them resistant to tampering or unauthorized modification. Yet, with the escalating complexity of software supply chains, compounded by the proliferation of distributed technologies, ensuring trustworthy artifact provenance becomes more formidable. This challenge is where SigStore, an innovative technology solution, steps in. SigStore enables cryptographic signing and verification of software artifacts, offering a robust mechanism to establish the authenticity of these components. By leveraging transparency log technologies, SigStore enhances the trustworthiness of the supply chain, creating a formidable barrier against malicious alterations. This talk will discuss the popular technologies in the industry that are utilizing a zero-trust software supply chain. 
It will explain why this type of supply chain is important and outline the different technologies used in conjunction with SigStore to create zero-trust supply chains within the software development and deployment lifecycle. About the speaker: Christopher Nuland has been involved with container technology since 2010, when he worked with Oak Ridge Labs and Purdue's CdmHub on containerizing their simulations with OpenVZ. He joined Red Hat in 2018 as a container specialist in the infrastructure and application development space, working primarily with Fortune 100 companies across the U.S. His work has focused mainly on cloud-native migrations into k8s-based platforms and on developing secure cloud-native zero-trust supply chains for the healthcare, life sciences, and defense sectors.
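The core idea behind artifact signing, which SigStore automates, can be sketched in a few lines: sign a digest of the artifact with a private key, and let consumers verify the signature with the corresponding public key. The sketch below (using Ed25519 from the third-party `cryptography` package) is a minimal illustration only; real SigStore tooling such as cosign additionally handles keyless certificate issuance and transparency-log entries, all omitted here. The function names are illustrative, not SigStore APIs.

```python
# Minimal sketch of artifact signing/verification. Assumes the
# 'cryptography' package is installed (pip install cryptography).
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def sign_artifact(private_key, artifact: bytes) -> bytes:
    # Sign the SHA-256 digest of the artifact (e.g. a container image blob).
    digest = hashlib.sha256(artifact).digest()
    return private_key.sign(digest)

def verify_artifact(public_key, artifact: bytes, signature: bytes) -> bool:
    # Recompute the digest and check the signature; any tampering
    # with the artifact bytes invalidates the signature.
    digest = hashlib.sha256(artifact).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

key = Ed25519PrivateKey.generate()
artifact = b"container-image-layer-bytes"
sig = sign_artifact(key, artifact)
print(verify_artifact(key.public_key(), artifact, sig))         # True
print(verify_artifact(key.public_key(), artifact + b"x", sig))  # False
```

The verification failure on the tampered artifact is the property the talk builds on: a consumer in the deployment pipeline can refuse to run any image whose signature does not verify against a trusted identity.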