Podcasts about CERIAS

  • 16 podcasts
  • 1,201 episodes
  • 51m average duration
  • 1 episode every other week
  • Latest episode: Feb 18, 2026


Latest podcast episodes about CERIAS

CERIAS Security Seminar Podcast
Thai Le, Towards Robust and Trustworthy AI Speech Models: What You Read Isn't What You Hear

Feb 18, 2026 · 38:41


Deepfake voice technology is rapidly advancing, but how well do current detection systems handle differences in language and writing style? Most existing work focuses on robustness to acoustic variations such as background noise or compression, while largely overlooking how linguistic variation shapes both deepfake generation and detection. Yet language matters: psycholinguistic features such as sentence structure, complexity, and word choice influence how models synthesize speech, which in turn affects how detectors score and flag audio. In this talk, we will ask questions such as: "If we change the way a person writes, while keeping their voice the same, will a deepfake detector still reach the same decision?" and "Are some text-to-speech and voice cloning models more vulnerable to shifts in writing style than others?" We will then discuss implications for designing robust deepfake voice detectors and for advancing more trustworthy speech AI in an era of increasingly synthetic media.

About the speaker: Thai Le is an Assistant Professor of Computer Science at Indiana University's Luddy School of Informatics, Computing, and Engineering. He obtained his doctoral degree from the College of Information Sciences and Technology at Pennsylvania State University with an Excellent Research Award and a DAAD Fellowship. His research focuses on the trustworthiness of AI/ML models, with a mission to enhance the robustness, safety, and transparency of AI technology in various sociotechnical contexts. Le has published nearly 50 peer-reviewed research works and has received two best paper presentation awards. He is a pioneer in collecting and investigating so-called text perturbations in the wild, a resource used by researchers worldwide to study the effects of adversarial human behavior on everyday use of AI/ML models. His work has also been featured in ScienceDaily, DefenseOne, and Engineering and Technology Magazine.

CERIAS Security Seminar Podcast
Bethanie Williams, AI-Assisted Cyber-Physical Attack Detection in Smart Manufacturing Systems

Feb 11, 2026 · 47:07


The rise of Industry 4.0 has transformed manufacturing through the integration of cyber-physical systems, connectivity, and real-time data exchange into increasingly automated and intelligent platforms. While these advances improve productivity and efficiency, they also introduce vulnerabilities to cyber-physical attacks that can degrade product quality, damage equipment, and pose safety risks. Effective detection depends on understanding which data sources and levels of granularity provide sufficient visibility for accurate anomaly detection and attack identification. Replicated environments, such as digital twins (DTs), help address the challenges of collecting high-fidelity data and executing complex attack scenarios in live production systems.

This talk presents an AI-assisted framework for detecting cyber-physical attacks in smart manufacturing using real machine experimentation complemented by DT-based replication. The framework evaluates multiple data sources, ranging from high-level operational data to low-level control and side-channel signals, to understand how data fidelity and context influence detection performance. A hardware-in-the-loop (HIL) DT is used to replicate machine behavior, safely execute attacks, and enable controlled experimentation that would be impractical in live production environments. Through experiments on a real CNC machining system and its corresponding HIL-based DT, multiple cyber-physical attack scenarios are evaluated using statistical, machine learning, and deep learning-based detection methods. Results demonstrate that detection effectiveness is highly dependent on attack type and data granularity, highlighting the need for domain-aware, multi-source monitoring strategies. The framework is further extended to additive manufacturing, illustrating how insights derived from CNC systems can guide attack detection in related manufacturing domains. Overall, this work demonstrates how combining AI-based detection with real-world experimentation and DT technologies enables more robust and practical security analysis for cyber-physical manufacturing systems.

About the speaker: Dr. Bethanie Williams is an R&D, S&E Cybersecurity Engineer at Sandia National Laboratories, where she specializes in applying artificial intelligence (AI) to enhance the security and resilience of cyber-physical systems in critical infrastructure, including power grid systems, healthcare facilities, and advanced manufacturing. She is also actively involved in the Cybersecurity Manufacturing Innovation Institute (CyManII) through her work at Sandia. Bethanie earned her Bachelor of Arts degree as a triple major in Mathematics, Spanish, and Computer Science from Berea College in 2020. During her time at Berea, she was a Bonner Scholar and a member of the women's basketball team, earning All-American honors for her athletic achievements. She completed her Master of Science in Computer Science with a concentration in Cybersecurity at Tennessee Technological University in 2022, under the supervision of Dr. Ambareen Siraj, and earned her Ph.D. in Engineering with a major in Computer Science in 2025 under the guidance of Dr. Muhammad Ismail. Her dissertation, titled "Multi-Source Data Analysis and an Effective AI-Assisted Detection Framework for Cyber-Physical Attacks in Smart Manufacturing," focused on leveraging AI-driven approaches and analyzing various data sources to detect and mitigate cyber-physical attacks in manufacturing systems. Throughout her graduate studies, Bethanie received the College of Engineering Distinguished Fellowship and the National Science Foundation (NSF) Scholarship for Service (SFS). She was a year-round intern at Sandia National Laboratories as part of the Center for Cyber Defenders (CCD) program, where she contributed to national research initiatives under CyManII. Bethanie held several executive leadership roles at Tennessee Tech, including Vice President of the Cyber Eagles and of the Graduate Student Club. She also served as a Ph.D. advisor for Women in Cybersecurity (WiCyS). Through these roles, she actively mentored students, organized outreach events, and fostered a supportive community for women in cybersecurity. Bethanie's current research interests include cyber-physical security, modeling and simulation of industrial control systems, and leveraging AI for advanced manufacturing. As an Early Career R&D, S&E Cybersecurity Engineer at Sandia, she is committed to bridging academic innovation and national security applications to protect critical infrastructure and ensure its resilience.
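The abstract above mentions statistical detection methods applied to multi-source machine data. As a rough, generic illustration (not taken from the talk), a minimal rolling z-score monitor on a single sensor channel could look like the following; the `zscore_anomalies` name, window size, and threshold are all illustrative assumptions:

```python
import numpy as np

def zscore_anomalies(signal, window=50, threshold=4.0):
    """Flag samples that deviate strongly from a rolling baseline.

    The baseline mean/std come from the trailing `window` samples,
    mimicking a simple statistical monitor on one sensor channel.
    """
    signal = np.asarray(signal, dtype=float)
    flags = np.zeros(len(signal), dtype=bool)
    for i in range(window, len(signal)):
        base = signal[i - window:i]
        mu, sigma = base.mean(), base.std()
        if sigma > 0 and abs(signal[i] - mu) / sigma > threshold:
            flags[i] = True
    return flags

# Synthetic "spindle load" readings with one injected attack transient.
rng = np.random.default_rng(0)
trace = rng.normal(1.0, 0.05, 300)
trace[200] += 1.0  # attacker-induced spike
print(np.flatnonzero(zscore_anomalies(trace)))
```

A real framework of the kind described would fuse many such channels (operational, control, side-channel) and tune detectors per attack type, which is exactly the data-granularity question the talk examines.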

CERIAS Security Seminar Podcast
Mary Jean Amon, Parental Sharing ("Sharenting") Through the Lens of Interdependent Privacy

Feb 4, 2026 · 46:04


Parental sharing, sometimes termed "sharenting," refers to the ways parents share information about their children online and is a common mechanism through which young children are exposed to social media. Parental sharing is controversial because of its significant benefits and risks, with researchers highlighting broader concerns about long-term implications for children's developing privacy standards. Yet many parents report a high degree of acceptance of parental sharing, and the parents who expose their young children to social media the most are often also modeling risky online behaviors. This presentation examines parental sharing in association with privacy and security concepts, research, and interventions toward supporting safe and responsible parental sharing.

About the speaker: Mary Jean Amon is a quantitative psychologist focused on human-computer interaction and an Assistant Professor in Indiana University Bloomington's Department of Informatics. Her interdisciplinary research program leverages sensing technologies and advanced analytics to understand and improve dynamic decision-making and performance in the context of complex sociotechnological systems. This includes identifying near-real-time team coordinative patterns that enhance teaming performance, as well as human factors in privacy and security. The quality of Amon's work is recognized through publications in top venues, best paper awards, diverse research funding sources, and dissemination through media outlets such as Forbes, the New York Times, and the Washington Post.

CERIAS Security Seminar Podcast
Young Kim, Counterfeit Medical Devices and Medicines as a Fundamental Cyber-Physical Security Problem

Jan 28, 2026 · 53:57


Hardware security is not a new problem, but it is rapidly expanding in both consumer and medical domains due to hyperconnectivity. Medical devices and counterfeit medicines represent a fundamental security challenge. In particular, although counterfeit medicines are not a new issue, the problem continues to worsen as counterfeiting practices become increasingly sophisticated. The counterfeiting of biomedical products poses a serious threat to patient safety, public health, and economic stability in both developed and developing countries, and many current countermeasures remain vulnerable because they provide limited security. In this talk, we will share our work on biomedical hardware security with a focus on pharmaceutical products. We present cyber-physical biomedical security technologies that encode dosage information and authentication into edible biomaterials, enabling serialization, track-and-trace, and authentication at the dosage level. This approach empowers patients to play an active role in combating counterfeit medicines.

About the speaker: Young Kim is a professor in the Weldon School of Biomedical Engineering and holds the titles of University Faculty Scholar and Showalter Faculty Scholar at Purdue University. His research centers on co-creating hardware (devices) and software (models) for large-scale societal and healthcare applications. His lab develops hybrid machine learning by combining data analytics with models grounded in optical spectroscopy and light-matter interactions to move beyond big-data, compute-intensive AI and leverage engineers' domain expertise. His work spans optical imaging and spectroscopy, mesoscopic physics, metamaterials, cancer research, hardware security, and global health, unified by machine learning and data analytics. His research has been funded by a diverse range of agencies, including NIH, CDC, VA, AFOSR, USAID, and the Gates Foundation. His primary applications are in global health and rural community health, which address large-scale societal and healthcare challenges in mutually reinforcing ways.

CERIAS Security Seminar Podcast
Vijayanth Tummala, Evaluating the Impact of Cyberattacks on AI-Based Machine Vision Systems: A Case Study of Threaded Fasteners

Jan 21, 2026 · 32:33


AI-driven machine vision systems are becoming essential in mechanical engineering applications such as fastener classification, yet their increasing connectivity exposes them to adversarial cyberattacks. Model evasion attacks like FGSM can subtly alter input images and cause misclassification, raising concerns about reliability in automated manufacturing. This talk focuses on the role of Explainable AI (XAI) and human-in-the-loop strategies in detecting and mitigating such attacks. In the presented case study, an EfficientNet-B0 fastener classification model is examined using Grad-CAM visualizations to determine whether shifts in activation patterns can reveal adversarial manipulation. The study evaluates how FGSM-generated images affect model accuracy and confidence, assesses the XAI system's ability to highlight abnormal regions of attention, and considers human-in-the-loop approaches combined with XAI techniques as a practical path to strengthening the resilience of AI-based machine vision systems in manufacturing.

About the speaker: Dr. Vijayanth Tummala is a researcher in cybersecurity and human-AI interaction. His research spans artificial intelligence and cybersecurity across interdisciplinary areas, including AI and cybersecurity leadership, AI literacy, and computer vision applications. He was one of only seven recipients of the Best Paper Award in the AI track at ASME's IMECE conference in November 2024, which features over 2,400 submissions annually. Previously, he held key leadership roles, including leading the NSA CAE-CD designation effort, launching graduate programs as part of a $1.5 million EDA grant received by his previous employer, and partnering with the Allen County High-Tech Crimes Unit.
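For readers unfamiliar with the attack named in this abstract: FGSM perturbs an input by a small step in the direction of the sign of the loss gradient. The sketch below applies the idea to a toy logistic classifier in plain NumPy rather than the EfficientNet-B0 model from the talk; the weights, input, and epsilon are made-up values chosen only to make the effect visible:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step against a toy logistic classifier.

    The loss is binary cross-entropy; its gradient w.r.t. the
    input is (p - y) * w, so the attack adds eps times the sign
    of that gradient to the input.
    """
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy stand-in for an image classifier: fixed weights, one input.
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.4, -0.3, 0.2])
y = 1.0  # true label

p_clean = sigmoid(x @ w + b)
x_adv = fgsm(x, y, w, b, eps=0.5)
p_adv = sigmoid(x_adv @ w + b)
print(p_clean, p_adv)  # confidence in the true class drops
```

With these made-up numbers the perturbed input's confidence in the true class falls below 0.5, flipping the decision even though each feature moved by at most eps; diagnosing such subtle flips is where Grad-CAM-style visualizations come in.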

CERIAS Security Seminar Podcast
Rohan Paleja, Building Interpretability into Human-Aware Robots through Neural Tree-Based Models

Jan 14, 2026 · 44:57


Collaborative robots and machine-learning-based virtual agents are increasingly entering the human workspace with the aim of increasing productivity, enhancing safety, and improving the quality of our lives. These agents will dynamically interact with a wide variety of people in dynamic and novel contexts, increasing the prevalence of human-machine teams in applications spanning from healthcare and manufacturing to household assistance. My research aims to create transparent embodied systems that can support users and interact with humans, pushing the frontier of real-world robotics systems towards those that understand human behavior, maintain interpretability, and coordinate with high performance. In this talk, I will cover a set of works that enable robots to 1) understand and learn from diverse human users, 2) learn interpretable, human-readable tree-based control policies directly via reinforcement learning, and 3) provide users with information online to improve situational awareness and facilitate effective human-robot collaboration.

About the speaker: Dr. Rohan Paleja is an Assistant Professor in the Department of Computer Science at Purdue University. He directs the Strategies for Collaboration, Autonomy, Learning, and Exploration in Robotics (SCALE Robotics) Lab, which focuses on advancing machine learning and artificial intelligence to improve robot learning, human-robot interaction, and multi-agent collaboration. Their goal is to equip autonomous agents with the ability to operate in the diverse, unstructured, and human-rich environments these agents will encounter in the real world. Dr. Paleja's research interests cover a broad range of topics, namely Explainable AI (xAI), Interactive Robot Learning, and Multi-Agent Collaboration. Prior to Purdue, Dr. Paleja was a Technical Staff Researcher in the Artificial Intelligence Technology group at MIT Lincoln Laboratory, where he collaborated with the Air Force Experimental Operations Unit and the Army Research Lab. Before that, he earned his Ph.D. in Robotics at the Georgia Institute of Technology in 2023. His work has received multiple awards, including a Best Paper Finalist Award at the Conference on Robot Learning (CoRL) and a Best Workshop Paper Award at the International Conference on Computer Vision (ICCV) Multi-Agent Relational Reasoning Workshop.

CERIAS Security Seminar Podcast
Peter Ukhanov, From MOVEit to EBS – a Look at Mass Exploitation Extortion Campaigns

Dec 10, 2025 · 54:01


Over the past several years, CL0P has executed multiple mass exploitation campaigns using zero-day vulnerabilities in popular software products that resulted in mass data exfiltration. In this talk we'll take a look at the vulnerabilities that enabled their access, discuss ways defenders could have detected the exploits, and explore hardening recommendations to make public-facing applications harder to compromise.

About the speaker: Peter Ukhanov is a Principal Consultant with the Google Public Sector (Mandiant) IR team. Prior to joining Mandiant, Peter worked at Dragos, focusing on OT/ICS environments. He started his career in incident response and digital forensics in 2014 at the Defense Information Systems Agency, spending almost 7 years supporting various Department of Defense entities.

CERIAS Security Seminar Podcast
Antonio Bianchi, Attacking and Defending Modern Software with LLMs

Dec 3, 2025 · 53:46


In this talk, I will discuss recent research projects at the intersection of software security and automated reasoning. Specifically, I will present our work on assessing the exploitability of the Android kernel and developing complex exploits for it, as well as our efforts to uncover bugs in Rust's unsafe code through fuzzing. Throughout the talk, I will highlight how Large Language Models (LLMs) can support both attackers and defenders in analyzing complex software systems, and I will present key lessons on using LLMs effectively along with the practical challenges that arise when integrating them into software security workflows.

About the speaker: Dr. Antonio Bianchi's research interest lies in the area of computer security. His primary focus is the security of mobile devices. Most recently, he started exploring the security issues posed by IoT devices and their interaction with mobile applications. As a core member of the Shellphish and OOO teams, he has played in and organized many security competitions (CTFs), and won third place at the DARPA Cyber Grand Challenge.

CERIAS Security Seminar Podcast
Stephen Flowerday, The Hidden Laundromat at Play: how illicit value moves through online games

Nov 19, 2025 · 62:26


Online video games have evolved into vast financial ecosystems where real and virtual value mix at scale. This presentation shows how these spaces serve as efficient laundering channels, converting illicit funds from organized crime, sanctions evasion, terrorist financing, and digital fraud into assets that appear legitimate. Illicit value typically enters via card-not-present transactions, stolen digital wallets, and scam revenues before it is routed into platform marketplaces. From there, funds convert into tradeable virtual assets such as cosmetics, currencies, loot boxes, and content bundles, which can be divided into thousands of rapid microtransactions. Widely cited estimates place illicit financial flows at 2 to 5 percent of global GDP (roughly $800 billion to $2 trillion a year), while in-game spending will reach $74.4 billion in 2025, providing liquidity, speed, and plausible deniability.

About the speaker: Stephen Flowerday is a Professor in the School of Computer and Cyber Sciences at Augusta University. His research focuses on cybersecurity management, cybercrime, behavioral information security, and human-centric cybersecurity at the intersection of technology, processes, and people. His work has been supported by IBM, THRIP, the NRF, SASUF, Erasmus, and GMRDC. He serves as an associate editor and frequent reviewer for leading journals and conferences, and has reviewed grants for the Israeli NSF, the South African NRF, the U.S. NSF, and Bahrain's DHE.

CERIAS Security Seminar Podcast
Abulhair Saparov, Can/Will LLMs Learn to Reason?

Nov 12, 2025 · 52:36


Reasoning, the process of drawing conclusions from prior knowledge, is a hallmark of intelligence. Large language models and, more recently, large reasoning models have demonstrated impressive results on many reasoning-intensive benchmarks. Careful studies over the past few years have revealed that LLMs may exhibit some reasoning behavior, and larger models tend to do better on reasoning tasks. However, even the largest current models still struggle on various kinds of reasoning problems. In this talk, we will try to address the question: Are the observed reasoning limitations of LLMs fundamental in nature? Or will they be resolved by further increasing the size and data of these models, or by better techniques for training them? I will describe recent work tackling this question from several different angles. The answer will help us better understand the risks posed by future LLMs as vast resources continue to be invested in their development.

About the speaker: Abulhair Saparov is an Assistant Professor of Computer Science at Purdue University. His research focuses on applications of statistical machine learning to natural language processing, natural language understanding, and reasoning. His recent work closely examines the reasoning capacity of large language models, identifying fundamental limitations and developing new methods and tools to address or work around those limitations. He has also explored the use of symbolic and neurosymbolic methods to both understand and improve the reasoning capabilities of AI models. He is also broadly interested in other applications of statistical machine learning, including applications to the natural sciences.

CERIAS Security Seminar Podcast
Hanshen Xiao, When is Automatic Privacy Proof Possible for Black-Box Processing?

Nov 5, 2025 · 58:19


Can we automatically and provably quantify and control the information leakage from black-box processing? From a statistical inference standpoint, in this talk, I will start from a unified framework that summarizes existing privacy definitions based on input-independent indistinguishability and unravels the fundamental challenges in crafting privacy proofs for general data processing. Yet the landscape shifts when we gain access to the (still possibly black-box) secret generation. By carefully leveraging its entropy, we unlock the black-box analysis. This breakthrough enables us to automatically "learn" the underlying inference hardness for an adversary to recover arbitrarily selected sensitive features, fully through end-to-end simulations without any algorithmic restrictions. Meanwhile, a set of new information-theoretic tools will be introduced to efficiently minimize additional noise perturbation, assisted by sharpened adversarially adaptive composition. I will also unveil the win-win situation between privacy and stability for simultaneous algorithm improvements. Concrete applications will be given in diverse domains, including privacy-preserving machine learning on image classification and large language models, side-channel leakage mitigation, and formalizing long-standing heuristic data obfuscations.

About the speaker: Hanshen Xiao is an Assistant Professor in the Department of Computer Science at Purdue University. He received his Ph.D. in computer science from MIT and his B.S. in Mathematics from Tsinghua University. Before joining Purdue, he was a research scientist at NVIDIA Research. His research focuses on provably trustworthy machine learning and computation, with a particular focus on automated black-box privatization, differential trust with applications to backdoor defense and memorization mitigation, and trustworthiness evaluation.

CERIAS Security Seminar Podcast
Marcus Botacin, Malware Detection under Concept Drift: Science and Engineering

Oct 29, 2025 · 52:13


The largest current challenge in ML-based malware detection is maintaining high detection rates while samples evolve, causing classifiers to drift. What is the best way to solve this problem? In this talk, Dr. Botacin presents two views on the problem: the scientific and the engineering. In the first part of the talk, Dr. Botacin discusses how to make ML-based drift detectors explainable. The talk discusses how one can split the classifier's knowledge in two: (1) knowledge about the frontier between malware (M) and goodware (G); and (2) knowledge about the concept of the M and G classes, to understand whether the concept or the classification frontier changed. The second part of the talk discusses how the experimental conditions under which drift-handling approaches are developed often mismatch real deployment settings, causing the solutions to fail to achieve the desired results. Dr. Botacin points out ideal assumptions that do not hold in reality, such as: (1) the amount of drifted data a system can handle, and (2) the immediate availability of oracle data for drift detection, when in practice a scenario of label delays is much more frequent. The talk demonstrates a solution to these problems via a campaign of 5K+ experiments, which illustrates (1) how to explain every drift point in a malware detection pipeline and (2) how an explainable drift detector also enables online retraining to achieve higher detection rates while requiring fewer retraining points than traditional approaches.

About the speaker: Dr. Botacin has been a Computer Science Assistant Professor at Texas A&M University (TAMU, USA) since 2022. He holds a Ph.D. in Computer Science (UFPR, Brazil) and a Master's in Computer Science and Computer Engineering (UNICAMP, Brazil). A malware analyst since 2012, he specializes in AV engines and sandbox development. Dr. Botacin has published research papers at major academic conferences and journals and has presented his work at major industry and hacking conferences, such as HackInTheBox and Hou.Sec.Con. Page: https://marcusbotacin.github.io/
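The split described in this abstract, between knowledge of the malware/goodware frontier and knowledge of the class concepts, can be caricatured with two coarse drift signals: one measuring how far the feature distribution has moved, and one measuring how much probability mass has shifted toward the decision boundary. This is a hypothetical toy, not Dr. Botacin's actual method; the function names, thresholds, and synthetic data are all assumptions:

```python
import numpy as np

def drift_signals(ref_X, new_X, score_fn):
    """Two coarse drift signals (toy illustration):
    concept  - largest per-feature mean shift of the new data,
               in units of the reference std (distribution moved);
    frontier - change in the fraction of samples whose score
               lands near the 0.5 decision boundary (frontier moved).
    """
    mu, sd = ref_X.mean(axis=0), ref_X.std(axis=0) + 1e-9
    concept = np.abs((new_X.mean(axis=0) - mu) / sd).max()
    near_boundary = lambda X: np.mean(np.abs(score_fn(X) - 0.5) < 0.1)
    frontier = near_boundary(new_X) - near_boundary(ref_X)
    return concept, frontier

# Synthetic "feature vectors" with a mean shift in one dimension.
rng = np.random.default_rng(1)
ref = rng.normal(0.0, 1.0, (500, 2))
drifted = ref + np.array([2.0, 0.0])
score = lambda X: 1.0 / (1.0 + np.exp(-X[:, 0]))  # fixed classifier

concept, frontier = drift_signals(ref, drifted, score)
print(concept, frontier)
```

On this synthetic shift the concept signal fires strongly while the mass near the boundary actually shrinks, showing why the two views can disagree and are worth separating when diagnosing drift.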

CERIAS Security Seminar Podcast
Rajiv Khanna, The Shape of Trust: Structure, Stability, and the Science of Unlearning

Oct 22, 2025 · 55:42


Trust in modern AI systems hinges on understanding how they learn, and, increasingly, how they can forget. This talk develops a geometric view of trustworthiness that unifies structure-aware optimization, stability analysis, and the emerging science of unlearning. I will begin by revisiting the role of sharpness and flatness in shaping both generalization and sample sensitivity, showing how the geometry of the loss landscape governs what models remember. Building on these insights, I will present recent results on Sharpness-Aware Machine Unlearning, a framework that characterizes when and how learning algorithms can provably erase the influence of specific data points while preserving accuracy on the rest. The discussion connects theoretical guarantees with empirical findings on the role of data distribution and loss geometry in machine unlearning, ultimately suggesting that the shape of the optimization landscape is the shape of trust itself.

About the speaker: Rajiv Khanna is an Assistant Professor in the Department of Computer Science. His research interests span various subfields of machine learning, including optimization, theory, and interpretability. Previously, he held positions as a Visiting Faculty Researcher at Google, a postdoctoral scholar at the Foundations of Data Analytics Institute at the University of California, Berkeley, and a Research Fellow in the Foundations of Data Science program at the Simons Institute, also at UC Berkeley. He graduated with his PhD from UT Austin.

CERIAS Security Seminar Podcast
Matthew Sharp, Securing Linux in a Heterogeneous Enterprise Environment

Oct 15, 2025 · 51:42


This seminar examines the challenges of securing Linux (and legacy UNIX) systems in heterogeneous enterprise environments, where cohabitant Windows infrastructure often dictates corporate security focus, resources, and tooling. Drawing on experience across academia, large industry, and more modestly sized startups, Sharp will highlight practical strategies, open-source approaches, and the mindset shifts needed to effectively protect Linux in a Windows-centric security landscape.

About the speaker: Matthew Sharp has dedicated over two decades to securing UNIX and Linux servers across diverse environments of widely varying scale and complexity, in roles encompassing systems and network administration, red team contract work, and system and security engineering. Presently, he serves as a Principal Engineer at Toyota Motor North America in their Cyber Defensive Services group. His extensive experience has provided firsthand insight into the challenges of securing Linux systems in environments where Windows typically dominates both infrastructure and security investments. Sharp is particularly interested in advancing practical, open-source-driven approaches to Linux security and fostering a mindset that empowers practitioners to take proactive steps in addressing problems that mainstream security tools often overlook.

CERIAS Security Seminar Podcast
Stephen Kines, Four Deadly Sins of Cyber: Sloth, Gluttony, Greed & Pride

Oct 8, 2025 · 45:46


In the UK, one of the great global car brands is this month on the verge of bankruptcy due to a single cyber-attack, with a potential loss of 130,000 jobs; Jaguar Land Rover is seeking a government bail-out to survive. In this first of a series of seminars delivered by the founder of a cybersecurity company in the same city where Jaguar Land Rover is reeling from this attack, we will cover four deadly sins of cyber (with the other three sins in a follow-up seminar):

1. Sloth: Organizations with bloated legacy architectures and slow patch cycles run very real risks of seeing their progress as "good enough" up until the very moment some major event proves it wasn't. We will look at how to focus on compartmentalization and containment.

2. Gluttony: Exponential expansion of networks and devices to serve the AI masters, leading to the Skynet moment. Cyber threats leverage connectivity to spread; contagion control comes from knowing how to control that connectivity.

3. Greed: The insatiable desire to acquire the latest and greatest security software, in the belief that newer is better, irrespective of how it fits and how it is to be used. Not so in OT networks, where few such tools are fit for purpose. Aiming for simplicity serves the most important questions: "What is where?", "What exactly is the threat?", and "Where can we exert control over threats accessing critical resources?"

4. Pride: Overconfidence and self-assuredness that the status quo, doing more of the same, will be fine. How's that working out so far? Humans-in-the-loop: some method of controlling contagion is essential. Minimizing the loss remains mandatory.

The second half of the seminar will cover three perspectives from the founder of a hardware cybersecurity innovator: 1. the need to look at RoI when deploying solutions; 2. how to frame CNI cyber solutions within SDG/sustainability/impact; and 3. moving beyond code jockeys: cyber career perspectives requiring skills in the humanities (psychology, philosophy, etc.) to think differently.

About the speaker: Stephen is an international corporate lawyer with expertise in complex M&A and tax-efficient commercial transactions in the US, UK, and emerging markets. He has been general counsel for ultra-high-net-worth individuals and families as well as international law firms. He is focused on emerging technologies, including blockchain and cybersecurity. A natural manager, Stephen also isn't afraid to do the work that needs to be done in an efficient bootstrapped startup. He is also known for his avid community engagement and commitment to sustainability at all levels. A former military officer, Stephen is the 2IC of Goldilock, keeping 'selection and maintenance of the aim' front of mind.

CERIAS Security Seminar Podcast
Sanket Naik, AI Agents for DevSecOps

CERIAS Security Seminar Podcast

Play Episode Listen Later Oct 1, 2025 48:04


AI is enabling developers and non-developers (product managers, solutions engineers) to write more lines of code than ever before. Businesses are under pressure to ship these AI-built products to stay competitive while still meeting regulatory requirements. Can AI solve this problem? In this talk, we will explore the opportunities and pitfalls of using AI agents for DevSecOps. About the speaker: Sanket Naik is the founder and CEO at Palosade, building a purpose-built AI platform enabling enterprises to automate their security program and unleash their business potential. He enjoys giving back to startups through investing and advisory roles. Before Palosade, he was the SVP of engineering for Coupa. In that role, he built the cloud and cybersecurity organization over 12 years, from the ground up through an initial public offering followed by significant global growth. He has also held engineering roles at HP and Qualys. Sanket holds a BS in electronics engineering from the University of Mumbai and an MS in CS from Purdue University, with research at the multi-disciplinary CERIAS cybersecurity center.

CERIAS Security Seminar Podcast
Richard Thieme, Thinking Like a Hacker in the Age of AI

CERIAS Security Seminar Podcast

Play Episode Listen Later Sep 24, 2025 68:31


We need to understand AI, what's here and what's coming, at a deep and ever-deepening level. This is a genuine inflection point for our society. It's like the internet squared, except the rate of adoption is much higher. We don't have decades to figure this out. ... This is not a technical talk. The focus is on the approaches we need to adopt to work in tandem with AIs. It's about thinking differently. It's about thinking like hackers. About the speaker: Richard Thieme is an author and speaker who addresses the challenges posed by new technologies. He has published numerous articles and thirteen books, and delivered hundreds of speeches. His Mobius Trilogy illuminates the realities of intelligence work and was lauded by a CIA veteran as one of the best works of serious spy fiction ever. He spoke at Def Con this year for the 27th time and was named the first "uber contributor" of the conference. He has keynoted security conferences in 15 countries. Clients range from GE, Microsoft, Medtronic, and Bank of America to the NSA, FBI, Dept. of the Treasury, Los Alamos, the Pentagon Security Forum, and the Secret Service.

CERIAS Security Seminar Podcast
Rolf Oppliger, E2EE Messaging: State of the Art and Future Challenges

CERIAS Security Seminar Podcast

Play Episode Listen Later Sep 17, 2025 65:05


End-to-end encrypted (E2EE) messaging on the Internet allows encrypted messages to be sent from one sender to one or multiple recipients in a way that cannot be decrypted by anybody else - arguably not even by the messaging service provider itself. The protocol of choice is Signal, which invokes and puts in place several cryptographic primitives in new and ingenious ways. Besides the messenger of the same name, the Signal protocol is also used by WhatsApp, Facebook Messenger, Wire, and many more. As such, it marks the gold standard and state of the art when it comes to E2EE messaging on the Internet. To make E2EE messaging scalable and useful for large groups, the IETF has also standardized a complementary protocol named messaging layer security (MLS). In this talk, we outline the history of development and mode of operation of both the Signal and MLS protocols, and we elaborate on the challenges that lie ahead. About the speaker: Rolf Oppliger studied computer science, mathematics, and economics at the University of Bern, Switzerland, where he received M.Sc. (1991) and Ph.D. (1993) degrees in computer science. In 1994-95, he was a post-doctoral researcher at the International Computer Science Institute (ICSI) of UC Berkeley, USA. In 1999, he received the venia legendi for computer science from the University of Zurich, Switzerland, where he was appointed adjunct professor in 2007. The focus of his professional activities is on technical information security and privacy. In these areas, he has published 18 books and many scientific articles and papers, regularly participates at conferences and workshops, served on the editorial boards of some leading magazines and journals, and has been the editor of the Artech House information security and privacy book series since its beginning (in the year 2000). He's the founder and owner of eSECURITY Technologies Rolf Oppliger, works for the Swiss National Cyber Security Centre NCSC, and teaches at the University of Zurich.
He was a senior member of the ACM and the IEEE, as well as a member of the IEEE Computer Society and the IACR. He also served as vice-chair of the IFIP TC 11 working group on network security.
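The forward secrecy at the heart of such protocols comes from a key-derivation (KDF) chain: each message is encrypted under a fresh key, and the chain key is ratcheted forward so that old message keys cannot be recovered from current state. The following is a rough, illustrative sketch only; the use of HMAC-SHA256 as the KDF, the input constants, and the initial chain key are assumptions for demonstration, not the actual Signal construction:

```python
import hmac
import hashlib

def kdf_chain_step(chain_key: bytes) -> tuple[bytes, bytes]:
    # One symmetric-ratchet step: derive the next chain key and a
    # one-time message key from the current chain key.
    next_chain_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    message_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return next_chain_key, message_key

# Illustrative initial chain key (in practice derived from a key agreement):
ck = b"\x00" * 32
ck, mk1 = kdf_chain_step(ck)  # key for message 1
ck, mk2 = kdf_chain_step(ck)  # key for message 2
# mk1 and mk2 differ; compromising the current chain key or mk2
# does not reveal mk1, because the KDF is one-way.
```

The real Signal protocol layers a Diffie-Hellman ratchet on top of such chains, and MLS generalizes the idea to a tree of keys so that large groups can be rekeyed efficiently.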

CERIAS Security Seminar Podcast
Kris Lovejoy, The Converged Threat Landscape: What's Next in Cybersecurity

CERIAS Security Seminar Podcast

Play Episode Listen Later Sep 10, 2025 54:19


Cybersecurity stands at a historic inflection point, where converged forces are reshaping how we think about digital defense. In this discussion, Kyndryl's Global Security & Resiliency Leader Kris Lovejoy will share five key predictions for how AI-driven threats, workforce disruption, geopolitical fragmentation, quantum computing, and infrastructure vulnerabilities will redefine how we secure our digital future. These forces are not just trends, but urgent signals of what's to come. Kris will also provide a strategic framework for navigating this converged threat landscape, with insights into the emerging roles, governance models and resilience strategies that will shape cybersecurity in the years ahead. About the speaker: Kris Lovejoy is an internationally recognized leader in cybersecurity and cyber resilience. As Kyndryl's Global Practice Leader for Security and Resiliency, Kris leads more than 7,500 cyber resilience professionals across more than 60 countries. Before joining Kyndryl, Kris led EY's Global Consulting Cybersecurity practice. She also founded and led BluVector Inc., one of the first AI-powered Advanced Threat Detection products, which Comcast acquired in 2019. Kris was previously general manager of IBM Security Services. Kris serves on the boards of Dominion Energy (NYSE: D) and the International Security Alliance (ISA) and is also a member of the World Economic Forum's Cybersecurity Committee and Cybersecurity Coalition. She holds U.S. and EU patents in risk management and champions inclusion in cybersecurity as executive co-sponsor of Kyndryl's Women's Inclusion Network. Her cybersecurity industry contributions have earned multiple recognitions, including The Cyber Guild's Change-Maker Award (2022), "Top 50 Cybersecurity Leaders" by The Consulting Report (2021), and "Top Woman Technology Leader" by Consulting Magazine (2020).

CERIAS Security Seminar Podcast
Dave Schroeder, Utilization of National Guard Cyber Forces in Title 32 Status for National Cyber Missions

CERIAS Security Seminar Podcast

Play Episode Listen Later Sep 3, 2025 54:38


The U.S. military possesses a deep and extensive body of cyber expertise in uniform, in the National Guard and Reserve force in particular. Leveraging this expertise effectively, both in a way that is productive for the military and fulfilling and meaningful for the servicemember — which results in benefits for recruiting, retention, and continued development of this expertise — has been an ongoing challenge. This productive employment is even more challenging while in reserve status, resulting in attrition of this critical force. There is a national imperative, as well as clear statements from military cyber leadership, to effectively utilize all available resources, including the National Guard and Reserve force, to meet the nation's cyber challenges. About the speaker: Dave Schroeder works to enable and advance intelligence and security research and partnerships at the University of Wisconsin–Madison. He is passionate about creating connections and bringing the rich and dynamic expertise at UW–Madison to the most pressing global security challenges. Dave serves as a Cyber Warfare Officer in the Wisconsin Army National Guard, and previously served as a Navy Cryptologic Warfare Officer. He is also Research Director of the Wisconsin Security Research Consortium (WSRC), and manages UW–Madison's Cyber Programs and Designations. He holds graduate degrees in Cybersecurity Policy and Information Warfare, and is a graduate of the Naval Postgraduate School, Naval War College, and Joint Forces Staff College.

CERIAS Security Seminar Podcast
Nick Selby, Build Things Properly

CERIAS Security Seminar Podcast

Play Episode Listen Later Aug 27, 2025 60:59


People talk quite a lot about things like "shift left" that make it sound as if building things properly in the first place is a new concept -- sold at your finer consultancies. After two decades of incident response, smoke jumping, and Tech Debt burndowns, I think it's time to talk about the way teams can build security not just into the product but into the company culture, by examining some basic realities of the product development process. This is not just for tech companies; it's for any firm with a process by which they turn ideas into money. Because for all the SDLC tools, all the configuration platforms, the code scanners, and the security and code testing doodads out there, nothing in my experience works as well as starting with the basics: including security and legal experts, as well as the people who manage the internal services that will be your upstream and downstream dependencies, at the ideation stage. The amount of weapons-grade stupid, the mountain ranges of tech debt, and the broken business promises that this simple plan can avoid make it hard to believe that these practices are so rare in mainstream companies. In this talk, I will describe the most common side effects of failing to do this, how those side effects manifest into cultural roadblocks, silos, and sadness, and most important: how you can break the cycle, slash through the Gordian knot of despair and missed deadlines, and return to cranking out product like a startup. About the speaker: Nick Selby is the founder of EPSD, Inc., and has more than 20 years of experience advising organizations in highly targeted industries. Previously, he led professional services at Evertas and served as Interim Executive Director of the Cryptoasset Intelligence Sharing and Analysis Center. His executive roles have also included stints at Trail of Bits and Paxos Trust Company.
He managed cyber incident response at TRM Partners and N4Struct, and in 2005 founded the information security practice at 451 Research (now S&P Global Intelligence), where he served as Vice President of Research Operations until 2009. As Director of Cyber Intelligence and Investigations at the NYPD (2018-2020), Selby led cybercrime investigations for America's largest police department. Selby serves on the Board of Directors of the non-profit National Child Protection Task Force and the advisory board of Sightline Security. While retired from law enforcement, he continues to serve as a reserve detective for a Dallas-Fort Worth area police agency, where he investigates crimes against children and the cyber aspects of real-world crimes.

CERIAS Security Seminar Podcast
Paul Vixie, Force Projection in the Information Domain: Implications of DNS Security

CERIAS Security Seminar Podcast

Play Episode Listen Later Apr 30, 2025 72:23


The DNS resolution path by which the world's internet content consumers locate the world's internet content producers has been under continuous attack since the earliest days of Internet commercialization and privatization. Much work has recently been, and is currently being, invested to protect this vital source of Personally Identifiable Information -- but by whom, and why, and how? Let's discuss. About the speaker: Paul Vixie serves AWS Security as Deputy CISO, VP & Distinguished Engineer after a 29-year career as the founder and CEO of five startup companies covering the fields of DNS, anti-spam, Internet exchange, Internet carriage and hosting, and Internet security. Vixie earned his Ph.D. in Computer Science from Keio University in 2011 and was inducted into the Internet Hall of Fame in 2014. He has authored or co-authored several Internet RFC documents and open source software projects including Cron and BIND. https://en.wikipedia.org/wiki/Paul_Vixie

CERIAS Security Seminar Podcast
Tristen Mullins, Using Side-Channels for Critical Infrastructure Protection

CERIAS Security Seminar Podcast

Play Episode Listen Later Apr 23, 2025 35:31


About the speaker: Dr. Tristen Mullins is a cybersecurity professional specializing in side-channel analysis, cyber-physical systems security, and supply chain integrity. Currently an R&D Associate and Signal Processing Engineer at Oak Ridge National Laboratory (ORNL), she conducts innovative research at the intersection of hardware security and national security. Dr. Mullins earned her Ph.D. in Computing from the University of South Alabama in 2022, where she focused on developing novel defense mechanisms against side-channel attacks and made significant contributions to adaptive security architectures. At ORNL, she leads initiatives in critical infrastructure protection and cyber resilience while actively mentoring students and promoting cybersecurity education. Additionally, Dr. Mullins plays a vital role in the National Security Sciences Academy and has founded the IEEE East Tennessee Section Young Professionals Affiliate Group to support emerging engineers. Honored with multiple awards for her contributions and leadership, she remains dedicated to enhancing the security of next-generation computing systems through collaboration with both federal agencies and industry leaders.

CERIAS Security Seminar Podcast
Richard Love, Russian Hacking: Why, How, Who, and to What End

CERIAS Security Seminar Podcast

Play Episode Listen Later Apr 16, 2025 57:47


The purpose of Russian hacking and their concept of cyber war is conceptually and practically different from Western strategies. This talk will focus on understanding why Russia uses cyber tools to further strategic interests, how they do it (by examining the 2016 interference in the U.S. presidential election and the NotPetya cases), and who does it. About the speaker: Dr. Richard Love is currently a professor at NDU's College of Information and Cyberspace and recently served as a professor of strategic studies at U.S. Army War College's (USAWC) School of Strategic Landpower and as assistant director of the Peacekeeping and Stability Operations Institute from 2016-2021. From 2002 to 2016, Dr. Love served as a professor and senior research fellow at NDU's Institute for National Strategic Studies / WMD Center. He is an adjunct professor teaching law, international relations, and public policy at Catholic University and has taught law and policy courses at Georgetown, the Army Command and General Staff College, the Marshall Center, and the Naval Academy, among others. He holds a Ph.D. in International Relations and Security Studies from the University of New South Wales in Australia (2017), an LLM from American University School of Law (2002), and a Juris Doctor in Corporate and Security Law from George Mason University School of Law. His graduate studies in East-West relations were conducted at the Jagiellonian University in Krakow, Poland, and the University of Munich, Germany. His undergraduate degree is from the University of Virginia.

CERIAS Security Seminar Podcast
Josiah Dykstra, Lessons for Cybersecurity from the American Public Health System

CERIAS Security Seminar Podcast

Play Episode Listen Later Apr 9, 2025 50:16


This talk explores how the principles and practices of the American public health system can inform and enhance modern cybersecurity strategies. Drawing on insights from our recent CRA Quad Paper, we examine the parallels between public health methodologies and the challenges faced in today's digital landscape. By analyzing historical responses to public health crises, we identify strategies for improving situational awareness, inter-organizational collaboration, and adaptive risk management in cybersecurity. The discussion highlights how lessons from public health can bridge the gap between technical cybersecurity teams and policymakers, fostering a more holistic and effective defense against emerging cyber threats. About the speaker: Josiah Dykstra is the Director of Strategic Initiatives at Trail of Bits. He previously served for 19 years as a senior technical leader at the National Security Agency (NSA). Dr. Dykstra is an experienced cyber practitioner and researcher whose focus has included the psychology and economics of cybersecurity. He received the CyberCorps® Scholarship for Service (SFS) fellowship and is one of ten people in the SFS Hall of Fame. In 2017, he received the Presidential Early Career Award for Scientists and Engineers (PECASE) from then President Barack Obama. Dr. Dykstra is a Fellow of the American Academy of Forensic Sciences (AAFS) and a Distinguished Member of the Association for Computing Machinery (ACM). He is the author of numerous research papers, the book Essential Cybersecurity Science (O'Reilly Media, 2016), and co-author of Cybersecurity Myths and Misconceptions (Pearson, 2023). Dr. Dykstra holds a Ph.D. in computer science from the University of Maryland, Baltimore County.

CERIAS Security Seminar Podcast
Michael Clothier, Annual CERIAS Security Symposium Closing Keynote IT, OT, IoT — It's Really Just the "T": An International and Historical Perspective

CERIAS Security Seminar Podcast

Play Episode Listen Later Apr 2, 2025 64:57


In today's rapidly evolving digital landscape, the lines between Information Technology (IT), Operational Technology (OT), and the Internet of Things (IoT) have become increasingly blurred. While these domains were once distinct, they now converge into a single, interconnected technology ecosystem—one that presents both unprecedented opportunities and critical security challenges. In this keynote, Michael Clothier, Chief Information Security Officer at Northrop Grumman, brings 30 years of global cybersecurity leadership to explore how organizations can rethink their approach to securing "technology" as a whole, rather than as separate silos. Drawing on his extensive experience across the U.S., Australia, Asia, and beyond—including securing mission-critical defense and aerospace systems, leading enterprise IT transformations, and integrating cybersecurity across diverse industries—Michael will examine the evolution of security challenges from historical, international, and cross-industry perspectives. Key discussion points include:

From Air-Gapped to Always Connected – A historical view of how IT, OT, and IoT security challenges have evolved and what we can learn from past approaches.
The Global Cybersecurity Landscape – Insights from securing critical infrastructure across Asia, Australia, and the U.S., and the lessons we can apply to today's interconnected world.
Breaking Down the Silos – Why treating IT, OT, and IoT as distinct domains is outdated and how a unified security strategy strengthens resilience.
National Security Meets Enterprise Security – Perspectives from both military and private-sector leadership on protecting sensitive data, intellectual property, and critical systems.

As cybersecurity professionals, we must shift our mindset from securing individual components to securing the entire technology ecosystem.
Whether you are safeguarding an industrial control system, an aircraft, or a corporate network, the fundamental security principles remain the same. By applying an integrated approach, we can better protect the critical systems that power modern society. Join Michael for this thought-provoking keynote as he challenges conventional thinking, shares real-world case studies, and provides actionable strategies to redefine cybersecurity in an era where everything is just "T." About the speaker: Chief Information Security Officer at Northrop Grumman

CERIAS Security Seminar Podcast
Tim Benedict, The Future of AI Depends on Guardrails

CERIAS Security Seminar Podcast

Play Episode Listen Later Mar 26, 2025 54:46


As companies expand AI adoption to accelerate business growth, they face an evolving landscape of security risks and regulatory uncertainty. With guidelines and policies still taking shape, organizations must balance innovation with responsibility, ensuring AI is both secure and aligned with emerging standards. This session will explore the challenges and risks organizations encounter on their AI journey, along with new approaches to mitigating threats and strengthening governance. We'll discuss how companies can navigate this shifting environment and implement guardrails that enable AI to drive business success—safely and responsibly. About the speaker: Tim Benedict is a seasoned technology executive with over two decades of experience spanning IT, cybersecurity, AI governance, and digital transformation. As the Chief Technology Officer at COMPLiQ, he leads the development of AI-driven compliance and security solutions, helping organizations navigate regulatory requirements, mitigate risks, and adopt AI securely. His work focuses on building resilient, scalable platforms that empower enterprises to integrate AI while maintaining transparency, security, and operational control. With a strong background in enterprise IT, cloud computing, and security architecture, Tim has worked across multiple industries, including finance, government, and technology. He has led large-scale cloud and cybersecurity initiatives, developed enterprise compliance strategies, and driven business-focused technology solutions that bridge innovation with regulatory and operational needs. Tim's expertise spans strategic leadership, technical innovation, and cross-functional collaboration. He has shaped security-first approaches for AI governance, developed scalable frameworks for risk mitigation, and helped businesses align technology investments with long-term growth strategies.
Based in Indiana, he remains actively engaged in fostering industry advancements and driving innovation in AI security and compliance.

CERIAS Security Seminar Podcast
Amir Sadovnik, What do we mean when we talk about AI Safety and Security?

CERIAS Security Seminar Podcast

Play Episode Listen Later Mar 12, 2025 55:02


In February 2024, Gladstone AI produced a report for the Department of State, which opens by stating that "The recent explosion of progress in advanced artificial intelligence … is creating entirely new categories of weapons of mass destruction-like and weapons of mass destruction-enabling catastrophic risk." To clarify further, they define catastrophic risk as "catastrophic events up to and including events that would lead to human extinction." This strong yet controversial statement has caused much debate in the AI research community and in public discourse. One can imagine scenarios in which this may be true, perhaps in some national security-related scenarios, but how can we judge the merit of these types of statements? It is clear that to do so, it is essential to first truly understand the different risks AI adoption poses and how those risks are novel. That is, when we talk about AI safety and security, do we truly have clarity about the meaning of these terms? In this talk, we will examine the characteristics that make AI vulnerable to attacks and misuse in different ways and how they introduce novel risks. These risks may be to the system in which AI is employed, the environment around it, or even to society as a whole. Gaining a better understanding of AI characteristics and vulnerabilities will allow us to evaluate how realistic and pressing the different AI risks are, and better realize the current state of AI, its limitations, and what breakthroughs are still needed to advance its capabilities and safety. About the speaker: Dr. Sadovnik is a senior research scientist and the Research Lead for the Center for AI Security Research (CAISER) at Oak Ridge National Lab. As part of this role, Dr. Sadovnik leads multiple research projects related to AI risk, adversarial AI, and large language model vulnerabilities.
As one of the founders of CAISER, he's helping to shape its strategy and operations through program leadership, partnership development, workshop organization, teaching, and outreach. Prior to joining the lab, he served as an assistant professor in the department of electrical engineering and computer science at the University of Tennessee, Knoxville, and as an assistant professor in the department of computer science at Lafayette College. He received his PhD from the School of Electrical and Computer Engineering at Cornell University, advised by Prof. Tsuhan Chen as a member of the Advanced Multimedia Processing Lab. Prior to arriving at Cornell, he received his bachelor's in electrical and computer engineering from The Cooper Union. In addition to his work and publications in AI and AI security, Dr. Sadovnik has a deep interest in workforce development and computer science education. He continues to teach graduate courses related to machine learning and artificial intelligence at the University of Tennessee, Knoxville.

CERIAS Security Seminar Podcast
Hisham Zahid & David Haddad, Decrypting the Impact of Professional Certifications in Cybersecurity Careers

CERIAS Security Seminar Podcast

Play Episode Listen Later Mar 5, 2025 42:15


Professional certifications have become a defining feature of the cybersecurity industry, promising enhanced career prospects, higher salaries, and professional credibility. But do they truly deliver on these promises, or are there hidden drawbacks to pursuing them? This presentation takes a deep dive into the dual-edged nature of certifications like CISSP, CISM, CEH, and CompTIA Security+, analyzing their benefits and potential limitations. Drawing on data-driven research, industry insights, and real-world case studies, we explore how certifications influence hiring trends, professional growth, and skills development in cybersecurity. Attendees will gain a balanced perspective on the role of certifications, uncovering whether they are a gateway to career success or an overrated credential. Whether you are an aspiring professional or a seasoned practitioner, this session equips you with the knowledge to decide if certifications are the key to unlocking your cybersecurity potential—or if other paths may hold the answers. About the speaker: Hisham Zahid is a seasoned cybersecurity professional and researcher with over 15 years of combined technical and leadership experience. Currently serving under the CISO as a Security Compliance Manager at a FinTech startup, he has held roles spanning engineering, risk management, audit, and compliance. This breadth of experience gives him unique insight into the complex security challenges organizations face and the strategies needed to overcome them. Hisham holds an MBA and an MS, as well as industry-leading certifications including CISSP, CCSP, CISM, and CDPSE. He is also an active member of the National Society of Leadership and Success (NSLS) and the Open Web Application Security Project (OWASP), reflecting his commitment to professional development and community engagement.
As the co-author of The Phantom CISO, Hisham remains dedicated to advancing cybersecurity knowledge, strengthening security awareness, and guiding organizations through an ever-evolving threat landscape. David Haddad is a technology enthusiast and optimist committed to making technology and data more secure and resilient. David serves as an Assistant Director in EY's Technology Risk Management practice, focusing on helping EY member firms comply with internal and external security, data, and regulatory requirements. In this role, David supports firms in enhancing technology governance and oversight through technical reviews, consultations, and assessments. Additionally, David contributes to global AI governance, risk, and control initiatives, ensuring AI products and services align with the firm's strategic technology risk management processes. David is in the fourth year of doctoral studies at Purdue University, specializing in AI and information security. David's experience includes various technology and cybersecurity roles at the Federal Reserve Bank of Chicago and other organizations. David also served as an adjunct instructor and lecturer, teaching undergraduate courses at Purdue University Northwest. A strong advocate for continuous learning, David actively pursues professional growth in cybersecurity and IT through academic degrees, certifications, and speaking engagements worldwide. He holds an MBA with a concentration in Management Information Systems from Purdue University and multiple industry-recognized certifications, including Certified Information Systems Security Professional (CISSP), Certified Information Security Manager (CISM), Certified Data Privacy Solutions Engineer (CDPSE), and Certified Information Systems Auditor (CISA). His research interests include AI security and risk management, information management security controls, emerging technologies, cybersecurity compliance, and data protection.

CERIAS Security Seminar Podcast
Ali Al-Haj, Zero Trust Architectures and Digital Trust Frameworks: A Complementary or Contradictory Relationship?

CERIAS Security Seminar Podcast

Play Episode Listen Later Feb 26, 2025 52:06


This session explores the foundational concepts and practical applications of Zero Trust Architectures (ZTA) and Digital Trust Frameworks (DTF), two paradigms gaining traction in cybersecurity. While Zero Trust challenges the traditional notion of trust by enforcing strict access controls and authentication measures, Digital Trust seeks to build confidence through data integrity, privacy, and ethical considerations. Through this talk, we will investigate whether these approaches intersect, complement, or diverge, and what this means for the future of cybersecurity. Attendees will gain insights into implementing these frameworks to enhance both security and user confidence in digital environments. In addition to a practical overview, this talk will highlight emerging research areas in both domains.  About the speaker: Dr. Ali Al-Haj received his undergraduate degree in Electrical Engineering from Yarmouk University, Jordan, in 1985, followed by an M.Sc. degree in Electronics Engineering from Tottori University, Japan, in 1988 and a Ph.D. degree in Computer Engineering from Osaka University, Japan, in 1993. He then worked as a research associate at ATR Advanced Telecommunications Research Laboratories in Kyoto, Japan, until 1995. Prof. Al-Haj joined Princess Sumaya University for Technology, Jordan, in October 1995, where he currently serves as a Full Professor. He has published papers in dataflow computing, information retrieval, VLSI digital signal processing, neural networks, information security, and digital multimedia watermarking.

CERIAS Security Seminar Podcast
Adam Shostack, Risk is Not Axiomatic

CERIAS Security Seminar Podcast

Play Episode Listen Later Feb 12, 2025 64:25


This talk will look at how systems are secured at a practical engineering level, and at the science of risk. As we try to engineer secure systems, what are we trying to achieve, and how can we do that? Modern threat modeling offers some practical approaches we can apply today. The limits of those approaches are important, and we'll look at how risk management seems to be treated as an axiom, some history of risk as a discipline, and how we might use that history to build better risk management processes. About the speaker: Adam is the author of Threat Modeling: Designing for Security and Threats: What Every Engineer Should Learn from Star Wars. He's a leading expert on threat modeling, a consultant, expert witness, and game designer. He has decades of experience delivering security. His experience ranges across the business world from founding startups to nearly a decade at Microsoft. His accomplishments include:

Helped create the CVE; now an Emeritus member of the Advisory Board.
Fixed Autorun for hundreds of millions of systems.
Led the design and delivery of the Microsoft SDL Threat Modeling Tool (v3).
Created the Elevation of Privilege threat modeling game.
Co-authored The New School of Information Security.

Beyond consulting and training, Shostack serves as a member of the Blackhat Review Board, an advisor to a variety of companies and academic institutions, and an Affiliate Professor at the Paul G. Allen School of Computer Science and Engineering at the University of Washington.

CERIAS Security Seminar Podcast
Mustafa Abdallah, Effects of Behavioral Decision-Making in Proactive Security Frameworks in Networked Systems

CERIAS Security Seminar Podcast

Play Episode Listen Later Feb 5, 2025 59:32


Facing increasingly sophisticated attacks from external adversaries, networked systems owners have to judiciously allocate their limited security budget to reduce their cyber risks. However, when modeling human decision-making, behavioral economics has shown that humans consistently deviate from classical models of decision-making. Most notably, prospect theory, for which Kahneman won the 2002 Nobel Memorial Prize in Economics, argues that humans perceive gains, losses, and probabilities in a skewed manner. Furthermore, bounded rationality and imperfect best-response behavior have been frequently observed in human decision-making within the domains of behavioral economics and psychology. While there is a rich literature on these human decision-making factors in economics and psychology, most of the existing work studying the security of networked systems does not take these biases and noise into account. In this talk, we show our proposed novel behavioral security game models for the study of human decision-making in networked systems modeled by attack graphs. We show that behavioral biases lead to suboptimal resource allocation patterns. We also analyze the outcomes of protecting multiple isolated assets with heterogeneous valuations via decision- and game-theoretic frameworks. We show that behavioral defenders over-invest in higher-valued assets compared to rational defenders. We then propose different learning-based techniques and adapt two different tax-based mechanisms for guiding behavioral decision-makers towards optimal security investment decisions. In particular, we show the outcomes of such learning and mechanisms on different realistic networked systems. In total, our research establishes rigorous frameworks to analyze the security of both large-scale networked systems and heterogeneous isolated assets managed by human decision-makers, and provides new and important insights into security vulnerabilities that arise in such settings.
About the speaker: Dr. Mustafa Abdallah is a tenure-track Assistant Professor in the Computer and Information Technology (CIT) Department at Purdue University in Indianapolis, with a courtesy appointment at Purdue Polytechnic Institute. He earned his Ph.D. from the Elmore Family School of Electrical and Computer Engineering at Purdue University in 2022 and previously served as a tenure-track faculty member at IUPUI. His research focuses on game theory, behavioral decision-making, explainable AI, meta-learning, and deep learning, with applications in proactive security of networked systems, IoT anomaly detection, and intrusion detection. His work has been published in top security and AI venues, including IEEE S&P, ACM AsiaCCS, IEEE TCNS, IEEE IoT-J, Computers & Security, and ACM TKDD. He has received the Bilsland Fellowship, multiple IEEE travel grants, and internal research funding from IUPUI. Dr. Abdallah has extensive industrial research experience, including internships at Adobe Research (meta-learning for time-series forecasting), Principal Financial Group (Kalman filter modeling for financial predictions), and RDI (deep learning for speech technology applications), which led to a U.S. patent and multiple publications. He holds B.Sc. and M.Sc. degrees from Cairo University, with a focus on electrical engineering and engineering mathematics, respectively.
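The skewed probability perception that prospect theory describes can be made concrete with the standard Tversky-Kahneman weighting function. This is a textbook sketch, not code from the talk; the curvature parameter gamma = 0.61 is just a commonly cited estimate, not a value the speaker uses.

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting: an inverse-S curve where
    small probabilities are overweighted and moderate-to-large ones
    underweighted, relative to the true probability p."""
    num = p ** gamma
    return num / (num + (1 - p) ** gamma) ** (1 / gamma)

# A behavioral defender perceives a 1% breach probability as noticeably
# larger, and a 90% probability as smaller, than a rational defender would.
for p in (0.01, 0.5, 0.9):
    print(f"true p = {p:.2f}, perceived weight = {tk_weight(p):.3f}")
```

Plugging perceived rather than true probabilities into a defender's expected-loss calculation is one simple way such models capture why behavioral defenders misallocate security budgets.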

CERIAS Security Seminar Podcast
D. Richard Kuhn, How Can We Provide Assured Autonomy?

CERIAS Security Seminar Podcast

Play Episode Listen Later Jan 29, 2025 56:15


Safety- and security-critical systems require extensive test and evaluation, but existing high-assurance test methods are based on structural coverage criteria that do not apply to many black-box AI and machine learning components. AI/ML systems make decisions based on training data rather than conventionally programmed functions. Autonomous systems that rely on these components therefore require assurance methods that evaluate input data to ensure that they can function correctly in their environments with the inputs they will encounter. Combinatorial test methods can provide added assurance for these systems and complement conventional verification and test for AI/ML. This talk reviews some combinatorial methods that can be used to provide assured autonomy, including:
- Background on combinatorial test methods
- Why conventional test methods are not sufficient for many or most autonomous systems
- Where combinatorial methods apply
- Assurance based on input space coverage
- Explainable AI as part of validation
About the speaker: Rick Kuhn is a computer scientist in the Computer Security Division at NIST and a Fellow of the Institute of Electrical and Electronics Engineers (IEEE). He co-developed the role-based access control (RBAC) model that is the dominant form of access control today. His current research focuses on combinatorial methods for assured autonomy and hardware security/functional verification. He has authored three books and more than 200 conference or journal publications on cybersecurity, software failure, and software verification and testing.
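To make the input-space-coverage idea concrete, here is a minimal sketch of the pairwise special case of t-way combinatorial coverage (an illustration only, not NIST's actual tooling): it measures what fraction of all parameter-value pairs a test suite exercises. The parameter names and test suite are invented for the example.

```python
from itertools import combinations, product

def pairwise_coverage(params, tests):
    """Fraction of all value pairs, across every pair of parameters,
    that at least one test in the suite exercises."""
    names = list(params)
    # Every (parameter, value, parameter, value) combination that must appear.
    required = set()
    for a, b in combinations(names, 2):
        for va, vb in product(params[a], params[b]):
            required.add((a, va, b, vb))
    # The combinations the suite actually exercises.
    covered = {
        (a, t[a], b, t[b])
        for t in tests
        for a, b in combinations(names, 2)
    }
    return len(required & covered) / len(required)

params = {"os": ["linux", "windows"], "net": ["wifi", "eth"], "mode": ["auto", "manual"]}
# Four tests suffice to cover all 12 value pairs of these three binary parameters,
# versus 8 tests for exhaustive coverage.
suite = [
    {"os": "linux",   "net": "wifi", "mode": "auto"},
    {"os": "linux",   "net": "eth",  "mode": "manual"},
    {"os": "windows", "net": "wifi", "mode": "manual"},
    {"os": "windows", "net": "eth",  "mode": "auto"},
]
print(pairwise_coverage(params, suite))
```

The same counting generalizes to t-way coverage; the savings over exhaustive testing grow rapidly with the number of parameters.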

CERIAS Security Seminar Podcast
Nick Harrell, Mechanisms of Virality in Online Discourse

CERIAS Security Seminar Podcast

Play Episode Listen Later Jan 22, 2025 51:53


Information virality is an increasingly important topic in modern media environments, but it often remains overlooked in discussions about information security. This presentation will explain why information virality is a cybersecurity concern and how it can be exploited to manipulate public discourse. By utilizing theories from prominent cultural psychologists and employing natural language processing techniques, we will demonstrate methods for capturing viral discourse and identifying additional features linked to behavioral patterns that may motivate participation in discussions. This talk will focus solely on the methodology and our preliminary findings, as the research is still ongoing. About the speaker: Nick Harrell has served in the military for 18 years. Currently, he works as a data systems engineer, where he designs, builds, and maintains complex data systems that help Army leaders make informed decisions. He is on a fellowship at Purdue University, pursuing a Ph.D. in Information Security. Nick is a member of the International Information System Security Certification Consortium (ISC2) and the Project Management Institute (PMI). His research interests focus on Natural Language Processing (NLP) for Information Assurance, specifically on mechanisms that enhance user engagement in online public discourse.

CERIAS Security Seminar Podcast
Stanislav Kruglik, Querying Twice: How to Ensure We Obtain the Correct File in a Private Information Retrieval Protocol

CERIAS Security Seminar Podcast

Play Episode Listen Later Jan 15, 2025 43:47


Private Information Retrieval (PIR) is a cryptographic primitive that enables a client to retrieve a record from a database hosted by one or more untrusted servers without revealing which record was accessed. It has a wide range of applications, including private web search, private DNS, lightweight cryptocurrency clients, and more. While many existing PIR protocols assume that servers are honest but curious, we explore the scenario where dishonest servers provide incorrect answers to mislead clients into retrieving the wrong results. We begin by presenting a unified classification of protocols that address incorrect server behavior, focusing on the lowest level of resistance, verifiability, which allows the client to detect if the retrieved file is incorrect. Despite this relaxed security notion, verifiability is sufficient for several practical applications, such as private media browsing. Later on, we propose a unified framework for polynomial PIR protocols, encompassing various existing protocols that optimize download rate or total communication cost. We introduce a method to transform a polynomial PIR into a verifiable one without increasing the number of servers. This is achieved by doubling the queries and linking the responses using a secret parameter held by the client. About the speaker: Stanislav Kruglik has been a Research Fellow at the School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore, since April 2022. He earned a Ph.D. in the theoretical foundations of computer science from the Moscow Institute of Physics and Technology, Russia, in February 2022. He is an IEEE Senior Member and a recipient of the Simons Foundation Scholarship. With over 40 scientific publications, his work has appeared in top-tier venues, including IEEE Transactions on Information Forensics and Security and the European Symposium on Research in Computer Security.
His research interests focus on information theory and its applications, particularly in data storage and security.
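For readers new to the primitive, the classic two-server XOR-based PIR can be sketched in a few lines. This illustrates only the basic honest-but-curious idea, not the polynomial or verifiable protocols from the talk; the database and index are invented for the example.

```python
import secrets

def pir_query(n, i):
    """Client: build queries for a 2-server XOR PIR over an n-bit database.
    s1 is a uniformly random index subset; s2 differs only at position i."""
    s1 = [secrets.randbelow(2) for _ in range(n)]
    s2 = list(s1)
    s2[i] ^= 1  # the only position where the two queries differ
    return s1, s2

def pir_answer(db, query):
    """Server: XOR of the database bits the query selects. Each query alone
    is a uniformly random subset, so it reveals nothing about i."""
    acc = 0
    for bit, q in zip(db, query):
        if q:
            acc ^= bit
    return acc

def pir_reconstruct(a1, a2):
    """Client: XORing the two answers cancels every bit except db[i]."""
    return a1 ^ a2

db = [1, 0, 1, 1, 0, 0, 1, 0]
i = 3
q1, q2 = pir_query(len(db), i)
print(pir_reconstruct(pir_answer(db, q1), pir_answer(db, q2)))
```

Each server sees only a uniformly random subset, so neither learns which record the client wants. Verifiability, the talk's focus, additionally lets the client detect a server that answers dishonestly, which this toy scheme cannot do.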

CERIAS Security Seminar Podcast
Christopher Yeomans, Fairness as Equal Concession: Critical Remarks on Fair AI

CERIAS Security Seminar Podcast

Play Episode Listen Later Dec 4, 2024 52:49


Although existing work draws attention to a range of obstacles in realizing fair AI, the field lacks an account that emphasizes how these worries hang together in a systematic way. Furthermore, a review of the fair AI and philosophical literature demonstrates the unsuitability of 'treat like cases alike' and other intuitive notions as conceptions of fairness. That review then generates three desiderata for a replacement conception of fairness valuable to AI research: (1) It must provide a metatheory for understanding tradeoffs, entailing that it must be flexible enough to capture diverse species of objection to decisions. (2) It must not appeal to an impartial perspective (neutral data, objective data, or a final arbiter). (3) It must foreground the way in which judgments of fairness are sensitive to context, i.e., to historical and institutional states of affairs. We argue that a conception of fairness as appropriate concession in the historical iteration of institutional decisions meets these three desiderata. About the speaker: Dr. Chris Yeomans is Professor and Head of the Department of Philosophy at Purdue University. He earned his PhD at the University of California, Riverside in 2005 before joining the Purdue faculty in 2009. He is the author of three monographs, Freedom and Reflection: Hegel and the Logic of Agency, The Expansion of Autonomy: Hegel's Pluralistic Philosophy of Action, and The Politics of German Idealism: Law & Social Change at the Turn of the 19th Century (all from Oxford University Press). His work has been supported by the Purdue Provost's Faculty Fellowship for Study in a Second Discipline (history), the Alexander von Humboldt Foundation, and the National Science Foundation.

CERIAS Security Seminar Podcast
Mason Rice, Adversarial C2 inside OT Networks

CERIAS Security Seminar Podcast

Play Episode Listen Later Nov 20, 2024 50:35


This presentation outlines adversarial command and control attacks in OT networks. Focusing on the electrical grid, it highlights current gaps in critical infrastructure protection research. After discussing real-world examples, a fictional electrical grid is used to explore cyber-physical threats to OT systems and their mitigations. About the speaker: Dr. Mason Rice is the director of the Cyber Resilience and Intelligence Division at Oak Ridge National Laboratory. In this role, he is responsible for an R&D portfolio focused on advanced intelligent systems and resilient cyber-physical systems, including research into (1) AI for national security, (2) cybersecurity for critical systems, (3) machine-augmented intelligence, (4) vulnerability science, and (5) identity science. Following retirement from the Army, Dr. Rice joined ORNL in 2017 as the Cyber-Physical R&D Manager and was soon appointed as the first Group Leader for Resilient Cyber-Physical Systems at ORNL. He ultimately grew the group into four focused research groups, at which point he was selected to be the first Section Head of the new Resilient Cyber-Physical Systems Section.

CERIAS Security Seminar Podcast
Yanxue Jia, HomeRun: High-efficiency Oblivious Message Retrieval, Unrestricted

CERIAS Security Seminar Podcast

Play Episode Listen Later Nov 6, 2024 43:28


Oblivious Message Retrieval is designed to protect the privacy of users who retrieve messages from a bulletin board. Our work, HomeRun, stands out by providing unlinkability across multiple requests for the same recipient's address. Moreover, it does not impose a limit on the number of pertinent messages that can be received by a recipient, which thwarts "message balance exhaustion" attacks and enhances system usability. HomeRun also empowers servers to regularly delete the retrieved messages and the associated auxiliary data, which mitigates the constantly increasing computation costs and storage costs incurred by servers. Remarkably, none of the existing solutions offer all of these features collectively. About the speaker: Yanxue Jia is currently a post-doctoral researcher in the Department of Computer Science at Purdue University. In 2022, she obtained her Ph.D. in Computer Science from Shanghai Jiao Tong University. Her research mainly focuses on applied cryptography, especially secure computation, blockchain, and provable security. She is dedicated to designing efficient and secure cryptographic protocols that enhance collaboration while ensuring privacy protection. Her work has been published at top-tier conferences, such as USENIX Security, CCS, and Asiacrypt. For more detailed information about her academic and research background, please refer to her homepage https://yanxue820.github.io/

CERIAS Security Seminar Podcast
Roger Grimes, Many Ways to Hack MFA

CERIAS Security Seminar Podcast

Play Episode Listen Later Oct 30, 2024 113:12


Students: this is a hybrid event. You are strongly encouraged to attend in-person. Location:  STEW G52 (Suite 050B) WL Campus.  Everyone knows that multi-factor authentication (MFA) is more secure than a simple login name and password, but too many people think that MFA is a perfect, unhackable solution. It isn't! I can send you a regular phishing email and completely take control of your account even if you use a super-duper MFA token or smartphone app. I can hack ANY MFA solution at least a handful of different ways, although some forms of MFA are more resilient than others. Attend this presentation and learn the 12+ ways hackers can and do get around your favorite MFA solution. The presentation will include a (pre-filmed) hacking demo and real-life successful examples of every attack type. It will end by telling you how to better defend your MFA solution so that you get maximum benefit and security. About the speaker: Roger A. Grimes, CPA, CISSP, CEH, MCSE, CISA, CISM, CNE, yada, yada, Data-Driven Defense Evangelist for KnowBe4, Inc., is the author of 14 books and over 1400 articles on computer security, specializing in host security and preventing hacker and malware attacks. Roger is a frequent speaker at national computer security conferences and was the weekly security columnist at InfoWorld and CSO magazines between 2005 - 2019. He has worked at some of the world's largest computer security companies, including, Foundstone, McAfee, and Microsoft. Roger is frequently interviewed and quoted in the media including Newsweek, CNN, NPR, and WSJ. His presentations are fast-paced and filled with useful facts and recommendations.

CERIAS Security Seminar Podcast
Alessandro Acquisti, Behavioral Advertising and Consumer Welfare

CERIAS Security Seminar Podcast

Play Episode Listen Later Oct 23, 2024 53:46


Online behavioral advertising has raised privacy concerns due to its dependence on extensive tracking of individuals' behaviors and its potential to influence them. Those concerns have often been juxtaposed with the economic value consumers are expected to gain from receiving behaviorally targeted ads. Those purported economic benefits, however, have been more frequently hypothesized than empirically demonstrated. We present the results of two online experiments designed to assess some of the consumer welfare implications of behaviorally targeted advertising using a counterfactual approach. Study 1 finds that products in ads targeted to a sample of online participants were more relevant to them than randomly picked products, but were also more likely to be associated with lower-quality vendors and higher product prices compared to competing alternatives found among search results. Study 2 replicates the results of Study 1. Additionally, Study 2 finds that the higher relevance of products in targeted ads relative to randomly picked products is driven by participants having previously searched for the advertised products. The results help evaluate claims about the direct economic benefits consumers may gain from behavioral advertising. About the speaker: Alessandro Acquisti is the Trustees Professor of Information Technology and Public Policy at Carnegie Mellon University's Heinz College. His research combines economics, behavioral research, and data mining to investigate the role of privacy in a digital society.
His studies have promoted the revival of the economics of privacy, advanced the application of behavioral economics to the understanding of consumer privacy valuations and decision-making, and spearheaded the investigation of privacy and disclosures in social media. Alessandro has been the recipient of the PET Award for Outstanding Research in Privacy Enhancing Technologies, the IBM Best Academic Privacy Faculty Award, the IEEE Cybersecurity Award for Innovation, the Heinz College School of Information's Teaching Excellence Award, and numerous Best Paper awards. His studies have been published in journals across multiple disciplines, including Science, Proceedings of the National Academy of Sciences, Journal of Economic Literature, Management Science, Marketing Science, and Journal of Consumer Research. His research has been featured in global media outlets including the Economist, the New York Times, the Wall Street Journal, NPR, CNN, and 60 Minutes. His TED talks on privacy and human behaviour have been viewed over 1.5 million times. Alessandro is the director of the Privacy Economics Experiments (PeeX) Lab, the Chair of the CMU Institutional Review Board (IRB), and the former faculty director of the CMU Digital Transformation and Innovation Center. He is an Andrew Carnegie Fellow (inaugural class), and has been a member of the Board of Regents of the National Library of Medicine and a member of the National Academies' Committee on public response to alerts and warnings using social media and associated privacy considerations. He has testified before U.S. Senate and House committees and has consulted on issues related to privacy policy and consumer behavior with numerous agencies and organizations, including the White House's Office of Science and Technology Policy (OSTP), the US Federal Trade Commission (FTC), and the European Commission. He received a PhD from UC Berkeley and Master's degrees from UC Berkeley, the London School of Economics, and Trinity College Dublin. He has held visiting positions at the Universities of Rome, Paris, and Freiburg (visiting professor); Harvard University (visiting scholar); University of Chicago (visiting fellow); Microsoft Research (visiting researcher); and Google (visiting scientist). His research interests include privacy, artificial intelligence, and Nutella. In a previous life, he has been a soundtrack composer and a motorcycle racer (USGPRU).

CERIAS Security Seminar Podcast
Xiaoqi Chen, SmartCookie: Blocking Large-Scale SYN Floods with a Split-Proxy Defense on Programmable Data Planes

CERIAS Security Seminar Podcast

Play Episode Listen Later Oct 16, 2024 37:21


Despite decades of mitigation efforts, SYN flooding attacks continue to increase in frequency and scale, and adaptive adversaries continue to evolve. In this talk, I will briefly introduce some background on the SYN flooding attack, existing defenses via SYN cookies and the challenges of scaling them to very high line rates (100 Gbps+), and then present our latest work, SmartCookie (USENIX Security '24). SmartCookie's innovative split-proxy defense design leverages high-speed programmable switches for fast and secure SYN cookie generation and verification, while implementing a server-side agent using eBPF to enable scalability for serving benign traffic. SmartCookie can defend against attack rates of up to 130+ million packets per second with no packet loss, while also achieving 2x-6.5x lower end-to-end latency for benign traffic compared to existing switch-based hardware defenses. About the speaker: Xiaoqi Chen recently joined as an assistant professor at the School of Electrical and Computer Engineering, Purdue University. His research focuses on utilizing algorithm design for high-speed network data planes to improve network measurement and telemetry, implement closed-loop optimization for intelligent resource allocation and congestion control, as well as to enable novel approaches for enhancing network security and privacy.
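For background, the stateless SYN-cookie idea that SmartCookie hardens and accelerates can be sketched as follows. This is an illustrative host-side sketch, not SmartCookie's split switch/eBPF design; the key, the MAC construction, and the 64-second time slots are all assumptions made for the example.

```python
import hmac
import hashlib
import time

SECRET = b"server-local secret key"  # illustrative; a real server rotates keys

def syn_cookie(src_ip, src_port, dst_ip, dst_port, now=None):
    """Encode connection state into a 32-bit value the server need not store.
    The time slot (64-second granularity here) bounds how long a cookie
    remains valid, so replayed cookies expire."""
    slot = int(now if now is not None else time.time()) // 64
    msg = f"{src_ip}:{src_port}>{dst_ip}:{dst_port}|{slot}".encode()
    mac = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(mac[:4], "big")

def verify_cookie(cookie, src_ip, src_port, dst_ip, dst_port, now=None):
    """Recompute the cookie for the current and previous time slots and
    compare in constant time, tolerating ACKs that straddle a slot edge."""
    t = now if now is not None else time.time()
    return any(
        hmac.compare_digest(
            cookie.to_bytes(4, "big"),
            syn_cookie(src_ip, src_port, dst_ip, dst_port, t - 64 * k).to_bytes(4, "big"),
        )
        for k in (0, 1)
    )
```

Because verification is a pure recomputation, the server keeps no per-SYN state, which is what makes the defense robust to floods; SmartCookie's contribution is performing this generation/verification securely at switch line rate while an eBPF agent serves benign connections.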

CERIAS Security Seminar Podcast
Zhou Li, The Road Towards Accurate, Scalable and Robust Graph-based Security Analytics: Where Are We Now?

CERIAS Security Seminar Podcast

Play Episode Listen Later Oct 9, 2024 55:08


Graph learning has gained considerable traction in academia and industry as a solution to detect complex cyber-attack campaigns. By constructing a graph that connects various network/host entities and modeling the benign/malicious patterns, threat-hunting tasks like data provenance and entity classification can be automated. We term the systems under this theme Graph-based Security Analytics (GSAs). In this talk, we first provide a cursory view of GSA research in the recent decade, focusing on the academic side. Then, we elaborate on a few GSAs developed in our lab, which are designed for edge-level intrusion detection (Argus), subgraph-level attack reconstruction (ProGrapher), and storage reduction (SEAL). At the end of the talk, we will review the progress and pitfalls along the development of GSA research and highlight some research opportunities. About the speaker: Zhou Li is an Assistant Professor at UC Irvine, EECS department, leading the Data-driven Security and Privacy Lab. Before joining UC Irvine, he worked as Principal Research Scientist at RSA Labs from 2014 to 2018. His research interests include Internet security, organizational network security, privacy-enhancing technologies, and security and privacy for machine learning. He received the NSF CAREER award, Amazon Research Award, Microsoft Security AI award, and IRTF Applied Networking Research Prize.

CERIAS Security Seminar Podcast
Michail Maniatakos, Dissecting the Software Supply Chain of Modern Industrial Control Systems

CERIAS Security Seminar Podcast

Play Episode Listen Later Oct 2, 2024 55:37


Recent years have been pivotal in the field of Industrial Control Systems (ICS) security, with a large number of high-profile attacks exposing the lack of a design-for-security initiative in ICS. The evolution of ICS, abstracting the control logic to a purely software level hosted on a generic OS, combined with hyperconnectivity and the integration of popular open-source libraries providing advanced features, has expanded the ICS attack surface by increasing the entry points and by allowing traditional software vulnerabilities to be repurposed for the ICS domain. In this seminar, we will shed light on the security landscape of modern ICS, dissecting firmware from the dominant vendors and motivating the need to employ appropriate vulnerability assessment tools. We will present methodologies for blackbox fuzzing of modern ICS, both directly using the device and by using the development software. We will then proceed with methodologies for hotpatching, since ICS cannot be easily restarted in order to patch any discovered vulnerabilities. We will demonstrate our proposed methodologies on various critical infrastructure testbeds. About the speaker: Michail (Mihalis) Maniatakos is an Associate Professor of Electrical and Computer Engineering at New York University (NYU) Abu Dhabi, UAE, and a Research Associate Professor at the NYU Tandon School of Engineering, New York, USA. He is the Director of the MoMA Laboratory (nyuad.nyu.edu/momalab), NYU Abu Dhabi. He received his Ph.D. in Electrical Engineering, as well as M.Sc. and M.Phil. degrees, from Yale University. He also received B.Sc. and M.Sc. degrees in Computer Science and Embedded Systems, respectively, from the University of Piraeus, Greece. His research interests, funded by industrial partners, the US government, and the UAE government, include privacy-preserving computation and industrial control systems security.

CERIAS Security Seminar Podcast
Chance Younkin, Shamrock Cyber – When Luck Just Isn't Enough

CERIAS Security Seminar Podcast

Play Episode Listen Later Sep 25, 2024 60:18


In the past 30 years, the world has experienced a booming IoT market, advances in automation and OT systems, and an ever-increasing dependence on cyber in every aspect of modern life. This target-rich environment is ideal for cyber adversaries seeking access to systems and devices for financial gain, espionage, digital harassment, or outright cyber-warfare. Naturally, this leads to expanded attack surfaces, increased risk, and a complex and costly cyber arms race. By combining consequences, threats, and vulnerabilities and mapping them to mission risk, Shamrock Cyber significantly reduces the effort to prioritize, communicate, and mitigate risk. The Shamrock approach enables defenders to focus on their domains and yet understand and operate based on the domains of others. Through four kinds of analysis (consequence, threat, vulnerability, and risk), it offers multiple approaches to suit the needs of many missions. Shamrock Cyber uniquely blends traditionally effective activities with innovative mission-focused analyses that unite the equities of executives, managers, cyber practitioners, and system developers. Shamrock Cyber does not depend on leprechauns and luck to find cybersecurity gold at the end of the rainbow. Instead, it focuses on combining consequences, threats, and vulnerabilities to communicate and reduce mission risk, along with explaining the WHY to all involved. About the speaker: Born in Indiana and growing up in Butte, Montana from the age of 4, Chance received a BS in Computer Science at Montana Tech in Butte in 1988. He then pursued an MS in computer science concentrating on visualization at Montana State in Bozeman, Montana. Following graduation from MSU, Chance joined Pacific Northwest National Laboratory in July of 1991. He's been there ever since and has worked as a software developer, architect, project manager, and task lead on projects ranging from Air Force cockpit software to molecular visualization, atmospheric science, text visualization, data quality, and, for the last 15 years, cybersecurity. Chance leads software and system security analysis projects spanning building technology, nuclear, and radiation monitoring systems. He is passionate about building bridges between researchers, engineers, and operations in the cybersecurity domain.

CERIAS Security Seminar Podcast
Ashok Vardhan Raja, Exploiting Vulnerabilities in AI-Enabled UAV: Attacks and Defense Mechanisms

CERIAS Security Seminar Podcast

Play Episode Listen Later Sep 18, 2024 54:25


Recorded: 09/18/2024, CERIAS Security Seminar at Purdue University. In recent years, UAVs have seen significant growth in both military and civilian applications, thanks to their high mobility and advanced sensing capabilities. This expansion has been further accelerated by rapid advancements in AI algorithms and hardware. While AI integration enhances the intelligence and efficiency of UAVs, it also introduces new security and safety concerns due to potential vulnerabilities in the underlying AI models. These vulnerabilities can be exploited by malicious actors, leading to severe security risks and operational failures. This talk will focus on securing the integration of AI into UAVs to ensure their resilience in adversarial environments. We will begin by analyzing the data sensing and processing pipeline of key sensors used in AI-enabled UAV operations, identifying areas where vulnerabilities may exist. Following this, we will explore how to develop defense mechanisms to strengthen the robustness of these AI-driven UAV systems against potential threats. AI-enabled anomaly detection and AI-enabled UAV infrastructure inspection will be leveraged as case studies in this talk. The talk will also cover the use of Large Language Models to improve the security of this integration. About the speaker: Ashok Vardhan Raja is an Assistant Professor of Cybersecurity in the Department of Computer Information Technology and Graphics of the College of Technology at Purdue University Northwest. His research is on the secure integration of Artificial Intelligence (AI) and Cyber-Physical Systems (CPS) such as UAVs for robust operations. He is expanding his current work by using swarms of UAVs to address security issues and by extending it to other domains in the integration of AI and CPS.

CERIAS Security Seminar Podcast
Russel Waymire, IDART (Information Design Assurance Red Team): A Red Team Assessment Methodology

CERIAS Security Seminar Podcast

Play Episode Listen Later Sep 11, 2024 65:59


The Information Design Assurance Red Team (IDART) methodology is optimized to evaluate system designs and identify vulnerabilities by adopting, in detail, the varying perspectives of a system's most likely adversaries. The results provide system owners with an attacker's-eye view of their system's strengths and weaknesses. IDART can be applied to a diverse range of complex networks, systems, and applications, including those that mix cyber technology with industrial machinery or other equipment. The methodology can be used throughout a system's lifecycle, but assessments are less expensive and more beneficial during design and development, when weaknesses can be found and mitigated more easily. Developed at Sandia National Laboratories in the mid-1990s and updated frequently, the IDART framework is NIST-recognized and designed for repeatability and measurable results. A typical assessment includes the following high-level activities:
- Characterizing the target system and its architecture
- Identifying nightmare consequences
- Analyzing the system for security strengths and weaknesses
- Identifying potential vulnerabilities that could lead to nightmare consequences
- Documenting results and providing prioritized mitigation strategies
IDART assessors think like adversaries. To do this, they first develop a range of categorical profiles or "models" of a system's most likely attackers. Factors include an adversary's specific capabilities (i.e., domain knowledge, access, resources) as well as intangibles such as motivation and risk tolerance. The assessment team then uses this adversarial lens to measure the risks posed by system weaknesses and to prioritize mitigations. For efficiency and thoroughness, IDART relies on a free exchange of information. System personnel share documentation and participate in discussions that help assessors efficiently find as many attack paths as possible. In turn, the IDART team is transparent in conducting its assessment activities, giving system owners greater confidence in the work and the resulting analysis. All of these traits combine to make IDART a highly flexible tool. The methodology helps system owners identify critical vulnerabilities, understand adversary threats, and weigh appropriate strategies for delivering components, systems, and plans that are both effective and secure. About the speaker: Russel Waymire is a manager at Sandia National Laboratories in the area of Cyber-Physical Security. Mr. Waymire has over 25 years of experience in the design, implementation, testing, reverse engineering, and securing of software and hardware systems in IT and OT environments. Mr. Waymire began his career as a software developer at Honeywell Defense Avionic Systems in Albuquerque, New Mexico, where he developed the requirements, design, implementation, and testing of software for a variety of platforms that included the F-15, C-27J, KC-10, C-130, and C-5 aircraft. He then went on to Sandia National Laboratories in Albuquerque, New Mexico, where he has had an opportunity to work on a wide range of projects, including algorithms in combinatorial optimization, software development for mod-sim force-on-force interactions and cognition/AI development, satellite software for operational systems in orbit, cyber vulnerability assessments for various US government agencies, and cyber-physical assessments for numerous foreign partners that included physical and cyber upgrades at nuclear power plants and research reactors worldwide. Russel currently uses his experience and insights to lead a team researching innovative ways to protect critical infrastructure, space systems, and other high-consequence operational technologies.

CERIAS Security Seminar Podcast
Chris Kubecka de Medina, Empowering the Next Generation of Digital Defenders: Ethics in Cybersecurity and Emerging Technologies

CERIAS Security Seminar Podcast

Play Episode Listen Later Sep 4, 2024 60:25


At Purdue University, Ms. Kubecka will discuss how technologists, especially the next generation of digital defenders, can be empowered to consider ethics in cybersecurity, privacy, and emerging technologies, and how they can use their power for good in tech. About the speaker: Ms. Chris Kubecka is a globally recognized cybersecurity expert with over two decades of experience, known for her pivotal role in digital defense and her commitment to ethical technology practices. She has established a formidable reputation for protecting both national and international cybersecurity interests, often at the highest levels of government and industry. Ms. Kubecka's career began with a strong technical foundation, rapidly advancing into leadership roles that demand both tactical acumen and strategic foresight. Her expertise spans cyber warfare, digital intelligence, artificial intelligence, and the development of robust cybersecurity frameworks, including those addressing the challenges of post-quantum computing. A thought leader in cybersecurity, Ms. Kubecka frequently contributes to international conferences, policy discussions, and academic forums. She is the author of several influential books, including Hack The World With OSINT, and has published numerous research papers on platforms like ResearchGate. Her work often explores the ethical implications of emerging technologies and the critical role of privacy in cybersecurity. Ms. Kubecka serves as the CEO and Founder of HypaSec NL, Senior Cybersecurity Advisor for Elemental Concept, and Chief Hacktress for Unit6 Technologies. Her significant contributions to the field have been recognized with numerous awards, including The Order of Thor. She is also a former Distinguished Chair for the Middle East Institute's Cyber Security and Emerging Technology Program. Throughout her career, Ms. Kubecka has led critical operations that highlight the intersection of cybersecurity and human rights. 
During the conflict in Ukraine, she used her expertise to facilitate the evacuation of civilians, applying digital intelligence to support these missions. In Venezuela, her investigations uncovered the weaponization of government-backed applications, such as the Ven App and Patrica App, which are used for surveillance and repression of dissent. Her research revealed how these apps are being exploited to target citizens, leading to arrests, disappearances, and even deaths, underscoring the dire consequences of unethical technology use. Ms. Kubecka's background as a USAF aviator and former member of the USAF Space Command highlights her extensive commitment to defense in both the physical and digital realms. Her journey began at a young age, with her early technical skills leading to her first major hacking achievement at age ten.

CERIAS Security Seminar Podcast
David Haddad, AI's Security Maze: Navigating AI's Top Cybersecurity Risks Through Strategic Planning and Resilient Operations

CERIAS Security Seminar Podcast

Play Episode Listen Later Aug 28, 2024 56:59


Students: this is a hybrid event. You are strongly encouraged to attend in-person. Location:  STEW G52 (Suite 050B) WL Campus. The rapid commercialization of GenAI products and services has significantly broadened the landscape of potential attack vectors targeting enterprise infrastructure, operations, and processes. This evolution poses substantial risks to enterprise assets and operations, requiring continuous risk, attack, and threat surface analysis. This exploratory study delineates critical findings across three key dimensions:

- An analysis of current market trends related to AI-driven cyber and information security risks;
- An overview of emerging regulatory requirements and compliance efforts specific to AI technologies; and
- Strategic initiatives for identifying and mitigating these risks, informed by insights from both industry and academia.

The presentation provides a roadmap for technology practitioners navigating the complex intersection of AI innovation and cybersecurity. About the speaker: David is an Assistant Director in Ernst & Young's Americas Technology Risk Management practice. He focuses on Americas and Global technology risk assessments, supports IT and data regulatory efforts, and coordinates IT risk management processes for member firms. He brings over eight years of external and internal experience in information security consulting, technology, IT audit, and GRC across public and private industries. He previously served as an adjunct instructor and lecturer for undergraduate programs at Purdue University Northwest. David is pivotal in supporting EY's strategic technology, information security, and compliance projects. 
His specialties include continuous risk identification & analysis, GRC strategy development, security control testing analysis (e.g., NIST, ISO), and solutions development to manage enterprise risks across various IT domains and emerging technologies (e.g., AI). David is a passionate and dedicated professional who embodies the mindset of a continuous learner in IT, information security, emerging technologies, and data privacy. He proactively expands his knowledge and skillsets by pursuing advanced degrees, obtaining professional certifications, and conducting domestic & international speaking engagements.

CERIAS Security Seminar Podcast
Shagufta Mehnaz, Privacy and Security in ML: A Priority, not an Afterthought

CERIAS Security Seminar Podcast

Play Episode Listen Later Aug 21, 2024 63:19


The increased use of machine learning (ML) technologies on proprietary and sensitive datasets has led to increased privacy breaches in many sectors, including healthcare and personalized medicine. Although federated learning (FL) systems allow multiple parties to train ML models collaboratively without sharing their raw data with third-party entities, security concerns arise from the involvement of potentially malicious FL clients aiming to disrupt the learning process. In this talk, I will present how my research addresses these challenges by developing frameworks to analyze and improve the privacy and security aspects of ML. First, I will talk about model inversion attacks that allow an adversary to infer part of the sensitive training data with only black-box access to a vulnerable classification model. I will then present FLShield, a novel FL framework that utilizes benign data from FL participants to validate the local models before taking them into account for generating the global model. I will conclude with a discussion of challenges in building practical data-driven systems that take into account data privacy and security while keeping the intended functionality of the system unimpaired. About the speaker: Shagufta Mehnaz is an Assistant Professor of the Computer Science and Engineering department at The Pennsylvania State University. She is broadly interested in the areas of privacy, security, and machine learning. Her research focuses on enhancing the privacy and security of machine learning techniques and models themselves, as well as developing novel machine learning techniques to protect data security and privacy. She directs the PRIvacy, Security, and Machine Learning lab (PRISMLab) at Penn State. She obtained her Ph.D. in Computer Science from Purdue University in 2020. She also received the Bilsland Dissertation Fellowship at Purdue. 
She was one of the 100 Computer Science Young Researchers selected worldwide for the Heidelberg Laureate Forum (HLF) in 2018.
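The defense idea behind FLShield described in the abstract above, validating clients' local models against benign data before they influence the global model, can be illustrated with a minimal sketch. This is a toy numpy implementation of that general validation-then-aggregate pattern under assumed details (a linear classifier, an accuracy threshold, plain averaging), not the FLShield algorithm itself.

```python
import numpy as np

def validate_update(local_w, X_val, y_val, threshold=0.6):
    """Accept a client's model only if it performs adequately on
    held-out benign validation data (assumed linear classifier and
    accuracy threshold, chosen for illustration)."""
    preds = (X_val @ local_w > 0).astype(int)
    accuracy = (preds == y_val).mean()
    return accuracy >= threshold

def robust_aggregate(global_w, local_updates, X_val, y_val):
    """Average only the local models that pass validation; keep the
    current global model if every update is rejected."""
    accepted = [w for w in local_updates
                if validate_update(w, X_val, y_val)]
    if not accepted:
        return global_w
    return np.mean(accepted, axis=0)
```

With benign validation data that the poisoned model misclassifies, the malicious update is filtered out before averaging, so it never corrupts the global model.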

CERIAS Security Seminar Podcast
David Stracuzzi, Defining Trusted Artificial Intelligence for the National Security Space

CERIAS Security Seminar Podcast

Play Episode Listen Later Apr 24, 2024 51:26


For the past four years, Sandia National Laboratories has been conducting a focused research effort on Trusted AI for national security problems. The goal is to develop the fundamental insights required to use AI methods in high-consequence national security applications while also improving the practical deployment of AI. This talk looks at key properties of many national security problems along with Sandia's ongoing effort to develop a certification process for AI-based solutions. Along the way, we will examine several recent and ongoing research projects, including how they contribute to the larger goals of Trusted AI.  The talk concludes with a forward-looking discussion of remaining research gaps. About the speaker: David manages the Machine Intelligence and Visualization department, which conducts cutting-edge research in machine learning and artificial intelligence for national security applications, including the advanced visualization of data and results.  David has been studying machine learning in the broader context of artificial intelligence for over 15 years.  His research focuses on applying machine learning methods to a wide variety of domains with an emphasis on estimating the uncertainty in model predictions to support decision making.  He also leads the Trusted AI Strategic Initiative at Sandia, which seeks to develop fundamental insights into AI algorithms, their performance and reliability, and how people use them in national security contexts.  Prior to joining Sandia, David spent three years as research faculty at Arizona State University and one year as a postdoc at Stanford University developing intelligent agent architectures. He received his doctorate in 2006 and MS in 2002 from the University of Massachusetts at Amherst for his work in machine learning.  
David earned his Bachelor of Science from Clarkson University in 1998. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.