Talk Lineup

Hardware Identity Is Hard: Securing Edge & AI Agents With Open Standards

Time: Friday @ 1330

Speaker: Andrew McCormick

Verifiable identity is a hard problem for physical hardware, especially in edge environments where AI is increasingly being deployed. Attributes like MAC and IP addresses are easily spoofed and unstable. Hardware birth certificates are infeasible to manage, secure, and revoke at scale. This session shows how open standards (RATS, SPIFFE, SCITT) enable verifiable workload identities for the modern age.
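
To make the contrast concrete, here is a minimal sketch (ours, not the speaker's) of the kind of check SPIFFE-style naming enables: workloads are identified by a URI in a trust domain rather than by spoofable network attributes. The trust domain and workload path below are hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical trust domain; a real deployment takes this from its identity-plane config.
TRUSTED_DOMAINS = {"edge.example.org"}

def is_valid_spiffe_id(spiffe_id: str) -> bool:
    """Accept only well-formed SPIFFE IDs (spiffe://<trust-domain>/<path>)
    from a trusted domain, rather than trusting a MAC or IP address."""
    parsed = urlparse(spiffe_id)
    return (
        parsed.scheme == "spiffe"
        and parsed.netloc in TRUSTED_DOMAINS
        and len(parsed.path) > 1
    )

print(is_valid_spiffe_id("spiffe://edge.example.org/agent/inference"))  # True
print(is_valid_spiffe_id("spiffe://rogue.example.net/agent"))           # False
```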


Stop Trading Security For Predictive Power: Retrofitting Diagnostic Capabilities For In-Place AI Systems In Electrical Grid Operational Technology (OT)

Time: Friday @ 1430

Speaker: Emily Soward

Understand why and how AI software is already impacting predictive electrical grid management and how to compensate. This talk focuses on how challenges can be addressed to improve forensic and diagnostic capabilities in AI systems used for OT. Through simplified case studies, learn how predictive AI systems already account for diverse physical infrastructure, energy usage, weather, and operational conditions. We will look at how AI is impacting grid maintenance and inspection, how large loads and load profiling impact energy distribution, how weather-pattern changes and extremes can be missed, and more. Building on this foundation, we will discuss challenges for AI systems in OT and some ways to balance agility in adopting new technologies like AI against the need for security analysis capabilities. Attendees will walk away with practical tips for developing line-of-sight to existing AI systems: how to generate the right level of telemetry, when to rip-and-refactor versus retrofit, and how to incrementally add sources. No AI experience needed; no electrical grid experience assumed.
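
As a toy illustration of "generating the right level of telemetry" (our sketch, not the speaker's tooling): a prediction log that captures enough context for later forensic review. The model name and feature fields are hypothetical.

```python
import hashlib, json, time

def log_prediction(model_version: str, features: dict, prediction: float, sink):
    """Append one telemetry record per model decision so analysts can later
    reconstruct what the model saw and what it predicted."""
    record = {
        "ts": time.time(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "features": features,
        "prediction": prediction,
    }
    sink.write(json.dumps(record) + "\n")

with open("grid_model_telemetry.jsonl", "a") as sink:
    log_prediction("load-forecast-v12", {"feeder": "F7", "temp_c": 31.5}, 182.4, sink)
```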


Adopting AI To Protect Industrial Control Systems: Assessing Challenges and Opportunities From The Operators’ Perspective

Time: Friday @ 1530

Speaker: Clement Fung

Industrial control systems (ICS) manage critical physical processes such as electric distribution and water treatment. Attackers infiltrate ICS and manipulate these critical processes, causing damage and harm. AI-based approaches can detect such attacks and raise alarms for operators, but they are not commonly used in practice and it is unclear why. In this work, we directly asked practitioners about current practices for alarms in ICS and their perspectives on adopting AI to support these practices. We conducted 18 semi-structured interviews with practitioners who work on protecting ICS, through which we identified tasks commonly performed for alarms such as raising alarms when anomalies are detected, coordinating operator response to alarms, and analyzing data to improve alarm rule sets. We found that practitioners often struggle with tasks beyond anomaly detection, such as alarm diagnosis, and we propose designing AI-based tools to support these tasks. We also identified barriers to adopting AI in ICS (e.g., limited data collection, low trust in vendor technology) and recommend ways to make AI-based tools more effective and trusted by practitioners, such as demonstrating model transparency through interactive pilot projects.


Attributions for ML-Based ICS Anomaly Detection: From Theory To Practice

Time: Saturday @ 1030 (Forsythe Room)

Speaker: Clement Fung

Industrial Control Systems (ICS) govern critical infrastructure like power plants and water treatment plants. ICS can be attacked through manipulations of their sensor or actuator values, causing physical harm. A promising technique for detecting such attacks is machine-learning-based anomaly detection, but it does not identify which sensor or actuator was manipulated, which makes it difficult for ICS operators to diagnose the anomaly’s root cause. Prior work has proposed using attribution methods to identify which features caused an ICS anomaly-detection model to raise an alarm, but it is unclear how well these attribution methods work in practice. In this paper, we compare state-of-the-art attribution methods for the ICS domain on real attacks from multiple datasets. We find that attribution methods for ICS anomaly detection do not perform as well as suggested in prior work and identify two main reasons. First, anomaly detectors often detect attacks either immediately or significantly after the attack start; we find that attributions computed at these detection points are inaccurate. Second, attribution accuracy varies greatly across attack properties, and attribution methods struggle with attacks on categorical-valued actuators. Despite these challenges, we find that ensembles of attributions can compensate for weaknesses in individual attribution methods. Towards practical use of attributions for ICS anomaly detection, we provide recommendations for researchers and practitioners, such as the need to evaluate attributions with diverse datasets and the potential for attributions in non-real-time workflows.
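
For flavor, a minimal sketch (ours, not the paper's code) of one way to ensemble attributions, assuming each method returns a per-feature score vector; the three methods and their scores below are hypothetical.

```python
import numpy as np

def rank_normalize(scores: np.ndarray) -> np.ndarray:
    """Map raw attribution scores to ranks in [0, 1] so methods with
    different scales can be combined."""
    return scores.argsort().argsort() / (len(scores) - 1)

def ensemble_attribution(per_method_scores) -> np.ndarray:
    """Average rank-normalized attributions across methods; ensembling can
    compensate for weaknesses in any single attribution method."""
    return np.mean([rank_normalize(s) for s in per_method_scores], axis=0)

# Hypothetical scores over 5 sensors/actuators from three methods.
saliency   = np.array([0.1, 0.9, 0.2, 0.05, 0.3])
shap_like  = np.array([0.2, 0.7, 0.6, 0.10, 0.1])
lemna_like = np.array([0.0, 0.8, 0.1, 0.40, 0.2])
print(ensemble_attribution([saliency, shap_like, lemna_like]).argmax())  # 1
```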


Android Malware Obfuscation

Time: Friday @ 1030 (Ogelthorpe Room)

Speaker: Joshua Satterfield

Android malware analysis is a constant cat-and-mouse game: malware developers apply ever more complex obfuscation techniques, and malware analysts must develop more sophisticated techniques to reverse engineer the malware. This talk will explain different Android malware obfuscation techniques and how to bypass them. It will cover an example piece of malware, created by the presenter for the purposes of the talk, demonstrate two different methods of obfuscating it, and show how to bypass both obfuscation methods statically. The presenter will then show a real in-the-wild piece of Android malware that is difficult to analyze using static analysis techniques and demonstrate how to reverse engineer it using dynamic analysis.
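
For a taste of the static side (a toy example of ours, not the talk's sample): malware commonly hides strings such as C2 URLs behind repeating-key XOR, and once the key is recovered from the decompiled code, the strings can be read without running the app. The key and URL below are made up.

```python
def xor_decrypt(blob: bytes, key: bytes) -> str:
    """Recover a string hidden with repeating-key XOR, a common lightweight
    obfuscation in Android malware."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob)).decode()

key = b"k3y"
obfuscated = bytes(b ^ key[i % len(key)] for i, b in enumerate(b"http://c2.example.com"))
print(xor_decrypt(obfuscated, key))  # http://c2.example.com
```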


When Java Meets IoT: Challenges For Secure Operation

Time: Saturday @ 1110 (Ogelthorpe Room)

Speaker: Marc Schoenefeld

Java offers power and safety for IoT, but it brings server-like risk into edge contexts. For secure operation it is critical to know your attack surface and to put secure communication and authentication first. That awareness works best alongside the practices of coding defensively, managing dependencies, and modernizing your JDK and libraries to maintain a resilient IoT infrastructure. The presentation will also give generic guidance for non-Java scenarios.


Acoustic Side Channel Attack On Keyboards Based On Typing Patterns

Time: Saturday @ 1130 (Forsythe Room)

Speaker: Alireza Taheritajar

Acoustic side-channel attacks on keyboards can bypass security measures in many systems that use keyboards as an input device. These attacks aim to reveal users’ sensitive information by targeting the sounds made by their keyboards as they type. Most existing approaches in this field ignore the negative impact of typing patterns and environmental noise on their results. This paper addresses these shortcomings by proposing a practical method that takes the user’s typing pattern into account in a realistic environment. Our method achieved an average success rate of 43% across all our case studies in real-world scenarios.
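
For context, a minimal sketch (ours, not the paper's pipeline) of the usual first step in such attacks: segmenting candidate keystrokes from a recording by short-window energy, assuming a mono signal as a NumPy array. The window size and threshold are illustrative.

```python
import numpy as np

def segment_keystrokes(signal: np.ndarray, rate: int, thresh: float = 0.2):
    """Return candidate keystroke onset times (seconds) by thresholding
    short-window energy, before per-user typing-pattern features are built."""
    win = int(0.01 * rate)                        # 10 ms analysis windows
    frames = signal[: len(signal) // win * win].reshape(-1, win)
    energy = (frames ** 2).mean(axis=1)
    hot = energy > thresh * energy.max()
    onsets = np.flatnonzero(hot[1:] & ~hot[:-1]) + 1
    return onsets * win / rate

rate = 44_100
sig = 0.01 * np.random.default_rng(0).standard_normal(rate)  # 1 s of noise
sig[22_050:22_491] += 0.5                         # one simulated key press
print(segment_keystrokes(sig, rate))              # ~[0.5]
```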


Rethinking Segmentation: Why VLANs Fail In Critical Infrastructure Networks

Time: Saturday @ 1150 (Ogelthorpe Room)

Speaker: Florian Doumenc

In critical networks, VLANs are still widely used to implement segmentation. But VLAN-based designs introduce substantial risk: flat broadcast domains, error-prone configurations, and a lack of application-layer visibility. 802.1X, often proposed as a compensating control, brings its own complexity and is a poor fit for OT protocols and devices.

This session will outline why VLANs and 802.1X fail as segmentation mechanisms in critical environments, and why zero-trust principles offer a better foundation. We’ll detail how proxy-based segmentation, software-defined DMZs, and protocol-aware (L5–L7) controls enable more precise, auditable, and fault-tolerant network isolation.
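
To illustrate what a protocol-aware control can do that a VLAN cannot, a minimal sketch (ours, not the speaker's product) of an L7 filter that permits only read-class Modbus/TCP requests; the read-only policy is a hypothetical example.

```python
import struct

READ_ONLY_FUNCTIONS = {1, 2, 3, 4}   # read coils / discrete inputs / registers

def allow_modbus_frame(frame: bytes) -> bool:
    """L7 check a segmentation proxy can enforce but a VLAN cannot:
    permit only read-class Modbus/TCP requests."""
    if len(frame) < 8:
        return False
    _txn, proto_id, length, _unit, function = struct.unpack(">HHHBB", frame[:8])
    return proto_id == 0 and length >= 2 and function in READ_ONLY_FUNCTIONS

# "Read holding registers" passes; "write single register" is dropped.
read_req = struct.pack(">HHHBB", 1, 0, 6, 1, 3) + struct.pack(">HH", 0, 10)
write_req = struct.pack(">HHHBB", 2, 0, 6, 1, 6) + struct.pack(">HH", 0, 999)
print(allow_modbus_frame(read_req), allow_modbus_frame(write_req))  # True False
```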


Styx Emulator: Public Release and Future Roadmap

Time: Saturday @ 1330 (Forsythe Room)

Speaker: Jordan Moore

We introduce Styx, a modern, composable emulator written in Rust and focused on embedded and DSP platforms. Styx is newly released to the public under a BSD 2-Clause license on GitHub under the styx-emulator organization.

Styx encapsulates the core emulation mechanics and functionality into modular components, simplifying new emulator development. Styx focuses on reducing the investment required to get emulation running for a target system and lets users tailor the libraries to their specific needs and context. Historically, researchers have been able to add new architecture support, integrate with physics simulators, connect multiple emulators, spin up fuzzers, attach GDB, and more, as long as they were comfortable working with the Styx codebase.

The future of Styx is about unifying the capabilities that have been built and prototyped over the years to dramatically reduce the effort required to iterate on target emulations. This presentation will highlight Styx's current features and support before diving into upcoming features on the horizon. This release of the Styx Emulator is our first step toward rewriting the emulation stack from the ground up to support the hard-to-reach embedded targets prevalent in our critical infrastructure.


An Adversarial Loop For Robust Phishing Detection In Critical Infrastructure Email Systems

Time: Saturday @ 1330 (Ogelthorpe Room)

Speaker: Aayush Kumar

In critical infrastructure sectors—transportation, energy, and manufacturing—email remains a primary vector for attackers. Traditional filters falter against adaptive phishing campaigns. We introduce an adversarial loop framework that co-trains an ensemble Defender and a spectrum of Attackers (from static templates to a reinforcement learning agent). A Planner orchestrates iterative rounds of attack generation, model retraining, and calibration, forcing the Defender to learn increasingly sophisticated evasion tactics. On real and synthetic datasets, our method sustains an F1-score of 0.80 and ROC-AUC of 0.90 under RL-driven attacks (vs. F1=0.54 for a static baseline). We will demonstrate integration into enterprise email gateways, discuss compute-security trade-offs, and share open-source tooling for continuous hardening.
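
For intuition, a heavily simplified sketch of the attack-retrain loop described above (ours, not the authors' code): a toy "Attacker" mutates phishing text and the "Defender" retrains on whatever evades it. The seed messages and mutation rule are hypothetical, and the real framework's RL agent is far more capable.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical seed data; a real deployment trains on labeled email corpora.
ham = ["quarterly maintenance window scheduled", "turbine inspection report attached"]
phish = ["urgent: verify your SCADA password now", "invoice overdue, click to pay"]

def mutate(msg: str) -> str:
    """Stand-in 'Attacker': a trivial evasion that rewrites trigger words.
    The talk's spectrum runs from static templates to an RL agent."""
    return msg.replace("urgent", "time-sensitive").replace("click", "follow link")

X, y = ham + phish, [0] * len(ham) + [1] * len(phish)
defender = make_pipeline(TfidfVectorizer(), LogisticRegression())

for round_ in range(3):                     # the Planner's iterative rounds
    defender.fit(X, y)                      # retrain the Defender
    phish = [mutate(p) for p in phish]      # Attacker generates evasions
    X, y = X + phish, y + [1] * len(phish)  # fold evasions into training
```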

Time: Saturday @ 1430 (Forsythe Room)

Speaker: Pouria Rad

Artificial intelligence (AI) and machine learning (ML) have great potential to enhance digital forensic investigation, but progress is impeded by the challenge of building datasets that meet both technical-accuracy and legal requirements. We compile findings from recent scholarly literature to identify the key aspects required for building forensic datasets that can effectively support AI-based investigative tools. We examine current practices in dataset building (data representativeness, annotation quality, chain-of-custody documentation, and metadata standardization) and carefully consider their effects on training robust AI models. Results point to key shortcomings that impede advanced AI implementations in digital forensics and form a strong baseline for developing a standard workflow for building forensic datasets. This work therefore forms a stepping stone for future projects to enhance investigative capabilities through a better-structured and legally sound process of dataset building.
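
As one illustration of the provenance fields discussed (our sketch, not the authors' schema): a dataset record that couples a content hash with acquisition details and an append-only custody trail. All field names and the artifact are hypothetical.

```python
import hashlib, json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
from pathlib import Path

@dataclass
class EvidenceRecord:
    """One dataset item with provenance fields: content hash, acquisition
    details, and an append-only custody trail."""
    path: str
    sha256: str
    acquired_by: str
    acquired_at: str
    custody_events: list = field(default_factory=list)

def register(path: str, examiner: str) -> EvidenceRecord:
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return EvidenceRecord(path, digest, examiner,
                          datetime.now(timezone.utc).isoformat())

Path("disk_image_001.dd").write_bytes(b"\x00" * 512)   # stand-in artifact
rec = register("disk_image_001.dd", "examiner_a")
rec.custody_events.append({"event": "labeled", "by": "annotator_b"})
print(json.dumps(asdict(rec), indent=2))
```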


Securing The Supply Chain With GitHub: Inside Kong’s Public Shared Actions Strategy

Time: Saturday @ 1430 (Ogelthorpe Room)

Speaker: Pankaj Mouriya

This talk shares how Kong’s Security Engineering team designed and scaled Public Shared Actions (PSA, https://github.com/Kong/public-shared-actions), an open-source GitHub Actions repository used to secure CI/CD pipelines across our engineering organization.

We’ll walk through how PSA helps us automate static code analysis, SBOM generation, CVE detection, provenance with Cosign, and reproducible builds. You’ll learn how we made PSA resilient to upstream outages, supported independent versioning from a monorepo, and enabled secure consumption of public workflows across private repositories.

Expect practical insights on release governance, dependency management, alerting via Slack, and GitHub anti-pattern detection. Whether you want to build something similar or adopt PSA directly, this talk will provide actionable takeaways, implementation guidance, and lessons learned from real-world use.
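
As an example of the kind of anti-pattern detection mentioned (a sketch of ours; PSA's actual checks may differ): a scanner that flags workflow `uses:` references not pinned to a full commit SHA.

```python
import re
from pathlib import Path

USES = re.compile(r"uses:\s*([\w./-]+)@([\w.-]+)")
PINNED = re.compile(r"^[0-9a-f]{40}$")   # full commit SHA = immutable pin

def unpinned_actions(workflow_dir: str = ".github/workflows"):
    """Yield (file, action, ref) for workflow steps not pinned to a SHA;
    mutable tags or branches (e.g. @v4, @main) are the anti-pattern."""
    for wf in Path(workflow_dir).glob("*.y*ml"):
        for action, ref in USES.findall(wf.read_text()):
            if not PINNED.match(ref):
                yield wf.name, action, ref

for wf, action, ref in unpinned_actions():
    print(f"{wf}: {action}@{ref} is not pinned to a commit SHA")
```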


Safeguarding Industrial Control In The Era of GenAI and Agentic Intelligence

Time: Saturday @ 1600 (Forsythe Room)

Speaker: Liliane Scarpari

Industrial Control Systems (ICS) are increasingly augmented by advanced artificial intelligence, from Generative AI (GenAI), which synthesizes data, to Agentic AI, which acts autonomously. This integration offers transformative defensive capabilities but also exposes new attack surfaces. This presentation explores how GenAI and agentic intelligence can both strengthen and threaten ICS cybersecurity. I begin by examining the dual-use nature of GenAI in security: attackers leverage it to automate phishing, malware creation, and reconnaissance, while defenders can use it to bolster threat detection, policy generation, and incident response. Next, I define agentic AI (AI agents capable of independent decision-making) and illustrate their role in autonomous ICS defense, such as continuous network monitoring and adaptive anomaly response. I then discuss the unique threat landscape of ICS/SCADA (legacy systems, insecure protocols, safety-critical processes) and how AI-driven solutions can mitigate these risks. A focus is given to Microsoft Defender for Cloud as a case study in protecting AI-enabled ICS: its features (Cloud Security Explorer, AI Security Posture Management) identify misconfigurations and AI data exposures, helping secure critical infrastructure. Finally, I address the ethical and safety considerations for ensuring AI remains “trustworthy by default” in industrial environments. Attendees will learn actionable strategies to harness GenAI and agentic AI for ICS security while maintaining robust safeguards against AI-powered threats.


Beyond The Exploit: Breach From The Attacker’s Point of View

Time: Sunday @ 0930 (Forsythe Room)

Speaker: Evan Anderson

This talk offers a unique perspective by stepping into the shoes of a motivated attacker during a successful offensive campaign against a manufacturing facility. Designed for defenders and security professionals who don’t spend their days focused on breaking in, it dissects the entire cyber kill chain. Attendees will gain critical insights into the strategic and tactical decisions made by adversaries, enabling organizations to build more resilient defenses.

We begin with the initial reconnaissance phases, exploring how attackers identify, fingerprint, and select their targets. We then delve into the art of initial compromise, examining how the adversary builds the exploits employed to gain an initial foothold within an organization’s perimeter.

Once inside, the focus shifts to internal network navigation, situational awareness and privilege escalation. This segment will illustrate how attackers pivot between systems, escalate their privileges, and evade detection, systematically mapping out the network and identifying valuable assets.

Finally, we will cover the various ways attackers achieve their ultimate objectives, whether it’s data exfiltration, system disruption, or financial gain. This comprehensive walkthrough will equip security professionals with a deeper understanding of adversarial methodologies, fostering a proactive approach to cybersecurity.


Empowering AI-Driven Healthcare With Secure, Decentralized, and Privacy-Enhancing Adaptive Intelligence

Time: Sunday @ 0930 (Ogelthorpe Room)

Speaker: Hussien AbdelRaouf Khaled

Integrating the Internet of Medical Things (IoMT) and artificial intelligence (AI) is revolutionizing healthcare by enabling real-time health monitoring, predictive analytics, and personalized treatment. However, existing AI healthcare models are trained offline on static datasets, making them less adaptable to evolving health data and potentially degrading their accuracy and decision-making. Furthermore, adversaries may exploit this by injecting frequent data shifts, straining healthcare resources. Privacy concerns also arise from the exposure of sensitive patient data. We therefore propose a novel AI-driven healthcare methodology with secure, decentralized, and privacy-enhancing adaptive intelligence. First, a deep learning (DL) model is devised that leverages its prediction confidence to detect data drift efficiently. Next, we propose a privacy-preserving approach leveraging functional encryption to ensure patient data confidentiality during drift detection and model retraining while eliminating reliance on a trusted entity. Lastly, we propose a customized consortium blockchain with group signatures that protects patient anonymity and unlinkability, prevents data tampering, and prevents falsely claimed drift incidents. Moreover, to ensure decentralization, it removes the need for a trusted authority in cryptographic key generation. Our experiments, on a real testbed and healthcare datasets, show that the proposed methodology achieves real-time drift detection with performance comparable to existing methods while reducing computational time by 52.35%. It also maintains high accuracy, achieving up to 98.43% with the offline health monitoring model and up to 96% with the online adaptive model. Additionally, it preserves patient privacy while reducing computational and communication overhead by 94.26% and 89%, respectively, compared to the state-of-the-art.
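
The paper's detector is more sophisticated, but as a generic sketch of confidence-based drift detection (ours, with made-up numbers): flag drift when the model's recent top-class confidence falls well below its in-distribution baseline.

```python
import numpy as np

def drift_detected(max_softmax: np.ndarray, baseline: float,
                   window: int = 100, drop: float = 0.15) -> bool:
    """Flag drift when mean top-class confidence over the latest window
    falls well below the baseline measured on in-distribution data."""
    return max_softmax[-window:].mean() < baseline - drop

rng = np.random.default_rng(0)
healthy = rng.uniform(0.85, 1.0, 500)   # confident, in-distribution stream
drifted = rng.uniform(0.40, 0.70, 100)  # confidence collapses after drift
stream = np.concatenate([healthy, drifted])
print(drift_detected(stream, baseline=healthy.mean()))  # True
```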


Red Team Vs. Sidecar: Threat Modeling The Credential Injection Pipeline

Time: Sunday @ 1030 (Forsythe Room)

Speaker: Rhys Evans

Red Team vs. Sidecar! We threat model non-human credential injection pipelines through live attack demos featuring AI agent MCP exploits. Learn offensive techniques targeting Kubernetes, Lambda & Fargate sidecars, plus defensive patterns: least-privilege architectures, behavioral monitoring & hardened deployments.


From Seaweed To Security: Harnessing Alginate To Challenge IoT Fingerprint Authentication

Time: Friday @ 1030 (Ogelthorpe Room)

Speaker: Pouria Rad

The increasing integration of capacitive fingerprint recognition sensors in IoT devices presents new challenges in digital forensics, particularly in the context of advanced fingerprint spoofing. Previous research has highlighted the effectiveness of materials such as latex and silicone in deceiving biometric systems. In this study, we introduce Alginate, a biopolymer derived from brown seaweed, as a novel material with the potential for spoofing IoT-specific capacitive fingerprint sensors. Our research uses Alginate and cutting-edge image recognition techniques to unveil a nuanced IoT vulnerability that raises significant security and privacy concerns. Our proof-of-concept experiments employed authentic fingerprint molds to create Alginate replicas, which exhibited remarkable visual and tactile similarities to real fingerprints. The conductivity and resistivity properties of Alginate, closely resembling human skin, make it a subject of interest in the digital forensics field, especially regarding its ability to spoof IoT device sensors. This study calls upon the digital forensics community to develop advanced anti-spoofing strategies to protect the evolving IoT infrastructure against such sophisticated threats.


What Data Tells Us About How APTs Really Attack Critical Infrastructure

Time: Sunday @ 1200 (Forsythe Room)

Speaker: Ymir Vigfusson

In the last 18 months, there have been two significant changes in how state-sponsored attackers target American critical infrastructure: not only has the volume of breaches in the headlines increased significantly, but the attackers’ goals have also shifted. Join us for a data-driven deep dive into the most common tactics, techniques, and procedures (TTPs) of APT groups, the gaps they exploit, and actionable strategies to defend against these adversaries.

What you’ll learn:

  • The most common methods exploited by APT groups
  • Where traditional best practices fall short
  • The most effective detection points and countermeasures to implement now


Cyber First Aid and Self Healing Systems

Time: Sunday @ 1200 (Ogelthorpe Room)

Speaker: David Kovar

What did we do?

We developed a TRL 3 proof-of-concept self-healing simulated pacemaker: conduct a certain type of cyber attack and the system will detect, analyze, and repair itself. The effort was very carefully scoped in all dimensions; in particular, the attack modified running code in a way that was designed to be easy to analyze and patch. But we demonstrated the art of the possible for LLM-enabled self-healing, and really impressed DARPA by using formal methods to confirm that the patch generated by the LLM was functionally equivalent to the original code.
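
At toy scale, the formal-methods step looks like this (our illustration, not the project's toolchain), assuming the z3-solver Python bindings: prove that an "original" and a "patched" computation agree on all 32-bit inputs.

```python
from z3 import BitVec, prove  # pip install z3-solver

x = BitVec("x", 32)

# Hypothetical patch scenario: the LLM rewrote a doubling as a left shift.
original = x * 2
patched = x << 1

prove(original == patched)  # prints "proved": equivalent on all 32-bit inputs
```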

Where do we go?

How do we move up the TRL ladder? We need to: deploy (and possibly develop) local LLMs, support more types of attack, operate on a relevant real-world system, instrument “all the things”, and expand our formal-methods coverage. Here is a video produced by DARPA showing our work.