
2 Artificial Intelligence and the Landscape of Cyber Engagements
Pages 9-21

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.

From page 9...
... Fisher provided opening remarks to introduce and frame the panel's discussion, identifying four key topics for the panelists to consider: implications of AI across the "cyber kill chain," the importance of understanding an attacker's motivation and intent, the evolving landscape of international conflict, and the potential for AI to introduce a cyber arms race.

INTRODUCTION AND CONTEXT

Artificial Intelligence Across the Cyber Kill Chain

Fisher began by considering the Lockheed Martin cyber kill chain, a framework for understanding the structure of a cyberattack in terms of seven stages: reconnaissance, weaponization, delivery, exploitation, installation, command and control, and actions on objective.3 She noted that AI and ML could have implications for all stages of the cyber kill chain, with the potential role and utility of any particular AI technology likely varying across the different stages.4

1  Defense Advanced Research Projects Agency, "High-Assurance Cyber Military Systems Program," https://www.darpa.mil/program/high-assurance-cyber-military-systems, last accessed March 11, 2019.
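For reference, the seven stages can be written down as a simple data structure; the sketch below is not from the workshop, and uses a Python enum with invented identifier names.

```python
from enum import Enum

class KillChainStage(Enum):
    """The seven stages of the Lockheed Martin cyber kill chain, in attack order."""
    RECONNAISSANCE = 1
    WEAPONIZATION = 2
    DELIVERY = 3
    EXPLOITATION = 4
    INSTALLATION = 5
    COMMAND_AND_CONTROL = 6
    ACTIONS_ON_OBJECTIVE = 7

# Example: walk the stages in order.
for stage in KillChainStage:
    print(stage.value, stage.name.replace("_", " ").title())
```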
From page 10...
... Some plausibility has been demonstrated, primarily based upon "first-wave" AI tools, such as expert systems and rule-based systems, as opposed to statistical ML. Fisher noted that Mike Walker, the DARPA program manager for the CGC, has hypothesized that AI/ML will be least effective at the core of program analysis, the "hard" part of the cyber kill chain, including things like debugging, decompilation, reachability analysis, and finding vulnerabilities and patches.
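To make "reachability analysis" concrete, the following toy sketch (the graph, node names, and function are all invented for illustration) asks whether any execution path leads from a program's entry point to a potentially vulnerable node in a simplified control-flow graph.

```python
from collections import deque

# A toy control-flow graph: each node maps to the nodes it can branch to.
cfg = {
    "entry": ["check_input", "error"],
    "check_input": ["parse", "error"],
    "parse": ["vulnerable_memcpy", "exit"],
    "vulnerable_memcpy": ["exit"],
    "error": ["exit"],
    "exit": [],
}

def reachable(graph, start, target):
    """Breadth-first search: can execution starting at `start` reach `target`?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for successor in graph.get(node, []):
            if successor not in seen:
                seen.add(successor)
                queue.append(successor)
    return False

print(reachable(cfg, "entry", "vulnerable_memcpy"))  # True in this toy graph
```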
From page 11...
... Fisher challenged panelists and attendees to consider these points in their discussions of the implications of AI across the broader cybersecurity landscape.

ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING IN CYBERATTACKS: INSIGHTS FROM HACKING COMPETITIONS

David Brumley, Carnegie Mellon University and ForAllSecure

Brumley discussed his experiences and lessons learned from hacking competitions and challenges at the annual DEF CON conference and the DARPA CGC, and provided insights on the use of AI and ML in launching -- and thwarting -- cyberattacks.
From page 12...
... He suggested that such economics might translate usefully into real-world cybersecurity contexts as well.

Lessons from DARPA's Cyber Grand Challenge

Brumley recalled that Mike Walker, a past participant in DEF CON's CTF, went to DARPA in 2014 as a program manager and asked a new question: Can we teach computers to hack?
From page 13...
... Upon realizing that this was against the rules, Brumley's team removed the code in question. More broadly, the outcomes of DEF CON CTFs and the CGC point to a need for a greater emphasis on autonomy in cyber operations.
From page 14...
... A second question came from Yevgeniy Vorobeychik, who pointed out that contests have a single winner, a key difference from the real world, where there is not necessarily one. Despite this limitation, Brumley noted, zero-sum activities can nonetheless be a useful way to shed light on aspects of engagement that may otherwise be overlooked; for example, military leaders might not otherwise appreciate the value of exploit re-use.
From page 15...
... These potential costs are a strong incentive to avoid false positives. Moore noted the significant challenge of keeping false positives low without creating openings for attackers.
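One way to see why keeping false positives low matters is a base-rate calculation; the detection rate, false-positive rate, and attack prevalence below are illustrative assumptions, not figures from the workshop.

```python
# Illustrative base-rate calculation; all rates are assumed, not from the workshop.
true_positive_rate = 0.99    # probability a real attack triggers an alert
false_positive_rate = 0.001  # probability benign activity triggers an alert
attack_prevalence = 0.0001   # fraction of events that are actually attacks

p_alert = (true_positive_rate * attack_prevalence
           + false_positive_rate * (1 - attack_prevalence))
p_attack_given_alert = true_positive_rate * attack_prevalence / p_alert

print(f"Probability an alert is a real attack: {p_attack_given_alert:.1%}")
# Roughly 9% under these assumptions: most alerts would be false alarms,
# even with a seemingly tiny false-positive rate.
```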
From page 16...
...

Potential Uses of Artificial Intelligence and Machine Learning in Cyber Operations

Hoffman reiterated that AI or ML methods could be used to automate actions at different stages of the cyber kill chain, including hunting for vulnerabilities or introducing new attack vectors that target AI/ML algorithms or the associated training data. Adversaries' use of AI-based methods will vary with their objectives.
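As a toy illustration of an attack vector aimed at training data, the sketch below flips a fraction of training labels and measures how a simple classifier's test accuracy degrades. The dataset, model, and poisoning fractions are arbitrary assumptions, and the sketch assumes scikit-learn and NumPy are available.

```python
# Toy label-flipping poisoning sketch; requires numpy and scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(flip_fraction):
    """Flip the labels of a random fraction of training examples, then train."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    idx = rng.choice(len(y_poisoned), int(flip_fraction * len(y_poisoned)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for fraction in (0.0, 0.1, 0.3):
    print(f"{fraction:.0%} of training labels flipped -> "
          f"test accuracy {accuracy_with_poisoning(fraction):.2f}")
```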
From page 17...
... Defenders do have some inherent advantages because they control the terrain, and application of techniques such as deception and honeypots at specific points along the cyber kill chain can help to overcome an adversary's advantages. Hoffman suggested that, to understand how AI or ML will affect the dynamics of cyber conflicts, we must consider how they might impact these asymmetries between cyber defense and cyber offense.
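A minimal sketch of the honeypot idea mentioned above: a listener on an otherwise unused port records every connection attempt, so any touch of the decoy becomes a signal for defenders. The port number and logging behavior are arbitrary choices; this is an illustration, not a production tool.

```python
# Minimal honeypot sketch: listen on an unused port and log connection attempts.
import datetime
import socket

HONEYPOT_PORT = 2222  # arbitrary; no legitimate service should live here

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", HONEYPOT_PORT))
    server.listen()
    print(f"Honeypot listening on port {HONEYPOT_PORT}")
    while True:
        conn, (addr, port) = server.accept()
        with conn:
            # Record who touched the decoy; a real deployment would alert defenders.
            print(f"{datetime.datetime.now().isoformat()} connection from {addr}:{port}")
```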
From page 18...
...

PANEL DISCUSSION

Fisher moderated an open discussion between panelists and the workshop audience. Participants tackled considerations around collateral damage from cyber engagements, different ways of viewing how AI and ML may be ...
From page 19...
... While competitions such as DEF CON CTF do not consider whether third parties are damaged by an action, these considerations are important in the dynamics of real-world engagements. He wondered how redesigning a competition to incorporate such dynamics would affect the way the game is played, and whether this might provide a venue for better understanding the dynamics around collateral damage.
From page 20...
... simulations to be realistic, especially those that are AI-generated -- and acknowledged that he might be unique in this perspective. He noted that simulations can fall short because they don't account for the adaptive nature of competitions or the real world and rely upon known grammar.
From page 21...
... O'Reilly added that data-driven AI and ML methods could potentially help accomplish these measurement improvements. For example, natural language processing could be useful for analyzing trends in the cybersecurity research literature.
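A minimal sketch of what such a literature-trend analysis could look like: count how often selected terms appear in paper abstracts by year. The corpus, terms, and counting approach are invented placeholders; a real analysis would use richer NLP methods such as topic modeling.

```python
# Toy trend count over a hypothetical corpus of paper abstracts grouped by year.
import re
from collections import Counter, defaultdict

abstracts_by_year = {  # invented placeholder data
    2017: ["We apply deep learning to malware detection ...",
           "A survey of intrusion detection systems ..."],
    2018: ["Adversarial examples against malware classifiers ...",
           "Reinforcement learning for automated penetration testing ..."],
}

terms = ["malware", "adversarial", "reinforcement learning"]
trend = defaultdict(Counter)
for year, abstracts in abstracts_by_year.items():
    for text in abstracts:
        for term in terms:
            trend[year][term] += len(re.findall(re.escape(term), text.lower()))

for year in sorted(trend):
    print(year, dict(trend[year]))
```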

