Large Language Models and Cybersecurity: Proceedings of a Workshop - in Brief
Pages 1-9

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.


From page 1...
HOW LARGE LANGUAGE MODELS WORK AND WHY THEY ARE INTERESTING

Several speakers provided background on how LLMs work and some promising applications. Caiming Xiong, Salesforce, introduced language models' approach of taking in a list of words, given as a prompt, and attempting to predict the word that follows. As such, the context in which an input is placed heavily influences the probability weight of the following token.2 Xiong listed grammar, lexical semantics, world knowledge, sentiment analysis, translation, spatial reasoning, and mathematics as just a few of the skills LLMs can accomplish through sophisticated word prediction.
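The word-prediction mechanism Xiong described can be sketched with a toy bigram model. Everything below (the corpus, the function name, the resulting probabilities) is an illustrative assumption for exposition, not anything presented at the workshop:

```python
from collections import Counter

# Toy corpus -- an illustrative stand-in for the massive text datasets
# real LLMs are trained on.
corpus = [
    "the cat sat on the mat",
    "the cat sat on the sofa",
    "the dog lay on the mat",
]

def next_token_distribution(context):
    """Estimate P(next token | previous token) from bigram counts."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            if prev == context:
                counts[nxt] += 1
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

# Same mechanism, different context -> different probability weights.
print(next_token_distribution("cat"))  # after "cat", "sat" is certain here
print(next_token_distribution("the"))  # after "the", weight is spread out
```

A real LLM conditions on the entire prompt rather than a single preceding word, which is why the surrounding context shifts the probability weight of the following token so strongly.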
From page 2...
... Guido noted that his company, Trail of Bits, has successfully used LLMs to "decompile" machine-level code back into high-level programming languages, identify and trigger bugs, reason about memory layouts, write scripts to launch exploits, identify weak encryption, and find cases where cryptographic application programming interfaces ...

Grabowski and Pearce emphasized that the output of LLMs can be influenced by how the model is designed and the data that an LLM is trained on. Even though many have a similar chatbot interface, models can vary significantly in terms of their detailed structure and the data used to train them.
From page 3...
... Other possible approaches include developing affordances to provide models with feedback on their mistakes and adjusting the amount of randomness (sometimes called the "temperature" of the model) ...

... closer to using LLMs without direct human oversight. Parisa Tabriz, Google, noted that one of the challenges in considering user expectations is that they change according to context and over time.
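The "temperature" adjustment mentioned above can be illustrated with a standard temperature-scaled softmax. The logit values here are invented for illustration; real systems apply the same rescaling over vocabulary-sized score vectors before sampling:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores into probabilities, rescaled by temperature."""
    scaled = [score / temperature for score in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]   # hypothetical raw scores for three candidate tokens

# Low temperature: the top token dominates (nearly deterministic output).
cold = softmax_with_temperature(logits, 0.2)
# High temperature: the distribution flattens (more random output).
hot = softmax_with_temperature(logits, 5.0)

print([round(p, 3) for p in cold])
print([round(p, 3) for p in hot])
```

Lowering the temperature is one way to trade randomness for predictability when LLM outputs are used with less direct human oversight.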
From page 4...
... Recent research results show that even individuals without significant resources can poison datasets used by open source and commercial LLMs.5 Guido provided some examples.

... generate interview questions using the fake logs, the user was able to nudge the chatbot into fulfilling the request. Grabowski noted a couple of methods currently used to ...
From page 5...
... to the evaluation of security threats. Grabowski noted that this work could help network security engineers recall information about known vulnerabilities but, at least at present, cannot automate security engineering work.

... and adversarial prompt examples to help raise awareness of these issues.11 Another possibility for mitigating this vulnerability would ...
From page 6...
... and obtain a backtrace, input the backtrace to identify problematic code, decompile that code, and identify and fix the problem. However, a more systematic evaluation of reverse engineering using LLMs found only a 53 percent accuracy rate in answering true/false questions about programs containing flaws.

Dolan-Gavitt gave an example of a compelling demonstration: using ChatGPT to repair source code containing vulnerability CVE-2023-40296 from the National Institute of Standards and Technology's National Vulnerability Database ...
From page 7...
... existing tools, determining the acceptability and visibility of mistakes, and how rapidly LLM-facilitated actions can improve.

Finally, Shoshitaishvili has explored using LLMs as a cybersecurity tutor to help close the current security skills gap.
From page 8...
... telemarketers, which suggests that the models could also be used to combat social engineering actors.16

In the immediate term, Guido predicted that LLMs will disrupt the cybersecurity technology landscape in such areas as bug bounties, phishing training, signature-based defenses, disinformation detection, and attacker attribution.

Participants expressed excitement regarding the future use of LLMs for tasks humans currently do. Pearce hoped to see LLMs performing better than humans in specific ...
From page 9...
... , VMWare; Yair Amir, Johns Hopkins University; Steven Bellovin, Columbia University; Thomas Berson, Salesforce; Nadya Bliss, Arizona State University; Timothy Booher, Boeing; Srini Devadas, Massachusetts Institute of Technology; Curtis Dukes, Center for Internet Security; Kristen Eichensehr, University of Virginia; Paul England (NAE) , Microsoft; Alexander Gantman, Qualcomm Technologies; Melissa Hathaway, Hathaway Global Strategies; Galen Hunt, Microsoft; Maritza Johnson, University of San Diego; Brian LaMacchia, Farcaster Consulting Group; John Launchbury, Galois; Dave Levin, University of Maryland; Damon McCoy, New York University; James Miller, Adaptive Strategies; Andy Ozment, Capital One; Ari Schwartz, Venable; Parisa Tabriz, Google.


This material may be derived from roughly machine-read images, and so is provided only to facilitate research.