2 Summary of Workshop Presentations
Pages 9-27



From page 9...
... At Google, Adkins's realm is primarily in remediation. The chief information officer, information technology administrators, and site reliability engineers (SREs)
From page 10...
... 3 Even such painstaking verification is likely to be ineffective against sophisticated adversaries. 10 Forum on Cyber Resilience
From page 11...
... In practice, that means quickly building systems, recovering them, and migrating them when necessary, while also conducting regular, automated tests, including for catastrophic events. These practices are detailed in Google's book, Site Reliability Engineering.4 It is also helpful to carefully consider metrics, to determine what exactly is useful to measure, Adkins said.
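Automated drills of this kind are straightforward to sketch. The following is a minimal illustration (the service names and the `recovery_drill` helper are hypothetical, not Google's actual tooling) of timing each restore against a recovery-time objective, one example of the kind of metric Adkins suggests measuring:

```python
import time

def restore_service(name):
    """Stand-in for an actual recovery step (rebuild, restore, migrate)."""
    time.sleep(0.01)
    return True

def recovery_drill(services, rto_seconds):
    """Restore each service and flag any that miss the
    recovery-time objective (RTO)."""
    results = {}
    for name in services:
        start = time.monotonic()
        ok = restore_service(name)
        elapsed = time.monotonic() - start
        results[name] = {"restored": ok,
                         "seconds": elapsed,
                         "within_rto": ok and elapsed <= rto_seconds}
    return results

report = recovery_drill(["auth", "billing"], rto_seconds=1.0)
```

Running such a drill regularly, including against simulated catastrophic events, turns recovery time from a guess into a measured quantity.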
From page 12...
... , in which Anderson asserts that computer security is not strong enough to prevent malicious events.5 Adkins said that in a follow-up piece written a few years later, Anderson posited that the best way to improve security is to pair computer systems with humans via auditing and logging, practices still in use today.6 Looking at today's capabilities and the shifts that have occurred since the 1980s, when the paradigm emphasizing reference monitors and a trusted computing base was established, Adkins proposed a new solution: teaching machines not only to read logs and discover breaches but to learn to defend themselves. While this solution may still be a long way off, Adkins pointed to recent encouraging examples, such as the 2016 Defense Advanced Research Projects Agency (DARPA)
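The auditing-and-logging pairing can be sketched minimally. The example below (event names and baseline figures are invented for illustration) flags event types whose observed frequency far exceeds an audited baseline, a toy stand-in for the kind of log analysis a self-defending system would automate:

```python
from collections import Counter

def flag_anomalies(log_lines, baseline_counts, threshold=3.0):
    """Flag event types occurring more than `threshold` times as often
    as the audited baseline suggests they should."""
    observed = Counter(line.split()[0] for line in log_lines if line)
    flagged = []
    for event, count in observed.items():
        baseline = baseline_counts.get(event, 1)
        if count / baseline > threshold:
            flagged.append(event)
    return flagged

# 40 failed logins against a baseline of 5 is suspicious; 10 successes
# against a baseline of 10 is not.
logs = ["LOGIN_FAIL user=a"] * 40 + ["LOGIN_OK user=b"] * 10
alerts = flag_anomalies(logs, {"LOGIN_FAIL": 5, "LOGIN_OK": 10})
```

In practice a human reviews such alerts; Adkins's proposal is to push more of the review and response onto the machine itself.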
From page 13...
... Building on this point, Peter Swire, Georgia Institute of Technology, asked if attacks perpetrated by nation-states fundamentally would require different recovery strategies than other attacks or accidental failures. Adkins replied that the recovery process is largely the same in most cases regardless of the root cause of the problem.
From page 14...
... Rather than a static target or a far-off goal, he said resilience in the financial sector is a real, everyday requirement best thought of as a constant process with ever-changing adversaries.

A Shared System for Recoverability

Edelman noted that to give customers an additional level of confidence in their banks' ability to provide services even in the face of very sophisticated malicious activity, the financial industry created Sheltered Harbor.8 While banks already capture every transaction every day within their own systems, Sheltered Harbor is an initiative undertaken by the sector as a whole that provides an additional layer of protection and enables rapid recovery and reconstitution of customer account status if needed.
From page 15...
... Sheltered Harbor employs sophisticated encryption and routine testing and verification to help ensure security. The system, he said, is built to be agile enough to use whatever is the most secure storage process available, whether that means publishing data onto tapes and storing it in a vault or -- in the future -- using a secure cloud-based system.
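The verification half of such a scheme can be sketched roughly. This is not Sheltered Harbor's actual specification (the record fields and helper names are invented, and a real vault would also encrypt the payload); it shows only the idea of archiving a daily snapshot with a digest that routine testing recomputes:

```python
import hashlib
import json

def vault_snapshot(records):
    """Serialize a day's account records and compute a digest to store
    alongside the archive (simplified; a real vault would also encrypt)."""
    payload = json.dumps(records, sort_keys=True).encode()
    return payload, hashlib.sha256(payload).hexdigest()

def verify_snapshot(payload, digest):
    """Routine verification: recompute the digest and compare."""
    return hashlib.sha256(payload).hexdigest() == digest

payload, digest = vault_snapshot([{"acct": "123", "balance": 10000}])
intact = verify_snapshot(payload, digest)          # untouched archive
tampered = verify_snapshot(payload + b"x", digest)  # any change fails
```

The storage medium underneath, tape in a vault or a secure cloud store, is deliberately interchangeable, which matches the agility Edelman describes.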
From page 16...
...

Response to the 2017 S3 Outage

Bob Blakley, Citigroup, asked Schmidt to describe the steps taken after the Amazon S3 storage service outage, which lasted a few hours in February 2017. Schmidt first noted that one of his 2017 security goals, set before the outage, was to drastically reduce the number of humans with access to certain data, a deliberately difficult goal that would force an increased reliance on automation.
From page 17...
... Schmidt replied that monocultures in APIs should be considered differently from monocultures in the implementations underneath the APIs. Shared APIs provide a common interface format that allows for readily switching between services and moving workloads if needed.
From page 18...
... Schmidt responded that rather than saying "no," he prefers for engineers to search for a clearer understanding of customer needs and then determine the best way to meet them. Eric Grosse, an independent consultant, pointed to key lessons, such as the value of conducting forensics and the importance of keeping comprehensive logs, and asked Schmidt what additional practices he would consider useful industry-wide.
From page 19...
... He also emphasized that the company takes regular, rigorous testing very seriously and said its operational teams are trained with frequent recovery drills. Danzig asked how Amazon handles the unique security needs of one particular customer -- the U.S.
From page 20...
... He said that this monitoring is done within a multi-layered framework for power grid cybersecurity and recovery whose key components include standards, maintenance, and information sharing.
From page 21...
... Standards are developed by using an open standards-setting process and many power companies are required to implement these standards. Roxey asserted that the power grid is the single most regulated sector, with regulations in this space ranging from personnel and training requirements to physical security perimeter requirements to recovery plans to vulnerability assessments, among many others.
From page 22...
... Good regulations to support testing, detection capabilities, and incident response plans can help to minimize the fallout. Peter Swire, Georgia Institute of Technology, asked if Roxey could explain why it has taken so long to restore power in Puerto Rico after Hurricane Maria.
From page 23...
... , offered a perspective on measuring community resilience and shared NIST's framework for improving critical infrastructure for cybersecurity.

Community Resilience

Cauffman, a research engineer in the Community Resilience Group (CRG)
From page 24...
... By breaking down such a complex problem piece by piece, NIST hopes to create a framework for communities to be resilient and able to recover from disasters.

NIST Framework for Improving Critical Infrastructure for Cybersecurity

Barrett leads NIST's Framework for Improving Critical Infrastructure for Cybersecurity (the "Framework")
From page 25...
... In fact, he said, a recent report by the National Institute of Building Sciences showed that $1 in mitigation spending saves roughly $6 in future recovery expenses.10 For events that are perhaps more rare or for events where recovery costs are lower, other strategies could include mutual aid partnerships, where organizations or regions pledge to help each other recover, such as by sending in crews to restring downed power lines. There are trade-offs to weigh in any case, and the community tools NIST is developing include an economic component to help stakeholders assign an actual dollar value to their options.
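The arithmetic behind that comparison is simple to make concrete. A minimal sketch, where the 6:1 ratio comes from the NIBS figure cited above and the dollar amount is hypothetical:

```python
def expected_net_benefit(mitigation_cost, savings_ratio=6.0):
    """Avoided recovery cost minus the mitigation spend, using the
    rule-of-thumb ratio ($1 of mitigation avoids roughly $6 of recovery)."""
    return mitigation_cost * savings_ratio - mitigation_cost

# e.g., $250,000 of mitigation spending
net = expected_net_benefit(250000)
```

A real economic tool would weigh event probability and strategy alternatives (such as mutual aid) as well, which is what NIST's community tools aim to support.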
From page 26...
... One way of approaching dependencies in the physical world, he said, is by defining essential services and using secure engineering to build them with resilience factors in place. But it can be tricky to strike the right balance in identifying essential services without creating ever more dependencies.
From page 27...
... Cauffman replied that there is potential for IoT devices to help, especially as technological capabilities grow and costs decline. For example, he pointed to the availability of low-cost smart flood gauges that provide real-time water level data to communities to inform decision making and help move people out of harm's way.
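How such real-time gauge data could drive decisions can be sketched in a few lines; the thresholds and function name here are invented for illustration:

```python
def flood_alert(readings_cm, warn_cm=200, evacuate_cm=300):
    """Map the latest water-level reading to an action level."""
    level = readings_cm[-1]
    if level >= evacuate_cm:
        return "evacuate"
    if level >= warn_cm:
        return "warn"
    return "normal"

status = flood_alert([120, 180, 240])  # rising river, latest at 240 cm
```

Real deployments would feed a stream of such readings into community alerting systems, which is the decision-making role Cauffman describes.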

