Appendix C: Risk Assessment in the Testing, Evaluation, and Use of Standoff Detectors
Pages 40-51

From page 40...
... Of course, only one set of events will actually happen, so the actual realized loss (or gain) after the fact will usually be less or more (perhaps enormously less or more)
From page 41...
... This "decision analysis" formalizes the intuitive conclusion that, owing to the asymmetry of the consequences of guessing wrong about whether a CWA attack is coming, donning protective gear makes sense even when an attack is fairly unlikely, as long as it is credible, but that it does not make sense when the likelihood of an attack is remote. The question is one of how likely need the attack be and how different must the losses be under different scenarios.
From page 42...
... Since it provides perfect information, an alarm means that exposure to CWA is imminent, and so the decision to don protective gear is clearly favored. If there is no alarm, no exposure is imminent, and the decision to go without protective gear is best.
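
Reusing the hypothetical loss values from the sketch above, a short calculation of the expected value of perfect information shows how much expected loss such an ideal detector would avoid relative to the best decision made without one; the numbers remain illustrative only.

```python
# Expected value of perfect information (EVPI) for the don/no-don decision,
# reusing the hypothetical loss values from the sketch above.
L_ATTACK_NO_GEAR, L_ATTACK_GEAR, L_NO_ATTACK_GEAR = 100.0, 5.0, 2.0

def evpi(p: float) -> float:
    """Loss of the best fixed decision minus the loss achievable when a
    perfect detector lets the decision always match the true state."""
    loss_don = p * L_ATTACK_GEAR + (1 - p) * L_NO_ATTACK_GEAR
    loss_skip = p * L_ATTACK_NO_GEAR       # unprotected; no loss if no attack comes
    loss_perfect = p * L_ATTACK_GEAR       # gear is worn only when an attack is real
    return min(loss_don, loss_skip) - loss_perfect

for p in (0.001, 0.02, 0.2):
    print(f"p(attack)={p:5.3f}  value of a perfect detector = {evpi(p):6.3f}")
```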
From page 43...
... False Positives and False Negatives
A standoff detector in a threat environment will not always provide sufficient warning for every attack. There is some probability, for example, that a CWA shell will burst too close to the troops' position to provide time to don protective gear.
From page 44...
... Again, a decision made on the presumption that the detector is perfect will be in error, although the losses will be less than for false negatives. The losses will include the performance decrement attributable to protective gear and the gradual loss of confidence in the detector if false positives prove common (i.e., the "crying wolf" effect)
From page 45...
... Whether the detector has a large or a small value of information depends on its false positive and false negative rates as well as on the probability that it will have something to detect; that is, the likelihood of a CWA attack in the threat situation of interest.
From page 46...
... The Value of Information
It is clear from the above discussion that the key element of detector design and testing is the characterization of false positive and false negative rates. These rates are the quantitative characterization of the reliability of a detector, and alternative testing protocols will differ in the degree to which the uncertainty about a detector's field performance can be reduced.
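
As a rough illustration of how the false positive and false negative rates, together with the prior likelihood of attack, determine a detector's value of information, the sketch below (hypothetical losses and rates again) computes the expected loss of following an imperfect detector and compares it with the best decision made without one.

```python
# Value of an imperfect detector as a function of its false positive and false
# negative rates. Losses and rates are hypothetical illustrations only.
L_ATTACK_NO_GEAR, L_ATTACK_GEAR, L_NO_ATTACK_GEAR = 100.0, 5.0, 2.0

def loss_following_detector(p: float, fp_rate: float, fn_rate: float) -> float:
    """Expected loss when gear is donned exactly when the detector alarms.
    True negatives (no attack, no alarm) contribute no loss."""
    return (p * (1 - fn_rate) * L_ATTACK_GEAR        # true positive: protected
            + p * fn_rate * L_ATTACK_NO_GEAR         # false negative: exposed
            + (1 - p) * fp_rate * L_NO_ATTACK_GEAR)  # false positive: needless gear

def value_of_detector(p: float, fp_rate: float, fn_rate: float) -> float:
    """Reduction in expected loss relative to the best decision made blind."""
    loss_don = p * L_ATTACK_GEAR + (1 - p) * L_NO_ATTACK_GEAR
    loss_skip = p * L_ATTACK_NO_GEAR
    return min(loss_don, loss_skip) - loss_following_detector(p, fp_rate, fn_rate)

# Holding the attack probability at 2%, degrade each error rate in turn.
for fp, fn in [(0.01, 0.01), (0.10, 0.01), (0.01, 0.20)]:
    print(f"fp={fp:.2f}  fn={fn:.2f}  value = {value_of_detector(0.02, fp, fn):6.3f}")
```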
From page 47...
... A good testing program will try to minimize this problem by testing over a wide variety of environments. Third, the risk consequences of false negatives and false positives depend on the field commander's decisions and how they are influenced by the detector, and those decisions will be based on the perceived false positive and false negative rates, which may differ from the actual rates.
From page 48...
... Number of Alarms = number of signals given by the instrument that a CWA is present
True positive alarms = number of true incidents alarmed by the instrument
False positive alarms = number of alarms that do not coincide with incidents
False negatives = number of incidents that did not trigger an alarm
... 1,000 threat situations lead to an actual attack) and the false positive and false negative rates are both 1%, then 1.
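
Assuming the truncated example above uses a prior of 1 attack per 1,000 threat situations and false positive and false negative rates of 1% each (an assumption, since the surrounding text is cut off), a short Bayes calculation shows why the overwhelming majority of alarms would then be false ones.

```python
# Bayes calculation for the truncated example, assuming an attack in 1 of every
# 1,000 threat situations and false positive / false negative rates of 1% each.
p_attack = 1 / 1000
fp_rate = 0.01   # P(alarm | no attack)
fn_rate = 0.01   # P(no alarm | attack)

p_alarm = p_attack * (1 - fn_rate) + (1 - p_attack) * fp_rate
p_attack_given_alarm = p_attack * (1 - fn_rate) / p_alarm

print(f"P(alarm)             = {p_alarm:.4f}")               # about 1 alarm per 91 situations
print(f"P(attack | alarm)    = {p_attack_given_alarm:.3f}")  # roughly 0.09: most alarms are false
print(f"P(no attack | alarm) = {1 - p_attack_given_alarm:.3f}")
```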
From page 49...
... Information on the false positive rate in the field will accrue more quickly since most settings do not have CWAs present. Given the rarity of actual CWA attacks, the false negative rate will be difficult to determine through experience, and no additional information on this rate (beyond that provided by testing)
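
One illustrative way to see why field experience sharpens the false positive rate much faster than the false negative rate is a simple Beta-Binomial update, sketched below with purely hypothetical prior parameters and observation counts (this bookkeeping is not taken from the report): CWA-free operation supplies many opportunities to observe false alarms, while evidence about false negatives requires an actual attack.

```python
# Illustrative Beta-Binomial updating of a detector's error rates from field
# experience. Prior parameters and observation counts are hypothetical.
from dataclasses import dataclass

@dataclass
class BetaRate:
    alpha: float  # pseudo-counts of events (e.g., false alarms)
    beta: float   # pseudo-counts of non-events

    def update(self, events: int, trials: int) -> "BetaRate":
        return BetaRate(self.alpha + events, self.beta + (trials - events))

    @property
    def mean(self) -> float:
        return self.alpha / (self.alpha + self.beta)

# Start both rates at a testing-based estimate of roughly 1%.
fp = BetaRate(1, 99)
fn = BetaRate(1, 99)

# A year of CWA-free field operation offers daily chances to observe false alarms...
fp = fp.update(events=4, trials=365)
# ...but no actual attacks, so nothing new is learned about false negatives.
fn = fn.update(events=0, trials=0)

print(f"false positive rate estimate: {fp.mean:.2%} from {fp.alpha + fp.beta:.0f} pseudo-observations")
print(f"false negative rate estimate: {fn.mean:.2%} from {fn.alpha + fn.beta:.0f} pseudo-observations (unchanged)")
```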
From page 50...
... false negative rates are overly optimistic. The operational risks of the detectors (and hence their military effectiveness)
From page 51...
... The risks associated with testing per se are the possibilities of unrecognized or uncharacterized factors that could compromise the validity of the measurement of false positive and false negative rates. The test protocols, rigorously followed, are intended to provide good estimates of the false positive/false negative rates for CWA detection in the field, based on the protocol-specific signal-processing models.

