
Ballistic Imaging (2008) / Chapter Skim

9 Feasibility of a National Reference Ballistic Image Database
Pages 223-252

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.


From page 223...
... 9 Feasibility of a National Reference Ballistic Image Database In the formative era of modern firearms examination, Hatcher (1935:291–292) noted a development that he interpreted to be suggestive of the adage that "a little knowledge is a dangerous thing." "Certain very well-intentioned individuals recently came very near having a federal law enacted to require every maker of a pistol or revolver to fire and recover a bullet from each gun made, and to mark that bullet with the number of the gun, and keep it for reference by the legal authorities in case a crime should later be committed with a gun of that caliber." Hatcher argued against this forerunner of a national ballistic toolmark database (if not a national reference ballistic image database)
From page 224...
... In this chapter, we present the argument from the preceding chapters in order to answer the primary, titular question of our study: Is a national reference ballistic image database (RBID) a feasible, accurate, and technically capable proposition?
From page 225...
... The previously cited firearms manufacture statistics do not directly correspond to annual sales to individual customers; they include production for military and law enforcement purposes, and they include guns that may sit in inventory rather than be quickly sold. The ATF estimates about 4.5 million "new firearms, including approximately 2 million handguns, are sold in the United States" each year (U.S.
From page 226...
... As we discuss further in the next section, we generally assume that a national RBID would -- at least initially -- focus on handguns, and hence an annual entry workload of 1–2 million firearms per year, depending on whether imports are included.

9–B Assumptions

In Box 1-3, we describe some basic assumptions about the nature of a national RBID, with particular regard to the wording used in past legislation and in the enabling language of the currently operational state RBIDs.
From page 227...
... Third, we assume that the actual process of generating samples and acquiring images from them would follow very closely the New York Combined Ballistic Identification System (CoBIS) model: that is, that most of the burden of generating the sample of cartridge casings would fall on firearms manufacturers, who would include the sample in the firearm's packaging.
From page 228...
... When FFLs go out of business they are required to transfer their transaction records to ATF, which then stores them for use in tracing. Local law enforcement agencies may initiate a trace request by submitting a confiscated gun and associated information to the ATF's National Tracing Center (NTC)
From page 229...
... Specifically, we assume that queries on the database would be initiated by state and local law enforcement agencies, who would acquire images from evidence they wished to compare and send them over a network for comparison. (Doing this on NIBIN-supplied IBIS equipment, and effectively using the existing NIBIN terminals as the interface to the RBID, would obviously require changes in legislation -- which currently limits

[Table on this page: Completed Traces (by method), with columns Trace Result, Count, and Percent]
From page 230...
... In a national RBID, however, the interrelationships between entries in the database are not of direct interest (since there is no reason to expect a match between two newly manufactured or imported guns), and performing comparison requests as each new entry is added only serves to increase the computational demands on the system infrastructure. What is of interest in the RBID setting are the comparison results that are obtained when a piece of crime scene evidence is entered and compared against the RBID.
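To make the distinction between entry and query workloads concrete, a schematic sketch follows. The class and function names (ReferenceImageDatabase, correlate, add_entry, query) are hypothetical illustrations, not the interfaces of IBIS, NIBIN, or any actual RBID; the point is only that new entries are stored without triggering comparisons, while each crime-scene query is scored against every stored entry.

```python
# Schematic sketch only: hypothetical names, not an actual IBIS/NIBIN API.
class ReferenceImageDatabase:
    def __init__(self, correlate):
        self.correlate = correlate   # score function: (image_a, image_b) -> float
        self.entries = []            # (serial_number, image) pairs

    def add_entry(self, serial_number, image):
        # New-gun exhibits are simply stored; no correlation is run, since
        # two newly manufactured or imported guns are not expected to match.
        self.entries.append((serial_number, image))

    def query(self, evidence_image, top_k=10):
        # Crime-scene evidence is the only trigger for comparisons: score the
        # exhibit against every stored entry and return the top-ranked list.
        scored = [(self.correlate(evidence_image, image), serial)
                  for serial, image in self.entries]
        scored.sort(reverse=True)
        return scored[:top_k]
```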
From page 231...
... In this, we diverge from the New York CoBIS and Maryland MD-IBIS models, where routing of all database entries through a single site is tractable, and move toward the existing NIBIN model, where computational infrastructure is divided across three sites (and entry dispersed over more than 200 localities)
From page 232...
... The rate at which queries are made of the national RBID -- that is, the rate at which exhibits are entered by state and local law enforcement agencies for comparison purposes with the database -- will depend on local law enforcement acceptance and staff limitations. As described in previous chapters, large differences between jurisdictions in the effective use of the existing NIBIN system depend on differences in acceptance of the technology, hence the set of recommendations in Chapter 6 to enhance NIBIN by making it a more vital part of the investigative system.
From page 233...
... , the threshold does serve the purpose of limiting the amount of image and score data that must be pushed back from regional correlation servers to NIBIN partner agencies for every comparison request. Some limit on the number of results routinely returned on comparison requests would likely have to be established to keep transmission times in check.
From page 234...
... At the most basic level, the collection of exhibit casings from newly manufactured firearms should be relatively tractable because, conceptually, all it would require is a systematic, cross-manufacturer standardization of current practices of test firing for quality control. Manufacturers routinely test (or proof)
From page 235...
... For both newly manufactured and newly imported firearms, a critical question that would have to be addressed is the exact specification of the conditions under which test fires are to be performed and the number of firings that must be completed before designating one or two casings as the ballistic sample. As described in Section 3–D.3, the concept of a "settle-in" effect would be a greater concern if bullets were used as the sample rather than casings; in that event, the prevailing view among firearms examiners would hold that the gun must be fired 8–10 times before its unique markings stabilize.
From page 236...
... A useful framework is to consider the basic problem in working with ballistic image databases probabilistically. Define a true match to be the case when a firearms examiner confirms a suggested possible match from an image database query.
From page 237...
... Suppose one compares a reference casing with N guns in an image database; for simplicity, assume that there is one correct casing (gun) in the database that matches this reference exhibit and that all the other entries are nonmatches.
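This setup can be explored with a small Monte Carlo sketch. The normal score model below (non-match scores drawn from N(0, 1), the one true-match score from N(D, 1)) and the function name are illustrative assumptions, not the committee's actual computations; the sketch simply estimates how often the single true match lands in a top-10 list for a given separation D and database size N.

```python
import numpy as np

def top_k_success_rate(n_db=100, d_sep=2.0, k=10, trials=20_000, seed=0):
    """Estimate P(true match ranks in the top k) against n_db entries.

    Illustrative model only: non-match scores ~ N(0, 1); the one true-match
    score ~ N(d_sep, 1), so d_sep plays the role of the separation between
    the match and non-match score distributions.
    """
    rng = np.random.default_rng(seed)
    match = rng.normal(d_sep, 1.0, size=trials)              # one true match per trial
    nonmatch = rng.normal(0.0, 1.0, size=(trials, n_db - 1))
    # Rank of the true match = 1 + number of non-match scores exceeding it
    rank = 1 + (nonmatch > match[:, None]).sum(axis=1)
    return (rank <= k).mean()

if __name__ == "__main__":
    for d in (1.0, 2.0, 4.0):
        print(f"N = 100, D = {d}: top-10 success rate ~ {top_k_success_rate(d_sep=d):.3f}")
```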
From page 238...
... However, one can see from Table 8-6 that, with the exception of breech face measurements on the NBIDE exhibit set, the overlap metrics are all too large to be adequate. Even if N is as small as 100, the success rate for top 10 lists for the DKT exhibit sets is still less than 0.5.
From page 239...
... The second salient argument concerns the capacity of ballistic imaging systems to distinguish true matches from nonmatches, as described in Section 9–B.3 and Chapter 8: Basic probability calculations, under reasonable assumptions, suggest that the process of identifying a subset of possible matches that contains the true match with a specified level of certainty depends critically on as-yet-underived measures of similarity between and within gun types.
From page 240...
... do not have the discriminatory power needed to reliably place true matches in the top rankings using imaging comparisons. Though there is no special magic in the top 10 ranks, there is also a practical limit on the number of potential matches that any human examiner or operator is likely to page through and consider in his or her work; though the existing methods can be made to work well, they simply do not work well enough to make a national RBID practical.
From page 241...
... , which is inherently problematic for RBIDs since "revolvers are less likely to leave cartridge casings at crime scenes than are pistols."
From page 242...
... Early in ATF's work with the IBIS platform, Masson (1997:42) observed that as ballistic image databases grew in size, the IBIS rankings tended to produce suggested linkages that might look promising on-screen -- and might also be tricky to evaluate using direct microscopy: As the database grew within a particular caliber, 9mm for instance, there were a number of known non-matched testfires from different firearms that were coming up near the top of the candidate list.
From page 243...
... The design of the current databases, and the need to ensure a firewall from NIBIN data due to the legal restrictions on NIBIN content, have made the databases inconvenient to search: exhibits must be transported to specific facilities for acquisition and comparison. To that end, mechanisms for encouraging searches of state RBIDs by law enforcement agencies in the same state or region should be developed and the results evaluated.
From page 244...
... The key random variable of interest for our problem is T = ∑_{j=2}^{N} I_j, the number of scores that are ranked higher than the true match X1.
From page 245...
... Based on this, the probability of the correct match being in the top-K scores, P(T ≤ K), can be approximated in terms of the difference K − β
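Under the same illustrative normal model, the mean of T has a simple closed form, and one plausible reading of the truncated approximation above is a normal approximation in (K − β)/SD(T). The sketch below treats the indicator variables as independent (the "independent case" discussed on the following pages); both the model and the specific approximation are assumptions for illustration, not necessarily the formula used in the report.

```python
import numpy as np
from scipy.stats import norm

def mean_sd_T_independent(n_db, d_sep):
    """Mean and SD of T, the count of non-match scores that beat the true match,
    treating the indicators I_j as independent.

    Illustrative model: non-match scores ~ N(0, 1), true-match score ~ N(d_sep, 1),
    so each I_j is Bernoulli with p = P(X_j > X_1) = Phi(-d_sep / sqrt(2)).
    """
    p = norm.cdf(-d_sep / np.sqrt(2))
    beta = (n_db - 1) * p                      # E[T]
    sd = np.sqrt((n_db - 1) * p * (1 - p))     # SD(T) under independence
    return beta, sd

def approx_prob_top_k(n_db, d_sep, k):
    """One plausible normal approximation, P(T <= K) ~ Phi((K - beta) / SD(T));
    offered as a guess at the truncated expression, not the report's formula."""
    beta, sd = mean_sd_T_independent(n_db, d_sep)
    return norm.cdf((k - beta) / sd)

if __name__ == "__main__":
    beta, sd = mean_sd_T_independent(n_db=100_000, d_sep=4.0)
    print(f"beta = {beta:.1f}, SD = {sd:.1f}, "
          f"P(T <= 10) ~ {approx_prob_top_k(100_000, 4.0, 10):.3g}")
```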
From page 246...
... Since these are all matches to the same casing from the crime scene, a natural question is whether this will induce dependence among the Xi's and, if so, how the assumption of independence will affect the results. Statistically speaking, what is the difference between treating the image of the crime scene casing as fixed versus random?
From page 247...
... In some sense, this is the make-or-break case, since there has to be enough separation of the images that correspond to guns of the same type. One has the matching image X1 from the crime scene gun and the others X2, .
From page 248...
... But a caveat is in order first: the confidence levels in Tables 9-2 through 9-5 refer only to the probability of the true match being in the top K. They do not say anything about the correct one actually being identified in practice, which would depend on a firearms examiner reviewing the results of all K matches and finding the correct one (retrieving the physical evidence for a direct comparison)
From page 249...
... In the independent case in Table 9-2, the standard deviations were scaling up on the order of √N. But here they are scaling up linearly in N due to the covariances.
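The scaling contrast can be illustrated with a stylized calculation: if the (N − 1) indicator variables making up T share a common pairwise correlation ρ, then Var(T) = (N − 1)p(1 − p)[1 + (N − 2)ρ], so the standard deviation grows like √N when ρ = 0 and linearly in N for any fixed ρ > 0. The equicorrelation model and the numbers below are illustrative assumptions only, not the covariance structure actually computed for the report's tables.

```python
import numpy as np

def sd_T(n_db, p, rho=0.0):
    """SD of T = sum of (n_db - 1) Bernoulli(p) indicators with a common
    pairwise correlation rho -- a stylized stand-in for the dependence induced
    by sharing the crime-scene image, not the report's exact calculation.

    Var(T) = m * p * (1 - p) * (1 + (m - 1) * rho), with m = n_db - 1.
    """
    m = n_db - 1
    return np.sqrt(m * p * (1 - p) * (1 + (m - 1) * rho))

if __name__ == "__main__":
    p = 0.001
    for n in (10_000, 100_000, 1_000_000):
        print(f"N = {n:>9,}:  SD (rho = 0) = {sd_T(n, p):8.1f}   "
              f"SD (rho = 0.05) = {sd_T(n, p, rho=0.05):10.1f}")
```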
From page 250...
... The calculations in Tables 9-4 and 9-5 suggest that -- as in the simpler one-gun case -- values of K can quickly grow to levels of practical implausibility from the perspective of reviewing database comparison reports, particularly for low D values and less-clear separations between gun types. However, they also illustrate the importance of the degree of mean separation between the images from different gun types (akin to the discussion of overlap metrics in Section 9–C.3)
From page 251...
... So, for instance, the ability to detect matches in a relatively small database containing equal numbers of moderately distinct images (D1 = 4, D2 = 6; 10,000 each) is comparable to that when one small set of images (D1 = 4; 10,000)
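A small extension of the earlier simulation sketch gives a feel for this two-gun-type comparison. The normal score model is again an illustrative assumption, with D1 and D2 read as the mean separations between the true-match score and the two subsets of non-match entries; this is not a reproduction of the calculations behind Tables 9-4 and 9-5.

```python
import numpy as np

def top_k_success_two_types(n1=10_000, n2=10_000, d1=4.0, d2=6.0,
                            k=10, trials=2_000, seed=1):
    """Monte Carlo sketch of the two-gun-type case: n1 database entries whose
    scores sit d1 below the true match on average, plus n2 entries sitting d2
    below it.  Normal(mean, 1) scores are an illustrative stand-in only.
    """
    rng = np.random.default_rng(seed)
    match = rng.normal(0.0, 1.0, size=trials)   # true-match score per trial
    hits = 0
    for t in range(trials):
        same_type = rng.normal(-d1, 1.0, size=n1)    # less-separated non-matches
        other_type = rng.normal(-d2, 1.0, size=n2)   # better-separated non-matches
        rank = 1 + (same_type > match[t]).sum() + (other_type > match[t]).sum()
        hits += int(rank <= k)
    return hits / trials

if __name__ == "__main__":
    mixed = top_k_success_two_types()                 # D1 = 4 and D2 = 6, 10,000 each
    single = top_k_success_two_types(n2=0, d2=0.0)    # only the D1 = 4 set
    print(f"top-10 success, mixed database:  {mixed:.3f}")
    print(f"top-10 success, single D1=4 set: {single:.3f}")
```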

