Biometric Recognition: Challenges and Opportunities (2010)


Appendix D
Testing and Evaluation Examples

Large-scale biometric systems traditionally undergo a series of tests beyond technology and scenario testing. These large-scale system tests are typically at the system level, not just the biometric subsystem level, and occur multiple times in the life of a system in such forms as factory acceptance tests before shipment, site or system acceptance tests before initiating operations, and in-use tests to ensure that performance remains at acceptable levels and/or to reset thresholds or other technical parameters.

The following examples outline aspects of testing from three real-world, large-scale systems: the FBI's Integrated Automated Fingerprint Identification System (IAFIS), Disney's entrance control system, and the U.S. Army's Biometric Automated Toolset (BAT). Note that numerous other systems offer lessons as well, including the DOD's Common Access Card, US-VISIT, and California's Statewide Fingerprint Imaging System (CA SFIS). The inclusion or exclusion of systems in this discussion is not meant to convey a judgment of any sort.

THE FBI’S IAFIS SYSTEM

IAFIS underwent numerous tests before and after deployment in 1998 and 1999. After deployment, the FBI tracked performance for 5 years to determine the threshold for automatic hit decisions. After measuring and analyzing the performance data, the FBI was able to say with confidence that all candidates for whom the 10-fingerprint to 10-fingerprint search score was above a certain level were always declared as matches (or hits) by the examiners. As a result of this analysis the FBI was able to let the IAFIS system automatically declare as matches about 25 percent of the hits that previously required human intervention. This test used routine operational data in an operational environment and was not orchestrated to include any controls or prescreening on the input data. The transactions were run through the system normally, and match decisions were made by human examiners working with candidates presented by the IAFIS automated matchers.
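
To make the idea concrete, the following is a minimal sketch (in Python, with entirely invented data and names) of how such a "lights-out" threshold might be derived from an operational log that pairs each top-candidate matcher score with the examiner's final hit/no-hit decision. It illustrates the general approach only; it is not the FBI's actual procedure.

    # Hypothetical sketch: derive a "lights-out" threshold from operational data.
    # Each record pairs a matcher score with the examiner's final decision;
    # the data and field names are invented for illustration.
    from typing import List, Tuple

    def lights_out_threshold(transactions: List[Tuple[float, bool]]) -> float:
        """Return the lowest score above which every candidate was a confirmed hit."""
        # Highest score that an examiner ultimately rejected (a non-hit).
        highest_rejected = max((s for s, is_hit in transactions if not is_hit),
                               default=float("-inf"))
        # Lowest confirmed-hit score strictly above every rejected score.
        eligible = [s for s, is_hit in transactions if is_hit and s > highest_rejected]
        if not eligible:
            raise ValueError("no score cleanly separates hits from non-hits")
        return min(eligible)

    def fraction_of_hits_automated(transactions: List[Tuple[float, bool]],
                                   threshold: float) -> float:
        """Share of examiner-confirmed hits that the threshold alone would have decided."""
        hits = [s for s, is_hit in transactions if is_hit]
        return sum(s >= threshold for s in hits) / len(hits)

    # Toy operational log: (score, examiner declared a hit?).
    log = [(9800, True), (9500, True), (9100, True), (8700, True),
           (8400, False), (8200, True), (7900, False), (7600, True)]
    t = lights_out_threshold(log)
    print(t, fraction_of_hits_automated(log, t))

In practice, an analysis like the FBI's rests on years of operational transactions and far more careful statistical treatment before any decisions are automated.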

DISNEY’S ENTRANCE CONTROL SYSTEM

Walt Disney World (WDW) has publicly reported1 on internal testing using several different biometric technologies over the years. (See Box 5.1 for more on Disney's use of biometrics.) WDW tested various hand geometry and finger scanning technologies at several theme park locations as alternatives to the finger geometry readers then used in its turnstile application. WDW also tested technologies for other applications to increase guest service and improve operating efficiency. Testing there is done in four stages: laboratory testing, technology testing, scenario testing, and operational evaluation. Because WDW has had biometric technology in place since 1996 and substantial experience with the biometric industry, its mind-set is that a performance threshold, in both error rates and throughput, has already been established, and prospective vendors must exceed that level to be considered for future enhancement projects.

In WDW lab testing, prospective biometric devices or technologies are examined for the underlying strengths of their technology and modality, their usability, and their accuracy. This testing is performed under optimal, controlled conditions for all of the relevant parameters that can affect performance. Parameters such as technology construction and architecture, component mean time between failures, and theoretical throughput are extrapolated from the results of laboratory tests. The goal of laboratory testing is to determine quickly whether a device or technology is worth investigating further: if a technology cannot exceed the performance of the existing technology even under optimal conditions, there is no point in pursuing it.
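
As a rough illustration of that screening logic, the sketch below encodes a lab-stage gate in Python: a candidate advances only if it beats the incumbent on error rates and throughput under optimal conditions. The metric names and baseline numbers are invented for the example and are not Disney's.

    # Hypothetical sketch of a lab-stage screening gate; all numbers are invented.
    from dataclasses import dataclass

    @dataclass
    class LabResult:
        false_reject_rate: float    # fraction of genuine attempts rejected
        false_accept_rate: float    # fraction of impostor attempts accepted
        throughput_per_min: float   # users processed per minute under ideal conditions

    # Baseline performance of the incumbent technology (illustrative values only).
    INCUMBENT = LabResult(false_reject_rate=0.02,
                          false_accept_rate=0.001,
                          throughput_per_min=12.0)

    def worth_further_testing(candidate: LabResult,
                              incumbent: LabResult = INCUMBENT) -> bool:
        """Advance a candidate only if it beats the incumbent on every axis
        under optimal, controlled laboratory conditions."""
        return (candidate.false_reject_rate < incumbent.false_reject_rate
                and candidate.false_accept_rate <= incumbent.false_accept_rate
                and candidate.throughput_per_min > incumbent.throughput_per_min)

    print(worth_further_testing(LabResult(0.01, 0.0005, 15.0)))  # True
    print(worth_further_testing(LabResult(0.01, 0.0005, 10.0)))  # False: slower than incumbent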

If a prospective biometric device or technology is determined to be promising in the WDW lab environment, then the next stage of testing, called "technology testing," is conducted to examine the limitations of the technology: some parameters are controlled while others are allowed to vary to "extreme limits" to see how the technology reacts and where it fails. For example, increasing the amount of ambient lighting for a facial recognition system or increasing the amount of background noise for a voice system stresses the capabilities of those systems.

If the technology is still determined to be promising, scenario testing is performed by testing the technology in the live, operational environment with real-world users.2 Typically, all data are captured and subsequently analyzed to determine whether the system performed as expected, better, or worse, and whether some parameter was unexpectedly affecting performance. Video of the user interactions is often recorded to assist in the data analysis; it is particularly useful when testing produces unexplained anomalies or unexpected results. For example, video of the interactions may reveal users swapping fingers between enrollment and subsequent use in a fingerprint system. Throughout this testing process, a cost/benefit analysis for the potential system enhancement is updated with the results of each round of testing. If the performance gain is determined to be worthwhile, a business decision may then be made to migrate to the new technology. Disney has followed these scenario tests with operational tests on deployed systems to estimate actual error and throughput rates.
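
One simple way to look for an unexpectedly influential parameter in the captured scenario-test data is to bin the transaction log by that parameter and compare failure rates across bins. The sketch below assumes, purely for illustration, a log of (ambient light, success) pairs such as might come from a scenario test of a facial recognition system.

    # Hypothetical sketch: test whether a logged parameter (here, ambient light)
    # is associated with recognition failures in scenario-test data.
    from collections import defaultdict
    from typing import Dict, Iterable, Tuple

    def failure_rate_by_bin(log: Iterable[Tuple[float, bool]],
                            bin_width: float = 100.0) -> Dict[int, float]:
        """Group transactions into ambient-light bins and report each bin's failure rate."""
        attempts = defaultdict(int)
        failures = defaultdict(int)
        for lux, succeeded in log:
            b = int(lux // bin_width)      # e.g., 0-99 lux, 100-199 lux, ...
            attempts[b] += 1
            if not succeeded:
                failures[b] += 1
        return {b: failures[b] / attempts[b] for b in sorted(attempts)}

    # Invented data: failures cluster at high illumination, suggesting that lighting,
    # not the matcher itself, is driving the errors.
    log = [(80, True), (120, True), (150, True), (400, True),
           (650, False), (700, True), (720, False), (900, False)]
    for b, rate in failure_rate_by_bin(log).items():
        print(f"{b * 100}-{b * 100 + 99} lux: {rate:.0%} of attempts failed")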

U.S. ARMY'S BIOMETRIC AUTOMATED TOOLSET

BAT was developed in the late 1990s by the Battle Command Battle Lab of the Army Intelligence Center at Fort Huachuca, Arizona, as an advanced concept technology demonstration (ACTD) to enable U.S. military forces to keep a "digital dossier" on a person of interest.3 Other features, such as biometric collection and identification badge creation, were also included. BAT uses a rugged laptop and biometric collection devices (for facial images, fingerprints, iris images, and, in some cases, voice samples) to enroll persons encountered by the military in combat operations. Hundreds of devices were rushed into production to meet demand during Operation Iraqi Freedom and Operation Enduring Freedom in Afghanistan.

This military use of biometrics ensures that a person, once registered, can later be recognized even if his or her identity documents or facial characteristics change. This permits postmission analysis to identify persons of future interest and in-mission analysis to detain persons of interest from biometrically supported watch lists. These systems are considered by many to be a technical success today, and the data are shared, when appropriate, with the FBI, DHS, and the intelligence community. When first deployed, the devices did not go through factory or system acceptance tests because of the rapid prototyping and the urgent demand for them. After operational use, it was determined that the fingerprints collected were not usable by the FBI because several factors had not been considered in the original tactical system design, which did not include sending output to the FBI's strategic system, IAFIS. BAT was then formally tested operationally, and the required changes were identified and made. The operational retests before and after deployment showed that current-generation BAT systems generally met all of the image quality and record format protocols specified by the FBI. These BAT devices, however, use proprietary reference representations to share information on watch lists, which makes them less interoperable with standards-based systems than with one another.
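
The image quality and record format issue can be pictured as a pre-submission check on each collected record. The sketch below does not reproduce the FBI's actual specification; it assumes a few illustrative requirements (a minimum scanning resolution, 8-bit grayscale, and a couple of mandatory metadata fields, in the spirit of the ANSI/NIST-ITL transmission standards) and reports why a record from an early tactical device might be rejected by a strategic system.

    # Hypothetical pre-submission check for collected fingerprint records.
    # The required values below are illustrative stand-ins, not the FBI's actual specification.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class FingerprintRecord:
        resolution_ppi: int          # scanning resolution of the image
        bit_depth: int               # grayscale bit depth
        finger_position: str         # e.g., "right_index"
        capture_date: str            # ISO 8601 date string
        metadata: Dict[str, str] = field(default_factory=dict)

    REQUIRED_PPI = 500                                    # assumed minimum resolution
    REQUIRED_BIT_DEPTH = 8                                # assumed grayscale depth
    REQUIRED_FIELDS = ("collecting_unit", "subject_id")   # assumed mandatory metadata

    def submission_problems(rec: FingerprintRecord) -> List[str]:
        """List the reasons a record would fail this (assumed) strategic-system check."""
        problems = []
        if rec.resolution_ppi < REQUIRED_PPI:
            problems.append(f"resolution {rec.resolution_ppi} ppi is below {REQUIRED_PPI} ppi")
        if rec.bit_depth != REQUIRED_BIT_DEPTH:
            problems.append(f"bit depth {rec.bit_depth} differs from {REQUIRED_BIT_DEPTH}")
        for key in REQUIRED_FIELDS:
            if key not in rec.metadata:
                problems.append(f"missing mandatory field: {key}")
        return problems

    # A record captured by an early tactical device might fail like this:
    rec = FingerprintRecord(resolution_ppi=350, bit_depth=8,
                            finger_position="right_index", capture_date="2004-06-01",
                            metadata={"collecting_unit": "1st BCT"})
    print(submission_problems(rec))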

NOTES

2. J. Ashbourn, "User Psychology and Biometric Systems Performance" (2000). Available at http://www.adept-associates.com/User%20Psychology.pdf.

3. Available at http://www.eis.army.mil/programs/dodbiometrics.htm.