Appendix B
Technology Assessment Criteria and Field Tests

Technology Assessment Criteria

Accuracy

As stated in TCRP Report 86, Volume 13, ideally both the deterrence and the detection performance of technologies should be measured; deterrence, however, cannot be measured. Detection can be determined by using threat simulants. After a sufficient number of test runs are conducted, the probability of detection (or the false negative rate) can be determined. To avoid any operator-related bias that might skew the performance results, covert runs, in which the operators do not know the locations of the simulants or the items containing them, are advisable.

Key accuracy measures of concern are the False Negative Rate, the False Positive Rate (for both innocuous-valid and innocuous-not valid materials), and the Nuisance Rate. The False Negative Rate indicates how well the technology functions in the detection of explosives. The lower the rate, the better the technology is at avoiding "misses." It stands to reason that this rate should be decreased as much as possible, since a high False Negative Rate indicates that the technology is ineffective in detecting explosives.

The Nuisance Rate measures the likelihood of an alarm triggered by a genuine target substance that originates from an innocuous source. For instance, contact with fertilizer, which contains the same compounds as some explosives, can set off an alarm; police officers who handle gunpowder or explosives are another example.1, 2

There is a tradeoff between the False Negative Rate and the False Positive Rate (along with the Nuisance Rate, which is associated with the False Positive Rate): as the False Negative Rate is pushed down, the False Positive Rate starts to increase. The False Positive Rate indicates how likely a technology is to set off an alarm when there is no threat. A high False Positive Rate would disrupt operations and cause significant passenger delays. Operational efficiency (reliability of service, minimization of delays) is a primary concern and objective for transit agencies. Any technology that would significantly deteriorate operational efficiency is likely to be ruled out as a viable passenger and baggage screening technology for public transportation systems. The ideal technology would demonstrate low rates for both indicators. The Crossover Rate is the rate at which both rates are equal; the lower this rate is, the better the technology performs overall in terms of accuracy.

1 Fatah, A., J. Barrett, R. Arcilesi, K. Ewing, C. Lattin, M. Helsinki. Guide for the Selection of Chemical Agent and Toxic Industrial Material Detection Equipment for Emergency First Responders (NIJ Guide 100-00). National Institute of Justice, June 2000.
2 Rhykerd, C., D. Hannum, D. Murray, and J. Parmeter. Guide for the Selection of Commercial Explosives Detection Systems for Law Enforcement Applications (NIJ Guide 100-99). National Institute of Justice, September 1999.
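To make the accuracy measures above concrete, the following is a minimal sketch (in Python) of how the rates could be tallied from a set of covert test runs. The data structure, field names, and counts are hypothetical illustrations, not data or methods from the report; the crossover (equal-error) rate would be found by sweeping the alarm threshold until the false negative and false positive rates meet.

    # Illustrative only: all tallies below are hypothetical, not results from the report.
    from dataclasses import dataclass

    @dataclass
    class CovertTestTally:
        threat_runs: int        # runs in which a simulant was actually present
        threats_detected: int   # of those, runs that produced an alarm
        innocuous_runs: int     # runs with no simulant present
        innocuous_alarms: int   # of those, runs that alarmed anyway
        nuisance_alarms: int    # alarms traced to genuine but innocuous sources
                                # (e.g., fertilizer residue, gunpowder handling)

        def false_negative_rate(self) -> float:
            return 1.0 - self.threats_detected / self.threat_runs

        def false_positive_rate(self) -> float:
            return self.innocuous_alarms / self.innocuous_runs

        def nuisance_rate(self) -> float:
            return self.nuisance_alarms / self.innocuous_runs

    tally = CovertTestTally(threat_runs=200, threats_detected=192,
                            innocuous_runs=1000, innocuous_alarms=35,
                            nuisance_alarms=12)
    print(f"False negative rate: {tally.false_negative_rate():.1%}")
    print(f"False positive rate: {tally.false_positive_rate():.1%}")
    print(f"Nuisance rate:       {tally.nuisance_rate():.1%}")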

Currently, Mass Spectrometry (MS) has low error rates due to its high chemical specificity and strong informing power. Future generations of MS are expected to have even better accuracy.3

Throughput

Throughput can be defined as the number of passengers inspected per hour (or some other period of time). Throughput is important to transit operations from both the customers' point of view and the agency's. While minimal delays in accessing transit service would be tolerable from the passenger's viewpoint, prolonged delays would not. In terms of transit operations, delays may produce crowding and confusion in constrained areas of the system (e.g., the area near the turnstiles, on the platform). If the PSI method causes crowding, the security risk may actually increase because the location becomes more vulnerable to attack, which may offset the benefits of the PSI program.

3 Panel on Technical Regulation of Explosives-Detection Systems, National Materials Advisory Board, and Commission on Engineering and Technical Systems. Configuration Management and Performance Verification of Explosives-Detection Systems (Publication NMAB-482-3). National Research Council, Washington, D.C., 1998.
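To illustrate the throughput concern, the short sketch below relates per-passenger screening time to hourly capacity and queue buildup. The screening time, number of lanes, and arrival rate are hypothetical values chosen for illustration, not figures from any field test.

    # Hypothetical figures for illustration; not measurements from any field test.
    SCREEN_TIME_S = 20          # assumed average seconds to screen one passenger
    LANES = 2                   # assumed number of parallel screening lanes
    ARRIVALS_PER_HOUR = 600     # assumed peak-period passenger arrivals

    capacity_per_hour = LANES * 3600 / SCREEN_TIME_S
    print(f"Screening capacity: {capacity_per_hour:.0f} passengers/hour")

    if ARRIVALS_PER_HOUR > capacity_per_hour:
        # Passengers accumulate at this rate; crowding grows until demand falls off.
        backlog_per_hour = ARRIVALS_PER_HOUR - capacity_per_hour
        print(f"Queue grows by roughly {backlog_per_hour:.0f} passengers/hour")
    else:
        print("Demand is within capacity; delay stays close to the screening time")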

Operational Issues

There are other operational issues concerning the equipment itself, including 1) ease of use, 2) maintenance requirements, and 3) space requirements.

1) Ease of use translates into labor costs (the number of screeners needed to staff each unit) and training expenses (both initial and recurrent).

2) Maintenance requirements also affect expenditures and could create problems if repair and daily upkeep are intensive and/or complex. Also, some technologies may perform well under laboratory conditions but do poorly in the field. Therefore, any new technology or equipment should be tested in an actual operating environment before it is considered for deployment.

3) Space requirements are another issue. In many transit facilities, space is limited and fixed; constructing additional space to accommodate larger screening equipment may not be feasible. Equipment size may also vary by vendor. Therefore, this is a matter of ensuring that sufficient space is available for the equipment.

Customer Acceptance

According to a panel on airline passenger screening, the perceived threat level appears to be related to the level of inconvenience and privacy invasion a passenger is willing to experience, and possibly even to the health risks passengers are willing to incur.4

This leads to 1) the importance of understanding the specific nature of the security threat(s) facing the transit system; 2) the need to assess transit customers' perceptions of that threat; and 3) if there is a discrepancy between 1) and 2), the importance of addressing it by communicating with the public to clarify the nature and level of the threat. Based on these observations alone, it appears that there can be no one-size-fits-all technological or methodological solution to passenger screening.

From a bird's-eye perspective, customer acceptance is relevant to the transit agency's bottom line because customer perception of service quality is linked to the transit customer's mode choice. If customer acceptance is low and customer satisfaction declines, the transit system's ridership and its revenues may be in jeopardy. Therefore, customer perceptions about invasion of privacy, effects of the screening system on physical health, its effect on service quality, and its overall effectiveness in combating terrorist activity are all vital. If the perceived benefits of the screening system outweigh the perceived disadvantages, customer satisfaction and ridership may be positively affected. For instance, if a transit passenger feels more secure because a particular screening technology is accurate and reliable, they will continue to use transit and may encourage others to use it as well. The opposite also holds: if passengers face constant delays due to the screening technology and have, say, been reprimanded for being late to work, they may decide to use a different mode (drive to work) because the perceived disadvantages of using transit, once the technology is installed, exceed the perceived benefits.

In order for transit security screening to gain customer acceptance, customer perceptions should be ascertained, and any questions and concerns about the screening process should be addressed through intensive customer outreach and communications immediately prior to and during the initial months of the rollout period. Also, as security screening becomes more commonplace at public venues such as shopping malls and sports arenas, widespread acceptance of transit security screening may occur.

Health Issues

Exposure to radiation on a daily basis could be disconcerting to some passengers. Information regarding the safety of the technology and equipment should be disseminated to passengers to alleviate these concerns. In general, technologies using any type of radiation and, more specifically, ionizing radiation may cause some concern.5 While assessing the safety of a particular technology is beyond the scope of this project, the following may be stated regarding customer perceptions of bulk detection and trace detection technologies: for bulk detection technologies, millimeter wave imaging and infrared detectors would probably be perceived by transit customers to be the safest (no radiation is used),6, 7 and terahertz would probably be perceived as the next safest (no ionizing radiation is used, although terahertz radiation is used). For trace detection technologies, optical, IR spectroscopy, and surface acoustic wave (SAW) would most likely raise the fewest health-related concerns.

4 Committee on Commercial Aviation Security, National Materials Advisory Board and Commission on Engineering and Technical Systems. Airline Passenger Security Screening: New Technologies and Implementation Issues. National Research Council, Washington, D.C., 1996.
5 Ibid.
6 Huguenin, G. Richard. The Detection of Hazards and Screening for Concealed Weapons with Passive Millimeter Wave Imaging Concealed Threat Detectors. South Deerfield, MA.
7 SafeView Now Shipping Its Scout Personnel Screening Systems; Millimeter Wave Technology Provides Safe Alternative to X-ray. TMCNet on the Web, www.tmcnet.com, May 17, 2005.

Costs

All categories of costs should be estimated: unit costs, installation costs, lifecycle costs, O&M and labor costs, training costs, and infrastructure modification costs. For the newer technologies and equipment models, lifecycle costs may not be available. In selecting a particular technology for deployment, common sense dictates that a technology that would exceed an agency's security budget would need to be eliminated from the list of possible alternatives. Also, the total cost should be considered in conjunction with the benefits provided by a particular technology and delivery method.8, 9

8 Committee on Commercial Aviation Security, National Materials Advisory Board and Commission on Engineering and Technical Systems. Airline Passenger Security Screening: New Technologies and Implementation Issues. National Research Council, Washington, D.C., 1996.
9 Fatah, A., J. Barrett, R. Arcilesi, K. Ewing, C. Lattin, M. Helsinki. Guide for the Selection of Chemical Agent and Toxic Industrial Material Detection Equipment for Emergency First Responders (NIJ Guide 100-00). National Institute of Justice, June 2000.
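As a rough illustration of how the cost categories above might be rolled into a single figure, the sketch below totals a lifecycle cost for one screening unit. Every number is a hypothetical placeholder; an agency would substitute vendor quotes and its own labor and maintenance estimates.

    # All figures are hypothetical placeholders, not costs from the report.
    unit_cost            = 150_000   # per screening unit
    installation_cost    =  25_000   # site preparation, wiring, etc.
    infrastructure_mods  =  40_000   # structural changes to the station
    annual_om_and_labor  = 120_000   # maintenance plus screener staffing, per year
    initial_training     =  15_000
    recurrent_training   =   5_000   # per year
    service_life_years   = 10

    lifecycle_cost = (unit_cost + installation_cost + infrastructure_mods
                      + initial_training
                      + service_life_years * (annual_om_and_labor + recurrent_training))
    print(f"Estimated lifecycle cost per unit: ${lifecycle_cost:,.0f}")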

Other Assessment Criteria

Other factors that ought to be taken into account are portability of the equipment, alarm capability, detection states, start-up time, resistance to interferents, power capabilities, battery needs, operational environment, and durability. Many of these factors may vary based on the particular manufacturer and model.

Portability of the equipment can be represented by the dimensions of the equipment and its weight. Typically, equipment is considered portable if one person is able to transport it.

Alarm capability refers to the type of alarm (audio and/or visual) a detector has and the effectiveness of the alarm.

Detection states refer to the state(s) – vapor, aerosol, liquid – that can be detected.

Start-up time is the time required to set up the equipment, including calibration requirements, if any.

Resistance to interferents refers to how well the equipment withstands interferents – substances that can mask or deactivate its detection capability and that terrorists could exploit as a countermeasure.

Power capabilities describe the need for electrical power or other power sources. For equipment that is powered by batteries, battery needs (length of battery life) should also be considered.

Operational environment is the environment in which the equipment is able to operate. Environmental conditions that could affect detection capability include excessive moisture (rain, high humidity), temperature extremes, and the presence of diesel fuel, smoke, and other vapors.

Durability refers to the ability of the equipment to tolerate rough usage. This factor becomes important if the screening equipment will be moved frequently.

The National Institute of Justice Guide for the Selection of Chemical Agent and Toxic Industrial Material Detection Equipment for Emergency First Responders10 is a comprehensive compilation of chemical detection equipment evaluations. The guide provides useful information on chemical agents and toxic industrial materials (TIMs) and presents a tabular summary of the performance results for each equipment type using the following performance criteria (a simple scoring sketch based on criteria like these follows the list):

• Chemical agents detected
• TIMs detected
• Sensitivity
• Resistance to interferents
• Response time
• Start-up time
• Detection states
• Alarm capability
• Portability
• Battery needs
• Power capabilities
• Environment
• Durability
• Unit cost
• Operator skills
• Training

10 Fatah, A., J. Barrett, R. Arcilesi, K. Ewing, C. Lattin, M. Helsinki. Guide for the Selection of Chemical Agent and Toxic Industrial Material Detection Equipment for Emergency First Responders (NIJ Guide 100-00). National Institute of Justice, June 2000.
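One way to turn criteria like these into a side-by-side comparison of candidate detectors is a simple weighted scoring matrix, sketched below. This is an illustrative decision aid, not the NIJ guide's shaded-circle coding scheme; the weights, detector names, and scores are all hypothetical.

    # Hypothetical weights and scores for illustration; not ratings from the NIJ guide.
    CRITERIA_WEIGHTS = {
        "sensitivity": 3,
        "resistance_to_interferents": 3,
        "response_time": 2,
        "start_up_time": 1,
        "portability": 2,
        "unit_cost": 2,
        "training_burden": 1,
    }

    # Scores on a 1 (poor) to 5 (excellent) scale for two made-up detector models.
    candidates = {
        "Detector A": {"sensitivity": 5, "resistance_to_interferents": 3,
                       "response_time": 4, "start_up_time": 2, "portability": 2,
                       "unit_cost": 2, "training_burden": 3},
        "Detector B": {"sensitivity": 3, "resistance_to_interferents": 4,
                       "response_time": 3, "start_up_time": 5, "portability": 5,
                       "unit_cost": 4, "training_burden": 4},
    }

    def weighted_score(scores: dict) -> int:
        # Sum of (criterion weight x criterion score) over all criteria.
        return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

    for name, scores in sorted(candidates.items(),
                               key=lambda kv: weighted_score(kv[1]), reverse=True):
        print(f"{name}: {weighted_score(scores)}")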

While some of the information may be outdated due to advances in technology, the coding scheme, which consists of shaded circles, and the selection factor key are excellent models that may be used for similar technology assessment research.

The NIJ Guide (Volume I) indicated that the staff of the Center for Domestic Preparedness, Military Chemical/Biological Units, the National Institute of Justice, and members of a federal Interagency Board had compiled questions to assist officials in selecting detector vendors/models.11 While some of these questions duplicate factors already mentioned in TCRP Report 86, Volume 13, they also provide additional questions useful in differentiating detection equipment models. The questions were as follows:

1. What agents has the equipment been tested against?
2. Who conducted the tests? Have the test results been verified by an independent laboratory or only by the manufacturer? What were the results of those tests?
3. What common substances cause a 'false positive' reading or interference?
4. Is the test data available? Where?
5. What types of tests were conducted? Have any engineering changes or manufacturing process changes been implemented since the testing? If so, what were the changes?
6. Can the equipment detect both large and small agent concentrations?
7. Are there audible and visual alarms? What are their set points and how hard is it to change them? Are the alarm set points easily set to regulatory or physiologically significant values?
8. How quickly does the detector respond to a spike in the agent concentration? How quickly does the detector clear when taken to a clean area?

11 Ibid.

9. How long does it take to put the equipment into operation? Can it be efficiently operated by someone in a Level A suit?
10. How long do the batteries last? How long does it take to replace or recharge batteries? What is the cost of new batteries? Are expended batteries HAZMAT, and what is the cost of disposing of them?
11. How long has the company/manufacturer been involved with the Chem-Bio-Nuc and first responder industries? You may also ask for references.
12. Is the company currently supplying its product(s) to similar agencies? If so, to whom? Ask for names and phone numbers of departments currently using the company's equipment, and ask to follow up by phone on any written testimonials.
13. What additional items are required to operate/maintain the equipment, and at what cost? What training materials are provided (manuals, videotapes, CD-ROMs)? What is the cost of training materials?
14. What type of warranty/maintenance support is offered? At what cost?
15. What is the return rate on the equipment under warranty? What are the top five reasons for failure?
16. What on-hand logistical support is required, and at what cost? How often does the equipment need to be sent back to the manufacturer for maintenance?
17. How often does the equipment require calibration? Does calibration require returning the equipment to the manufacturer? Does the calibration involve hazardous materials?
18. What special licenses/permits/registrations are required to own/operate the equipment?
19. What similar companies' products has this product been tested against? What were the results of the tests?
20. What is the shelf life of the equipment (open exposed, open unexposed, closed exposed, closed unexposed)?
21. What is required to decontaminate the equipment if it is taken into the Hot Zone?
22. What capability does this equipment give me that I do not currently possess? What equipment can I do away with if I purchase this? Is it only used for military chemicals?
23. Does this equipment require any hazardous materials for cleaning? If yes, what are they?

24. Taking weight and size into consideration, what procedures/processes are needed to deploy the equipment down range? How hard is it to decontaminate the equipment for removal from the Hot Zone, and what procedures/processes are used to do so?
25. What is the theory of operation (e.g., surface acoustic wave (SAW), photoionization, flame ionization)?
26. What are the environmental limitations – high temperature, low temperature, humidity, sand/dust?
27. What are the storage requirements (i.e., refrigerator, cool room, or no special requirements)?
28. What training is required to use the equipment and interpret the results? Does the company provide this training, and what is the cost? How often is refresher training required?

Field Tests

TCRP Portable Explosive Detection Devices Test

Testing of lightweight portable detectors from one manufacturer was done by the researchers in the TCRP research study on the Applicability of Portable Explosive Detection Devices in Transit Environments. The tests were conducted at three transit sites in the U.S. The portable devices were tested under a variety of conditions (near external fumes, diesel fuel, etc.) and in a range of transit settings (near turnstiles, on platforms, in vehicles – commuter rail, subway, light rail, and trolley – and on escalators, in maintenance yards, and in parking facilities). The test results showed that none of these environmental factors affected the reliability of the devices. While the average test time was less than 2 minutes, the researchers concluded that the time necessary to screen every passenger boarding a transit vehicle would be prohibitive.12

TRIP I, II, and III Tests13, 14

TRIP I, II, and III were a series of TSA-sponsored Transit Rail Inspection Pilot (TRIP) studies of passenger and baggage screening technologies for use in commuter rail environments. TRIP I was held in May 2004 at the New Carrollton, MD rail station, a transit hub for WMATA, AMTRAK, and MARC passengers.

12 Haupt, Steven G., Shahed Rowshan, and William C. Sauntry. TCRP Report 86: Volume 6—Applicability of Portable Explosive Detection Devices in Transit Environments. Transportation Research Board of the National Academies, Washington, D.C., 2004.
13 Transit Rail Passenger Security Presentation. TSA. Jan. 27, 2006.
14 Pryor, Robert. TSA TRIP I, II, III Tests Presentation Slides. CTO Operational Integration Division / TSA.

Lessons learned were gathered, a technology readiness gap assessment was performed, and technology requirements were refined. TRIP I established the operational test-bed site at New Carrollton and evaluated equipment, processes, and procedures against a partial set of threat scenarios. The objectives of TRIP I were to screen 100% of passengers and their carry-on articles during the designated screening periods; successfully resolve all alarms; determine the operational effectiveness of processes, procedures, and technologies; and determine the reliability, maintainability, and availability of the technologies used during the pilot.

TRIP I screening process: A TSA screener told passengers arriving at the checkpoint to place carry-on articles on the Multi-View Tomography (MVT) entry belt and then step into the trace detection portal, EntryScan3. If neither the passenger nor their articles alarmed, the passenger was allowed to proceed to the train boarding area. Passengers who set off an alarm were patted down to ensure that they were not carrying explosives, and their carry-on articles were sampled with ETD and searched. If a bag set off an alarm, an ETD sample was taken using a document scanner, and the bag was also physically searched with the aid of MVT-provided bag images. In the waiting area, canine teams and a radiation pager were also deployed.

Equipment used and average scan times:

• The GE Ion Track EntryScan3 Explosives Trace Detection (ETD) portal scanned passengers for traces of explosives. The average time to screen was 14.9 seconds; higher times were typically caused by exit faults.
• The L-3 Communications Multi-View Tomography X-Ray (MVT) scanned passenger carry-on bags for bulk explosives. The average time to screen was 5.2 seconds.
• The Smiths Barringer Ionscan 400B document scanner was used if a bag alarmed on the MVT: an ETD sample was taken and the bag was physically searched with the aid of MVT-provided bag images.

Because both passengers and bags were screened, the average time to traverse the checkpoint was high – 96 seconds. However, passenger satisfaction survey responses indicated that 90% of the passengers were supportive of the screening.
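As a rough way to relate the TRIP I timing results to station demand, the sketch below estimates how many parallel screening lanes a station might need. The 14.9-second and 96-second figures are the averages reported above; the peak boarding rate is a hypothetical assumption, and treating each figure as a fixed per-passenger service time is a deliberate simplification.

    import math

    # Figures reported from TRIP I (above): average scan time per step and average
    # time for a passenger to traverse the whole checkpoint.
    PORTAL_SCAN_S = 14.9          # EntryScan3 trace portal, per passenger
    CHECKPOINT_TRAVERSAL_S = 96   # end-to-end, passenger plus carry-on screening

    # Hypothetical peak demand; substitute a station's actual boarding counts.
    PEAK_BOARDINGS_PER_HOUR = 1_200

    def lanes_needed(service_time_s: float) -> int:
        per_lane_per_hour = 3600 / service_time_s
        return math.ceil(PEAK_BOARDINGS_PER_HOUR / per_lane_per_hour)

    # Optimistic bound: the trace portal is the only bottleneck and steps overlap.
    print("Lanes needed (portal-limited):   ", lanes_needed(PORTAL_SCAN_S))
    # Conservative bound: each passenger occupies a lane for the full traversal time.
    print("Lanes needed (traversal-limited):", lanes_needed(CHECKPOINT_TRAVERSAL_S))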

TRIP II was conducted in June-July 2004 at Union Station in Washington, D.C. Equipment, processes, and procedures for screening checked baggage were evaluated based on the information gathered from TRIP I. A Smiths Heimann EDtS was used to screen checked baggage on selected trains. The checked bags were sent to the screening location by a system of belts and then placed on the EDtS entry belt. For bags that set off an alarm, and for baggage and other items left in the parcel room, the Smiths Barringer Ionscan 400B ETD equipment or canine teams were used. If an alarm was set off, a full bag search was performed.

TSA concluded that these methods did not adversely impact station operations, and the screening results demonstrated operational feasibility. The average screening time was 29 seconds per article.

TRIP III was conducted in July-August 2004 on New Haven, CT's Shore Line East Commuter Line. It established an operational test-bed site and evaluated equipment, processes, and procedures for on-board screening of passengers and baggage. A rail car was transformed into a screening car by removing seats and installing detection equipment. TRIP III demonstrated that on-board, in-transit screening was possible. The false positive rates were lower than expected: X-ray – less than 5%, document scanner – less than 2%, tabletop ETD – less than 2%; and all alarms were successfully resolved. The X-ray equipment was operational 100% of the time during the pilot period, while the document scanner and tabletop ETD were each down about 12 hours at separate times. For non-alarming passengers, the average time for the entire screening process was 20-22 seconds. The median time per station-stop group was 253 seconds to screen all embarking passengers. Passenger satisfaction survey responses indicated that 91% of the passengers were supportive of the screening.

TSA Presentation Slides on TRIP I, II, III

Primary screenings were performed using the Advanced Automated X-Ray Explosives Detection unit (L3 APS-II), which performs an X-ray analysis of suspect materials, and the Smiths Barringer Ionscan 400B document scanner. Secondary screenings were done using the GE Ion Track Ionizer, which can identify trace amounts of 40 different explosive substances: carry-on baggage is swiped with a collection pad, which is then placed in the ionizer for analysis. Secondary screenings also included visual and/or physical searches by a TSA screener.

Based on the results of the TRIP tests, the TSA Office of Chief Counsel concluded that, consistent with the Fourth Amendment, the operational aspects of the pilot tests did not preclude the federal government from establishing transit-related terrorist screening as mandatory. The TSA legal opinion was rendered after consultation with the Department of Justice. Of note, it was considered relevant that passenger participation would have been greatly reduced if screening had not been mandatory; however, it was also observed that the results of the customer surveys were generally very positive.

Other lessons learned from the TRIP studies were as follows:

• Early and frequent screener participation in equipment and process design is critical.
• Agency collaboration is important when testing screening technologies.
• Electrical power disruptions can damage ETDs.
• The rail environment necessitates more frequent maintenance for ETDs.
• Customer support for rail passenger and baggage screening was high.
• Some disadvantages and issues highlighted during the studies include:
  o The screening process may not be compatible with large passenger volumes.
  o Rail stations were not designed to accommodate screening equipment; therefore, some stations may not be suitable for large equipment.
  o It would be very costly for a rail agency to procure, install, and operate screening equipment at every commuter rail station.

Maryland Transit Pilot Test15

A Mobile Security Checkpoint (MSC) pilot program sponsored by TSA in partnership with the Maryland Transit Administration (MTA) was conducted in April 2006 for four weeks between the hours of 5 a.m. and 9 a.m. The pilot program evaluated the operational feasibility, effectiveness, and cost of screening technology installed in a mobile unit. The containerized checkpoint was located at the Dorsey Road MARC commuter rail station. Passengers were required to be screened by an explosives trace portal (the Sentinel II Portal manufactured by Smiths Detection) and a metal detector manufactured by CEIA; their bags were screened by an X-ray explosives detector, the HI-SCAN 6046si, and a trace explosives detector, the Ionscan 400B. Both of these baggage screening detection models are manufactured by Smiths Detection. As with the NJ PATH test, passengers were not required to place cell phones, keys, and change in a special container.

SAIL Tests

The purpose of the SAIL tests was to test the feasibility of using new technologies while maintaining efficient passenger and vehicle screening for high-volume commuter ferries.

15 TSA Unveils Mobile Security Checkpoint Pilot Program with Maryland Transit Authority. DHS Press Release. TSA. April 3, 2006.

Cape May-Lewes Ferry SAIL Test16

A vehicle screening project spearheaded by TSA in conjunction with the Coast Guard and the Delaware River and Bay Authority to screen automobiles for explosives and radioactive material at the Cape May-Lewes Ferry Terminal started on October 21, 2004. The Secure Automobile Inspection Lanes (SAIL) test project was conducted at the Cape May-Lewes Ferry in Cape May, NJ. Passengers were asked to exit their car or truck, and a screening van drove past the vehicle. The technology used was Z Backscatter, capable of identifying conventional and plastic explosives. Processing took less than a minute per vehicle. Vehicles flagged by the initial screening received a secondary screening by a canine team.

Larkspur to San Francisco Ferry SAIL II17

The second phase of the SAIL project, Secure Automated Inspection Lanes or SAIL II, began in August 2005. SAIL II was also a 30-day test of a screening device; this time, passengers were screened for explosives. The tests were led and funded by TSA in close collaboration with the Coast Guard and local agencies. Ion Mobility Spectrometry technology was tested in the SAIL II passenger screening trial in two formats: one as a document scanner that would identify explosives on passenger tickets and other travel documents, and the other as a desktop trace explosives detection system. The vendor was Smiths Detection, a provider of trace detection and X-ray security screening equipment.

The document scanner collects samples by swiping the surface of documents, such as driver's licenses, passports, and other travel documents, over a sample disc, which is then analyzed by the detector. Passengers held or touched a card that was capable of capturing traces of explosives; the card was then passed through a scanning machine. After the test, each card was destroyed in front of the passenger. Passengers who tested positive were required to undergo a secondary screening. The desktop detector is able to screen the contents of luggage, packages, and electronic items for traces of explosives.

16 TSA To Conduct First Real-World Test Of Cutting-Edge Backscatter Technology. DHS Press Release. TSA. Oct. 21, 2004.
17 Fimrite, Peter. Ferry passengers to have hands tested. New gadget scans for explosives in name of counterterrorism. San Francisco Bay. Aug. 26, 2005.
