
Site-Based Video System Design and Development (2012)

Chapter 1 - Introduction

Suggested Citation: National Academies of Sciences, Engineering, and Medicine. 2012. "Chapter 1 - Introduction." Site-Based Video System Design and Development. Washington, DC: The National Academies Press. doi: 10.17226/22836.


The aim of this project was to develop and validate a new observation tool for vehicle safety research: an automated video tracking system. The system is to be capable of wide-scale use and capture detailed vehicle trajectories in traffic environments where the safety performance of the highway is of particular interest. The captured trajectories can be used to find traffic conflicts and compute relevant metrics, such as gap times and distances between vehicles. The motivation for the study is the low fidelity and low frequency of historical crash data. Researchers have little objective information about vehicle speeds, positioning, and timing just prior to a crash, and limited information about the contributing roles of driver, environment, highway design, and especially surrounding traffic when crashes occur. The hypothesis is that captured trajectories for conflicts and near crashes can supply essential new information through objective measures of conflict, thus avoiding excessive delays waiting to gather sparse data on actual crashes. To assess risk factors, the tracking system is also to gather associated exposure data. Therefore, the challenge is to design and test an automated video tracking system that captures most or all vehicle trajectories at a particular site; this must be done with sufficient fidelity to compute conflict metrics (as well as related factor variables such as speed and traffic density). The current study includes a small field trial designed to establish the validity of the video tracking system as a viable tool for safety research.

The concept design and system development are to be guided by the research agenda of SHRP 2. Although the SHRP 2 Safety program is centered on a large-scale field study using instrumented vehicles, it includes the current thrust to develop a robust prototype system that captures vehicle motions from site-based video image processing. Vehicle-based and site-based approaches have their own strengths and weaknesses. In-vehicle studies provide high-quality data about the driver and the vehicle but only limited access to information about the traffic, highway, and environment. For incidents or events of interest, it is possible to supplement the vehicle-based data collection with static information, such as highway geometry; but for dynamic information, especially the kinematics of other vehicles, the approach is limited. By contrast, the site-based approach is directly focused on the dynamic traffic environment, and it is much easier to supplement this with detailed information about the local geometry, signage, and traffic signals; thus, it can capture interactions and conflict information that are unavailable to an in-vehicle study. In the current study, in which trajectories were captured at a signalized intersection, traffic signal states were recorded in parallel with the vehicle motions. Of course, human factors information is at best limited for site-based systems.

The broad theme of the SHRP 2 Safety program is to identify risk factors, develop and validate surrogates for crashes, and enable methods and data collection to answer a variety of research questions relating to highway safety. Particular emphasis is placed on intersection and road departure crashes. Given the strengths and weaknesses of the in-vehicle and site-based approaches, the current project is focused more on the intersection safety problem, where conflicts are spatially localized and interactions between vehicles dominate the safety problem. Intersection vehicle tracking is also the more technically challenging problem for video-based tracking, so the current project focuses directly on intersection safety.

Objective measures of intersection conflict, such as gap timing for path-crossing conflicts, are a particular focus of the video-based system. According to the SHRP 2 priorities, these metrics are to be used as surrogates for actual crashes, and statistical techniques will validate the approach by relating the patterns and influences of highway and other factors between conflicts and crashes. These aspects are to be the subject of future studies, in which larger data sets will be captured and analyzed. For this project, such aims help define the system requirements and evaluation criteria in terms of accuracy, level of automation, reliability, and system availability. This is the central aim of the system development, so that future deployment is enabled on a wider scale: crash frequencies and types can be associated with corresponding conflicts, and ultimately design improvements and other interventions can be implemented to systematically reduce both conflicts and crashes.
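The conflict metrics themselves are defined later in the report (Chapter 5). Purely as an illustration of the kind of computation that captured trajectories enable, the sketch below derives a post-encroachment-style gap time for a path-crossing conflict; the trajectory format, the rectangular conflict zone, and all names are assumptions made for this example, not the Site Observer's actual data model.

```python
# Illustrative sketch (not from the report): post-encroachment time (PET) is
# the time between the first vehicle leaving a shared conflict zone and the
# second vehicle entering it. Trajectory format and names are assumptions.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Sample:
    t: float   # time (s)
    x: float   # ground-plane position (m)
    y: float

def time_in_zone(traj: List[Sample],
                 zone: Tuple[float, float, float, float]) -> Optional[Tuple[float, float]]:
    """Return (entry_time, exit_time) for an axis-aligned conflict zone,
    or None if the trajectory never enters it."""
    xmin, ymin, xmax, ymax = zone
    inside = [s.t for s in traj if xmin <= s.x <= xmax and ymin <= s.y <= ymax]
    return (min(inside), max(inside)) if inside else None

def post_encroachment_time(traj_a: List[Sample],
                           traj_b: List[Sample],
                           zone: Tuple[float, float, float, float]) -> Optional[float]:
    """PET = entry time of the later vehicle minus exit time of the earlier one.
    Small (or negative) values indicate a severe conflict (or an overlap)."""
    occ_a, occ_b = time_in_zone(traj_a, zone), time_in_zone(traj_b, zone)
    if occ_a is None or occ_b is None:
        return None
    first, second = (occ_a, occ_b) if occ_a[0] <= occ_b[0] else (occ_b, occ_a)
    return second[0] - first[1]
```

In practice the conflict zone would be derived from the intersection geometry and the paths of the two vehicles rather than fixed in advance, but the calculation shows why sufficiently accurate positions and timestamps are needed for every vehicle.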

Accurate and searchable intersection trajectory data are required not only for recording events (for example, crashes, near crashes, and conflicts that require avoidance maneuvers) but also for recording accurate trajectories for nonevents to characterize the baseline flow of traffic, and this places strong demands on the video system. This is because risk analysis requires detailed and searchable data on the normal driving population to determine denominator, or exposure, measures. In the context of conflict metrics, it is far more powerful to have detailed trajectories for all vehicles than, for example, to use manual techniques to refine trajectories for specific events.

Because of the complex nature of the data collection, which deals with multiple interacting vehicles, the video camera appears to be the most feasible sensor, and although this is a basic assumption of the research project, it is worth making a brief comparison with existing alternatives. On the positive side, video cameras are commonly installed on highways and are relatively inexpensive and portable, even high-precision machine vision cameras. The system development builds on earlier work, including research and development carried out previously by the S09 research team. There is also a relatively mature commercial technology for machine vision cameras and a large body of research on algorithms for extracting information from video streams. On the negative side, the video image provides only a projective 2-D rendering of the 3-D world; the video image has no sense of depth. Video images also suffer from large data volumes, image complexity, optical distortion, susceptibility to occlusions and stray light reflections, and basic lighting and weather variations. All of these factors pose challenges for the development of an automated video-based tracking system.

Alternative sensors capture depth information but have other limitations. The most common alternative sensor used for vehicle detection and tracking is 24 GHz or 77 GHz radar. These devices scan horizontally and lack vertical resolution, so the mounting position ideally is just above ground height, similar to where such sensors typically are mounted on vehicles (i.e., at bumper height). This increases occlusion problems, where a nearby vehicle blocks the reflection of a more distant vehicle. Radar also has limited angular resolution (at around 0.5°, poor by video image standards), which is made worse by changes in the location of the reflection point on a detected vehicle. Most important, radar offers a limited field of view, typically in the range of 10-15°, which is wholly inadequate for covering a large intersection without using a very large number of radar units.

Lidar systems use scanning laser light instead of microwaves to perform the ranging, and these provide a more serious option for vehicle tracking but have their own limitations. Lidar systems are significantly more expensive than cameras, so it is less reasonable to consider such devices for long-term installation; it is hard to conceive of multiple laser scanners being left unattended at an intersection for periods of several weeks or months. As with radar, lidar systems lack vertical resolution and are best mounted close to the ground, again increasing problems of occlusion. Angular resolution is better than that of radar, and the field of view generally is better.

Integrating video and radar is also a feasible option, but it adds to system complexity and in no way removes the need for video image processing. Likewise, other sensors such as loop detectors could provide data to support functions such as triggering in the video system; but in the case of loop detectors there appears to be little advantage, because using video image processing to provide virtual loops is already an established approach, and the additional issues of sensor latency and dependency on installed hardware and interfaces make them hard to justify. In fact, one of the strong points of video image data is that the triggering of an electronic camera shutter can be controlled accurately enough that sensor latency is of minimal concern.

For intersection safety, an additional sensor or data requirement is to acquire traffic signal states at the same time vehicle trajectories are recorded. This could be a digital input from suitably equipped traffic signal controllers or, as in the present project, an analogue sensor recording activity on the signal load circuits. This was conveniently achieved using optical sensing of the LEDs on the load switches in the traffic signal cabinet, which has the advantage of avoiding any electrical connection between the monitoring system and the traffic signal circuits.

In this project, an operationally efficient option has been developed, namely, to integrate video with video. This means using synchronized video streams from multiple digital cameras mounted at an intersection and combining the results from these video streams to reconstruct vehicle trajectories. The design concept includes the architecture of the system, whereby individual video streams are processed in parallel based on feature extraction; data fusion takes place with the extracted features, not the raw images, so there is no need to store or transmit large volumes of video data. The design is intended to be completely scalable to systems with larger numbers of cameras, with all image processing taking place on site. The fact that image processing is automated is also essential to the scalability of the system; it is a basic assumption of the system design that no manual intervention is allowed in the image and feature analysis software, and manual review is used only to assess the performance and quality of results.
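The full design concept is presented in Chapter 6. As a rough illustration of the data flow just described (per-camera feature extraction running in parallel, with fusion operating on extracted features rather than raw frames), the following sketch shows one way such a pipeline could be organized; all class and function names are hypothetical and are not taken from the Site Observer software.

```python
# Illustrative sketch of the described architecture (hypothetical names):
# each camera's stream is reduced on site to timestamped 2-D features, and
# only those features -- not raw video -- are passed to a fusion stage.

from dataclasses import dataclass
from typing import Dict, Iterable, List, Tuple

@dataclass
class Feature:
    camera_id: int
    t: float          # synchronized timestamp (s)
    u: float          # image coordinates (pixels)
    v: float

def detect_points(frame) -> List[Tuple[float, float]]:
    """Placeholder for a per-frame feature detector (e.g., corner detection)."""
    return []

def extract_features(camera_id: int,
                     frames: Iterable[Tuple[float, object]]) -> List[Feature]:
    """Per-camera stage: reduce each synchronized (timestamp, image) pair to a
    list of timestamped 2-D features; only these records are retained."""
    features: List[Feature] = []
    for t, frame in frames:
        for (u, v) in detect_points(frame):
            features.append(Feature(camera_id, t, u, v))
    return features

def fuse_by_time(per_camera: List[List[Feature]], dt: float = 0.05) -> Dict[int, List[Feature]]:
    """Fusion stage: group features from all cameras into common time bins so
    that multi-view consistency checks and 3-D localization can be applied."""
    bins: Dict[int, List[Feature]] = {}
    for features in per_camera:      # one feature list per camera, produced in parallel
        for f in features:
            bins.setdefault(round(f.t / dt), []).append(f)
    return bins
```

The point of this structure is that only compact feature records cross the camera boundary, which is what makes on-site processing and scaling to larger numbers of cameras practical.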

In terms of tracking performance, the most critical step is to uniquely isolate vehicles from each other and from the background; this is the main purpose of the multiple-camera approach, which uses consistency between features seen from the different camera perspectives. The multicamera view is well suited to this task because it provides multiple observations of the same vehicle kinematics and offers the best opportunity to minimize the effects of occlusions. Also important is that stray features caused by glare and reflections cannot pass a basic validity check of feasible vehicle motions and may be excluded automatically. The system design concept, based on fusion of features from different camera perspectives, is therefore inherently robust.

Camera locations are another important motivator for the multicamera approach. Because a stable and very high camera position normally is not possible, vehicle detection and tracking must be done while considering the full 3-D geometry of the intersection space; vehicle height and perspective effects must be accounted for, and this is one of the challenges for the postprocessing of extracted features.

The resulting site-based video system is referred to as the SHRP 2 Site Observer. In this report, it is applied to trajectory capture and conflict metric analysis at one particular intersection. The pilot study collected data at a four-way signalized intersection in Ann Arbor, Michigan. Installation was carried out in fall 2009, and data collection took place between January and March 2010. During this time, weather conditions were variable and included snow cover and bright sunshine. Data collection was sufficiently extensive to enable the sample analysis of conflict metrics presented in this report. However, the main purpose was proof of concept and validation of performance, not the creation of a data set for extensive analysis of crashes and conflicts.

This report is structured as follows. Research questions relating to SHRP 2 Safety are reviewed in Chapter 2, and the applicability of the site-based video system to address such questions is considered. Chapter 3 presents a survey of previous systems developed for vehicle tracking using video camera technology, including commercial and research systems. As part of this survey, the major challenges of video-based tracking are considered, in particular the technical challenges that stand in the way of a robust automated vehicle tracking system. Chapter 4 considers a broad range of system requirements for the Site Observer, while Chapter 5 provides more detail on the conflict metrics proposed for use in the analysis of surrogates for vehicle crashes. The ideal accuracy requirements for the system are suggested by simulations, which are therefore appended to the general requirements. In Chapter 6 the full Site Observer design concept is proposed, and in Chapter 7 the intersection site chosen for the study and the system installation, including camera calibration, are described. Chapter 8 sets out the main image processing techniques used in the project, including the features extracted and stored in the single-camera subsystem. Chapter 9 describes the critical steps of data fusion, whereby vehicle trajectories are extracted from the feature databases and vehicles are localized in the 3-D world. Trajectory refinement and estimation of vehicle velocities and accelerations are described in Chapter 10, and in Chapter 11 a number of validation tests are described and results presented. Chapter 12 offers a sample analysis of conflicts, and Chapter 13 concludes the report with a summary and conclusions from the study.
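Camera calibration and data fusion are described in Chapters 7 through 9. To illustrate, under simplified assumptions, why vehicle height and perspective must be accounted for, the sketch below back-projects an image point from a calibrated pinhole camera onto a horizontal plane at an assumed feature height; the same pixel maps to different ground positions depending on that height. The camera model, numbers, and names are illustrative assumptions, not the report's method.

```python
# Illustrative sketch (assumed pinhole camera model, not the report's method):
# back-project a pixel to a viewing ray and intersect it with the horizontal
# plane z = h, showing how an assumed feature height enters 3-D localization.

import numpy as np

def backproject_to_plane(K: np.ndarray, R: np.ndarray, c: np.ndarray,
                         pixel: np.ndarray, h: float = 0.0) -> np.ndarray:
    """K: 3x3 intrinsics; R: 3x3 world-to-camera rotation; c: camera center
    in world coordinates (m); pixel: (u, v); h: assumed feature height (m).
    Returns the world point where the viewing ray meets the plane z = h."""
    uv1 = np.array([pixel[0], pixel[1], 1.0])
    ray = R.T @ np.linalg.inv(K) @ uv1      # ray direction in the world frame
    s = (h - c[2]) / ray[2]                 # scale so the ray reaches height h
    return c + s * ray

# Toy example: the same pixel interpreted at ground level vs. at 1.5 m
# (roughly roof height) gives different ground positions, which is why
# perspective and vehicle height must be handled in postprocessing.
K = np.array([[1200.0, 0.0, 960.0],
              [0.0, 1200.0, 540.0],
              [0.0, 0.0, 1.0]])
R = np.diag([1.0, -1.0, -1.0])              # toy extrinsics: camera looking straight down
c = np.array([0.0, 0.0, 8.0])               # camera mounted 8 m above the ground
print(backproject_to_plane(K, R, c, np.array([1100.0, 700.0]), h=0.0))
print(backproject_to_plane(K, R, c, np.array([1100.0, 700.0]), h=1.5))
```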


TRB’s second Strategic Highway Research Program (SHRP 2) Report S2-S09-RW-1: Site-Based Video System Design and Development documents the development of a Site Observer, a prototype system capable of capturing vehicle movements through intersections by using a site-based video imaging system.

The Site Observer system provides a means of viewing crashes and near crashes, as well as a basis for developing objective measures of intersection conflicts. In addition, the system can be used to collect before-and-after data when design or operational changes are made at intersections. Furthermore, it yields detailed and searchable data that can help determine exposure measures.

This report is available in electronic format only.
