
Site-Based Video System Design and Development (2012)

Chapter 3 - Existing Video-Based Vehicle Monitoring Systems


Many video-based systems currently exist for monitoring or tracking vehicles; however, most have limitations that the current system development seeks to overcome. Commercial systems typically are aimed at vehicle detection or offer only a basic level of tracking that the current project seeks to improve. Research-level systems typically are not implementable systems; rather, they provide tools and algorithms to capture and convert video data to trajectory estimates with little regard for long-term implementation or automation. Some previous attempts at automated video capture are reviewed here.

Commercial Systems

The research team reviewed commercial and technical literature on products that use video to detect vehicle activity at highway intersections. The team focused on five such systems; other products were found, but none exceeded the current or likely future capabilities of the five selected systems. When possible, the team contacted the companies concerned and received additional technical background.

Autoscope

The Autoscope product line is made by Image Sensing Systems Inc. (Image Sensing 2008), located in St. Paul, Minnesota. Autoscope includes a wide product line sold and distributed by Econolite. This line includes two camera models and various hardware racks for data recording and processing. The company offers a Software Developer's Kit for interfacing with specific or custom applications.

Autoscope systems are widely implemented in the intelligent transportation systems field, with installations in more than 55 countries. These systems work with many signal controllers on the market, including SCATS, SCOOT, NEMA, Type 170/179, and others. Autoscope references a number of system features in its documentation.

Camera and Communication

• Video detection using algorithms to simplify installation, setup, and ease of use;
• Streaming digital video via Ethernet;
• Dual-core processor for image processing;
• MPEG-4 digital streaming video output for review;
• Web browser communications for Internet access;
• Password protection for access control on shared networks;
• Camera integrated with machine vision processor in a single unit; and
• ClearVision technology (hydrophilic coating and faceplate heater) to ensure a clean lens for high-quality video.

Data and Measurements Pertinent to Intersections

• General traffic management: Stopline and approach demand, turning movements, degree of saturation, queue measurement, speed and volume estimation.
• Occupancy: Autoscope detects whether a vehicle is present in predefined zones on the highway. The zones are polygons that project to rectangles in the ground plane but are sensitive to height variations in the target vehicles; provided the camera is sufficiently well aligned with the lane, this effect is not sufficient to invalidate the occupancy estimation.
• Incidents: Autoscope detects stopped vehicles in prohibited areas and red-light runners.
• Nonintersection applications: Oversized vehicles exceeding safe speeds, vehicles driving the wrong way, work zone safety, railroad crossing safety, bus lane enforcement.

The system is flexible and capable within the limits of event detection but has limited capability for the extraction of motion variables (volumes and speeds are estimated). No tracking capability sufficient to determine conflict metrics is available, and detailed trajectories cannot be reconstructed.
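To make the occupancy idea concrete, the sketch below shows how zone occupancy of this general kind can be computed with off-the-shelf tools. This is not Autoscope's proprietary algorithm: the Gaussian-mixture background subtractor, the zone polygon, the 20% occupancy threshold, and the input file name are all illustrative assumptions.

```python
import cv2
import numpy as np

# One hypothetical detection zone, as a polygon in image pixels.
ZONES = [np.array([[100, 400], [220, 400], [230, 470], [90, 470]], np.int32)]
OCCUPANCY_THRESHOLD = 0.2   # fraction of zone pixels that must be foreground

# Generic Gaussian-mixture background subtractor from OpenCV.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=25)

def zone_occupancy(frame):
    """Return one boolean per zone: is a vehicle (foreground) present?"""
    fg = subtractor.apply(frame)
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels (value 127)
    fg = cv2.medianBlur(fg, 5)                              # suppress isolated noise
    flags = []
    for zone in ZONES:
        zone_mask = np.zeros(fg.shape, np.uint8)
        cv2.fillPoly(zone_mask, [zone], 255)
        covered = cv2.countNonZero(cv2.bitwise_and(fg, zone_mask))
        flags.append(covered / cv2.countNonZero(zone_mask) > OCCUPANCY_THRESHOLD)
    return flags

cap = cv2.VideoCapture("intersection.mp4")   # placeholder input file
ok, frame = cap.read()
while ok:
    print(zone_occupancy(frame))
    ok, frame = cap.read()
```

As the text notes, output of this kind supports event detection (presence, counts, red-light running) but yields no continuous trajectory.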

EagleVision

The EagleVision video detection system is made by Siemens of Atlanta, Georgia (Siemens USA 2008) and has vehicle detection capabilities similar to those of Autoscope. The system has the advantage of simple setup and minimal calibration; installers mount the camera, aim the lens at the general target area, and connect the camera to the cabinet through a single cable. Fine-tuning of the detection area may be done remotely via a computer interface. Other features of the system are support for as many as eight detector zones and outputs, IP communications via a single CAT-5 cable, color streaming video with a GUI, and software running on a Linux operating system. The system provides another good illustration of the state of the art in current site-based video technology for traffic applications; again, no tracking capability is included.

Vantage

The Vantage video detection system is made by Iteris in Santa Ana, California (Iteris 2008). Iteris has been using image processing to detect the presence of vehicles at intersections since 1993. The systems are mostly aimed at replacing inductive loop sensors. Worldwide, Vantage and Autoscope are the market leaders in number of deployments of these "virtual loop" systems. The Iteris system shares the attributes just mentioned and has particular enhancements in terms of the detection of pedestrians and bicycles, as well as incident detection, ramp metering, and highway monitoring. Again, various aggregated traffic and congestion parameters can be collected and transmitted to a traffic management center.

Iteris offers an option to use a wireless IP camera with the Vantage system, which allows for simple cable-free installation and remote data retrieval. This option uses a license-free 2.4-GHz band to transmit live video from the CCTV camera to the controller cabinet. The wireless transmitter is integrated into the camera and has a 3-inch antenna.

The Iteris Vantage system can record counts, speeds, and three types of vehicle classifications. These data are developed by drawing zones in each lane of travel. A single zone drawn to 15 ft in length can store class, speed, count, and occupancy data by adjusting interval lengths. Each camera can be assigned as many as 24 zones with 8 count zones. The camera must be mounted to see oncoming or outgoing traffic, and the data can be collected locally or downloaded via telephone line, wireless, or cable. Again, the detection and estimation are spatially referenced within the 2-D space of the camera image.

Traficon

Traficon, based in Wevelgem, Belgium, was founded in 1992 (Traficon USA 2008). According to its website, "more than 200 tunnels are already equipped with a Traficon Automatic Incident Detection (AID) system which equates to 40,000 detectors operating worldwide."

Video data are fed into a detection unit, and at installation a number of detection zones are configured. When a vehicle crosses a predefined line or zone, vehicle detection is registered automatically (Figure 3.1). Algorithms provide different types of traffic information, such as traffic data for statistical processing, incident-related data, or presence data. The communication board handles the compression of images and the transmission of data, alarms, or images.

Figure 3.1. Traficon detection zones. Image provided with permission from Traficon, courtesy of Control Technologies, Inc.

VideoTrak

VideoTrak by Quixote, based in Palmetto, Florida, is another video vehicle detection system for controlling intersections, monitoring freeways and tollways, and collecting traffic data (VideoTrak 2008). The VideoTrak system can detect vehicles, motorcycles, bicycles, and rail vehicles and uses a dedicated video camera design.

VideoTrak can be operated in two modes: (1) intersection operation and (2) highway management. During intersection operation, VideoTrak tracks targets (vehicles) down a tracking strip, normally associated with a roadway lane, and triggers a DC logic call to an intersection traffic controller, such as the Peek 3000E Controller, when a target's lead pixel enters a designated detection zone. VideoTrak's use of track strips is somewhat novel: occupancy within the strip gives a semi-continuous 1-D track of the vehicle as its front moves along the strip (zone illustrations can be found at www.ustraffic.net/datasheets/videotrakdatasheet.pdf). Each field of view can support 32 detection zones divided in any manner across the track strips. A VideoTrak 905 unit can support four tracking/detecting cameras and one surveillance camera, whereas a VideoTrak 910 unit can support as many as eight tracking/detecting cameras and two surveillance cameras. According to VideoTrak, camera location is highly critical to proper detection, with the best location centered over the approaching lanes approximately 10 m above the roadway surface.
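The track-strip idea reduces a lane to a one-dimensional occupancy profile. The sketch below is a minimal rendering of that concept, not VideoTrak's implementation; the foreground mask is assumed to come from a background subtractor such as the one sketched earlier, and the strip sample points are a hypothetical calibration input.

```python
import numpy as np

def lead_position(fg_mask, strip_points):
    """Index of the farthest-advanced occupied sample along a track strip.

    strip_points: (N, 2) array of (x, y) pixel coordinates, ordered from
    the strip entry toward the stop line. Returns None if the strip is empty.
    """
    for i in range(len(strip_points) - 1, -1, -1):
        x, y = strip_points[i]
        if fg_mask[y, x] > 0:       # foreground at this sample point
            return i                # "lead pixel" closest to the stop line
    return None
```

Scanning the profile frame by frame gives the semi-continuous 1-D track described above; a detection-zone call is then just a test of whether the lead position has passed a designated index.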

Summary

The preceding commercial systems share many similarities in their purpose, capabilities, and methods, although there are also substantial differences in terms of product definition and refinement. The common features of interest are:

• Information is extracted via occupancy detection within the 2-D space of the camera; such information largely can be derived using robust and simple background subtraction algorithms.
• Data output is suited to the traffic management application, rather than safety research: vehicle positioning is based on occupancy (and thus of limited resolution), and additional information is inferred by occupancy detection within zones to provide estimates of speed, volume, gaps, vehicle size, and so forth.
• Data sharing between camera subsystems is limited or nonexistent.
• Camera positioning is highly significant to detection performance, with preference given to alignment of the camera with the traffic lanes and maintaining a sufficiently high ratio between camera height and maximum detection range.

The last point is particularly important. It is only through careful choice of camera position and orientation (pose) that a 2-D sensor can be mapped into the 2-D world of ground vehicle motions on the road surface. When cameras are poorly aligned or viewing angles are low, the vertical geometry of the vehicles has a substantial effect on the viewed image. Restricting information processing to the 2-D plane of the single-camera image is a constraint within all the above systems that fundamentally limits their capacity to be developed into vehicle tracking systems for capturing continuous data on vehicle trajectories. Recent work (Kanhere et al. 2010, 2011) shows that a limited degree of feature tracking, together with detection of edges and frontal area features, can enhance the capabilities of these types of commercial video systems and reduce sensitivity to camera mounting height. However, because none of these systems addresses the construction of detailed trajectories, the research team considered the design and performance of existing research-based systems specifically designed for the tracking application.

Research-Based Systems

A number of research groups have used video processing techniques for tracking applications. In almost all cases, the data collection and analysis system has been developed and used by the same research team that processed the resulting research data. Data collection normally is conducted on a limited scale, and there is a shortage of systems designed for long-term installation and larger scale deployment. The first two systems, SAVME (System for Assessment of the Vehicle Motion Environment) and NGSIM (Next Generation SIMulation), were developed by groups participating in this research project.

SAVME

The SAVME project (Ervin et al. 2000) was led by the University of Michigan Transportation Research Institute (UMTRI) and included two subcontractors, ERIM International and Nonlinear Dynamics. Data collections were conducted in 1996 and 1999, generating more than 30,000 individual vehicle trajectories. The hardware and software were delivered to the Department of Transportation and were enhanced by NHTSA starting in 2002. Additional data collections were conducted by NHTSA between 2002 and 2004. SAVME consisted of three subsystems:

1. Data collection, consisting of cameras mounted on towers and computer equipment for collecting and storing video imagery.
2. Trackfile production, to process video imagery and produce trackfiles containing trajectories for the vehicles in the imagery.
3. Trackfile analysis, to process the trackfiles, store the track data in a relational database, and provide tools for access and analysis of the track data.

The initial installation covered 600 feet of a five-lane urban roadway, which included a small intersection at one end of the region. There were two cameras on 100-foot towers spaced 200 feet apart and 100 feet from the center of the road. The use of high towers aided the image processing but made the system intrusive and less flexible. Figure 3.2 shows a display of tracking results. Each vehicle has a red cross on it marking the tracking point, and the high camera mounting points allow for simple 2-D to 2-D transformations between image and ground plane, with vehicle height having little influence on the results.

Validation results showed that spatial accuracies typically were within 0.6 m, and velocity components typically were within 0.6 m/s of the true values. The collected database was used to explore a number of common driving scenarios, including flying passes, left turns across oncoming traffic, emerging from a signed intersection, queue formation and dispersal, and braking propagation along a vehicle string. Results include X-Y trajectories and motion time histories for individual vehicles and vehicle clusters. The project was able to demonstrate the general feasibility of obtaining high-quality kinematic data from a site-based video system and the power of the resulting data analyses. On the other hand, the project was limited in a number of key respects:

• Manual intervention was needed to define and correct candidate trajectories, requiring at least 10 h of operator time for each hour of video recorded.
• The methods were not scalable, in the sense that a video archive was required as an intermediate step, and for large-scale implementation there would be a massive expansion in the amount of video data recorded.
• To reduce perspective effects, and thus deterioration of location information, the cameras were mounted on high and intrusive towers, which greatly affects the feasibility and flexibility of installation.

Figure 3.2. SAVME images and tracking display.
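The "2-D to 2-D transformation between image and ground plane" mentioned above is a plane-to-plane homography, which can be estimated from four surveyed reference points. The sketch below shows the standard construction; the point coordinates are invented, and the mapping is exact only for points on the road surface, which is why vehicle height and low camera angles degrade it.

```python
import cv2
import numpy as np

# Four reference points: ground-plane coordinates (meters) and the
# corresponding image pixels. All values here are illustrative.
ground_pts = np.float32([[0, 0], [30, 0], [30, 10], [0, 10]])
image_pts = np.float32([[410, 620], [1150, 640], [1020, 300], [350, 310]])

# Homography mapping image coordinates onto the road plane.
H = cv2.getPerspectiveTransform(image_pts, ground_pts)

def image_to_ground(px, py):
    """Map one tracked image point to ground-plane meters."""
    pt = np.float32([[[px, py]]])            # shape (1, 1, 2), as OpenCV expects
    return cv2.perspectiveTransform(pt, H)[0, 0]

print(image_to_ground(700, 450))             # -> [x_m, y_m] on the road plane
```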

NGSIM

The NGSIM program was instigated by FHWA and industry leaders who developed several commercial traffic simulation tools. The objective of the NGSIM program was to provide real-world data for the verification, calibration, and validation of traffic simulation models. A system was developed, taking video inputs from multiple cameras installed on a high building near the target road and generating long trajectories of individual vehicles with a relatively low level of user input. The source code of the software and the resulting trajectories are available to the public (NGSIM 2008).

Vehicle detection and tracking used a model-based algorithm (Kim and Malik 2003). The procedure is to fit wireframe models to horizontal and vertical line segments detected from the image (Figures 3.3 and 3.4). According to Kim et al. (2005), the detection rate is greater than 88%, with a false-positive rate of 1%. The detection algorithm was designed not just to maximize the detection rate but also to focus on finding the positions and dimensions of the detected vehicles as accurately as possible; the wireframe models perform the step of mapping the 2-D camera information back into the 3-D domain by correcting for perspective effects.

Tracking is based on template matching of the whole vehicle. An image patch for the vehicle is stored and matched in successive frames. Because perspective angles change gradually, the feature image also is modified gradually. The system operated offline and with user assistance; a human operator monitored the detection zone to correct any detection failures.

The first data set was collected and processed at the Berkeley Highway Laboratory. The overall video surveillance system consisted of eight digital video cameras with overlapping fields of view on the roof of a 30-story building overlooking a section of the I-80 freeway in the city of Emeryville, California (in the San Francisco Bay Area). From a set of 30-min video clips, a prototype data set of 4,733 vehicle trajectories over a length of 2,950 ft (approximately 1 km) was collected. As reported by Kim et al. (2005), the accuracy of the data set was estimated at approximately 2 ft (60 cm) across the freeway and 4 ft (120 cm) along the freeway, similar to the SAVME system data set.

Like SAVME, the NGSIM system's design is focused on extracting high-quality vehicle trajectories. Both are user assisted to ensure a 100% detection rate. NGSIM handles the effects of shadow through the 3-D model templates and is conceptually superior to methods based simply on background subtraction and centroid location. However, the template matching works only when the perspective angle of the vehicle changes gradually and when shapes are predictable. A fully automated system requires a method that is not so tied to predictable vehicle shapes, advantageous camera angles, or manual intervention.

Figure 3.3. The wireframe models used by the NGSIM system to detect a passenger vehicle (left) and a container truck (right).

Figure 3.4. Detected line segments (left) and the detection result (right).
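The whole-vehicle template matching described above can be sketched as follows. This is a generic normalized cross-correlation step, not the NGSIM code; the search margin and the frame format (grayscale arrays) are assumptions.

```python
import cv2

def track_by_template(prev_gray, cur_gray, bbox, margin=20):
    """Re-locate a vehicle patch in the next frame by normalized
    cross-correlation, searching only a margin around its last position.

    bbox: (x, y, w, h) of the vehicle in prev_gray (grayscale frames).
    Returns the updated bbox and the match score.
    """
    x, y, w, h = bbox
    template = prev_gray[y:y + h, x:x + w]

    # Clip the search window to the frame.
    x0, y0 = max(x - margin, 0), max(y - margin, 0)
    x1 = min(x + w + margin, cur_gray.shape[1])
    y1 = min(y + h + margin, cur_gray.shape[0])
    window = cur_gray[y0:y1, x0:x1]

    scores = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, best, _, loc = cv2.minMaxLoc(scores)     # best match location in window
    return (x0 + loc[0], y0 + loc[1], w, h), best
```

In a system like the one described, the stored patch would also be blended slowly with each new match so that the template follows the gradual change in perspective.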

IVSS Study

A recent study (Smith et al. 2009) conducted in Sweden provides a serious benchmark for the state of the art in using site-based video for vehicle tracking and safety evaluation. The study looked at intersection safety from a number of perspectives, including site-based tracking, in-vehicle data collection (small scale, with a single instrumented vehicle), and multidriver simulators. The video image analysis part of this project is similar in scope and scale to SAVME, although it uses modern equipment and employs greater resources in terms of the amount of video analyzed and the automation of the algorithms. The study involved multiple partners: Autoliv, Chalmers Technical University, Linköping University (Computer Vision Laboratory), SAAB Automobile AB, Volvo Car Corporation AB, and the Swedish Road Administration (Vägverket).

In the IVSS study, the video tracking component is the most relevant feature; it was used at two intersections near Gothenburg, Sweden. The first application was at a three-way intersection with a speed limit of 50 km/h in an industrial area: 626 h of traffic were recorded on video. In the second application, 95 h of video were recorded at a 70-km/h, four-way intersection. A single camera was installed at the first intersection (using a tall building), whereas four cameras were installed at the second, with two cameras on each of two lighting poles, which gave high vantage points and unobstructed views of the intersection. In both cases video data were recorded and processed offline at a supercomputer facility. The similarities with SAVME and NGSIM are clear, but this project sought to address many of the issues raised in the current project, particularly in terms of developing a system that can perform accurate and automated tracking at intersections.

The four-camera installation operated only between the hours of 09:00 and 15:00 on days with few shadows, from March 2007 to May 2008. Image frame capture was triggered using a GPS receiver so that trajectories could be stitched together after processing. Interestingly, the project also tested the use of laser radar, but the resulting data were not used. According to the authors, "analysis of the distance/angle data proved to be computationally intensive," from which the research team inferred that the large and complex data sets from laser radar were no easier to process than the video.

Video image processing was similar to the NGSIM approach, using background subtraction and box-type geometric model estimation, including shadow estimation. To reduce shadow effects, the data collection was limited to the middle of the day and mainly to overcast days. Because the primary aim of the study was data capture rather than system development, these restrictions seem entirely reasonable to the current research team. In terms of camera location at the four-way intersection, preference was given to the use of very tall light poles, although this created problems with cameras swaying in the wind. The multicamera approach is similar to that used by SAVME, which had two cameras for the purpose of increased coverage rather than sensor redundancy. Trajectories were estimated at the single-camera level and joined afterward, and it does not appear that overlapping regions were used to improve the underlying detection or vehicle position estimation.

In their report, the authors note substantial problems in making reliable and automated trajectory estimations, especially those arising from occlusion (and thus broken trajectories), tracking of fake objects (e.g., fleeting shadows), and transfer of track estimation from one object to another (e.g., when a small vehicle drives close to a larger vehicle). As with NGSIM and SAVME, identification of vehicles was based on background subtraction and box estimation. The algorithms developed included automated repair mechanisms to join broken tracks, as well as additional algorithms to apply geometric corrections. The latter were found to be needed to remove an effect by which the estimated vehicle position is shifted systematically toward the camera, an effect that was site specific. The study amply demonstrates the problems of automated trajectory capture, even when the aim is not to develop a fully deployable system.
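The IVSS report describes "automated repair mechanisms to join broken tracks" without giving the algorithm; a common approach is to gate candidate joins by time gap and by a constant-velocity position prediction, as in the sketch below. The thresholds are invented for illustration and are not taken from the study.

```python
import numpy as np

def can_join(track_a, track_b, max_gap_s=1.0, max_dist_m=5.0):
    """Decide whether track_b plausibly continues track_a.

    Tracks are lists of (t, x, y) samples in seconds and meters.
    Thresholds are illustrative, not taken from the IVSS study.
    """
    if len(track_a) < 2 or not track_b:
        return False
    t0, x0, y0 = track_a[-1]
    t1, x1, y1 = track_b[0]
    gap = t1 - t0
    if not 0.0 < gap <= max_gap_s:
        return False
    # Constant-velocity prediction from the tail of the first track.
    tp, xp, yp = track_a[-2]
    vx, vy = (x0 - xp) / (t0 - tp), (y0 - yp) / (t0 - tp)
    predicted = np.array([x0 + vx * gap, y0 + vy * gap])
    return float(np.linalg.norm(predicted - np.array([x1, y1]))) <= max_dist_m
```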

The three studies described indicate what has been achieved in relatively large-scale research projects and how tracking system demands go beyond the standard capabilities of commercial video systems used in traffic management. Certainly, other researchers have conducted similar work on a smaller scale (e.g., Parkhurst 2006), but there remain two especially challenging components to the task undertaken in this project: developing an installable and scalable hardware system, and using an architecture and algorithms that make automated image analysis feasible and tolerant of extraneous effects such as reflections, shadows, and occlusions. To understand these effects in a more general way, we turn attention briefly to the wider area of object tracking.

Object Tracking Research

Tracking has been an area of interest in the computer vision community for many years, and technical advances and cost decreases in digital video cameras have boosted recent interest. Kim (2008) summarized the basic video image processing techniques commonly used to track vehicles, namely, background subtraction and corner feature tracking. It has been mentioned that background subtraction is commonly used in commercial systems. The underlying method is simple: some type of averaging over time is performed for each pixel so that an estimate of the background scene is obtained; fixed objects are retained and transitory objects are removed. Then, comparing any image to this background, regions of significant difference (where pixel brightness, the so-called grayscale value, differs by more than some threshold) can be identified as containing candidate vehicles. Details of the averaging method and the selection of thresholds differ between implementations. As mentioned by Kim (2008), tracking the resulting foreground regions is far from simple and is prone to error, especially when there are occlusions. By contrast, corner features are highly localized and tend to persist over many frames, and thus are more suitable for tracking. A corner feature is a small region of an image where the spatial variation of grayscale value is large and where there is no single preferred direction for the gradient. If one direction predominates, an edge is obtained; edge features can also be useful for tracking, but they are not as well localized as corner features, and thus are more open to ambiguity when tracking.

Yilmaz et al. (2006) published a comprehensive review of object tracking research and included traffic monitoring in their list of six important object tracking tasks. They list eight issues that add complexity to the object tracking task; all eight apply to vehicle tracking to some extent. The research team augmented their list with its brief interpretation of each issue's relevance to vehicle tracking:

• Loss of information caused by projection of the 3-D world onto a 2-D image: can be a significant problem depending on the camera angle and camera mounting height;
• Noise in images: less significant for vehicle tracking than for other applications;
• Complex object motion: not a major problem for vehicle tracking because vehicle motions are limited to a 2-D surface and orientation normally is aligned with the vehicle velocity vector;
• Nonrigid or articulated nature of objects: a limited problem, associated with vehicles towing trailers;
• Partial and full object occlusions: a significant problem for vehicles at intersections;
• Complex object shapes: a moderate problem because some vehicles (e.g., mobile cranes) have complex shapes;
• Scene illumination changes: a significant problem for background subtraction, both on the long time scale (night and day) and the short time scale (clouds and headlights); and
• Real-time processing requirements: real-time processing is desirable but not absolutely required.

Object representation is an important aspect, especially when the pixelated images are matched to object models. Two approaches are commonly used: primitive geometric shapes, and object silhouette and contour. Both approaches are relevant to vehicle tracking. Four approaches to appearance representation are noted: probability densities of object appearance, templates, active appearance models, and multiview appearance models. All four can be considered for vehicle tracking. However, vehicles can have complex shapes, and it seems best to avoid the problem of matching appearances if possible.

Another approach is to track image features rather than appearance. Four types of image feature are commonly used: color, edges, optical flow, and texture. Again, all are potentially relevant to vehicle tracking. Many tracking algorithms use combinations of features.

Object tracking is divided into three steps: object detection, frame-to-frame tracking, and behavior recognition. For object detection, four approaches are presented: point detectors, background subtraction, segmentation, and supervised learning. The detection of points of interest is a general approach that does not depend on much knowledge about what is being viewed, and there are algorithms that are viewpoint and intensity invariant. However, when points have been detected, the problem of grouping them into vehicles remains. The background subtraction approach is commonly used for vehicle tracking because the background typically is constant, whereas the vehicles typically are moving. Although not much knowledge about what is being viewed would seem to be required, in fact it is necessary to use such knowledge about the background and the vehicles to obtain the desired level of performance with this approach.

The segmentation approach detects objects by their visual characteristics in individual images, rather than depending on characteristics related across multiple images. This approach works well with sparse traffic but becomes very challenging in dense traffic, where occlusions are common. In the supervised learning approach, a learning algorithm is presented with training data and learns to detect the objects automatically. This "black-box" approach again depends on appearance, though without any explicit model; in complex environments, it is unlikely to be successful.

For object tracking, three approaches are discussed: point tracking, kernel tracking, and silhouette tracking. Point tracking is not only relevant to point objects; there are algorithms that can cluster and segment sets of points associated with an object based on their motion and a rigid-body assumption. Kernel tracking can use a variety of appearance representations, but a key factor is that the kernel is some feature that can be tracked consistently over multiple frames. The motion estimate is then based on the motion of the kernel, which leaves the problem of determining the motion of the object given the motion of the kernel. Silhouette tracking tracks the outline of the object, which can change over time, and is a common approach for vehicle tracking. The key difference between kernel tracking and silhouette tracking is that in silhouette tracking the complete region of the object in the image is tracked. This generally is desirable but can be a problem under occlusion. Again, once the object has been tracked, the problem of determining the motion of the object remains.

The survey paper (Yilmaz et al. 2006) identifies occlusion as one of the major challenges in object tracking and describes a variety of approaches that have been used to address the problem. It is clear that although much effort has been expended to develop principled approaches to detection and tracking, the approaches for dealing with occlusion are much more ad hoc and have been developed on a case-by-case basis by using the characteristics of the particular problem at hand.

The survey also identifies multicamera tracking as an important approach. As with occlusion, a variety of methods are described, but they are all basically ad hoc and use the specific characteristics of the problem at hand. Area coverage and depth estimation are the primary benefits mentioned for multicamera approaches, although tolerance of occlusions also is clearly of interest in the current application.

Addressing future directions in object tracking, the survey notes that the assumptions typically used to make the tracking problem tractable are violated in many realistic scenarios and thus limit the usefulness of trackers in many applications. Applying contextual information is a promising approach for addressing the tractability problem, something that certainly applies to the vehicle tracking problem.
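To close, the point-detection and frame-to-frame tracking steps discussed above can be illustrated with standard tools: Shi-Tomasi corners (one example of the interest-point detectors in question) followed by pyramidal Lucas-Kanade optical flow. The parameters are illustrative, and the grouping of tracked points into vehicles, as the text stresses, remains a separate problem.

```python
import cv2
import numpy as np

LK_PARAMS = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def track_corners(prev_gray, cur_gray):
    """Detect corner features in one frame and follow them to the next.

    Returns matched (old, new) point arrays; clustering the points into
    vehicles based on common motion is a separate step.
    """
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=7)
    if corners is None:
        return np.empty((0, 2)), np.empty((0, 2))
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray,
                                                  corners, None, **LK_PARAMS)
    ok = status.ravel() == 1        # keep only points tracked successfully
    return corners[ok].reshape(-1, 2), new_pts[ok].reshape(-1, 2)
```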


