
Naturalistic Driving Study: Technical Coordination and Quality Control (2014)

Chapter: Chapter 4 - Data Management and Processing

Suggested Citation: "Chapter 4 - Data Management and Processing." Transportation Research Board. 2014. Naturalistic Driving Study: Technical Coordination and Quality Control. Washington, DC: The National Academies Press. doi: 10.17226/22362.


Data Collection/Ingestion Process

Data transfer began when the site contractors inserted a vehicle's data drive into the solid-state data drive bay apparatus attached to their respective site server, which provided the interface for the data to be transferred from the SSD to the staging server. The data transfer and subsequent processing through ingestion were controlled by a workflow system designed and implemented in the Coordination Contractor's high-performance computing (HPC) data center.

Ingestion

Data ingestion involved the highly choreographed, workflow-driven movement of data from the vehicle to temporary residence on a site contractor's server, then transfer through a high-speed research data network to the Coordination Contractor, where it was processed (see Figure 4.1).

Figure 4.1. Data collection and ingestion workflow.

When data arrived at the Coordination Contractor's data center, the workflow engine generated a copy of the original encrypted data received from the site contractors for processing purposes. As soon as the "processing copy" was created and validated (as an exact copy), the original encrypted data file was sent to the Coordination Contractor's archival (magnetic tape) data storage. The processing copy resided on disk storage systems in its encrypted state until broken apart by the workflow agents for ingestion processing purposes. The workflow agents decrypted files and transformed the data into formats that could be loaded into the Coordination Contractor's video storage or database repository. As the workflow agents completed their tasks, they deleted the processing copies. At the end of the workflow processing steps, the original data received from the site contractors were stored in a tape archive; the finished files are available for analysis on the Coordination Contractor's video storage and database systems.

Archived data are safely stored and preserved in their original raw (encrypted) form for the duration of the lifetime of the data (i.e., for up to 30 years after the last participant left the study). Thus, if researchers or analysts for any reason need to return to the original, unaltered data to gain a deeper, truer understanding of any particular subset, the data in their original raw form will be available for such analyses.
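The "validated (as an exact copy)" step above is, in effect, a bit-for-bit comparison performed before the original is committed to tape. The report does not specify the mechanism, so the streamed SHA-256 comparison below is an assumption, offered only as a minimal sketch of how such a check is commonly implemented.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large trip files never load fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def validate_processing_copy(original: Path, copy: Path) -> bool:
    """Return True only if the processing copy is bit-for-bit identical to the original."""
    return sha256_of(original) == sha256_of(copy)

# Only after validation succeeds would the original go to the tape archive
# and the processing copy be handed to the ingestion workflow agents.
```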
Processing

At the point when data were ready to be analyzed by data reductionists or researchers, they were no longer encrypted. Instead, researchers and reductionists were assigned well-defined and strictly regulated roles that provided access to data based on their Active Directory (AD) domain credentials and their assigned AD group membership. The database and file servers implemented role-based security according to AD group membership, and group membership was granted (or removed) as needed by a select group of research scientists.

Data Protections

SHRP 2 data were protected from the moment they were collected through migration from vehicle to the final research repository. In addition, data were stored "as collected" in a modern peta-scale hierarchical storage management (HSM) system, where an archival copy was maintained in the HSM system's tape library. The first line of protection started on the DAS with a sophisticated data encryption process. Once data were transferred to the Coordination Contractor, decrypted, and ingested, they were protected by role-based security that limited a user's access to data based on their IRB approvals (in the case of access to personally identifying information) or on their need for access to data elements required to address research questions as guided by SHRP 2. Additionally, multiple copies of SHRP 2 data were maintained at separate facilities in case one facility were to suffer a disaster of any sort (note that the facilities were in the same locality, which is not the ideal arrangement).
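The role-based gating described in the Processing and Data Protections sections reduces to a membership test: a user's AD groups determine which classes of data they may touch. A minimal sketch follows, with hypothetical group names and data classes; the report states only that access followed AD group membership and IRB approvals, not the actual mapping.

```python
# Hypothetical group-to-data-element grants; PII access presumes IRB approval.
ROLE_GRANTS = {
    "NDS-Reductionists": {"video", "sensor"},
    "NDS-Researchers": {"sensor"},
    "NDS-PII-Approved": {"sensor", "video", "pii"},  # IRB-approved users only
}

def may_access(user_groups: set[str], data_element: str) -> bool:
    """A user may touch a data element if any of their groups grants it."""
    return any(data_element in ROLE_GRANTS.get(g, set()) for g in user_groups)

assert may_access({"NDS-Researchers"}, "sensor")
assert not may_access({"NDS-Researchers"}, "pii")
```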
Encryption

Data encryption relied on two encryption methodologies: the Advanced Encryption Standard (AES) and the Rivest-Shamir-Adleman (RSA) algorithm. To prevent the possibility of data decryption in a location other than the Coordination Contractor, the AES key was further encrypted with the public key of an RSA public/private key pair. RSA is an asymmetric encryption technique: a publicly available key was used for encryption; decryption required the paired private key, which was stored at the Coordination Contractor's location. A unique RSA public/private key pair was allocated to each DAS. The collected data (sensor and video) contained personally identifying information and were therefore necessarily guarded from exposure to users or hackers who were not IRB-approved to work with the data. Data security began at the point where data were collected and stored on the DAS hard drive.
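The two-layer scheme just described, bulk data under AES with the AES key wrapped by a per-DAS RSA public key, is the standard envelope (hybrid) encryption pattern. The sketch below illustrates it with the Python `cryptography` package; the key sizes, the use of AES-GCM, and the OAEP padding are assumptions, since the report does not specify those parameters.

```python
# Envelope encryption sketch: AES for the bulk data, RSA to wrap the AES key.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# One key pair per DAS; only the public half rides in the vehicle,
# the private half stays at the Coordination Contractor.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# On the DAS: encrypt the collected data with a fresh AES key, then wrap the key.
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b"sensor and video payload", None)
wrapped_key = public_key.encrypt(aes_key, OAEP)

# At the Coordination Contractor: unwrap with the private key, then decrypt.
recovered = AESGCM(private_key.decrypt(wrapped_key, OAEP)).decrypt(nonce, ciphertext, None)
assert recovered == b"sensor and video payload"
```

Because the private key never leaves the Coordination Contractor, a stolen data drive yields only ciphertext plus a wrapped key that cannot be opened elsewhere, which is the property the text describes.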
PID Decoding Process

In preparation for the SHRP 2 NDS, the Coordination Contractor assessed U.S. light-vehicle sales from model years 2000 to 2007. By working in conjunction with the Alliance of Automobile Manufacturers and the Association of International Automobile Manufacturers, the Coordination Contractor pursued relationships with the major OEMs to obtain controller area network (CAN) parameter IDs (PIDs). This effort served to enrich the database with additional data from the onboard vehicle network for high-volume models. The types of data made available included speed, wiper usage, brake actuation, accelerator position, turn signal usage, and steering data.

Considerable effort was expended in mapping the network data elements in the database for each unique year/make/model combination. This activity required great care to ensure that proprietary information related to the PIDs could not be extrapolated for inappropriate use. Mapping the additional data elements to the database allowed vehicle-specific packages to be installed on the solid-state drive during installation.

Data Quality Processes

With any study, it is imperative not only to continually monitor the data being collected but also to work to ensure that they are as high in quality as possible in terms of completeness and accuracy. These facets of data quality must be present to generate meaningful conclusions from the analyses. This section describes the quality processes applied to all study data, including driving sensor and video data collected via the DAS as well as a variety of nondriving data (e.g., participant demographics, driver functional assessments, vehicle characteristics and features, and post hoc crash analyses).

Sensor Data

Once data were ingested into the database, they underwent a standardization process and a subsequent battery of automated quality checks. These processes are described in more detail in this section. The processing of the data was aided by the development of a data dictionary table. This table provided the variable name, the expected units, a description of the variable, lower and upper limits, and the expected data availability rate for each variable.

The first step in the standardization process was to map the collected variables for each vehicle into the existing set of standard variables as outlined in the data dictionary. Because of the wide variety of vehicles present in the data set and the proprietary nature of some of the code used to standardize the data, such standardization could not be performed on the DAS in real time; DAS processing resources were deliberately minimized to limit size and power consumption. Consequently, samples of collected variable data for each vehicle were examined visually post hoc and assigned to appropriate standard variables. In some cases, there was a one-to-one match between collected data and the standard variable across the vehicle fleet, but this was not true in many other cases, especially those involving network variables. Once completed, these assignments were instantiated in the database.
The standardization process also included the definition of any necessary translation of the collected variable to comply with the units and/or categories available for the standard variables. For example, the network speed for a particular vehicle might have been collected in miles per hour, whereas the standard unit for network speed is kilometers per hour. A unit translation would therefore be necessary to convert the collected variable into the corresponding standard variable and make that value compatible with values from different vehicles. These translations were also instantiated in the database.
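As an illustration, the mph-to-km/h case in the text can be expressed as a lookup of conversion functions keyed by the collected variable. The variable names below are hypothetical; the actual mappings were curated per year/make/model and stored in the database.

```python
# (collected variable) -> (standard variable, conversion into standard units)
STANDARD_MAP = {
    "veh_speed_mph": ("network_speed_kph", lambda v: v * 1.609344),
    "veh_speed_kph": ("network_speed_kph", lambda v: v),  # already standard
}

def standardize(collected_name: str, value: float) -> tuple[str, float]:
    """Translate one collected sample into its standard variable and units."""
    standard_name, convert = STANDARD_MAP[collected_name]
    return standard_name, convert(value)

print(standardize("veh_speed_mph", 60.0))  # ('network_speed_kph', 96.56064)
```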
Automated quality checks were applied after standardization. They occurred at the file level, meaning that an independent set of checks was completed for each file collected. The checks for each variable were documented at the timestamp level so that segments of "good" data for a variable could be isolated from "bad" data within the same file. These checks were meant to flag any data that were out of expected bounds or were produced as a result of DAS malfunction. Note that not all of these checks were applicable to all standard variables; in some cases, a variable may only have been subjected to one or two of these checks as appropriate. The battery of automated checks is provided below (the bounds and spike checks are sketched in code at the end of this subsection):

• Not present indicates whether at least one data point was captured for a variable within a particular file. If a variable was not present for an entire file, no other checks for that variable were necessary.
• Bounds indicates whether the values recorded for a given variable were within the bounds defined in the relevant data dictionary. Boundary values (i.e., lower, upper, both) could be specified independently for each variable.
• Simple dependency indicates whether the dependent variable (i.e., the variable being checked) should be considered of questionable quality given that a "parent" variable had failed one or more of its quality checks. These comparisons were made on a timestamp-by-timestamp basis. Each simple dependency consisted of only one dependent and one independent variable, but more than one simple dependency could be applied to a single dependent variable. For example, one of the quality metrics for the processed accelerometer values considered whether the corresponding raw values exhibited good quality during the same time period.
• Complex dependency is similar to a simple dependency but with more complex conditions allowed. While a simple dependency was a function of the independent variable having "good" quality when the dependent variable was collected, a complex dependency could further refine what values of the independent variable indicated "good" quality for the dependent variable. Each complex dependency consisted of only one dependent and one independent variable, but multiple complex dependencies could be applied to a single dependent variable. For example, a check for any variable collected from the vehicle network modules required that the last reported status for that module indicated a "recording" status to output a good quality score.
• Duplicates indicates whether a particular variable had two entries in the collected data under the same timestamp. If that was the case, the data quality for the timestamp in which this occurred was considered "bad."
• Spike identification indicates whether a data point that was otherwise within the expected bounds for the variable should be considered spurious, typically the result of sensor noise. This particular check was used for longitudinal and lateral accelerations. The code examined preceding and following values around the suspected spike and assessed whether the overall pattern was feasible based on the expected physics of the scenario. Multiple metrics were used in this assessment, including the derivative of acceleration, the variance in the sample, and measures from basic principles of motion.

Results from the battery of checks were instantiated in the database for each file. The results were then aggregated by data drive to assess the overall quality of the data for each collected drive. Each of those aggregated sets of data quality profiles was examined by a data analyst. Problems that suggested systematic issues were further studied to determine whether corrective action was possible and the level at which it should occur (e.g., vehicle, database). Files without issues for further study, those for which corrective action had occurred, or those for which no corrective action was possible were then released for subsequent analyses.
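The sketch below illustrates two of the checks listed above at the per-timestamp level: bounds and a simplified spike screen based on the derivative of acceleration. The 0.1 s sample interval and the jerk threshold are assumptions for illustration; the study's actual spike logic combined several physics-based metrics, as described in the text.

```python
def check_bounds(values, lower=None, upper=None):
    """Per-timestamp bounds check: True marks a 'good' sample."""
    return [
        (lower is None or v >= lower) and (upper is None or v <= upper)
        for v in values
    ]

def check_spikes(accel, dt=0.1, max_jerk=50.0):
    """Flag samples whose implied jerk (m/s^3) is physically implausible.

    A one-metric simplification of the study's multi-metric spike check;
    dt and max_jerk are illustrative values, not values from the report.
    """
    good = [True] * len(accel)
    for i in range(1, len(accel)):
        if abs(accel[i] - accel[i - 1]) / dt > max_jerk:
            good[i] = False  # spike relative to the preceding sample
    return good
```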
Video Data

Part of these subsequent analyses entailed a manual review of images transmitted via Advanced Health Checks, discussed previously, with an eye toward identifying specific vehicles in need of camera adjustment or replacement. Table 4.1 summarizes the standards to which each camera view was held for the purposes of this quality assurance and control process.

This review was undertaken by a team of trained data reductionists under the auspices of a protocol that elicited a quality assessment for each of the four camera views: face, forward, hands, and rear. The quality assessment for each view was selected from one of four options, defined as follows:

• Good quality: Video is clear, viewable, and correctly aligned.
• Misaligned video: Video is misaligned from target (i.e., pointing in the wrong direction).
• Distorted: Video is available but not usable for research purposes.
• Not available: Video is unavailable.

Figure 4.2 presents an example of an Advanced Health Check image that received an assessment of good quality for all four camera views. In such a case, no further action would be required.

A variety of video errors, caused by misplaced or malfunctioning head units or rear cameras, was detected through quality checks of images associated with Advanced Health Checks. These included unavailability of one or more of the video views, misalignment of one or more views, and distortion of one or more views to the extent that meaningful analysis could not be conducted. In such cases, once confirmed by the Coordination Contractor data integrity coordinator, a referral would be made to the Coordination Contractor operations staff, who would review the images and subsequently issue the indicated maintenance request.

The process followed for the evaluation of images transmitted in conjunction with Advanced Health Checks was similar to the quality assurance protocol used to evaluate sample video files taken from the ingested data pool. The scope of the quality review undertaken for the sampled files extended beyond video quality assessment to include validation of network variables, including speed, throttle position, and radar accuracy. Despite this increase in complexity, both of these quality assurance reviews were part of the larger process of verification, documented in Figure 4.3, which ensured that all data collected were of optimal value and that all DASs installed in SHRP 2 vehicles were operating in an optimal fashion.
Table 4.1. Camera Views: Ideal Descriptions and Purposes

Face camera
Ideal: Complete, clear view of the driver's face, including eyes and mouth. Camera should be positioned to exclude views of backseat passengers.
Purpose: A clear view of the face facilitates eye-glance analysis and evaluation of distraction associated with secondary tasks of talking, eating, and singing.

Forward camera
Ideal: High-quality, color video of the forward roadway. Forward road and traffic, traffic lights, and cars in front should be visible, with roadway centered horizontally and with the horizon just above the center line.
Purpose: A clear view of the forward roadway facilitates evaluation of traffic density, visibility, road conditions, and time of day, as well as recognition of potential hazards posed by oncoming traffic and activities of drivers in surrounding vehicles.

Instrument panel camera
Ideal: High-quality video of the distance from the driver's door to the center console, featuring a complete view of both of the driver's hands and steering wheel, radio/CD player/cigarette lighter, and center console.
Purpose: A clear view of hands and center console facilitates analysis of distractions resulting from secondary tasks such as adjusting cabin temperature or radio, using a cell phone, and reaching for objects.

Rear camera
Ideal: High-quality video of the traveled roadway. Traveled roadway, following traffic, and traffic lights should be visible, with roadway centered horizontally and with horizon just above center vertically.
Purpose: A clear view of the traveled roadway facilitates analysis of traffic density and potential hazards posed by following traffic.

Figure 4.2. Example of good quality for all four video views (with participant's face hidden).

Figure 4.3. Process flow for quality assurance of video and sensor data: the Coordination Contractor receives camera images and trip files from the field and checks for quality problems; unusable images or trip files that require maintenance lead to a maintenance ticket and subsequent maintenance.
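The per-view assessment and referral logic described above can be summarized in a few lines. A sketch follows, assuming a simple per-trip record of the four views; the enum values mirror the four options defined in the text, and the referral rule reflects the confirmed-problem path of Figure 4.3.

```python
from enum import Enum

class ViewQuality(Enum):
    GOOD = "good quality"
    MISALIGNED = "misaligned video"
    DISTORTED = "distorted"
    NOT_AVAILABLE = "not available"

def needs_maintenance_referral(assessments: dict[str, ViewQuality]) -> bool:
    """Any confirmed non-good view triggers referral to operations staff."""
    return any(q is not ViewQuality.GOOD for q in assessments.values())

trip = {"face": ViewQuality.GOOD, "forward": ViewQuality.GOOD,
        "hands": ViewQuality.MISALIGNED, "rear": ViewQuality.GOOD}
assert needs_maintenance_referral(trip)
```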
Non-DAS Data

Assuring the quality of the time series data and video collected via the DAS was a central focus of the overall quality efforts, but considerable effort was also devoted to assuring the quality of the many non-DAS sources of data, as described below. Several approaches were used, including applying basic knowledge of the data when applicable (e.g., for height and weight). In the absence of such baseline knowledge, a statistical outliers approach was employed such that extreme values were distrusted and discarded, except when independent verification suggested otherwise. With this interquartile range (IQR) approach, any value ≤ Q1 - (1.5 × IQR) or ≥ Q3 + (1.5 × IQR) was considered an outlier, where Q1 = first quartile, Q3 = third quartile, and IQR = interquartile range, or (Q3 - Q1), for the particular variable distribution in question.
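The IQR screen translates directly into code. In the minimal sketch below, k = 1.5 reproduces the fence given above; per the rationale in Table 4.5, a wider fence (k = 3) was used for the grip strength measures.

```python
import statistics

def iqr_outliers(values, k=1.5):
    """Return values at or beyond Q1 - k*IQR or Q3 + k*IQR."""
    q1, _, q3 = statistics.quantiles(values, n=4)  # quartile cut points
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v <= lo or v >= hi]
```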
Participant Demographics

When appropriate and possible, demographic information was validated to ensure accuracy and quality. The key variables that were validated and verified were date of birth, age of licensure, gender, years driving, and miles driven in the most recent previous year. Validation techniques included general statistical testing, such as normal distributions, cross tabulations, and box-and-whisker plots, to identify outliers and anomalous values within the data collected. Validation efforts also applied general knowledge and logic, such as literature review and the minimum licensure age in the United States, as indicated in Table 4.2. When appropriate, requests were made to data collection sites to verify or update missing and anomalous data.

Participants with data entries that fell outside the identified ranges of a given variable were flagged for verification by the relevant site contractor. When participants could not be contacted for verification, a null value was assigned in place of the anomalous data. When participants could be contacted and the data for a given variable could be corrected or verified, amended data were inserted. Table 4.2 provides the test and rationale for age, age of licensure, years driven, miles driven in previous year, and date of birth.

Table 4.2. Participant Age, Licensure, and Years Driven Information Validation

• Number of years driving: flagged if greater than the age of the participant.
• Birth date: flagged if less than 16 years before today's date.
• Miles driven in previous year: flagged if greater than 150,000 miles a year.
• License age: flagged if less than 14 years old.
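Each row of Table 4.2 is a simple predicate over one participant record. A sketch follows, with hypothetical field names; the thresholds are those of the table.

```python
from datetime import date

def demographic_flags(age: int, license_age: int, years_driving: int,
                      miles_prev_year: int, birth_date: date) -> list[str]:
    """Apply the Table 4.2 screens; flagged records went back to the site
    contractor for verification (and were nulled if unverifiable)."""
    flags = []
    if years_driving > age:
        flags.append("years driving exceeds age")
    if (date.today() - birth_date).days < 16 * 365.25:
        flags.append("birth date implies age under 16")
    if miles_prev_year > 150_000:
        flags.append("implausible annual mileage")
    if license_age < 14:
        flags.append("licensure age under 14")
    return flags
```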
Driver Functional Assessments

Functional assessments, as previously referenced, were collected from each primary participant, typically at the outset of each one's participation. Questionnaire responses were reviewed for completeness and accuracy. Validation efforts focused on identifying outliers and anomalous data points and on standardizing units of measure. It should be noted that participants had the right to refuse to answer any particular question and to refuse or discontinue participation in any particular functional assessment, with neither reason nor penalty.

Attitude and Behavior Questionnaires

The completion rate for the attitude and behavior questionnaires was roughly 99% (see Table 4.3). The completion verification process included creating a database of the completed or attempted questionnaires for each participant. If a participant was missing one or more questionnaires, the data collection site was contacted and a request made that the participant complete the missing questionnaires. Similarly, if a participant omitted a large majority of questions within a specific questionnaire, the data collection site was requested to ask the participant to complete the questionnaire.
Table 4.3. Completed Assessment and Related Surveys for Participants in the Study at Least 1 Day (3,254 participants per survey)

• Driver Demographics Questionnaire (Appendix V): 3,244 completed (99.7%)
• Driving History Questionnaire (Appendix W): 3,245 completed (99.7%)
• Barkley's ADHD Quick Screen Questionnaire (Appendix D): 3,245 completed (99.7%)
• Driving Knowledge Questionnaire (Appendix F): 3,240 completed (99.6%)
• Medical Conditions and Medications Exit Survey (Appendix G): 2,658 completed (81.7%)
• Frequency of Risky Behavior Questionnaire (Appendix C): 3,241 completed (99.6%)
• Hand Strength Assessment: 3,239 completed (99.5%)
• Medical Conditions and Medications Survey (Appendix G): 3,243 completed (99.7%)
• Modified Manchester Driver Behavior Questionnaire (Appendix H): 3,237 completed (99.5%)
• Optec Assessment: 3,244 completed (99.7%)
• Perception of Risk Questionnaire (Appendix C): 3,236 completed (99.4%)
• Sensation Seeking Scale Questionnaire (Appendix E): 3,235 completed (99.4%)
• Sleep Questionnaire (Appendix B): 3,232 completed (99.3%)
Health- and Sleep-Related Questionnaires

The health- and sleep-related questionnaires included the Medical Conditions and Medications questionnaires and the Sleep Questionnaire. The Medical Conditions and Medications questionnaires (Appendix G) were primarily validated for anomalous data within three main variables: height, weight, and neck size. Other data quality efforts included the standardization of free-text answer entries. The sleep-related questionnaire was also validated for extreme and anomalous data. All of these questionnaires used statistical procedures to determine outliers.

Table 4.4 illustrates the statistical boundaries used in the validation and verification of self-reported participant medical and medication data. These data included height, weight, and neck size. This information was collected twice, using nearly identical Medical Conditions and Medications surveys: once at enrollment and once at the end of study participation. Data were verified using low and high extremes, and when appropriate, site contractors were contacted to verify.
Table 4.4. Participant Information Validation (Height, Weight, Neck Size)

• Height (in.): 54 to 84 (male); 48 to 78 (female).
• Weight (lb): 100 to 525 (male); 80 to 525 (female).
• Neck size (in.): 10 to 25 (male); 8 to 25 (female).
Rationale: cutoffs were determined by identifying data that appeared outside the normal distribution or by cross-tabulating weight and height and identifying the anomalies.

Conners' Continuous Performance Test II (CPT II)

Verification procedures were used to process and validate participants' CPT II reports. Coordination Contractor personnel reviewed each CPT II report to verify that the assessment had been attempted and was correctly uploaded to the database. The verification of CPT II reports included a visual inspection to confirm completeness and that the correct file had been uploaded to the database.

Participants were then tracked in a database with indications of whether or not the CPT II report had been completed. In the event that the document was not successfully uploaded, had been uploaded in the wrong format, or was incomplete, the site contractor was asked to readminister the assessment to that participant during the DAS deinstallation session, if possible.

Clock Drawing Test

Verification procedures were also used to process and validate participants' clock drawing tests. Trained data reductionists at the Coordination Contractor facility were tasked with reviewing each clock drawing to verify that it was correctly uploaded to the database and that the assessment had been attempted. Participants who did not complete the assessment were flagged and marked as needing to complete the assessment at deinstallation, if possible. For those who needed to complete the clock drawing exercise at deinstallation, the site was instructed to write on the assessment that it had been completed at deinstallation. The data reductionists scored each clock drawing test on a scale of 1 (perfect) to 6 (no reasonable representation). The scoring procedures are discussed in Chapter 6.

The visual-cognitive tests were administered using Driving Health Inventory (DHI) software. The DHI incorporated 10 tests. Three of its visual-cognitive tests were used in this study:

• Visualizing Missing Information;
• Useful Field of View (UFOV) (visual information processing speed); and
• Trail Making.

The scores for these DHI-based tests were computed and stored automatically on the hard drive of the site contractor's assessment computer. Once uploaded to the Coordination Contractor database, these results were reviewed for completeness. If a participant did not complete any of the DHI tests, data collection sites were contacted and asked to have the participant complete the series of tests at deinstallation, if possible.

Vision

Vision scores were reviewed for completeness. In the event that a participant did not complete any of the vision tests, data collection sites were asked to have the participant complete the tests later, typically while exiting the study.

Physical Ability Assessments

Grip strength results were primarily reviewed for extreme values using the statistical outliers approach. Table 4.5 illustrates the high fence that was used.
Table 4.5. Grip Strength Boundaries (low extreme of 0 lb for all attempts; high fences from applied statistics, 3 × IQR)

• Male: right 1st attempt, 186 lb; right 2nd attempt, 185 lb; left 1st attempt, 186 lb; left 2nd attempt, 185 lb.
• Female: right 1st attempt, 114 lb; right 2nd attempt, 112 lb; left 1st attempt, 117 lb; left 2nd attempt, 115 lb.

Vehicle Characteristics and Features

A variety of vehicle characteristics was collected from each vehicle that was enrolled in the study. These characteristics encompassed a wide range of data, including but not limited to year, make, model, VIN, odometer reading, tire tread depth, tire pressure, battery amps and volts, and information on integrated vehicle technologies. The verification process included

• Validating the year, make, and model of each vehicle;
• Validating the VIN of each vehicle; and
• Applying statistical measures to tire pressure and tire tread depth, battery voltage, battery cranking amps, and battery date to identify anomalous data.

The boundaries for outliers were determined by researching typical tire pressure measured in pounds per square inch (psi), battery voltage, and battery cranking amps. Anomalous battery dates were identified by flagging vehicles with a battery date older than the year of the vehicle or newer than the review date. Vehicle tire pressures were validated by inspecting the lower and higher extremes within the data. The battery boundaries were determined by researching typical passenger vehicle battery voltage (12.6 V) and amps (<1,000 A), while allowing for batteries that might be equipped with higher cranking amps and voltage. The boundary of 100 psi was chosen given that normal passenger vehicle tires require no more than 35 psi; 100 psi is typical of tractor-trailer tires, so this high range allows for larger vehicles, such as super-duty pickup trucks. Vehicle odometer readings were screened only for extreme values, such as no mileage or extremely high mileage. Table 4.6 illustrates the boundaries for identifying extreme outliers and anomalous data (a rule-based sketch of these screens follows the table).

Table 4.6. Vehicle-Based Information Validation

• Odometer start: none unless extreme (e.g., 999,999).
• Odometer end: based on each vehicle and months in study.
• Left front, left rear, right front, and right rear tire pressure: flagged if greater than 100 psi.
• Battery voltage: flagged if more than 16 V.
• Battery cranking amps: flagged if more than 2,000 A.
• Battery date: flagged if earlier than the year of the vehicle or later than the date reviewed.
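As with the demographic screens, these cutoffs lend themselves to simple rule-based checks. The sketch below is illustrative only; the record keys are hypothetical, while the thresholds follow Table 4.6.

```python
def vehicle_flags(record: dict) -> list[str]:
    """Apply the Table 4.6 screens to one vehicle inspection record."""
    flags = []
    for corner in ("left_front", "left_rear", "right_front", "right_rear"):
        if record[f"{corner}_tire_psi"] > 100:
            flags.append(f"{corner} tire pressure over 100 psi")
    if record["battery_voltage"] > 16:
        flags.append("battery voltage over 16 V")
    if record["battery_cranking_amps"] > 2000:
        flags.append("battery cranking amps over 2,000 A")
    if not (record["vehicle_year"] <= record["battery_year"] <= record["review_year"]):
        flags.append("battery date inconsistent with vehicle year or review date")
    return flags
```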