3 Research Approach

The objective of NCHRP Project 17-90 was to evaluate existing roadside crash injury metrics and propose enhanced crash injury metrics that better reflect the occupant characteristics and vehicle fleet of the 2020s. The research program was based primarily on real-world crash experience and considered the following:

1. Three different crash impact types (frontal, side, and oblique impacts);
2. Relevant crash and occupant characteristics, other than the value of the injury metric, that may affect injury risk;
3. The impact tolerance of all major body regions; and
4. The injury potential of intrusion into the occupant compartment.

The project was divided into two phases. The purpose of Phase I was to identify research needs and develop a research plan to accomplish the project goals. Phase II executed the panel-approved research plan. Chapter 2 presented a synthesis of the state of the practice and the engineering rationale for existing and potential crash injury metrics, along with the identification of potential data sources for assessing roadside crash injury metrics; this work was completed as part of Phase I and served as the basis for developing the Phase II research plan. The purpose of Chapter 3 is to describe the Phase II research plan, which is summarized briefly in the following paragraphs and in more detail in the sections below.

The overall approach of Phase II was to first determine the candidate vehicle-based metrics to evaluate and the methods to quantify occupant injury severity. A database of suitable real-world crash cases was then assembled to evaluate the candidate vehicle-based metrics and split into training and test subsets. The database included detailed occupant injury information, associated vehicle kinematics data, and other relevant crash and occupant characteristics. Data from the training subset were then used in conjunction with binary logistic regression to develop injury risk curves for each candidate injury metric for frontal, side, and oblique impacts. Injury risk curves were developed for overall injury and, when sufficient data were available, for body region-specific injury. The developed injury risk curves were then used to predict injury for the test subset. The candidate metrics were ranked primarily on how well they predicted the observed occupant injury in the test subset, using applicable statistical measures. Two related investigations were also conducted: (1) to determine whether considering vehicle-specific restraint performance significantly improved the real-world injury prediction capabilities of vehicle-based metrics, and (2) to compare how well the candidate injury risk metrics predicted the ATD-based injury metrics used in full-scale vehicle crash tests. The results from all of these analyses were synthesized and used to propose new or modified injury risk procedures that may be adopted in a future version of MASH. A sample of previously conducted MASH crash tests was used to assess the possible implications of any proposed new or modified occupant injury risk criteria.

In addition to the use of vehicle-based injury metrics, MASH specifies limits on occupant compartment intrusion. A separate analysis was conducted to evaluate the current MASH occupant compartment intrusion limits using available real-world crash data.
The analysis consisted of three parts: (1) updating the previous FHWA intrusion study, (2) evaluating the current MASH location-specific vehicle occupant compartment intrusion limits using real-world crashes, and (3) estimating the frequency of certain vehicle damage patterns in real-world crashes with roadside hardware.

Finally, the study findings were used to develop suggested modifications to current MASH language related to injury criteria as a means to implement any new or modified injury criteria proposed as part of the project. A roadmap of research needs was then developed to aid future efforts to update the MASH occupant injury risk evaluation procedures.

Injury Metrics and Injury Severity Measurement

3.1.1 Candidate Injury Metrics

The project considered the ability of five candidate metrics, summarized in Table 3-1, to predict real-world crash injury. The metrics included the existing FSM and four alternative metrics identified during Phase I of the project, all of which evaluate occupant injury risk using only vehicle kinematics information.

Table 3-1. Existing and potential roadside crash injury metrics to be evaluated.

Metric Type                             | Injury Metric | Description
Crash Pulse Only                        | MDV           | Maximum delta-v; maximum vehicle change in velocity
Crash Pulse Only                        | ASI           | Acceleration Severity Index
Crash Pulse + Assumed Occupant Response | FSM           | Flail Space Model, including the Occupant Impact Velocity (OIV) and the Occupant Ridedown Acceleration (RA)
Crash Pulse + Assumed Occupant Response | OLC           | Occupant Load Criterion
Crash Pulse + Assumed Occupant Response | VPI           | Vehicle Pulse Index

MDV is the change in vehicle velocity due to the crash event (Wusk and Gabler 2017) and has long been used by the vehicle safety community to gauge overall crash severity. ASI is a single normalized value representing crash severity based on the 50-ms moving average acceleration of the vehicle center of gravity in three dimensions (Gabauer and Gabler 2005). ASI is used along with an FSM variant to evaluate occupant risk in international roadside hardware crash tests (CEN 2010; Joint Technical Committee CE-033 2017; Ministry of Transportation Ontario 2017). Both the MDV and ASI metrics use only the vehicle crash pulse to gauge occupant injury risk.

The FSM assumes occupant injury risk arises via two possible mechanisms, quantified by the OIV and the RA. OIV is the occupant's velocity relative to the vehicle occupant compartment at the instant the occupant first crosses a simplified occupant compartment boundary (Tsoi and Gabler 2015). RA is the maximum 10-ms average vehicle acceleration subsequent to occupant impact with the simplified occupant compartment boundary (Gabauer and Gabler 2004). OLC is the occupant's constant acceleration between the time points at which the occupant is displaced 65 mm and 300 mm relative to the vehicle (Wusk and Gabler 2017). VPI is the maximum occupant acceleration during the crash as derived from a spring-mass model with constant restraint stiffness and slack (Tsoi and Gabler 2015). The FSM, OLC, and VPI use the vehicle crash pulse in tandem with one or more assumed constraints on occupant motion to gauge occupant injury risk potential. For all candidate metrics, larger numeric values correspond to higher risk of occupant injury.
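To make the metric definitions above concrete, the following is a minimal single-axis sketch of how MDV, ASI, and the FSM quantities (OIV and RA) might be computed from a uniformly sampled crash pulse. The function names, numerical integration scheme, and edge-case handling are the author's simplifications rather than the MASH- or EN 1317-prescribed procedures; the flail distances (0.6 m longitudinal, 0.3 m lateral) and the 12/9/10 g limit accelerations are the standard published values.

```python
import numpy as np

G = 9.81  # m/s^2

def moving_average(x, dt, window_s):
    """Moving average of a sampled signal over a window of window_s seconds."""
    n = max(1, int(round(window_s / dt)))
    return np.convolve(x, np.ones(n) / n, mode="same")

def mdv(accel_ms2, dt):
    """Maximum delta-v: largest magnitude of the integrated acceleration pulse."""
    return np.max(np.abs(np.cumsum(accel_ms2) * dt))

def asi(ax_g, ay_g, az_g, dt, limits=(12.0, 9.0, 10.0)):
    """Acceleration Severity Index from 50-ms average accelerations (in g),
    each normalized by its limit acceleration (EN 1317 values by default)."""
    terms = [(moving_average(a, dt, 0.050) / lim) ** 2
             for a, lim in zip((ax_g, ay_g, az_g), limits)]
    return np.max(np.sqrt(sum(terms)))

def oiv_and_ra(accel_ms2, dt, flail_dist=0.6):
    """Flail Space Model sketch for one axis: OIV (m/s) and RA (g).
    Use flail_dist = 0.6 m for longitudinal, 0.3 m for lateral motion."""
    veh_vel = np.cumsum(accel_ms2) * dt        # vehicle velocity change
    veh_disp = np.cumsum(veh_vel) * dt         # vehicle displacement change
    occ_rel_disp = -veh_disp                   # free-flying occupant vs. compartment
    hit = np.nonzero(np.abs(occ_rel_disp) >= flail_dist)[0]
    if hit.size == 0:
        # Occupant never reaches the boundary; report final relative velocity,
        # no ridedown (a sketch convention, not the MASH reporting rule).
        return np.abs(veh_vel[-1]), 0.0
    i = hit[0]
    oiv = np.abs(veh_vel[i])                   # relative velocity at occupant impact
    ra = np.max(np.abs(moving_average(accel_ms2[i:], dt, 0.010))) / G
    return oiv, ra
```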

The metrics currently used in roadside hardware crash testing (i.e., the FSM and ASI) are compared against established threshold values to determine whether occupant risk is acceptable for a given crash test. In analogous full-scale crash testing of vehicles, ATDs (i.e., crash test dummies) are used to determine occupant injury risk potential, typically for a specific body region. Generally, these ATD-based metrics have both an established threshold value and an injury risk curve, a mathematical expression that converts the metric value to a corresponding probability of occupant injury. For the vehicle-based metrics, however, such injury risk curves are generally lacking. In addition to evaluating the ability of the candidate metrics to predict occupant injury, this project therefore developed injury risk curves for the candidate metrics for different crash types (frontal, side, and oblique).

3.1.2 Injury Severity Measurement Methods

The primary method of ranking injury severity was the AIS. The AIS measures injury severity in terms of threat to life and was developed by a consensus of trauma surgeons for an extensive compendium of injuries (AAAM 2008). Each injury incurred by a person is coded on a six-point scale ranging from 1 for minor injuries to 6 for unsurvivable injuries (Table 3-2). In this project, the MAIS was used as a measure of the overall severity of an occupant's injuries. Two threshold levels of injury severity were considered: MAIS2+F and MAIS3+F. MAIS2+F denotes an occupant with at least one injury of severity AIS 2 (moderate) or greater, including occupants who were fatally injured. Similarly, MAIS3+F denotes an occupant with at least one injury of severity AIS 3 (serious) or greater, including occupants who were fatally injured.

Table 3-2. AIS injury severity levels.

Injury Severity | Description
0               | Not injured
1               | Minor injury
2               | Moderate injury
3               | Serious injury
4               | Severe injury
5               | Critical injury
6               | Maximum injury (fatal)

As part of Phase I, the research team investigated other potential injury severity criteria that could supplement the AIS. These included the ISS, the IIS, the FCI, and the Harm societal cost metric. Each of these metrics weights injury severity by considering the consequences, such as mobility impairment or associated societal costs, of all injuries suffered by an occupant rather than only the maximum severity injury (MAIS). Based on the Phase I findings, the research team selected Harm for further investigation and development of associated models.

The Harm metric measures the societal cost of traffic crashes and is frequently used in the evaluation of impact injury countermeasures. This societal cost includes both medical costs and indirect costs, such as lost wages. The original Harm metric was developed by Malliaris et al. (1985) as a means of balancing the number of injuries against the severity or cost of each injury, where severity is determined using the AIS. The improved Harm metric (multi-Harm) developed by Fildes et al. (1992) assigns a societal cost to each injury to estimate a total societal cost of injury.
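To make the injury severity coding concrete, the sketch below shows one way the MAIS, the MAIS2+F/MAIS3+F flags, and a multi-Harm-style total could be derived from a list of AIS-coded injuries for an occupant. The data structure, field names, and the unit-cost lookup are illustrative assumptions only; the study used the AIS codebook and published Harm cost values.

```python
from dataclasses import dataclass

@dataclass
class Injury:
    body_region: str   # e.g., "head", "thorax"
    ais: int           # AIS severity, 1 (minor) through 6 (maximum)

def mais(injuries):
    """Maximum AIS across all of an occupant's coded injuries (0 if uninjured)."""
    return max((inj.ais for inj in injuries), default=0)

def mais_n_plus_f(injuries, level, fatal):
    """MAIS{level}+F flag: any injury at or above `level`, or a fatal outcome."""
    return fatal or mais(injuries) >= level

# Placeholder per-injury societal costs by AIS level (not the published values);
# a multi-Harm total sums a cost for every injury, not just the worst one.
UNIT_COST_BY_AIS = {1: 1.0, 2: 10.0, 3: 50.0, 4: 150.0, 5: 400.0, 6: 1000.0}

def harm(injuries):
    """Multi-Harm-style total: sum of a societal cost assigned to each injury."""
    return sum(UNIT_COST_BY_AIS[inj.ais] for inj in injuries)

# Example occupant with a moderate head injury and a serious thorax injury
occ = [Injury("head", 2), Injury("thorax", 3)]
print(mais(occ))                           # 3
print(mais_n_plus_f(occ, 2, fatal=False))  # True (MAIS2+F)
print(harm(occ))                           # 60.0 with the placeholder costs
```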

Build and Analyze the Injury Assessment Dataset (IAD)

3.2.1 Assemble the Available Data and Compute Candidate Injury Metric Values

An IAD, as shown in Figure 3-1, was assembled from relevant cases with in-depth occupant injury information and EDR data from four U.S. real-world crash databases: NASS/CDS, CISS, CIREN, and SCI. Each case in the IAD was matched with longitudinal and lateral EDR crash pulses from the associated EDR downloads. For each matched case, the team assembled detailed injury information by body region and AIS injury severity, along with relevant occupant characteristics such as sex, age, restraint use, and seating location. Although the IAD case selection criteria varied slightly by crash type, IAD cases were limited to front seat occupants, crashes comprising a single event, and crashes with no vehicle rollover or occupant ejection. Additional case selection requirements included occupants 13 years of age or older, known AIS injury data, known EDR-recorded belt status, and vehicle occupant compartment intrusion less than the MASH intrusion limits. Before the vehicle longitudinal and lateral velocities were extracted, all EDR data were checked to ensure a successful, complete recording of the event along with a complete crash pulse.

Figure 3-1. Assembling the IAD from four real-world crash databases. [Figure: real-world crash cases (NASS/CDS, CISS, CIREN, SCI) and event data recorder (EDR) crash pulses feed the Injury Assessment Database, which combines EDR data, injury outcomes, and crash characteristics and is used to compute the candidate injury metrics: FSM (OIV and RA), ASI, OLC, VPI, and delta-v.]

The available EDR data were used to compute the candidate metric values. Once the IAD was assembled, the existing MASH vehicle-based metrics (OIV and RA) and the other candidate injury metrics (ASI, OLC, VPI, and delta-v) were computed for each case. As shown in Figure 3-1, the computations were based on the crash pulses extracted from the EDR downloads of the real-world crashes in the IAD. To the extent possible with the available EDR data, the existing roadside hardware crash injury metrics were computed as prescribed by MASH. For the oblique crashes, the lateral and longitudinal metric values were combined into a single resultant value so that only one value per metric entered the developed models. The lateral and longitudinal OIV were computed in the same way for all crash modes. It is also important to note that for the frontal and side crash models, ASI considered only the longitudinal or lateral direction, respectively. For the oblique model, both the longitudinal and lateral directions were considered, and the ASI values were normalized by their respective thresholds of 12 G and 9 G.
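For the oblique cases, the combination of longitudinal and lateral components into a single value might look like the sketch below. The vector-resultant form for the velocity-based metrics and the two-axis ASI shown here are the author's reading of the description above, not the report's exact implementation; ax50_g and ay50_g are assumed to be 50-ms moving-average accelerations in g (e.g., from the earlier sketch).

```python
import numpy as np

def resultant(longitudinal, lateral):
    """Vector resultant of longitudinal and lateral components (e.g., delta-v
    or OIV) used as the single metric value in the oblique models."""
    return np.hypot(longitudinal, lateral)

def oblique_asi(ax50_g, ay50_g, x_limit=12.0, y_limit=9.0):
    """Two-axis ASI for oblique impacts: the 50-ms average longitudinal and
    lateral accelerations (in g) are normalized by their respective 12 G and
    9 G thresholds before being combined."""
    return np.max(np.hypot(np.asarray(ax50_g) / x_limit,
                           np.asarray(ay50_g) / y_limit))

# Example: an oblique case with a 9 m/s longitudinal and 4 m/s lateral OIV
print(resultant(9.0, 4.0))   # ~9.85 m/s resultant OIV
```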

3.2.2 Develop Models of Injury Severity for Each Candidate Injury Metric

Using the available NASS/CDS cases in the IAD as the model training dataset, injury risk curves for each candidate injury metric were developed from the computed injury metric values, the associated occupant injury, and relevant crash characteristics. Figure 3-2 illustrates the approach for the OIV; the process was repeated for each candidate metric. Binary logistic regression was used to model injury severity for each candidate metric, with observed occupant injury data used to classify overall occupant injury as severe or minor/no injury. Although two injury severity thresholds were considered to distinguish severe from minor/no injury, sufficient serious injury cases were available only to develop models using the MAIS2+F scheme, not the MAIS3+F scheme. When sufficient cases were available, body region-specific MAIS2+F models were also developed for one or more of the following three body regions: (1) head and face (HF), (2) neck and cervical spine (N), and (3) thorax, abdomen, and lumbar and thoracic spine (TALT).

Figure 3-2. Graphical summary of injury risk curve development for each candidate injury risk metric. [Figure: EDR-derived injury metrics, injury outcomes, and crash characteristics from the IAD (NASS/CDS) feed a binary logistic regression, repeated for each candidate metric, yielding hypothetical curves of probability of overall/region injury versus occupant impact velocity (m/s) for frontal, side, and oblique impacts.]

Because human injury tolerance is a strong function of impact direction, separate risk curves were developed for pure frontal loading, side (lateral) loading, and oblique loading; that is, to control for crash type, separate models were developed for frontal, side, and oblique impacts. Each model also controlled for relevant crash characteristics, including occupant age, occupant sex, occupant body mass index (BMI), occupant belt use, and vehicle type. Initial models were developed with a full set of covariates, and final models were then developed including only the statistically significant predictors. All models were developed while accounting for the complex sampling design of the NASS/CDS training dataset and using the associated NASS/CDS case weights. Case weights are assigned to each case to provide an estimate of the national incidence of various crash types. Even so, most subsets of these data contain a few cases with very large weights (Brumbelow 2019), and the cases assigned these large weights often share similar outcomes with cases having smaller weights. To avoid allowing the heavily weighted cases to dominate and skew potentially meaningful results, this analysis excluded any case with a weight greater than 5,000 (Kononen 2011); this restriction did not exclude any cases from the training dataset. The result was a set of injury risk curves relating each candidate metric to occupant injury risk while accounting for potentially confounding factors.
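A minimal sketch of a weighted logistic regression of this type is shown below, here for a frontal-impact OIV model. The file name, column names, and covariate coding are assumptions, and passing the case weights as frequency weights incorporates them into the point estimates but does not by itself reproduce the full complex-survey variance estimation used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical IAD training extract (NASS/CDS frontal cases); names are assumed.
df = pd.read_csv("iad_training_frontal.csv")

# Exclude cases with very large national case weights, per the 5,000 cap.
df = df[df["case_weight"] <= 5000]

# Outcome: MAIS2+F (1 = at least one AIS 2+ injury or fatality, 0 = otherwise).
y = df["mais2f"]

# Candidate metric (OIV) plus the crash/occupant covariates named above.
X = df[["oiv", "age", "bmi"]].copy()
X["male"] = (df["sex"] == "M").astype(int)
X["belted"] = df["belted"].astype(int)
X["pickup_suv"] = (df["veh_type"] == "LTV").astype(int)
X = sm.add_constant(X)

# Weighted binomial GLM (logistic link) using the NASS/CDS case weights.
model = sm.GLM(y, X, family=sm.families.Binomial(), freq_weights=df["case_weight"])
result = model.fit()
print(result.summary())

# The fitted model is the injury risk curve: predicted probability of MAIS2+F
# injury across a range of OIV values for a fixed occupant/vehicle profile.
oiv_grid = np.linspace(0, 20, 50)
profile = pd.DataFrame({"const": 1.0, "oiv": oiv_grid, "age": 40, "bmi": 27,
                        "male": 1, "belted": 1, "pickup_suv": 0})
risk = result.predict(profile[X.columns])
```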

Each candidate injury metric either does not consider occupant restraints, such as seat belts and airbags, or considers restraints under the assumption that restraint performance is independent of vehicle make and model. Full-scale vehicle crashworthiness testing, however, has demonstrated that different vehicles can have markedly different restraint performance, which affects occupant injury risk. On a subset of the IAD data, two methods were explored for incorporating a quantification of vehicle-specific restraint performance into the evaluation of the candidate injury metrics:

1. The first approach used the probability of serious occupant injury computed from corresponding NHTSA frontal crash tests as an additional covariate in an injury prediction model. The injury quantification scheme used serves as the basis for the NHTSA five-star rating of passenger vehicle safety. For each real-world crash vehicle with a matching crash test, the team recomputed the probability of serious occupant injury (AIS 3+) based on the response of the ATD head, neck, thoracic, and lower extremity regions in the crash test and then used the computed probability values directly in the developed models.

2. The second approach was to enhance the VPI metric with vehicle-specific restraint stiffness and slack values. The current VPI metric, as prescribed by ISO, is based on a lumped spring-mass model that explicitly models restraint use to estimate occupant motion. The restraint system (i.e., belt and airbag) is modeled as a spring of stiffness k with slack s. The ISO-prescribed VPI is limited, however, by its use of a one-size-fits-all restraint stiffness, which is a gross oversimplification. The team's approach computed vehicle make-model specific restraint stiffness values using NHTSA crash tests for each make-model in a subset of the real-world crash dataset (a simplified spring-mass sketch appears at the end of this subsection).

Injury risk curves were developed for both vehicle-specific approaches described above. Only metrics that demonstrated significant improvement in occupant injury risk prediction were considered further for incorporation into an improved candidate metric for MASH.

An alternate set of linear regression models was also developed to relate the candidate injury metrics to a different method of quantifying occupant injury, the Harm metric. These linear regression models were developed using the training dataset to predict Harm societal cost for occupants in the IAD for each crash mode. Like the AIS-based models, the developed models controlled for relevant crash characteristics, including occupant age, occupant sex, occupant BMI, occupant belt use, and vehicle type. Final models included only statistically significant predictors and were developed while accounting for the complex sampling design of NASS/CDS.
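Because the second approach rests on the VPI's lumped spring-mass idealization, a minimal sketch of that calculation is given below. The integration scheme, default restraint natural frequency, and slack value are the author's placeholder assumptions, not the ISO-specified parameters; a vehicle-specific variant would substitute stiffness and slack values fitted to NHTSA crash test data for the matching make-model.

```python
import numpy as np

G = 9.81  # m/s^2

def vpi(vehicle_accel_ms2, dt, omega_hz=4.0, slack_m=0.05):
    """Vehicle Pulse Index sketch: peak occupant deceleration (in g) from a
    lumped spring-mass restraint model driven by the vehicle crash pulse.

    The occupant is free until its forward displacement relative to the
    vehicle exceeds the restraint slack; beyond that point a linear spring
    (stiffness per unit occupant mass set by `omega_hz`) pulls the occupant
    back. Default frequency and slack are placeholders only.
    """
    k_over_m = (2.0 * np.pi * omega_hz) ** 2   # spring stiffness / occupant mass
    v_veh = x_veh = 0.0                        # vehicle velocity/displacement change
    v_occ = x_occ = 0.0                        # occupant velocity/displacement change
    peak = 0.0
    for a_veh in vehicle_accel_ms2:
        # Restraint engages only after the slack is taken up.
        rel_disp = x_occ - x_veh
        a_occ = -k_over_m * max(0.0, rel_disp - slack_m)
        peak = max(peak, abs(a_occ))
        # Semi-implicit Euler integration of both masses.
        v_veh += a_veh * dt
        x_veh += v_veh * dt
        v_occ += a_occ * dt
        x_occ += v_occ * dt
    return peak / G
```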

3.2.3 Validate the Injury Severity Models Using Independent Datasets and Rank Order Candidate Metrics

Each AIS-based regression model developed using the training dataset was then used to predict injury outcomes for the test dataset, yielding a set of predicted injury outcomes based on each candidate metric in addition to the actual injury outcomes for each suitable test dataset occupant. This process is depicted graphically in Figure 3-3. CISS, CIREN, and SCI data within the IAD were initially considered for inclusion in the test dataset, but ultimately only the CISS data were included; CIREN and SCI had very few suitable cases available. All predicted and observed injuries in the test dataset were weighted using the associated CISS case weights to account for the complex sampling design of the CISS database. The restriction excluding cases with weights in excess of 5,000 removed less than 3% of the suitable unweighted CISS cases.

Figure 3-3. Graphical summary of the comparison of candidate injury metrics using the test dataset. [Figure: for CISS cases with EDR data, the EDR-computed injury metric value and crash characteristics are passed through the injury risk curve developed with the training data to produce an injury prediction, which is compared with the actual injuries to compute the F2 score; the process is repeated for each candidate injury metric and the F2 scores are compared.]

When the predicted injuries are compared with the actual injuries, each prediction can be labeled as a true positive, true negative, false positive, or false negative. The ability of the model to correctly predict injury for occupants who actually suffered an injury is the true positive rate, also known as sensitivity or recall. The probability that occupants predicted to be injured were actually injured is the model's positive predictive value, also known as precision. For both the training and test datasets, the majority of occupants had not experienced an MAIS2+F injury, so the models excelled at predicting non-injury cases. Because non-injury cases were of less concern than injury cases, the models needed to be compared in a way that did not reward true negative predictions. The ability of the candidate injury metrics to predict real-world occupant injury was systematically rank ordered using the following three methods (a computational sketch follows at the end of this subsection):

1) The F2 score, a weighted harmonic mean of precision and recall, was computed for each model. The F2 score does not consider true negative predictions, which is desirable in a dataset where the majority of cases are negative (here, non-injury) cases. The F2 score also prioritizes recall over precision, which is important when there is a strong need to identify positive cases correctly. A higher F2 score indicates better predictive capability.

2) The accuracy of each model was computed. Accuracy is the percentage of correct predictions made by the model and is computed by dividing the number of correct predictions by the total number of predictions made. Unlike the F2 score, this measure does factor in true negative predictions.

3) ROC curves were generated for each model. An ROC curve represents sensitivity and specificity at varying discrimination thresholds. Sensitivity measures the ability of the candidate vehicle-based severity metric to correctly identify seriously injured occupants as such; specificity measures its ability to correctly identify non-seriously injured occupants. The area under the ROC curve (AUC) can be compared among models, with larger AUC values indicating better predictive capability.

The result was a quantitative ranking of how well the injury outcomes predicted by each candidate metric matched the observed injury outcomes. Note that for the F2 and accuracy measures, a decision threshold in the form of a percent injury risk must be selected (i.e., a predicted injury risk above the threshold predicts "injury" and a value below predicts "no injury" for a specific occupant). For each candidate injury metric model, this threshold was found by selecting the percent injury risk value that optimized the F2 score on the training dataset. The same thresholds, selected from the training dataset, were then applied when predicting occupant injury for the test dataset.

A similar validation process was used for the developed Harm models. Each Harm regression model developed with the training data was used to predict Harm societal cost for the test dataset, yielding a set of predicted Harm costs based on each candidate metric in addition to the actual Harm costs for each suitable test dataset occupant. Each predicted Harm cost has an associated error relative to the actual cost, and the ability of the candidate injury metrics to predict real-world occupant Harm costs was systematically rank ordered using the root mean squared error (RMSE) of each model.
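The F2-based ranking and threshold selection described above can be summarized in a few lines. The sketch below assumes binary arrays of observed MAIS2+F outcomes and model-predicted injury probabilities, with optional case weights (the study's version applies the CISS case weights, which turns the counts into weighted sums); it is an illustration rather than the project's exact scoring code.

```python
import numpy as np

def f_beta(y_true, y_pred, weights=None, beta=2.0):
    """Case-weighted F-beta score; beta=2 (the F2 score) weights recall over
    precision and ignores true negatives entirely."""
    w = np.ones(len(y_true)) if weights is None else np.asarray(weights, float)
    tp = np.sum(w * ((y_pred == 1) & (y_true == 1)))
    fp = np.sum(w * ((y_pred == 1) & (y_true == 0)))
    fn = np.sum(w * ((y_pred == 0) & (y_true == 1)))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

def select_threshold(risk_train, y_train, grid=np.linspace(0.01, 0.99, 99)):
    """Pick the percent-injury-risk decision threshold that maximizes the
    F2 score on the training dataset."""
    scores = [f_beta(y_train, (risk_train >= t).astype(int)) for t in grid]
    return grid[int(np.argmax(scores))]

# Usage sketch: risk_* are predicted MAIS2+F probabilities from a fitted model.
# threshold = select_threshold(risk_train, y_train)
# f2_test = f_beta(y_test, (risk_test >= threshold).astype(int), weights=w_test)
```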

Correlate Intrusion with Real-World Injury

In addition to prescribing thresholds for OIV and RA, MASH (AASHTO 2009, 2016) prescribes thresholds for vehicle occupant compartment deformation. These intrusion thresholds are prescribed for nine different vehicle areas (windshield, side panel, toe pan, etc.), with the limiting values varying by area. MASH commentary indicates that the thresholds were based on (1) recommended guidelines developed by the IIHS to evaluate vehicle structural performance in offset frontal crash tests, and (2) an FHWA study that provided interim guidance on maximum acceptable occupant compartment intrusion limits. Although not explicitly cited, the FHWA study referenced by MASH is presumably the study conducted by Eigen and Glassbrenner (2003), who examined NASS/CDS data from 1991 through 2000 to investigate the relationship between occupant compartment intrusion levels and corresponding occupant injury. Although the Eigen and Glassbrenner study used real-world crashes, it was conducted prior to the implementation of the current vehicle region-specific intrusion limits. Little is known about how the current MASH occupant compartment intrusion thresholds relate to real-world crash occupant injury.

The purpose of this portion of the project was to evaluate the MASH guidelines on acceptable intrusion limits based on real-world crash experience. The analysis had three primary components:

1. Analysis of occupant compartment intrusion magnitude and occupant injury – The purpose of this portion of the study was to update the Eigen and Glassbrenner (2003) study using the most recent years of NASS/CDS data available (2000 through 2015).

2. Evaluation of current MASH occupant compartment intrusion limits – The overall approach of this portion of the analysis was to classify real-world NASS/CDS crashes using the MASH occupant compartment intrusion criteria (i.e., above/below the associated intrusion threshold) and then examine the corresponding maximum occupant injury using statistical models that control for potentially confounding factors.

3. Estimation of the real-world frequency of vehicle damage patterns – Full-scale crash testing with roadside safety hardware has identified several vehicle damage patterns, including glued seam separation and A/B-pillar damage. Vehicle damage photographs available in NASS/CDS were reviewed for cases involving an impact with specific roadside hardware devices to estimate the proportion of crashes in which each damage mode is present.

Assess Candidate Metric Ability to Predict Occupant Acceleration

NHTSA maintains a publicly available database of full-scale vehicle crash tests that contains sensor data for more than 12,000 tests. Selected NHTSA crash tests of late model vehicles (model year 2018, 2019, or 2020) with instrumented crash test dummies (ATDs) were used to provide an additional means of comparing the candidate injury metrics. Three crash test types were considered:

1. A frontal test in which the vehicle impacts a rigid barrier head on,
2. A side impact test in which a bogie vehicle impacts the side of a stationary vehicle, or
3. A side impact test in which the side of the vehicle impacts a fixed pole.

In each crash test, the candidate metrics were computed using the vehicle crash pulse measurements from on-board instrumentation. Occupant acceleration measurements were obtained from accelerometers located at the ATD head, thorax, and pelvis. In addition, the HIC, the 3-ms clip (a thoracic injury metric), and chest compression were computed. Regression models were developed to determine the correlation between the candidate injury metrics and the ATD acceleration and injury criteria (e.g., HIC), as sketched below. The developed regression models can be used to compare how well the candidate metrics predict ATD acceleration and ATD-based injury criteria. A candidate injury metric that better predicts the ATD injury criteria would likely be better suited to predicting real-world occupant injury because the ATD-based injury criteria have been extensively validated against available biomechanical data. The results from this analysis were used to supplement the comparison of the candidate metrics using real-world crash data. Crash test data from the NHTSA database were also used to compute vehicle-specific occupant restraint performance when that information was incorporated to evaluate the prediction of real-world crash injury.
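The regression step referenced above could be as simple as an ordinary least-squares fit per metric/criterion pair; the sketch below is illustrative only, and the file and column names are assumptions rather than the project's actual data layout.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical extract of NHTSA crash test results, one row per test, with a
# candidate metric computed from the vehicle crash pulse and the ATD criteria.
tests = pd.read_csv("nhtsa_frontal_tests.csv")   # assumed columns: oiv, hic, ...

# Simple linear model relating an ATD injury criterion (HIC) to one candidate
# vehicle-based metric (OIV); repeated for each metric/criterion pair.
X = sm.add_constant(tests[["oiv"]])
fit = sm.OLS(tests["hic"], X).fit()
print(fit.rsquared)   # goodness of fit can be compared across candidate metrics
```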

Compare FSM and Alternate Metrics in Roadside Crash Tests

Three primary crash testing facilities in the U.S. conduct full-scale crash tests of roadside safety hardware: the Midwest Roadside Safety Facility (MwRSF), the Texas Transportation Institute (TTI), and the FHWA Federal Outdoor Impact Laboratory (FOIL). The research team selected and obtained electronic crash test data for a sample of MASH crash tests conducted by these three facilities. The candidate injury metrics were computed for all of the sample MASH tests to provide a means of assessing how any new MASH metrics, or modifications to existing MASH metrics, would affect previously tested roadside hardware. The injury risk curves developed from the available real-world crash data were used to translate the candidate injury metric values into a probability of occupant injury. The results of this analysis were used to weigh the implications of any MASH modifications considered.

Proposed Implementation of Results in MASH

Based on the findings of the conducted analyses, the research team developed suggested modifications to existing MASH injury criteria and thresholds, including the rationale for any proposed changes and suggested modifications to current MASH language on injury criteria using redline and strikeout markups. When an injury risk metric was determined to be both superior and practical, the research team developed options for its inclusion in MASH, including potential MASH language.

Future Roadmap for Updates to MASH Injury Risk Evaluation

Using the findings from the current study and identified gaps in the available literature, the research team developed a proposed roadmap of research needs for potential updates to the MASH injury risk evaluation. The roadmap was divided into two phases: near-term and longer-term. The focus of the current project was to improve the existing vehicle-based MASH criteria, but the vehicle fleet will continue to change. One near-term research need will be to reevaluate the MASH criteria to account for advances in passive safety and the increasing prevalence of electric vehicles. Other future near-term injury risk methods could include physical testing with instrumented dummies or virtual crash testing using finite element modeling (FEM) of the hardware device, vehicle, and instrumented occupants. Each of these options currently has challenges that may be alleviated in the future. For each future near-term option, the roadmap addressed (a) challenges and opportunities, (b) cost and practicality, and (c) potential data elements that need to be collected and potential sources.

MASH, like most crash test procedures including NHTSA crash tests, assumes that occupants are seated in forward-facing seats. Compared to current vehicles, longer-term future autonomous vehicles (AVs) may have different occupant compartment configurations (e.g., rear-facing front seats), which would expose occupants to very different crash loading environments. The roadmap assessed the need and timeline for potentially revisiting the MASH injury risk evaluation techniques in response to these upcoming vehicles.
