

From page 256...
... A-1 Appendix A Roadside Safety Verification and Validation Program (RSVVP) User's Manual December 2009 (Revision 1.4)
From page 258...
... A-3 Example 2: Multiple-Channel Comparison … 52 Analysis Type …
From page 259...
... A-4 INTRODUCTION TO RSVVP The Roadside Safety Verification and Validation Program (RSVVP) quantitatively compares the similarity between two curves, or between multiple pairs of curves, by computing comparison metrics.
From page 260...
... A-5 INSTALLATION SYSTEM REQUIREMENTS RSVVP has been written and compiled using Matlab®. In order to run the RSVVP program either the full Matlab® (version 2009a or higher)
From page 261...
... A-6 environment can be downloaded from: http://civil-ws2.wpi.edu/Documents/Roadsafe/NCHRP22-24/RSVVP/RSVVP_1_7.zip To install MCR, perform the following steps: 1. Extract the contents of the RSVVP.zip file into the folder on your PC where you want to install RSVVP (for example, C:\RSVVP\)
From page 262...
... A-7 EVALUATION METHODS AND DATA ENTRY PROCEDURE GENERAL DISCUSSION In RSVVP, the baseline curve or reference curve is called the "true curve" as it is assumed to be the correct response, whereas the curve that is to be verified or validated, say from a model or experiment, is called the "test curve." For example, in validating a computer simulation against a full-scale crash test, the time history data from the physical crash test would be input as the "true curve" in RSVVP and the computer simulation time history would be input as the "test curve". Since the comparison metrics assess the degree of similarity between any pair of curves in general, the input curves may represent various physical entities (e.g., acceleration time histories, force-deflection plots, stress-strain plots, etc.)
From page 263...
... A-8 1. Single Channel - A single pair of curves is compared 2.
From page 264...
... A-9 Figure A-1: Format of the test and true curves (two ASCII columns, abscissa and ordinate; e.g., 0.00000000 0.10000000, 0.02000000 0.09900000, 0.04000000 0.09800000, …). Although no limitation is imposed or assumed for the units of either the abscissa or ordinate column, the use of some preprocessing features like the SAE filtering option may only make sense for time history data (i.e., the first column represents time)
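As an illustration of this two-column format, the following Python sketch parses such an ASCII curve file into abscissa and ordinate lists. The helper name `load_curve` is hypothetical; RSVVP itself is written in Matlab®.

```python
import io

def load_curve(stream):
    """Parse a two-column ASCII curve file (abscissa, ordinate) into two lists."""
    xs, ys = [], []
    for line in stream:
        parts = line.split()
        if len(parts) < 2:
            continue  # skip blank or malformed lines
        xs.append(float(parts[0]))
        ys.append(float(parts[1]))
    return xs, ys

# Example: the first rows of the format shown in Figure A-1
sample = io.StringIO(
    "0.00000000 0.10000000\n"
    "0.02000000 0.09900000\n"
    "0.04000000 0.09800000\n"
)
x, y = load_curve(sample)
```

In practice the stream would come from `open(path)` on the user's data file; no assumption is made about units, consistent with the note above.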
From page 265...
... A-10 When the ‘run completely' mode is selected, RSVVP reads the configuration file and automatically evaluates the comparison metrics using the options stored in the configuration file (e.g., preprocessing, metrics selection, time intervals, etc.)
From page 266...
... A-11 Figure A-2: Selection of the type of comparison and re-sampling limit (‘Compare a single pair of curves' / ‘Compare multiple pairs of curves'). To load the configuration file, click the button with three dots.
From page 267...
... A-12 Figure A-3: Selection of the configuration file. Procedure for Data Entry After the analysis options have been selected, RSVVP closes the window and opens another graphical user interface that will be used for loading and preprocessing the input curves.
From page 268...
... A-13 Figure A-4: Input of the test and true curves. Procedure for Initial Preprocessing The user is given the option to perform initial adjustments of the data, including scaling, trimming, and translating the curves, prior to applying additional preprocessing options, as shown in Figure A-5.
From page 269...
... A-14 Curve scaling The ‘scale' option allows the user to scale the original time histories using user-defined scale factors. The true and test curves can be scaled by separate scale factors.
From page 270...
... A-15 specifications different from the standard SAE J211 filter, user-defined filter parameters can be specified. Note: If data is filtered during the trimming process, the user will not be allowed to change the filtering option during subsequent preprocessing operations.
From page 271...
... A-16 Original input true and test curve True and test curves after the translation to the origin Figure A-7: Shift of one of the two input curves to the origin. Note: If the option to scale the original curves is changed or if the scaling factors are changed, RSVVP will automatically update the graph of the original input curves as well as the graph of the preprocessed curves.
From page 272...
... A-17 option has been selected, RSVVP trims each individual channel of data based on the shortest curve in each curve pair; then, after all the data has been input and preprocessed, the curves are further trimmed to the length of the shortest channel. If the original sampling rate of one of the curves is larger than the ‘re-sampling rate limit', the data will be re-sampled to the chosen limit value (see Figure A-2)
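The re-sampling and trimming behavior described above can be sketched in pure Python as follows. This is an illustrative approximation only: the function name, the rate heuristic, and the linear interpolation are assumptions, not RSVVP's actual code.

```python
def interp(x, xp, yp):
    """Piecewise-linear interpolation of sample points (xp, yp) at x (xp ascending)."""
    if x <= xp[0]:
        return yp[0]
    for i in range(1, len(xp)):
        if x <= xp[i]:
            f = (x - xp[i - 1]) / (xp[i] - xp[i - 1])
            return yp[i - 1] + f * (yp[i] - yp[i - 1])
    return yp[-1]

def resample_and_trim(t1, y1, t2, y2, rate_limit):
    """Re-sample both curves onto a common uniform time base and trim them to
    the shorter curve, a sketch of RSVVP's default preprocessing step."""
    t_start = max(t1[0], t2[0])
    t_end = min(t1[-1], t2[-1])  # the shortest curve sets the end of the interval
    # Use the finer of the two native sampling rates, capped at the user limit.
    native = max((len(t1) - 1) / (t1[-1] - t1[0]), (len(t2) - 1) / (t2[-1] - t2[0]))
    rate = min(native, rate_limit)
    n = int(round((t_end - t_start) * rate)) + 1
    t = [t_start + (t_end - t_start) * k / (n - 1) for k in range(n)]
    return t, [interp(x, t1, y1) for x in t], [interp(x, t2, y2) for x in t]
```

After this step both curves match point-to-point on the same time vector, which is what the comparison metrics require.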
From page 273...
... A-18 physical experiments and numerical simulations, the true and test curves should be filtered using the same filter to ensure that differences in the metric evaluation are not based on the difference in frequency content in the true and test signals. The filter options in RSVVP are compliant with the SAE J211/1 specification.
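A two-pass (phaseless) two-pole Butterworth low-pass filter of the kind specified in SAE J211/1 can be sketched as below. The coefficient formulas follow the digital filter given in the standard's appendix; the start-up handling here (copying the first two samples) is a simplifying assumption and may differ from RSVVP's implementation.

```python
import math

def j211_filter(x, dt, cfc):
    """Phaseless low-pass filter for channel frequency class `cfc`:
    a 2-pole Butterworth applied forward and then backward (SAE J211/1 style)."""
    wd = 2.0 * math.pi * cfc * 2.0775           # design frequency from the CFC
    wa = math.sin(wd * dt / 2.0) / math.cos(wd * dt / 2.0)  # warped analog freq.
    d = 1.0 + math.sqrt(2.0) * wa + wa * wa
    a0 = wa * wa / d
    a1, a2 = 2.0 * a0, a0
    b1 = -2.0 * (wa * wa - 1.0) / d
    b2 = (-1.0 + math.sqrt(2.0) * wa - wa * wa) / d

    def one_pass(s):
        y = list(s[:2])  # start-up: copy the first two samples (assumption)
        for i in range(2, len(s)):
            y.append(a0 * s[i] + a1 * s[i - 1] + a2 * s[i - 2]
                     + b1 * y[i - 1] + b2 * y[i - 2])
        return y

    # Forward pass, then backward pass to cancel the phase lag.
    return one_pass(one_pass(x)[::-1])[::-1]
```

The filter has unit DC gain, so applying the same CFC to both the true and test curves leaves their low-frequency content intact while removing high-frequency noise.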
From page 274...
... A-19 If it is necessary to specify a CFC value that is not listed in the menu, select the option ‘User defined CFC…' at the end of the list and input the desired CFC parameters in the ‘Optional user defined CFC' field located right below (Figure A-8b)
From page 275...
... A-20 from numerical solutions should not need to use these options, since shift and drift are artifacts of sensor behavior in physical tests. The use of the shift and drift options is therefore not recommended for curves resulting from computer simulations.
From page 276...
... A-21 identical input curves with an initial phase difference due to a different starting point in the acquisition process would probably lead to poor results of some of the comparison metrics. Two different synchronization options are available in RSVVP: (1)
From page 277...
... A-22 Figure A-10: Drop down menu of the ‘Sync Options' box. Figure A-11: Option for selecting new starting point for synchronization.
From page 278...
... A-23 In the multichannel mode, six tabs are located at the bottom, left corner of the GUI window, as shown in Figure A-12. The tab corresponding to the current channel's input/preprocessing page is highlighted in red.
From page 279...
... A-24 In the ‘Evaluation method' box, select the desired method for the evaluation of the multiple data channels using the dropdown menu, as illustrated in Figure A-13. The default method is ‘Weighting Factors.' If this method is selected, the graph on the left side of the window will show the curves for the first available channel.
From page 280...
... A-25 the most probable pairing point for the two curves. However, if the user is not satisfied with the synchronization, they have the option of changing the initial starting point used in the minimization algorithms.
From page 281...
... A-26 METRICS SELECTION The metrics computed in RSVVP provide mathematical measures that quantify the level of agreement between the shapes of two curves (e.g., time-history data obtained from numerical simulations and full-scale tests)
From page 282...
... A-27 examines the differences of residual errors between them. Of the fourteen different metrics available in RSVVP, the Sprague-Geers MPC metrics were found to be the most useful metrics for assessing the similarity of magnitude and phase between curves and the ANOVA metrics were found to be the best for examining the characteristics of the residual errors.
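The Sprague & Geers magnitude (M), phase (P), and comprehensive (C) components can be computed as in the following sketch, where the integrals in the standard definitions are approximated by discrete sums over two equally sampled curves (`true_y` is the measured curve, `test_y` the computed one).

```python
import math

def sprague_geers(true_y, test_y):
    """Sprague & Geers M, P, C metrics for two equally sampled curves."""
    smm = sum(m * m for m in true_y)                       # integral of m^2
    scc = sum(c * c for c in test_y)                       # integral of c^2
    smc = sum(m * c for m, c in zip(true_y, test_y))       # integral of m*c
    M = math.sqrt(scc / smm) - 1.0
    # Clamp the cosine argument to [-1, 1] against floating-point round-off.
    P = math.acos(max(-1.0, min(1.0, smc / math.sqrt(smm * scc)))) / math.pi
    C = math.sqrt(M * M + P * P)
    return M, P, C
```

Identical curves give M = P = C = 0; a pure amplitude scaling changes only M, while a pure time lag changes only P, which is why the pair is useful for separating magnitude from phase disagreement.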
From page 283...
... A-28 Figure A-3: Select the metric profile from the drop-down menu. Figure A-4: Example of a metrics selection using the ‘User selected metrics' profile.
From page 284...
... A-29 TIME INTERVAL In RSVVP, metrics can be evaluated over the complete length of the curve (e.g., whole time interval) and/or over one or more user defined time intervals.
From page 285...
... A-30 By default, RSVVP evaluates the selected metrics on both the whole time interval and user-selected time interval(s). Procedure for Compression of Image Files
From page 286...
... A-31 METRICS EVALUATION Once the desired metrics have been selected, and the time intervals over which the metrics will be calculated have been defined by the user, RSVVP begins the metrics calculation process. In the multichannel mode, RSVVP first calculates the value of the metric for each individual channel (or channel resultants if the resultant method was selected)
From page 287...
... A-32 configuration file contains all the information that has been input in RSVVP, including all the preprocessing options as well as the metrics selection. Thus, the configuration file contains all the information necessary to repeat the analysis.
From page 288...
... A-33 are used to define, respectively, the lower and upper time boundaries, as shown in Figure A-21. Fill in the desired values and press the ‘Evaluate metrics' button to start the evaluation of the metrics on the defined interval.
From page 289...
... A-34 the user for a new User Defined time window. The results obtained for each time interval will be saved separately.
From page 290...
... A-35 Figure A-10: Screen output for the NCHRP 22-24 profile Figure A-11: Screen output for the ‘All metrics' or 'User defined' profiles
From page 291...
... A-36 For multichannel input, if the weighting factors method has been selected, the user can view the results for any of the individual channels or the multi-channel weighted results by selecting the desired option from the drop-down menu beside the time-history graph. When ‘Multi-channel results' is selected from the drop-down menu, a histogram of the weighting factors used to compute the metric values in multichannel mode is plotted.
From page 292...
... A-37 Figure A-12: Pop-up browse window for selecting output folder for RSVVP results. The user has the option of creating a new folder by selecting the tab ‘Make New Folder' in the browse window.
From page 293...
... A-38 the whole time interval and all user defined time intervals, as shown in Figure A-26. The time interval used in each evaluation is indicated in the heading of each column.
From page 294...
... A-39 Figure A-15: Summary of pre-processing options and separate sheets for each input channel in the Excel file.
From page 295...
... A-40 GRAPHS RSVVP creates several graphs during the evaluation of the metrics and saves them as bitmap image files. For each time interval evaluated in RSVVP, the following graphs are created in the folder …/Results/Time-histories/: a)
From page 296...
... A-41 EXAMPLES Two examples are presented in the following sections in order to illustrate the step-by-step procedure for using RSVVP. In Example 1, an acceleration time history from a full-scale crash test is compared to that of another "essentially" identical full-scale crash test using the single-channel option in RSVVP.
From page 297...
... A-42 Analysis Type The first step is to select the type of curve comparison that will be performed. In this example, only a single pair of curves is being compared, so the option ‘single channel' is selected in the GUI window, as shown in Figure A-17.
From page 298...
... A-43 Figure A-18: GUI-preview of original input data loaded into RSVVP. The various preprocessing operations are applied incrementally in this example in order to demonstrate how each operation contributes to the general improvement of the input curves.
From page 299...
... A-44 option, which will be used in a later step. Note: It is typically desirable to also trim the head of the curves to eliminate any pre-impact data from the curve comparison.
From page 300...
... A-45 Figure A-20: Original and filtered acceleration time histories. It is apparent from the graphs in Figure A-20 that the two curves are not synchronized with each other, as each curve demonstrates a different start-time at which the acceleration data started recording.
From page 302...
... A-47 Figure A-22: Selection of the metrics profile and time interval. During the calculations of the metrics, various graphs appear and disappear on the computer screen.
From page 303...
... A-48 Figure A-23: GUI-window displaying results from whole-time-interval metrics calculations. Clicking the ‘Proceed to evaluate metrics' button opens a GUI-window, as shown in Figure A-24, that will allow the user to define upper and lower boundaries for a new time interval over which to calculate the metrics. The interval selected for this example is 0.05 seconds to 0.15 seconds.
From page 304...
... A-49 Once the user time window has been defined, the button ‘Evaluate metrics' is pressed to start the calculations of the metrics based on the data within the user defined interval. As before, various graphs appear and disappear on the computer screen, as RSVVP captures and saves the data.
From page 305...
... A-50 Figure A-26: Time interval 0.15 seconds to 0.20 seconds defined using GUI window Figure A-27: Metrics computed for time interval [0.15 sec, 0.20 sec] Save Results To save results and exit, simply press the button ‘Save results and Exit'.
From page 306...
... A-51 evaluated during the metrics calculations. For this example, three different subfolders were created: • Whole_time_Interval, • User_defined_interval_1_[0.05 , 0.15]
From page 307...
... A-52 Table A-3: Summary of the metrics values for each of the time intervals evaluated. Calculated Metric Whole Time Interval [0, 0.3396]
From page 308...
... A-53 Analysis Type The first step is to select the type of curve comparison that will be performed. In this example, six pairs of curves are being compared, so the option ‘multiple channel' is selected in the GUI window, as shown in Figure A-28.
From page 309...
... A-54 seconds), and filtered using an SAE 60 filter.
From page 310...
... A-55 Roll rate Pitch rate Figure A-29: Original and pre-processed curve pairs for each data channel Note that, in the multi-channel case, the synchronization is performed in an intermediate step, after all the channels have been input. Once all the curve pairs have been entered into RSVVP and preprocessed, the ‘Proceed to curves syncho' option at the bottom of the GUI window will open a new GUI for synchronizing the curves.
From page 311...
... A-56 Z acceleration Yaw rate Roll rate Pitch rate Figure A-30: Synchronization results Metric selection and evaluation After the synchronization process is completed, RSVVP automatically opens another GUI for selecting the desired metrics. For this example, the NCHRP 22-24 metrics profile (i.e., ANOVA metrics and the Sprague & Geers MPC metrics)
From page 312...
... A-57 are completed, RSVVP displays the results of the first channel on the screen. Note that beside each metric value RSVVP indicates whether or not the result meets the recommended acceptance criteria.
From page 313...
... A-58 Figure A-32: Screen output of the results for the Y channel. Figure A-33: Screen output of the results for the Z channel.
From page 314...
... A-59 Figure A-34: Screen output of the results for the Yaw channel. Figure A-35: Screen output of the results for the Roll channel.
From page 315...
... A-60 Figure A-36: Screen output of the results for the Pitch channel. Figure A-37: Screen output of the results for the weighted average.
From page 316...
... A-61 Table A-4 shows a summary of the comparison metrics computed for each data channel and the weighted average. The values that exceed the NCHRP 22-24 recommended acceptance criterion for that metric are displayed with a red background in the table.
From page 317...
... A-62 Table A-4: Summary of the calculated metrics for the multi-channel data. [Column headers: Data Channel, Sprague & Geers, ANOVA (M)
From page 319...
... A-64 APPENDIX A1: Comparison Metrics in RSVVP A brief description of the metrics evaluated by RSVVP is presented in this section. All fourteen metrics available in RSVVP are deterministic shape-comparison metrics.
From page 320...
... A-65 and Gear metric [7,8] is the most recent variation of MPC-type metrics.
From page 321...
... A-66 Table A1-1: Definition of MPC metrics. [Columns: Magnitude, Phase, Comprehensive. Integral comparison metrics: Geers, Geers CSA, Sprague & Geers, Russell. Point-to-point comparison metrics: Knowles & Gear.]
From page 322...
... A-67 SINGLE-VALUE METRICS Single-value metrics give a single numerical value that represents the agreement between the two curves. Seven single-value metrics were considered in this work: (1)
From page 323...
... A-68 ANOVA METRICS ANOVA metrics are based on the assumption that two curves do, in fact, represent the same event such that any differences between the curves must be attributable only to random experimental error. The analysis of variance (i.e., ANOVA)
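A minimal sketch of ANOVA-style residual statistics follows, assuming (as is common in the NCHRP 22-24 procedure) that the residuals are normalized by the peak magnitude of the true curve; the function name and the exact normalization are illustrative assumptions.

```python
import math

def anova_metrics(true_y, test_y):
    """Average and standard deviation of the residuals between two equally
    sampled curves, normalized by the peak magnitude of the true curve."""
    peak = max(abs(m) for m in true_y)
    r = [(m - c) / peak for m, c in zip(true_y, test_y)]  # normalized residuals
    n = len(r)
    mean = sum(r) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in r) / n)
    return mean, std
```

If the two curves really describe the same event, the residual mean should be near zero and the standard deviation small, with any remaining scatter attributable to random experimental error.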
From page 324...
... 69 APPENDIX A2: Multi-Channel Weight Factors The multi-channel mode in RSVVP was created for the specific purpose of comparing numerical simulations of vehicle impact into roadside barriers to the results from a full-scale crash test. The data that are typically collected in such tests include (at a minimum)
From page 325...
... 70 2. Area Method (default)
From page 326...
... 71 • Evaluation of the area of the True curve for each acceleration channel, ai , and rotational channel, vi. • Evaluation of the sum of the acceleration areas, aSum, and rotational areas, vSum.
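The area-based weighting steps above can be sketched as follows. The even split of the total weight between the translational (acceleration) and rotational groups is an illustrative assumption, not necessarily RSVVP's exact scheme; within each group, a channel's weight is proportional to the area under its true curve.

```python
def area_weights(accel_areas, rot_areas):
    """Weighting factors from the areas a_i (acceleration channels) and
    v_i (rotational channels) under the true curves.  Each channel's weight
    is its area divided by its group's total (aSum or vSum); the two groups
    are assumed here to share the overall weight equally."""
    a_sum = sum(accel_areas)   # aSum in the text
    v_sum = sum(rot_areas)     # vSum in the text
    wa = [0.5 * a / a_sum for a in accel_areas]
    wv = [0.5 * v / v_sum for v in rot_areas]
    return wa + wv
```

By construction the weights are non-negative and sum to one, so the weighted average of the per-channel metrics stays on the same scale as the individual metrics.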
From page 327...
... 72 • The histogram should have a normal (bell-shaped) distribution, and • The cumulative distribution should have an "S" shape. If the histogram and the cumulative distribution do not have these shape characteristics, the residuals between the two curves are most likely due to some systematic error, which should be identified and corrected.
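The histogram and empirical cumulative distribution used in this check can be computed as in the following pure-Python sketch (the helper name is hypothetical):

```python
def residual_distribution(residuals, bins=10):
    """Histogram and empirical cumulative distribution of the residuals,
    for the normality check described above (bell-shaped histogram,
    S-shaped cumulative distribution)."""
    lo, hi = min(residuals), max(residuals)
    width = (hi - lo) / bins or 1.0           # guard against a zero range
    hist = [0] * bins
    for r in residuals:
        i = min(int((r - lo) / width), bins - 1)  # clamp top edge into last bin
        hist[i] += 1
    srt = sorted(residuals)
    cdf = [(v, (k + 1) / len(srt)) for k, v in enumerate(srt)]
    return hist, cdf
```

Plotting `hist` and `cdf` lets the user judge visually whether the residuals look like random noise (bell and "S" shapes) or betray a systematic error.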
From page 328...
... B-i Appendix B: Roadside Safety Verification and Validation Program (RSVVP) Programmer's Manual December 2009 (Revision 1.4)
From page 329...
... B-ii CONTENTS: FOREWORD …
From page 330...
... B-iii List of Figures Figure B-1: Representation of (a) Shift and (b)
From page 331...
... B-iv Figure B-40: Diagram of sub-block E.2 (Excel results)
From page 332...
... B-1 FOREWORD This guide describes the implementation of the Roadside Safety Verification and Validation Program (RSVVP) developed under the NCHRP 22-24 project.
From page 333...
... B-2 INTRODUCTION RSVVP quantitatively compares one or multiple pairs of curves by computing comparison metrics, which are objective, quantitative mathematical measures of the agreement between two curves. The comparison metrics calculated by RSVVP can be used to: (1) validate computer simulation models using data obtained from experimental tests, (2) verify the results of a simulation against another simulation or an analytical solution, and (3) assess the repeatability of a physical experiment. Although RSVVP has been specifically developed for the verification and validation of roadside safety simulations and crash tests, it can be used to compare virtually any pair of curves.
From page 334...
... B-3 DESCRIPTION OF TASKS This section gives a description of the operations performed by RSVVP, and when possible the theoretical background behind the operations. The tasks performed by RSVVP can be categorized into six main categories: 1)
From page 335...
... B-4 o Re-sampling and Trimming o Synchronization The re-sampling and trimming operations are performed by default as they are necessary to correctly compare any pair of curves because both curves must match point-to-point. In the next sections, a brief description of each preprocessing operation and the theory/method implemented are given.
From page 336...
... B-5 implementing a digital filter which complies with the specifications of the SAE J211 standard [2], the filtering reference for NCHRP Report 350 [3]
From page 337...
... B-6 curve. The length of the head and tail is equal to the closest integer approximation of the curve frequency divided by 10.
From page 338...
... B-7 The correction of the shift effect can be easily achieved by translating the whole curve by the shift value a. As for the drift effect, once the value of the slope m has been calculated from Equation (10)
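Once the shift value a and the drift slope m have been estimated, the correction itself reduces to subtracting a constant and a linear ramp, as in this sketch (the function name is illustrative; a and m are assumed already computed as described above):

```python
def correct_shift_and_drift(t, y, a, m):
    """Remove a constant sensor offset a and a linear drift of slope m
    from a time history sampled at times t."""
    return [yi - a - m * ti for ti, yi in zip(t, y)]
```

After the correction, a signal that was purely offset and drift reduces to zero, while genuine signal content is preserved unchanged.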
From page 339...
... B-8 smaller of the end values between the two original time vectors is considered, in order to trim them to the same interval. Note that, because of the new sampling rate, the end value of the new time vector may be rounded down from the maximum time of the original curve.
From page 340...
... B-9 (a) Positive offset s (b)
From page 341...
... B-10 Table B-1: Definition of MPC metrics. [Columns: Magnitude, Phase, Comprehensive. Integral comparison metrics: Geers [5]
From page 342...
... B-11 Table B-2: Definition of single-value metrics. [Integral comparison metrics: Correlation Coefficient [10]
From page 343...
... B-12 compute the weighted average of the metrics, plot the time history of the metrics, and prepare the variables for outputting results to Excel files. The program can evaluate metrics for either a single pair of curves or multiple pairs simultaneously.
From page 344...
... B-13 In particular, the output of the results in Excel format requires the results be stored in variables characterized by particular data structures which will be discussed in detail in the following section. PROGRAM STRUCTURE The information presented herein is intended to illustrate the basic structure and organization of the RSVVP program so that users can easily locate where and how each specific task is programmed.
From page 345...
... B-14 following sections. For reasons which will be explained later in this manual, it was generally not possible to implement each programmed task in a single corresponding block (i.e., there is not a one-to-one correspondence between the tasks and the blocks)
From page 346...
... B-15 Because of the complexity of the code, the algorithms implemented in each block are described at different levels of detail, starting from a general overview and going more into details at each further level of the flowcharts. In particular, each block is described using flowcharts at three different levels: 1.
From page 347...
... B-16 GRAPHICAL USER INTERFACES The interaction between the program and the user is achieved using various Graphical User Interfaces (GUIs)
From page 348...
... B-17 Figure B-5: Structure of a Matlab® GUI (Main function; Opening function; Output function; Objective functions 1 through N).
From page 349...
... B-18 the Main function and may return the variable "handles" to the same local workspace. In most cases, with rare exceptions, the exchange of information between an Objective function and the Main function is achieved using the field ‘output' of the structure variable "handles" (i.e., handles.output)
From page 350...
... B-19 Figure B-7: Representation of the workspace of the Main and Objective functions of a GUI (input configuration / output configuration). When the GUI main function closes and returns to the main invoking code, the related figure is not automatically closed by Matlab®.
From page 351...
... B-20 [Flowchart: Block A (Initialization); Block B (Input and Option selection)]
From page 352...
... B-21 input/preprocessing operations. Figure B-10 shows the main structure of the sub-block Initialization.
From page 353...
... B-22 In the case of multiple channels, most of the option values for each input channel are stored in vectors instead of scalar variables (The variables which become vectors are indicated with "(v) " in the previous list)
From page 354...
... B-23 the loading and the specific preprocessing operations, refer to the next section (Block B)
From page 355...
... B-24 After the cycle has concluded and all the channels have been input and preprocessed, the minimum length among all the channel pairs is computed. In the case of multiple channels, the weighting factors or the resultants are calculated first; then the program cycles over the channels/resultants to synchronize the curves, if requested by the configuration file.
From page 356...
... B-25 The main characteristic of Block B is that the three sub-blocks are implemented in sequence into a loop which terminates only when the user decides to proceed to the evaluation of metrics (Figure B-13)
From page 357...
... B-26 Figure B-14: Flow chart of the main algorithm of Block B. Each time they open, the graphical interfaces of the three sub-blocks load the options and data input by the user during the previous iteration of the main loop.
From page 358...
... B-27 Input/Preprocessing (Block B.1) This sub-block is the first of Block B and implements the GUI which handles the input of the curves and their preprocessing (GUI_1_3)
From page 359...
... B-28 Load_Preprocess As previously mentioned, this script is the core of Block B.1 which manages the input of the channel/s and the corresponding preprocessing. The algorithm of the script is shown in Figure B-16 and Figure B-17.
From page 360...
... B-29 Apart from the scaling operation, which is performed by the script Load_curves, and the manual trimming of the curves, which is implemented in the script Manual_trim_shift, all the other preprocessing operations are invoked by the script Preprocessing. The scaling option is implemented by simply multiplying the vector containing the data points by the scaling factor defined by the user for that specific channel and curve.
From page 361...
... B-30 [Flowchart fragment: Reply.flag = 0 or Reply.flag = 4? Open GUI_1_3. Multichannel?]
From page 362...
... B-31 [Flowchart fragment: Multichannel? Next/Previous Channel?]
From page 363...
... B-32 Figure B-18: Sketch of the structure of the variable Preprocessed (a column of time values 0.00, 0.02, 0.04, …). Load_curves This script manages the loading of the input curves from the ASCII files provided by the user.
From page 364...
... B-33 Preprocessing 2 (Block B.2) This sub-block implements the synchronization of the curves in case multiple channels are input.
From page 365...
... B-34 Preprocessing_2 This script is the core of sub-block B.2. It manages the synchronization of the multiple channels and the selection of the method to use for computing the equivalent metrics (weighting factors or resultant)
From page 366...
... B-35 Figure B-20: Flow chart of the script Preprocessing_2 (sub-block B.2)
From page 367...
... B-36 If the loop iteration is the first for that channel (Reply_2.first_iteration = 1), the variable Preprocessed_3 is initialized.
From page 368...
... B-37 Figure B-21: Diagram of sub-block B.3 (Metrics selection)
From page 369...
... B-38 Figure B-22: Flow chart of the script Metrics_selection (sub-block B.3)
From page 370...
... B-39 Figure B-23: Diagram of Block C Curves preparation (Block C.1)
From page 371...
... B-40 Figure B-24: Data organization of the matrix variables True and Test (columns Channel 1 through Channel 6).
From page 372...
... B-41 Curves histories (Block C.2) This sub-block saves the time histories of both the original and preprocessed input curves.
From page 373...
... B-42 Curve plotting (Block C.3) The sub-block Curves plotting performs two main operations: (i)
From page 374...
... B-43 Figure B-27: Flow chart of the script Whole_plot_curves (sub-block C.3)
From page 375...
... B-44 [Flowchart: Block A (Initialization); Block B (Input and Option selection)]
From page 376...
... B-45 Whole time (Block D.1) The sub-block Whole time calculates the metrics in the full time interval on which the curves are defined.
From page 377...
... B-46 Figure B-30: Diagram of sub-block D.1 (Whole time)
From page 378...
... B-47 Weighting_scheme_whole This script calculates the weighting factors in case of multiple channels. The steps followed to compute the weighting factors are shown in the flowchart in Figure B-32.
From page 379...
... B-48 Figure B-32: Flow chart of the scripts Whole_time_evaluation (left) and Whole_time_postprocessing (right)
From page 380...
... B-49 Whole_time_evaluation This script manages the computation of the metrics according to the selection made by the user in the corresponding GUI. The variable used to store the metric flags is Metrics.
From page 381...
... B-50 by the script Weighting_scheme_whole. These weighted values are then summed up immediately after the loop ends in order to obtain a weighted average.
From page 382...
... B-51 [Diagram: variable layout with rows Sprague&Geers M, Sprague&Geers P, …, T-test and columns Channel 1 through Channel 6]
From page 383...
... B-52 Figure B-34: Data organization of the variables Output_single_history_xls and Output_channel_history_xls. Table_output_whole This script is the last one invoked by sub-block D.1 and contains commands to create a summary table with graphics and the values of the metrics.
From page 384...
... B-53 Because the scripts used in the following sub-blocks have a structure similar to the corresponding scripts used in the preceding sub-block, D.1, the reader can refer to the description already given in the previous section. The only script described next in this section is Store_results, as it is specific to sub-block D.2.
From page 385...
... B-54   Figure B-35: Diagram of sub-block D.2 (User time)
From page 386...
... B-55 Save Results This script manages the storage of the results obtained for each iteration of the main loop of sub-block D.2 (i.e., for each user-defined time interval on which the comparison metrics are computed)
From page 387...
... B-56 [Diagram: data organization of the variables Output_xls and Output_channel_xls; rows Sprague&Geers M, Sprague&Geers P, …, T-test; columns for the whole time interval and user time intervals #1 through #n; one layer per channel (Channel 1 through Channel 6)]
From page 388...
... B-57 [Flowchart: Block A (Initialization); Block B (Input and Option selection)]
From page 389...
... B-58 Configuration file (E.1) The sub-block Configuration file manages updating the configuration file with information about any time intervals defined by the user during the execution of RSVVP.
From page 390...
... B-59 Figure B-40: Diagram of sub-block E.2 (Excel results): select directory (GUI); show onscreen message; save results (Excel_results); save time histories (Excel_time_histories).
From page 391...
... B-60 channels, a new sheet is written for each channel, and the same headers are also written in each channel sheet. This operation is implemented in a loop which cycles over the number of input channels.
From page 392...
... B-61 Figure B-41: Flow chart of the script Excel_results (sub-block E.2)
From page 393...
... B-62 Figure B-42: Flow chart of the script Excel_results_MPC (called by the script Excel_results)
From page 394...
... B-63 Figure B-43: Data extraction from the variable Output_channel_xls. Step 1: extract the metric values (Sprague & Geers M and P through the T-test) for all channels over all time intervals (whole time interval and user time intervals #1 through #n). Step 2: extract the metric values for each individual channel (e.g., the extracted values for Channel 1) over the same time intervals.
From page 395...
... B-64 case, a loop cycles over the number of time intervals defined by the user and creates an Excel file during each iteration. Also, if multiple channels were input, whether the comparison was performed on the whole time interval or on user-defined time interval(s), the algorithm cycles over each of the input channels in order to save them in separate sheets of the same Excel file.
From page 396...
... B-65 Folder selection (Block E.3) This sub-block (Figure B-45)
From page 397...
... B-66 REFERENCES [1] MathWorks, "Matlab® User Guide – high performance numeric computation and visualization Software", The MathWorks Inc., 3 Apple Hill Drive, Natick, MA, USA, 2008.
From page 399...
... B-68 APPENDIX B-1: CODE VERIFICATION The implementation of the following main features of RSVVP has been verified:
- Sprague & Geers metrics
- Knowles & Gear metrics
In order to verify the correct implementation of the Sprague & Geers metric, a comparison of ideal analytical wave forms differing only in magnitude or phase was performed and the results were compared with the outcomes obtained by Schwer [8] using the same benchmark curves.
From page 400...
... B-69 In both cases, the sampling period was Δt = 0.02 sec over the interval 0 sec ≤ t ≤ 2 sec. Figure B-46 shows the graphs of the analytical curves used respectively for the magnitude error and the phase error tests.
From page 401...
... B-70 Table B-4: Values of the Sprague & Geers metric components calculated using the RSVVP program.

Metric component | 20% magnitude difference | +20% phase difference | -20% phase difference
Magnitude        | 0.2                      | ≈0                    | ≈0
Phase            | 0                        | 0.195                 | 0.195
Combined         | 0.2                      | 0.195                 | 0.195

Figure B-47.
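The magnitude column of Table B-4 can be reproduced with a short re-implementation of the Sprague & Geers components. The decaying sine below is a stand-in for the analytical benchmark wave form (the exact curve shape does not affect the magnitude result: scaling any curve by 1.2 yields M = 0.2 and P = 0):

```python
# Sketch re-implementation of the Sprague & Geers metric components,
# checked against the 20% magnitude-difference benchmark above.
import numpy as np

def sprague_geers(m, c, dt):
    """Magnitude (M), phase (P) and combined (C) components for a
    measured curve m and a computed curve c sampled at period dt."""
    psi_mm = dt * np.dot(m, m)
    psi_cc = dt * np.dot(c, c)
    psi_mc = dt * np.dot(m, c)
    M = np.sqrt(psi_cc / psi_mm) - 1.0
    P = np.arccos(np.clip(psi_mc / np.sqrt(psi_mm * psi_cc), -1.0, 1.0)) / np.pi
    return M, P, np.hypot(M, P)

dt = 0.02                                    # sampling period from the text
t = np.arange(0.0, 2.0 + dt / 2, dt)         # 0 sec <= t <= 2 sec
true = np.exp(-t) * np.sin(2.0 * np.pi * t)  # stand-in analytical wave form
test = 1.2 * true                            # 20% magnitude difference, no phase difference
M, P, C = sprague_geers(true, test, dt)      # expect M = 0.2, P = 0, C = 0.2
```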
From page 402...
... B-71 a) phase difference of +20% b)
From page 403...
... B-72 APPENDIX B-2: COMPILING RSVVP The Matlab® code of RSVVP can be compiled as a standalone executable. This allows users who do not have Matlab® installed on their machine to run RSVVP.
From page 404...
... B-73 APPENDIX B-3: Type of Variables Used in the Code The main types of variables available in Matlab® and used in the implementation of the code of RSVVP are:
- Matrices and arrays (floating-point/integer data, characters and strings)
- Structures
- Cell arrays
Matrices and arrays are used to store both numbers and text characters.
From page 405...
... B-74 numbers, while row two holds three other types of arrays, the last being a second cell array nested in the outer one. Figure B-50: Example of a Matlab structure variable [1]
From page 406...
... B-75 APPENDIX B-4: Preprocessing Algorithms This appendix describes the general algorithms used to perform the following preprocessing operations:
- Filtering
- Shift/drift
- Resampling & trimming
- Synchronization
Filtering The filter process is implemented in the function sae_filter, whose algorithm is shown in Figure B-51. The function receives as input the following three variables: (i)
From page 407...
... B-76 Figure B-51: Algorithm of the SAE filtering. Shift/drift The shift and drift corrections are implemented in the script Shift_drift.
From page 408...
... B-77 steps described in the algorithm are performed on the true curve, the test curve, or both, according to the user's selection. Figure B-52: Main algorithm of the script Shift_drift.
From page 409...
... B-78 Flow charts of the functions shift_value and drift_value (in the shift loop, the index i is incremented while curve(i) remains below a fraction of the peak value).
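The shift and drift corrections can be sketched as follows. This is an illustrative reading of the flow charts, not RSVVP's exact algorithm: the 5% peak threshold and the linear end-to-end baseline are assumptions.

```python
# Hedged sketch of typical shift and drift corrections for a crash-test
# time history; threshold and baseline choices are illustrative assumptions.
import numpy as np

def shift_correction(curve, threshold=0.05):
    """Subtract the mean of the initial 'quiet' samples, i.e. those
    recorded before the signal first exceeds a fraction of its peak."""
    peak = np.max(np.abs(curve))
    idx = np.argmax(np.abs(curve) >= threshold * peak)  # first sample above threshold
    n = max(idx, 1)
    return curve - np.mean(curve[:n])

def drift_correction(curve):
    """Remove a linear baseline drift interpolated between the first
    and last samples of the record."""
    baseline = np.linspace(curve[0], curve[-1], len(curve))
    return curve - baseline
```

Either correction (or both) would be applied to the true curve, the test curve, or both, mirroring the user selection described in the text.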
From page 410...
... B-79 Figure B-54: Algorithm of the script Resampling_trimming.
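The Resampling_trimming step can be sketched as a linear re-interpolation of both curves onto a common uniform time vector, trimmed to the shorter record (an illustrative sketch, not RSVVP's actual script):

```python
# Sketch: resample two curves onto a common time vector and trim to the
# shorter of the two records.
import numpy as np

def resample_and_trim(t1, y1, t2, y2, dt):
    """Return a common time vector at period dt and both curves
    linearly interpolated onto it."""
    t_end = min(t1[-1], t2[-1])              # trim to the shorter curve
    t = np.arange(0.0, t_end + dt / 2, dt)   # common uniform time vector
    return t, np.interp(t, t1, y1), np.interp(t, t2, y2)
```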
From page 411...
... B-80 Synchronization The automatic synchronization of the curves is implemented in the script Curve_synchronizing whose algorithm is shown in Figure B-55. This script calculates the shift value which minimizes a target function, which can be either the area between the two curves (area of the residuals method)
From page 412...
... B-81 functions, the shifting of the two curves is performed by invoking the user-defined function shift, whose algorithm is shown in Figure B-57. Figure B-56: Flow charts of the functions area_res and sre.
From page 413...
... B-82 common time vector is used for both the interpolated true and test curves, which starts at time zero and is trimmed at the end by a value equal to the shift. (Decision box in the flow chart: is the shift value s less than 0.5 times the length of the shorter curve?)
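The synchronization search can be sketched as a brute-force scan over candidate integer-sample shifts, keeping the shift that minimizes one of the two target functions mentioned above (area of the residuals, or sum of squared residuals). This is an illustrative sketch, not Curve_synchronizing itself:

```python
# Sketch: find the sample shift minimizing a residual-based target
# function; a negative result means the test curve lags the true curve.
import numpy as np

def best_shift(true_curve, test_curve, max_shift, target="sse"):
    best, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = true_curve[s:], test_curve[:len(test_curve) - s]
        else:
            a, b = true_curve[:s], test_curve[-s:]
        n = min(len(a), len(b))
        r = a[:n] - b[:n]
        # "sse" = sum of squared residuals; otherwise area of residuals
        cost = np.sum(r * r) if target == "sse" else np.sum(np.abs(r))
        if cost < best_cost:
            best, best_cost = s, cost
    return best
```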
From page 414...
... B-83 APPENDIX B-5: Metrics Algorithms This appendix gives a detailed description of the algorithms used to implement the various comparison metrics available in RSVVP. Note that each metric implemented in RSVVP is repeatedly evaluated considering time intervals increasing in size in order to track the behavior of the metrics when the two curves are compared on a limited portion of the time domain.
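The growing-window evaluation described above can be sketched as follows, using the Sprague & Geers magnitude component as the example metric (an illustrative sketch, not RSVVP's code):

```python
# Sketch: re-evaluate a metric on progressively longer prefixes of the
# two records so its history over the time domain can be plotted.
import numpy as np

def metric_history(m, c, dt, step=10):
    """Sprague & Geers magnitude component on windows of increasing size."""
    out = []
    for end in range(step, len(m) + 1, step):
        psi_mm = dt * np.dot(m[:end], m[:end])
        psi_cc = dt * np.dot(c[:end], c[:end])
        out.append(np.sqrt(psi_cc / psi_mm) - 1.0)
    return out
```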
From page 415...
... B-84  2mI mm (B5-8)
From page 416...
... B-85 ANOVA The Analysis of Variance metrics are based on the residuals between the measured and the computed curves. In particular, the residuals are normalized to the peak value of the measured curve.
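The ANOVA metrics as described above can be sketched directly: the residuals between the measured and computed curves are normalized to the peak of the measured curve, and the reported metrics are the mean and standard deviation of those normalized residuals (an illustrative sketch):

```python
# Sketch of the ANOVA metrics: mean and standard deviation of the
# residuals, normalized to the peak value of the measured curve.
import numpy as np

def anova_metrics(measured, computed):
    residuals = (measured - computed) / np.max(np.abs(measured))
    return np.mean(residuals), np.std(residuals)
```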
From page 417...
... C-1 APPENDIX C : BENCHMARK CASE EXAMPLE FORMS The following sections include the filled-out forms and reports corresponding to the benchmark cases described in Chapter 6. The blank forms with instructions are included in Appendix E
From page 418...
... C-2 APPENDIX C1: TEST CASE 1: PICKUP TRUCK STRIKING A STRONG-POST W- BEAM GUARDRAIL WITHOUT A CURB VALIDATION/VERIFICATION REPORT FOR A _______________Report 350 2000P Pickup Truck_________________________ (Report 350 or MASH or EN1317 Vehicle Type) Striking a _______Steel deformable barrier G4(1S)
From page 419...
... C-3 PART I: BASIC INFORMATION 1. What type of roadside hardware is being evaluated (check one)
From page 420...
... C-4 PART II: ANALYSIS SOLUTION VERIFICATION Table C1-1. Analysis Solution Verification Table.
From page 421...
... C-5 PART III: TIME HISTORY EVALUATION TABLE Table C1-2. Roadside Safety Validation Metrics Rating Table – Time History Comparisons (single channel option)
From page 422...
... C-6 Table C1-3(a)
From page 423...
... C-7 Table C1-3(b)
From page 424...
... C-8 PART IV: PHENOMENA IMPORTANCE RANKING TABLE Table C1-4. Evaluation Criteria Test Applicability Table.
From page 425...
... C-9 Table C1-5. Roadside Safety Phenomena Importance Ranking Table.
From page 426...
... C-10 Table C1-5. Roadside Safety Phenomena Importance Ranking Table (continued)
From page 427...
... C-11 Table C1-5. Roadside Safety Phenomena Importance Ranking Table (continued)
From page 428...
... C-12 Plots of the time histories used to evaluate the comparison metrics S&G mag. = 21.5√ S&G phase = 33.3√ Mean = 0.02√ St.D.
From page 433...
... C-17 APPENDIX C2: PICKUP TRUCK STRIKING A STRONG-POST W-BEAM GUARDRAIL IN COMBINATION WITH AN AASHTO TYPE B CURB VALIDATION/VERIFICATION REPORT FOR A _______________Report 350 2000P Pickup Truck_______________________________ (Report 350 or MASH or EN1317 Vehicle Type) Striking a Steel deformable barrier G4(1S)
From page 434...
... C-18 PART I: BASIC INFORMATION 1. What type of roadside hardware is being evaluated (check one)
From page 435...
... C-19 PART II: ANALYSIS SOLUTION VERIFICATION Table C2-1. Analysis Solution Verification Table.
From page 436...
... C-20 PART III: TIME HISTORY EVALUATION TABLE Table C2-2. Roadside Safety Validation Metrics Rating Table – Time History Comparisons (single channel option)
From page 437...
... C-21 Table C2-3(a)
From page 438...
... C-22 Table C2-3(b)
From page 439...
... C-23 PART IV: PHENOMENA IMPORTANCE RANKING TABLE Table C2-4. Evaluation Criteria Test Applicability Table.
From page 440...
... C-24 Table C2-5(a)
From page 441...
... C-25 Table C2-5(b)
From page 442...
... C-26 Table C2-5(c)
From page 443...
... C-27 Plots of the time histories used to evaluate the comparison metrics A Accelerations S&G mag.
From page 444...
... C-28 S&G mag. = 0.5 √ S&G phase = 48.6 x Mean = 0.01√ St.D.
From page 445...
... C-29 S&G mag. = 9.7 √ S&G phase = 25.2 √ Mean = 0.07 x St.D.
From page 446...
... C-30 APPENDIX C3: SMALL CAR STRIKING A VERTICAL RIGID WALL VALIDATION/VERIFICATION REPORT FOR A ________________________EN 1317 Vehicle ___________________________________ (Report 350 or MASH or EN1317 Vehicle Type) Striking a ______________________Concrete barrier_______________________________ (roadside hardware type and name)
From page 447...
... C-31 PART I: BASIC INFORMATION 1. What type of roadside hardware is being evaluated (check one)
From page 448...
... C-32 PART II: ANALYSIS SOLUTION VERIFICATION Table C3-1. Analysis Solution Verification Table.
From page 449...
... C-33 PART III: TIME HISTORY EVALUATION TABLE Table C3-2. Roadside Safety Validation Metrics Rating Table – Time History Comparisons (single channel option)
From page 450...
... C-34 Table C3-3. Roadside Safety Validation Metrics Rating Table – Time History Comparisons (Multiple channels)
From page 451...
... C-35 PART IV: PHENOMENA IMPORTANCE RANKING TABLE Table C3-4. Evaluation Criteria Test Applicability Table.
From page 452...
... C-36 Table C3-5. Roadside Safety Phenomena Importance Ranking Table.
From page 453...
... C-37 Table C3-5. Roadside Safety Phenomena Importance Ranking Table (continued)
From page 454...
... C-38 (2) The severity indexes were computed considering the curves preprocessed by RSVVP on the time interval [0 sec, 0.2 sec]
From page 455...
... C-39 Plot of the time histories used to evaluate the comparison metrics X-Acceleration (g) Y-Acceleration (g)
From page 456...
... C-40 Z-Acceleration (g) Yaw rate (rad/sec)
From page 457...
... C-41 APPENDIX C4: SMALL CAR STRIKING A VERTICAL RIGID WALL VALIDATION/VERIFICATION REPORT FOR A ________________________EN 1317 Vehicle ___________________________________ (Report 350 or MASH or EN1317 Vehicle Type) Striking a ______________________Concrete barrier_______________________________ (roadside hardware type and name)
From page 458...
... C-42 PART I: BASIC INFORMATION 1. What type of roadside hardware is being evaluated (check one)
From page 459...
... C-43 PART II: ANALYSIS SOLUTION VERIFICATION Table C4-1. Analysis Solution Verification Table.
From page 460...
... C-44 PART III: TIME HISTORY EVALUATION TABLE Table C4-2. Roadside Safety Validation Metrics Rating Table – Time History Comparisons (single channel option)
From page 461...
... C-45 Table C4-3. Roadside Safety Validation Metrics Rating Table – Time History Comparisons (Multiple channels)
From page 462...
... C-46 PART IV: PHENOMENA IMPORTANCE RANKING TABLE Table C4-4. Evaluation Criteria Test Applicability Table.
From page 463...
... C-47 Table C4-5. Roadside Safety Phenomena Importance Ranking Table.
From page 464...
... C-48 Table C4-5. Roadside Safety Phenomena Importance Ranking Table (continued)
From page 465...
... C-49 (2) The severity indexes were computed considering the curves preprocessed by RSVVP on the time interval [0 sec, 0.2 sec]
From page 466...
... C-50 Plots of the time histories used to evaluate the comparison metrics X-Acceleration (g) Y-Acceleration (g)
From page 467...
... C-51 Z-Acceleration (g) Yaw rate (rad/sec)
From page 468...
... C-52 Roll rate (rad/sec) Pitch rate (rad/sec)
From page 469...
... C-53 APPENDIX C5: TRACTOR TRAILER TRUCK STRIKING A 42" TALL RIGID CONCRETE MEDIAN BARRIER VALIDATION/VERIFICATION REPORT FOR A _______________Tractor-Semitrailer Model (36000V) __________________________ (Report 350 Vehicle Type)
From page 470...
... C-54 PART I: BASIC INFORMATION 1. What type of roadside hardware is being evaluated (check one)
From page 471...
... C-55 PART II: ANALYSIS SOLUTION VERIFICATION Table C5-1. Analysis Solution Verification Table.
From page 472...
... C-56 PART III: TIME HISTORY EVALUATION TABLE Table C5-2. Roadside Safety Validation Metrics Rating Table – Time History Comparisons (single channel option)
From page 473...
... C-57 Table C5-3. Roadside Safety Validation Metrics Rating Table – Time History Comparisons (multi-channel option using Area II method)
From page 474...
... C-58 PART IV: PHENOMENA IMPORTANCE RANKING TABLE Table C5-4. Evaluation Criteria Test Applicability Table.
From page 475...
... C-59 Table C5-5. Structural Adequacy Phenomena for the Tractor-Semitrailer Test Case.
From page 476...
... C-60 Plots of the time histories used to evaluate the comparison metrics S&G mag. = 12.4√ S&G phase = 48.5x Mean = 0.02√ St.D.
From page 477...
... C-61 S&G mag. = 12.8√ S&G phase = 47.1x Mean = 0.00√ St.D.
From page 478...
... C-62 APPENDIX C6: ROADSIDE HARDWARE PIRT FOR A STRONG-POST W-BEAM GUARDRAIL WITH WOOD BLOCKOUTS Developer: Worcester Polytechnic Institute Worcester, MA Model Date: January 2002 Report Date: November 30, 2009 Barrier: The modified G4(1S) guardrail with wood blockouts is composed of 12-gauge w-beam rails supported by W150x13.5 steel posts with 150x200 mm wood blockouts (i.e., the type of blockout used in the G4(2W)
From page 479...
... C-63 Figure C6-2. Model of the G4(1S) Strong-Post W-Beam Guardrail.
From page 480...
... C-64 Table C6-2. Comparison Metric Evaluation Table for Phenomena #1.
From page 481...
... C-65 Table C6-3. Comparison Metric Evaluation Table for Phenomena #2.
From page 482...
... C-66 Table C6-4. Comparison Metric Evaluation Table for Phenomena #3.
From page 483...
... C-67 Table C6-5. Comparison Metric Evaluation Table for Phenomena #4.
From page 484...
... C-68 Table C6-6. Comparison Metric Evaluation Table for Phenomena #5.
From page 485...
... C-69 Table C6-7. Comparison Metric Evaluation Table for Phenomena #6.
From page 486...
... C-70 Table C6-8. Phenomenon Importance Ranking Table for the Modified G4(1S)
From page 487...
... C-71 APPENDIX C7: VEHICLE PIRT FOR A 1992 FREIGHTLINER FLD120 TRACTOR PHENOMENA IMPORTANCE RANKING TABLE FOR A 1992 FREIGHTLINER FLD120 TRACTOR Developer: NCAC/Battelle/ORNL/University of Tennessee at Knoxville Date: 11/30/2009 Model: Reduced-element (i.e., bullet) model of a 1992 Freightliner FLD120 Tractor with integral sleeper-cabin.
From page 488...
... C-72 Table C7-1. List of Experiments to be used in the PIRT Development 1.
From page 489...
... C-73 Table C7-2. Comparison Metric Evaluation Table.
From page 490...
... C-74 Table C7-3. Comparison Metric Evaluation Table for Phenomena #2.
From page 491...
... C-75 Table C7-4. Comparison Metric Evaluation Table for Phenomena #3.
From page 492...
... C-76 Table C7-5. Comparison Metric Evaluation Table for Phenomena #4.
From page 493...
... C-77 Table C7-6. Comparison Metric Evaluation Table for Phenomena #5.
From page 494...
... C-78 Table C7-7. Comparison Metric Evaluation Table for Phenomena #5.
From page 495...
... C-79 Table C7-8 Comparison Metric Evaluation Table for Phenomena #5. PHENOMENA #5: Rear "Air-Bag" Suspension (60 psig bag pressure, 0.1 in/sec)
From page 496...
... C-80 Table C7-9 . Comparison Metric Evaluation Table for Phenomena #5.
From page 497...
... C-81 Table C7-10. Comparison Metric Evaluation Table for Phenomena #6.
From page 498...
... C-82 Table C7-11. Phenomenon Importance Ranking Table for Tractor-Semitrailer Model.
From page 499...
... C-83 APPENDIX C8: VEHICLE PIRT FOR THE MODIFIED C2500R VEHICLE MODEL Developer: National Crash Analysis Center George Washington University Modified by: Worcester Polytechnic Institute Worcester, MA Model Date: January 2002 Model: The NCAC C2500R finite element model is a reduced element model of a 1995 Chevrolet 2500 pickup truck. The C2500R model, shown in Figure C8-1, has been used by several research organizations over the years and each organization has made changes and improvements to the model based on their particular analysis needs.
From page 500...
... C-84 Table C8-1. List of Experiments used in the PIRT Development 1.
From page 501...
... C-85 Table C8-2. Comparison Metric Evaluation Table.
From page 502...
... C-86 Table C8-3. Comparison Metric Evaluation Table.
From page 503...
... C-87 Table C8-4. Comparison Metric Evaluation Table.
From page 504...
... C-88 Table C8-5. Comparison Metric Evaluation Table.
From page 505...
... C-89 Table C8-6. Comparison Metric Evaluation Table.
From page 506...
... C-90 Table C8-7. Comparison Metric Evaluation Table.
From page 507...
... C-91 Table C8-8. Comparison Metric Evaluation Table.
From page 508...
... C-92 Table C8-9. Comparison Metric Evaluation Table.
From page 509...
... C-93 Table C8-10. Phenomenon Importance Ranking Table for the Modified C2500 Vehicle Model Validated Phenomenon Validated?
From page 510...
... C-94 Table C8-5. Comparison Metric Evaluation Table.
From page 511...
... C-95 Table C8-6. Comparison Metric Evaluation Table.
From page 512...
... C-96 Table C8-7. Comparison Metric Evaluation Table.
From page 513...
... C-97 Table C8-8. Comparison Metric Evaluation Table.
From page 514...
... C-98 Table C8-9. Comparison Metric Evaluation Table.
From page 515...
... C-99 Table C8-10. Phenomenon Importance Ranking Table for the Modified C2500 Vehicle Model Validated Phenomenon Validated?
From page 516...
... 1 APPENDIX D SURVEY OF PRACTITIONERS The survey of practitioners is included in the following pages. The survey form itself is provided first, followed by a tabulation of the survey responses.
From page 531...
... 16 Summary of the survey responses
From page 549...
... E-1 APPENDIX E VALIDATION/VERIFICATION REPORT FORMS A _______________ ________________________________________________________ (Report 350 or MASH or EN1317 Vehicle Type) Striking a _________________________________________________________________ (roadside hardware type and name)
From page 550...
... E-2 PART I: BASIC INFORMATION These forms may be used for validation or verification of roadside hardware crash tests. If the known solution is a full-scale crash test (i.e., physical experiment)
From page 551...
... E-3 PART II: ANALYSIS SOLUTION VERIFICATION Using the results of the analysis solution, fill in the values for Table E-1. These values are indications of whether the analysis solution produced a numerically stable result and do not necessarily mean that the result is a good comparison to the known solution.
From page 552...
... E-4 PART III: TIME HISTORY EVALUATION TABLE Using the RSVVP computer program ('Single channel' option), compute the Sprague & Geers MPC metrics and ANOVA metrics using time-history data from the known and analysis solutions for a time period starting at the beginning of the contact and ending at the loss of contact.
From page 553...
... E-5 Table E-2. Roadside Safety Validation Metrics Rating Table – Time History Comparisons (single channel option)
From page 554...
... E-6 Table E-3. Roadside Safety Validation Metrics Rating Table – Time History Comparisons (multi- channel option)
From page 555...
... E-7 PART IV: PHENOMENA IMPORTANCE RANKING TABLE Table E-4 is similar to the evaluation tables in Report 350 and MASH. For the Report 350 or MASH test number identified in Part I (e.g., test 3-10, 5-12, etc.)
From page 556...
... E-8 Table E-4. Evaluation Criteria Test Applicability Table.
From page 557...
... E-9 Complete Table E-5 according to the results of the known solution (e.g., crash test) and the numerical solution (e.g., simulation)
From page 558...
... E-10 rupture did occur, resulting in a phenomenon T entry of "yes" for the known solution, the known and analysis solutions do not agree and "no" should be entered in the "agree?" column.
From page 559...
... E-11 Table E-5(b)
From page 560...
... E-12 Table E-5(c)
