
High-Speed Weigh-in-Motion System Calibration Practices (2008)

Chapter Four - Major Survey Findings


DOTs replied as follows when asked how they cooperate with the enforcement agency in their state managing WIM systems:

• Six reported that they provide traffic WIM data to an enforcement unit for enforcement planning purposes.
• One stated that it collects data from WIM systems that are calibrated and maintained by the enforcement agency.
• Two stated that they install, calibrate, and maintain WIM systems for enforcement screening at PrePass™ sites.
• One stated that its repair crews occasionally perform work for the enforcement agency.
• One stated that it allows the enforcement agency wireless real-time access to WIM data during mobile enforcement operations.
• One stated that it provides contract plans review, evaluation/inspection services, post-warranty maintenance, and traffic data collection/analysis for the enforcement agency's WIM systems.
• One stated that it coordinates with the enforcement agency regarding WIM data quality and future site locations with high volumes of heavy trucks.

When asked how they cooperate with the DOT in their state managing WIM systems, enforcement agencies replied as follows:

• Seven reported that they make WIM data from their main line screening systems available to the DOT's traffic data unit.
• One stated that it provides static scale readings to the DOT's traffic data unit for testing WIM systems.

The overall findings of the survey are presented in Appendix D, which is presented in the online version only. The survey results were divided into three parts, to be made available electronically, as follows:

• Systems used for traffic data collection only,
• Systems used for both traffic data collection and enforcement screening, and
• Systems used for enforcement only.

The following tables offer a summary of the background findings of the survey. Table 13 suggests that most traffic data collection WIM systems are Type I, although the majority of the combined traffic data/enforcement screening sites are Type II systems. Interestingly, the majority of WIM systems used exclusively for enforcement screening are Type II systems. Table 14 suggests that autocalibration is used primarily for Type II systems used for both data collection and enforcement screening. Table 15 suggests that the majority of state agencies, regardless of application, perform in-house post-installation WIM system calibrations.

It should be noted, however, that not all agencies use test trucks in post-installation calibration. Two of the 25 agencies that reported conducting post-installation calibration use alternative methods: one agency uses FAW monitoring of traffic stream vehicles, whereas another relies on autocalibration.

Table 16 shows the method used for performing routine WIM calibration. Note that some agencies use more than one method for calibration and/or for monitoring the calibration over time. Under "Other," agencies provided clarifications on how one or more of the three listed methods were implemented (these are not shown in the table).

A summary of the actual number of agencies using particular WIM calibration methods is given in Table 17. It is noted that the majority of agencies that responded use more than one method for WIM calibration. Only five of the agencies reported using traffic data QC alone for this purpose.

The following sections present summaries of the survey findings organized in three parts, as follows:

• Test truck WIM calibration,
• Traffic stream truck WIM calibration, and
• WIM calibration through WIM data QC.
TEST TRUCK WEIGH-IN-MOTION CALIBRATION QUESTIONS

Under this questionnaire segment, agencies were asked questions about how they calibrate their most common WIM systems using test trucks. The number of agencies using test trucks for WIM calibration varies depending on their function, as follows:

• Twenty-two of the 34 agencies managing traffic data collection WIM systems use test trucks for WIM calibration. Six of these agencies reported that their most common WIM systems are Type I, whereas the remaining 16 reported that their most common systems are Type II.

• Six of the seven agencies managing traffic data and enforcement screening WIM systems use test trucks for WIM calibration. Four of these agencies reported that their most common WIM systems are Type I, whereas one reported that its most common WIM systems are Type II.
• Two of the 11 agencies managing enforcement-only WIM systems use test trucks for WIM calibration. Their most common WIM system is Type I.

A summary of agency responses on the methodology used to calibrate WIM systems using test trucks is shown in Table 18, which suggests that approximately half of the agencies perform test truck WIM calibrations in-house, and Table 19, which suggests that the majority of agencies do so on a routine basis. A contractor/manager differs from an on-call contractor: the former decides when on-site calibration is needed, whereas the latter responds to an agency request to perform calibration. The frequency of routine calibrations ranges from 6 months to 24 months, with the majority performed at 12-month intervals, as shown in Table 20. In addition, some agencies do test truck calibrations in response to indications of calibration drift or for other reasons (e.g., changes in pavement or sensor condition).

TABLE 13 WHAT TYPES OF WIM SENSORS ARE USED?
                      Type I    Type II   Other
Traffic Data Only     56.3%     37.5%     6.3%
Both                  33.3%     55.6%     11.1%
Enforcement Only      29.4%     58.8%     11.8%
Note: Other = Type III, portable WIM sensors, pit-embedded single wheel path load cells.

TABLE 14 WHAT TYPE OF SENSORS IS AUTOCALIBRATION USED FOR?
                      Type I    Type II   Other
Traffic Data Only     14.3%     75.0%     10.7%
Both                  0.0%      100.0%    0.0%
Enforcement Only      50.0%     50.0%     0.0%

TABLE 15 IS POST-INSTALLATION CALIBRATION ALWAYS PERFORMED, AND WHO DOES IT?
                      Yes       Agency    Vendor
Traffic Data Only     75.8%     51.5%     33%
Both                  66.7%     66.7%     33.3%
Enforcement Only      90%       23%       77%

TABLE 16 WHICH CALIBRATION METHOD IS USED?
                      Test Trucks   Traffic Trucks   Traffic Data QC   Other
Traffic Data Only     66.7%         33.3%            57%               18%
Both                  33%           22.2%            39%               5%
Enforcement Only      5.9%          58.9%            29.4%             5.9%

TABLE 17 WIM CALIBRATION METHOD SUMMARY
Calibration Method                    Traffic Data Only   Both   Enforcement Only
Test Truck Only                       7                   0      0
Traffic Trucks Only                   2                   0      6
Traffic Data QC Only                  4                   0      1
Test Truck and Traffic Trucks         3                   0      0
Test Truck and Traffic Data QC        8                   3      0
Traffic Trucks and Traffic Data QC    2                   1      3
All three methods                     4                   3      1
Other                                 6                   1      0
No response                           4                   0      0

Interestingly, although the majority of agencies using test trucks for WIM calibration reports considering pavement roughness (Table 21), only about 25% does so objectively: 11.1% do the straightedge/circular plate test described by ASTM E1318-02, 3.6% simulate this test using software that accepts the pavement profile as input, and 14.8% simply use the IRI (Table 22). A minimal sketch of such a profile-based check is given after Table 22. Even fewer agencies consider the structural condition of the foundation of the sensors. Some do so indirectly by studying the shape of the sensor's raw signal, and do so only when WIM measurements seem to be inaccurate (Table 23).

Tables 24 to 34 give details of the actual procedure used in test truck calibration and the calculations used for computing calibration factors. The majority of agencies use one or two test trucks (Table 24). The single truck chosen is typically a Class 9. Where two trucks are used, one is a Class 9 and the other is either a Class 5 or 7, with only one agency reporting using a Class 10 truck. Some agencies use two Class 9 trucks. The majority of agencies specify an air suspension type for the test trucks (Table 25), but this is not always enforced. The majority of agencies uses fixed weigh scales for obtaining the static loads of test trucks, although more than 40% of agencies managing dual-use WIM use portable scales (Table 26). The majority of agencies weigh axle groups rather than individual axles (Table 27), which is probably the result of the configuration of enforcement static weigh scales. Most agencies perform the measurements only once (Table 27).

Table 28 lists the criteria used by agencies in selecting test truck speeds. About half of the WIM systems, regardless of application, are calibrated using test trucks running at the site median traffic speed. The remainder is divided between the posted speed limit and multiple speeds. The majority of agencies managing dual-use WIM systems use multiple test speeds. Instructions are issued to the drivers either by means of two-way radios or by cell phones. Although the majority of agencies using WIM for traffic data collection only conduct 10 test runs per vehicle speed, agencies using WIM for either traffic data/enforcement or enforcement alone conduct three test runs per vehicle speed (Table 29). The corresponding percentage of agencies not turning their autocalibration off during test truck calibration is 30% for those using WIM for data collection only, 20% for agencies using WIM for data collection and enforcement, and 50% for agencies using WIM for enforcement only (Table 30).

TABLE 18 WHO PERFORMS THE TEST TRUCK CALIBRATION?
                      Agency   On-Call Contractor   Contractor/Manager   Agency and Contractor
Traffic Data Only     55%      9%                   27%                  9%
Both                  80%      —                    —                    20%
Enforcement Only      50%      —                    —                    50%

TABLE 19 WHAT TRIGGERS CALIBRATION?
                      Routine Schedule   Drift Indication   Other
Traffic Data Only     67%                26%                7.4%
Both                  42.8%              28.6%              28.6%
Enforcement Only      50%                50%                —

TABLE 20 IF CALIBRATIONS ARE DONE ROUTINELY, HOW OFTEN?
                      6 Months   12 Months   24 Months
Traffic Data Only     31.2%      67%         6.7%
Both                  —          100%        —
Enforcement Only      —          100%        —

TABLE 21 IS SITE SMOOTHNESS CONSIDERED?
                      Always   Only if Tolerance Not Met   Never
Traffic Data Only     59%      27.3%                       13.6%
Both                  100%     —                           —
Enforcement Only      100%     —                           —

TABLE 22 WHAT METHOD IS USED FOR QUANTIFYING SMOOTHNESS?
                      Visual   Straightedge/Circular Plate   Profile + IRI   Profile + LTPP Software   Other
Traffic Data Only     66.7%    11.1%                         14.8%           3.6%                      3.6%
Both                  40%      30%                           10%             10%                       10%
Enforcement Only      100%     —                             —               —                         —
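The profile-based simulation mentioned above can be illustrated with a small, self-contained sketch. This is a hypothetical example, not the software any agency reported using: it slides a straightedge of a given length along a measured elevation profile and reports the largest gap between the straightedge and the surface beneath it. The straightedge length and any pass/fail tolerance applied to the result are placeholder assumptions, not values quoted from ASTM E1318.

```python
# Hypothetical sketch of simulating a straightedge check on a measured
# longitudinal profile. Brute-force and unoptimized; intended only to show
# the idea of a profile-based smoothness check, not a production tool.

def max_straightedge_deviation(stations_m, elev_mm, straightedge_m=6.0):
    """Slide a straightedge between pairs of surface points and return the
    largest gap (mm) between the straightedge and the surface beneath it."""
    worst = 0.0
    n = len(stations_m)
    for i in range(n):
        for j in range(i + 1, n):
            span = stations_m[j] - stations_m[i]
            if span > straightedge_m:
                break
            for k in range(i + 1, j):
                # Elevation of the straightedge line above interior point k.
                t = (stations_m[k] - stations_m[i]) / span
                line = elev_mm[i] + t * (elev_mm[j] - elev_mm[i])
                worst = max(worst, line - elev_mm[k])
    return worst

# Example with a made-up 0.25-m interval profile containing a shallow dip.
stations = [0.25 * i for i in range(41)]   # 0 to 10 m
profile = [0.0] * 41
profile[20] = -4.0                          # 4-mm low spot at 5 m
print(f"max deviation under a 6-m straightedge: "
      f"{max_straightedge_deviation(stations, profile):.1f} mm")
```

In practice, the reported deviation would be compared against the tolerance in the applicable specification for the intended WIM type.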

Responders indicated that, overall, 87% of agencies carry out calibration calculations on site. Table 31 shows that the calibration calculation method for traffic data WIM systems is equally split between agency software, vendor software, and shorthand calculations. For dual-use WIM systems, most agencies use shorthand calculations, whereas for enforcement-only systems, about two-thirds of the agencies use vendor software to carry out the calculations.

The main load data elements for which WIM errors are computed are the GVW, individual axle loads, and tandem axle loads (Table 32). The majority of agencies computes calibration factors by setting the mean GVW error equal to zero or by setting a combination of the mean GVW and the mean axle load errors equal to zero. Few agencies compute calibration factors by minimizing the least-squares error between WIM and static axle loads through zero-intercept regression (Table 33). Depending on the WIM application, up to 67% of the agencies report deriving speed-specific calibration factors, although a significant percentage of agencies input the average of these factors in all speed bins after calibration (Table 34).

TRAFFIC STREAM TRUCK WEIGH-IN-MOTION CALIBRATION QUESTIONS

Under this questionnaire segment, agencies were asked to respond to questions related to calibrating their most common WIM systems using traffic stream trucks of known static weight. The total number of agencies using traffic stream trucks of known static weight varies as follows, depending on their function:

• Seven of the 34 agencies managing traffic data collection WIM systems use traffic stream trucks for WIM calibration. Four of these agencies use Type I systems, whereas the remaining three use Type II systems.
• Four of the seven agencies managing traffic data and enforcement screening WIM systems use traffic stream trucks for WIM calibration. All four of these agencies use Type I systems.
• Ten of the 11 agencies managing enforcement-only WIM systems use traffic stream trucks for WIM calibration. Nine of these agencies use Type I systems, whereas the remaining one uses a Type II system.

Responses on the means of weighing these traffic stream vehicles varied. Seventeen agencies use static scales (15 use enforcement facilities and 2 use portable scales); three agencies answered "other," but evidently they use some known traffic stream weight, such as the FAW of certain vehicle classes, instead of actually weighing vehicles statically; and one agency did not specify the actual weighing method.

Tables 35 to 48 describe agency responses related to the details of their WIM calibration procedures involving traffic stream trucks of known static weight. Most agencies conduct these types of calibration in-house, with the exception of enforcement agencies, which involve a contractor (Table 35). The majority performs calibrations only when there is an indication of calibration drift, which is evidently detected through WIM data QC (Table 36).

TABLE 23 DO YOU CONSIDER THE STRUCTURAL CONDITION AT THE SITE?
                      Yes
Traffic Data Only     36%
Both                  25%
Enforcement Only      0%

TABLE 24 HOW MANY TRUCKS ARE USED?
                      1       2
Traffic Data Only     90%     10%
Both                  100%    —
Enforcement Only      —       100%

TABLE 25 ARE EACH TRUCK'S SUSPENSION TYPES SPECIFIED AND, IF SO, WHAT TYPES?
                      Air     Leaf Spring   Site Rep.   Yes*
Traffic Data Only     81%     5.3%          10.5%       84.21%
Both                  80%     —             —           100%
Enforcement Only      50%     50%           —           100%
*Specified but not always enforced.
TABLE 26 WHAT TYPE OF STATIC SCALES IS USED?
                      Portable   Fixed   Other
Traffic Data Only     19%        81%     —
Both                  42.9%      57%     14.3%
Enforcement Only      33%        67%     —
Note: Other = semi-portable.

TABLE 27 WHAT STATIC WEIGHT DATA ARE RECORDED AND HOW MANY TIMES ARE THEY MEASURED?
                      Axle Groups*   Individual Axles   Measured Once   Twice   Three Times
Traffic Data Only     51.6%          44.4%              87%             8.7%    4.3%
Both                  57%            27%                —               40%     60%
Enforcement Only      50%            25%                50%             50%     —
*Typically refers to a truck's tandem axle groups.

TABLE 28 AT WHAT SPEEDS ARE THE TEST TRUCKS RUN?
                      Median Speed at Site   Posted Speed   Multi-Speed Selected by Agency   Multi-Speed Selected by Driver
Traffic Data Only     40%                    30%            25%                              5%
Both                  40%                    —              60%                              —
Enforcement Only      50%                    —              —                                50%

TABLE 29 MINIMUM NUMBER OF TEST RUNS BY SPEED
                      2      3      5      6      7      8      10     20
Traffic Data Only     6.3%   12.5%  12.5%  6.3%   6.3%   6.3%   43.8%  6.3%
Both                  25%    50%    —      —      —      —      —      25%
Enforcement Only      —      50%    —      —      —      —      —      50%

TABLE 30 IS AUTOCALIBRATION TURNED OFF DURING TRUCK TESTING?
                      Yes    No     Do Not Know
Traffic Data Only     60%    30%    10%
Both                  80%    20%    —
Enforcement Only      50%    50%    —

TABLE 31 HOW WIM ERRORS ARE COMPUTED ON-SITE
                      Traffic Data Only   Both    Enforcement Only
Agency Software       34.6%               14.3%   33%
Vendor Software       30.1%               28.6%   67%
Calculator            34.6%               71.4%   —

TABLE 32 WHICH DATA ELEMENTS ARE ERRORS COMPUTED FOR?
                      Total Length   Axle Spacing   GVW    Tandem Axle Loads   Individual Axle Loads   Speed
Traffic Data Only     24%            71%            100%   43%                 71%                     33%
Both                  50%            50%            100%   33%                 50%                     50%
Enforcement Only      50%            100%           100%   50%                 100%                    50%
Note: Percentage reflects the number of agencies reporting that they calculate WIM errors for the particular data element.

TABLE 33 WHAT FORMULA IS USED FOR COMPUTING CALIBRATION FACTORS?
                      Mean Axle Error = 0   Mean GVW = 0   Combination of Previous Two   Slope WIM vs. Static   Auto-Computed*
Traffic Data Only     9.1%                  40.9%          13.6%                         4.6%                   22.7%
Both                  —                     40%            20%                           20%                    20%
Enforcement Only      —                     —              —                             —                      100%
*Responses to survey option "Do not know (it is incorporated in an error computation spreadsheet)."

TABLE 34 DO YOU CALCULATE SEVERAL CALIBRATION FACTORS DEPENDING ON SPEED?
                      No     Yes    Yes, But Average Is Input in Each Speed Bin
Traffic Data Only     50%    22.7%  27.2%
Both                  33%    67%    —
Enforcement Only      100%   —      —

TABLE 35 WHO PERFORMS THE TRAFFIC STREAM TRUCK CALIBRATION?
                      Agency   On-Call Contractor   Contractor/Manager   Agency and Contractor
Traffic Data Only     86%      —                    14%                  —
Both                  100%     —                    —                    —
Enforcement Only      22%      11%                  22%                  44%

About one-third of agencies perform these calibrations on a routine basis (the interval ranges from 1 to 12 months, as shown in Table 37). There is roughly an equal division between the methods employed for selecting the number of traffic stream vehicles to use in performing WIM calibration (Table 38). Where a fixed number of vehicles is specified, it varies between 1 and 100, with an average of 40 being used (Table 39). Where a fixed time interval is used, it ranges between 1 and 168 hours, with the majority of agencies using data collected over a 1- to 4-hour period (Table 40). The type of vehicles included in this sample varies; the majority of agencies using WIM for traffic data or traffic data/enforcement favors selecting vehicles in certain classes regardless of speed, whereas the majority of agencies using WIM only for enforcement screening uses a random selection of vehicle classes (Table 41).

Approximately 75% of the agencies use truck inspection station scales for obtaining static axle loads, whereas the remaining agencies use portable static scales. Axle spacing is measured for the majority of these vehicles, mostly by manual means (Table 42). Table 43 lists the number of agencies that turn off the autocalibration system in their WIM systems. The responses to the question regarding where error calculations are performed vary; some agencies always do so at the site, whereas others do so at the office (Table 44). Interestingly, enforcement agencies are more likely to perform the error/calibration computations at the site, which is explained by their ready access to static data from static scales at truck inspection stations. The actual method for performing the calculations varies; most often, vendor software is used (Table 45). The most common load elements for which errors are computed are GVW, individual axle load, and tandem axle load (Table 46).

The most commonly used approach for computing calibration factors for traffic data WIM systems is setting the mean GVW error to zero. For traffic data/enforcement and enforcement-only WIM systems, the most common calibration approach is setting the combined errors for GVW and individual axle loads to zero. About 16% of the agencies that operate traffic data WIM use regression for computing calibration factors (Table 47). Most agencies do not compute multiple calibration factors corresponding to different traffic speeds. Only about one-quarter of the agencies that responded indicated that they do compute and input speed-specific calibration factors (Table 48).
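To make the formulas summarized in Tables 33 and 47 concrete, the short sketch below shows the two most commonly cited calculations with made-up numbers: forcing the mean GVW error to zero, and a zero-intercept least-squares fit of static against WIM loads. It is an illustrative sketch only, not any agency's or vendor's software; the function names and example weights are invented for the example.

```python
# Illustrative sketch (not any agency's or vendor's procedure): two ways,
# named in Tables 33 and 47, of turning paired static/WIM weights into a
# calibration factor. A factor greater than 1.0 means the WIM system is
# under-weighing and its output should be scaled up.

def factor_zero_mean_gvw(static_gvw, wim_gvw):
    """Choose the factor so the mean GVW error across all runs becomes zero."""
    mean_static = sum(static_gvw) / len(static_gvw)
    mean_wim = sum(wim_gvw) / len(wim_gvw)
    return mean_static / mean_wim

def factor_zero_intercept_regression(static_loads, wim_loads):
    """Least-squares slope of static vs. WIM loads with the intercept forced
    to zero: minimizing sum((static - f * wim)^2) over f gives
    f = sum(static * wim) / sum(wim^2)."""
    num = sum(s * w for s, w in zip(static_loads, wim_loads))
    den = sum(w * w for w in wim_loads)
    return num / den

# Example with made-up numbers: three runs of one test truck (kips).
static_gvw = [78.2, 78.2, 78.2]   # static scale GVW, same truck each run
wim_gvw = [74.9, 76.1, 75.4]      # GVW reported by the WIM system

f_gvw = factor_zero_mean_gvw(static_gvw, wim_gvw)
f_reg = factor_zero_intercept_regression(static_gvw, wim_gvw)
print(f"mean-GVW factor: {f_gvw:.3f}, "
      f"zero-intercept regression factor: {f_reg:.3f}")
```

Where speed-specific factors are derived (Tables 34 and 48), the same calculation is repeated separately for the runs in each speed bin; agencies that instead enter a single value reportedly input the average of the per-bin factors.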

WEIGH-IN-MOTION CALIBRATION THROUGH WEIGH-IN-MOTION DATA QUALITY CONTROL QUESTIONS

The total number of agencies using traffic stream data QC for WIM calibration varies depending on the agency's function. The survey showed that the following use traffic stream data QC for WIM calibration:

• Twenty of the 34 agencies managing traffic data collection WIM systems,
• Six of the 7 agencies managing traffic data and enforcement screening WIM systems, and
• Six of the 11 agencies managing enforcement-only WIM systems.

The remainder of this section and Tables 49 through 59 present a summary of agency responses on the methodology used.

Most DOTs perform their own WIM data QC-based calibration; however, approximately 37% of the agencies managing enforcement screening WIM use contractors for this purpose (Table 49). The majority of these agencies perform WIM data QC daily or weekly (Table 50). Most DOTs (i.e., data collection and data collection/enforcement screening systems) download data automatically; however, most agencies that manage enforcement-only WIM systems do so manually (Table 51). The actual WIM data QC analysis frequency ranges from daily to monthly or, alternatively, it is decided on the basis of personnel availability or perceived calibration need (Table 52). It is performed by manual or automated means or a combination of the two (Table 53). Interestingly, the majority of agencies that manage traffic data WIM systems does so automatically, but the majority of agencies that manage either dual-use or enforcement-only WIM does so manually. Table 54 summarizes information on when the actual WIM data QC is performed; for example, the majority of agencies that manage traffic data collection WIM systems perform the QC analysis during data download.

Table 55 suggests that, with few exceptions, almost all of the agencies that responded believe that WIM data QC is capable of identifying system operational problems. The large majority of these agencies believe that WIM data QC detects errors in all of the following categories:

• Vehicle errors,
• System errors,
• Unclassified vehicles,
• Bad class counts, and
• Bad vehicle counts.

TABLE 36 WHAT TRIGGERS CALIBRATION?
                      Routine Schedule   Drift Indication   Other
Traffic Data Only     33.3%              55.5%              11.1%
Both                  50%                33.3%              16.7%
Enforcement Only      36.4%              57.1%              —

TABLE 37 IF CALIBRATIONS ARE DONE ROUTINELY, HOW OFTEN?
                      1 Month   3 Months   6 Months   9 Months   12 Months
Traffic Data Only     33.3%     —          33.3%      —          33.3%
Both                  25%       —          25%        25%        25%
Enforcement Only      —         33.3%      67%        —          —

TABLE 38 HOW DO YOU SELECT THE NUMBER OF TRAFFIC STREAM VEHICLES?
                      Fixed Sample Size   Time Interval   Sample Within Time Interval   Other
Traffic Data Only     28.6%               28.6%           28.6%                         14.3%
Both                  50%                 —               50%                           —
Enforcement Only      44%                 11.1%           22.2%                         22.2%

TABLE 39 IF A FIXED NUMBER OF TRUCKS IS USED, SPECIFY HOW MANY?
                      1       5       10     20     26     75     100
Traffic Data Only     33.3%   33.3%   —      33.3%  —      —      —
Both                  —       —       —      50%    —      50%    —
Enforcement Only      —       —       25%    —      25%    —      50%

TABLE 40 IF A FIXED TIME INTERVAL IS USED, SPECIFY HOW LONG?
                      1 h     4 h     168 h
Traffic Data Only     50%     50%     —
Both                  N/A     N/A     N/A
Enforcement Only      100%    —       —
N/A = not available.

TABLE 41 WHAT CRITERIA ARE USED FOR SELECTING TRUCKS FROM THE TRAFFIC STREAM?
                      None (Random Sample)   Class Only   Class and Speed   Other
Traffic Data Only     28.6%                  57%          14.2%             —
Both                  —                      50%          25%               25%
Enforcement Only      55.5%                  33.3%        11.1%             —
Note: Other = vehicles screened as overweight.

TABLE 42 IS THE AXLE SPACING FOR THESE TRUCKS BEING MEASURED, AND IF SO HOW?
                      Yes     Manually   Electronically
Traffic Data Only     42.8%   33.3%      66.7%
Both                  75%*    100%       —
Enforcement Only      66%*    33.3%      66.7%
*Not for 100% of trucks.

TABLE 43 IF AVAILABLE, IS AUTOCALIBRATION TURNED OFF DURING THIS PROCESS?
                      Yes     No     Do Not Know
Traffic Data Only     100%    —      —
Both                  100%    —      —
Enforcement Only      56%     11%    33%

TABLE 44 ARE THE COMPUTATIONS PERFORMED ON-SITE?
                      Always   Yes, Only for Additional Sampling   Never
Traffic Data Only     42.8%    28.6%                               28.6%
Both                  50%      —                                   50%
Enforcement Only      89%      11%                                 —

TABLE 45 HOW ARE WIM ERRORS COMPUTED ON-SITE?
                      Agency Software   Vendor Software   Calculator
Traffic Data Only     22.2%             44.4%             33.3%
Both                  33.3%             —                 67%
Enforcement Only      15.4%             53.8%             30.7%

TABLE 46 WHICH DATA ELEMENTS ARE ERRORS COMPUTED FOR?
                      Total Length   Axle Spacing   GVW    Tandem Axle Loads   Individual Axle Loads   Speed
Traffic Data Only     14%            29%            100%   14%                 71%                     14%
Both                  25%            75%            100%   75%                 100%                    25%
Enforcement Only      22%            56%            100%   56%                 78%                     33%
Note: Percentage reflects the number of agencies reporting that they calculate WIM errors for the particular data element.

TABLE 47 WHAT FORMULA IS USED FOR COMPUTING CALIBRATION FACTORS?
                      Mean Axle Error = 0   Mean GVW = 0   Combination of Previous Two   Slope WIM vs. Static   Auto-Computed*
Traffic Data Only     16.7%                 50%            16.7%                         16.7%                  —
Both                  25%                   25%            50%                           —                      —
Enforcement Only      —                     22.2%          44.4%                         —                      33.3%
*Responses to survey option "Do not know (it is incorporated in an error computation spreadsheet)."

TABLE 48 DO YOU CALCULATE SEVERAL CALIBRATION FACTORS DEPENDING ON SPEED?
                      No      Yes    Yes, But Average Is Input in Each Speed Bin
Traffic Data Only     57.1%   28.6%  14.3%
Both                  75%     25%    —
Enforcement Only      62.5%   25%    12.5%

TABLE 49 WHO PERFORMS WIM DATA QC CALIBRATION?
                      Agency   On-Call Contractor   Contractor/Manager
Traffic Data Only     90%      —                    10%
Both                  100%     —                    —
Enforcement Only      62.5%    25%                  12.5%

TABLE 50 HOW OFTEN IS THE WIM DATA BEING DOWNLOADED?
                      Daily    Weekly   Monthly   Other
Traffic Data Only     57.9%    31.6%    —         10.5%
Both                  83.3%    —        16.7%     —
Enforcement Only      33.3%    33.3%    16.7%     16.7%
Note: Other = depends on traffic volumes or personnel availability (e.g., some do so bi-weekly).

TABLE 51 HOW IS THE WIM DATA BEING DOWNLOADED?
                      Traffic Data Only   Both    Enforcement Only
Manually              31.6%               16.7%   66.7%
Automatically         63.1%               66.7%   33.3%
Combination           5.3%                16.7%   —

TABLE 52 HOW OFTEN IS WIM DATA QC BEING PERFORMED?
                      Daily    Weekly   Monthly   Other
Traffic Data Only     21%      36.8%    26.3%     15.6%
Both                  25%      25%      50%       —
Enforcement Only      33.3%    33.3%    16.7%     16.7%
Note: Other = depends on the traffic data element being analyzed (e.g., GVW distributions are checked monthly); sometimes triggered by field observations or decided by field personnel.

Unclassified vehicles and bad class counts were the operational problems checked most frequently for traffic data WIM systems. Vehicle errors, system errors, and unclassified vehicles were the operational errors checked most frequently for dual-use WIM systems. Vehicle errors were the operational problem checked by all the agencies operating enforcement-screening-only WIM systems.

For calibration monitoring using traffic stream WIM data, most agencies, regardless of WIM data application, focus their analysis on Class 9 trucks or, more specifically, on only the 3S2 configuration (Table 56). Table 57 lists the traffic stream truck properties being monitored and the percentage of agencies, by WIM data application, using them. The most common load-related truck properties being monitored are the steering axle load average, the left-side/right-side wheel loads of the steering axle, the GVW for empty versus loaded trucks, and the GVW by vehicle speed. Interestingly, the steering axle load SD and the GVW SD are monitored mostly by agencies that manage enforcement screening WIM systems. This is likely in response to the need for setting the appropriate load screening thresholds. Table 57 also shows that the most common distance measure being monitored is the axle spacing of the tractor tandem axles (for 3S2 trucks) and, less frequently, the total wheelbase versus the sum of the axle spacing data.
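As an illustration of the kind of check summarized in Table 57, the sketch below compares the mean steering axle weight of recent Class 9 records against a reference value established at the last calibration and flags the site when the relative difference exceeds a threshold. It is a hypothetical example rather than a vendor QC routine; the record format, field names, and the 3% threshold are assumptions made for the sketch.

```python
# Hypothetical calibration-drift check of the kind summarized in Table 57:
# compare the recent mean steering-axle (front-axle) weight of Class 9 trucks
# against the value recorded when the site was last calibrated.

def check_steering_axle_drift(records, reference_faw_kips, tolerance=0.03):
    """records: iterable of dicts with 'class' and 'axle_weights' (kips,
    steering axle first). Returns (relative_drift, flag). The 3% tolerance
    is an assumed figure, not a value taken from the synthesis."""
    faws = [r["axle_weights"][0] for r in records if r["class"] == 9]
    if not faws:
        return None, False
    mean_faw = sum(faws) / len(faws)
    drift = (mean_faw - reference_faw_kips) / reference_faw_kips
    return drift, abs(drift) > tolerance

# Example with made-up records: steering axles reading about 5% light.
sample = [
    {"class": 9, "axle_weights": [9.6, 15.8, 15.5, 16.0, 15.7]},
    {"class": 9, "axle_weights": [9.4, 14.9, 15.1, 15.3, 15.0]},
    {"class": 5, "axle_weights": [8.0, 12.0]},   # ignored, not Class 9
]
drift, needs_attention = check_steering_axle_drift(sample, reference_faw_kips=10.0)
print(f"relative drift: {drift:+.1%}, follow up: {needs_attention}")
```

Agency responses on what they do when such a check flags drift are summarized in Table 58.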

TABLE 53 HOW IS THE WIM DATA ANALYSIS BEING PERFORMED?
                      Traffic Data Only   Both    Enforcement Only
Manually              26.3%               50%     50%
Automatically         47.3%               16.7%   16.7%
Combination           26.3%               33.4%   33.4%

TABLE 54 WHEN IS THE WIM DATA ANALYSIS PERFORMED?
                      At Time of Download   Separate Step
Traffic Data Only     67.7%                 32.3%
Both                  33%                   67%
Enforcement Only      50%                   50%

TABLE 55 DOES QC IDENTIFY MOST OPERATIONAL PROBLEMS? IF SO, WHICH ONES?
                      Yes    Vehicle Errors   System Errors   Unclassified Vehicles   Bad Class Counts   Bad Vehicle Counts
Traffic Data Only     84%    69%              69%             88%                     88%                75%
Both                  100%   83%              83%             83%                     67%                67%
Enforcement Only      100%   100%             83%             83%                     67%                50%
Note: Percentage reflects the number of agencies indicating that WIM data QC detects the particular problem.

Table 58 summarizes the responses regarding agency actions when WIM data QC indicates calibration "drift." Only 5% of the agencies that use WIM for traffic data collection suggest that they take no action. The remaining agencies responded that they do take action, in the form of an on-site calibration or by performing remote calibration adjustments. The latter are presumably based on the traffic stream data being monitored. A small percentage of these agencies use a combination of these approaches (i.e., they attempt to deal with the problem remotely and, if unsuccessful, perform an on-site calibration). Table 59 suggests that most agencies keep records of the calibration adjustments they effect.

SUMMARY OF AGENCY OPINIONS

This section summarizes the results of particular survey questions related to the opinions of the responders on their WIM system operation and performance. Figure 5 describes the responders' opinions of the WIM data quality being generated for traffic data purposes, Figure 6 provides similar information from responders at agencies that use WIM for both traffic data and enforcement purposes, and Figure 7 illustrates the responses from agencies that use WIM for enforcement purposes only.

FIGURE 5 Rating WIM data quality; traffic data collection purposes. (Type I: more than adequate 33%, generally adequate 43%, marginal 10%, inadequate 0%, very inadequate 0%, no opinion 14%. Type II: more than adequate 25%, generally adequate 50%, marginal 11%, inadequate 0%, very inadequate 0%, no opinion 14%.)

FIGURE 6 Rating WIM data quality; both traffic and enforcement purposes. (Type I: more than adequate 33%, generally adequate 67%. Type II: generally adequate 67%, marginal 33%.)

FIGURE 7 Rating WIM data quality; enforcement only purposes. (Type I: more than adequate 13%, generally adequate 87%. Type II: generally adequate 100%.)

Additional comments on the data quality of the Type I systems included the following:

• Bending plate systems are no longer used, owing to their requiring constant maintenance and their data being no better than that from Type II systems.
• Assumes data must be adequate, given that "FHWA is not complaining."
• WIM data are "not being used."

Additional comments on the data quality of the Type II systems included the following:

• Volume and classification data are adequate, but weight data are borderline; and
• Accuracy is adequate for planning and pavement design purposes if proper data validation, maintenance procedures, and calibration schedules are followed.

Figure 8 summarizes the comments from 52 responders, encompassing all WIM functions, when asked what their WIM priorities would be if they were given additional resources.

FIGURE 8 WIM-related priorities, given additional resources. (Add more WIM sites 36%, replace existing with newer technology 28%, intensify calibration monitoring 21%, do more test truck calibrations 15%.)

Additional comments provided in response to this question included the following:

• Need more personnel, both in the field and in the office, but this will not happen unless FHWA mandates that states must fully staff their traffic data collection programs.
• Average WIM GVW and loading data have not changed in 22 years; the focus should be more on classification data to tell us how many trucks are using the routes.
• Eliminate WIM data collection.
• Would like to upgrade many of the Type II systems to Type I.
• Sites beginning to fail after 7 years.

Figure 9 summarizes the opinions of responders who manage only traffic data WIM systems when asked what the main factors are that hinder WIM calibration.

FIGURE 9 Factors hindering WIM calibration. (Funding 26%, lack of qualified personnel 22%, time 9%, resources (undefined) 11%, WIM sensors installed in poor pavement 17%, WIM sensor accuracy affected by environment 9%, WIM system equipment 6%.)
Other factors cited include that WIM systems are not a priority and that there are problems with the logistics of test truck calibration on Interstate highways. The following sugges- tions were offered for resolving the issues hindering WIM calibration:

• Acquire resources (funding, qualified personnel, time);
• Perform more data analysis tuned to the characteristics of each WIM site and its traffic;
• Replace existing AC pavement with PCC pavement for WIM sensor installations;
• Use smoother and more wear-resistant pavements;
• Ensure the roadway meets ASTM smoothness specifications before installing WIM;
• Employ more weight enforcement to lessen road deterioration;
• Install WIM system equipment properly;
• Set autocalibration parameters properly and monitor the operation;
• Develop new sensor materials not affected by temperature;
• Develop sensor technology that is not dependent on autocalibration schemes but is reasonably priced and easy to install;
• Develop better grouts for resealing piezoelectric sensors; and
• Utilize at least two calibration test trucks.

TABLE 56 WHICH TRAFFIC STREAM VEHICLE TYPES ARE USED FOR CALIBRATION MONITORING?
                      3S2 Only   All Class 9s   Other
Traffic Data Only     26.3%      73.7%          —
Both                  16.7%      66.7%          16.7%
Enforcement Only      16.7%      66.7%          16.7%

TABLE 57 WHICH TRAFFIC STREAM VEHICLE PROPERTIES ARE ANALYZED FOR CALIBRATION MONITORING?
Data Element                                Traffic Data Only   Both    Enforcement Only
Vehicle Length vs. Axle Spacing             42%                 17%     33%
Other Axle Spacing Property                 26%                 33%     50%
Tractor Tandem Axle Spacing                 53%                 33%     83%
Steering Axle L/R Wheel Load Comparisons    5%                  67%     33%
Steering Axle Load Average                  95%                 100%    67%
Steering Axle Load SD                       32%                 —       83%
GVW Empty vs. Loaded                        47%                 50%     17%
GVW Average by Speed                        32%                 50%     33%
Other GVW Property                          26%                 —       17%
GVW SD                                      32%                 33%     100%
Note: Percentage reflects the number of agencies that monitor the listed data element.

TABLE 58 WHAT ACTION IS TAKEN IF QC INDICATES CALIBRATION "DRIFT"?
                                  Traffic Data Only   Both    Enforcement Only
On-Site Evaluation                57.9%               16.7%   33.3%
Remote Calibration Adjustments    21%                 66.7%   50%
No Action                         5.3%                —       —
Other                             15.8%               16.7%   16.7%
Note: Other = depends on site (e.g., try remote adjustment first and, if unsuccessful, perform on-site evaluation).

TABLE 59 DO YOU KEEP RECORDS OF CALIBRATION FACTOR ADJUSTMENTS?
                      Yes     No
Traffic Data Only     63.1%   36.9%
Both                  100%    —
Enforcement Only      80%     20%

The following suggestions were offered by responders dealing with WIM traffic data in relation to the urgent technical needs question (where not otherwise noted, the number of responders is one):

• Ensure controller/electronics reliability, reduction of power consumption, and compactness (three responders);
  – Incorporate MIL-SPEC housing designs and
  – Increase data storage capacity for lower-cost controllers.

• Modernize software for both on-site and office needs and make it more user friendly (two responders);
  – Have vendors perform software upgrades in response to customers' needs and
  – Update to Windows versions.
• Improve communication techniques for remote data retrieval (i.e., this should be done by vendors in response to customers' needs) (two responders).
• Extend sensor reliability and life (five responders);
  – Perform more metallurgy studies considering the large volume of truck traffic to evaluate optimum materials to be used,
  – Refine bending plate strain gauges in areas of adhesion and failure, and
  – Hold joint venture studies between states and vendors.
• Develop more accurate sensors (three responders).
• Develop an accurate sensor that is less costly (two responders).
• Improve Type II sensors, and consider fiber optic sensors.
• Develop a sensor that is consistent and easier to install.
• Use non-intrusive systems or methods to get out of the road.
• Compare accuracy differences between quartz piezoelectric and bending plate sensors;
  – Perform a study of the two sensor types at the same location in a northern tier state and
  – Perform better autocalibration handling of pavement temperature (three responders).
• Develop better pavements in which to install sensors (two responders).
• Use better epoxy (three responders).
• Create understanding of how calibration test vehicles relate to the traffic stream.
• Create understanding of how pavement roughness relates to WIM accuracy.
• Calibrate without use of test trucks;
  – Field-test simulation programs.
• Attain knowledge of the accuracy needs of the different data user groups.
• Create an effective database to retrieve, QC, and process the data (two responders).
• Use better methods to check data based upon actual vehicle configurations.
• Attain better understanding of the limitations of the data and educate the states on such limitations.
• Find best methods of sharing data among the states.

• Find best methods of sharing data among traffic data collection and enforcement entities to improve both operations.
• Identify and standardize best calibration practices, as performed by this study.
• Create diagnostic guidelines for calibration of WIM sites from a centralized office location.

Interestingly, one of the responders commented ". . . if FHWA truly believes that this data is important, then they need to work with state legislatures to make sure that adequate staff is obtained, not only to collect and process data, but to aid in the advancement of data collection tools and methods."

The following suggestions regarding urgent technical needs were offered by responders dealing with WIM systems used in enforcement:

• Use WIM as Virtual Weigh Stations to monitor known bypass routes.
• Remove obstacles in the state procurement system, which is the biggest hindrance.
• Provide better installation and calibration guidelines (i.e., develop guidelines based on production-grade data, not research data).
• Improve pavement smoothness quality.
• Create methods to repair or rehab pavement in which sensors have been installed, without effecting deterioration in WIM data quality.
• Lessen mix-ups on identification of prescreened vehicles.

Some additional comments offered included the following:

• Site maintenance is crucial in obtaining consistent WIM system performance, regardless of type.
• Running the entire program in-house gives a much more controlled and consistent product.
• Only do the absolute minimum WIM data collection.
• Piezo WIM data accuracy can be improved by the following method of modifying the vendor's autocalibration software:
  – Modify temperature binning to 40 bins with 2-degree increments in lieu of 30 bins with 5-degree increments, and

  – Modify weight limits of the autocalibration vehicle type to exclude vehicles exhibiting partial weights.
• Percentage of WIM errors changes with temperature change (Type II systems).
• The autocalibration feature of the vendor's piezoelectric WIM system is accurate.
• Accuracy of the WIM system is only as good as the quality of the pavement—the agency's road maintenance department needs to be a partner in the maintenance of a WIM site.
• Selection of the WIM site location is important—data quality lessens when the site is located where traffic is changing lanes.
• Although consideration has been given to the use of calibration test vehicles on several occasions, it has never appeared to be practical or cost-effective.

INVENTORY

The end of the survey questionnaire provided responding agencies with the opportunity to describe the type and number of WIM systems currently operational in their jurisdictions. These questions were optional and, as a result, not all agencies responded. The WIM system inventory for agencies that responded is provided in Appendix C (web version only).
