Performance Measurement Tool Box and Reporting System for Research Programs and Projects (2008)

Chapter: Appendix E RAC Survey Performance Measure Comments

Suggested Citation:"Appendix E RAC Survey Performance Measure Comments." National Academies of Sciences, Engineering, and Medicine. 2008. Performance Measurement Tool Box and Reporting System for Research Programs and Projects. Washington, DC: The National Academies Press. doi: 10.17226/23093.

APPENDIX E – RAC Survey Performance Measure Comments

Return on Investment or Benefit-Cost Ratio

• It’s a good performance measure, but we do not have the resources or knowledge to implement and monitor the return on investment or benefit-cost ratio.
• This PM can be useful if it is accurate. It is often difficult to achieve an accurate measure.
• Most of our projects are selected for immediate implementation and payback.
• Benefit values are too subjective.
• While we do not currently use benefit-cost ratios as a selection tool or as a performance measure, we are currently contemplating making this a deliverable to be calculated by the contract research team for each applicable research project. We have not implemented this process yet.
• Usually done only when the benefit is clear - around 10% of projects.
• All ratings are based on the assumption that the research project lends itself to that particular measure.
• Obviously, most projects would potentially utilize some, but not all, tools in the toolbox.
• This PM cannot be applied across the board, as this type of evaluation will not fit all projects. However, we are developing a process that will allow us to perform B/C analyses with regularity on those projects for which B/C and ROI are appropriate.
• This process is currently being employed, and B/C analyses are forthcoming.
• We use a general benefit vs. cost ratio as one element in the project and final evaluation of a research project.
• We have not tracked this as a formal research performance measurement.
• We will be developing more performance measurements in this area in the future.
• It is a more general performance measurement used by the agency, but it has not been applied directly to research.
• We attempt to quantify triennial dollar benefits vs. costs on all K-TRAN (university research) projects with products or findings that have been implemented. Benefits are accumulated and compared to the total cost of the program to calculate an overall BCR for the K-TRAN Program.
• We would like to establish performance measures in various categories; this would be a likely category.
• Would be valuable; however, it is difficult to generate potential cost savings on most projects.
• This should be added to our program in the future where appropriate.
• The assumptions regarding benefits are often difficult to assess and may be discounted when very favorable B/C ratios are given.
• Material characteristics and performance are subjected to analysis for contentious issues, such as roadway delineation features, e.g., markings or delineators.

• I think that this kind of measure can be misleading. Research is no more than information. Economic benefits may result indirectly from research, but those benefits are realized by the implementing organization, not the research organization. Second, such measures tend to overlook or discount those benefits that are harder to quantify, like improved service levels, cost avoidance, and safety improvements.
• We do a sort of harvest ratio with the projects that we select to evaluate. Given that we complete approximately 50 projects per year, we find it most effective to select about half of these projects to evaluate for benefit to cost. We conduct these evaluations on a project-by-project basis.
• "In September 2002, TxDOT's Research and Technology Implementation Office (RTI) was asked by the Texas Transportation Commission to document the value of research by providing information on the return on investment from the research program. The Commission wanted to use the information to demonstrate the benefits of a research program during budget discussions with the Texas Legislature for the upcoming 2003 session. We limited our analysis to TxDOT's 21 top innovations from 1999 to 2001. For the 21 selected, the benefits were estimated over a ten-year return period after implementation of the product began. The analysis included:
  o Reductions in the number of fatalities occurring on the transportation system
  o Reductions in the number of accidents
  o Operational cost savings for TxDOT (considered as reductions in taxpayer cost for operating, constructing, and maintaining the transportation system)
This coming fiscal year (beginning 9/2004), RTI will require the university researchers on Research Management Committee (RMC) 3 projects to submit, as a separate deliverable, a project-specific estimate of projected benefits (reductions in accidents, reductions in fatalities, and operational cost savings). RMC 3’s area of research focuses on environmental, right-of-way, geometric design, and hydraulics issues. This is the first year that the benefits estimation requirement will be in place. If the pilot is successful in RMC 3, the requirement will be adopted in all RMCs for all research projects."
• This may have been a subjective or qualitative criterion for WSDOT Research, rather than quantitative.
• I view this as a valuable tool, but we don't currently use it, primarily because of the time it takes to gather all of the information needed to do the calculation and the inherent difficulty in defining and justifying the costs and benefits.
• We have used a cost/benefit ratio in the past, but not on a consistent basis. In general, the negative feedback we have received from using such a measure has been that it is too subjective. However, we have recently picked up the banner again and are using life-cycle costing as a means for describing the benefit.
• As the department is now using LCCA to determine pavement alternates, this method is more accepted.
• Although we are not using this as a performance measure, we intend to use it extensively for project evaluation as part of our infrastructure management system.
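Several of the comments above (the K-TRAN program in particular) describe the same arithmetic: accumulate the estimated dollar benefits of implemented projects and divide by the total cost of the research program to get an overall BCR. A minimal sketch of that calculation — the project names and dollar figures below are hypothetical placeholders, not values taken from the survey:

```python
# Program-level benefit-cost ratio (BCR): accumulated dollar benefits
# of implemented projects divided by total program cost, as the K-TRAN
# comment describes. All names and figures are hypothetical.

def program_bcr(project_benefits, total_program_cost):
    """Return the overall BCR: sum of estimated benefits / program cost."""
    if total_program_cost <= 0:
        raise ValueError("program cost must be positive")
    return sum(project_benefits.values()) / total_program_cost

# Hypothetical benefit estimates (in dollars) for implemented projects.
benefits = {
    "pavement-marking durability": 1_200_000,
    "culvert inspection method": 450_000,
    "signal retiming guidance": 2_350_000,
}

bcr = program_bcr(benefits, total_program_cost=1_000_000)
print(f"Overall program BCR: {bcr:.1f}")  # 4.0 with these figures
```

As the surrounding comments note, the hard part is not this division but defending the benefit estimates that go into the numerator.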

• Not suitable in all cases - some benefits and some costs cannot be accurately quantified.
• When the nature of the research lends itself to quantification of costs and benefits, this statistic can be extremely persuasive.
• Assessing long-term benefits, or the portion of benefit directly attributable to research, is difficult.

Lives Saved

• We do not have the resources or knowledge to implement and monitor this performance measure.
• As with item number 1, this is a useful measure if it is accurate.
• This may be hard to prove based on the variability of fatalities each year, but it would be a very powerful statement to show the value of the research conducted. It would take approximately three years after the study ended to have data in the accident records system to perform an after-study analysis. This performance measure may be used by the contract research team to project the impact of their research findings, but it is currently not mandated by our Research group.
• We more typically note improved safety, but don't correlate it to fatalities or crashes. Also, our safety folks would rather talk about crashes than fatalities, crashes being a more reliable value.
• May be difficult to measure in many cases.
• This PM is used at the Departmental level; however, it is not used as a project-level measure.
• I anticipate that we will be using lives saved as a formal research performance measure in the future. We have past research that undoubtedly saved lives, but it has not been formally tracked as a research performance measurement.
• While lives saved are documented on individual research implementation plans as appropriate, only the dollar benefits are accumulated and reported on our status reports.
• We would like to establish performance measures in various categories; this would be a likely category.
• Difficult to generate data.
• It is difficult to estimate how many lives a safety research project really saves. This measure could be extremely valuable because safety is the top priority of many DOTs, but given the difficulty of determining the number of lives saved or injuries reduced, we have not done such calculations.
• Hard to prove. Again, we tend not to claim credit for what our customers are able to accomplish by implementing research results.
• This measurement is used by our Bureau of Highway Safety and Traffic Engineering to measure how our department is doing in meeting the 10% reduction of fatalities and serious crashes goals of TEA-21. We don't use this measure for our research program.

• See comments in #1.
• This may have been a subjective or qualitative criterion for WSDOT Research, rather than quantitative.
• It may be interesting to examine the number of lives we expect to save by implementing the results from a proposed research project, but the difficulty of isolating contributing factors makes it very hard to actually confirm this after completion of a project.
• PROJECTED numbers that never get verified don't really tell us anything about performance.
• We have had only one safety study, many years ago, that used lives saved as a measure of the effectiveness of the treatments to be employed, not of the study.
• We have performance measures based on driver behaviors, but nothing related to highway improvements.
• More appropriate to use in conjunction with total crashes.
• If the nature of the research and post-research performance monitoring would lend itself to quantification of lives saved, this statistic would be persuasive. It is almost impossible to generate this data for a specific research project's cause and effect, so although it would be valuable, it is unrealistic to count on this statistic.
• This is usually done for safety-related research.

Construction, Maintenance & Operations Savings

• Again, we don't have the resources or knowledge.
• This is the basis for selecting projects.
• Showing a cost saving for a given operational change due to research could be performed. The problem is getting the products of research implemented with any regularity. The cost of the item from the research is always higher until it gets wide use.
• Usually done only when the benefit is clear - around 10% of projects.
• This PM is currently being applied to many projects. The overall goal is to provide information both at the program and at the project level. This PM will increasingly be applied more systematically and formally.
• I anticipate that we will be using cost savings as a formal research performance measure in the future.
• These savings are documented on individual project research implementation plans as appropriate but are not reported as a specific category on our status reports.
• We would like to establish performance measures in various categories; this would be a likely category.
• This is somewhat easier to quantify than lives saved. We do not use it very often but hope to make this a required component of more research projects for which this measure could apply.

• Our customer's accomplishment, not ours.
• This measure is used extensively by our highway maintenance and construction organizations. However, we don't use this measure in our research program. Our benefit-to-cost measure captures most of this information for the research program.
• This may have been a subjective or qualitative criterion for WSDOT Research, rather than quantitative.
• We use this as justification to start many research projects, but we are just starting to verify these cost savings AFTER the research has been implemented.
• This type of measure has been used not so much as a measure of study success as a way to market implementation and incorporation into specs.
• We have done various studies related to the cost of outsourcing things like maintenance and planning/design/construction supervision.
• When the nature of the research lends itself to quantification of costs and benefits, this statistic can be extremely persuasive.

Reduction in Crashes

• This may be hard to prove based on the variability of fatalities each year, but it would be a very powerful statement to show the value of the research conducted. It would take approximately three years after the study ended to have data in the accident records system to perform an after-study analysis.
• We more typically note improved safety, but don't correlate it to the number of crashes.
• This PM hasn't been applied to research. In the past, the usefulness of crash reduction factors (CRFs) has generally been limited because of the data. However, this measure will increasingly be applied as a result of improvements to the Department's CRF database, which has been enhanced through past and ongoing research.
• We have a new research project that is tracking crash rate decreases in an improved safety area as a performance measurement.
• While reductions in crashes are documented on individual research implementation plans as appropriate, only the dollar benefits are accumulated and reported on our status reports.
• We would like to establish performance measures in various categories; this would be a likely category.
• This measure does not have the same impact as lives saved but is still very valuable. However, issues similar to those with estimating lives saved make it difficult to arrive at a crash reduction number that can be attributed to a research project.
• Our customer's accomplishment, not ours.
• This measurement is used by our Bureau of Highway Safety and Traffic Engineering to measure how our department is doing in meeting the 10% reduction of fatalities and serious crashes goals of TEA-21. We don't use this measure for our research program.
• This may have been a subjective or qualitative criterion for WSDOT Research, rather than quantitative.
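One comment above mentions a project "tracking crash rate decreases" as a performance measurement. The simplest form of that measure is a before/after comparison of crash rates per million vehicle-miles traveled (MVMT); as several respondents caution, a raw comparison like this cannot by itself attribute the change to the research. A sketch with hypothetical counts and exposure:

```python
# Naive before/after crash-rate comparison per million vehicle-miles
# traveled (MVMT). As the survey comments note, this does not isolate
# the research project's contribution (traffic growth, regression to
# the mean, and other countermeasures all confound it).
# All numbers below are hypothetical.

def crash_rate_per_mvmt(crashes, vehicle_miles):
    """Crashes per million vehicle-miles traveled."""
    return crashes / (vehicle_miles / 1_000_000)

def percent_reduction(before_rate, after_rate):
    """Percent reduction from the before rate to the after rate."""
    return 100.0 * (before_rate - after_rate) / before_rate

before = crash_rate_per_mvmt(crashes=120, vehicle_miles=40_000_000)  # 3.0
after = crash_rate_per_mvmt(crashes=90, vehicle_miles=45_000_000)    # 2.0
print(f"Observed crash-rate reduction: {percent_reduction(before, after):.0f}%")
```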

• While the comments for number 2 are also applicable in this case, it may be easier to collect data for this PM. As we continue to grapple with increasing congestion, this actual reduction of crashes will become increasingly important.
• Never used.
• If the nature of the research and post-research performance monitoring would lend itself to quantification of reduction in crashes, this statistic would be extremely persuasive. It is almost impossible to generate this data for a specific research project's cause and effect, so although it would be valuable, it is unrealistic to count on this statistic.
• This is usually done for safety-related research.

Reduction in System Delays

• Congestion is not a significant problem in Alaska.
• Using projections or results of models is a two-edged sword. As the warning in the commercial states, actual results may vary.
• In a very congested state like NJ, we have many reasons for delays. Projecting an expected reduction in delays and then seeing no reduction, or an increase, can mean a loss of credibility for the research program.
• There are issues of how well you can calculate delays.
• This PM is a general program-level PM for the Department; it is not currently a PM for our research.
• We have a new ITS project coming on line that targets traffic delay reduction through public information. We will measure the reduction in delays as a performance measurement of the ITS project.
• While reductions in system delays are documented on individual research implementation plans as appropriate, only the dollar benefits are accumulated and reported on our status reports. (Rarely used to date.)
• We would like to establish performance measures in various categories; this would be a likely category.
• Haven't had this type of project yet. However, we have upcoming evaluations to conduct that will include delay data, before and after.
• We have done some work trying to quantify and document the benefits of our emergency roadside patrols and ITS activities.
• Our customer's accomplishment, not ours.
• This measure is used extensively by our highway maintenance and construction organizations. However, we don't use this measure in our research program. Our benefit-to-cost measure captures most of this information for the research program.
• If appropriate and realistically quantifiable, this is a good measure. However, this measure can be misleading. For instance, a product such as a new traffic control device or signal optimization could result, in the aggregate, in a huge travel time savings in terms of person-minutes. However, that large savings disaggregated to the person level could be only a few seconds; hardly a benefit.
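The aggregation caveat in the last comment above is easy to see numerically: a per-traveler saving of a few seconds multiplies into millions of person-minutes per year. A quick illustration, with hypothetical traveler counts:

```python
# Aggregate vs. per-person travel-time savings, illustrating the
# comment that huge aggregate person-minute savings can correspond
# to only a few seconds per traveler. Hypothetical numbers.

daily_travelers = 500_000        # travelers affected per day (assumed)
seconds_saved_per_person = 4     # per-traveler daily saving (assumed)

aggregate_person_minutes_per_year = (
    daily_travelers * seconds_saved_per_person * 365 / 60
)
print(f"Aggregate: {aggregate_person_minutes_per_year:,.0f} person-minutes/year, "
      f"yet only {seconds_saved_per_person} seconds per traveler per day")
```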

• This may have been a subjective or qualitative criterion for WSDOT Research, rather than quantitative.
• While the comments for number 2 are also applicable in this case, it may be easier to collect data for this PM. This is also a good PM for the system user (the traveling public).
• We have recently used projected reduction in traffic delays not as a measure of a technique or process coming out of research, but for LCCA analysis of alternate design systems.
• On the Alberta highway system (primarily rural), congestion isn't an issue.
• If the nature of the research and post-research performance monitoring would lend itself to quantification of reduction in system delays, this statistic would be persuasive. It is almost impossible to generate this data for a specific research project's cause and effect, so although it would be valuable, it is unrealistic to count on this statistic.
• This is useful in some locations in a rural state like SD, but it is not a primary factor everywhere.

Positive Environmental Impact

• It’s a very good performance measure, but we do not have the resources or knowledge to implement it.
• Environmental issues are a major part of our program and growing.
• This could be a valuable PM as long as the product of the research is implemented with any frequency.
• We closely monitor how much research effort and money is applied to the environment, both natural and human.
• We have noted the direction, such as improved or more environmentally friendly, but have not quantified it.
• The number of projects fitting any specific criterion (such as positive environmental impact) would not generally be valuable to a small state like NH, where we might only cross into a particular discipline once in a while.
• This is not used as a PM; however, we can readily identify how much of the program is environmental research (i.e., categorized as Environmental Mgt). In addition, other offices conduct research with ancillary environmental benefits (e.g., scour studies dealing with countermeasures that affect turbidity in waterways).
• We use performance measurements for general environmental improvements, but not for specific environmental research projects. I anticipate that we will be developing this research performance measurement in the future.
• The number of projects/products shows the breadth and balance of the program and the ability to respond to a particular category. It is not as critical as the effectiveness of the product when deployed.

• Too many of our environmental studies tend to be evaluated qualitatively. But many environmental projects could be evaluated quantitatively, e.g., amount of pollutant removed, sediment removed, noxious weeds killed, etc.
• Small-scale research projects have been undertaken to evaluate mitigation effectiveness and alternate mechanisms to protect wildlife, with case-specific results. For example, protection of amphibians during a seasonal migration with directed access to safe crossing zones (culverts, pre-cast boxes under the highway) had immediate positive results, both in animal fatalities and in public relations.
• Our customer's accomplishment, not ours.
• The PennDOT Research Performance Measures Toolbox includes five tools: Benefit-to-Cost, Peer Review, Performance Indicators, Customer Surveys, and Life-Cycle Cost Analysis. This measurement fits within our Performance Indicators tool area. From time to time we are asked to show how our research efforts are supporting PennDOT's strategic plan, which includes an environmental/quality-of-life plank.
• In analyzing our top innovations from 1999 to 2001, we did, where appropriate, determine environmental impact savings.
• Especially with wildlife habitat connectivity and with animal crash mitigation.
• This may have been a subjective or qualitative criterion for WSDOT Research, rather than quantitative; a criterion for removing uncertainty in environmental regulatory programs.
• We have done individual environmental research projects that are designed to have specific positive impacts, but we do not have a goal for the number of projects that must do this, nor do we track how many. If there is a specific need, we do the research; if not, we don't.
• We do not break down implemented research by area. We do track implemented projects. We have demonstrated positive environmental impact for several projects, but not as a performance measure.
• At present we do not have a measure with respect to environmental impact.
• It is not clear how to gather these data for a specific research project's cause and effect, so although the statistic appears to be valuable, we have never used it.
• Numbers of projects or products are not considered of prime importance by upper management. They are more concerned with outcome measures, such as dollars of benefits.

Quality of Life Enhancement

• I believe that this will be very subjective and hard to document.
• This is not a PM: projects are not broken out categorically as providing psychological or aesthetic benefits.
• I see this as a potential future research performance measurement. This is a general performance measurement area.

E - 9 • The number of projects/products shows the breadth and balance of the program and the ability to respond to a particular category. It is not as critical as the effectiveness of the product when deployed. • Haven't had this type of project. • Quality of Life benefits are very difficult to assess because quality of life is such a nebulous term. What is important to me ma not be as important to others or may have a different level of importance. • Really hard to measure credibly. • The PennDOT Research Performance Measures Toolbox includes 5 tools. They are: Benefit-to-Cost, Peer Review, Performance Indicators, Customer Surveys, Life-cycle Cost Analysis. This measurement fits within our Performance Indicators tool area. From time-to-time we are asked to show how our research efforts are supporting PennDOT's strategic plan that includes an environmental/quality of life plank. • We don't care if people feel good; we just want to move them faster, safer and cheaper. ;-) Seriously, this is a good option to have, but it will require customer surveys to implement. May be useful on certain high profile projects, but probably not economically feasible for most projects. • We do not break down implemented research by area. We do track implemented projects. This seems to be difficult to quantify other than by anecdotal means. • This could be complicated because quality of life can mean different things for different people. It is not clear how to gather these data for a specific research project's cause and effect, so although it appears to be valuable, we have never used this statistic. • Numbers of projects or products is not considered of prime importance by upper management. They are more concerned with outcome measures, such as dollars of benefits. Safety Enhancement • Could be a powerful PM to support the value of research program. Here we are talking about the perception of improvements based on the Number of research projects. 
• We usually state that an item will improve safety or enhance it, but do not quantify. In rare cases we are able to quantify impact on specific projects.
• For comments on this item, simply substitute safety for environmental in the answer to #6. Many other offices do research that enhances safety (much more than is the case with environmental research).
• See the answer to question number 4. For comments on this item, simply substitute safety for environmental in the answer to #6. Many other offices do research that enhances safety (much more than is the case with environmental research).
• The number of projects/products shows the breadth and balance of the program and the ability to respond to a particular category. It is not as critical as the effectiveness of the product when deployed.

• It seems that if you can make the case that a research project improves design methodologies in the area of safety, you could go further and estimate the impact on crashes, fatalities and injuries.
• Our customer's accomplishment, not ours.
• The PennDOT Research Performance Measures Toolbox includes 5 tools. They are: Benefit-to-Cost, Peer Review, Performance Indicators, Customer Surveys, Life-cycle Cost Analysis.
• This measurement fits within our Performance Indicators tool area. From time to time we are asked to show how our research efforts are supporting PennDOT's strategic plan, which includes a highway safety plank.
• Our top innovations benefits analysis focused on reductions in traffic-related accidents and fatalities.
• Guardrail impacts.
• This may have been a subjective or qualitative criterion for WSDOT Research, rather than quantitative.
• The number of projects that fall into this category is not as important as the extent of their impact on safety improvements.
• We do not break down implemented research by area.
• We do track implemented projects. Safety enhancement has been concluded on research projects, but not used as a measure.
• No measure, but we are increasingly paying more attention to the safety aspects of our designs.
• If the nature of the research and post-research performance monitoring would lend itself to quantification of the perception of safety enhancement through surveys, this statistic could be persuasive.
• Numbers of projects or products is not considered of prime importance by upper management. They are more concerned with outcome measures, such as dollars of benefits.

Level of Knowledge Increased

• Development of manuals and direct training is becoming a major part of our research program.
• This PM is decision-maker dependent. We have seen very different emphasis from one group to another after elections.
• In some cases, we state that a project will increase the level of knowledge on specific projects, but do not quantify.
• We do not do basic research. We do, however, conduct research that enhances our decision-making processes and that provides increased knowledge as an ancillary benefit.
• Not a formal research performance measurement. We do not do basic research. We do, however, conduct research that enhances our decision-making processes and that provides increased knowledge as an ancillary benefit.

• The number of projects/products shows the breadth and balance of the program and the ability to respond to a particular category. It is not as critical as the effectiveness of the product when deployed.
• Haven't used. See this as having below-average value.
• Almost all research projects improve the state of knowledge in the subject area. If I told people that we had 10 research projects that improved the state of knowledge, it would not mean much to me, nor, I suspect, to them.
• We specifically use this measurement for our LTAP efforts. At the end of each training session we ask the participants to gauge their gain in knowledge from the course.
• This may have been a subjective or qualitative criterion for WSDOT Research, rather than quantitative.
• We carve out a small portion of our budget specifically to support the ODOT Partnered Research Exploration Program (OPREP). These funds are used for basic research activities which may increase the general body of knowledge; however, we do not use the number of projects we fund in this category as a PM for the program.
• We do not break down implemented research by area. We do track implemented projects. We often conduct research for research's sake; that is, for our own use for future purposes. We do not use this as a measure.
• We tend to concentrate on practical research, not just knowledge research. We may use it to justify a research project with negative results!
• If the nature of the research and post-research performance monitoring lends itself to qualitative statements of an improved body of knowledge, then this would be cited. It is not clear how persuasive these statements are.
• Numbers of projects or products is not considered of prime importance by upper management. They are more concerned with outcome measures, such as dollars of benefits.

Management Tool or Policy Improvement

• We would probably capture this under #9, Level of Knowledge Increased.
• This is a valuable PM. It works at the customer/bureau manager level. It has not been as powerful at the upper management level.
• While this is a research area of ours, it has not been a formal performance measurement.
• Not a PM.
• We count products from implemented K-TRAN projects in the following categories: Hardware/Physical Product; Software; Policy Study; Design/Evaluation Procedure; Test Method; and Training Materials.
• The number of projects/products shows the breadth and balance of the program and the ability to respond to a particular category. It is not as critical as the effectiveness of the product when deployed.
• We have highlighted a few projects along these lines in our research newsletter and annual report. We also try to present a research project at each Research Advisory Board meeting, and a number of them have fallen under this heading.

• Our customer's accomplishment, not ours.
• During our annual research program development we analyze the level and quality of research effort that we make in policy research. We roughly try to keep this level at around 10%.
• Very hard to quantify.
• This may have been a subjective or qualitative criterion for WSDOT Research, rather than quantitative.
• This is probably the most common tool used to measure the performance of Ohio's research projects; however, we don't typically look at how MANY projects produce these effects, but rather the EXTENT to which each project does. All of our projects are expected to address one or more of these components.
• We do not break down implemented research by area. We do track implemented projects. Not used as a performance measure.
• If the nature of the research and post-research performance monitoring lends itself to qualitative statements about policy, design standards, training and/or procedure development, then this would be cited. It is not clear how persuasive these statements are.
• Numbers of projects or products is not considered of prime importance by upper management. They are more concerned with outcome measures, such as dollars of benefits.

Public Image Enhancement

• This is also a two-edged sword. If the research goes well, the Department takes credit for implementing an improvement. If not, it is another research project that went astray.
• We routinely try to involve our Public Information Office, and therefore the media, in our successful research projects.
• This is not a PM, per se. However, the project selection process now identifies projects whose results are expected to be observable to the traveling public. Such projects can be used as public relations opportunities. Other projects that achieve substantial results (e.g., cost savings, safety improvement) can also be marketed. Not a research performance measurement.
• This is not a PM, per se. However, the project selection process now identifies projects whose results are expected to be observable to the traveling public. Such projects can be used as public relations opportunities. Other projects that achieve substantial results (e.g., cost savings, safety improvement) can also be marketed.
• Just the number per se has little value from my perspective. If provided with short descriptions of how the projects enhanced the STA public image, then it would have a higher value.
• This could be very valuable in communicating and marketing the Department or Division's value.
• This could be useful, though we (wrongly) do not toot our horn enough.
• Our customer's accomplishment, not ours.
• We've never been asked to make this assessment. However, we do conduct projects on behalf of our Office of Communications and Customer Relations each year that are meant to enhance PennDOT's public image. We just don't measure this in any way.
• Projects are selected for actual public benefit--not for enhancing the image of the STA.
• Other PMs (e.g., crash reductions, safety improvements, dollars saved) indirectly address this issue.
• We have recently conducted a customer satisfaction survey for the department to be used as an instrument to enhance public image.
• We are actually using a variation of this in our Department-wide goals.
• If the nature of the research and post-research performance monitoring lends itself to qualitative improvements in public image, then this would be cited. It is not clear how persuasive these statements are, but image improvement is almost always beneficial to an agency and its research program.
• Numbers of projects or products is not considered of prime importance by upper management. They are more concerned with outcome measures, such as dollars of benefits.

Technical Practices or Standards Upgraded

• This is a powerful measure at the customer/bureau manager level.
• Again, there are many projects that do this, which could be identified as such in our tracking database. However, projects aren't categorically identified as such.
• Has not been used as a formal research performance measurement. Again, there are many projects that do this, which could be identified as such in our tracking database. However, projects aren't categorically identified as such.
• As a number alone this means very little. A percentage would be better but still conveys little information.
• Our performance indicator tool measures this information.
• This may have been a subjective or qualitative criterion for WSDOT Research, rather than quantitative.
• We combine this with number 10.
• Provides direct feedback with respect to the impact of research.
• When the nature of the research lends itself to improving the design processes or contributing new information to technical standards or practices, this fact can be extremely persuasive to decision makers.
• Numbers of projects or products is not considered of prime importance by upper management. They are more concerned with outcome measures, such as dollars of benefits.

Leadership

• This is very similar to #9, Increasing Knowledge.
• This is a powerful measure at the customer/bureau manager level.
• As we continue to emphasize that research be strategically focused, the amount of proactive research conducted will likely increase. Most research (as applied) responds to existing problems. Currently, FDOT does not support basic research (except by supporting federal research).
• Has not been used as a formal research performance measurement. As we continue to emphasize that research be strategically focused, the amount of proactive research conducted will likely increase. Most research (as applied) responds to existing problems. Currently, FDOT does not support basic research (except by supporting federal research).
• This is a good measure for those who are either technically oriented or want to make significant improvements through research.
• Our program has been somewhat oriented in this direction.
• Our performance indicator tool measures this information.
• This has been a subjective or qualitative criterion for WSDOT Research, rather than quantitative.
• We feel we are leading the pack in a number of transportation research areas; however, we don't perform any work exclusively to maintain our position in the field. This is interesting, but seems to be an extremely subjective thing to measure.
• While we don't count projects in this category, we certainly use in-house developed work to promote the benefits of the research section. We also use this measure for what we call technical assistance projects that we report in our annual report. These are typically informal research efforts that we initiate in response to operational or headquarters problems.
• If the nature of the research and post-research performance monitoring lends itself to a proactive solution or adding to scientific or technological knowledge in the field, then this would be cited. It is not clear how persuasive these statements are, but we think its impact is similar to that of image enhancement.
• Numbers of projects or products is not considered of prime importance by upper management. They are more concerned with outcome measures, such as dollars of benefits.
Percent of Projects/Products Implemented

• We track both the number of implementation plans we have developed and the measurability of the impact that implementation has.
• This is dependent on the interest of the project managers in implementing a research product versus affecting design/construction project costs or schedule.
• Although this measure may be too broad, any measure on implementation of research findings is among the most important to our programs. This measure does have the problem of being skewed depending upon how a state runs its research. Very short projects to select a needed change for implementation get high marks. Longer-term, riskier projects don't result in implementation as often, but may have a much higher payoff.
• Some version of this PM is in the process of being implemented. Most projects (as applied) are expected to be implemented. However, formal processes weren't in place in the past (implementation was treated as a foregone conclusion for a number of reasons) and are only now being instituted.

• Has not been used as a formal research performance measurement. Some version of this PM is in the process of being implemented. Most projects (as applied) are expected to be implemented. However, formal processes weren't in place in the past (implementation was treated as a foregone conclusion for a number of reasons) and are only now being instituted.
• We report the number of projects authorized, projects implemented and projects with implementation in progress. (A percentage is not calculated.)
• We are under new research management that makes this one of the highest priorities of the program. We are reorganizing and developing procedures to incorporate implemented research into all phases of research until the product is a standard for the department.
• Have attempted to implement this PM.
• Something we need to be aware of, but not used as a performance measure.
• This measure sounds good, but ignores the fact that we can learn just as much from a research study that says a solution was not found or that action should not be taken.
• A better direct measure of what we do and how we do it. Does not require us to take credit for the uses to which our customers are able to put research.
• Our performance indicator tool measures this information.
• RTI reports to our Research Oversight Committee (TxDOT's executive steering committee for research) every six months on the status of implementation. We report status on product implementation for the previous five-year period. There are three categories reported: implemented, not implementable, and pending. The pending category means the product implementation is planned but not yet begun, or that TxDOT is still waiting on delivery of the product from the researcher.
• This may have been a subjective or qualitative criterion for WSDOT Research, rather than quantitative. Feasibility issues arise because projects are incremental and budget for implementation may not be readily available. How is "implemented" defined?
• We are currently focusing on implementation in the two areas that have the most projects and funding for the department (structures and pavements). We have implementation plans for several (but not all) of the projects completed within the last 3 years in these areas. Ideally, we would like to have a formal implementation plan for every research project. Time and staff are the biggest constraints.
• We have, under the threat of having to justify our existence, looked at a ratio of projects implemented. More recently, under a quality initiative program, we have looked at a process review of our implementation process. The committee found that we had implemented 47 percent of our projects.
• We have not done this recently as our present performance measures are more outcome oriented.
• Implementation needs to have greater emphasis.
• We have avoided this type of scorekeeping because the definition of implementation (complete or partial) is not clear, and the statements may not be persuasive. Most often, the rule of thumb '20% of the projects generate 80% of the benefits' holds true for transportation research, and nobody knows which 20% will pay off handsomely at the start of each project.

• Numbers of projects or products is not considered of prime importance by upper management. They are more concerned with outcome measures, such as dollars of benefits.

Percent of Projects Completed on Time

• Research is unpredictable; there are many factors that are uncontrollable. Sometimes it is better to have a comprehensive research project that lasts a year longer than expected, rather than be caught up in just getting the project done.
• We use university contracts to perform our research. It is often difficult to get the project completed in accordance with the schedule.
• Tracked only as a matter of interest on an occasional basis as an internal audit.
• This ratio hasn't been used as a PM. However, the information is readily available through the tracking database. It may be used for assessing Project Mgr or Principal Investigator performance. It may be used in the future as a PM.
• We are in the process of collecting this data and developing this performance measurement. We will have data this calendar year.
• This ratio hasn't been used as a PM. However, the information is readily available through the tracking database, and it may be used for assessing Project Mgr or Principal Investigator performance. Not being on time has a negative impact on our ability to implement improvements on some projects. Further, some of these projects could reduce resources, so delays may have a significant impact. However, unlike PMs for a construction/maintenance program, on time and on budget for research is really not that important. Those research projects for which it is important can be targeted for special attention.
• While not a performance measure, we do track all projects for time and money.
• We track the number of projects completed on time. This is more of a program management measure that does not really get at measuring the reason why we do research. We do it because it is easy.
• Again, this measures our efficiency and effectiveness as a research organization.
• Our performance indicator tool measures this information. I put little value on this because I've come to expect time extensions as a standard in the research business.
• While we want to get research results in a timely manner and we actively manage our projects to ensure that all milestones are met, I do not feel that this is a particularly useful PM, because it does not address the QUALITY of the results.
• It's good info to have, but it should not drive a program.
• We have avoided this type of scorekeeping because there does not appear to be a strong correlation between project-result value and on-time completion.
• Use of the measure does encourage more timely completion, but we are a long way from 100% on time.

Percent of Projects Completed Within Budgets

• Projects are paid for on a lump-sum basis, so very few projects run over budget. In the rare occasion that they do, it is based on a valid change in scope of work.

• This would be a very good PM for our unit since all projects are completed within the proposal budget unless the Department chooses to add work. For NCDOT, very few projects are not completed within the original budget unless scope is increased. For this reason, it is not currently a performance measure for us.
• Tracked as a matter of interest - but more on an occasional audit basis, not yearly.
• No opinion.
• This has not been used as a performance measure because supplemental agreements usually represent an expansion of scope, additional requested services, etc., rather than a failure to work within the estimated budget.
• Project-by-project qualification/analysis would be required to render this a viable PM.
• This is a research performance measurement that will be started soon.
• This has not been used as a performance measure because supplemental agreements usually represent an expansion of scope, additional requested services, etc., rather than a failure to work within the estimated budget. Project-by-project qualification/analysis would be required to render this a viable PM.
• We do track the money, but this is not a performance measure.
• Funding is a constrained resource. In a given year we approve quite a few no-cost extensions, but rarely approve a request for additional money. Consequently, this measure would not have much meaning for us because so few projects exceed the budget.
• I don't think this is a number we want to advertise. I also think it could become a counterproductive measure. I take a fairly lenient view of scope and budget changes, on the assumption that if we knew what we would find going in, it probably isn't research.
• Our performance indicator tool measures this information. This measure is more useful than timeliness. However, many research projects have tasks added to them because discoveries are made throughout a typical project that can be and are added to existing projects.
• Our research is based on contractual project agreements with universities. The project agreements stipulate a budget amount that the researchers must adhere to.
• Used more subjectively and qualitatively in WSDOT Research, rather than given a numerical value.
• All of our projects are completed within budget unless we authorize additional funds for additional work.
• Generally, all projects are completed within budget except those projects which have been modified to incorporate additional work requested by the project review committee.
• We have avoided this type of scorekeeping because there does not appear to be a strong correlation between project-result value and on-budget completion.

Number of Contractors

• Our resource pool is fairly fixed. We have the ability to contract with 18 primary universities. They can subcontract to other consultants or universities. Also, the number of contractors will be dependent on the number of projects let each year.

• Important for diversifying research program content. We have seen a dramatic improvement in research proposals since diversifying the universities with which we contract over the last decade.
• Tracked as a matter of interest, but not a goal.
• While the Research Center is conscious of its relationships with its research contractors, especially the state universities in Florida, the number of contractors is not used as a PM.
• We track this, but it is not a formal performance measurement.
• While the Research Center is conscious of its relationships with its research contractors, especially the state universities in Florida, the number of contractors is not used as a PM.
• In our annual report we do a pie graph showing the distribution of contractors (including in-house staff). We like to have some diversity in who conducts our research. Historically it was very concentrated with one university, and we have consciously tried to move away from this. In this regard the measure is useful.
• I think this is possibly of significant operational value, but not of much external value. It's been a personal goal to expand our stable of investigators, and I've encouraged staff to take projects out of state.
• We report monthly, through the vehicle of a Dashboard, information to our deputy secretary. This information includes a measurement of the number of contractors currently conducting research for PennDOT.
• Used more subjectively and qualitatively in WSDOT Research, rather than given a numerical value.
• Because we have a large number of qualified universities and private researchers in Ohio who are interested in contracting with us, we are diligent in our efforts to ensure open access to all qualified parties. We don't, however, use a formal PM to assess how well we are doing this.
• We do not run a contract program like NCHRP and so do not have any need of this statistic.
Number of Contractor Partnerships

• We do encourage partnerships with other agencies, especially resource agencies.
• Our goal is to get buy-in on the results.
• Most of the projects are single-university contracts; a very small percentage are joint.
• Although important for the reasons described in 17 above, not currently used as a performance measure.
• Tracked as a matter of interest only.
• The nature of partnerships is not uniform: e.g., we have partnerships with two UTCs, a partnership with another university to conduct our LTAP, and a partnership with yet another university to engage in specialized work (it is to be self-sufficient in 5 years, although we'll continue to use it, as needed, thereafter).
• We have these partnerships, but they are not part of a formal research performance measurement.

• Efforts to partner are low in priority compared to other issues. Further, the nature of partnerships is not uniform: e.g., we have partnerships with two UTCs, a partnership with another university to conduct our LTAP, and a partnership with yet another university to engage in specialized work (it is to be self-sufficient in 5 years, although we'll continue to use it, as needed, thereafter).
• This program has always valued partnered research and makes special efforts to find innovative ways to partner.
• I view this measure as similar to the last; a good indirect measure of the quality of a research program, but I do not believe it has much marketing value.
• We report monthly, through the vehicle of a Dashboard, information to our deputy secretary. This information includes a measurement of the number of partnerships currently in place within PennDOT's Research Program.
• Used as a qualitative measure as well as given a quantitative value to leverage other people's money to increase the depth of the WSDOT Research Program.
• This is a requirement for research projects selected for funding under the ODOT Partnered Research Exploration Program (OPREP).
• Affects about 2-3 projects per year. Not really used as a formal PM.
• We encourage these partnerships but have not used them as a PM yet.
• We do not run a contract program like NCHRP and so do not have any need of this statistic.
• Partnership is hard to measure. A yes/no criterion doesn't really quantify the strength or value of the partnership.

Percent of Satisfied Customers

• Never directly measured, but satisfaction is very important if people are going to turn to research for help.
• This is probably the best PM.
• Our customers within the NCDOT are the most important indicator of our program. If our customers are happy and continually come back to us with more research ideas and a greater number of requests for assistance, then there is no more powerful indicator of the success of our program. In fact, we currently keep track of the number of current active customers as a performance measure unto itself. We are also looking to enable the customers to define the performance measures for a project on a case-by-case basis at the inception of the project.
• The Research Center is very concerned about customer service, but there are many customer groups: e.g., functional areas, Project Mgrs, researchers, universities/other contractors. No formal surveys have been conducted, but numerous forums have been provided to engage each of these areas in conversation and to gain feedback.
• Not a formal research performance measurement.
• The Research Center is very concerned about customer service, but there are many customer groups: e.g., functional areas, Project Mgrs, researchers, universities/other contractors. No formal surveys have been conducted, but numerous forums have been provided to engage each of these areas in conversation and to gain feedback.

• Sometimes the only feedback we get is whether the customer is satisfied with the results or not.
• Last year we conducted our first survey of Research Division customers and held a workshop to review the results and identify ideas for improvement in specific areas that were rated lower. This survey was very valuable regarding how we should align ourselves work-wise and in identifying what is important to others. We like to tout that more than 86% of the respondents are very satisfied with our services, but the real benefit of the survey is identifying our performance in more specific areas that we can work on addressing.
• As part of the Planning Section, we do a biennial customer satisfaction survey. For the most part it has provided fair and constructive feedback.
• Our Customer Survey tool is specifically designed for this purpose.
• We report this information to the Secretary of Transportation via our Quarterly Dashboard.
• Our research committee structure provides RTI with an adequate feedback mechanism. Also, our research project directors often come from the districts or divisions who will be responsible for implementing the products developed by the research.
• Used as a qualitative as well as quantitative percentage in the past by the WSDOT Research Program.
• The bulk of our PM comes from the results of qualitative surveys distributed to our technical liaisons and researchers. It's good feedback on the project level, but lacks usefulness on the program level.
• We have recently conducted three customer satisfaction surveys that are being analyzed; one each for our DOTD employees, industry partners (contractors, suppliers, governmental agencies, consultants), and our university researchers (more devoted to the research process or the PI experience).
• This is in our provincial business plan.
• If the nature of the research and pre-/post-research customer-satisfaction surveys would lend itself to quantification of satisfied customers, this statistic would be persuasive.
• We haven't used this, but suspect it would have high value.
• Research customers set research priorities and participate in the research. Customers are involved in the research process from problem identification through product test and evaluation. Research is not complete until the customer is satisfied.

Contribution to the Overall Mission of the Department

• We have found mission statements to be vague and all-encompassing. We now take the approach that the department's performance measures are a more specific representation of the department's mission and that each of our research projects should have the potential to improve at least one department performance measurement.
• This would be good if there were a strategic plan to compare your projects against.

• We conduct very little basic research. All of our projects are driven by the needs of the Department.
• This one seems to be very nebulous.
• All projects contribute to the overall mission. Project ideas develop from within each area requesting research, the areas prioritize the projects, and upper mgt reviews and approves the projects to ensure that they are in line with the Department's strategic direction. Constant improvements are made to this process.
• This has not been a formal research performance measurement, but I anticipate adding this measurement in the near future.
• It should go without saying that this PM should be implicit in any research program. All projects should contribute to the overall mission. In Florida, project ideas develop from within each area requesting research, the areas prioritize the projects, and upper mgt reviews and approves the projects to ensure that they are in line with the Department's strategic direction. Constant improvements are made to this process.
• Seemingly all projects would contribute to the mission of the STA in some way.
• This should be included in our program in the near future.
• We have been trying to better align our research work program to support the goals and objectives in our agency's business plan. As such, this is a very valuable measure for us. The difficulty in the past has been that projects that do not directly support a goal or objective (e.g., smoother pavements) in a definitive area can always be lumped under a business plan emphasis area of improved efficiency. To help address this issue we modified our problem statement form to request that the submitter identify the specific goals and objectives the research project would support. So in essence all of our projects will support a goal/objective in the business plan. What may be more important to show is how the projects are distributed over the six emphasis areas of the business plan, e.g., safety, mobility, system preservation, customer service, etc.
• Hard to measure. Frankly, our Department's stated mission is a bit of a moving target.
• We report this measurement through our Annual Business Plan.
• Not a formal PM. However, through our research committee structure, the research undertaken by TxDOT is consistent with the overall mission of the department. In theory, 100% of our research contributes to the overall mission of TxDOT.
• Used as a subjective or qualitative assessment as well as quantitative. All projects were expected to contribute to the overall mission of the STA. However, no projects are outside of the mission, so is it meaningful?
• This is another item that is examined BEFORE the research is funded as opposed to afterwards.
• We have provided measures to our department over the years as we are required to submit measures to the legislature for our programmatic-based budget. Generally, the indicators used were on the order of the number of projects started, completed and underway. These are not considered to be effective measures. If effective measures were chosen, I would probably change this rating.
• Have not quantified this.
• It is not clear how to gather this type of impact information for a specific research project's cause and effect, so although it appears to be valuable, we have never used this statistic.

• We make a very strong effort both to align our research to the Department's strategic plan and to use our research to inform and influence the plan. I believe we could contribute some unique examples of how this is done.

• The research program is aligned with Department priorities. Each research project is explicitly related to the Department's Guiding Principles, which are associated with the overall mission of the Department.

TRB’s National Cooperative Highway Research Program (NCHRP) Web-Only Document 127: Performance Measurement Tool Box and Reporting System for Research Programs and Projects explores the integration of standard performance measures and tools to assist users in implementing performance measures into the Research Performance Measurement (RPM) System.