National Academies Press: OpenBook

Transit Service Evaluation Standards (2019)

Chapter: Chapter 4 - Case Examples

Page 37
Suggested Citation:"Chapter 4 - Case Examples." National Academies of Sciences, Engineering, and Medicine. 2019. Transit Service Evaluation Standards. Washington, DC: The National Academies Press. doi: 10.17226/25446.


The survey results provided an overview of the transit agencies' evaluations of their processes and issues in relation to transit service evaluation standards. After a review of these results, six agencies were chosen as case examples. Personnel directly involved with the service evaluation process were interviewed by telephone. The case examples provide details on the development and updating of the service evaluation process, how standards are used, priorities among standards, board and agency attitudes toward the service evaluation process, challenges, lessons learned, and keys to success.

The selection process for case examples had several criteria:

• Transit agencies of various sizes in different parts of North America, with a special emphasis on the inclusion of small agencies;
• Agencies that have taken innovative approaches or faced significant challenges; and
• Agencies that provided detailed survey responses and interesting observations.

More than 80% of responding agencies offered to serve as a case example. The six case example cities and agencies are

• Boston, Massachusetts: The Massachusetts Bay Transportation Authority is pioneering the development of customer-based metrics that better reflect the passenger experience rather than the operational characteristics of delivered service.
• Corpus Christi, Texas: The Corpus Christi Regional Transportation Authority has a flexible process that takes board concerns into account when applying performance standards; after an in-depth discussion between board members and staff, it recently changed bus stop spacing in accordance with these standards.
• Denver, Colorado: The Denver Regional Transportation District has a long history of service evaluation using standards tied directly to the agency's mission and goals. The standards define the type and level of service that a community can expect and provide an objective, transparent basis and rationale for the district's service-level decisions that everyone can understand.
• Modesto, California: Modesto Area Express is a small agency that consulted its stakeholders and riders as it developed metrics, guiding policies, and principles 2 years ago to guide the future development of transit. It is continuing to educate the public and elected officials as it begins to use these metrics to make decisions about its system.
• Seattle, Washington: King County Metro developed a performance evaluation process in conjunction with its strategic plan and has established priorities that directly inform service decisions.
• West Palm Beach, Florida: PalmTran created a new Performance Management Office that reports directly to the executive director, with the purpose of using performance metrics to improve the agency rather than simply monitoring performance and producing reports.

Nine process improvement teams were created around individual performance areas, with specific metrics assigned to each team.

A basic description of the transit agencies included in the case examples (ridership, revenue hours, and peak vehicle requirements for all services operated) has been developed from FY 2016 NTD reports (Table 15). The case example interviews explore issues raised by the survey responses in greater depth and provide a more complete view of the service evaluation process at the individual agency level. The opinions and findings presented in each case example pertain to the specific transit agency highlighted.

Massachusetts Bay Transportation Authority, Boston, Massachusetts

The Massachusetts Bay Transportation Authority (MBTA) provides public transportation in the Boston metropolitan region, including bus, heavy rail, light rail, commuter rail, ADA paratransit, BRT, trolleybus, and ferryboat services. According to the 2016 NTD data, MBTA's service area population is 3.109 million. MBTA has 779 buses, 612 demand response vehicles, 421 commuter rail cars, 336 heavy rail cars, 156 light rail cars, 30 BRT buses, 22 trolleybuses, and 9 ferryboats in maximum service. Average weekday ridership in 2016 was 1.3 million, and annual ridership was 403.0 million.

How Performance Evaluation Has Developed and/or Changed

MBTA has had a service delivery policy since the 1970s and has revised it several times. The most recent revision to performance metrics and standards was adopted in January 2017. The performance metrics identify specific measures to use in evaluating service, while the performance standards define acceptable performance, set goals, or both for each measure. The primary goal of the new service delivery policy was to create metrics that better reflect the passenger experience rather than the operational characteristics of delivered service.
The revision also took advantage of new data sources. MBTA gathered both internal and external stakeholder feedback on the policy goals and on the best metrics to measure those goals on the basis of available data. The revised policy was presented to and adopted by the governing board.

Table 15. Characteristics of case example agencies.

Agency                                             Annual Ridership   Annual Vehicle Revenue Hours   Number of Peak Vehicles
Massachusetts Bay Transportation Authority              403,003,734                      6,685,426                     2,374
Corpus Christi Regional Transportation Authority          5,456,925                        359,996                       102
Denver Regional Transportation District                 103,340,797                      4,267,263                     1,435
Modesto Area Express                                      3,241,665                        188,969                        63
King County Metro                                       127,384,761                      4,662,806                     2,818
PalmTran                                                 10,581,570                      1,047,899                       424

Source: FY 2016 NTD reports and agency data.

The revised performance metric for heavy and light rail reliability reflects the focus on passenger experience. Owing to the frequency of trips on these modes, passengers do not rely on printed schedules; rather, they expect trains to arrive at consistent headways. MBTA can also

estimate passenger arrivals at each station by the minute using its automated fare collection (AFC) system. Therefore, schedule adherence is now based on passenger wait time (measured by the percentage of passengers who wait the scheduled headway or less for a train to arrive) instead of on-time performance (measured by the percentage of timepoints served within the on-time range). Under the new metric, a train may arrive late to a station, but only those passengers who wait longer than the scheduled headway are counted against the standard. In addition, because the new metric is measured in terms of passengers, it is more heavily weighted toward peak times of the day, when both service levels and passenger volumes are highest.

On the bus side, because the fare collection and passenger count systems record a passenger only upon boarding a vehicle, MBTA does not know how long a passenger may have been waiting but does know how many passengers are on a bus at any given time. MBTA uses these data to estimate passenger comfort, a new metric that measures the number of passenger minutes spent in crowded conditions, with these conditions defined differently by type of service and time of day. The old metric did not take time into consideration, instead using the number of standing passengers at the peak load point to measure whether or not each trip exceeded a crowding standard. The difficulty with the new metric lies in forecasting the impact of potential service changes on passenger comfort.

MBTA uses data from its AFC, automatic vehicle location (AVL), and automatic passenger counter (APC) systems. Different data sources are used to measure different performance metrics.

How Standards Are Used

The short answer is that it depends on the standard.
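The wait-time reliability metric described earlier can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not MBTA's actual methodology: the function name, the simple next-train lookup, and the handling of passengers arriving after the last train are all assumptions made for clarity.

```python
from bisect import bisect_left

def pct_within_headway(passenger_arrivals, train_arrivals, scheduled_headway):
    """Percentage of passengers who wait the scheduled headway or less.

    Times are minutes from the start of the analysis period. Passengers
    arriving after the last train are excluded in this simple sketch.
    """
    trains = sorted(train_arrivals)
    waits = []
    for t in passenger_arrivals:
        i = bisect_left(trains, t)      # first train arriving at or after t
        if i == len(trains):
            continue                    # no train left in the period
        waits.append(trains[i] - t)
    if not waits:
        return 0.0
    ok = sum(1 for w in waits if w <= scheduled_headway)
    return 100.0 * ok / len(waits)

# Trains scheduled every 6 minutes, with one 12-minute gap (one trip
# missing), and estimated passenger arrival times from fare-gate data:
trains = [0, 6, 18, 24, 30]
passengers = [1, 5, 7, 10, 16, 20, 25]
print(round(pct_within_headway(passengers, trains, 6), 1))  # → 71.4
```

Note how a single missed trip penalizes every passenger who accumulated during the gap, which is why this metric is more heavily weighted toward peak periods than a timepoint-based on-time measure.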
MBTA posts reliability data every day for accountability on its public dashboard (an online tool that allows members of the public to download data), reports on all of the metrics annually to the state legislature as part of its annual performance report, and uses the metrics to guide service planning.

The agency has minimum and target standards for most metrics that act as a trade-off mechanism for resources. For example, there is a trade-off in efforts to improve reliability by increasing run and recovery times: in a revenue-neutral scenario, frequency is reduced and passenger comfort worsens as the same number of passengers are crowded into fewer trips. To guide this and other trade-offs, the minimum sets a floor for each metric that requires a given resource level, while the target sets a goal for each metric whose achievement depends on resource availability. Reliability has been prioritized in assessing bus and train schedules, but MBTA is aware of the implications for frequency and passenger comfort.

MBTA uses several of the performance metrics and associated data for service planning. To address reliability, MBTA uses its AVL system to report on trip run times, on-time performance, and scheduled departure lateness. These measures are then used to adjust trip run and recovery times, segment times, and departure times. To assess existing comfort, MBTA uses data from its AFC system to identify problem routes or times of day. As mentioned above, although the Service Delivery Policy uses a passenger comfort standard, MBTA uses its APC system to estimate the impacts of a service change on average maximum load because of the difficulty of using the AFC system to estimate impacts on comfort.

Two examples of the agency's innovative approach to performance evaluation are in the areas of route cost–benefit ratios and service coverage. MBTA assesses a cost–benefit ratio for each bus route.
Previously, route elimination would be considered if a route's net cost per passenger was three times the system average. The current process, based on three metrics, is more complicated and holistic. The metrics are total ridership, equity (percentage of transit-dependent riders on the route), and value of the route to the network (measured by the number of households and jobs given access to the network and the percentage of transfers). MBTA calculates each metric, normalizes the results, and produces a transit benefit that is compared with operating cost. Looking at transit benefit allows MBTA to speak to the different roles that routes play within the network. Ridership is weighted 70 in the calculation, and the other two metrics are each weighted 15. Once all this is calculated, MBTA identifies the outliers, both good and bad, and considers potential actions for improvement as part of the service planning process.

The second example is MBTA's approach to service coverage. Instead of a single coverage measure, MBTA uses three: a basic minimum, a coverage target for areas with a high percentage of low-income households, and a coverage target for high frequency in high-density areas. This measure design provides some flexibility in cutting back on overall coverage to increase high-frequency routes and low-income access.

Are Some Standards More Important?

The revised Service Delivery Policy prioritizes reliability, passenger comfort, and overcrowding. As noted above, it provides a holistic approach to issues such as transit benefits and costs and frequency versus coverage.

Board Attitude

The board worked with MBTA planning staff to review the revised Service Delivery Policy through a 2-hour workshop and many presentations over the course of the process. On the basis of what staff have heard from peer agencies, the MBTA board is engaged in the process of developing service delivery policies to a greater extent than most boards.

Agency Attitude

The transition from operational to customer-focused metrics is challenging because it is so new and so different. Customer-focused metrics factor in many variables that are beyond an agency's direct control.
Many operations staff continue to use operational metrics such as headway maintenance, trip departure lateness, or run time variability when assessing service and express frustration when operational improvements fail to translate into performance metric improvements. The Service Delivery Policy is sometimes seen as a black box.

Challenges

The last major update to MBTA's Service Delivery Policy took more than 2 years to complete. It will likely need small revisions to address any changes that arise in the implementation process.

While the availability of data presents new opportunities, the volume and newness of these data also present potential challenges. MBTA is still creating new methodologies for using its AFC and APC data to assess service. These methodologies need to be vetted to ensure they provide correct interpretations of the data. In addition, as methodologies change or adapt to new data sources, a challenge will be to present a consistent analysis history. For example, MBTA will implement a new fare collection system in 2021 that will replace its current AFC data. The quality of data collection will undoubtedly improve, but will those data be comparable to the data MBTA is currently collecting?

While the creation of a service delivery policy itself represents a significant achievement for an agency, the implementation of the standards contained in that policy is an additional challenge. As mentioned above, in a resource-constrained environment, prioritizing one standard often occurs at the expense of another. These trade-offs are likely to be politically unpopular, making their implementation more difficult. Indeed, limited resources are one of the reasons that actual service levels fail to meet the service evaluation standards.

Broader issues include the following: As we get more data, how do we translate that into operations? How do we fit this into our work flow? How do we do service planning in the age of so much data? How do we find the nuggets of information that are important and prioritize them?

Lessons Learned

One of the lessons MBTA learned in developing its public-facing dashboard was how to address multiple audiences at once. The MBTA team recognized the need for nesting levels of detail for different audiences. The first level is for those who just want a number for their route or train. The second level uses a section of frequently asked questions to explain how the measures are defined. The third level allows users to download data. The fourth level leads to the agency's blog, where very detailed discussions of data sources and methodologies take place.

• There will sometimes be a disconnect between the metrics used to assess service and the metrics used to adjust service, particularly to the extent that service evaluation focuses on the passenger experience. At MBTA, the two examples of this are the reliability standard, which assesses only schedule adherence and says nothing about how to set run or recovery times, and the comfort standard, which assesses crowding from the passenger perspective but does not indicate what frequencies are necessary to achieve comfort at the acceptable level.
However, agencies may be able to achieve performance improvements through means other than schedule adjustments alone (e.g., bus priority measures, stop relocation or consolidation, more efficient fare collection and boarding processes, improved communication), so it may not be advisable to use the same metric, or to create a direct correlation between the metrics, for service assessment and adjustment.
• It is important to try to calculate all the metrics prior to implementing them. The benefits of doing so are to prove that assumed methodologies work with actual data, to standardize the process for summarizing and reporting metrics, to consider how these metrics will be used by other analytical processes (such as Title VI monitoring), and to bring to light issues that did not rise to the surface when the methodologies were being developed. For example, had MBTA had more time to develop the latest revisions, it would have developed a methodology for assessing the frequency metric in corridors where routes overlap (whether to measure the combined frequency or treat each route's frequency separately).

Keys to Success

• A combined effort across agencies (Massachusetts DOT, MBTA) provided both operational and data experience.
• Flexibility. Last-minute changes were required in response to comments from leadership.
• A very extensive public outreach process prior to adopting new standards. Public feedback was very valuable in striking a balance between operational and public views of level of service. At workshops, staff set out a square meter on the floor and asked how many people could fit into it. This provided insight into what customers see, and this insight guided the design of appropriate measures.
• Balancing detail and simplicity. Presentations to the board and the general public had to simplify some complex concepts while still explaining the metrics in sufficient detail.
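The route-level transit benefit calculation described earlier (ridership weighted 70, equity and network value weighted 15 each) can be sketched as follows. The report does not specify the normalization method, so min-max scaling is assumed here; the function name, route names, and input values are illustrative only.

```python
def transit_benefit_scores(routes):
    """Weighted transit-benefit score per route (illustrative sketch).

    routes: dict of route name -> dict with 'ridership', 'equity', and
    'network_value' values. Each metric is min-max normalized to 0-1
    (an assumption; the policy only says the metrics are normalized),
    then combined with weights of 70 (ridership) and 15 each (equity,
    network value), as described in the text.
    """
    weights = {"ridership": 70, "equity": 15, "network_value": 15}

    def normalize(metric):
        vals = [r[metric] for r in routes.values()]
        lo, hi = min(vals), max(vals)
        span = (hi - lo) or 1.0          # avoid divide-by-zero
        return {name: (r[metric] - lo) / span for name, r in routes.items()}

    norm = {m: normalize(m) for m in weights}
    return {name: sum(weights[m] * norm[m][name] for m in weights)
            for name in routes}

# Hypothetical routes: R1 is high-ridership, R2 serves more
# transit-dependent riders and adds more network value, R3 is weak on all.
routes = {
    "R1": {"ridership": 12000, "equity": 0.40, "network_value": 0.30},
    "R2": {"ridership": 3000,  "equity": 0.70, "network_value": 0.55},
    "R3": {"ridership": 800,   "equity": 0.20, "network_value": 0.10},
}
scores = transit_benefit_scores(routes)
```

In practice each score would then be compared with the route's operating cost to find the outliers, good and bad, that feed the service planning process.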

Corpus Christi Regional Transportation Authority, Corpus Christi, Texas

The Corpus Christi Regional Transportation Authority (CCRTA) provides an integrated system of public transportation services in the Corpus Christi metropolitan region. Its services include bus, ADA paratransit services, and vanpools. According to the 2016 NTD data, CCRTA's service area population is 349,000. CCRTA has 67 buses, 28 demand response vehicles, and seven vanpool vehicles in maximum service. Average weekday ridership in 2016 was 17,770, and annual ridership was 5.46 million.

How Performance Evaluation Has Developed and/or Changed

CCRTA's original performance standards were developed more than 10 years ago and were based on similar-sized peer systems with guidance from FTA. The standards were approved by the 11-member CCRTA Board of Directors. The agency has revised the standards to take into account all required Title VI elements within FTA Circular 4702.1B. The most recent board-approved revision widened the acceptable bus stop spacing. Changes have also been made to the minimum number of weekday boardings required for bus stop amenities. A detailed on-time performance standard for fixed-route services was discontinued.

How Standards Are Used

The standards are discussed at executive staff meetings, especially those related to on-time performance and stop spacing. Bus stop amenities are discussed case by case in response to requests from customers, the board, elected officials, and other stakeholders. Internally, the standards are used to make decisions about specific services. Performance standards are also helpful in responding to customer comments.

The CCRTA board asks staff questions about performance standards and how they are applied and then discusses the specific issue at hand in light of these standards. This question-and-answer dialog normally takes place at monthly board meetings.
A flexible process that takes board concerns into account in the application of performance standards is important. On average, decisions regarding service changes are made in accord with the performance evaluation process about half of the time and are made for other reasons about half of the time.

Are Some Standards More Important?

Standards that guide the placement of bus stop amenities are important, given the hot climate of Corpus Christi for most of the year. Standards related to bus stop spacing are viewed as very important by the Operations Division but less so by customers. On-time performance is also important. Performance standards related to productivity are used internally, but these are not always discussed in detail with the board. A periodic review of productivity standards with board members would be helpful.

Board Attitude

The CCRTA board tries to meet customer needs when public comments are received on service change issues. Board members are aware of the performance standards but have had limited exposure to the performance evaluation process. In the discussion of the recent proposed change in the guidelines for spacing stops, one board member expressed concern that the guidelines would be applied universally without regard for extenuating circumstances. Standards are usually discussed only when an action item comes before the board for approval.

Agency Attitude

The executive staff are knowledgeable about and supportive of the performance evaluation process. The Operations Division is very involved in the process. CCRTA views performance evaluation as providing an objective rationale for service changes. The agency was careful to present recommendations from a recent systemwide planning study conducted by a consultant as being in accordance with performance standards. CCRTA proceeds cautiously when proposing changes to performance standards to its board. The recent change in bus stop spacing resulted in useful discussion among board members.

Challenges

• Education is very important! The CCRTA board has a general understanding of current standards, but the recent adoption of revised stop spacing standards was the first opportunity for newer board members to have an in-depth discussion with staff on standards.
• Case-by-case scenarios can override the performance-based recommendation. The board listens to heartfelt concerns. Alternatives need to be developed when decisions are being made.
• Limited staff time is a significant challenge. A challenge later in 2018 will be obtaining board approval for revamping Sunday service.

Lessons Learned

• Use a phased approach when developing a performance evaluation process. Do not start too aggressively. Begin more generally at a high level, then move to a more finite level.
• Do not assume knowledge of what a service standard means or how it is used. Conduct as much outreach and education as possible. Use innovative methods to collect and share feedback.
• Express standards as clearly and concisely as possible. Some standards (e.g., passengers per revenue mile) are not readily comprehensible to laypersons.
• Think through the implications of performance standards and be willing to revise them as necessary.
For example, standards for "lifeline" fixed-route services require 1-hour headways but do not address the possibility of replacing these with flexible-route or demand response dial-a-ride service.

Keys to Success

• A strong working relationship with the Operations Division. This provides confidence in and support for performance-based recommendations.
• Education is key! Consistently state the standards in presentations related to performance monitoring and reporting.
• Willingness to update service standards more often on the basis of peer examples and internal efforts.

Denver Regional Transportation District, Denver, Colorado

The Denver Regional Transportation District (RTD) provides public transportation in eight counties in the Denver metropolitan region. Its services include bus, rail, shuttles, ADA paratransit services, demand response services (Call-n-Ride), special event services, vanpools, and more. According to the 2016 NTD data, RTD's service area population is 2.920 million. RTD has 873 buses, 404 demand response vehicles, 140 light rail vehicles, and 18 commuter

rail vehicles in maximum service. Average weekday ridership in 2016 was 345,143, and annual ridership was 103.34 million.

How Performance Evaluation Has Developed and/or Changed

RTD developed performance standards more than 20 years ago. The agency has revised the standards on a few occasions. The most recent major revision was in 2016. Changes included the service classification system, new service warrants, service design and performance standards, demand response transit, and standards for transit-dependent riders. The revision involved extensive analysis and review, both internally and with stakeholders.

How Standards Are Used

The standards are used frequently and serve three major functions:

• To define the type and level of service that a community can expect;
• To provide an objective basis and a rationale for RTD's service-level decisions (board members and customers understand the performance standards and how they are used); and
• To guide the use of limited resources to provide efficient service.

RTD has the data and the analytical tools to recommend and evaluate changes objectively.

Are Some Standards More Important?

The most important standards by far are performance and economic measures. Productivity is measured by passengers per hour on most routes and by passengers per trip on regional-class routes. Cost-effectiveness is measured by subsidy per passenger.

Standards related to on-time performance and crowding are very important; these are directly related to the everyday customer experience. Bus stop spacing is also important. RTD recently consolidated bus stops along the Broadway–Lincoln corridor on the basis of its spacing guidelines, using a model to identify between 20 and 25 candidate stops for consolidation. RTD extended a bus lane on Broadway and Lincoln Street (a one-way pair) and consolidated many of these stops, which resulted in a travel time savings of 3 minutes in each direction.
Public reaction was positive, and RTD is now implementing bus stop consolidation on additional well-used routes with closely spaced stops. RTD staff are more confident in presenting recommendations based on data and (in the case of stop consolidation) models. It takes time to develop this process as the standard operating procedure.

Board Attitude

The RTD board has been very supportive of the performance evaluation process. The newly extensive availability of data and modern analytics has provided a foundation on which to build an easily understood rationale and credibility for service development.

Agency Attitude

Performance evaluation has been part of RTD for a long time. The basic approach of analyzing boardings per hour and subsidy per boarding for every route by class of service is understood and accepted throughout the agency. Routes that meet the standards are good; routes that do not meet the standards are subject to further analysis. The on-time performance standards tend to come into play when there is a reliability issue on a particular route.
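The route-screening approach described above (boardings per hour and subsidy per boarding, evaluated against class-specific standards) could be sketched as follows. The thresholds, class names, and input figures here are hypothetical; RTD's actual values are set in its service standards and vary by class of service.

```python
# Hypothetical class-specific thresholds for illustration only; an agency's
# real values would come from its adopted service standards document.
STANDARDS = {
    "urban_local": {"min_boardings_per_hour": 25, "max_subsidy_per_boarding": 5.00},
    "suburban":    {"min_boardings_per_hour": 10, "max_subsidy_per_boarding": 10.00},
    "regional":    {"min_boardings_per_hour": 15, "max_subsidy_per_boarding": 8.00},
}

def screen_route(service_class, boardings, revenue_hours,
                 operating_cost, fare_revenue):
    """Return (passes, metrics) for one route.

    Routes meeting both standards for their class pass; routes failing
    either are flagged for further analysis, not for automatic cuts.
    """
    std = STANDARDS[service_class]
    bph = boardings / revenue_hours
    subsidy = (operating_cost - fare_revenue) / boardings
    passes = (bph >= std["min_boardings_per_hour"]
              and subsidy <= std["max_subsidy_per_boarding"])
    return passes, {"boardings_per_hour": round(bph, 1),
                    "subsidy_per_boarding": round(subsidy, 2)}

# A suburban route: 5,200 boardings over 400 revenue hours, $52,000
# operating cost, $6,000 fare revenue.
ok, metrics = screen_route("suburban", boardings=5200, revenue_hours=400,
                           operating_cost=52000, fare_revenue=6000)
```

Grouping thresholds by class of service is what provides the jurisdictional equity discussed below: the same route data would fail the urban-local thresholds while passing the suburban ones.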

The performance evaluation process is transparent: everyone comprehends why RTD looks at specific routes. The businesslike approach, combined with the use of classes of service for jurisdictional equity, is understood by board members and stakeholders, although it does help to remind them periodically. Over the years, RTD has incorporated charts and graphics in public reports and presentations to provide a clearer picture of the data. Data tables are available as backup, but not everyone wants to pore over numbers. Graphics aid transparency.

Challenges

• The initial challenge 20 years ago was the lack of data. Now RTD has the data. It took a bit of time to get people used to the data and to trust its veracity, but now the data are believed.
• Performance standards are not applied in a vacuum. RTD considers other factors, such as the presence of a large number of riders with disabilities, before making a decision about a given route. The process needs to be flexible in certain circumstances.
• As noted above, including classes of service in performance standards is a way to provide jurisdictional equity. Standards for routes serving low-density suburban areas are different from those applied to routes in the urban core. This approach works well for the evaluation of existing routes, but some areas outside the urban core argue that they deserve more new service.
• A final challenge is understanding changes in sociodemographics, travel patterns, urban–suburban development, city policies (e.g., Complete Streets), technology-enabled options, and the operating environment, and how these might affect changes to standards. Meeting this challenge requires research, evaluation, and major discussions.

Lessons Learned

• Identify your agency's mission and goals and tie performance standards directly to them. In RTD's case, a crucial goal is to provide cost-effective service throughout the district.
Next, develop a few principal objectives that measure the goals and show your management, board, and customers (preferably annually) how your individual services are meeting the goals or will lead to investigation of changes. • Develop other specific criteria and measurements (standards) that support the principal objectives and provide specific and clearly defined support for service development. • Develop a data-driven process using verified and understandable data and information that directly support the agency’s mission, goals, and objectives. • Evolve on the basis of available data and analytical tools to address new situations. RTD’s 2016 revision clarifies that cost sharing by municipalities can result in additional service, especially local shuttles or circulators. With RTD’s FasTracks 15-year capital improvement program, each new light rail corridor was implemented with a restructuring of bus services in that corridor. • Communicate internally, with your board, and with stakeholders. RTD’s annual perfor- mance report goes to its board and is sent to stakeholders. The report explains why RTD does what it does. Actively inform your stakeholders. RTD has a list of 100 stakeholders who are notified about proposed service changes. Communication needs to be a constant process, not a one-shot event. • Internally, examine the top and bottom routes in the performance evaluation, depending on the budget situation. RTD makes changes every 4 months, so change is a constant and is understood. If an agency makes changes infrequently, change becomes a big deal. • Manage the process. The performance evaluation process becomes more familiar each time RTD does it. The process is ongoing, with service standards as the starting point.

Keys to Success

• Staff who recognized the need for performance measures and took the time and effort to develop the performance evaluation process.
• Willingness to evolve as the need arises. Consider the history of the transit industry: transit came out of the private sector. The business-like approach was already there, and basic performance standards have always been used. Financial and economic feasibility and analytics are not new, nor is the concept of the net social benefit of providing service. While acknowledging the social issues related to mobility, the essential questions are where service is provided and why.
• Performance is not isolated. The market for transit is also considered: who are we trying to reach? Agencies develop new services to meet specific markets with different expectations and thus different standards. One example is RTD's suburban service standards: approximately 10 boardings per hour is the tipping point between demand response and fixed-route service.
• Transit development among peers is similar, but the context in which it applies is always different. However, context should not be an excuse. Find out what works well elsewhere, then see how that can apply in your agency's context and your board's goals and objectives. Agencies are more similar than different. Peer group experiences can help to answer the question, "How do we best do this?" Important performance factors form the basis, and context informs the specific implementation.

Modesto Area Express, Modesto, California

The City of Modesto adopted the name Modesto Area Express (MAX) in 1990 for its transit system that serves the city and neighboring communities. Commuter routes connect to Bay Area Rapid Transit and Altamont Commuter Express commuter rail. According to the 2016 NTD data, MAX's service area population is 189,000. MAX has 46 buses and 17 demand response vehicles in maximum service. Average weekday ridership in 2016 was 10,929, and annual ridership was 3.24 million.

How Performance Evaluation Has Developed and/or Changed

The transit manager brought experience with performance evaluation to MAX. In 2016, the agency consulted its stakeholders and riders as it developed metrics, guiding policies, and principles to guide the future development of transit. The process was based on experience, best practices from other systems, and how the system is currently operating. The resulting policies balance performance and financial measures with customer comfort, convenience, and satisfaction.

How Standards Are Used

Monthly reports show route performance with regard to six standards. Headways are generally unchanging, but other performance measures are tracked monthly. The standards and policies are not hard and fast rules; rather, they are used to flag the need for improvement. Standards apply only to local routes; because the agency's service area does not extend to rural communities, a single standard is appropriate for all local routes. As an example, MAX proposed elimination of a poorly performing route in February 2018, and this was a significant change for the agency. Public outcry led to the decision to retain a portion of the route with minimal service as a lifeline route in the community.

Are Some Standards More Important?

On-time performance and farebox recovery ratio are important standards. The State of California requires all agencies to meet a minimum farebox recovery as a condition for funding. Attention to farebox recovery ratio implies attention to ridership trends, as the two metrics are closely linked.

Board Attitude

MAX is a department of the City of Modesto; thus, the city council is its governing body. The council supports and formally approved the policies and standards. Transit is a small piece of city government and does not receive extensive oversight or attention in the normal course of events.

Agency Attitude

All transit division staff support the standards and understand the logic behind them. The agency met with its operators as it developed the standards. One of the standards addresses recovery time, and operators now see recovery time built into all schedules.

Challenges

Educating the public on what good transit looks like and why is important. Even in areas with poor performance, provision of service affects people's lives. Discussions of what to do when service does not meet standards are difficult. The agency struggles to overcome the perception of "leaving grandma at home" when a route is removed or modified.

Educating elected officials is always a challenge. Obtaining agreement on standards is easier than implementing changes that affect constituents. MAX provides elected officials with information and encourages educated decisions.

Lessons Learned

Make the effort to inform the public and encourage its participation in the process of introducing performance standards. MAX could have developed a more meaningful public participation process by putting money into advertising upcoming meetings and talking to many community groups before deciding to implement its standards. This process would also have helped riders and stakeholders understand transit and the impacts of a route that does not perform well.

Keys to Success

• Education of individuals on your governing body is key. It is also useful to educate elected officials and city staff.
• Use graphics to explain the concepts behind standards. The underlying data are obviously important, but graphics help those less knowledgeable about transit to understand what standards are and how they can be used.

King County Metro, Seattle, Washington

King County Metro (KCM) provides a wide range of transportation options and choices for King County. In addition to the region's largest bus network, KCM's choices include vanpools, paratransit services, and many new forms of transportation solutions. KCM also operates Sound Transit's regional express bus service and Link light rail in King County, along with Seattle Streetcar. According to the 2016 NTD data, KCM's service area population is 2.117 million. KCM has 981 buses, 220 demand response vehicles, 1,469 vans, 138 trolleybuses, eight streetcars, and two ferryboats in maximum service. Average weekday ridership in 2016 was 3.2 million, and annual ridership was 127.4 million.

How Performance Evaluation Has Developed and/or Changed

Two documents in which performance evaluation is discussed are the KCM strategic plan and the service guidelines. These documents were originally part of a single document that resulted from the work of the 2010 Regional Transit Task Force. The strategic plan is broader in focus and includes approximately 75 performance metrics associated with a variety of goals. Service guidelines were established as a new way to allocate resources throughout the county: when the county gets a new dollar, where does it go? and when the county loses a dollar, where does it come from? (Cuts have been necessary only once since the service guidelines were adopted.) Criteria take into account existing service, jobs, density, performance, and several other factors to guide resource allocation.

Both documents were adopted 8 years ago, and there have been only minor changes to the metrics. KCM separated the service guidelines from the strategic plan 2 years ago as part of a big revamping based on 5 years of experience. Changes to the service guidelines arose after the experience with service reductions: three phased packages of service reductions were planned, but KCM implemented only the first phase because the financial situation improved. KCM had analyzed the three phases as one package, but the first package included several service reductions in a single area of the county. Additional language was added to the guidelines to ensure fair and more evenly distributed service reductions across the county in the event of a future budget shortfall.

Technical changes added guidelines for park-and-ride lots and other provisions to allow a route to continue if it is the last service in a specific area, even if it is low performing. KCM changed the crowding threshold from one based solely on seats ("load factor") to one based on seats and the space available for standing passengers. KCM also made changes to the way it determines how much service each transit corridor in the system deserves (target service levels). All changes were formulated and approved by a task force of stakeholders and elected officials, vetted through the executive branch, and approved by the governing board (the King County Council) through a legislative process.

KCM has also implemented a monthly business review process consisting of 16 performance metrics in four categories: service quality, service efficiency, service growth, and employees. These 16 metrics were winnowed from an agencywide pool of about 200 metrics; collectively, they represent the agency's strategic goals in broad terms.

How Standards Are Used

KCM produces an annual performance report using the service guidelines. The strategic plan is revised as needed every 2 years. The strategic plan is a public-facing document addressed to the county council and to stakeholders. Because service metrics for the strategic plan are broad and changes within the system are gradual, a progress report is produced every 2 years; changes in these metrics are usually not dramatic. Trends are monitored closely.

Several times a year, KCM staff run analyses based on the policies in the service guidelines that directly inform investment decisions. These decisions follow established priorities:

1. Reducing crowding,
2. Improving reliability,

3. Increasing frequency, and
4. Investing in highly productive routes.

The evaluation proceeds in order: crowding gets first attention, followed by reliability, and on down the list. There is so much need in Priority 3 that KCM has never had the resources to directly address Priority 4, although many routes in this category also are included in higher priority categories.

The first two priority categories are route specific. Priority 3 (increasing frequency) is based on transit corridors (where multiple routes may run) and takes into account jobs, housing, student enrollment, social equity factors, and geographic value (routes that serve as the primary connection between major activity centers scattered throughout the county are valued more highly).

Are Some Standards More Important?

As just noted, KCM's guidelines prioritize crowding and reliability. Routes are analyzed several times a year on these metrics, typically at the end of every service change. The Priority 3 (frequency) analysis has more inputs, takes longer, and tends to change less, so the Priority 3 list is updated annually.

Board Attitude

The county council serves as the governing board for KCM and views performance evaluation as vital. The council's Regional Transit Committee (which also includes mayors and city council members) is directly involved with KCM at the policy level. Committee members are very interested in transit data and performance metrics and encourage data-driven decisions. The committee expects and even demands justification based on service guidelines for proposed service changes.

Agency Attitude

Attitudes toward performance evaluation vary by department. Broadly, all departments agree with the priorities and their rankings from 1 through 4. These guide the service analysis process and are a good reflection of importance, especially given the data that are available.
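The priority-ordered evaluation described above (crowding first, then reliability, then frequency, then highly productive routes) can be illustrated with a minimal allocation sketch: candidate needs are considered in priority order and funded while they still fit within a budget of annual service hours. The routes, hour totals, and budget below are hypothetical; KCM's actual analysis involves corridor-level inputs not modeled here.

```python
# Sketch of priority-ordered investment. Priorities: 1 = crowding,
# 2 = reliability, 3 = frequency, 4 = highly productive routes.
# All routes, hours, and the budget are hypothetical.

needs = [
    {"route": "A", "priority": 3, "hours": 4000},   # frequency
    {"route": "B", "priority": 1, "hours": 2500},   # crowding
    {"route": "C", "priority": 2, "hours": 3000},   # reliability
    {"route": "D", "priority": 1, "hours": 1500},   # crowding
    {"route": "E", "priority": 4, "hours": 5000},   # productive route
]

def allocate(needs, budget_hours):
    """Fund each need in priority order if it still fits in the budget."""
    funded = []
    for need in sorted(needs, key=lambda n: n["priority"]):
        if need["hours"] <= budget_hours:
            funded.append(need["route"])
            budget_hours -= need["hours"]
    return funded

# Crowding needs (B, D) are addressed before reliability (C); the
# frequency and productivity needs exceed the remaining budget.
print(allocate(needs, budget_hours=8000))  # ['B', 'D', 'C']
```

This also illustrates the dynamic noted later in the text: with a limited budget, Priority 1 and 2 needs absorb the available hours before lower priorities are reached.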
With regard to reliability, an area that KCM is revisiting currently, a need has arisen to reevaluate the technical details of standards. There are two on-time performance standards for each route: across the day (no more than 20% late) and during the afternoon (PM) peak (no more than 35% late). The PM peak standard is receiving the most attention, as some staff members think that being late more than one-third of the time is a poor standard. Additionally, the only policy prescription for reliability in the guidelines is to add buses and lengthen schedules (to reflect actual travel times), whereas the emerging belief is that infrastructure improvements are likely a better solution in some situations. The performance evaluation process highlights reliability issues, while the available budget dictates the extent of actions that can be taken to address the issues.

Challenges

There is not enough budget to meet all standards and to invest in all identified needs. The priorities guide allocation of available resources. Balancing the results of service evaluation against other, sometimes competing, goals is challenging. It is unlikely that a single, manageable set of evaluation criteria will comprehensively capture every goal an agency has in terms of providing mobility. Therefore, when resource allocation is tied directly to performance (as it is at KCM), only those services that pop in the analysis become available for investment, even though investment in other services may help advance certain other agency or county goals. Striking a balance between having a manageable set of metrics that identify top investment priorities and being able to allocate resources to projects, places, or services that do not pop in the analysis—and then being able to communicate the rationale behind those investments—can be difficult.

Induced demand is also an interesting phenomenon KCM has observed. If KCM invests to reduce crowding by adding frequency, more people ride, and routes that are chronically crowded remain crowded despite the investments. Induced demand, which is based in part on the latent demand in the market, means that KCM has difficulty solving the overcrowding problem. Over time, it becomes difficult to address Priority 3 (increased frequency) issues because of ongoing Priority 1 (crowding) and Priority 2 (reliability) issues. Additionally, this policy results in more service hours being allocated to denser areas while suburban/rural areas receive a proportionately smaller share of new service, unless Metro has the resources to invest in Priority 3 (frequency). The council also retains the right to adjust how and where the county invests in transit service, in order to achieve greater balance in service investments countywide.

The standards do not address how the network should expand geographically, and thus it is difficult for KCM to respond proactively to new developments in currently unserved areas. The service guidelines emphasize increased frequency where KCM has service instead of looking at new origin–destination patterns. The strategic plan helps in this regard by providing a different point of view and approach to new developments.
Additionally, KCM's long-range plan METRO CONNECTS fills this gap. KCM is currently working to integrate METRO CONNECTS and the service guidelines.

Lessons Learned

• Keep official or public-facing standards and metrics simple, broad, and few in number. Submetrics can always be developed for deeper insight and can be used to justify investments and reductions under the umbrella of a smaller, broader set of metrics.
• Ensure you have the capability to measure the metrics being proposed without spending excessive staff time in collecting, cleaning, and analyzing data. The less complex, the better.
• Choose standards that can lead directly to taking action. Link standard problem-solving actions to underperforming services ("If service x is underperforming on metric y, do z to correct it."). This strategy provides a simple logic chain and justification for investments that boards, stakeholders, and the public can understand. Ensure policies are aligned with the metrics so as to facilitate or enable the allocation of resources to fix problems revealed by the metrics.
• Encourage broad participation in deciding on the high-level standards, including staff, stakeholders, and members of the public. The Regional Transit Task Force (RTTF) made the original recommendation in 2010 that KCM adopt transparent, performance-based guidelines that emphasize productivity, social equity, and geographic value. The success of the RTTF effort and subsequent revisions was due in part to collaboration between King County, partner cities, regional decision-makers, and diverse stakeholders.

Keys to Success

• Management support. An agency can measure lots of things all day long. The truism that "what gets measured gets managed gets fixed" is not true. An agency can measure whatever it wants, but it also needs the willingness to make changes. The implementation of a performance evaluation process runs the risk of bogging down in requests to see data in different ways in order to sidestep the discomfort associated with confronting an identified problem head on. Management must be willing to accept the available data and what they say and take corrective action based on the analysis that is possible.
• Public involvement in developing the performance evaluation process is necessary. An environment with only operating data and no customer survey/outreach/satisfaction measures will not create optimal results. Discussion with the public is a part of KCM's service guidelines. KCM staff are required to deliver documentation of all outreach performed as part of a service change recommendation. Some potentially softer metrics related to the public would be helpful.
• Good data people. Staff who understand how to collect, clean, and analyze data are integral to a data-driven process.

PalmTran, West Palm Beach, Florida

PalmTran provides public transportation in Palm Beach County, including bus and ADA paratransit services. According to the 2016 NTD data, PalmTran's service area population is 1.269 million. PalmTran has 130 buses and 294 demand response vehicles in maximum service. Average daily ridership was 36,024, and annual ridership was 10.6 million.

How Performance Evaluation Has Developed and/or Changed

PalmTran developed its performance standards through a peer comparison with nine transit agencies similar in size and methods of operation. After benchmarking the results, staff members spent time with transit agencies in Jacksonville and Miami, Florida, and Columbus, Ohio. PalmTran then established three levels of goals: minimum/maximum acceptable goals, target goals, and aspirational or stretch goals. The process began in FY 2017. Currently, the agency is looking at its entire organization on the basis of the first year of data.
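The three goal levels described above can be thought of as bands against which a monthly result is classified. The sketch below assumes metrics where higher values are better; the metric names and threshold values are hypothetical illustrations, not PalmTran's adopted goals.

```python
# Sketch of three-level goal bands: minimum acceptable, target, and
# stretch. Metric names and thresholds are hypothetical.

GOALS = {
    # Higher is better for both of these example metrics
    "on_time_pct": {"minimum": 70.0, "target": 78.0, "stretch": 85.0},
    "boardings_per_hour": {"minimum": 12.0, "target": 15.0, "stretch": 18.0},
}

def classify(metric, value):
    """Classify a monthly result against the metric's goal bands."""
    g = GOALS[metric]
    if value >= g["stretch"]:
        return "stretch"
    if value >= g["target"]:
        return "meets target"
    if value >= g["minimum"]:
        return "acceptable"
    return "below minimum"

print(classify("on_time_pct", 80.5))         # meets target
print(classify("boardings_per_hour", 11.0))  # below minimum
```

A process improvement team could use a classification like this to decide where to dig, and (as the text describes) to propose adjusting the bands themselves after a year of data.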
How Standards Are Used

PalmTran created a new Performance Management Office (PMO) for the purpose of using performance metrics to improve the agency as opposed to simply being performance monitors and report providers. In June 2017, the PMO created nine process improvement teams around individual performance areas (e.g., safety, on-time performance, ridership, customer concerns, maintenance, financial performance), with specific metrics assigned to each team. Each team was charged with meeting weekly or biweekly and making a presentation to the Executive Leadership Team every month. On the basis of 12 months of data, teams were asked to present adjustments up and down to the minimum/maximum, target, and stretch goals. The process was named the PTSTAT (PalmTran statistics) program and modeled after approaches taken in Chicago, Illinois, and Cleveland and Columbus, Ohio.

Are Some Standards More Important?

All standards are very important, but the safety team made the most progress. Rear accidents were a major issue, so the safety team developed a recommendation to add flashing lights to the backs of all buses. This recommendation was first tested successfully on a small sample of buses, which had few to no rear accidents, and then applied to all 157 buses in the fleet. PalmTran won awards from the Florida Public Transit Association (FPTA) and Palm Beach County for this innovation. Another team, which focused on paratransit safety, made a similar recommendation for paratransit vehicles with a different type of flashing lights. This program is currently being implemented.

On-time performance is a longstanding concern for the agency. "Dig and find" was the approach taken by each team, and the on-time performance team found technology-related issues that affected the reporting of on-time performance. The trigger boxes that recorded bus arrivals were not large enough and were missing bus arrivals. In this case, the underlying cause was related to technology, not the actual operation of the bus.

Ridership trends were also a major concern. The ridership team developed the PalmTran ride challenge, which rewarded ridership by PalmTran employees, many of whom did not ride and thus could not answer the transit equivalent of the question, "If you own the restaurant, would you eat there?" The ridership team got on community center agendas (seniors are a major segment of PalmTran ridership) and made presentations on their activities and findings. On the basis of all that the ridership team had learned, it proposed a pilot program to extend Route 4 to the new ballpark and to erect a new bus shelter at this location with a baseball facade. Ridership on Route 4 increased by 60%.

Teams are still meeting weekly or biweekly and developing new ideas for their specific areas. PalmTran has 630 employees, and between 60 and 70 employees are on the nine teams in total. Ten percent of the PalmTran organization is actively working on solving problems.

Board Attitude

Before the PalmTran Service Board was approached, the first step was to make sure that the executive leadership was totally on board. This involved education that the process is not just a scorecard of things wrong, that it is instead a company-wide forum where teams present results. Executive leadership then agreed on nine cross-functional teams whose focus would be digging for core reasons behind performance and problem solving.
The approach was based on the principles of Lean Six Sigma, a set of techniques and tools for process improvement that uses empirical methods to improve quality by identifying and removing the causes of defects and minimizing variability. The Service Board's attitude was phenomenal, as its members recognized that this was a powerful means of addressing customer issues. The performance evaluation process changed the perception of PalmTran from a reactive agency to a forward-thinking one. The Florida DOT and FPTA have agreed that this is a model for all transit agencies in Florida: standardize metrics; emphasize that agencies need to improve as well as measure; and take action on the basis of the analytical results.

Agency Attitude

The leadership of the executive director and the executive team is critical to the success of this process. The executive director and nearly all of the management team were new to PalmTran and brought transit experience and a fresh motivation to address the transit issues in Palm Beach County. As noted earlier, the executive director established the PMO, which reports directly to him, and backed up his commitment by reserving a portion of the monthly executive team meetings for the process improvement teams to present their findings and proposed solutions. The agency embraced a holistic performance evaluation process oriented toward solutions, not just performance standards and guidelines.

Challenges

The primary challenge is ensuring the integrity and validity of the data used in the process. To this end, PalmTran standardized the following elements: who pulls the data (one person), the analytical systems, and the process of pulling the data (same time every month). Before the new performance evaluation system was implemented, ridership was being reported separately by the planning and operations departments (the totals did not agree), data would be pulled at different times each month, and definitions of certain metrics were unclear. The PMO clarified responsibilities, definitions, and time frames related to data reporting (data would always be provided on the 10th of each month), sometimes using peer results (e.g., on-time performance was defined as 0 to 5 minutes late to match peer agencies).

Then the PMO validated all data submitted. Staff were trained on how to pull numbers, and results were compared with what was reported. There were a few iterations of this process in the early months, with the result that PMO staff became an authority on the numbers. The goal was to decentralize by shifting responsibility for data collection and accuracy to the appropriate department. PMO staff trained specific departments on the data collection system and process and continue to check the data for consistency and accuracy.

Lessons Learned

• Buy-in at the very top of the organization is an absolute necessity. The executive director will receive complaints about how the process is a waste of a department's time and needs to have the vision of continuous process improvement driven by data. The entire leadership team has to buy into the process. At PalmTran, every leadership team meeting began with the PMO report. This commitment has resulted in better, more informed conversations regarding trends and what the agency is doing to address them—conversations that are based on data and facts. What kind of conversation can you have without the metrics?
• Actualize and use metrics as a decision-making tool to seed the discussion and get employees involved.
• Educate your employees. The on-time performance team created a campaign around on-time performance with operators and discovered that the training of bus operators included no discussion or presentation about on-time performance. The team designed a PowerPoint presentation for the operator training course and also designed a business card–sized explanation of on-time performance that included a graphic. Employees cannot be held accountable if they do not understand the concepts and know the expectations.

Keys to Success

• A visionary leadership team willing to prioritize and support performance evaluation throughout the agency.
• Creation of a dedicated PMO that reports directly to the executive director. If the PMO had reported to a department director, the process would not have worked.
• Team presentations to the leadership team. These presentations reflect a commitment to ongoing identification of potential solutions. Without continuous process improvement, all an agency has is a scorecard.
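PalmTran's peer-matched on-time definition noted earlier (0 to 5 minutes late) lends itself to a simple calculation. The sketch below assumes that early departures count as not on time, which the text does not specify, and the sample deviations are hypothetical.

```python
# Sketch of an on-time calculation using a "0 to 5 minutes late"
# definition. Treating early departures as not on time is an
# assumption; sample deviations are hypothetical.

def on_time_pct(deviations_min):
    """Share of departures 0-5 minutes late, as a percentage."""
    on_time = sum(1 for d in deviations_min if 0 <= d <= 5)
    return 100.0 * on_time / len(deviations_min)

# Deviations from schedule in minutes (negative = early)
deviations = [0, 2, 7, -1, 4, 5, 12, 3, 1, 6]
print(f"{on_time_pct(deviations):.0f}% on time")  # 60% on time
```

Standardizing a definition like this across departments (and with peers) is precisely what the PMO's data validation work was meant to ensure.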


TRB’s Transit Cooperative Research Program (TCRP) Synthesis 139: Transit Service Evaluation Standards provides an overview of the purpose, use, and application of performance measures, service evaluation standards, and data collection methods at North American transit agencies.

The report addresses the service evaluation process, from the selection of appropriate metrics through development of service evaluation standards and data collection and analysis to the identification of actions to improve service and implementation.

The report also documents effective practices in the development and use of service evaluation standards. The report includes an analysis of the state of the practice of the service evaluation process in agencies of different sizes, geographic locations, and modes.

Appendix D contains performance evaluation standards and guidelines provided by 23 agencies.
