Suggested Citation:"2015.03.20 L38B Report FINAL." National Academies of Sciences, Engineering, and Medicine. 2014. Pilot Testing of SHRP 2 Reliability Data and Analytical Products: Minnesota. Washington, DC: The National Academies Press. doi: 10.17226/22255.


SHRP 2 Reliability Project L38B Pilot Testing of SHRP 2 Reliability Data and Analytical Products: Minnesota

SHRP 2 Reliability Project L38B
Pilot Testing of SHRP 2 Reliability Data and Analytical Products: Minnesota

Michael Sobolewski
Minnesota Department of Transportation
Roseville, Minnesota

Todd Polum, Paul Morris, Ryan Loos, and Krista Anderson
SRF Consulting Group, Inc.
Minneapolis, Minnesota

TRANSPORTATION RESEARCH BOARD
Washington, D.C.
2015
www.TRB.org

© 2015 National Academy of Sciences. All rights reserved. ACKNOWLEDGMENTS This work was sponsored by the Federal Highway Administration in cooperation with the American Association of State Highway and Transportation Officials. It was conducted in the second Strategic Highway Research Program (SHRP 2), which is administered by the Transportation Research Board of the National Academies. This project was managed by Stephen J. Andrle, Deputy Director of SHRP 2. The research reported on herein was performed by the Minnesota Department of Transportation (MnDOT), supported by SRF Consulting Group, Inc. Mike Sobolewski of MnDOT was the principal investigator. The other authors of this report are Paul Morris, Ryan Loos, and Krista Anderson of SRF Consulting Group. The authors acknowledge the contributions to this research from Paul Czech, Tony Fischer, Jim Henricksen, and Steve Misgen of MnDOT; Brian Kary and Jesse Larson of the Regional Transportation Management Center; Jim Aswegan, Chengdong Cai, Renae Kuehl, Todd Polum, and Ning Zhang of SRF Consulting Group; Henry Liu, Ken Shain, and Heng Hu of SMART Signals Technologies; and Eil Kwon of the University of Minnesota Duluth. COPYRIGHT INFORMATION Authors herein are responsible for the authenticity of their materials and for obtaining written permissions from publishers or persons who own the copyright to any previously published or copyrighted material used herein. The second Strategic Highway Research Program grants permission to reproduce material in this publication for classroom and not-for-profit purposes. Permission is given with the understanding that none of the material will be used to imply TRB, AASHTO, or FHWA endorsement of a particular product, method, or practice. It is expected that those reproducing material in this document for educational and not-for-profit purposes will give appropriate acknowledgment of the source of any reprinted or reproduced material. For other uses of the material, request permission from SHRP 2. NOTICE The project that is the subject of this document was a part of the second Strategic Highway Research Program, conducted by the Transportation Research Board with the approval of the Governing Board of the National Research Council. The Transportation Research Board of the National Academies, the National Research Council, and the sponsors of the second Strategic Highway Research Program do not endorse products or manufacturers. Trade or manufacturers’ names appear herein solely because they are considered essential to the object of the report.

DISCLAIMER

The opinions and conclusions expressed or implied in this document are those of the researchers who performed the research. They are not necessarily those of the second Strategic Highway Research Program, the Transportation Research Board, the National Research Council, or the program sponsors. The information contained in this document was taken directly from the submission of the authors. This material has not been edited by the Transportation Research Board.

SPECIAL NOTE: This document IS NOT an official publication of the second Strategic Highway Research Program, the Transportation Research Board, the National Research Council, or the National Academies.

The National Academy of Sciences is a private, nonprofit, self-perpetuating society of distinguished scholars engaged in scientific and engineering research, dedicated to the furtherance of science and technology and to their use for the general welfare. On the authority of the charter granted to it by Congress in 1863, the Academy has a mandate that requires it to advise the federal government on scientific and technical matters. Dr. Ralph J. Cicerone is president of the National Academy of Sciences. The National Academy of Engineering was established in 1964, under the charter of the National Academy of Sciences, as a parallel organization of outstanding engineers. It is autonomous in its administration and in the selection of its members, sharing with the National Academy of Sciences the responsibility for advising the federal government. The National Academy of Engineering also sponsors engineering programs aimed at meeting national needs, encourages education and research, and recognizes the superior achievements of engineers. Dr. C. D. (Dan) Mote, Jr., is president of the National Academy of Engineering. The Institute of Medicine was established in 1970 by the National Academy of Sciences to secure the services of eminent members of appropriate professions in the examination of policy matters pertaining to the health of the public. The Institute acts under the responsibility given to the National Academy of Sciences by its congressional charter to be an adviser to the federal government and, upon its own initiative, to identify issues of medical care, research, and education. Dr. Victor J. Dzau is president of the Institute of Medicine. The National Research Council was organized by the National Academy of Sciences in 1916 to associate the broad community of science and technology with the Academy’s purposes of furthering knowledge and advising the federal government. Functioning in accordance with general policies determined by the Academy, the Council has become the principal operating agency of both the National Academy of Sciences and the National Academy of Engineering in providing services to the government, the public, and the scientific and engineering communities. The Council is administered jointly by both Academies and the Institute of Medicine. Dr. Ralph J. Cicerone and Dr. C.D. (Dan) Mote, Jr., are chair and vice chair, respectively, of the National Research Council. The Transportation Research Board is one of six major divisions of the National Research Council. The mission of the Transportation Research Board is to provide leadership in transportation innovation and progress through research and information exchange, conducted within a setting that is objective, interdisciplinary, and multimodal. The Board’s varied activities annually engage about 7,000 engineers, scientists, and other transportation researchers and practitioners from the public and private sectors and academia, all of whom contribute their expertise in the public interest. The program is supported by state transportation departments, federal agencies including the component administrations of the U.S. Department of Transportation, and other organizations and individuals interested in the development of transportation. www.TRB.org www.national-academies.org

Contents

1 Executive Summary
1 Key findings and recommendations
5 CHAPTER 1 Introduction
5 Overview of Pilot Testing Process
7 Refined Testing Analysis
8 CHAPTER 2 TTRMS Development
8 Travel Time and Traffic Data
8 Transportation Information and Condition Analysis System Tool
11 Travel Time Data Extraction Process
13 Weather Data
20 Event Data
24 Crash and Incident Information
31 Road Work Data
32 TTRMS Database Development
32 Input Data Processing
34 TTRMS Database Format
34 TTRMS Analysis Tool
40 Aggregate Reliability Measures
45 CHAPTER 3 Reliability Report
45 Description of Facilities
45 Results Summary
55 Facility Observations
58 CHAPTER 4 Evaluation of the Project L07 Tool
58 Introduction
58 Initial Investigation
59 Findings Summary
59 Evaluation Process
65 Validation Comparison
73 Additional Sensitivity Testing and Exploration
77 Detailed Summary of Findings
79 Recommended Refinements
81 Opportunities for Future Testing of the L07 Tool
82 CHAPTER 5 Minnesota Reliability Workshop
82 Overview
83 Workshop Introduction
83 SHRP 2 Background and Concept
93 Technical Analysis of the SHRP 2 Tools
118 Utility of the SHRP 2 Tools
141 Review of Background and Concepts
142 Example Applications for Travel Time Reliability
176 Conclusions and Next Steps
182 Key Findings
186 CHAPTER 6 Refined Technical Analysis
186 Alternative Time Intervals
192 Disaggregation of Delay Causes
195 Demand Regimes
200 SMART Signal Traffic Data
201 Updated L07 Benefit-Cost Tool
204 CHAPTER 7 Findings and Recommendations
204 Project L02
205 Project L07
205 Project L05
207 References
A-1 APPENDIX A Study Facility Reliability Reports
B-1 APPENDIX B Test Results of L07 Tool Evaluation

EXECUTIVE SUMMARY The Minnesota pilot site has undertaken an effort to test data and analytical tools developed through the Strategic Highway Research Program (SHRP) 2 Reliability focus area. The purpose of these tools is to facilitate the improvement of travel time reliability on highways by reducing the frequency and effects of events that cause travel times to fluctuate in an unpredictable manner. The SHRP 2 reliability data and analytical tools evaluated by the Minnesota team are intended to address travel time variability in one of three ways: 1. Establish monitoring systems to identify sources of unreliable travel times (Project L02) 2. Identify potential solutions to cost-effectively improve reliability (Project L07) 3. Incorporate consideration of travel time reliability into transportation agencies’ planning and programming framework (Project L05) The Project L02 data and analytical tools were pilot tested by collecting vast amounts of traffic and nonrecurring conditions data and compiling it in a travel time reliability monitoring system (TTRMS) for purposes of conducting reliability evaluations. The L07 benefit-cost tool was evaluated in a number of facets, including usability, performance, and sensitivity testing. Guidance from Project L05 to incorporate reliability into the planning and programming process was introduced to a wide audience of stakeholders through an extensive outreach effort of meetings, workshops, and web conferences. Key Findings and Recommendations A number of key findings were discovered through the pilot testing process. The following points provide a summary of critical takeaways for individuals and agencies considering adoption or exploration of the SHRP 2 reliability tools. The Minnesota team has also identified a number of recommended product refinements for consideration by SHRP 2 and developers. Project L02 Travel time and demand (expressed as facility vehicle-miles traveled [VMT] in the Minnesota case) were identified as the bare minimum data sources required to conduct a reliability evaluation. Other inputs required for a fully functioning travel time reliability monitoring database (TTRMS) include • Weather • Crashes • Incidents • Special Events • Road Work 1

A wide variety of graphical products and performance measures can be produced using results from the TTRMS database. The Minnesota pilot team found the following items provide the greatest value to potential audiences: • Surface plots • Pie charts • Cumulative density function (CDF) curves • Reliability indices • Comparison pie charts and bar charts There is a spectrum for the level of effort that can be expended to evaluate travel time reliability performance. Performing a complete reliability evaluation, including collection of all nonrecurring data sources, can require significant time and resources. Scaled-down evaluations are also an option, with a focus only on the traffic data or perhaps a single nonrecurring factor. Finding an approach that balances the level of effort with the values of information produced is important and depends on the questions that need to be answered as a result of the analysis. Project L07 The L07 project evaluation tool is applicable to freeway facilities and is capable of analyzing one segment with uniform geometry and volume characteristics. When considering use of the L07 tool, it is critical to identify the primary bottleneck location along a congested freeway facility. The tool was found to misrepresent speed profiles in locations that are influenced by upstream or downstream bottlenecks. The Minnesota team recommends first using the L02 TTRMS to identify high-potential locations to be evaluated in the L07 tool, such as high travel time variability due to crashes and incidents. This will maximize the likelihood of identifying cost-effective treatments through the L07 evaluation. While defaults are provided for many inputs required in the tool, some are more critical than others. Specifically, model results can be sensitive to the occurrence and duration of crashes and incidents. Therefore, customized inputs reflecting local conditions data are recommended to accurately capture the nonrecurring congestion effects. Conversely, default regional weather inputs are regarded as entirely adequate, as evaluation results are comparatively less sensitive to changes in rain and snow frequencies. Additional treatment data for the treatments in the tool are desired. Some stakeholders have expressed concern that the actual effects of some treatments in the tool are not fully understood. Additional treatment options are also desired to test more options for operational and geometric improvements. The 2013 version of the L07 tool allowed the user to modify financial variables in the graphical user interface (GUI); however, this ability was restricted in the 2014 update. The 2

Minnesota team recommends restoration of this feature to provide more flexibility for performing cost-benefit analyses. The tool does not provide any functionality for traffic growth over the project’s lifetime. The addition of a feature to capture future traffic growth and its impacts on travel time and financial outcomes is recommended. Project L05 A variety of important feedback was identified through the stakeholder outreach process. Some of the key points in the feedback included the following: • Stakeholders really liked inclusion of travel time reliability for project-level evaluations. They found that the results resonated and reinforced their experience of conditions along the facility. • Existing data sources were found to be adequate for evaluating travel time reliability on the Twin Cities freeway system. Initial concerns about inadequate data were eased through successful use of loop detectors, crash, weather, and other available data sources. • There is concern over the level of effort to conduct reliability evaluation. In particular, lack of consistency in crash, incident, and road work data sources make linking these congestion causes to unreliable travel times time-consuming. Refined data collection and storage techniques and streamlined analysis tools will be needed to bring reliability evaluation into the mainstream. • There is a desire to include the contribution of nonrecurring congestion in benefit-cost analysis, but there are reservations about whether the state of the practice is ready for integration. More demonstration and proof will be needed to convince decision makers that this is the next step. • A disconnect remains between urban and rural applications for reliability evaluation. The urban environment benefits from widely deployed instrumentation and active traffic management that facilitates reliability evaluation, but is not available in rural areas. Furthermore, nonrecurring congestion may be the only cause of delay in rural areas, underscoring the importance of capturing these impacts on those roadways. • Different types of information and presentation techniques are needed to communicate reliability performance to different audiences. For example, regional planners are interested in basic reliability indices at the facility or system level, whereas traffic engineers or operations managers may benefit from detailed surface plots and CDF curves along shorter highway segments. • More education is needed to define travel time reliability. The survey conducted at the workshop showed that a number of participants had in fact seen reliability used in previous project evaluations but did not realize that was what they had seen. 3

Following the pilot testing technical work and outreach, MnDOT is committed to advancing reliability evaluation in its business practices. This was most clearly demonstrated by the success of the project example used for the I-94 traffic study conducted alongside the pilot testing work. Project stakeholders found that the reliability evaluation enhanced the project study process and are now seeking similar information on future projects. In addition to project-level evaluation, MnDOT will also seek to implement reliability evaluation in programming context, starting with the Congestion Management Safety Plan (CMSP). The CMSP is a subset of highway mobility funds allocated in regional investment plans to deploy lower-cost/high-benefit solutions to address congestion and safety problems. The next CMSP prioritization process is expected to use reliability as a key performance measure. Further, department leadership sees strong potential for travel time reliability to drive additional investment in highway operations. Understanding the causes and magnitude of nonrecurring congestion such as weather, crashes, and incidents will make more effective use of snow plowing, incident response, and traffic management resources. Finally, success in these areas will be carried forward as reliability becomes more widely accepted and appreciated and ultimately adopted in decision-making structures throughout the organization. 4

CHAPTER 1 INTRODUCTION The Minnesota pilot site has undertaken an effort to test data and analytical tools developed through the second Strategic Highway Research Program (SHRP) 2 reliability focus area. The purpose of these tools is to facilitate the improvement of travel time reliability on highways by reducing the frequency and effects of events that cause travel times to fluctuate in an unpredictable manner. Previous SHRP 2 research has identified seven potential sources that result in unreliable travel times: • Traffic incidents • Work zones • Demand fluctuations • Special events • Traffic control devices • Weather • Inadequate base capacity of the roadway The SHRP 2 reliability data and analytical tools are intended to address travel time variability in one of three ways: 1. Establish monitoring systems to identify sources of unreliable travel times (Project L02). 2. Identify potential solutions to cost-effectively improve reliability (Project L07). 3. Incorporate consideration of travel time reliability into transportation agencies’ planning and programming framework (Project L05). This report provides a complete summation of the efforts undertaken by the Minnesota Pilot Test team to explore the SHRP 2 reliability tools, apply them to local conditions, evaluate their functionality and effectiveness, and report findings to SHRP 2 and product developers. For the analytical products, it contains technical details and data summaries of the process to collect, analyze, and report reliability performance. For the Project L05 evaluation, a full accounting of the Minnesota Reliability Workshop is presented, the capstone of an extensive outreach effort that spanned the duration of the pilot testing project. Overview of Pilot Testing Process Pilot testing of the reliability data and analytical products was conducted throughout the pilot testing period. The early stages of the study focused on data collection and processing; then tools were used to evaluate test facilities and identify performance and opportunities for improvements. Finally, outreach efforts were undertaken to introduce the tools and initiate dialog for incorporating them into planning and programming processes. 5

Project L02 The L02 guidance was utilized to establish a framework for collection, storage, processing, and analysis of travel time data to evaluate reliability performance. First, the mechanics of this process were established in terms of data sources and their collection and processing techniques. A detailed account of this process is the focus of Chapter 2 in this report. This chapter also defines some of the early graphical media conceived to communicate travel time reliability findings. Chapter 3 builds on the data analysis process described in Chapter 2 through preparation of a reliability report for the Twin Cities pilot study facilities. This reliability report is intended to serve as an example of the type of information an agency could produce on an annual or other regular basis to publicly document the reliability performance of selected facilities or an entire highway system. The methods used to prepare the reliability report and the interpretation of its elements are described in Chapter 3, and the complete set of graphical results are included in Appendix A. Project L07 Pilot testing of the L07 benefit-cost tool featured five steps to evaluate the tool’s performance, usability, and sensitivity. These included • Initial Investigation: This process was an exploration of the tool and its features. Analysts identify each of the inputs required for an evaluation and note how these should be prepared and whether default values are available. • Validation Comparison: In this section, the methods and data sources used to compare the performance of the L07 tool to ground-truth data are documented. It includes a series of graphics that show the relative results of the tool and detector data. • Additional Sensitivity Testing and Exploration: Here analysts consider a range of options available for conducting an evaluation with the tool, such as the use of default versus detailed values and the relative benefit-cost performance of the available treatments. • Detailed Summary of Findings: The findings from the initial investigation, validation comparison, and sensitivity testing are stated in this section. Findings are organized in a similar manner as the tool itself, including each input tab and output results. • Recommended Refinements: Finally, recommendations for improvements to the L07 tool and final report are provided. This information should be communicated to SHRP 2 and tool developers to enhance future version of these products. The L07 tool evaluation is recorded in Chapter 4, and technical outputs of the validation and sensitivity tests are included in Appendix B. 6

Project L05 The Minnesota pilot team carried forward guidance developed in the L05 project to initiate a dialog regarding incorporation of reliability evaluation in the planning and programming process. This was accomplished through an extensive outreach effort over the course of the pilot testing work. A number of groups were assembled and engaged through meetings and workshops: • Research Team: A group of eight to 10 MnDOT and Metropolitan Council technical experts that met monthly to guide the development and testing of data and analytical tools and provide feedback on the reasonability and presentation of results. • Policy Advisory Committee: A group of 15 to 20 MnDOT, Metropolitan Council, and local agency representatives that met bimonthly to review findings of the pilot testing and provide feedback on the applicability of planning and programming functions. • State Department of Transportation (DOT) Web Conference Updates: Updates on the technical progress of the pilot testing performed in Minnesota were occasionally shared with regional state DOTs, including those of Wisconsin, Iowa, and Kansas, interested in learning more about reliability evaluation. • Minnesota Reliability Workshop: The keystone event of the outreach effort, this full- day event brought together representatives from DOTs, metropolitan planning organizations (MPOs), the Federal Highway Administration (FHWA), local agencies, and universities to outline the findings of the pilot testing and provide examples of reliability evaluations. Participants were surveyed to gauge their understanding and awareness of travel time reliability. The Reliability Workshop is viewed as the culmination of much of the technical analysis and outreach work accomplished in the pilot study. Therefore it has been documented in detail in Chapter 5 of this report. The chapter includes many of the presentation slides from the workshop and paraphrases the presenter and participant discussions. Refined Technical Analysis The final element of the pilot testing was a return to further in-depth technical analysis of the L02 and L07 toolsets. In the initial evaluations documented in Chapter 2, 3, and 4, there were a number of topics that were not fully explored due to time and analyst constraints. Further, important feedback gathered through the L05 outreach efforts brought up new areas of inquiry the team desired to explore. These technical assessments are described in Chapter 6 and include a variety of recommendations for enhancing and streamlining future evaluation of travel time reliability. 7

CHAPTER 2 TTRMS DEVELOPMENT This chapter documents the development of a travel time reliability monitoring system (TTRMS) for the Minnesota pilot site. The development of this system followed the guidelines of SHRP 2 L02: Guide to Establishing Monitoring Programs for Travel Time Reliability (Institute for Transportation Research and Education, 2014). This report details the data sources used in the development of the TTRMS for the Minnesota pilot site. This includes • Travel time and traffic data • TTRMS database development • TTRMS analysis tool This report is intended to provide an additional reference guide for future development and refinement of TTRMS efforts. Many of the data sources described in this report are specific to the Twin Cities in Minnesota; however, the lessons learned and general application are useful for TTRMS developments in outstate Minnesota and other locations throughout the country. Travel Time and Traffic Data Travel time data is the single most important input to the TTRMS database. From a data analysis and causality perspective, it should be considered the dependent variable among the data elements. Reflecting this importance, the data collection and processing of this data were paramount to successful evaluation of reliability. This section discusses the source of traffic data used in this effort and how they were used as the backbone of the development of the TTRMS. The Twin Cities freeway system is highly instrumented, with inductance loop detectors deployed across the network. The loop detectors are organized in stations and are placed every half mile on the freeway system, with a total of approximately 4,000 detectors. Each loop detector collects volume and occupancy data, which are transmitted to MnDOT’s Regional Transportation Management Center (RTMC). These data are archived daily and stored in 30- second intervals. The volume and occupancy data collected by the loop detectors are processed at the RTMC to assess a range of traffic measures. Each detector is assigned a vehicle-length factor, which allows occupancy to be converted to density, in units of vehicles per mile per lane. In turn, the flow is divided by density giving speed. All of these measures at all loop detector locations are publicly available from the RTMC via a Java interface accessed from MnDOT’s website. This information was the basis for the traffic data used to develop the TTRMS. Transportation Information and Condition Analysis System Tool The baseline traffic flow measures just described have been further refined for use in more detailed traffic analyses by Dr. Eil Kwon at the Civil Engineering Department of the University 8

of Minnesota Duluth. This has led to the development of an interface program named the Transportation Information and Condition Analysis System (TICAS). This program uses the RTMC detector data to develop additional traffic measures for freeway facilities by combining data from multiple detectors and referencing geometric conditions. These measures include travel time, vehicle-miles traveled (VMT), and delay, among others. Members of the Minnesota pilot site research team attended a presentation by Dr. Kwon to learn more about how the TICAS software functions. This section summarizes the TICAS program and the features it provides to evaluate facility-level traffic conditions. Figure 2.1 shows the interface screen of the TICAS program. Figure 2.1. TICAS interface. TICAS performs a number of additional calculations using the data from the detector stations to produce the travel time and VMT information. Sections of highway between detector stations are divided into three segments of equal length (due to the varying distances between detector stations, the segments between stations vary along a highway). The upstream and downstream segments adjacent to detector stations are assigned the q (flow), u (speed), and k (density) values observed at that station. The middle segments halfway between detector stations are assigned the average of k and u values from the upstream and downstream stations; q is then computed using the new k and u values. Figure 2.2 shows the process for computing travel time from these values. 9
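The segment assignment just described reduces to a few lines of arithmetic built on the fundamental traffic flow relation q = k * u. The following Python sketch is illustrative only; the function name, tuple layout, and units (flow in vehicles per hour per lane, speed in miles per hour, density in vehicles per mile per lane) are assumptions for the example and are not part of TICAS itself.

def middle_segment_state(up, down):
    """Estimate the traffic state of the middle third between two detector stations.

    'up' and 'down' are (q, u, k) tuples observed at the upstream and downstream
    stations: q = flow (veh/h/lane), u = speed (mi/h), k = density (veh/mi/lane).
    Per the TICAS description, k and u are averaged and q is recomputed as q = k * u.
    """
    _, u_up, k_up = up
    _, u_dn, k_dn = down
    k_mid = (k_up + k_dn) / 2.0
    u_mid = (u_up + u_dn) / 2.0
    q_mid = k_mid * u_mid  # flow recomputed from the averaged density and speed
    return q_mid, u_mid, k_mid

# The upstream and downstream thirds simply inherit their station's observed (q, u, k) values.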

Figure 2.2. Schematic of TICAS travel time calculation. The q, u, and k values of each segment for each time period are known, so each box in the graph has a flow, speed, and density. Travel time is computed as though an individual vehicle departed the starting point at the timestamp of the data record. The time to traverse Segment 1 in Time Period 1 is computed from the speed, and then the vehicle moves to Segment 2 with that accumulated travel time. The vehicle begins traversing Segment 2 at the accumulated time according to the speed of Segment 2 during the period. If the vehicle does not fully traverse Segment 2 before the end of Time Period 1, the speed of Segment 2 during Time Period 2 is applied for the remainder of time needed to complete traversing Segment 2. This process is repeated until the vehicle reaches the end point. The reported travel time is the sum of all segment travel times required to traverse the full study facility. An additional note on the travel time computation process is that detection data is averaged across all lanes at each detector station. In the event a single lane detector is out of service or has erroneous data, neighboring stations are used to estimate data for that section. Computation of VMT data used the same baseline calculations as the travel time process described above. In this case, VMT is reported for each time interval as the flow rate multiplied by the length of each box shown in Figure 2.2. 10
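The virtual-vehicle traversal described above can be expressed as a short routine. The sketch below is a minimal illustration of that stepping logic, not TICAS source code; the function signature, the persistence of the last time period, and the guard against zero speeds are assumptions added for the example.

def facility_travel_time(seg_lengths, speeds, period_s):
    """Trace a virtual vehicle through the facility segments.

    seg_lengths: list of segment lengths (miles), ordered upstream to downstream.
    speeds[t][i]: speed (mi/h) of segment i during time period t.
    period_s: length of each time period in seconds.
    Returns the total travel time in seconds from the departure timestamp.
    """
    clock = 0.0  # seconds since the vehicle departed the starting point
    for i, length in enumerate(seg_lengths):
        remaining = length
        while remaining > 1e-9:
            t = min(int(clock // period_s), len(speeds) - 1)  # current time period
            speed = max(speeds[t][i], 1e-3)                   # mi/h, guard against zeros
            time_left = period_s - (clock % period_s)         # seconds left in this period
            reachable = speed * time_left / 3600.0            # miles coverable this period
            if reachable >= remaining:
                clock += remaining / speed * 3600.0
                remaining = 0.0
            else:
                clock += time_left
                remaining -= reachable
    return clock

def interval_vmt(flows, seg_lengths):
    """Facility VMT for one interval: flow in each box times its length, summed."""
    return sum(q * length for q, length in zip(flows, seg_lengths))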

Travel Time Data Extraction Process

TICAS was used to extract the traffic data for the TTRMS and to perform additional calculations to derive the travel time and VMT information. TICAS calculates and provides cumulative travel time with records every 0.1 mile from the specified start point to the end point along the highway. Data are available in varying time intervals, ranging from 30 seconds to 1 hour. It was very important that the facility definitions stayed consistent when downloading the data. Facility definitions can be saved in TICAS, so a user can call up previously used facilities if needed for later analysis. The start and end points for the study highways are listed in Table 2.1.

Table 2.1. Facility Endpoints
Facility | From | To | Length (miles) | Number of Stations
TH-100 | 77th Street | 57th Avenue | 14.6 | 30
I-94 (I-494 to TH 101) | I-494 | CR 81 | 9.0 | 11
I-94 (Minneapolis to Saint Paul) | Plymouth Avenue | Mounds Boulevard | 12.8 | 29

The team downloaded data from January 2006 through December 2012 for each facility. Downloading the traffic data was a time-consuming effort, because the TICAS software is only capable of calculating 2 months of data per query. In addition, when downloading the data for Trunk Highway (TH) 100, TICAS would quit working if more than two weeks of data were selected at one time, due to the large number of stations along the highway.

If days with no travel time or VMT data available were selected when using TICAS, an "error in evaluation" message would appear. When this occurred, each individual day had to be checked to determine which days were missing data. Generally, the only days with missing data were the first Saturday and Sunday of November each year (on occasion, the last weekend of October was missing data). Based on that, the search was narrowed down fairly quickly and the days with missing data were simply not selected in TICAS. Table 2.2 lists the days that had missing travel time or VMT data.

Table 2.2. Days with Missing TICAS Data
Year | Days with Missing Data
2006 | October 28–29
2007 | November 3–4
2008 | November 1–2
2009 | October 31–November 1
2010 | November 6–7
2011 | November 5–6
2012 | November 3–4

The amount of time spent downloading and aggregating the traffic data was extensive, given the limitations on large-scale data processing discovered in this process. On average, it took two members of the team 4 days to download the travel time and VMT data for all 7 years for each facility.
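Planning the queries ahead of time can reduce the trial and error described above. The Python sketch below is purely illustrative; TICAS is a desktop Java tool with no scripting interface, so this only generates a checklist of date windows for manual entry. It splits an analysis period into windows no longer than the per-query limit and lists the known missing days from Table 2.2 so they can be deselected.

from datetime import date, timedelta

# Known days with no TICAS travel time or VMT data (from Table 2.2)
MISSING = {
    date(2006, 10, 28), date(2006, 10, 29),
    date(2007, 11, 3),  date(2007, 11, 4),
    date(2008, 11, 1),  date(2008, 11, 2),
    date(2009, 10, 31), date(2009, 11, 1),
    date(2010, 11, 6),  date(2010, 11, 7),
    date(2011, 11, 5),  date(2011, 11, 6),
    date(2012, 11, 3),  date(2012, 11, 4),
}

def query_windows(start, end, max_days=60):
    """Yield (window_start, window_end, days_to_deselect) covering [start, end]."""
    cur = start
    while cur <= end:
        win_end = min(cur + timedelta(days=max_days - 1), end)
        skipped = sorted(d for d in MISSING if cur <= d <= win_end)
        yield cur, win_end, skipped
        cur = win_end + timedelta(days=1)

# Example: two-week windows for TH-100, which could not handle longer queries
for window in query_windows(date(2006, 1, 1), date(2006, 3, 31), max_days=14):
    print(window)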

Table 2.3 and Table 2.4 provide an example of the travel time and VMT data downloaded using TICAS.

Table 2.3. Travel Time TICAS Output Example (cumulative travel time for 0.1-mile segments)
Station | Accumulated Distance | 2009-01-01 00:05:00 | 2009-01-01 00:10:00 | 2009-01-01 00:15:00
77th St. (S375), 2 lanes | 0.0000 | 0.0000 | 0.0000 | 0.0000
 | 0.1000 | 0.0996 | 0.0928 | 0.0989
 | 0.2000 | 0.1993 | 0.1856 | 0.1978
 | 0.3000 | 0.2989 | 0.2783 | 0.2967
 | 0.4000 | 0.3992 | 0.3743 | 0.3968
 | 0.5000 | 0.4995 | 0.4702 | 0.4970
 | 0.6000 | 0.5999 | 0.5662 | 0.5971
 | 0.7000 | 0.7009 | 0.6655 | 0.6984
 | 0.8000 | 0.8019 | 0.7648 | 0.7998
70th St. (S376), 2 lanes | 0.8000 | 0.9029 | 0.8641 | 0.9011

Table 2.4. VMT TICAS Output Example (VMT for 0.1-mile segments)
Station | Accumulated Distance | 2009-01-01 00:05:00 | 2009-01-01 00:10:00 | 2009-01-01 00:15:00
77th St. (S375), 2 lanes | 0.0000 | 3.7000 | 2.8222 | 3.9667
 | 0.1000 | 3.7000 | 2.8222 | 3.9667
 | 0.2000 | 3.7000 | 2.8222 | 3.9667
 | 0.3000 | 3.1833 | 2.7167 | 3.7333
 | 0.4000 | 3.1833 | 2.7167 | 3.7333
 | 0.5000 | 3.1833 | 2.7167 | 3.7333
 | 0.6000 | 2.7500 | 2.5556 | 3.2222
 | 0.7000 | 2.7500 | 2.5556 | 3.2222
 | 0.8000 | 2.7500 | 2.5556 | 3.222
70th St. (S376), 2 lanes | 0.8000 | 2.7500 | 2.5556 | 3.2222

As shown in Table 2.3, TICAS outputs the traffic data in blocks for each day. Within these blocks, the location is organized by row and the observation time is organized by column. The TTRMS database used macros to reorganize this data so that the cumulative travel times and VMT are calculated for the entire facility and stored in three columns: timestamp, travel time, and VMT. An example of the database table format is shown in Table 2.5.
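Before the Table 2.5 excerpt that follows, the reorganization step just described is, in essence, a reshaping operation. The Python sketch below is a minimal illustration under stated assumptions (the block has already been read into a DataFrame with one row per 0.1-mile record, in order, and one column per timestamp); the function and variable names are illustrative and are not the pilot team's actual macros.

import pandas as pd

def blocks_to_timeseries(tt_block: pd.DataFrame, vmt_block: pd.DataFrame) -> pd.DataFrame:
    """Collapse TICAS output blocks into one row per timestamp.

    tt_block: cumulative travel time by 0.1-mile record (rows) and timestamp (columns).
    vmt_block: VMT by 0.1-mile segment (rows) and timestamp (columns).
    The facility travel time is the cumulative value at the last (downstream) record;
    the facility VMT is the sum over all segments in the interval.
    """
    return pd.DataFrame({
        "travel_time": tt_block.iloc[-1, :],  # last row = full-facility cumulative time
        "vmt": vmt_block.sum(axis=0),         # total vehicle-miles in the interval
    })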

Table 2.5. TTRMS Traffic Data Format
Time Stamp | Travel Time | VMT
20090101 00:05 | 13.98 | 647.90
20090101 00:10 | 13.71 | 659.07
20090101 00:15 | 13.92 | 801.09
20090101 00:20 | 14.22 | 934.82
20090101 00:25 | 13.92 | 1089.63
20090101 00:30 | 13.68 | 1202.14
20090101 00:35 | 13.96 | 1238.10

This format provided the backbone for the TTRMS data structure. As shown in Table 2.5, time intervals in this example were set at 5 minutes. As the key data source in the evaluation of travel time reliability, this is the only input that requires a fixed time interval definition. Other data sources described in this report are enumerated with more flexible temporal definitions, allowing them to be compatible with a variety of traffic data time bin sizes.

Weather Data

Similar to the traffic data, weather data were needed for the analysis years from 2006 to 2012. Three data sources were considered to obtain this weather data at various locations near the study highways:
• MnDOT's Road and Weather Information System (R/WIS)
• National Oceanic and Atmospheric Administration (NOAA) National Climatic Data Center
• Weather Underground

This section describes each of these data sources in detail, including the characteristics, download process, and use in the TTRMS database.

Road and Weather Information System

R/WIS was chosen because it is operated by MnDOT and provides weather history, in small time intervals, varying from 1 to 10 minutes, at numerous sites across Minnesota, with several in the metro area. The sites that were the closest to the study highways were
• I-494 and I-94 (near the Trunk Highway (TH) 100 and I-94: I-494 to TH-101 facilities)
• I-35E at Cayuga Street Bridge (near the I-94: Minneapolis to Saint Paul facility)
• I-35W at the Minnesota River (near the TH-13 facility).

Figure 2.3 displays the location of each R/WIS site.

For each site, R/WIS has atmospheric (weather condition) and precipitation history tables available to view and export. The attributes reported from the atmospheric history table are • Air temperature • Relative humidity • Dew point temperature • Barometric pressure • Average wind speed • Maximum wind speed • Average wind direction • Precipitation type • Precipitation intensity • Precipitation accumulation • Precipitation rate • Visibility Figure 2.3. R/WIS station locations. ESRI, 2013 14

The attributes reported from the precipitation history are the following: • Precipitation type • Precipitation intensity • Precipitation rate • Precipitation start time • Precipitation end time • 10 Minute precipitation accumulation • 1 Hour precipitation accumulation • 3 Hour precipitation accumulation • 6 Hour precipitation accumulation • 12 Hour precipitation accumulation • 24 Hour precipitation accumulation After reviewing the data provided by the atmospheric and precipitation history tables, it was determined that the four key attributes to be referenced in the database spreadsheet are • Precipitation Intensity: Intensity of the precipitation as derived from the precipitation rate. The National Weather Service defines the following intensity classes: light, moderate, or heavy. (Source: R/WIS) • Precipitation Type: Type of precipitation detected by a precipitation sensor, if one is available. Certain types of precipitation sensors can only detect the presence or absence of precipitation and will display yes or no. Other types of precipitation sensors, such as the Weather Identifier and Visibility Sensor (WIVIS) or Optical Weather Identifier (OWI), can classify the type of precipitation. The WIVIS and OWI precipitation sensors may report “yes” at the onset of precipitation until sufficient time has elapsed to classify the precipitation type. (Source: R/WIS) • Precipitation Rate: Average precipitation rate computed every minute. Snowfall is converted to water equivalent and the rate represents the rate of liquid equivalent. (Source: R/WIS) • Precipitation Accumulation: Rainfall amount or snow liquid equivalent for the previous time period (10 minutes, 1 hour, 3 hours, etc.). This value is only displayed for National Transportation Communications for ITS Protocol (NTCIP) sites configured with the appropriate sensor. (Source: R/WIS) The process for downloading the data from R/WIS for 7 years was long and time- consuming. The website is only capable of showing users a single day of data at once. For each day, the desired table (atmospheric or precipitation) had to be viewed in the export mode, and the data had to be selected, copied, and pasted as text into a Microsoft Excel document and then converted from text to columns. This single-day viewing and downloading limitation resulted in a significant time and effort required to gather this data. Another aspect of the R/WIS website 15

that reduced the rate at which data could be downloaded is that every 2 to 3 hours the website would become unavailable for up to 10 minutes. One year of atmospheric and precipitation history at one site requires approximately 10 hours of work. An additional issue with the R/WIS data was that there were several days with missing or incomplete data at each site. Table 2.6 displays the days with missing data. Table 2.6. Days with Missing R/WIS Data Year I-94 at I-494 I-35E at Cayuga Street Bridge 2007 January1–September 2 January 27, February 10–11, April 5–9 and 24–29, May 6–7 and 9–31, June–December 2008 January–April, May 1–9 2009 October 15–31, November–December 2010 January–August, September 1–20 2011 May 13–15, October 5–9 2012 November 5–29 October 11–23 and 26–29, November 2–30, December Due to these incomplete records in the R/WIS data, the team recognized that an additional source of weather data would be required to fully populate the TTRMS database. National Oceanic and Atmospheric Administration The NOAA hourly surface data have access locations in the Twin Cities region at the Minneapolis-Saint Paul Airport, at the Crystal Airport, and in Eden Prairie. It was determined that the Minneapolis-Saint Paul Airport and the Crystal Airport locations were most relevant to the study highways. To acquire the data, users must request a specific location and time frame via e-mail. In this case, an access URL was made available to the analyst via e-mail approximately 10 minutes after the request was sent. The process for downloading the data was almost identical to R/WIS, but all 7 years could be done at once. The time it took to download these data was much shorter than the R/WIS data. However, once the data had been reviewed, several concerns arose. First, the data were recorded in much longer time intervals than the R/WIS, ranging from 20 to 30 minutes on average. Second, three of the four attributes (precipitation type, precipitation rate, and precipitation accumulation) that were needed in the database were not reported by the stations selected for the study highways. Finally, the time stamp for this data was recorded in Greenwich Mean Time (GMT) instead of Central Standard Time (GMT is the current time measured on the Prime Meridian [0 degrees longitude], and is the same all over the planet), requiring an adjustment. To make this data have consistent formatting with the R/WIS data and the database spreadsheet, significant assumptions were required regarding the precipitation accumulation, precipitation type, and precipitation rate. Due to the lack of usable information, NOAA was eliminated as a possible source of weather data for the TTRMS database. 16

Weather Underground The third source of weather data considered was the online service Weather Underground, which was developed in 1995 as an offshoot of the University of Michigan’s Internet weather database. Jeff Masters, a Ph.D. candidate at the time, was the creator of this service. Since 1995, Weather Underground has become the weather data provider for The Associated Press and Google’s search engine. In 2012, The Weather Channel acquired Weather Underground, although the website operates as a separate entity. The historical weather data from Weather Underground comes from over 25,000 personal weather stations that are a part of Weather Underground’s network. According to their website, quality control checks are performed on all incoming weather data observations to make sure they are displaying accurate data. These data were only available for download 1 day at a time, and there was a nearly identical process for downloading as the R/WIS data. However, by modifying the website address rather than clicking the back button and selecting the link for each individual day, the amount of time it took to download the daily data was greatly reduced. In addition, Weather Underground supplied the data in a single file for each day, rather than two separate files for atmospheric and precipitation information. There were many personal weather station sites available in the Twin Cities area. The Honeywell Labs location in Golden Valley was the primary site used for the TH-100 facility. On days when the Honeywell Labs station had missing or incomplete data, stations at Robbinsdale Middle School, the City of Plymouth, and Uptown Minneapolis were used to supplement the database. The station at Macalester-Groveland was used for the I-94 facility between Minneapolis and Saint Paul from January 2007 to October 2008. The Blair Manor station was used for the I-94 facility from October 2008 to December 2012; this station is in a more central location between Minneapolis and Saint Paul and reports the data in 15-minute intervals. Mounds Park was used to supplement missing data from the Macalester-Groveland and Blair Manor stations. Figure 2.4 displays the locations of the Weather Underground stations that were used. 17

Figure 2.4. Weather Underground station locations. ESRI, 2013.

There were several days that had some data available, such as dew point temperature, pressure, and wind speed, but the temperature and hourly precipitation values frequently appeared unreasonable. In some cases, the hourly precipitation was recorded as negative, or the temperature was recorded as -999 degrees. Records with these values were replaced with data from alternate stations. The most efficient way to deal with this was to download a year's worth of data at a time, keep track of the dates/times with errors reported, replace these dates with acceptable data, and then move on to the next year.
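The screening step just described can be captured in a small script. The Python sketch below is illustrative only; the column names (temp_f, hourly_precip_in) and the alternate-station lookup are assumptions for the example, not part of the pilot team's actual spreadsheet workflow. It flags the unreasonable values noted above and fills them from a backup station's records.

import pandas as pd

def clean_station_data(primary: pd.DataFrame, backup: pd.DataFrame) -> pd.DataFrame:
    """Replace obviously bad records (-999 temperatures, negative precipitation)
    with values from an alternate Weather Underground station.
    Both frames are assumed to be indexed by observation timestamp."""
    cleaned = primary.copy()
    bad = (cleaned["temp_f"] <= -999) | (cleaned["hourly_precip_in"] < 0)
    # Align the backup station to the primary station's timestamps (nearest match)
    backup_aligned = backup.reindex(cleaned.index, method="nearest")
    cleaned.loc[bad, ["temp_f", "hourly_precip_in"]] = (
        backup_aligned.loc[bad, ["temp_f", "hourly_precip_in"]]
    )
    return cleaned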

The one key attribute provided by the R/WIS data but missing from the Weather Underground data was the precipitation type. The R/WIS data reported the precipitation type as none, rain, frozen, snow, or other. It was deemed necessary to have the precipitation type in the database spreadsheet for all time intervals, so a method was developed to calculate the precipitation type in the Weather Underground data using the R/WIS data. Data from February, March, and April 2011 from both sources were compared side by side. Using the temperature from the Weather Underground data and the precipitation type from the R/WIS data, the temperature ranges for the different precipitation types were determined. Precipitation with a temperature of 36 degrees Fahrenheit or less was categorized as snow, between 36 and 42 degrees was frozen, and any precipitation with a temperature greater than 42 degrees was considered rain. Figure 2.5 shows the range of temperatures at which precipitation occurred during April 2011. This was used to develop the precipitation temperature thresholds.

Figure 2.5. Precipitation temperature thresholds. [Chart: number of records by temperature (degrees F) for snow, frozen, and rain, April 2011.]

If there was no precipitation reported, the precipitation type was classified as none. After completing this exercise, the Weather Underground data was suitable to use in the database.

The initial plan was to insert the Weather Underground data only on days where the R/WIS data was missing or incomplete. However, when reviewing both sources of data, there appeared to be some major inconsistencies between the precipitation rates. The R/WIS data had an overall precipitation rate that was much higher (five to 10 times higher) than the Weather Underground data. Upon further investigation, there appeared to be several records in the R/WIS data that were unreasonable based on the precipitation rate (e.g., 7.9 inches of rain per hour) and precipitation type (snow in July). Since the code used in the database spreadsheet selected the "worst" condition in the 5-minute time bin, the snow records were always selected as the precipitation type in the spreadsheet.
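The threshold rule described above, together with the worst-condition-in-the-bin selection, is simple enough to express directly. The Python sketch below is illustrative; the function names, field names, and the explicit severity ranking are assumptions (the report does not define a ranking), not the pilot team's actual spreadsheet code.

# Severity ranking used to pick the "worst" condition within a time bin
# (assumed ordering for illustration only).
SEVERITY = {"none": 0, "rain": 1, "frozen": 2, "snow": 3}

def precip_type(temp_f, hourly_precip_in):
    """Classify precipitation type from temperature using the derived thresholds
    (36 degrees or less = snow, 36-42 = frozen, above 42 = rain)."""
    if hourly_precip_in <= 0:
        return "none"
    if temp_f <= 36:
        return "snow"
    if temp_f <= 42:
        return "frozen"
    return "rain"

def worst_condition(records):
    """Return the most severe precipitation type among the records in a bin."""
    types = [precip_type(r["temp_f"], r["hourly_precip_in"]) for r in records]
    return max(types, key=SEVERITY.get) if types else "none"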

To decide which source was the most accurate, daily precipitation totals from both sources were compared to historical weather data from the Minnesota Climatology Working Group and KARE 11 News. The precipitation reported by Weather Underground was much more consistent than the amount reported by R/WIS. Although the R/WIS data had already been downloaded and processed for all 7 years, it would have taken more time and effort to rewrite the code to account for the errors in the R/WIS data than to download the remaining data from Weather Underground. After much discussion, it was decided that the Weather Underground data would be the sole source of weather data. This approach was thought to provide the greatest accuracy and consistency in the weather data.

Event Data

Special events are one of the sources of unreliable travel times considered in the TTRMS. These events are characterized by concentrated traffic patterns with specific origins or destinations. These disruptive traffic demand conditions may recur frequently but are not part of the recurring commuter peaks.

Event Types and Data Sources

Several types of events were considered for the analysis, primarily focused on events in downtown Minneapolis and Saint Paul. Professional sports schedules for the Minnesota Twins, Vikings, Wild, and Timberwolves were downloaded from various sources. The Target Center website, along with the City of Minneapolis Event Log and the I-394 MnPASS Reversible Lane Calendar, provided information about additional events taking place in downtown Minneapolis.

Minnesota Vikings

Historical Minnesota Vikings football game schedules were found on the team's website. The Minnesota Vikings are a National Football League (NFL) franchise, and their games produce the largest concentration of trips among the events included in the TTRMS. The Vikings play their home games at the Hubert H. Humphrey Metrodome located in downtown Minneapolis, which has a capacity of nearly 70,000 seats. In addition to identifying the schedule of past game start times, an investigation was completed to determine the timing and duration associated with arrival and departure patterns for these events. Four home games from fall 2012 were examined to determine the appropriate arrival and departure windows. The games selected were all Sunday games, one from each month of the regular season. Thursday or Monday games were not analyzed, due to background commuter traffic. The impact of each game was examined by measuring the duration of the arrival and departure periods and when they occurred in relation to the start and end times of the event. The end time of each game was assumed to be 3 hours after the start of the game. Traffic volumes were obtained from freeway loop detector counts for each day along four facilities (I-94, I-394, I-35W, and I-494) to determine the impact of the arrival and departure. There are many variables associated with arrival and departure times of NFL games. The intent was to identify a highly representative arrival and departure duration, while acknowledging that it does not cover all cases.

Speed and volume observations were made for the following highways:

• I-94: The selected loop detectors were located just east of the 5th Street off-ramp and the 6th Street on-ramp. The following results were recorded:
  − Arrival duration lasted 100 minutes and occurred up to 15 minutes before the start of the game.
    o Two of the four games had no noticeable speed changes during the arrival period.
  − Departure duration lasted 85 minutes, starting 25 minutes after the end of the game.
    o Speeds decreased by approximately 5 miles per hour for roughly 30 minutes, starting 15 minutes after the end of the game.

• I-394: The selected loop detectors were located just east of the I-94 on- and off-ramps.
  − Arrival duration lasted 115 minutes and occurred up to 20 minutes before the start of the game.
  − Departure duration lasted 95 minutes, starting 10 minutes after the start of the game.
  − Little to no speed impacts were observed along this facility.

• I-35W: The selected loop detectors were located just south of the TH-65 on- and off-ramps, which are the primary access points to downtown Minneapolis.
  − Arrival duration lasted 110 minutes and occurred up to 25 minutes before the start of the game.
    o One game had arrival impacts on speed. These impacts lasted 60 minutes and ended 30 minutes before the game start time.
  − Departure duration lasted 80 minutes, starting 10 minutes after the end of the game.
    o Departure impacts on speed lasted 85 minutes and occurred starting 15 minutes after the end of the game.

• I-494: The selected loop detectors were located west of Fish Lake Interchange (I-494) on- and off-ramps. This location is approximately 15 miles from the Metrodome; however, it is an important interregional route to Minneapolis from greater Minnesota.
  − Arrival duration lasted 90 minutes and occurred up to 50 minutes before the start of the game.
    o Arrival times were hard to distinguish from regular Sunday volumes.
  − Departure duration lasted 85 minutes and occurred starting 10 minutes after the end of the game.
    o Note: it is unlikely that a vehicle could make it to this location from the stadium in 10 minutes unless it left early.
  − No speed impacts were observed along this facility.

Using this information, the arrival duration was set as 3 hours, ending at the game start time. The departure duration was determined to be a 2-hour window, beginning 3 hours after the game start time.
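The 3-hour arrival and 2-hour departure windows can be generated mechanically from a list of game start times. The Python sketch below is a minimal illustration; the function name and record layout are assumptions rather than the pilot team's actual tooling. It produces arrival and departure records in the style of Table 2.7 from a kickoff time, using the Vikings windows derived above.

from datetime import datetime, timedelta

def vikings_event_windows(game_start):
    """Build arrival/departure records from a game start time.

    Arrival: 3-hour window ending at kickoff.
    Departure: 2-hour window beginning 3 hours after kickoff (the assumed game end)."""
    arrival = {
        "event_type": "Vikings_A",
        "start": game_start - timedelta(hours=3),
        "end": game_start,
    }
    departure = {
        "event_type": "Vikings_D",
        "start": game_start + timedelta(hours=3),
        "end": game_start + timedelta(hours=5),
    }
    return [arrival, departure]

# Example: the December 30, 2012 game implied by Table 2.7 (arrival ending 15:25),
# which reproduces event records 15 and 16 in that table.
for rec in vikings_event_windows(datetime(2012, 12, 30, 15, 25)):
    print(rec["event_type"], rec["start"], rec["end"])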

Minnesota Twins

The Minnesota Twins are a Major League Baseball team that plays home games in downtown Minneapolis. For the Minnesota Twins, game data from 2006 to 2012 was found on Wikipedia, and game start times were identified from ESPN.com because they were not provided in the Wiki data. The games were then sorted into two groups: 2006–2009 and 2010–2012. This was done to account for the location change from the Metrodome (where the Twins played through the 2009 season) to their new ballpark, Target Field, which opened in 2010. Both facilities are located in downtown Minneapolis; however, they are on opposite sides of downtown and may be expected to result in different impacts to the freeways serving downtown Minneapolis. This location change is also associated with an increase in attendance; while Metrodome games typically drew 12,000 to 20,000 spectators, the early years of Target Field games consistently had over 40,000 in attendance. The arrival duration for all Twins games was 3 hours, ending at the game start time. Departure duration lasted 2 hours, beginning 2.5 hours after the game start time.

Minnesota Wild

The Minnesota Wild is a National Hockey League team that plays home games in downtown Saint Paul at the Xcel Energy Center. Wild hockey data was downloaded from the Wild website for the 2006 to 2012 seasons. Wild games at the Xcel Energy Center generally have consistent attendance of about 18,000 fans. The arrival time was determined to be 3 hours prior to the game start time, and the departure duration was set as 2 hours, starting 2.5 hours after the game start time. To confirm that these windows were indeed capturing all of the event traffic, detector data along I-94 was pulled for several game days. This investigation found that the arrival and departure times varied from game to game, but the 3-hour arrival window and 2.5-hour departure window would cover the variable arrival/departure times for all of the sporting events.

Minnesota Timberwolves

The Minnesota Timberwolves are a National Basketball Association (NBA) franchise that plays home games at the Target Center arena in downtown Minneapolis. Their home game schedule was found on the team's website. The Target Center has a capacity of up to 20,000 seats; however, most Timberwolves games during the 2006 through 2012 seasons were attended by 10,000 to 12,000 spectators. For these events, the arrival duration was set to 3 hours prior to the game start time. The departure duration was determined to be a 2-hour period, beginning 2.5 hours after the game start time.

Additional Sources Two additional sources were used to collect information on events taking place in downtown Minneapolis. The Minneapolis Event Log, collected by the City of Minneapolis, provides data on the start and end time for events taking place in Minneapolis in 2012 (it did not exist prior to 2012). Events from this log had a wide range of sizes and impacts. Therefore, only events with significant attendance and concentrated arrival and departure patterns were included in the TTRMS database. The threshold was set at approximately 15,000-person attendance and featured activities such as University of Minnesota sporting events and live concerts. Events with no concentrated arrival and departure times, such as convention center auto shows and home shows, were not included. Supplemental event data for downtown Minneapolis during the period from 2006 to 2011 was identified from the I-394 MnPASS Reversible Gate Arm Schedule. I-394, which connects downtown Minneapolis to the western suburbs, includes a reversible two-lane section which is accessible by high-occupancy vehicles, buses, and toll-paying single-occupant vehicles. The gate arm schedule, which typically restricts access to the lanes for the peak commuter direction in the mornings and afternoons, is occasionally adjusted to accommodate traffic increases related to special events. MnDOT maintains a list of events that warrant special scheduling of the reversible lane and includes events such as major concerts or other civic events. These events were included in the TTRMS. Other Target Center Events Additional Target Center event data were downloaded from the Target Center website. Other sporting events taking place at Target Center included Minnesota Lynx (Women’s National Basketball Association [WNBA]) and Minnesota State High School League games. A variety of entertainment events at Target Center were included: Nickelback, Sugarland, and The Fray concerts; and specialty events such as Elmo’s Green Thumb, Curious George, and Dane Cook. The arrival duration for all other events taking place at Target Center was set to 2 hours, ending at the event start time. The departure duration was determined to be a 2-hour window starting at the event end time. Event Data Preparation Event data was prepared for use in the TTRMS database by organizing the records by record number, date, start time, end time, and event type. Table 2.7 shows an example of the event records formatted for input to the database. 23
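To illustrate how the arrival and departure windows described above translate into database records like those in Table 2.7, the following Python sketch generates the two records for a single game. The window values mirror those stated in the text; the function, dictionary keys, and record layout are hypothetical conveniences for illustration and are not part of the TTRMS spreadsheet itself.

```python
from datetime import datetime, timedelta

# Arrival/departure windows by event type, in hours relative to the game start
# time (values taken from the windows described above; structure is assumed).
WINDOWS = {
    "Vikings": {"arrive_before": 3.0, "depart_offset": 3.0, "depart_length": 2.0},
    "Twins":   {"arrive_before": 3.0, "depart_offset": 2.5, "depart_length": 2.0},
    "Wild":    {"arrive_before": 3.0, "depart_offset": 2.5, "depart_length": 2.0},
    "Wolves":  {"arrive_before": 3.0, "depart_offset": 2.5, "depart_length": 2.0},
}

def event_records(team, game_start, next_record_number):
    """Return the arrival and departure records for one game,
    formatted like the rows in Table 2.7."""
    w = WINDOWS[team]
    arrive_start = game_start - timedelta(hours=w["arrive_before"])
    depart_start = game_start + timedelta(hours=w["depart_offset"])
    depart_end = depart_start + timedelta(hours=w["depart_length"])
    return [
        (next_record_number,     game_start.date(), arrive_start, game_start, f"{team}_A"),
        (next_record_number + 1, game_start.date(), depart_start, depart_end, f"{team}_D"),
    ]

# Example: a 15:25 kickoff on 12/30/2012 reproduces rows 15 and 16 of Table 2.7
# (arrival window 12:25 to 15:25, departure window 18:25 to 20:25).
rows = event_records("Vikings", datetime(2012, 12, 30, 15, 25), 15)
```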

Table 2.7. TTRMS Event Data Formatting

Event Record Number | Date | Start Time | End Time | Event Type
11 | 11/11/2012 | 11/11/2012 9:00 | 11/11/2012 12:00 | Vikings_A
12 | 11/11/2012 | 11/11/2012 15:00 | 11/11/2012 17:00 | Vikings_D
13 | 12/9/2012 | 12/9/2012 9:00 | 12/9/2012 12:00 | Vikings_A
14 | 12/9/2012 | 12/9/2012 15:00 | 12/9/2012 17:00 | Vikings_D
15 | 12/30/2012 | 12/30/2012 12:25 | 12/30/2012 15:25 | Vikings_A
16 | 12/30/2012 | 12/30/2012 18:25 | 12/30/2012 20:25 | Vikings_D
17 | 4/9/2012 | 4/9/2012 12:10 | 4/9/2012 15:10 | Twins_A
18 | 4/9/2012 | 4/9/2012 17:40 | 4/9/2012 19:40 | Twins_D
19 | 4/11/2012 | 4/11/2012 16:10 | 4/11/2012 19:10 | Twins_A
20 | 4/11/2012 | 4/11/2012 21:40 | 4/11/2012 23:40 | Twins_D

Crash and Incident Information

Crashes and other incidents are significant sources of travel time unreliability on the highway system. These frequently result in lane blockages that reduce the capacity and throughput of a roadway and cause significant delays. The Twin Cities highway network is actively managed by law enforcement, service patrols, closed-circuit television (CCTV) cameras, and 911 dispatch services. The Minnesota pilot team attempted to utilize as much data from these sources as possible. This section describes the data sources that were obtained for this purpose, the type of information contained in each, and how each was processed for use in the TTRMS database.

It is important to distinguish the definitions of crashes and incidents in the context of the Minnesota pilot site TTRMS development. Crashes are motor vehicle collisions with other vehicles or fixed objects that result in over $1,000 of property damage or in personal injury. These events are recorded by law enforcement personnel, and crash records are compiled in the Minnesota Department of Public Safety (DPS) database. These records are frequently used to perform safety reviews on highways by computing historical crash rates to identify high crash locations. Incidents are any other nonrecurring disruptions to the highway that have the potential to affect capacity and throughput. These situations can include stalled vehicles, medical emergencies, or animals and debris on the roadway.

Crash and Incident Data Sources

Three sources of data were used to assemble crash and incident data: the Minnesota State Patrol Computer Aided Dispatch (CAD) data, MnDOT's Dynamic Message Sign (DMS) logs, and the DPS crash records. This section describes each of these data sources and how each was used to develop the crash and incident inputs to the TTRMS database.

Computer Aided Dispatch Data

The research team obtained the CAD database for the analysis years from the RTMC, which hosts the joint dispatch center with the State Patrol. The CAD data provide information about calls received by State Patrol 911 operators, call records, and emergency response actions. Details of each call include the location of the event, actions taken, roadway impacts, start time, and end time, among many others. Records containing information along the study highways were queried from the overall database for the metropolitan area. These records were further refined to include those referencing crashes, debris, vehicle stalls, and other incidents.

It is important to note that the CAD system was upgraded in August 2008. The new CAD system provided dispatchers with additional data entry fields and greater flexibility to add incident detail. As a result, data prior to August 2008 do not include the same level of detail as more recent records.

Dynamic Message Sign Logs

MnDOT's RTMC operates a system of DMS along the principal arterial system in the Twin Cities region. These signs are frequently used to display general information such as current travel times, seat belt warnings, and increased impaired driving enforcement. In the case of crashes or incidents, they are also utilized to notify motorists of traffic disruptions downstream or on connecting routes. Logs of the messages displayed on these signs were obtained from the RTMC for the pilot study highways for inclusion in the TTRMS development. DMS data were available for the following periods for each facility:
• TH-100: 2006 to 2012
• TH-13: 2010 to 2012
• I-94 Minneapolis to Saint Paul: 2010 to 2012
• I-94 (I-494 to TH-101): 2006 to 2012

Figure 2.6 shows how the team manipulated the information from the DMS logs so the data could be used in the TTRMS. The DMS logs provided information regarding the time, location, type, and impact of events displayed on the signs. Generally, the logs included a timestamp for when the message was initially displayed and when it cleared, allowing the user to calculate the duration of the disruption. The impact types include on shoulder, lane closed, two or more lanes closed, and road closed. For messages without a specific location, a reference point of 2,000 feet downstream from the DMS device was used.
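A minimal sketch of the two DMS-log derivations just described (duration from the display and clear timestamps, and a default reference location 2,000 feet downstream of the sign) is shown below. The field names are assumptions for illustration; they do not necessarily match the RTMC log format.

```python
# Illustrative only: derive duration and location from one DMS log entry.
def dms_duration_minutes(displayed, cleared):
    """Duration of the disruption in minutes from display/clear timestamps."""
    if cleared is None:
        return None  # handled later with the average durations in Table 2.8
    return (cleared - displayed).total_seconds() / 60.0

def dms_reference_milepost(message_milepost, sign_milepost):
    """Use the message location when one is given; otherwise assume a point
    2,000 feet (about 0.38 mile) downstream of the DMS device."""
    if message_milepost is not None:
        return message_milepost
    return sign_milepost + 2000.0 / 5280.0
```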

Figure 2.6. Schematic of DMS event detail description extraction. MnCMAT Crash Records The final source that was used for nonrecurring conditions data was the DPS crash records. These data were accessed using MnDOT’s Minnesota Crash Mapping Analysis Tool (MnCMAT). MnCMAT is a geographic interface that allows users to select crashes by road segment and apply certain filters to the data by visually selecting records from a specific facility during a specified time period. These data can also be accessed upon request from the Minnesota DPS; however, the MnCMAT interface provides enhanced capabilities for selecting data and downloading large quantities of data at once. The crash records included many data attributes describing the crash details. Following review of the data, the following categories were carried forward into the aggregate crash data file used to process crashes for use in combination with other sources: • Route • Reference Point • Crash Number • Day, Month, and Year • Day of Week • Time • Travel Direction • Severity • True Miles • Point X • Point Y 26

Conflation of Crash and Incident Records Many of the crash and incident records in the CAD, DMS, and MnCMAT databases are believed to represent the same events occurring on the highway. For example, in the case of a crash, the dispatcher would receive a 911 call and make an entry in the CAD system. This would be passed on to an RTMC operator who would deploy a DMS warning. Finally, the responding law enforcement officer would file a crash report, which would be a part of the DPS crash database. Figure 2.7 illustrates the relationship between each of the three data sources for nonrecurring conditions and the pertinent information contained in each. Figure 2.7. Nonrecurring conditions data sources. The Minnesota pilot team sought to conflate these different sources for two reasons. The first was to eliminate duplicate records for the same event, and the second was to utilize details from as many sources as possible to classify the incident. Unfortunately, no unique identifiers existed to directly link these records. Due to the spatial and temporal nature of crashes and incidents, a geographic information system (GIS) linking process was used to geographically represent the data. A shapefile was constructed for each crash or incident data set, using the milepost of the crash as the x-coordinate and a time equivalence factor for the y-coordinate. The scale for the y-axis was defined as 1 hour set equal to 1 mile along the highway. A spatial join was performed using a 1 mile/1 hour threshold. This process linked unique identifiers for crash records from each source to each of the others. Crashes and incidents were treated separately in this exercise and joins were performed in both directions to ensure that records were matched to their closest neighbor; i.e., data set A joined to data set B, and then data set B joined to data set A. Figure 2.8 shows the relationship between the different crash and incident sources. 27
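The space-time matching just described can be illustrated with a short Python sketch. It converts the time difference between two records into an equivalent mileage (1 hour set equal to 1 mile) and joins each record to its nearest neighbor within the 1 mile/1 hour threshold. The record structure (a milepost and a start timestamp per record) and function names are assumptions; the actual work was performed with GIS shapefiles and spatial joins.

```python
import math

MILES_PER_HOUR_EQUIV = 1.0   # y-axis scale: 1 hour of time equals 1 mile of distance
MATCH_THRESHOLD = 1.0        # join radius in equivalent miles (1 mile / 1 hour)

def space_time_distance(rec_a, rec_b):
    """Distance between two records after converting time to equivalent miles,
    mirroring the GIS coordinate scheme described above. Records are dicts with
    hypothetical keys 'milepost' and 'start' (a datetime)."""
    dx = rec_a["milepost"] - rec_b["milepost"]
    dt_hours = abs((rec_a["start"] - rec_b["start"]).total_seconds()) / 3600.0
    return math.hypot(dx, dt_hours * MILES_PER_HOUR_EQUIV)

def join_nearest(records_a, records_b):
    """Join each record in A to its nearest neighbor in B within the threshold.
    Running this in both directions (A to B, then B to A) mirrors the two-way join."""
    matches = {}
    for i, a in enumerate(records_a):
        best = min(range(len(records_b)),
                   key=lambda j: space_time_distance(a, records_b[j]),
                   default=None)
        if best is not None and space_time_distance(a, records_b[best]) <= MATCH_THRESHOLD:
            matches[i] = best
    return matches
```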

Figure 2.8. Schematic of crash and incident record spatial join relationships. Processing of Conflated Crash and Incident Records Following the conflation step, the crash and incident records contained details from all of the source data. Many of these attributes overlap; however, the confidence in each data source may be different. Furthermore, some of the records were missing key attribute information or failed to join to nearby records in the conflation process. Through discussion with RTMC staff, hierarchies were developed to establish an order for the duration, type, severity, and impacts of crashes and incidents for use in the TTRMS. Crash Data Hierarchy Development Crash Duration: Input received from RTMC staff suggested that CAD data contain the most reliable duration information for crashes. This makes sense, since the initial instance of such an event would be recorded when the 911 dispatcher receives a call, and that same dispatcher can view closed-circuit TV (CCTV) footage showing when the situation has been resolved. CAD data consistently provided an end timestamp, so computation of crash duration from this data source was nearly always possible. In some cases in the DMS data, no cleared timestamp was provided, limiting the ability to compute the crash duration. To help develop estimates for durations of these crashes, average durations of those records that did include cleared timestamps were calculated. Table 2.8 lists the different crash and incident types and the average duration for each. These averages were applied to the crash and incident records without cleared timestamps. 28

Table 2.8. DMS Crash and Incident Type and Estimated Duration

Crash and Incident Type | Estimated Duration (minutes)
Crash | 31
Disabled Vehicle | 14
Debris | 8
Other | 14

MnCMAT, by contrast, does not provide any information regarding the duration of crashes. In records where no CAD or DMS duration data were available, the severity was used to establish estimates for crash durations. These estimates were developed by comparing crashes of unknown duration to crashes of the same severity in the CAD data and DMS logs for which durations were available. From those records, an average duration was calculated. These values were further discussed and confirmed by MnDOT traffic staff. The estimated durations for the various crash severities are shown in Table 2.9.

Table 2.9. Crash Severity and Estimated Duration

Crash Severity | Code | Estimated Duration (minutes)
Fatal | K | 180
Incapacitating Injury | A | 90
Non-incapacitating Injury | B | 45
Possible Injury | C | 30
Property Damage | N | 30
Unknown | X | 30

As a result, the source hierarchy for crash duration is
1. CAD
2. DMS
3. MnCMAT

Crash Severity: In this case, the DPS crash records were considered the authoritative source, since they had been recorded by a law enforcement officer. CAD data also contained severity information; however, it is not as detailed as the DPS crash records. The DMS logs did not include any severity information. As a result, the following hierarchy was used for crash severity:
1. MnCMAT
2. CAD
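A compact sketch of the crash-duration hierarchy described above is shown below: the CAD duration is used when present, then the DMS duration, and finally the severity-based estimates from Tables 2.8 and 2.9. The function and argument names are hypothetical; the TTRMS itself implements this logic in its macro-enabled spreadsheet.

```python
# Average durations from Tables 2.8 and 2.9 (minutes).
DMS_TYPE_AVG_MIN = {"Crash": 31, "Disabled Vehicle": 14, "Debris": 8, "Other": 14}
SEVERITY_AVG_MIN = {"K": 180, "A": 90, "B": 45, "C": 30, "N": 30, "X": 30}

def crash_duration_minutes(cad_minutes, dms_minutes, severity):
    """Choose a crash duration by the source hierarchy CAD > DMS > MnCMAT."""
    if cad_minutes is not None:        # 1. CAD duration (preferred)
        return cad_minutes
    if dms_minutes is not None:        # 2. DMS duration (display-to-clear time)
        return dms_minutes
    return SEVERITY_AVG_MIN.get(severity, 30)   # 3. severity-based estimate
```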

Crash Impact: The DMS records provided the best data source for impacts to roadway capacity, since the displays typically included messages such as "Crash on Shoulder" or "Left Lane Closed." CAD data included less reliable impact information, subject to what the dispatcher elected to include in the database. DPS crash records did not include any impact data. As a result, the following source hierarchy was used for crash impact:
1. DMS
2. CAD

A data hierarchy was also developed for use in the database to select among crash records if multiple records exist for a single time period. This order was established to identify the worst-case crash condition if multiple records were observed for the same crash or during the same time period. This hierarchy is shown in Table 2.10.

Table 2.10. Crash Hierarchy for Multiple Records During One Time Period

Crash Records | Source
Severity
1. K | CAD/MnCMAT
2. A | MnCMAT
3. B | MnCMAT
4. INJ | CAD
5. C | MnCMAT
6. N | MnCMAT
7. PDO | CAD
Impact
1. Road Closed | CAD/DMS
2. 2+ Lanes Closed | DMS
3. Lane Closed | DMS
4. Blocking | CAD
5. On Shoulder | CAD/DMS
6. Not Blocking | CAD
7. Ran Off Road (ROR) | CAD

Incident Data Hierarchy Development

The selection of data sources for the incident data followed a process similar to that used for the crash data. In this case, it was simpler, since there were just two data sources for incidents, as compared to three for crashes.

Incident Duration: As noted previously, CAD records were the preferred source for duration information, followed by DMS records. For incidents where CAD data were not available and DMS records did not provide a cleared timestamp, the estimated durations shown in Table 2.8 were used. The source hierarchy for incident duration is

1. CAD
2. DMS

Incident Type: Incidents differ from crashes in that severity is not an associated characteristic. Rather, an "incident type" is used to describe the conditions of a particular incident. Incident types in the Minnesota data set include Debris, Disabled Vehicle, and Other Incident. These descriptions were available in both data sources; however, DMS logs typically provided more consistent information. As a result, the source hierarchy for incident types is
1. DMS
2. CAD

Incident Impact: The hierarchy for incident impact was also similar to that of the crash records. Again, DMS was the preferred source, followed by the CAD data. The source hierarchy for incident impact is
1. DMS
2. CAD

A data hierarchy was also developed for use in the database to select among incident records if multiple records exist for a single time period. This hierarchy is shown in Table 2.11.

Table 2.11. Incident Hierarchy for Multiple Records During One Time Period

Incident Records | Source
Type
1. Debris | CAD/DMS
2. Disabled Vehicle | CAD/DMS
3. Other Incident | CAD/DMS
Impact
1. Road Closed | CAD/DMS
2. 2+ Lanes Closed | DMS
3. Lane Closed | DMS
4. Blocking | CAD
5. Wrong Way | CAD
6. On Shoulder | CAD/DMS
7. Not Blocking | CAD
8. ROR | CAD

Road Work Data

Road work is another source of travel time unreliability identified by previous SHRP 2 reliability research and represents an additional critical input for the TTRMS. For the purposes of this

effort, road work is defined as any agency activity to maintain or improve the roadway that may result in impacts to capacity. This may include short-term activities such as guardrail, sign, or lighting repair as well as more significant, long-term construction actions. Maintenance and construction information was primarily identified from the DMS logs, which generally show road work information, as illustrated in Figure 2.6. MnDOT news releases and the MnDOT annual construction program were also cross-referenced to verify and supplement this information. Road work records were prepared with information similar to crashes and incidents for use in the TTRMS. Therefore, information regarding duration and impact was needed. The MnDOT news releases and DMS records were typically used to determine the duration of road work. If there was no information about a specific project, loop detector data were used to estimate the duration. Similarly, impacts were also identified from news releases and DMS logs and occasionally supplemented with loop detector data. TTRMS Database Development The TTRMS database is configured using a macro-enabled Microsoft Excel spreadsheet. This software package was identified to be the most user-friendly and data-compatible for the variety of data sources under consideration. The macros that were developed for the database application assist in the organization of the various data sources to construct the database. This section describes how these features operate and how the various data sources are linked in the database. A series of refinements were made throughout the development of the database. For example, the database allows users to specify study facility length if a shorter segment of the overall highway is to be analyzed. In addition, it is capable of accommodating a variety of observation time bins (e.g., 1, 2, 3, 5, 10, or 15 minutes), provided the traffic data are in the corresponding format. These refinements were expected to be used to their full capabilities in later stages of this study and the findings were to be documented in future memoranda. Input Data Processing The previous section described the sources of the input data, how these data were collected, and any preparations applied to these data for use in the TTRMS. This section explains how the TTRMS interprets each data source and configures it to a standardized format, allowing the data to be combined in the TTRMS database. Traffic Data The traffic data downloaded using TICAS determined the maximum facility length and the analysis time interval. The start and end point were chosen based on the detector stations, and travel time and VMT output data are provided in 0.1-mile increments. A major challenge experienced with the traffic data was how to account for days with missing travel time and VMT data. Since the spreadsheet format was initially based on the data provided by the RTMC database, when there were days with missing data, the timestamp would not match up correctly 32

to the other attributes. The code was eventually modified to include all timestamps, even for days with no traffic data.

An additional challenge the team faced when dealing with the traffic data was the difference in the way the travel time and VMT data were reported. The reported value for VMT was expressed in vehicle-miles for each particular segment and time bin, whereas the travel time values reported were cumulative from the start to the end of the facility. To account for this, the cumulative travel time at the starting point of the facility was subtracted from the cumulative travel time at the end point of the facility to calculate the intermediate (individual) travel time for each segment. Once this process had been completed for the travel time information, it was consistent with the VMT data and the two could be processed in the same manner.

Weather Data

A process in the database reformatted the weather data into the appropriate time interval lengths, as determined by the traffic data. When the time interval from the raw weather data was greater than 5 minutes, the missing intervals were assigned the conditions of the previous bin until another record was available. For example, in a 5-minute system, if the source data interval was from 13:05 to 13:20, the travel time records for the 13:10 and 13:15 bins would be assigned the 13:05 weather record. Conversely, when multiple records were available in a single 5-minute bin, the most severe weather condition was chosen to represent the bin. Table 2.12 ranks the precipitation types and intensities in descending order of severity.

Table 2.12. Precipitation Type and Intensity Hierarchy

Precipitation Type | Precipitation Intensity
Snow | Heavy
Frozen | Moderate
Rain | Slight
Other | None
None |

Crash and Incident Data

For crash data, the duration was added to the start time to determine when the crash had cleared. If the crash spanned multiple time intervals, all of the time bins contained in the duration were assigned the crash details. Similar to the weather data, if crashes overlapped, the worst crash (based on severity) was selected to populate the affected time bins. A similar process was used for the incident data, with "incident impact" used as the hierarchy variable. The order of severities and impacts is summarized for crashes in Table 2.10 and for incidents in Table 2.11.

Event Data

When an event was taking place, all time bins in the arrival and departure windows were marked with the name of the event type. Each event type was assigned a name, such as "Twins_A" (for

arrival) or "Vikings_D" (for departure). For instances with multiple events taking place, the names of the events were combined to reference both events. For example, if a Twins game arrival overlapped a Vikings game arrival, the event would be categorized as "Twins_A_Vikings_A."

Road Work

Road work records were applied to the TTRMS records using a protocol similar to that used for the crash, incident, and event conditions. The input data included start time, end time, and impact attributes. All travel time records during the periods that the road work was active were assigned the impact category.

TTRMS Database Format

The TTRMS database spreadsheet compiles all of these data sources into a single table. The table includes the observed conditions for every time bin for one calendar year. Table 2.13 shows the headings in this table and the units, number formats, or entry type for each.

Table 2.13. TTRMS Database Attributes and Formats

Attribute | Format/Units
Timestamp | yyyymmdd hh:mm
Travel Time | XX.XX minutes
PrecipType | Precipitation Type (text)
VMT | XX.XX vehicle-miles
EventType | Description (text)
Crash_Severity | Code (K, A, B, INJ, C, N, PDO)
Crash_Impact | Description (text)
Incident_Type | Description (text)
Incident_Impact | Description (text)
Road Work Impact | Description (text)

TTRMS Analysis Tool

Following development of the TTRMS database using the macro-enabled spreadsheet tool, the analysis to develop travel time reliability measures can be undertaken. A separate analysis tool was developed to facilitate these calculations. This was completed in a separate spreadsheet from the database tool for two important reasons:
• The database spreadsheet has a very large file size (approximately 300 MB) and extensive computation features, making it time-consuming to operate.
• Researchers wanted to distinguish between the activities associated with building the database and those associated with developing travel time reliability measures.
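Before turning to the analysis tool, the weather rules described earlier (forward-filling the previous condition and keeping the most severe reading when several fall in one 5-minute bin) can be sketched as follows. This is a minimal illustration under assumed record structures, not the macro logic of the TTRMS spreadsheet itself.

```python
from datetime import timedelta

# Severity order for precipitation type, per Table 2.12 (lower rank = more severe).
TYPE_RANK = {"Snow": 0, "Frozen": 1, "Rain": 2, "Other": 3, "None": 4}

def assign_weather_to_bins(bin_starts, readings, bin_minutes=5):
    """bin_starts: sorted bin-start datetimes; readings: (timestamp, precip_type)
    tuples sorted by timestamp. Returns {bin_start: precip_type}."""
    assigned, carried, r = {}, "None", 0
    for b in bin_starts:
        bin_end = b + timedelta(minutes=bin_minutes)
        in_bin = []
        while r < len(readings) and readings[r][0] < bin_end:
            in_bin.append(readings[r][1])
            r += 1
        if in_bin:
            # multiple readings in one bin: keep the most severe condition
            carried = min(in_bin, key=lambda t: TYPE_RANK.get(t, len(TYPE_RANK)))
        # otherwise the previous bin's condition is carried forward
        assigned[b] = carried
    return assigned
```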

The analysis tool references the TTRMS database with all the associated records shown in Table 2.13. It is a macro-enabled spreadsheet which produces basic travel time reliability measures such as cumulative density function (CDF) curves, reliability indices, and more. Regime Selection Query Tool A tool for developing CDF curves for specific weather conditions and event traffic was created. A graphical user interface (GUI) query tool allows users to select up to eight specific weather or event thresholds. This feature is expected to be updated to include crashes, incidents, and road work conditions in future steps. Once the user selects the conditions to be analyzed, the tool produces overlapping CDF curves for each condition. The travel times on the CDF curves are shown for 0.1-minute intervals, which provide a high degree of resolution on output graphics. The CDF curves in this tool are computed in two ways. First, the number of observations (time bins) is used to assign the cumulative percentages at each travel time. The Minnesota pilot team felt strongly that the number of users at each travel time should also be represented, so a second approach which weights the cumulative percentages based on VMT was also used. For each regime, the accumulated VMT was calculated for every time interval. The cumulative VMT for each regime, at each time interval, was divided by the total VMT for each category to get the cumulative percentage. The CDF curves display the travel time along the x-axis and the cumulative percentage along the y-axis. This represents the percentage of the VMT under a specific travel time for each regime. This tool is useful for comparing the reliability performance for very specific conditions or even different types of conditions to others on a given highway. Figure 2.9 shows an example of a CDF curve developed using the query tool. It shows traffic along I-94 westbound (heading toward downtown Minneapolis) during weekday afternoons under conditions with and without Twins baseball games. The CDF curves clearly show that travel time reliability is severely impacted in the records that include Twins games. Both the non-weighted and VMT-weighted CDF curves show similar patterns for these conditions. 35
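The two CDF formulations described above, one based on counting observations and one weighted by VMT, can be computed for a single regime with a short sketch like the following. The array inputs and function name are assumptions for illustration.

```python
import numpy as np

def cdf_curves(travel_times, vmt):
    """Return (sorted travel times, observation-based CDF, VMT-weighted CDF)
    for one regime. travel_times and vmt are equal-length arrays of the
    5-minute observations belonging to that regime."""
    tt = np.asarray(travel_times, dtype=float)
    w = np.asarray(vmt, dtype=float)
    order = np.argsort(tt)
    tt_sorted = tt[order]
    count_cdf = np.arange(1, len(tt_sorted) + 1) / len(tt_sorted)   # share of time bins
    vmt_cdf = np.cumsum(w[order]) / w.sum()                         # share of VMT (users)
    return tt_sorted, count_cdf, vmt_cdf
```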

Figure 2.9. Query tool CDF curve example. (The figure plots cumulative percentage against travel time in minutes for weekday April-to-September conditions, 9:00 to 19:00, showing the None regime [12,849 observations] and the Twins_A regime [1,813 observations] in both non-weighted and VMT-weighted form.)

Nonrecurring Conditions Surface Plots

A useful method of reviewing all of the nonrecurring conditions data was to display the conditions of each category in a surface plot. The analysis tool prepares these surface plots for all input data (weather, event, crash, incident, and road work). Figure 2.10 through Figure 2.15 represent year 2012 conditions for TH-100 northbound.
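As a rough illustration of how such a day-by-time surface plot could be reproduced outside the spreadsheet environment, the sketch below assumes the year of 5-minute observations has already been arranged as a two-dimensional array (days by bins per day), with missing records stored as NaN so they remain blank in the plot. It is not part of the TTRMS tool.

```python
import numpy as np
import matplotlib.pyplot as plt

def surface_plot(values_by_day):
    """values_by_day: 2-D array shaped (days, bins_per_day) of, for example,
    travel time index or VMT, with NaN where no record exists."""
    data = np.ma.masked_invalid(np.asarray(values_by_day, dtype=float).T)
    fig, ax = plt.subplots(figsize=(10, 4))
    mesh = ax.pcolormesh(data)      # x: day of year, y: 5-minute interval of the day
    ax.set_xlabel("Day of year")
    ax.set_ylabel("5-minute interval of day")
    fig.colorbar(mesh, ax=ax)
    return fig
```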

Figure 2.10. 2012 travel time surface plot for TH-100 northbound. Figure 2.11. 2012 VMT surface plot for TH-100 northbound. 37

Figure 2.12. 2012 weather surface plot for TH-100 northbound. Figure 2.13. 2012 crash surface plot for TH-100 northbound. 38

Figure 2.14. 2012 incident surface plot for TH-100 northbound. Figure 2.15. 2012 road work surface plot for TH-100 northbound. 39

Aggregate Reliability Measures

The TTRMS analysis tool also provides users with aggregate reliability measures for the facility under evaluation. These measures include many of the tools described in previous SHRP 2 reliability literature, such as CDF curves and reliability indices. Others are also described here, such as pie charts of regime observation frequency and delay.

While the nonrecurring conditions data sources include a significant level of detail (e.g., crash severity, incident impact, and so on), there are too many possible combinations to reference all of them in a baseline evaluation. Rather, a binary indicator for the presence or absence of each condition is used to establish initial regimes. Since there are five factor categories (weather, event, crash, incident, road work), this results in 2^5 = 32 regimes under this rubric. The 32 combinations are shown in Table 2.14. Travel times for each of these regimes are summarized in cumulative distributions, which are then displayed in a CDF curve graph. Figure 2.16 shows an example of this graphic for TH-100 northbound in 2012. It includes a CDF curve for all observations, regardless of regime factors, and 27 separate regime categories (five regimes were not observed in the data).

Figure 2.16. 2012 TH-100 northbound travel time cumulative density function (CDF) curve for 27 regimes.

Initially, the analysis spreadsheet identifies the free-flow travel time for the facility. It is also equipped to compute the vehicle-hours traveled (VHT) and delay for every time period. These calculations are needed to calculate the cumulative delay for the facility overall and for each regime. Figure 2.17 shows an observation frequency pie chart covering each of the regimes, that is, the proportion of time intervals throughout the year that each condition was

observed. In this example, a 5-minute interval was used, resulting in a total of 105,120 intervals in 1 year. The graph labels the percentage for each regime.

Table 2.14. List of Regimes

Combination Number | Description
1 | None
2 | Road Work
3 | Incident
4 | Incident, Road Work
5 | Crash
6 | Crash, Road Work
7 | Crash, Incident
8 | Crash, Incident, Road Work
9 | Event
10 | Event, Road Work
11 | Event, Incident
12 | Event, Incident, Road Work
13 | Event, Crash
14 | Event, Crash, Road Work
15 | Event, Crash, Incident
16 | Event, Crash, Incident, Road Work
17 | Weather
18 | Weather, Road Work
19 | Weather, Incident
20 | Weather, Incident, Road Work
21 | Weather, Crash
22 | Weather, Crash, Road Work
23 | Weather, Crash, Incident
24 | Weather, Crash, Incident, Road Work
25 | Weather, Event
26 | Weather, Event, Road Work
27 | Weather, Event, Incident
28 | Weather, Event, Incident, Road Work
29 | Weather, Event, Crash
30 | Weather, Event, Crash, Road Work
31 | Weather, Event, Crash, Incident
32 | Weather, Event, Crash, Incident, Road Work
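The 32 regime labels in Table 2.14 follow a simple binary enumeration of the five factor indicators, with road work as the fastest-changing factor and weather as the slowest. The following sketch reproduces that ordering; it is a convenience for readers rather than code from the analysis spreadsheet.

```python
from itertools import product

FACTORS = ["Weather", "Event", "Crash", "Incident", "Road Work"]

# Enumerate the 2^5 = 32 regimes in the same order as Table 2.14: the
# combination number is 1 plus the binary value formed by the factor
# indicators (Road Work is the least significant bit, Weather the most).
regimes = []
for bits in product([0, 1], repeat=len(FACTORS)):
    present = [f for f, b in zip(FACTORS, bits) if b]
    regimes.append(", ".join(present) if present else "None")

assert regimes[0] == "None"
assert regimes[1] == "Road Work"
assert regimes[31] == "Weather, Event, Crash, Incident, Road Work"
```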

Figure 2.17. 2012 observation frequency pie chart by regime. The delay experienced during each of these regimes is also displayed in a pie chart in the analysis tool. To accomplish this, the average delay for each time period—observed travel time minus free-flow travel time—is multiplied by the number of users during that time period. All of the time periods are then separated by regime to establish the proportion of delay experienced under each condition. The resulting pie chart is shown in Figure 2.18. 42
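A minimal sketch of the delay-by-regime aggregation just described is given below. Estimating the number of users in a bin as VMT divided by facility length is an assumption made for illustration, as are the record keys; the TTRMS spreadsheet performs the equivalent calculation internally.

```python
def delay_by_regime(records):
    """records: iterable of dicts with assumed keys 'travel_time' and
    'free_flow_tt' (minutes), 'vmt', 'length_miles', and 'regime'.
    Returns total vehicle-hours of delay accumulated under each regime."""
    totals = {}
    for rec in records:
        vehicles = rec["vmt"] / rec["length_miles"]                 # users in the bin (assumed proxy)
        delay_hours = max(rec["travel_time"] - rec["free_flow_tt"], 0.0) / 60.0
        totals[rec["regime"]] = totals.get(rec["regime"], 0.0) + vehicles * delay_hours
    return totals
```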

Figure 2.18. 2012 delay pie chart by regime.

Reliability Indices

In addition to generating the various plots and charts, the analysis spreadsheet computes several reliability index statistics. These indices include the following:

Travel Time Index (TTI): The ratio of the average observed travel time to the average free-flow travel time.

TTI = TT_Observed / TT_FreeFlow

Buffer Index (BI): The proportion of extra time (or time cushion) that most travelers add to their average travel time when planning trips to ensure on-time arrival.

BI = (TT_95% − TT_Mean) / TT_Mean

Planning Time Index (PTI): The factor applied to the free-flow time needed to ensure on-time arrival 95 percent of the time. It differs from the buffer index since it includes recurring delay as well as unexpected delay.

PTI = TT_95% / TT_FreeFlow

Planning Time Failure/On-Time Measures: Describes the percentage of trips with travel times within a certain factor of the median travel time. Common thresholds include

1.1 × Median Travel Time
1.25 × Median Travel Time

Other formulations of these measures denote the percentage of trips with average speeds below a specified threshold: for example, 50 mph, 45 mph, or 30 mph.

80th Percentile Travel Time Index: The 80th percentile travel time divided by the free-flow travel time. It represents another threshold of impacted traffic flow condition.

Misery Index: The average of the highest 5 percent of travel times divided by the free-flow travel time. This is often referred to as the 97.5 percent travel time index.
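All of the indices defined above can be computed directly from an array of observed travel times for a facility. The snippet below is an illustrative sketch of those definitions, not the analysis spreadsheet's implementation.

```python
import numpy as np

def reliability_indices(travel_times, free_flow_tt):
    """travel_times: array of observed travel times (minutes) for one facility
    and period; free_flow_tt: free-flow travel time (minutes)."""
    tt = np.asarray(travel_times, dtype=float)
    tt95, tt80 = np.percentile(tt, [95, 80])
    mean_tt, median_tt = tt.mean(), np.median(tt)
    k = max(1, int(round(0.05 * len(tt))))        # highest 5 percent of observations
    worst_5pct = np.sort(tt)[-k:]
    return {
        "TTI": mean_tt / free_flow_tt,
        "BI": (tt95 - mean_tt) / mean_tt,
        "PTI": tt95 / free_flow_tt,
        "80th Percentile TTI": tt80 / free_flow_tt,
        "Misery Index": worst_5pct.mean() / free_flow_tt,
        "On-time (<= 1.1 x median)": float(np.mean(tt <= 1.1 * median_tt)),
        "On-time (<= 1.25 x median)": float(np.mean(tt <= 1.25 * median_tt)),
    }
```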

CHAPTER 3 RELIABILITY REPORT Description of Facilities Three study highways were selected for the Minnesota pilot testing of the SHRP 2 reliability products. The first facility analyzed was TH-100, starting at 77th Street in Edina and ending at 57th Avenue in Brooklyn Center. TH-100 is a major north-south freeway facility in the western portion of the metropolitan area. The roadway has a six-lane core with the exception of the southbound lanes just north of the TH-7 interchange, which has a two-lane cross section. TH-100 is a major feeder to several east-west roadways including I-494 and TH-62 in the southern, I-394 in the central, and I-94/I-694 in the northern parts of the roadway. The current average daily traffic (ADT) is approximately 100,000 vehicles per day (vpd) near the I-394 interchange. The roadway’s crash history has been fairly low for the past several years. The next study facility was I-94 from I-494 in Maple Grove to its junction with TH-101 in Rogers. This portion of I-94 is currently a six-lane freeway, with a rural four-lane divided interstate highway west of TH-101. The road handles approximately 100,000 vpd and is an important commuter route in the northwest travel shed of the metropolitan area. It also serves as a primary route to the northern Minnesota recreational lakes area, handling both commuter and weekend recreational traffic during the summer months. The last facility includes a second stretch of I-94 between Minneapolis and Saint Paul. This segment of I-94 has a six-lane core as well as segments with eight lanes or more. The east and west ends have interchange complexes with two or more interstate freeways and at least one additional trunk highway approaching the junction. The west end of this facility has a six-lane tunnel with a right-angle curve on the approach, which is a capacity-limiting geometric feature. The roadway handles up to 158,000 VPD, is a primary commuter route between Saint Paul and Minneapolis, and it provides a major river crossing over the Mississippi River. This portion of I- 94 was part of the designated traffic diversion route during the I-35W bridge collapse, at which time it carried 250,000 daily trips. The roadway has an exceptionally high crash history, with both ends ranking near the top of statewide locations. Results Summary The following descriptions provide an overview of the information provided in the sample reliability reports. These explanations are intended to allow the reader to understand the methods and calculations used to develop the statistics and graphics presented in each report. In many cases, these methods have been refined since the preparation of the Task 3 Technical Memorandum, and are noted as such. Examples are also provided to help illustrate these descriptions. The complete reliability reports are presented in Appendix A. 45

Facility Characteristics

For each highway presented in the reliability report, a series of roadway characteristics is provided. These begin with a number of basic elements describing the facility's physical attributes and traffic demand. These include
• Highway number
• Facility termini
• Length in miles
• Number of lanes
• Speed limit
• Annual average daily traffic

In addition, an aerial map is provided illustrating the full length of the facility, intersecting roadways, and surrounding environment.

Reliability Indices

To provide a summarized overview of the reliability performance of the facility, several reliability indices were calculated. These indices include
• TTI
• BI
• OTP
• PTI
• 80th Percentile Travel Time Index
• Misery Index

Surface Plots

The surface plots have been revised from the Task 3 Technical Memorandum to display individual records throughout an entire year for each facility. This makes the plots more accurate and easier to compare with surface plots from other years and highways. For all of the surface plots shown in this section, each day of the year is shown across the x-axis from left to right. The time of day is shown on the y-axis from bottom to top and is split into 5-minute time intervals.

Surface plots were prepared for all of the data elements included in the TTRMS database. These can broadly be categorized into two groups: one is the traffic data, which is continuous and includes the aggregate measures of VMT and travel time. The other is the nonrecurring conditions data. These records are not necessarily continuous, as there are times when none of these conditions are present on the highway. The nonrecurring conditions include
• Weather
• Crash
• Incident

• Road Work
• Event

The traffic data elements (VMT and travel time) are presented separately under Traffic Data below. A key modification to these plots was that records without a given condition present are shown as blank rather than as a color. This avoids the impression that a "None" condition is part of the data; the blank areas correctly represent the absence of records.

Weather

Figure 3.1 shows the weather data collected at the Weather Underground site near the TH-100 study facility in 2012. This data provided information about when precipitation was observed, but the type was not recorded. MnDOT's Road and Weather Information System (R/WIS) was used to determine the precipitation type based on temperature. The four categories shown in Figure 3.1 are used in all reports, but not all categories are represented in Figure 3.1.

Figure 3.1. 2012 TH-100 northbound weather surface plot.

Crash

The surface plot for the crashes in 2012 along TH-100 northbound is shown in Figure 3.2. Crash and incident information was gathered from three different sources:
• Minnesota State Patrol Computer Aided Dispatch (CAD) data
• MnDOT's Dynamic Message Sign (DMS) logs
• Minnesota Department of Public Safety (DPS) crash records

Crash plots are color-coded by crash severity.

Figure 3.2. 2012 TH-100 northbound crash surface plot. Incident Incident records were developed from two of the same sources used for the crash records: MnDOT DMS logs and State Patrol CAD data. Examples of incidents include disabled vehicles and debris on the road. As shown in Figure 3.3, incident plots are color-coded by impact to roadway capacity. Figure 3.3. 2012 TH-100 northbound incident surface plot. Road Work Maintenance and construction information was primarily identified from the MnDOT DMS logs, news releases, and the annual construction program. As shown in Figure 3.4, road work surface plots have been updated to reflect the impact of the road work on the facility. 48

Figure 3.4. 2012 TH-100 northbound road work surface plot. Event Figure 3.5 shows that the majority of the events considered in this analysis take place during or after the p.m. peak period. Therefore, events have a greater impact on the facilities where the traffic volume is highest during the p.m. peak period. Figure 3.5. 2012 TH-100 northbound event surface plot. 49

Traffic Data The traffic data is a continuous, rather than a discrete data source, and is therefore displayed differently in the surface plots compared to the nonrecurring conditions data. The traffic data category is comprised of the VMT and travel time observations. VMT VMT data for each facility was obtained from in-road loop detectors collected via the TICAS program, as described in the Task 3 Technical Memorandum. This metric is used to provide an approximation of the traffic demand present on the highway. In these plots, each 5-minute observation is displayed as a unique value. Seven VMT bins with increments of 1,000 vehicle-miles were selected for the VMT surface plots shown in Figure 3.6. Figure 3.6. 2012 TH-100 northbound VMT surface plot. Travel Time The travel time surface plots shown in Figure 3.7 have been refined more dramatically since the previous technical memorandum (Task 3 Technical Memorandum). In addition to displaying each individual record to make the surface plots more accurate, the travel time surface plots now display the travel time in terms of the travel time index (TTI). The TTI is the ratio of the observed travel time divided by the speed limit (free-flow) travel time. This allows for an easier comparison between each highway, regardless of length or free-flow speed. The following thresholds were used for the travel time surface plots: • Speed limit travel time. For this facility the speed limit travel time is 14.8 minutes. • The 45 miles per hour (mph) travel time. This threshold was chosen because MnDOT defines congestion as 45 mph or less. The 45 mph travel time for northbound TH-100 is 19.5 minutes. 50

• 1.5-2.0 times the TTI • 2.0-2.5 times the TTI • 2.5-3.0 times the TTI • 3.0-3.5 times the TTI • 3.5-4.0 times the TTI • Greater than 4.0 times the TTI Figure 3.7. 2012 TH-100 northbound travel time surface plot. In addition to the surface plots listed above, a travel time cumulative density function (CDF) was developed along with observation and delay pie charts. In total, there are 32 regimes of nonrecurring conditions. The CDF plots shown in Figure 3.8 have been simplified to show combinations in the “others” category. 51

Figure 3.8. 2012 TH-100 northbound travel time CDF curve. Observation Frequency Pie Chart by Regime Figure 3.9 shows the observed frequency of each of the nonrecurring factors for northbound TH- 100 in 2012. This represents the number of 5-minute intervals throughout the year that have these factors present in the database. In this example, 76 percent of the time intervals do not have any nonrecurring factors present, i.e., the conditions are normal. The intervals with a single factor observed, such as weather, crash, incident, event or road work, range from one percent to 11 percent of the intervals. Time intervals with two or more factors present are shown in the combinations category and were observed in two percent of the time intervals in 2012. 52

Figure 3.9. 2012 TH-100 northbound observation pie chart. Delay Pie Chart by Regime Figure 3.10 shows the delay experienced during each of the regimes. To calculate this, the average delay for each time period (observed travel time minus free-flow travel time) is multiplied by the number of users during that time period. All of the time periods are then separated by regime to establish the proportion of delay experienced under each condition. Comparing this chart with Figure 3.9 shows that a disproportionate amount of the delay is experienced during times with nonrecurring factors present, indicating that these factors contribute to increased delay. 53

Figure 3.10. 2012 TH-100 northbound delay pie chart. Comparison Pie Charts The comparison pie charts for I-94 are shown in Figure 3.11. The radius of each comparison pie chart is proportional to the total annual delay; the larger the radius of the pie chart the more delay was observed. From the pie charts, it is clear that the delay is lowest in 2006 and 2009. This lower delay in 2006 is due to the additional lanes, interchange modifications, and capacity improvements along TH-100 (in both the northbound and southbound directions) near TH-7 and Minnetonka Boulevard. By 2007 capacity was reached once again, resulting in greater delay. In addition, the higher delays observed in 2007 and 2008 can be attributed to the increased traffic on TH-100 due to the I-35W bridge collapse. The implementation of ramp metering in late 2008 on the north part of the facility led to the reduced delay shown in 2009. Figure 3.11. I-94 comparison pie charts. 54

Comparison Bar Charts The comparison bar charts shown in Figure 3.12 display the total delay separated by each year for a particular highway. The solid black bar represents the average volume of daily trips recorded along the facility. The height of the colored bar corresponds to the total delay observed. The various colors in the bar indicate the factors present during time intervals when drivers experienced delay. Figure 3.12. TH-100 northbound comparison bar charts. The bar charts on the last page of Appendix A clearly show that the total delay is much higher on I-94 between Minneapolis and Saint Paul than at any other location. In addition, the peak period for each direction can be identified based on the event delay, which always occurs during or after the p.m. peak hour. If there is a large amount of delay caused by events, the majority of the delay occurred during the afternoon or evening. Facility Observations TH-100 Northbound Overall, TH-100 in the northbound direction had more congestion and a larger amount of delay than in the southbound direction. The crash and incident analysis on this facility showed a large number of property damage crashes and incidents during July 2011. This phenomenon is believed to be the result of improperly coded data in the CAD system, which was a key data source for the crash and incident records. The team observed that events have a greater effect on the delay in the northbound direction than in the southbound direction. This is because the events considered in this analysis primarily take place in the afternoon when northbound traffic is heaviest. 55

TH 100 Southbound Southbound TH-100 had higher VMT during the a.m. peak hour than northbound TH-100. From January to August 2008, there was no CAD data available, which resulted in fewer crash and incident observations. However, crashes and incidents still account for 15 percent of the overall delay for southbound TH-100 in 2008. Overall, the reliability evaluation confirmed that TH-100 is a moderately congested freeway facility. Recurring congestion due to high demand is experienced in both the northbound and southbound directions. The comparison of nonrecurring condition and annual delay pie charts demonstrate that factors such as weather, crashes, and incidents are associated with disproportionate delay compared to normal conditions, indicating that there is potential to improve reliability through countermeasures such as improved snow removal, crash reduction, and rapid incident clearance. The other key finding from the investigation of TH-100 was that the diversion of traffic from the I-35W bridge collapse, from August 2007 through September 2008, also had a negative impact on the reliability of this facility. I-94 Westbound: I-494 to TH-101 Westbound I-94 between I-494 and TH-101 had fewer crashes and incidents reported. Overall, this facility is not very congested, so the various regimes have a larger impact on the facility compared to the others. October 2010 had a very high travel time compared to other months. In 2009, a large portion of the delay was due to road work. In addition, the travel time CDF curve for 2009 shows that the travel time is less reliable in 2009 than in other years. From August 2006 to February 2007, the VMT drops significantly, which is believed to be the result of errors in the loop detector data. The travel time is also noticeably less during this period. Also, westbound I- 94 from I-494 to TH-101 has more delay than eastbound I-94 along the same stretch. However, there is an increase in delay from 2009 to 2011 in both directions of I-94 from I-494 to TH-101. This increase is most likely caused by the construction taking place in 2010. I-94 Eastbound: TH-101 to I-494 The delay caused by events along this facility was small compared with the other locations. This is because the VMT for this facility is higher in the morning than in the afternoon or evening, when the majority of the events take place. The delay caused by road work was higher in 2011 and 2010 than any of the other years, as a major pavement rehabilitation project was underway at this time. In 2010, there was a low VMT and high travel time during the summer months. This is most likely caused by the increased amount of road work along this facility during that time. 2009 also saw a significant amount of road work resulting in higher travel times. The investigation of I-94 between Maple Grove and Rogers resulted in changed perceptions about the performance of this facility. Previously, the perception of many analysts and stakeholders was that this facility, particularly westbound I-94, experienced heavy recurring congestion on weekdays throughout the year. The reliability evaluation, however, demonstrated that most weekdays do not experience significant congestion, and that unreliable travel times are almost exclusively attributable to recreational travel and other nonrecurring conditions. 56

I-94 Eastbound: Minneapolis to Saint Paul Eastbound I-94 between Minneapolis and Saint Paul consistently had a lower VMT each year than westbound I-94. In addition, the delay caused by events was higher in the eastbound direction compared to I-94 in the westbound direction. In August 2007, the travel time and VMT increased slightly along this facility. Lastly, in 2006, a larger percentage of the delay was due to road work compared with the other years. According to the delay pie charts for all years and highways (see Appendix A), it is obvious that I-94 in both the eastbound and westbound directions has higher delays than the other facilities. I-94 Westbound: Saint Paul to Minneapolis In general, the number of crashes and incidents reported along I-94 between Saint Paul and Minneapolis represents some of the largest in the state. The VMT along westbound I-94 is similar during the a.m. and p.m. peak hours, but the travel time was slightly higher in the afternoon. This facility is congested for multiple hours in the morning and afternoon peaks. In 2012, the majority of road work occurred in the evening. During 2010, the travel time was higher throughout the entire day compared with the other years; 2010 also had more road work than any of the other years. VMT was the highest for this facility during 2009. The last notable observation for westbound I-94 between Saint Paul and Minneapolis is that the VMT increased during the a.m. peak hour beginning in August 2007. This is a result of the traffic diversion following the I-35W bridge collapse. I-94 between Minneapolis and Saint Paul is a core artery in the Twin Cities region and is well known for heavy congestion and high frequencies of crashes and incidents. This evaluation not only affirmed that understanding of this facility, but also quantified the magnitude of issues on this roadway. Compared with other study highways (TH-100 and I-94 from Maple Grove to Rogers), this section of I-94 has both recurring and nonrecurring delay that is an order of magnitude greater. Quantifying these measures is a critical step forward for decision makers to allocate resources to address the most pressing issues in the region. 57

CHAPTER 4 EVALUATION OF THE PROJECT L07 TOOL Introduction The L07 tool was developed as an economic analysis tool to compare treatments that help mitigate nonrecurring congestion on freeway and major arterial segments. It is designed for use by agencies seeking a tool to assist with the analysis and prioritization of projects addressing nonrecurring congestion. The tool attempts to identify the full range of possible roadway design features used by transportation agencies to improve travel time reliability and reduce delays due to key causes of nonrecurring congestion. The tool then monetizes the operational, safety, and reliability benefits and provides a benefit-cost ratio for each treatment. This can be used as an aid to help provide recommendations for treatments included in future roadway designs and improvements. In its current form, the L07 tool focuses on geometric improvement options to help reduce nonrecurring congestion. Initial Investigation The L07 tool was reviewed to determine the usability and understanding of the tool. The initial review focused on the graphical user interface (GUI). The GUI screen is divided into three panes, each with a series of tabs. The tool being split into three sections simplified this process greatly. The three sections contained the following: Site Inputs This area has a background color of red and is located on the left side of the screen. It contains seven different input tabs including Geometry, Demand, Incident, Weather, Event, Work Zone, and Graphs. Some of the tabs require user-supplied input data, while others provide a default value that could be adjusted by the user if more detailed data are available. Treatment Data and Calculations This area has a background color of green and is located in the center of the screen. It allows the user to select design treatments intended to address nonrecurring congestion. Up to 10 treatments can be selected at a time. For each treatment, the user can select the provided default values for benefits generated by the treatment or provide new values, if more detailed data are available. The user is also required to enter capital costs, annual maintenance costs, and service life for each treatment. The tool then provides monetized benefits by category and an economic effectiveness value for each treatment, based on the user’s inputs. Optionally, the user can define the value of time, reliability ratio, and discount rate if local data is available. Results This area has a background color of blue and is located on the right side of the screen. It contains three selectable tabs including: Reliability Inputs, TTI, and Reliability Measures of Efficiencies 58

(MOEs). These tabs provide the user with a visual summary of the data input for each scenario being analyzed and the option to select the MOE to be displayed.

Findings Summary

All inputs and outputs in each tab are clearly marked with defined units, allowing the user to quickly understand what each section requires or reports by following the logical layout of the GUI. After a minimal amount of effort, new users were able to become acclimated to the tool quickly. The inclusion of the graphs on the GUI was particularly helpful, allowing the user to immediately see the impact of any change in inputs. Additionally, the combination of "help" buttons located on most tabs and the SHRP 2 Report L07: Identification and Evaluation of the Cost-Effectiveness of Highway Design Features to Reduce Nonrecurrent Congestion (Potts et al. 2014), hereafter referred to as the L07 final report, allowed the user to access more detailed information. Information provided in these resources included background information on how required inputs were computed, how default values were developed (SHRP 2 Report S2-L03-RR-1: Analytical Procedures for Determining the Impacts of Reliability Mitigation Strategies [Cambridge Systematics, Inc. et al. 2013] was referenced frequently), and how calculations were performed within the L07 tool. Since the L07 tool is not presently set up to allow the user to copy and paste input data, the pilot testing team found it useful to input data directly into the spreadsheet and reload the GUI to save time when performing repetitive inputs (i.e., new demand sets).

Evaluation Process

The Minnesota pilot testing team selected three segments of westbound I-94 near downtown Minneapolis for year 2012. These segments experience heavy demand at or above capacity for multiple hours per day and have high crash and incident rates. The three segments analyzed were between
• TH-280 On-Ramp and the Huron Boulevard Off-Ramp
• Southbound I-35W On-Ramp and the 11th Street Off-Ramp
• Hennepin Avenue/Lyndale Avenue Off-Ramp and the I-394 Off-Ramp

The pilot testing team elected to specifically analyze year 2012 because of the availability of a more robust event data set. The team's evaluation objective was to analyze two different scenarios with the L07 tool for each of these segments. The first would be a "default" scenario requiring a minimal amount of input data from the analyst. The second would be a "detailed" scenario inputting detailed data developed using the travel time reliability monitoring system (TTRMS) database for each of the test segments. The TTI percentile output from the L07 tool for these scenarios would then be compared to real-life data generated for these segments using the L02 TTRMS database tool. The questions the team wanted to answer with this evaluation were the following:

• Can the L07 tool accurately replicate travel conditions along the highly congested test segment? • What is the value in developing additional detailed data for use in the L07 tool? Will agencies with less available data still be able to obtain accurate results from the L07 tool? Input data for each evaluation scenario was obtained using the following methodology: Default Scenario Geometry: Input data for the geometry tab was obtained using Google Earth for each project segment. The measured free-flow speed was obtained by examining detector data for the segment from year 2012. Demand: Input data for the demand tab was obtained using the MnDOT Data Extract software, which provides access to vehicle count data collected by loop detectors in the roadway. Data was pulled for each segment by 5-minute time interval. Speed and volume data were used to compute segment demand by hour of the day as prescribed in Chapter 4 of the L07 final report. The 30th highest day was found for each hour of the day using the volume data obtained from detectors. The speed data were then reviewed to determine when the onset of congestion started. The L07 final report states that this threshold is typically between the 35- to 45-mph range. A value of 45 mph was used to determine when congestion would start to build, because this is the threshold used in MnDOT’s annual congestion report. Cumulative demand was then computed until the halfway point of the congested period, dissipated during the second half of the congestion period, and matched with the volumes immediately after the congestion period. Truck data were obtained from the MnDOT Traffic and Forecasting Traffic Volume (average annual daily traffic/heavy commercial annual average daily traffic [AADT/HCAADT]) Table. Incident: For the default scenario, crash data were obtained using MnCMAT for each segment from year 2012. These data were downloaded and sorted to ensure that crashes occurred between the test segment’s mileposts. Crash data were then sorted by severity to obtain the inputs required for the L07 tool. Crashes with no label, unknown designation, or property damage only (PDO) designation were considered property damage only crashes. Crashes labeled as “C” or “B” were included in the minor injury crash type, and crashes labeled “A” or “K” were included in the major injury and fatal category. Incident data for the default scenario used the L07 tool’s “Calculate based on relation to crash %” option. For both crash and incident duration data the L07 tool defaults were used. An example of the incident input data sheet is shown in Figure 4.1. 60

Figure 4.1. Example incident input data.

Crash costs were obtained from MnDOT's recommended values for use in benefit-cost analysis for transportation projects. Crash costs from MnDOT are provided for PDO, C, B, A, and K severity crashes. Crash costs used in the L07 tool were weighted based on historical crash data for years 2010 to 2012 by facility type. These data are available from MnDOT Traffic Safety Green Sheets, which provide a variety of information about crash and severity rates based on facility type.

Weather: For the default scenario, the proxy location for Minneapolis, Minnesota, was selected from the L07 tool defaults.

Event: No event data were included in the default scenario. It was assumed that a smaller or outstate agency would not have a detailed documented history of event data.

Work Zone: No work zone data were included in the default scenario. It was assumed that a smaller or outstate agency would not have a detailed documented history of work zone data.

Detailed Scenario

Geometry: Same process as the default scenario.

Demand: Same process as the default scenario.

Incident: Crash and incident data were obtained from the TTRMS database for each segment. Crash and incident data were developed for the TTRMS by combining data from MnCMAT, DMS logs, and CAD logs. In addition to a more robust data set for the number of crashes and incidents by type, these new data sources contained time stamp information that was used to calculate average crash and incident duration by severity, which were entered into the L07 tool. Crash costs for the detailed scenario were calculated using the same process as the default scenario.

Weather: Weather data were gathered from the TTRMS database for each segment. The TTRMS data were compiled from a Weather Underground data collection point near the study segments. Data were sorted by hours of accumulation by hour of the day for the entire year 2012, for both rain and snow precipitation types.

Event: Event data were obtained from the TTRMS database for each study segment. Event data for the TTRMS were obtained by gathering schedules for professional football, baseball, hockey, and basketball teams. Additional detailed event data were obtained from the Minneapolis Event Log. Events from this log had a wide range of sizes and impacts; therefore, only events with significant attendance and concentrated arrival and departure patterns were included in the TTRMS database (see Figure 4.2). The threshold was set at approximately 15,000-person attendance and featured activities such as University of Minnesota sporting events and live concerts.

Figure 4.2. Example event input data.

Event data were used in combination with VMT data from the TTRMS database to determine the percent increase in traffic volume during events by hour of the day. VMT from the TTRMS could not be attributed to an individual event type, so all events were combined into a single recurring average event that occurred 46 days a year. This was computed from the number of 5-minute time slices that contained an event out of the total number of 5-minute time slices for year 2012 within the TTRMS. To determine the percent volume increase by hour, the steps below were followed. These steps are also highlighted and shown in Figure 4.3.

1. Non-event and event VMT was computed for each hour (all days of the year).
2. The number of 5-minute time slices was counted for each hour (all days of the year).
3. Non-event VMT was divided by the number of non-event time slices to obtain an average non-event VMT. The same was done for the event information. This calculation was done for each input hour.
4. The event VMT was divided by the non-event VMT to determine a percent increase in VMT for each input hour.
5. If the event VMT showed a percent increase, that increase was carried forward for data smoothing.
6. If the event VMT did not show a percent increase, steps 1–4 were recomputed for weekdays only.
7. If the weekday data showed a percent increase, that value was carried forward for data smoothing. If no increase was found using either the all-days or the weekday data, the volume increase was determined to be zero percent.
8. Percent increase in VMT values that were carried forward for each input hour were averaged with the input hours before and after to generate the final value used in the L07 tool.
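The steps above can be summarized in a short script. The following is a minimal sketch, assuming the 5-minute TTRMS records are available as Python dictionaries with hour, weekday, event, and vmt fields; these field names are placeholders for illustration and are not the actual TTRMS schema.

    def pct_increase(records):
        """Average event VMT per slice versus average non-event VMT per slice."""
        event = [r["vmt"] for r in records if r["event"]]
        non_event = [r["vmt"] for r in records if not r["event"]]
        if not event or not non_event:
            return None
        avg_event = sum(event) / len(event)
        avg_non_event = sum(non_event) / len(non_event)
        return (avg_event / avg_non_event) - 1.0   # fraction, e.g. 0.036 = 3.6%

    def hourly_event_increase(slices, hours=range(24)):
        raw = {}
        for h in hours:
            in_hour = [r for r in slices if r["hour"] == h]
            inc = pct_increase(in_hour)                      # steps 1-4: all days
            if inc is None or inc <= 0:
                weekday = [r for r in in_hour if r["weekday"]]
                inc = pct_increase(weekday)                  # steps 6-7: weekdays only
            raw[h] = max(inc or 0.0, 0.0)                    # no increase -> 0 percent
        # step 8: smooth each hour with the hours before and after it
        smoothed = {}
        for h in hours:
            neighbors = [raw[x] for x in (h - 1, h, h + 1) if x in raw]
            smoothed[h] = sum(neighbors) / len(neighbors)
        return smoothed

The smoothed values returned by this sketch correspond to the "percent VMT increase used in L07 tool" column of the worked example in Figure 4.3.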

Figure 4.3. Example event input computation for 1 hour of the day. (For the 9 to 10 input hour, the all-days computation shows non-event VMT of 15,444,571 over 4,078 time slices [average 45,447] versus event VMT of 1,024,564 over 290 time slices [average 42,396], a -6.71 percent change. Because the event VMT is lower than the non-event VMT, the computation is repeated for weekdays only, giving non-event VMT of 12,664,329 over 3,006 slices [average 50,556] versus event VMT of 550,001 over 126 slices [average 52,381], a 3.61 percent increase. Averaging the calculated values for the 8 to 9 [3.34 percent], 9 to 10 [3.61 percent], and 10 to 11 [0.89 percent] hours yields the 2.61 percent value used in the L07 tool.)

Work Zone: Work zone data were obtained from the TTRMS data for each study segment. Work zone data for the TTRMS were developed using data from the DMS logs and were pulled from the TTRMS based on the segment's beginning and end mileposts. Five-minute time slices in the TTRMS were counted for each impact type (one-lane closure, two-lane closure, etc.) by hour of the day to compile the data in a format usable in the L07 tool (see Figure 4.4). Work zone columns were added for each impact type: the first work zone column included data for a one-lane closure, and the second column included data for a two-lane closure. A single day is made up of 288 individual 5-minute time slices. In the example, no individual hour had more than 288 5-minute time slices, so the work zones were only active for 1 day per year. The team did discover that the L07 tool would not allow the user to input a value in the "lanes closed" input boxes equal to the number of lanes on the test segment. For this reason, no work zone data were included for construction that closed the entire roadway segment.
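As a companion sketch, the slice-counting step for work zones can be expressed as follows. It assumes each work zone record is a single 5-minute slice carrying an hour-of-day and an impact label; the field names are illustrative, and the day-count conversion simply applies the 288-slices-per-day relationship noted above.

    from collections import defaultdict

    SLICES_PER_DAY = 288  # a day contains 288 five-minute time slices

    def work_zone_slices_by_hour(records):
        """Tally work zone 5-minute slices by impact type and hour of day.
        records: iterable of dicts with 'impact' (e.g., 'one-lane closure')
        and 'hour' (0-23). Returns counts[impact][hour]."""
        counts = defaultdict(lambda: defaultdict(int))
        for r in records:
            counts[r["impact"]][r["hour"]] += 1
        return counts

    def approx_work_zone_days(counts_for_impact):
        """Total slices divided by 288 approximates how many full days of
        work zone activity the counts represent for one impact type."""
        return sum(counts_for_impact.values()) / SLICES_PER_DAY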

Figure 4.4. Example work zone input data.

Validation Comparison

Validation of the L07 tool was performed by comparing the TTI percentile curves observed by the TTRMS to those produced by the L07 tool for both scenarios. Daily TTI percentile curves were compared for the year 2012 data set. Additionally, a.m. peak, p.m. peak, and off-peak CDF curves for TTI percentile values were compared between the TTRMS and the two L07 scenarios, with the objective of determining how well the L07 tool replicates real-life conditions under different demand situations.

The team first compared the daily TTI percentile results for year 2012 between the two L07 scenarios and the TTRMS data. Data for the test segment on westbound I-94 between the southbound I-35W on-ramp and 11th Street off-ramp are shown in the following figures (all segment test data can be found in Appendix B). Figure 4.5 shows the output from the TTRMS data, Figure 4.6 shows the results from the default L07 scenario, and Figure 4.7 shows the results from the detailed L07 scenario.

Figure 4.5. Hourly travel time index profile: observed field data from L02 analysis.

Figure 4.6. Hourly travel time index profile: L07 analysis with default inputs.

Figure 4.7. Hourly travel time index profile: L07 analysis with detailed inputs. Note: TTI 99% for the 16:00 hour is 109.78.

In the particular example shown above, the a.m. peak hour matches fairly well using the methodology prescribed in Chapter 4 of the L07 final report to compute demand. However, it was immediately apparent to the team that this methodology caused the L07 tool to drastically overestimate travel times on highly congested segments characterized by multiple hours of peak period congestion. A comparison between detector volume and computed demand is illustrated in Figure 4.8. The likely reason for the overestimation is that the analyst is compounding volume starting in the early afternoon (e.g., 2:30 p.m.) and ending later in the evening (e.g., 7:05 p.m.) into the demand calculation for the 4:00 p.m. to 5:00 p.m. hour. This is demonstrated in Figure 4.9. This demand computation process is repeated for each input hour and could use a different day, depending on the 30th highest volume for that hour.
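To make the compounding concrete, the following is an intentionally simplified sketch, not the full Chapter 4 procedure: it only identifies the contiguous below-threshold window around a target hour and totals the detector volume inside it, whereas the report's method then builds and dissipates demand across that window. The record field names are illustrative.

    CONGESTION_SPEED_MPH = 45  # MnDOT's onset-of-congestion threshold

    def congested_window(day_slices, target_hour):
        """day_slices: chronologically ordered 5-minute records for one day,
        each with 'hour', 'speed', and 'volume'. Returns the contiguous run of
        below-threshold slices that overlaps the target hour (empty if none)."""
        below = [s["speed"] < CONGESTION_SPEED_MPH for s in day_slices]
        seeds = [i for i, s in enumerate(day_slices)
                 if s["hour"] == target_hour and below[i]]
        if not seeds:
            return []
        start, end = seeds[0], seeds[-1]
        while start > 0 and below[start - 1]:
            start -= 1
        while end + 1 < len(day_slices) and below[end + 1]:
            end += 1
        return day_slices[start:end + 1]

    def window_volume(day_slices, target_hour):
        """Total detector volume across the congested window -- the quantity the
        Chapter 4 procedure redistributes as demand, which can span several hours."""
        return sum(s["volume"] for s in congested_window(day_slices, target_hour))

For a heavily congested day, the window returned for the 4:00 p.m. hour can stretch from mid-afternoon to early evening, which is why the computed demand for a single hour can far exceed the volume the segment actually carried in that hour.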

Figure 4.8. Segment volume comparison of demand computed by L07 method vs. detector counts.

Figure 4.9. Example demand computation for one input hour. (The figure lists 5-minute detector volumes for November 8, 2012, from 2:25 p.m. to 7:15 p.m., flags the time slices with speeds under 45 mph, accumulates unserved and cumulative demand across that window, and reports a total demand of 28,156 vehicles.)

The team also chose to plot TTI CDF curves for specific hours of the day for each test segment to examine in more detail how the L07 tool outputs compared to real-life conditions generated by the TTRMS database. Figure 4.10 through Figure 4.12 illustrate the a.m. peak, p.m. peak, and off-peak conditions, respectively, for the test segment between the southbound I-35W on-ramp and the 11th Street off-ramp. This was done to show the differences between the observed data and L07 tool output data for each hour of the day in isolation.

Figure 4.10. Morning peak (7:00 a.m.) CDF curves.

Figure 4.11. Afternoon peak (4:00 p.m.) CDF curves. Note: TTI 80% for the 4:00 p.m. hour is 16.44, TTI 95% is 22.44, and TTI 99% is 109.68.

Figure 4.12. Off-peak (3:00 a.m.) CDF curves.

The CDF curves reiterated to the team that the L07 tool actually calibrated relatively well during off-peak and congested a.m. peak conditions. However, the CDF curve for the p.m. peak hour shows how drastically the L07 tool results differ from the TTRMS data. Because the L07 tool results were substantially different from the TTRMS data for these segments, the team performed the analysis again using only volume data for each segment (30th highest measured volume for each hour). Figure 4.13 and Figure 4.14 show the results of the default and detailed analysis using volume data instead of demand data for the same sample segment used above.

Figure 4.13. Hourly travel time index profile: L07 analysis with default inputs—volume test.

Figure 4.14. Hourly travel time index profile: L07 analysis with detailed inputs—volume test.

Reviewing Figure 4.13 and Figure 4.14 shows that, for this particular segment, the 99th percentile curve shape matches the real-life conditions from the TTRMS data fairly closely (Figure 4.5), but the other TTI curves remain relatively flat. The team then tried applying adjustment factors to demand values in the detailed analysis to see if the TTI percentiles from the TTRMS could be replicated using the L07 tool. The team tested a variety of manipulations to the volume and demand inputs, including reducing the demand computations for hours using the 45 mph threshold by a factor ranging between 0.5 and 0.7, reducing only the highest p.m. peak hour by a factor of 0.5, and doubling the volume during the p.m. peak hour for time slices that would normally need a demand computation. It was found that simply doubling the volume during hours where a demand computation would normally be required (speeds under 45 mph) during the p.m. peak produced results that most closely matched the observed conditions. For the segment between the southbound I-35W on-ramp and the 11th Street off-ramp, the daily TTI curves shown for the volume testing in Figure 4.15 compare well to the TTRMS data in Figure 4.5. Similarly, TTI CDF curves show that the volume adjustment more closely replicates conditions shown in the TTRMS data (see Figure 4.16), with some variance for the 99th percentile TTI.

Figure 4.15. Hourly travel time index profile: L07 analysis with detailed inputs—demand tests.

Figure 4.16. Afternoon peak (4:00 p.m.) CDF curves—demand testing.

Additional Sensitivity Testing and Exploration

Through the pilot testing process, the team also performed additional informal analyses while becoming familiar with the L07 tool. The team created scenarios with theoretical segments and demands to test the functionality of the L07 tool and to understand how the tool computes variables such as speed, crash rates, and travel time savings benefits, as well as how benefits for certain treatment options changed when different input variables were adjusted. Findings from these tests were presented to local stakeholders to demonstrate tool capabilities and potential shortcomings.

Demand-Speed Equations

Initial findings from these tests showed that the tool uses a combination of Highway Capacity Manual (HCM) and National Cooperative Highway Research Program (NCHRP) equations for computing speed. The tool uses the HCM equation when volume is less than capacity and switches to the NCHRP equation when volume is greater than capacity. The likely reason is that the HCM speed equation is a first-order function and will return negative values when the volume is high enough, whereas the NCHRP equation is a second-order polynomial and will asymptotically approach zero as volumes continue to increase. These equations are highlighted in Figure 4.17.

Figure 4.17. Speed computation comparison in the L07 tool. PCE = passenger car equivalent.

The pilot team was curious why this additional complexity was introduced instead of only using the NCHRP equation. The team found that at the point of largest difference between the two equations in the uncongested regime (V/C < 1) there was a 2 mph difference in speed. Over a 5-mile segment with a free-flow speed of 70 mph, this is a difference in travel time of 9 seconds. In terms of travel time index, the difference is 1.11 for the HCM equation versus 1.08 for the NCHRP equation, less than a 3 percent difference.

Crash Rates

The team also found that the L07 tool used a crash rate equation based on the test segment's density. The function is shown in Figure 4.18. Local stakeholders were skeptical of the crash methodology used within the L07 tool because of how high the crash rate value could get before dropping off drastically. The L07 tool is capable of using crash rates over 50 crashes per million vehicle-miles traveled (MVMT), while even the highest-crash segments in the state do not exceed 15 crashes per MVMT.

Figure 4.18. Crash rate calculation in the L07 tool. (The figure plots the total, injury, and PDO crash rates, in crashes per MVMT, against density in pce/hr/ln.)

Design Treatments

Though the team's formal test case scenario did not include any treatment analyses, the team did examine two theoretical segments and the impact the L07 tool inputs had on the final benefit-cost ratio for each treatment. One segment was highly congested, with demand greatly exceeding capacity for a few hours of the day, and was characterized by high crash and incident occurrence with longer-than-default clearance times. The second segment was less congested, with demand never exceeding capacity and substantially fewer crashes with average clearance times. Table 4.1 and Table 4.2 summarize the testing completed by the team. Most capital costs, maintenance costs, and service life assumptions used the L07 tool defaults for purposes of this comparison.

Table 4.1. Treatment Comparison: Low Demand/Low Incident

Treatment                    Life (years)   Cost        Maintenance   B/C Ratio
Accessible Shoulder          25             $250,000    $2,000        0.21
Alternating Shoulder         25             $150,000    $2,000        0.29
Crash Investigation Site     20             $50,000     $2,000        0.55
Emergency Pull-off           25             $10,000     $500          2.74
Emergency Access             20             $20,000     $1,000        0.89
Emergency Crossovers         30             $5,000      $500          2.61
Gated Turnarounds            20             $10,000     $3,000        0.57
Drivable Shoulders           25             $250,000    $2,000        0.12
Extra High Median Barrier    20             $30,000     $3,000        1.69
Runaway Truck Ramp           20             $50,000     $2,000        0.47
Incident Screen              20             $10,000     $5,000        0.32
Wildlife Crash Reduction     20             $45,000     $1,000        2.09
Anti-icing Systems           10             $50,000     $5,000        0.74
Snow Fence                   10             $80,000     $4,000        0.61
Blowing Sand                 10             $30,000     $5,000        0.2

It was concluded that benefit-cost (B/C) ratios vary drastically based on the segment being analyzed. Many of the treatments on the low demand segment do not exceed 1.0 and do not provide a net benefit to the study segment. However, many of the treatments on the high demand segment not only exceed 1.0, but greatly exceed it. Analysts should review all outputs for reasonableness.
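For context on how the service life, capital cost, and maintenance columns interact with annual benefits, the following is a generic benefit-cost sketch; it is not the L07 tool's internal computation, and the 7 percent discount rate is an assumption chosen only for illustration.

    def capital_recovery_factor(rate, years):
        """Converts a present capital cost into an equivalent uniform annual cost."""
        return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

    def bc_ratio(annual_benefit, capital_cost, service_life, annual_maintenance,
                 discount_rate=0.07):
        """Ratio of annual benefits to annualized capital plus annual maintenance."""
        annualized_cost = (capital_cost * capital_recovery_factor(discount_rate,
                                                                  service_life)
                           + annual_maintenance)
        return annual_benefit / annualized_cost

    # Example: a $10,000 emergency pull-off with a 25-year life and $500/year
    # maintenance would need roughly $1,400 per year in delay and crash savings
    # to break even; with $1,600 per year the ratio is about 1.18.
    print(round(bc_ratio(1600, 10_000, 25, 500), 2))

Under a framing like this, the very large ratios in Table 4.2 simply reflect annual benefits that dwarf the modest annualized costs of most of these treatments on a heavily congested, high-incident segment.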

Table 4.2. Treatment Comparison: High Demand/High Incident

Treatment                    Life (years)   Cost        Maintenance   B/C Ratio
Accessible Shoulder          25             $300,000    $2,400        123.3
Alternating Shoulder         25             $150,000    $2,000        182.69
Crash Investigation Site     20             $50,000     $2,000        292.87
Emergency Pull-Off           25             $10,000     $500          1154.8
Emergency Access             20             $20,000     $1,000        8.61
Emergency Crossovers         30             $5,000      $500          18.18
Gated Turnarounds            20             $10,000     $3,000        6.05
Drivable Shoulders           25             $300,000    $2,400        0.57
Extra High Median Barrier    20             $30,000     $3,000        58.59
Runaway Truck Ramp           20             $50,000     $2,000        3.79
Incident Screen              20             $10,000     $5,000        2.64
Wildlife Crash Reduction     20             $4,500      $1,000        23.91
Anti-icing Systems           10             $50,000     $5,000        5.74
Snow Fence                   10             $80,000     $4,000        4.69
Blowing Sand                 10             $30,000     $5,000        0.2

Detailed Summary of Findings

While using the L07 tool and performing detailed test scenarios, the pilot team discovered several challenges with compiling input data for use in the tool, which are detailed below.

Demand

Demand calculations proved to be a challenge in tool calibration. The L07 tool specifically requires a demand input. It was found that the L07 tool drastically overestimated travel times when compared to real-life data while using these hourly demand computations for the segments tested. It was also found that the tool was not able to accurately analyze segments that are upstream or downstream of a bottleneck. Segments upstream of a bottleneck could have queues from the downstream bottleneck back up into the study section; the L07 tool does not account for this and underestimates travel times for segments in this category. Similarly, a study segment downstream of a bottleneck will have relatively free-flow conditions, and the L07 tool overestimates travel times for segments in this category.

Demand computation guidance is provided in Chapter 4 of the L07 final report. However, the team found that this guidance should potentially be revisited.

The guidance states that the analyst should determine the 30th highest hour of the year for each individual time slice. This creates a scenario where adjacent time slices could have demands occurring on different days of the year, and it resulted in some analysis scenarios where the demand jumped drastically between one time slice and the next. It also created a fair amount of additional work when computing the demand by time slice. Speeds for the 30th highest hour between 4:00 p.m. and 5:00 p.m. may have dropped below 45 mph starting at 2:00 p.m. that day and remained under 45 mph until 7:00 p.m.; the analyst would have to compute the demand from 2:00 p.m. to 7:00 p.m. just to get a value for the 4:00 p.m. to 5:00 p.m. time slice. This could need to be repeated for other adjacent time slices (which could have a different interval during which speeds were below 45 mph and would need to be calculated again). It is recommended that the analyst instead find the 30th highest day of the year and then perform hourly demand calculations using that day's data only. The computed hourly demand should also be reviewed for reasonableness. Demand computations for some segments resulted in values that would be impossible for upstream segments to deliver to the study segment. The team realized that some of these values were not reasonable inputs but proceeded with the analysis using them.

Capacity

The team often found that the actual operating capacity of a segment was lower than what was defined by the L07 tool (and the HCM). This segment property could not be adjusted unless the analyst manually modified the values in the spreadsheet.

Weather

The weather data originally collected by the team were not at the level of detail required by the L07 tool: they were recorded only to the tenth of an inch, as opposed to the hundredth of an inch required by the L07 tool.

Event

Compiling volume growth for each event type was found to be a challenge for the analyst, despite the availability of abundant data from freeway loop detectors. The L07 final report could provide more detailed instruction on how to prepare these inputs.

Work Zone

It was found that the L07 tool would not let the user enter a value in the "lanes closed" input boxes that matched the total number of lanes on the test segment. This meant that any work zones that closed the entire roadway segment for some portion of time were not included in the analysis. It should also be cautioned that these work zone data are applied to every year of the analysis. The primary testing only looked at 1 year's worth of data, but any analysis that examines a treatment over the course of more than 1 year should treat these data as a yearly average rather than a one-time construction event.

For example, a treatment with a service life of 15 years on a segment that has pavement reconstruction every 7 years should distribute the impacts of these two construction periods across the entire 15 years of analysis. Additional guidance should be provided on how to address construction work zones that occur every few years.

Other Findings

While performing evaluations, the team had additional findings, which are listed below:
• The tool operates as a segment analysis tool. If stakeholders are interested in creating a system-wide comparison, the level of effort to complete all of the analysis using the L07 tool would be extensive. Studying a segment of freeway extending multiple miles with several access points requires iterative use of the L07 tool. Factors that compound the number of runs include
− Access points: require further segmentation
− Evaluation years: no traffic growth function provided
− Alternative geometric layouts: need to adjust geometry inputs
These factors compound the amount of data that must be entered into the tool and the tool outputs that must be compiled. This increases the amount of time required to perform an analysis and introduces more opportunities for user error.
• Many of the problems on the system in question are node-related, and the L07 tool currently has no functionality to address these issues.
• The L07 tool does not have a traffic growth function. This should be built into the model so that alternatives with longer service lives can capture growth in volume/demand data.
• By modifying individual data inputs, the team realized that some inputs had a much greater impact on the final B/C ratio than others. Analysts should be aware of these sensitivities before starting an analysis to better coordinate data collection needs within the scope of the analysis.
− The model is not highly sensitive to weather data. Values for snow can be changed by a factor of two with little effect on benefit calculations.
− The model was sensitive to crash and incident frequency, as well as crash and incident duration.

Recommended Refinements

Through the pilot testing process, the team performed numerous evaluations and tests and recommends the following improvements.

L07 Final Report

The L07 final report should be updated to contain additional detail on the computation of demand for input into the tool. Recommendations include
• Demand inputs should be checked to ensure that values are not greater than an upstream segment can deliver. This is particularly important for segments that experience long periods of time under the speed threshold for the onset of congestion.

• Guidance stating that demand should be equal to volume over the course of a day. AADT could be used with hourly percentages to accomplish this. If demand grows during the peak hour, it should be lowered during other hours of the day so that daily demand does not exceed AADT. This would also ensure that benefits accrue to the correct number of users.

Additional Treatment Options

The L07 tool should contain additional treatments that are more in line with what agencies are looking to implement on their systems. Many of these treatments are addressed in the design guide but are not available in the L07 tool unless a custom analysis is used. Additional desired treatments include
• Auxiliary lanes
• Dynamic shoulder lanes
• Managed lanes
• Highway helpers
• Dynamic message signs
• Intelligent lane control systems
• Ramp metering
There was some concern about modifying the tool to analyze managed lane and dynamic shoulder scenarios because the tool does not account for vehicle occupancy. If additional treatments are added that address benefits to transit or high-occupancy vehicles, a vehicle occupancy input should be considered.

Correction of Tool Errors

Some errors found within the L07 tool should be fixed in future versions. They include
• Change the L07 tool to compute delay using the actual volume instead of the passenger car equivalent demand. Currently the tool applies differences in travel time between the treated and untreated conditions to the passenger car equivalent demand. The team feels that this is incorrect and that benefits should only be applied to actual users.
• Fix the GUI so that geometric and incident data are retained between saves and closures of the tool.
• Provide a page or reference number to the L03 report, L07 report, or L07 design guide (Potts et al., SHRP 2 Report S2-L07-RR-2: Design Guide for Addressing Nonrecurrent Congestion, 2014) inside the help button dialog boxes where more detailed information describing each input, output, or treatment can be found.

Other Tool Improvements

There were additional changes the team thought could be made to the L07 tool that would provide additional utility to future analysts.

• Allow the user to input a number of lane closures equal to the total lanes on the segment to account for work zones that close the entire roadway segment.
− This is a common strategy used by MnDOT to accelerate maintenance projects.
• Allow inputs to be copied from programs such as Microsoft Excel and pasted into the GUI.
• Adjust the dimensions of the GUI based on the size of the analyst's computer screen.
• Inside the help buttons on the GUI that describe each input, output, or treatment, provide a page or reference number to the L03 report, L07 report, or L07 design guide where more detailed data can be found.
• Add an output tab or functionality so the analyst can more easily access the TTI data computed by the tool.
• Provide additional functionality so the tool is capable of performing a corridor analysis as opposed to a segment analysis.

Opportunities for Future Testing of the L07 Tool

The team has identified opportunities for additional testing of the L07 tool, which are outlined below:
• Quantify the difference in results between a default and detailed analysis by input type (weather, crash data, incident data, crash duration, incident duration).
• Perform before-and-after evaluation of treatments using the L07 tool.
• Develop custom treatment methodology and procedures for treatments not currently included in the L07 tool.
• Develop a standard method for computing demand using L07 guidelines.
− Test the impact of using different speed thresholds to compute the demand used in the L07 tool (e.g., 35 mph versus 45 mph).

CHAPTER 5
MINNESOTA RELIABILITY WORKSHOP

Overview

The Minnesota pilot team hosted an interactive workshop on Thursday, February 20, 2014, to share the progress that the team had made on travel time reliability testing since April 2013 as part of the SHRP 2 L38B Reliability program. The purpose of the workshop was to introduce the concept of travel time reliability to stakeholders; demonstrate the utility and effectiveness of tools developed as part of the reliability research programs; share findings discovered while testing the reliability tools; and provide a forum to discuss future policy for making planning and programming decisions as reliability becomes implemented as a performance measure. This process is outlined as part of SHRP 2 Report S2-L05-RW-1: Incorporating Reliability Performance Measures into the Transportation Planning and Programming Processes (Cambridge Systematics, Inc., 2014). Audience members in attendance represented federal, state, county, city, and consultant transportation professionals involved in traffic evaluation or transportation planning and programming functions.

The workshop was divided into a morning session and an afternoon session. During the morning session, presenters delivered information about the following topics:
• SHRP 2 background and context
• Opening travel time reliability survey
• Technical analysis of the SHRP 2 tools
− L02: Monitoring System
− L07: Alternative Analysis
− L05: Planning and Programming
• Utility of the SHRP 2 tools
− L02: System Evaluation
− L02: Data Needs
− L07: Project Evaluation
− L07: Reliability Solutions
− L05: Planning and Programming

During the afternoon portion of the workshop, the following topics were discussed:
• Review of background and concepts
• Real-World Examples for Travel Time Reliability
− I-94 Maple Grove to Rogers
− I-35 Lakeville
− L07 Shoulder Evaluation
− Florida Reliability

− Benefit-Cost Enhancement
− Other Applications
• Closing Travel Time Reliability Survey
• Next steps

Workshop Introduction

Presenter: Mike Sobolewski of the Minnesota Department of Transportation

Figure 5.1. Welcome slide.

To begin the workshop, a brief introduction to the topic of travel time reliability was given, along with a short welcome from the Minnesota pilot team. Figure 5.1 shows the organization of the Minnesota pilot team and the project schedule. As part of the introduction, the objective of the research being done by the Minnesota pilot team was explained: the team has been testing the functionality of the tools and providing feedback to SHRP 2 and the developers about the tools' ease of use and technical accuracy. The goal of the Minnesota Reliability Workshop was to introduce travel time reliability to a broader audience and explain how these concepts could be used in a standard business model. It was made clear to the audience that this is a timely topic, as travel time reliability is a key indicator of facility function and driver experience.

SHRP 2 Background and Concept

Presenter: David Plazak of SHRP 2

Following the introduction to the workshop, background information was provided to help audience members better understand the SHRP 2 Reliability focus area (see Figure 5.2 and Figure 5.3).

Figure 5.2. SHRP 2 context.

Figure 5.3. SHRP 2 focus area.

An important concept from this portion of the workshop is that SHRP 2 is a large, targeted research program with a limited time frame that builds on the success of the original SHRP. The original SHRP ended in 1993 and resulted in several technologies, including SuperPave mix design and winter pretreatments. The four pilot sites (Minnesota, Washington, California, and Florida) are in the process of testing five technical tools. Each of the pilot sites was given a choice of which tools to test; the Minnesota team chose L02, L05, and L07, which were the three tools presented at the workshop. The Federal Highway Administration (FHWA) plans to host an implementation meeting (see Figure 5.4) in the spring of 2014 to develop a plan to implement these tools; funding will be available through the Implementation Assistance Program to help agencies incorporate the SHRP 2 products. The L38B project has already been instrumental in allowing SHRP 2 to improve some of these tools. To conclude the SHRP 2 context portion of the morning session, the audience was shown posters from other pilot sites to highlight other applications of these tools.

Figure 5.4. SHRP 2 reliability technical tools.

Participant Introductions

Next, all workshop participants (both in-person and remote viewers) introduced themselves. There was substantial interest from throughout the state of Minnesota as well as across the country, which allowed for valuable data and information sharing from other states throughout the workshop.

Opening Travel Time Reliability Survey

Presenter: Renae Kuehl of SRF Consulting Group

Once introductions were completed, an opening survey was conducted to gauge the audience's level of knowledge and understanding of travel time reliability. A series of nine questions was posed to the audience to gauge their understanding and use of reliability data. The same survey was completed at the end of the workshop and compared with the opening survey results to determine whether participants' responses changed after being educated on the tools. The following questions were posed to the audience during the opening survey:
• What type of agency are you representing?
• How familiar are you with the concept of travel time reliability?
• Describe the extent to which you believe travel time reliability can be quantified.
• How often have you seen travel time reliability used in a project evaluation previously?
• How often has your agency used travel time reliability in a program or planning application?
• How likely are you to consider evaluation of travel time reliability in the future?
• What applications of travel time reliability do you find most promising?
• What types of reliability evaluation would your agency be most likely to implement?
• What barrier is most likely to impede your agency's ability to evaluate travel time reliability?
After the morning session survey had been conducted, a detailed explanation of SHRP 2 background information and concepts was presented.

Background and Objectives

Presenter: Todd Polum of SRF Consulting Group

Through numerous reliability studies, the importance of travel time reliability and why it needs to be evaluated has become clear (see Figure 5.5). Most transportation professionals consider only peak period congestion when performing analyses, ignoring nonrecurring conditions such as weather, crashes, and incidents. In reality, however, drivers experience many of these events, which result in unreliable travel times and additional delay. Travel time reliability considers these factors when addressing both recurring and nonrecurring congestion.

Figure 5.5. Reliability background.

In October 2012, the Minnesota Department of Transportation (MnDOT), in partnership with SRF Consulting Group, responded to the request for proposal (RFP) for the SHRP 2 Project L38 Pilot Testing of Reliability Data and Analytical Tools, and Minnesota was selected as one of four pilot sites. The Minnesota pilot team submitted a competitive proposal, highlighted in Figure 5.6, focusing on the magnitude of available data in the Twin Cities, including loop detectors (which provide volume and delay information) along with extensive documentation of crash and incident records. The differentiation between crashes and incidents was crucial in the team's analysis. The definition of a crash is clear, but incidents are more difficult to define: an incident is not a reportable crash but still has an impact on the system, such as a vehicle pulled over to the side of the road.

Figure 5.6. Minnesota pilot site.

After beginning the reliability study, it took the team a short time to understand exactly what was being evaluated with these tools, using the wide range of data available. As stated previously, a wide spectrum of tools was being evaluated at the SHRP 2 pilot sites. Figure 5.7 through Figure 5.12 highlight the various tools, along with their purposes.

Figure 5.7. SHRP 2 reliability tools.

Figure 5.8. SHRP 2 L02 tool.

Figure 5.9. SHRP 2 C11 tool.

Figure 5.10. SHRP 2 L08 tool.

Figure 5.11. SHRP 2 L07 tool.

Figure 5.12. SHRP 2 L05 tool.

The Minnesota team evaluated three SHRP 2 reliability products:
• Project L02 Guide: Establishing Monitoring Systems for Travel Time Reliability. The purpose of the L02 tool was to compile different data sources into one data set for each study facility and to understand how the system is functioning today.
• Project L07 Tool: Evaluating the Highway Design Features to Improve Reliability. The L07 tool is a predictive tool. It focuses on evaluating alternatives and understanding how different geometric design elements may improve operational conditions. This tool considers solutions other than a typical capacity expansion to reduce delay, such as gawk screens and emergency access. The L07 tool attempts to quantify these solutions, which have not been easily quantifiable in the past.
• Project L05 Guidance: Incorporating Reliability into Planning and Programming Processes (applied in practice during the workshop). The purpose of this tool is to consider how to incorporate reliability into the planning and programming process.
It is important to note that the team's goal was not to advocate for these tools, but instead to test them to evaluate their usefulness and suggest potential improvements that would benefit future users.

In March 2013, a kickoff meeting was held in Washington, D.C., where the team met with the other pilot states and the developers of the tools. This served as an introduction for the pilot teams, who engaged in discussion regarding the intentions and goals of each tool, as well as how the tools worked. After this meeting, the team began collecting the data required to use the tools, including traffic volume, travel time, weather, crash, incident, work zone, and event data. The team then used these data to see if what the developers of the tools had envisioned could be accomplished. Finally, the team provided feedback to SHRP 2 and the tool developers. The project schedule is shown in Figure 5.13.

Figure 5.13. Project schedule.

The final piece of information presented during the background and objectives portion of the workshop was a brief explanation about the study corridors used in the Minnesota pilot testing of the SHRP 2 tools. The study corridors are shown in Figure 5.14.

Figure 5.14. Twin Cities corridors.

Technical Analysis of the SHRP 2 Tools

Project L02 Technical: Monitoring System

Presenter: Paul Morris of SRF Consulting Group

To begin, the presenter explained that during this part of the workshop the audience would be looking at numbers and graphics illustrating how the team performed its analysis. The purpose of this tool is to provide a better understanding of travel time under less-than-ideal weather conditions or when other events are impacting the system (e.g., a crash on the side of the road). However, trying to understand this can be a difficult task (see Figure 5.15). Reliability can be described as the variability of travel time over time; it is not necessarily about measuring something different from what is being measured today. Many of the units are familiar concepts, but instead of looking at them as a static snapshot in time, the team mapped out how they change over time and how they influence drivers' experience on various highways. This involves some statistical techniques to measure magnitudes and degrees of variation. Once a graph of the variability has been created and a wide range of data has been reviewed, a connection can be made to understand what actually caused the variability. Several factors can contribute to congestion, such as volume exceeding capacity, weather events, incidents, and crashes.

Figure 5.15. L02 system evaluation.

The first figure presented (Figure 5.16) showed the travel time for TH-100 northbound during the year 2012.

Figure 5.16. TH-100 northbound (NB) travel time.

Figure 5.16 shows the travel time data for the entire year of 2012 grouped in 5-minute time bins; there are approximately 105,000 data points on the graph. The height and color of the bars indicate the travel time. Most of them lie along the plane representing free-flow travel time, which is approximately 15 minutes for this facility. There are some significant spikes during the afternoon peak hour, which typically has higher travel times due to commuter traffic; drivers using the road at that time have a travel time that is double what it would be under uncongested, free-flow conditions. The red peaks indicate extreme conditions, which suggest that factors other than the typical rush hour were at play during those times.

Figure 5.17 has a similar color palette but is a two-dimensional version of the graph shown in Figure 5.16. The red areas in the graph represent a snow event that occurred in December and increased travel time significantly. While this graph does not display any numbers, it is a valuable resource that can be used as a quick diagnosis tool to visually identify any outlier conditions that may need to be examined more closely.

Figure 5.17. TH-100 NB travel times.

The result of a statistical analysis was presented in the CDF curve in Figure 5.18, which is another way to display the travel time data collected during this part of the evaluation. The CDF curve takes all of the data points shown in Figure 5.16 and Figure 5.17 and separates them by regime. The y-axis shows the cumulative percent of vehicles using the highway under the specific regime, while the x-axis shows the travel time.
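A minimal sketch of how such regime-specific CDF curves can be assembled from the 5-minute observations is shown below. The (regime, travel_time) tuple layout is assumed purely for illustration and is not the format of the team's database.

    from collections import defaultdict

    def cdf_by_regime(observations):
        """observations: iterable of (regime, travel_time) pairs, one per 5-minute
        period. Returns, for each regime, sorted (travel_time, cumulative share)
        points suitable for plotting a CDF curve."""
        grouped = defaultdict(list)
        for regime, travel_time in observations:
            grouped[regime].append(travel_time)
        curves = {}
        for regime, times in grouped.items():
            times.sort()
            n = len(times)
            curves[regime] = [(t, (i + 1) / n) for i, t in enumerate(times)]
        return curves

    def percentile(times, p):
        """Travel time at percentile p (0-100) using the nearest-rank rule."""
        ordered = sorted(times)
        rank = max(1, round(p / 100 * len(ordered)))
        return ordered[rank - 1]

Reading a point off one of these curves is exactly the interpretation used in the next paragraph: the travel time at which the crash-condition curve crosses 50 percent or 90 percent.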

Figure 5.18. TH-100 NB 2012 CDF curve.

In this example, 50 percent of the users have a 16-minute travel time during crash conditions, which is not much longer than the 13-minute free-flow trip. Additionally, 90 percent of drivers using the roadway during crash conditions have a travel time of 26 minutes or less. This means that the top 10 percent of users have travel times in excess of 26 minutes; between the 50th and 90th percentiles there is an increase of approximately two times the free-flow travel time (FFTT). The road work line on this figure was called into question by an audience member wondering whether there was an explanation for why the road work travel time appears to be more reliable than the normal conditions. This is due to the small amount of road work taking place along this highway in 2012, which can also be seen in the nonrecurring conditions pie chart shown in Figure 5.19; the construction that did take place in 2012 was the type of work done overnight, with very little impact on travel time.

The pie chart in Figure 5.19 can also be used to help explain the data. For this facility, the breakdown of all of the 5-minute time periods shows that approximately 73 percent fall into the category of normal conditions (no presence of nonrecurring factors).

Figure 5.19. Pie chart of observations of nonrecurring conditions.

However, when the delay experienced by drivers on the roadway is examined, the normal conditions drop to approximately 50 percent, as shown in Figure 5.20, and the size of the pie pieces for other conditions such as events, incidents, and crashes increases. This indicates that when these factors are present on the roadway, they account for a disproportionate amount of delay compared with normal conditions. An audience member asked whether this pie chart was strictly for TH-100 or whether these conditions occurred everywhere. While similar results have been produced along other highways for various years, the pie charts shown in Figure 5.19 and Figure 5.20 are specific to TH-100 in 2012. The pie charts are an intuitive way to view the data, whereas reliability indices (see Figure 5.21) provide a more statistical representation. These indices are an alternative way to quantify reliability, which may be more appealing to certain audiences.
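The contrast between the two pie charts (share of 5-minute periods versus share of delay) can be reproduced with a short calculation like the sketch below. The record fields (condition, travel_time, vehicles) are placeholders, and delay is simply travel time above free flow multiplied by the vehicles observed in that period.

    from collections import defaultdict

    def condition_shares(records, free_flow_minutes):
        """Returns two dicts keyed by condition: the share of 5-minute periods
        and the share of total vehicle-minutes of delay."""
        obs = defaultdict(int)      # count of 5-minute periods per condition
        delay = defaultdict(float)  # vehicle-minutes of delay per condition
        for r in records:
            obs[r["condition"]] += 1
            extra = max(r["travel_time"] - free_flow_minutes, 0.0)
            delay[r["condition"]] += extra * r["vehicles"]
        total_obs = sum(obs.values()) or 1
        total_delay = sum(delay.values()) or 1.0
        return ({c: n / total_obs for c, n in obs.items()},
                {c: d / total_delay for c, d in delay.items()})

With a calculation of this kind, a condition can dominate the observation share (normal conditions at roughly 73 percent of periods) while holding a much smaller share of delay (roughly 50 percent), which is the point the two pie charts make.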

Figure 5.20. Pie chart of delay by nonrecurring conditions.

Figure 5.21. Definitions of reliability indices.

To develop a database using the crash and incident data that were collected, the L02 guide was used (see Figure 5.22).

Figure 5.22. Travel time monitoring.

The L02 guide is not a software program where users input data and a specific solution is generated, but a guide for developing a travel time monitoring database. It is not packaged as a tool because every agency or metropolitan area has different sets of data (travel time data, crash records, weather records, etc.) and different ways of collecting and storing them. As a result, the Minnesota pilot team created its own database using the specific information collected for each condition. One way information in the database was displayed was with surface plots (similar to Figure 5.17) developed for each condition. Ultimately, the database was used to create the figures and indices in Figure 5.16 through Figure 5.32. These graphics, shown in Figure 5.23 through Figure 5.27, were useful in diagnosing what was going on and in confirming that the results produced were logical (i.e., no snow in July). The crash surface plot, Figure 5.24, is something the team believes could be used by the highway service patrol or even the Regional Travel Management Center (RTMC), because it makes clear that crashes tend to be concentrated during the morning and afternoon peak hours. The event surface plot in Figure 5.26 shows that most drivers head to downtown Minneapolis (mainly for Twins games) between 4:00 and 7:00 p.m. Figure 5.27 shows that there was not a significant amount of road work on this facility in 2012, but there was a complete weekend closure in August.

Figure 5.23. Weather surface plot.

Figure 5.24. Crash surface plot.

Figure 5.25. Incident surface plot.

Figure 5.26. Event surface plot.

Figure 5.27. Road work surface plot.

Using the travel time database, two additional surface plots were produced. The first, Figure 5.28, displays the traffic volume information in terms of vehicle-miles traveled (VMT). This plot produced an intuitive result that most transportation professionals are already aware of: there is a morning peak and an afternoon peak, when volumes increase. The next plot produced (Figure 5.29) shows the travel times, expressed as the travel time index for the facility. In this plot, indices are shown as ratios to the base travel time (FFTT), which is defined as the speed limit travel time. This basis was chosen because there were several observations with speeds faster than the speed limit, and the team felt it was important not to attribute delay to drivers who were going above the speed limit. Travel time CDF curves, similar to Figure 5.30, were also produced using the travel time database. For these CDF curves, a line showing the FFTT (15.3 minutes for TH-100 northbound) was placed as a reference on the curve (see Figure 5.30).
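A small sketch of these conventions follows. The speed-limit-based free-flow travel time comes directly from the text; flooring delay at zero reflects the stated intent of not attributing delay to above-limit speeds, while the TTI itself is left as a plain ratio. The function names are illustrative.

    def free_flow_minutes(length_miles, speed_limit_mph):
        """Free-flow travel time in minutes, taken from the posted speed limit."""
        return length_miles / speed_limit_mph * 60

    def travel_time_index(observed_minutes, fftt_minutes):
        """Ratio of the observed travel time to the free-flow travel time."""
        return observed_minutes / fftt_minutes

    def delay_minutes(observed_minutes, fftt_minutes):
        # no delay is attributed to trips faster than the speed-limit travel time
        return max(observed_minutes - fftt_minutes, 0.0)

    # Example with the TH-100 NB free-flow travel time of 15.3 minutes:
    fftt = 15.3
    print(round(travel_time_index(20.0, fftt), 2))  # about 1.31
    print(delay_minutes(14.0, fftt))                # 0.0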

Figure 5.28. VMT surface plot.

Figure 5.29. Travel time surface plot.

Figure 5.30. Travel time CDF curve.

The team also looked at the delay across a number of years (2006 to 2012). While the relative pieces of the pie are important, the size of the pie is also indicative of the total amount of delay experienced by roadway users over the course of a year. Figure 5.31 shows all delay pie charts from 2006 to 2012. The pies for 2008 and 2009 are larger than the pies for 2006 and 2007. This increase in delay was due to the reconstruction of the I-35W Bridge after its collapse in 2007, during which TH-100 was a designated reliever route. The team also tested an alternative way of displaying the total delay for each individual year. Figure 5.32 shows the same information as Figure 5.31, but in a bar chart format rather than pie charts. This flexibility in displaying results was one of the things the team wanted to share, as different display methods might connect more effectively with various audiences.

Figure 5.31. Delay pie charts.

Figure 5.32. Delay bar chart.

The final aspect of the Project L02 technical portion of the workshop was to review the L02 system monitoring. This information is shown in Figure 5.33.

Figure 5.33. Project L02 system monitoring.

This monitoring is able to give users a sense of the highways' performance over time as well as relative to each other. Using this information while testing this particular tool, the team determined that it is possible to evaluate travel time reliability, and the team was successfully able to link the data to the factors that cause increased travel times. In addition, the team discovered that evaluating corridors that are not on the instrumented freeway system gets slightly more complicated, because data are not readily available; however, the team is actively exploring other ways to collect the data.

It was noted that data storage became an issue with this project. When comparing a single day's worth of data to a representative amount of data, there is a large difference in the number of gigabytes required for storage. Lastly, the presenter reiterated that the Minnesota team was indeed pilot testing the tools (the first time through). If agencies want to adopt this process as a way to measure travel time, there may be efficiencies in thinking about different ways the data can be collected and stored. For example, a significant amount of time was put toward processing weather and crash data and entering them into the database. If these data were accessible through a simplified portal, it would decrease the amount of time and effort required to download and prepare them.

Project L07 Technical: Alternative Analysis

Presenter: Ryan Loos of SRF Consulting Group

To begin this portion of the workshop, background information about the L07 tool was provided (see Figure 5.34). The L07 tool is a benefit-cost tool for geometric elements that can be used to improve nonrecurring congestion conditions. Traditionally, geometric improvements have been used to address recurring congestion issues, for example, adding a lane to increase capacity; however, this does not consider crashes or incidents. This is where the L07 tool can be used.

Figure 5.34. L07 history.

Additional background information is shown in Figure 5.35.

Figure 5.35. L07 tool introduction.

Figure 5.36 shows the L07 user interface.

Figure 5.36. L07 tool graphical user interface.

The same elements of nonrecurring congestion defined in the L02 technical portion of the workshop are used in the L07 tool. The tool compares treated versus untreated geometric conditions that the analyst chooses, and it captures and estimates the operational benefits between the two alternatives. It is important to note that the L07 tool, in its current form, expects the inputs into the model to have uniform geometry and traffic flow throughout the segment. The computational engine of the tool exists in an Excel spreadsheet format. There is also a graphical user interface (GUI) for the analyst to use, which simplifies the process and clearly indicates the information the analyst needs to enter when performing the analysis. The L07 process is shown in Figure 5.37.

Figure 5.37. L07 process.

The L07 tool takes the input data and applies the conditions (either treated or untreated) to four categories (shown in Figure 5.37): demand-capacity (D/C) ratio, lane hours lost, rain impact, and snow impact. The prediction models are developed to output TTI values by percentile, capturing the shift between the treated and untreated curves. Figure 5.38 shows how the L07 tool calculates delay.

Figure 5.38. Calculating delay.

Next, the presenter discussed the required user inputs needed to perform an analysis using the L07 tool. These are the bare minimum requirements, but there are additional detailed inputs that an analyst can use to fine-tune the analysis to more accurately represent real-life conditions. The required user inputs for the L07 tool are shown in Figure 5.39.

Figure 5.39. Required user inputs.

The optional detailed user inputs are shown in Figure 5.40.

Figure 5.40. Optional detailed user inputs.

The additional data required for the optional detailed user inputs are available after the L02 analysis has been completed or from in-depth data collection. The supplemental duration data are entered into the tool to increase or decrease delay depending on the crash or incident type, to replicate real-life conditions. The tool does provide the user with default weather inputs, but if more detailed weather data are available, the tool allows the analyst to use them. There is an option for detailed event data, but these data can be harder for some agencies to obtain. Detailed work zone and benefit-cost data can also be added. Several design treatments are included in the tool and are shown in Figure 5.41; the design treatments highlight geometric improvements.

Figure 5.41. L07 design treatments.

Similar to the user inputs, there are custom treatment capabilities available for the design treatments in the L07 tool. If additional research has been done on other design treatments, an agency can use the custom treatment inputs to analyze a treatment not included on this list. The L07 technical segment of the workshop concluded with a walk-through of the L07 tool (see Figure 5.42). The presenter showed the audience an analysis scenario and explained what was needed to perform a basic analysis, as well as a detailed analysis.

Figure 5.42. L07 tool walk-through.

While it may appear that the L07 tool uses inputs from data sources that may not be available to perform the analysis, at a bare minimum most agencies should be able to use this tool to perform a benefit-cost analysis for these geometric treatments. The scenario shown at the workshop was a generic roadway facility experiencing nonrecurring congestion. The first piece of information needed to complete the analysis is the segment length (and other geometry), which is readily available from aerial photography or programs such as Google Earth. The second piece of information required is the free-flow speed. This can be obtained by using Google Street View and finding a posted speed limit or by allowing the tool to calculate the free-flow speed based on other inputs; the free-flow speed can also be calculated using resources such as loop detector speed data at the hourly level. The third piece of information is traffic volume. Fortunately, the Twin Cities has loop detector data readily available, which can be accessed using a program such as Data Extract; a single-day traffic count can be used to complete an analysis in the absence of detector data. The L07 guide provides some direction on how to convert the volume data into the demand needed to perform this analysis. The fourth piece of information needed is the number of annual crashes. This is generally readily available through a tool such as the Minnesota Crash Mapping Analysis Tool (MnCMAT); although not all states have a tool like MnCMAT, most agencies do publish crash records. The final piece of information needed is the cost of the project. That is all that is needed to perform an analysis with the L07 tool.

In data-rich environments, such as Minneapolis, there are opportunities to further refine the analysis with information that can be used to replicate real-world conditions, such as the following:
• If there is a stadium nearby, event traffic can be accounted for using a percent volume increase per hour.
• A better weather resource that would decrease the level of effort required to collect and process the data would also be beneficial. The more detailed weather data may not have yielded any more refined results, but they required an extensive amount of time to collect and process.
• Detailed incident data, as well as construction and routine maintenance schedules, can also be used to replicate real-world conditions.
• Flexibility to change economic inputs, depending on the location of the analysis, helps to refine the results. In particular, Minnesota provides information on what should be used for the value of time and discount rates.

Project L05 Technical: Planning and Programming

Presenter: Mike Sobolewski of the Minnesota Department of Transportation

In addition to the technical evaluation of the SHRP 2 tools being tested (see Figure 5.43), the team was also pilot testing the L05 guidelines, with the purpose of integrating the use of reliability into the standard planning process.

Figure 5.43. SHRP 2 reliability tools. The team also contemplated how reliability could be improved. Traditionally, the thought has generally been to add a lane or other capital improvements and not necessarily consider operational changes that could be made. The team discussed the possibility of adding capacity versus operational investments. This would be an opportunity to rethink the planning process. By investing in operations, the travel time reliability—as well as the customer experience on the roadway—can be improved. Similar to the other SHRP 2 projects, the L05 tool came with a guide (see Figure 5.44). This particular guide gave the team a sense of how to go about incorporating operational investments in the planning process. The L05 guide was the least technical of the guides included in the pilot testing. This guide contains several process maps that attempt to indicate the different models for planning that exist and how an agency might incorporate them. The guide also talks about the trade-offs between funding and project priorities and describes technical and institutional requirements for incorporating reliability. The guide highlights performance-based reliability and discusses a number of opportunities to integrate travel time reliability into the transportation planning process. 115

Figure 5.44. L05 implementing reliability. The next portion of the L05 technical presentation was a discussion of policy and programming. At the beginning of this presentation, a participant inquired about the extent to which the L05 tool is being tested by other pilot sites. It was clarified that while all four pilot sites were working with the tool, each site is at a different stage in the testing process. Often in the past, as well as in current discussions in the policy and programming area, recurring delay has been examined, as well as its impact on travel time reliability. However, roughly 50 percent of delay is caused by nonrecurring congestion (see Figure 5.45) and most agencies do not have the tools, staff, or resources necessary to analyze the nonrecurring delay. 116

Figure 5.45. Policy and programming discussion. Following the planning and programming discussion, information was presented about performance measures, as shown in Figure 5.46. It was explained that there are a lot of discussions happening about performance measures and performance-based planning. There are many performance measures that can be examined during the planning process, and reliability needs to be in that discussion. The importance of reliability in future applications was discussed next (see Figure 5.47). Reliability is anticipated to be a requirement under the Moving Ahead for Progress in the 21st Century (MAP-21) Act program. The pilot testing of reliability will help to determine how the results can be used. This is why workshops such as the Minnesota Reliability Workshop are so critical to advance the understanding of this new approach. 117

Figure 5.46. Performance measures. Figure 5.47. Future applications. 118

Communicating reliability was the next topic discussed. There are many audiences for the evaluation of travel time reliability, so it is challenging to present the large amount of technical analysis in an effective way that is understandable to the audience and enables them to grasp the implications of the analysis. There are numerous ways the information could be presented, which are highlighted in Figure 5.48. Figure 5.48. Communicating reliability. Utility of the SHRP 2 Tools L02 System Evaluation Presenter: Paul Morris of SRF Consulting Group All of the presentations up to this point set the stage by introducing the workshop audience to the tools that the Minnesota pilot testing team has explored, but this part of the presentation was a candid reflection on what the team discovered about the utility of these tools. The next presenter noted that several issues can arise when it comes to performing the system evaluation. Most of the examples that were shown were for a specific facility. However, there may be a desire in the future to expand this to an entire metro area or statewide system. This would introduce some new challenges regarding the amount of data required and how they would be presented. Other issues that must be faced as the tools are implemented on a system- wide basis are shown in Figure 5.49 through Figure 5.52. 119

Figure 5.49. Utility of SHRP 2 tools. Figure 5.50. Utility of SHRP 2. 120

Figure 5.51. Utility of SHRP 2. Figure 5.52. Utility of SHRP 2. 121

A primary concern of the pilot teams about these tools is whether or not they are providing users with new information. MnDOT, in particular, has established efficient methods for producing congestion and safety reports. Most transportation professionals have a strong understanding of the relative cost of those different impacts on the roadway. Will these tools actually tell users to do something different to the roadways? For example, instead of adding lanes to reduce congestion, are there other options (such as adding increased service patrols and more assertive plowing techniques to remove snow from the road) that the SHRP 2 tools may suggest, which were not previously considered, and that could yield similar results? The presenter pointed out that much of the analysis that has been completed thus far had an urban focus. However, reliability may even affect rural areas to a greater extent because there is little recurring congestion (nonrecurring congestion is often the only type of congestion experienced) in a rural setting. L02 Data Needs Presenter: Jesse Larson of the Minnesota Department of Transportation To conduct the travel time reliability analysis, it was necessary to collect a large amount of data from various sources. To begin, the team collected traffic volumes and speed data for the study highways. These data were collected from the sources listed in Figure 5.53. Travel time was calculated using specialized software that analyzes the data that are returned from the various sources. Figure 5.53. L02 data collection. 122

Loop detectors were the principal source of traffic volumes for the facilities analyzed by the Minnesota team (see Figure 5.54). In the Twin Cities metro area, there are detectors spaced approximately every half mile on the freeway system which send data to the Regional Transportation Management Center (RTMC) every 30 seconds. MnDOT has over 15 years of data that the team was able to access. Bluetooth was also used for data collection in the original L02 project (see Figure 5.55). These data generally come from receivers alongside the roadway that collect Bluetooth signals from cellphones and other devices in cars, getting travel time from location and speed, as well as routing information. Figure 5.54. Loop detectors. 123
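As a rough illustration of what this kind of post-processing involves, the sketch below rolls 30-second detector speeds up to 5-minute station averages and a corridor travel time. The column names, the half-mile spacing constant, and the tiny sample records are assumptions for illustration only; the actual RTMC feed and the software the team used do considerably more, including volume processing and error checking.

```python
import pandas as pd

# Assumed raw format: one row per detector station per 30-second poll.
raw = pd.DataFrame({
    "timestamp": pd.to_datetime(
        ["2012-06-01 07:00:00", "2012-06-01 07:00:30"] * 2),
    "station_id": ["S100", "S100", "S101", "S101"],
    "speed_mph": [54.0, 51.0, 32.0, 35.0],
})

STATION_SPACING_MI = 0.5  # Twin Cities freeway detectors are roughly half a mile apart

# Average the 30-second polls up to 5-minute station speeds.
five_min = (
    raw.set_index("timestamp")
       .groupby("station_id")["speed_mph"]
       .resample("5min")
       .mean()
       .rename("speed_mph_5min")
       .reset_index()
)

# Corridor travel time for each 5-minute interval: sum of (spacing / speed) over stations.
five_min["segment_tt_min"] = STATION_SPACING_MI / five_min["speed_mph_5min"] * 60.0
corridor_tt = five_min.groupby("timestamp")["segment_tt_min"].sum()
print(corridor_tt)
```

Summing the per-station travel times assumes each detector represents the half mile around it, which is a common simplification at planning-level accuracy.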

Figure 5.55. Bluetooth. An additional source of data that could be used for reliability evaluation is GPS probe (Figure 5.56) data, which offer the opportunity to obtain data off the instrumented system. Figure 5.56. GPS probe data. 124
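The Bluetooth travel times described above come from re-identifying the same device at successive readers. A minimal sketch of that matching step is shown below; the device hashes, reader names, and the 2- to 30-minute plausibility window are hypothetical, and production systems apply much more careful outlier filtering.

```python
import pandas as pd

# Assumed format: anonymized device hashes seen at an upstream and a downstream reader.
upstream = pd.DataFrame({
    "device": ["a1", "b2", "c3"],
    "t_up": pd.to_datetime(["2012-06-01 07:01", "2012-06-01 07:03", "2012-06-01 07:04"]),
})
downstream = pd.DataFrame({
    "device": ["a1", "c3", "d4"],
    "t_down": pd.to_datetime(["2012-06-01 07:09", "2012-06-01 07:13", "2012-06-01 07:20"]),
})

# Match detections of the same device at the two readers.
matched = upstream.merge(downstream, on="device")
matched["travel_time_min"] = (matched["t_down"] - matched["t_up"]).dt.total_seconds() / 60.0

# Discard implausible matches (e.g., a device that stopped along the way);
# the 2-30 minute window here is an arbitrary illustration, not an L02 rule.
valid = matched[matched["travel_time_min"].between(2, 30)]
print(valid[["device", "travel_time_min"]])
```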

SMART Signal is another source of data for signalized highways. It is a proprietary program in which data are processed through special software to provide speed, travel time, and other performance measures. (See Figure 5.57.) Figure 5.57. SMART Signal system. As stated previously, crash and incident data are required to identify the causes of congestion for reliability analysis. The team used MnCMAT to acquire these data (see Figure 5.58). It was necessary to use several sources to determine crash and incident impact. Event data were obtained from the Minneapolis Event Log. The I-394 reversible lane calendar was also used to determine the timing of special events. (See Figure 5.59.) 125

Figure 5.58. Crash and incident data sources. Figure 5.59. Event data sources. Weather data came from several sources, which are listed in Figure 5.60. This is another area where the method of data collection would benefit from improved efficiency. Minnesota is equipped with a system of sensors. However, a storm could move through the metro area and 126

only affect a portion of it, thus missing the sensors. The Minneapolis-Saint Paul airport data can also provide a reference for weather data. Figure 5.60. Weather data sources. Work zone data were also obtained for development of the travel time reliability monitoring system (TTRMS) for the L02 pilot testing. There is a large amount of information, such as previous press releases, which can be used to gather data on major construction projects. For small projects (e.g., pothole repairs), which are generally not logged in the system but still can have an impact on travel times, obtaining data can become difficult. Dynamic message sign logs were also a critical source for work zone data. (See Figure 5.61). 127

Figure 5.61. Work zone data sources. L07 Project Evaluation Presenter: Ryan Loos of SRF Consulting Group The team had a test case where three segments along I-94 westbound were developed for the year 2012; these segments are shown in Figure 5.62. The goal of this evaluation was to compare the results produced by the L07 tool for two different approaches. The first was to summarize all of the detailed weather, crash, and incident data from the database developed for the L02 evaluation. The second was to rely solely on the default inputs in the L07 tool. 128

Figure 5.62. I-94 westbound example. The team learned several important lessons while analyzing the test case; these lessons are highlighted in Figure 5.63 and Figure 5.64. Figure 5.63. L07 lessons learned. 129

Figure 5.64. L07 lessons learned. Another valuable lesson learned during the L07 tool evaluation was that (because the tool is a segment tool) it does not account for any node issues that may be present. The team had some concern with merging and on- and off- ramps and the potential impact that ramp metering could have that isn’t being accounted for within this tool. The level of effort also compounded as the level of detail increased. Any change in highway segmentation (e.g., a new on-ramp) or shift/increase in volume created more iterations of analysis. There is no traffic growth function within the L07 tool, so if the analysis required increasing volumes over time, the analyst would need to repeat the analysis using new volume sets. Also, if the geometry is changed (e.g., new shoulder for a portion of the segment), the analyst must segment that portion and perform two analyses. The team also learned that analysis time can range from a few hours to multiple days, depending on how much data are available and are being included in the analysis; but when the detailed analysis was completed, the results were closer to real-life data. An audience member questioned whether or not the detailed analysis gave improved results that were on par with the increased effort and what the increase in effort was for the detailed analysis. The presenter explained that there are graphs available (not included in this presentation) that compare real-life data, the baseline model, and the detailed model. There was approximately a 15 percent improvement from the baseline to the detailed models. To achieve this modest increase in accuracy, the level of effort increased by about four-fold. The presenter also suggested that certain regimes might be focused on and could provide more accuracy compared with others. Recommended product refinements (Figure 5.65) were also discussed during this part of the workshop. 130

Figure 5.65. L07 product refinements. After completing several analyses using the L07 tool, the team determined that the tool would be more user-friendly if it allowed for corridor analysis, since many operational issues affect multiple segments. The team also would have liked to see some additional treatment options, such as nongeometric improvements. The team also found an error within the tool: it was computing delay based on passenger car equivalent (PCE) and not actual volume. L07 Reliability Solutions Presenter: Jesse Larson of the Minnesota Department of Transportation The team identified some of the causes of nonrecurring delay beyond the capacity issues that are discussed the majority of the time. These are things like weather, crashes, incidents, or special events (such as sporting events). There are varying operational improvements (see Figure 5.66) that could be employed in an attempt to reduce nonrecurring delay. For example, if delay is due to crashes or incidents, looking at improving emergency response could be a potential operational solution. In the Twin Cities metro area, there is the Highway Helper program, and MnDOT reaches out to emergency responders to explain that incidents should be moved from the roadway lane onto the shoulder to reduce the amount of delay due to incidents. 131

Figure 5.66. L07 reliability solutions. There are also improvements that could be made to reduce the impact that weather has on delay, such as anti-icing and plowing improvements, snow fences, and properly timed maintenance activities. Other reliability solutions are highlighted in Figure 5.67 and Figure 5.68. Figure 5.67. Reliability solutions. 132

Figure 5.68. Reliability solutions. L05 Planning and Programming Presenter: Jim Henricksen of the Minnesota Department of Transportation To begin the planning and programming portion of the workshop, the presenter explained that investments can be maximized by increasing the focus on operations at the planning and programming stage (see Figure 5.69). 133

Figure 5.69. Planning and programming. As part of the L05 tool evaluation, implementation issues were considered (see Figure 5.70). The team debated about whether this tool provides new information or reinforces information that has already been collected. These tools allowed the team to turn the data into information and knowledge that could be used to assist with decision making. Large databases have been created to accommodate the necessary data that will need continual maintenance. In addition, the data need to be accessible to multiple parties; where that information is stored and how it will be stored are two important implementation issues that must be considered. Transportation professionals at MnDOT spend a lot of effort maintaining congestion reports, which include how many hours of congestion there are in a particular facility. There is a saying, “We manage what we measure.” Therefore, if reliability is not being measured, then the system is not managed with this in mind. 134

Figure 5.70. Implementation issues. Figure 5.71 highlights the applications of the L05 tool. 135

Figure 5.71. L05 tool applications. Figure 5.72 shows a wide range of tools are available. As study of reliability matures, agencies will learn how to integrate existing tools into the process. Figure 5.72. L38B tools. 136

The presenter explained that there needs to be a balance between minor details and the big picture when new tools are integrated into existing processes. The user needs to step back and “see the forest for the trees.” It is anticipated that this will be a difficult transition. There will be many more questions and challenges as these tools move toward implementation into the planning and programming process, some of which are listed in Figure 5.73. One anticipated challenge is the difference between rural and metro areas. The focus thus far has been mainly on the metro area. However, there are certainly opportunities to use these data in a rural setting. Rural areas face many of the same challenges as metro areas, such as tight budgets and resource constraints. Reliability issues in rural areas are more often nonrecurring in nature and could be addressed with some of these less costly strategies, such as enhanced operations and maintenance. Rural areas also deal with seasonal variability in travel demand, crashes, and weather, so there is an opportunity to implement the lower-cost operational strategies to address travel delay. However, whereas metro areas like the Twin Cities are data-rich environments, rural areas are data-poor environments, making study of travel time reliability more challenging. Figure 5.73. L05 questions and challenges. There are also issues related to the legacy systems listed in Figure 5.73; the team has talked extensively about how these tools relate to planning and programming, specifically acknowledging that institutional change is difficult when existing approaches are supplemented or replaced with newer tools and techniques. 137

Conclusions from Utility of SHRP 2 Tools Presenter: Mike Sobolewski of the Minnesota Department of Transportation All of the background information presented up to this point of the workshop was aimed at preparing the audience for the upcoming discussion. There were a few final questions for consideration posed to the audience. For instance, do these tools provide users with any additional actionable knowledge, either on the technical side or the nontechnical side? Would they replace an existing tool or set of tools? Will they be added to an existing set of tools? How would the use of these tools be institutionalized? What are the resource requirements associated with that institutionalization of the tools? Who will ultimately be the owner of these tools? If it is the FHWA, how will that influence the way that transportation funding is distributed? (See Figure 5.74.) Will implementation of these tools become a requirement or a suggestion? What opportunities exist for users (those people who are in the pilot state or who are not part of the pilot program) to continue to shape that implementation process? Figure 5.74. Other considerations. There is a Federal Highway Administration/American Association of State Highway Transportation Officials (FHWA/AASHTO) workshop in March 2014. Some objectives of this workshop are listed in Figure 5.75. Other considerations are shown in Figure 5.76. 138

Figure 5.75. Implementation considerations. Figure 5.76. Other considerations. Final thoughts about the utility of the SHRP 2 tools were presented. (See Figure 5.77.) 139

Figure 5.77. Final thoughts. Technical Session Summary The team, through testing the tools, tried to determine if the tools were functional and if there was value in using the tools. Throughout this process, a lot of nontechnical problems arose about the tools’ use and about the policy implications that these data seemed to be providing. The challenge arose of how to communicate this information to different audiences, with different options for graphical results shown in Figure 5.78. The teams have been pilot testing the tools, trying to determine what shape and format the results should be presented in and trying to identify the appropriate commitment of resources to operate these tools. 140

Figure 5.78. Final thoughts. The Minnesota pilot testing team believes that these tools need to be continually maintained and improved. The team has also already identified a number of potential improvements, which have been submitted to SHRP 2 for consideration and action. As the work continues, additional recommendations will be made. All of the recommendations will be available in the project documentation. Review of Background and Concepts Presenter: Todd Polum of SRF Consulting Group The reliability data and analytical tools produced through SHRP 2 can be viewed as a spectrum. These tools, shown in Figure 5.79, allow users to evaluate different levels of system performance. The Minnesota pilot testing team is attempting to consider how these tools and procedures can be integrated into agencies’ planning and programming processes. 141

Figure 5.79. SHRP 2 reliability tools. Example Applications for Travel Time Reliability A series of examples showing real-world applications will be presented to the audience. The examples are the following: • I-94 Maple Grove to Rogers • I-35 Lakeville • L07 Shoulder Evaluation • Florida Reliability • WisDOT Benefit-Cost Enhancement • Other Applications To help facilitate discussion, three panelists (listed in Figure 5.80) agreed to react to the examples presented and pose questions to the presenters throughout the afternoon session. A brief introduction was given for each panelist. 142

Figure 5.80. Panelists. Deanna Belden is the director of performance, risk, and investment analysis in MnDOT’s Office of Transportation System Management. Ms. Belden’s responsibilities include development and delivery of MnDOT’s annual performance report. Her group also supports performance measure development and analysis in areas such as operations, maintenance, and program delivery. MnDOT enterprise risk management is also housed within her unit. Prior to her work in performance measures, she worked as an economic policy analyst, which included conducting economic analysis of transportation investments, such as benefit-cost analysis and road user analysis. Deanna holds an MS in Urban and Regional Planning and an MA in economics from the University of Iowa and a BA in economics from the University of Oregon. Mark Filipi of the Metropolitan Council currently serves as the manager of Technical Planning and Support for Metropolitan Transportation Services. Mr. Filipi joined the council in 1990 as a transportation planner. The focus of his work is travel demand forecasting and air quality. One of his current work tasks is preparing the congestion management process and the system performance evaluation for updating the Metropolitan Council’s Transportation Policy Plan, scheduled for adoption in December of this year. Travel time reliability is expected to be an important performance measure in both of those areas. Jim McCarthy is a traffic operations engineer for the Minnesota Division FHWA. Recently, Mr. McCarthy served as metro area engineer and worked part time on the traffic operations team in the FHWA Resource Center, where he worked on simulations. He worked on the traffic analysis team, determining analysis methods for use throughout the organization. 143

I-94 Maple Grove to Rogers Presenter: Paul Morris of SRF Consulting Group This is an example of an actual corridor study along I-94 in the northwest metro area that was one of the Minnesota pilot site test segments. This segment is highlighted in Figure 5.81. Figure 5.81. I-94 Maple Grove to Rogers. This is a critical facility because it connects the Twin Cities metro area to central Minnesota. I-94 is also a major freight route headed to North Dakota and further west. A defining characteristic of this corridor is that it is a heavily used recreational corridor during the summer months. This corridor was also selected because it was a candidate for a Minnesota grant to become a Corridor of Commerce project. Figure 5.82 highlights the travel time reliability evaluation performed for this facility. 144

Figure 5.82. I-94 Travel time reliability evaluation. Traffic volumes and travel times were not linked to the detailed weather or crash information for this facility; a basic level of traffic information was used. Figure 5.83 displays the VMT for westbound I-94 in the year 2012. Every day of the year is shown from left to right. Each band of heavy traffic shows the weekday VMT, and little breaks in traffic over the weekend can be seen. In the summer months, there is heavier traffic skewed toward the end of the week, which is caused by people traveling out of town (for recreational purposes such as camping or visiting summer homes, etc.). In addition to VMT, a surface plot was created for travel time along this facility and is shown in Figure 5.84. 145
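A surface plot such as Figure 5.83 or Figure 5.84 is essentially a grid with the day of the year on one axis and the time of day on the other. The sketch below shows one way to arrange 5-minute observations into that grid; the column names and the randomly generated values are placeholders, not the I-94 data.

```python
import numpy as np
import pandas as pd

# Assumed input: one row per 5-minute interval with VMT (or travel time) for the facility.
idx = pd.date_range("2012-01-01", "2012-12-31 23:55", freq="5min")
rng = np.random.default_rng(0)
obs = pd.DataFrame({"timestamp": idx,
                    "vmt": rng.gamma(shape=2.0, scale=500.0, size=len(idx))})

# Pivot into the day-of-year (columns) by time-of-day (rows) grid behind the surface plot.
obs["date"] = obs["timestamp"].dt.date
obs["tod"] = obs["timestamp"].dt.time
surface = obs.pivot_table(index="tod", columns="date", values="vmt")
print(surface.shape)  # (288 five-minute periods, 366 days in 2012)

# surface.values can then be handed to a heat-map/surface routine
# (e.g., matplotlib's pcolormesh) to reproduce a plot like Figure 5.83.
```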

Figure 5.83. I-94 VMT. Figure 5.84. I-94 travel time. This surface plot was important for the I-94 study because it shows that the facility only becomes congested near the weekend (and other special holidays, such as the Fourth of July), but not during regular weekday conditions. This was different from some stakeholders’ perceptions of what was actually occurring on the roadway. 146

CDF curves were also developed for this example. Figure 5.85 shows the cumulative percentage of traffic that is able to get through the corridor at or below the specified travel time (which is along the x-axis). The red line represented all traffic and the blue line represented the weekday p.m. peak period traffic. The vertical green line is placed at the FFTT. Figure 5.85. I-94 CDF curve. A CDF curve with a low slope translated to the right would indicate poor reliability. For this example, the weekday traffic is slightly less reliable than the overall traffic. However, having 75 percent of the traffic at or above the free-flow speed suggests that this facility is not suffering from severe congestion issues. One of the panelists questioned the reliability of the recreational trips along this highway and what kind of investment should be made to reduce that congestion. The presenter referred back to the pie graphs. If one of the congestion regimes dominates a large portion of the delay, then improvements should be considered that address that regime. Additionally, the overall size of the pie chart can be compared with another corridor that might be competing for the same funds; if one pie chart is larger than another, that would be a piece of information that decision makers would want to consider. One of the audience members commented that it would be helpful to know how many people are actually experiencing the high travel times. The participant noted that this facility does not have a large amount of total delay, and that total delay is not just about two people who were stuck in traffic for 2 hours in a large work zone. The comment suggested it would be much worse to have 2,000 people stuck for 10 minutes in the same work zone. Next, a second panelist raised a question about the volumes associated with each of these curves and why they are not displayed on the CDF curve. This curve only considers the number 147

of vehicles on the roadway; it does not take into account whether there is more than one person in a single vehicle. This was one of the criticisms the team had, and the team speculates that there will be future improvements to adjust for occupancy. The presenter pointed out that when traffic is at its worst (with the most delay), that is precisely when volumes are at their highest. It was noted that the team will label the axis of this graph in its documentation materials. The last comment by the panelists about Figure 5.85 was that it appears that the value of reliability is less than the value of time, which seemed inconsistent. The presenter stated that the team is still unsure about which is more valuable to drivers. It is variable, depending on the location (urban versus rural), type of roadway, and other factors. It is an extremely important issue that needs to be researched. There is also the question of whether it is a linear relationship or not. There is a big difference between being 3 minutes late and being 30 minutes late. This concept is currently being researched through other SHRP 2 initiatives. Another key aspect to this facility is that various improvements have been implemented in recent years (see Figure 5.86), which could contribute to travel time reliability. This is shown in the historical travel time surface plots. Figure 5.86. I-94 traffic forecasts update. 148
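As background to the CDF comparison shown in Figure 5.85 and discussed above, the sketch below builds an empirical, traffic-weighted CDF for all intervals and for the weekday p.m. peak. The column names, the synthetic travel times, and the 3:00 to 6:00 p.m. definition of the peak period are assumptions; the team's tool may weight and bin the data differently.

```python
import numpy as np
import pandas as pd

# Assumed input: 5-minute corridor travel times with the vehicle-miles observed in each
# interval, so the curve is weighted by traffic rather than by time.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "timestamp": pd.date_range("2012-01-01", periods=105_408, freq="5min"),
    "travel_time_min": rng.lognormal(mean=2.6, sigma=0.15, size=105_408),
    "vmt": rng.gamma(2.0, 500.0, size=105_408),
})

def weighted_cdf(frame):
    """Return (travel time, cumulative share of VMT at or below that travel time)."""
    ordered = frame.sort_values("travel_time_min")
    share = ordered["vmt"].cumsum() / ordered["vmt"].sum()
    return ordered["travel_time_min"].to_numpy(), share.to_numpy()

is_pm_peak = (df["timestamp"].dt.weekday < 5) & df["timestamp"].dt.hour.between(15, 17)
tt_all, cdf_all = weighted_cdf(df)
tt_pm, cdf_pm = weighted_cdf(df[is_pm_peak])

# e.g., share of traffic at or below a 20-minute travel time, overall vs. weekday p.m. peak
print(np.interp(20.0, tt_all, cdf_all), np.interp(20.0, tt_pm, cdf_pm))
```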

The travel time surface plot for the year 2008 is shown in Figure 5.87. Figure 5.87. I-94 2008 travel time. The team wanted to see if these tools were capable of picking up on changes over time. The surface plot for 2008 (Figure 5.87) looks similar to the surface plot for 2012 (Figure 5.84). Much of the congestion occurred during the summer months, with some also in the later months. The travel time surface plots for 2009 and 2010 are shown in Figure 5.88 and Figure 5.89. In Figure 5.88, one can see that the congestion is worse in 2009 and becomes very severe in 2010, which is clearly shown in Figure 5.89. 149

Figure 5.88. I-94 2009 travel time. Figure 5.89. I-94 2010 travel time. 150

There was a concrete joint repair project that took place during September, October, and November of 2010, which the tool was able to capture. While this project only decreased the capacity by one third, the backups it caused were several miles long and lasted the entire day. It was very promising that the team was able to capture that in Figure 5.89. The flyover highlighted in Figure 5.86 was also built around the same time. Figure 5.90 shows the travel time surface plot from 2011. Figure 5.90. I-94 2011 travel time. While there was some additional road work in 2011, the travel time is greatly reduced during the p.m. peak hour. This plot shows that the flyover had a major impact on this roadway. A panelist made the comment that in some situations, the benefit of adding a lane can be outweighed by the cost of the project and the amount of delay caused by construction and impacts to reliability. A way to improve reliability may be doing a better job of managing traffic during construction. The presenter also emphasized that in this situation, an additional lane was not even added; this construction project was only maintenance on the existing lanes. An audience member commented that it would be helpful if numbers were displayed on the surface plots. The team responded that comments such as that one are good information. Part of implementing these tools will be understanding how to effectively present the information to different audiences. Figure 5.91 is a bar chart summarizing the traffic data in Figure 5.87 through Figure 5.90. The yellow bar represents the volume, which remains fairly consistent as the years progress. This means that the volume is not affecting the delay as much as drivers may think. This figure clearly shows the pre-improvement delay (2008 and 2009), the construction delay (2010), and the improved/post-construction delay (2011 and 2012). The team hoped that figures like this one would add value to the decision-making process. 151

Figure 5.91. I-94 annual traffic and delay. I-35 Lakeville Presenter: Paul Morris of SRF Consulting Group The team wanted to try this process on a facility that was not a previous study highway. The team chose to examine I-35 in Lakeville (see Figure 5.92 and Figure 5.93). 2012 data were used, due to the construction taking place along the facility in 2013. TICAS was used to obtain loop detector data; this program post-processes the raw data to get travel times and volumes. 152

Figure 5.92. I-35 in Lakeville. Figure 5.93. I-35 in Lakeville. The downloaded data were input into the database, which created the graphics shown in Figure 5.94 to Figure 5.96. The team did not add nonrecurring factors, but rather performed a bare minimum analysis to see what results could be produced and the level of effort required to produce these results. Figure 5.94 shows that the dominant volume pattern is the inbound 153

commuter peak, which is why there is a consistent band of higher VMT between 6:00 a.m. and 8:00 a.m. Figure 5.94. I-35 in Lakeville VMT. The team expected to observe some congestion; however, only a small amount was observed in Figure 5.95. The small red and purple spots on the travel time surface plot are indicative of nonrecurring events such as crashes or weather, but not volume exceeding capacity. 154

Figure 5.95. I-35 in Lakeville travel time. The travel time CDF curve shown in Figure 5.96 shows that there is a minimal difference between the two curves. This means that this area does not see much congestion or unreliable travel overall or during the morning peak period. Figure 5.96. I-35 in Lakeville travel time CDF curve. 155

Additional comments about the process for this bare minimum analysis are highlighted in Figure 5.97. Figure 5.97. I-35 in Lakeville. The graphics shown in Figure 5.94 through Figure 5.96 took a single analyst approximately 1 day to complete, so from a time and cost-effectiveness perspective, this process could be applied to a larger system. An audience member stated that this process could work well and that it is a good measure of the magnitude of a problem. The participant also stated that there is definitely a place for it and that it has already been usefully implemented on a project. The presenter commented that this is good to hear and that this can be another tool for the toolbox. The final graphic the presenter showed for this example was a level-of-effort graph, which is shown in Figure 5.98. The graphic illustrated that for a 1-year historical look at a single facility, the level of effort was actually rather low. The level of effort would increase as more years were examined for the same facility and would compound as the number of years, corridors, and delay regimes increased. 156

Figure 5.98. Level-of-effort graph. An audience member commented that a day of analysis per facility is definitely more reasonable than a week per facility. The participant asked which of the more detailed areas (weather or events) provides a better “bang for your buck” if one wanted a more detailed analysis. The presenter explained that there is indeed a trade-off between the two. Value of a Shoulder Presenter: Ryan Loos of SRF Consulting Group The Minnesota team had an opportunity to apply the L07 tool outside of the SHRP 2 pilot testing work. A client had an extremely congested highway with specific right-of-way restrictions. The team wanted to use the L07 tool to determine if there was value to two proposed alternatives for the roadway. The two alternatives provided by the client are listed in Figure 5.99. 157

Figure 5.99. Value of a shoulder. The team used the L07 tool (see Figure 5.100) as a supplement to benefit-cost work done previously. The two alternatives were segmented to create an “apples to apples” comparison. Additional information about the project setup is shown in Figure 5.101. Figure 5.100. L07 tool. 158

Figure 5.101. Value of a shoulder. Figure 5.102 shows the two alternatives that were evaluated. The top alternative is the eight-lane alternative with a narrow shoulder. The team had to make segments where the two alternatives were the same. In locations with different cross-sections, new segments were specified. Figure 5.102. Shoulder schematic. 159

Project assumptions and data sources are highlighted in Figure 5.103. Figure 5.103. Value of a shoulder. Nonrecurring congestion factors were considered in this application. The team did not have any incident or weather data, so the L07 tool defaults were used for the analysis. There was an event trip generator near the study highway, so event volume increases were included. No work zone data were available at the time of analysis, and they were not included. Results of the analysis are shown in Figure 5.104. 160

Figure 5.104. Value of a shoulder analysis results. The results show that as the volume/capacity ratio increases, the road has less ability to handle those nonrecurring conditions. An audience member commented that this example is confusing because two options with varying capacities are being compared. The participant was wondering if the team could control that and if a dynamic shoulder was considered. The presenter responded, saying that these were the two alternatives provided by the client and that it was understood that they are different and that the team was trying to determine the difference between the two, while ignoring traffic diversion. Another audience member questioned the graph in Figure 5.104, saying that if it was extended backwards, it would taper at 100. The participant said that this is unreasonable and appears to be inaccurate. A different audience member responded, saying that typically the graph would be flipped so it would flatten at the top. The presenter added that the shape of this curve shows the difference between nonrecurring congestion and what is known about recurring congestion. As the years go on (x-axis) the volume increases, so the roadway is less able to handle these nonrecurring situations. Figure 5.105 shows another output from this analysis and displays the difference between the two alternatives over the course of a single day. The nonrecurring congestion is highest where there is already high volume. 161

Figure 5.105. Year 2040 hourly nonrecurring delay comparison. Figure 5.106 illustrates that the nonrecurring delay is much more evident in the narrow lanes alternative with minimal shoulders (top example) than in the standard lanes alternative. Figure 5.106. Daily delay summary. 162

The conclusions and considerations from this analysis are highlighted in Figure 5.107. Figure 5.107. Value of a shoulder: conclusions and considerations. Florida Reliability Report Presenter: Todd Polum of SRF Consulting Group Florida is a leader in travel time reliability, where the department of transportation (DOT) and research universities have established several methods to further understand travel time reliability (see Figure 5.108). The Florida Reliability Report is a snapshot for every segment of all statewide freeways. The report summarizes the reliability performance using a series of indices presented in a tabular format (see Figure 5.109). The reliability indices used in the Florida Reliability Report are shown in Figure 5.110 and Figure 5.111. 163

Figure 5.108. Florida reliability. Figure 5.109. Florida Reliability Report. 164
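The specific indices tabulated in the Florida Reliability Report are not reproduced here, but two reliability indices in wide national use, the planning time index and the buffer index, illustrate the kind of measures such a report can carry. The sketch below computes both from a travel time sample; the function name and the synthetic data are illustrative only, and Florida's definitions may differ.

```python
import numpy as np

def reliability_indices(travel_times_min, free_flow_tt_min):
    """Compute two widely used reliability indices for one segment.

    These are the FHWA-style planning time index and buffer index; the indices
    actually tabulated in the Florida Reliability Report may be defined differently.
    """
    tt = np.asarray(travel_times_min, dtype=float)
    tt95 = np.percentile(tt, 95)
    mean_tt = tt.mean()
    return {
        "planning_time_index": tt95 / free_flow_tt_min,   # 95th percentile vs. free flow
        "buffer_index": (tt95 - mean_tt) / mean_tt,       # extra buffer a traveler should allow
    }

# Example with made-up observations for one freeway segment
rng = np.random.default_rng(2)
sample = rng.lognormal(mean=2.7, sigma=0.2, size=10_000)
print(reliability_indices(sample, free_flow_tt_min=13.0))
```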

Figure 5.110. Reliability indices. Figure 5.111. Reliability indices. The presenter asked for feedback about Figure 5.111, specifically whether or not the graph shown would be easier for audiences to understand or if the reliability indices are clearer. Either way, when the data are available, both the report and surface plots can be produced. At this point an audience member asked what the future of the Florida Department of 165

Transportation (DOT) using this information looks like. The presenter responded that the team is hopeful but does not know for certain, and he added that the reliability process started “bottom- up,” meaning that technical staff has developed these reports, but the results have not yet been institutionalized in the decision-making process. Wisconsin Department of Transportation: Benefit-Cost Enhancement Presenter: Paul Morris of SRF Consulting Group To provide context for this example, Dawn Krahn from the Wisconsin Department of Transportation (WisDOT) provided a short introduction to the project. The project objectives are shown in Figure 5.112. Figure 5.112. WisDOT project objectives. WisDOT is incorporating travel time reliability concepts into its benefit-cost procedure. For major highway projects—greater than $30 million and adding capacity of either 5 miles or more to an existing alignment or 2.5 miles to a new alignment—the improvements need to be approved by state legislature. WisDOT has a formal evaluation procedure for these major projects, which includes a benefit-cost analysis. The current process takes into account efficiency, recurring congestion, and costs over a 52-year facility life. Data about delay are missing because weather and other nonrecurring events are not considered. When the roadways get close to capacity, this can become an issue resulting in significant travel delays. The goals of this project are shown in Figure 5.113. The main goal was to understand the causes of unreliability at different times of day. WisDOT has been working with SRF Consulting Group to capture delay from nonrecurring 166

sources and will apply the results of the analysis to evaluations of projects in spring–summer 2014. Figure 5.113. Reliability project goals. The project evaluation process is shown in Figure 5.114 and Figure 5.115. Figure 5.114. Project evaluation. 167

Figure 5.115. Project evaluation process. Figure 5.116 shows the anticipated process for estimating travel times by generating hourly demand throughout 1 year and then estimating travel times for each hour for normal, rain, snow, crash, and incident conditions. Finally, a weighted average travel time is computed based on the probability of each of these conditions occurring. Figure 5.116. Travel time model process. 168
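The last step in Figure 5.116, weighting condition-specific travel times by the probability of each condition, can be illustrated with a few lines of arithmetic. The probabilities and travel times below are placeholders, not WisDOT values.

```python
# Illustrative weighted-average travel time for one hour of the year, following the
# structure in Figure 5.116. All numbers are made up for the example.
conditions = {
    #             probability, travel time (min)
    "normal":    (0.82, 14.0),
    "rain":      (0.08, 16.5),
    "snow":      (0.04, 21.0),
    "crash":     (0.02, 28.0),
    "incident":  (0.04, 19.0),
}

# The condition probabilities for the hour should cover all possibilities.
assert abs(sum(p for p, _ in conditions.values()) - 1.0) < 1e-9

expected_tt = sum(p * tt for p, tt in conditions.values())
nonrecurring_delay = expected_tt - conditions["normal"][1]
print(f"Expected travel time: {expected_tt:.2f} min "
      f"({nonrecurring_delay:.2f} min attributable to nonrecurring conditions)")
```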

The team selected a group of highway sample sections that would be used to collect and analyze traffic and travel time data to develop this model. The first step was to categorize the highways by road type and facility function, as shown in Figure 5.117. Figure 5.117. Section categorization. The locations selected as the final sample sections are shown in Figure 5.118. 169

Figure 5.118. Highway sample sections. Following selection of the sample sections, traffic volume data collected from automatic traffic recorder (ATR) sites were reviewed. These data were summarized by the month of the year and the day of the week to identify different demand patterns. The team found that not every day fits neatly into a category. Intuitively, weekdays would all be similar, but some of the highest demand days of the year were on or near holidays, which may or may not be a weekday (see Figure 5.119). 170
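A simple way to start the day-type summarization described above is to group daily ATR volumes by month and day of week and then pull holidays out as their own categories. The sketch below does this with synthetic volumes and only two fixed-date holidays, so it produces far fewer categories than the 109 day types the team ultimately identified.

```python
import numpy as np
import pandas as pd

# Assumed input: daily volumes from one ATR site for a full year.
days = pd.date_range("2012-01-01", "2012-12-31", freq="D")
rng = np.random.default_rng(3)
atr = pd.DataFrame({"date": days,
                    "daily_volume": rng.normal(40_000, 4_000, size=len(days)).round()})

# A handful of fixed-date holidays for illustration; the actual work identified
# 25 holiday- or event-related day types, not just these.
holidays = {pd.Timestamp("2012-07-04"): "Independence Day",
            pd.Timestamp("2012-12-25"): "Christmas Day"}

atr["month"] = atr["date"].dt.month_name()
atr["dow"] = atr["date"].dt.day_name()
atr["day_type"] = atr["month"] + " " + atr["dow"]
atr.loc[atr["date"].isin(list(holidays)), "day_type"] = atr["date"].map(holidays)

profile = atr.groupby("day_type")["daily_volume"].mean().sort_values()
print(profile.tail())  # highest-volume day types
print(atr["day_type"].nunique(), "distinct day types in this toy grouping")
```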

Figure 5.119. Annual traffic profile. For example, Figure 5.120 illustrates that Christmas Day is a unique day in December; it is extremely different from any other Tuesday in that month. Figure 5.120. Holiday traffic profiles. 171

In the end, the team found 109 different unique day types, 25 of which are holiday- or event-related (see Figure 5.121). Figure 5.121. Annual traffic profiles. An audience member asked if any of the unique days could be disregarded. The presenter answered, saying that those days may be the most important. The participant commented that it seems as though this might be too much detail and too much data. Another audience member added that if the work was done upfront, subsequent analyses would not be needed to perform this work. INRIX speed data were obtained for the sample sections for the years 2010 through 2012 (see Figure 5.122). This was an enormous data set, which could not be opened in a program such as Microsoft Excel. 172

Figure 5.122. INRIX speed data. The team needed to confirm the reliability of the speed data, so early on in the project a check was performed on a test highway. Figure 5.123 illustrates the number of records based on time of day. The green bars represent real-time data, the red bars represent historical data, and the blue bars represent free- flow speed. It was encouraging that the majority of the data are real-time data. 173

Figure 5.123. INRIX speed data. Nonrecurring event data were also obtained for this project. Figure 5.124 shows the sources of weather, crash, and incident data. 174

Figure 5.124. Weather/crash/incident data. A model estimation process was developed. The graph in Figure 5.125 shows that when volume increases, at a certain point, travel time increases exponentially. This relationship is frequently used in travel demand and other planning-level tools and will be used to estimate travel times in this process. 175
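The volume-to-travel-time relationship in Figure 5.125 is commonly represented in planning tools with a volume-delay function; one familiar form is the Bureau of Public Roads (BPR) curve sketched below. The alpha and beta values shown are the conventional defaults, not parameters estimated from the Wisconsin data.

```python
def bpr_travel_time(free_flow_tt_min, volume_vph, capacity_vph, alpha=0.15, beta=4.0):
    """Bureau of Public Roads (BPR) volume-delay function.

    Travel time grows slowly at low volume/capacity ratios and sharply as demand
    approaches capacity. The alpha/beta values are the conventional defaults; a
    model estimation effort would fit its own parameters to observed data.
    """
    vc_ratio = volume_vph / capacity_vph
    return free_flow_tt_min * (1.0 + alpha * vc_ratio ** beta)

for vc in (0.5, 0.8, 1.0, 1.2):
    tt = bpr_travel_time(free_flow_tt_min=10.0, volume_vph=vc * 2000, capacity_vph=2000)
    print(f"v/c = {vc:.1f} -> travel time {tt:.1f} min")
```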

Figure 5.125. Model estimation process. The team is currently in the model estimation and development process. Additional example applications are highlighted in Figure 5.126. Figure 5.126. Example applications. 176

Other Applications Presenter: Mike Sobolewski of the Minnesota Department of Transportation There are many other applications where travel time reliability evaluation can be used. There is some functionality of these tools to many different programs, which are highlighted in Figure 5.127 to Figure 5.129. CMSP and the other programs shown on Figure 5.129 all have the potential to incorporate evaluation of travel time reliability. Figure 5.127. CMSP primary screening example application. 177

Figure 5.128. CMSP secondary screening example application. Figure 5.129. Example applications. 178

Conclusions and Next Steps Closing Travel Time Reliability Survey Presenter: Renae Kuehl of SRF Consulting Group A travel time survey that was administered at the beginning of the workshop was taken again by the workshop participants, to gauge the level of knowledge they gained after participating in the Minnesota Reliability Workshop. The results from the afternoon survey were compared with the results from the morning survey and are shown (in percentage of total votes) in Figure 5.130 through Figure 5.138. Note that Question 1 was not repeated in the afternoon session. Upon reviewing the survey results, the team was pleased to find that the majority of the participants’ understanding of travel time reliability increased. In addition, after hearing the workshop presentations, many participants indicated that they would consider using travel time reliability evaluation in the future. Figure 5.130. Question 1, morning results. 179

Figure 5.131. Question 2 results. Figure 5.132. Question 3 results. 180

Figure 5.133. Question 4 results. Figure 5.134. Question 5 results. 181

Figure 5.135. Question 6 results. Figure 5.136. Question 7 results. 182

Figure 5.137. Question 8results. Figure 5.138. Question 9 results. Next Steps The presenter told the audience that the pilot teams are hoping to receive feedback and hear questions about the work that has been completed thus far. 183

Ongoing SHRP 2 Projects The determination of the reliability ratio is an important factor, as it is an assumed parameter in many reliability analysis tools. This number is different for different highway users, and different users have different needs as far as reliability is concerned. SHRP 2 research project L35 is currently using two approaches to arrive at a value for reliability ratio: • Surveys were administered at the University of Arizona, where the participants stated their individual preference for route based on historical travel times. • The University of Maryland is currently using analytical methods to determine the reliability ratio. SHRP 2 is also hosting reliability workshops as part of the L36 project, with the goal of having young professionals from numerous agencies attend the workshops and become champions of reliability for their agency. In closing, the Minnesota pilot team thanked all of the participants for taking part in this event. The audience was invited to contact the team if they had any questions and to stay tuned for more information from SHRP 2 and the pilot team. Key Findings Below is a list of lessons learned, areas of concern, and areas of support for each SHRP 2 reliability product discussed at the workshop. Project L02 Guidelines • Analysts using the tool need to be cognizant of their audience and generate reports from the tool that will connect with them. Outputs range from single values to detailed graphs, and audiences range from decision makers to the general public. • Level of effort is directly related to the detail of results. For example, 1 year of historical travel time information for a single segment can be processed in a day, while system- level analysis broken down by delay regime could potentially take months. • The tool could be used to determine what the specific sources of delay were, so that specific treatments could be focused to address these conditions. • For larger-scale analysis, data storage can become an issue. • Documentation and guidance regarding the collection of data for use in this tool should be provided, so that agencies with different sources of data can adapt data to meet the needs of the tool. • Stakeholders were supportive of the potential of this tool to be used to categorize historical data by delay type, to provide information for a project-level evaluation, and to be used in the planning and programming process. 184

Project L07 Tool • Audience members expressed concern about the level of effort required to perform a fully detailed analysis, and they expressed concern that in some cases detailed data would not be available at all. • There was also concern expressed about the level of effort required to perform a system- level analysis using this tool. It was suggested that the L02 tool be used as an initial screening to identify potential high-reward corridors before performing a detailed analysis. • It was expressed that if a fully detailed analysis did not substantially increase the accuracy of the tool output, certain categories should be targeted first to help increase the accuracy of the analysis. It was suggested that crash and incident occurrence and duration be looked at before weather. • One audience member mentioned that there was skepticism on how the tool was computing benefits for each treatment, since they are often based on case studies, and that more analysis would be required for the audience members to become comfortable with the results. Project L05 Guide • The L05 tool is less of a technical tool compared with L02 and L07 and more of a guidance strategy for implementing reliability. • A survey conducted during the workshop demonstrates a need for reliability education, as the definition of reliability currently varies between transportation professionals and agencies. • There were concerns with how to institutionalize reliability between urban and rural areas. • The survey also revealed barriers that exist to institutionalize reliability, such as level of effort and staff capabilities. 185

CHAPTER 6 REFINED TECHNICAL ANALYSIS Alternative Time Intervals The majority of the reliability analysis performed by the pilot team used 5-minute intervals; however, it was never determined if this was the optimal length. The travel time reliability monitoring system (TTRMS) had originally been designed to handle varying time intervals, and the team investigated alternative time intervals to determine the optimal interval to be used in future reliability work. To conduct a reliability analysis using the TTRMS developed by the pilot team, the travel time and VMT data must be obtained using the applicable time interval. Travel time and VMT data are downloaded using TICAS; they are available in eight select intervals of 1, 2, 3, 5, 10, 15, 30, and 60 minutes. The crash, incident, and weather information can be applied to any size time interval, as this information is stored by event occurrence. Therefore, the same inputs can be used for any time interval. The team downloaded travel time and VMT data for 2012 for two facilities: westbound I- 94 from TH-61 in Saint Paul to TH-55 in Minneapolis, and northbound TH-100 from 77th Street in Edina to 57th Avenue in Brooklyn Center. These data were downloaded in intervals of 3, 5, 10, 15, 30, and 60 minutes. The observation and delay pie charts for westbound I-94 are shown in Figure 6.1 and Figure 6.2. 186

Figure 6.1. Westbound I-94 observation pie charts. Figure 6.2. Westbound I-94 delay pie charts. The observation and delay pie charts for TH-100 are shown in Figure 6.3 and Figure 6.4. 187

Figure 6.3. Northbound TH-100 observation pie charts. Figure 6.4. Northbound TH-100 delay pie charts. 188

These pie charts show that as the size of the time interval increases, the amount of delay occurring during the periods with nonrecurring conditions increases. This is because the database methodology was to apply nonrecurring conditions to any interval in which they were present for any length of time. From these pie charts, the team concluded that longer time intervals (10 minutes or longer) would likely require a modified approach. For example, the condition must be present for at least 50 percent of the interval for the interval to be categorized in that condition. A potential downside of this approach, however, is that longer time intervals may omit short-duration incidents. CDF curves were also developed for the data evaluated using different time intervals. The CDF curves for the different time intervals are shown in Figure 6.5. Figure 6.5. Westbound I-94 CDF curves for alternative time intervals. (The figure plots cumulative percentage against travel time in minutes for the 3-, 5-, 10-, 15-, 30-, and 60-minute intervals.) This figure shows that the CDF curves for the varying time intervals are very similar. The only noticeable difference is that the 60-minute curve has slightly shorter travel times in the 80th to 95th percentile range. These results are expected to be due to the longer time intervals averaging out some of the high travel time observations. 189
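The modified categorization approach suggested above for longer intervals, counting an interval as being under a nonrecurring condition only when the condition covers at least half of the interval, can be expressed compactly. The 50 percent threshold follows the example in the text; the crash times and interval lengths below are hypothetical.

```python
from datetime import datetime, timedelta

def condition_coverage(interval_start, interval_len_min, event_start, event_end):
    """Fraction of a time interval during which an event (crash, weather, etc.) was active."""
    interval_end = interval_start + timedelta(minutes=interval_len_min)
    overlap_start = max(interval_start, event_start)
    overlap_end = min(interval_end, event_end)
    overlap = max(timedelta(0), overlap_end - overlap_start)
    return overlap / (interval_end - interval_start)

def categorize(interval_start, interval_len_min, event_start, event_end, threshold=0.5):
    """Assign the interval to the event condition only if coverage meets the threshold.

    The original database tagged an interval if the condition was present at all;
    this mirrors the modified approach suggested for 10-minute-and-longer intervals.
    """
    cov = condition_coverage(interval_start, interval_len_min, event_start, event_end)
    return "event" if cov >= threshold else "normal"

crash_start = datetime(2012, 6, 1, 7, 12)
crash_end = datetime(2012, 6, 1, 7, 41)
for minutes in (5, 15, 30, 60):
    label = categorize(datetime(2012, 6, 1, 7, 0), minutes, crash_start, crash_end)
    print(f"{minutes:>2}-minute interval starting 7:00 -> {label}")
```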

The consistency of the CDF curves for alternative time intervals was also compared statistically. This was done using two different tests: the first was a t-test, and the second computed the correlation. These tests were performed by matching the travel time for each percentile in the distribution (1 percent to 100 percent in 1 percent increments) of one time interval to another. The results of the t-test are shown in p-values, where values closer to one indicate a strong fit between the data sets. Similarly, correlation values close to one indicate that changes in one data set are reflected in the other. Table 6.1 shows the t-test results and demonstrates that the cumulative travel times for the alternative time intervals are highly consistent. The p-values in Table 6.1 for all intervals are very close to one, which indicates that all travel times come from the same population group, and there is no significant evidence to prove they are not the same. The only exceptions are among the 30- and 60-minute intervals, which have p-values below 0.9, indicating lower confidence in the consistency of the data.

Table 6.1. CDF t-test P-value Statistics
         3 min   5 min   10 min  15 min  30 min  60 min
3 min    NA
5 min    0.990   NA
10 min   0.981   0.971   NA
15 min   0.939   0.929   0.958   NA
30 min   0.943   0.953   0.924   0.883   NA
60 min   0.850   0.860   0.833   0.794   0.908   NA

The correlation tests were performed by calculating the correlation between each cumulative travel time group. The high correlation coefficients, close to one, revealed that travel times from the different time intervals are highly correlated (see Table 6.2). This means that no obvious difference in cumulative travel time for the different time intervals was detected. Again, only the 30- and 60-minute intervals were found to have correlation coefficients less than one.

Table 6.2. CDF Correlation
         3 min   5 min   10 min  15 min  30 min  60 min
3 min    NA
5 min    1.000   NA
10 min   1.000   1.000   NA
15 min   1.000   1.000   1.000   NA
30 min   0.999   0.999   1.000   1.000   NA
60 min   0.996   0.997   0.998   0.998   0.999   NA

The team also calculated the total annual delay for the year 2012 along both facilities, which is shown graphically in Figure 6.6. This is an instructive figure, because it shows that the annual delay is consistent at time intervals of 15 minutes or less but begins to decline at the 30- and 60-minute intervals. The results are expected to be due to the high travel times being averaged out in the longer time intervals. 190
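The percentile-matching comparison described above can be reproduced in outline as follows. The synthetic travel times, the interval sizes, and the use of a two-sample t-test from scipy are assumptions for illustration; the team's exact test formulation may have differed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Stand-ins for corridor travel times aggregated at two different interval lengths.
tt_5min = rng.lognormal(mean=2.6, sigma=0.15, size=105_408)   # 5-minute intervals in 2012
tt_60min = rng.lognormal(mean=2.6, sigma=0.14, size=8_784)    # 60-minute intervals in 2012

# Match the two distributions percentile by percentile (1% to 100%).
pct = np.arange(1, 101)
q_5min = np.percentile(tt_5min, pct)
q_60min = np.percentile(tt_60min, pct)

# A t-test on the matched percentiles and the correlation between them.
t_stat, p_value = stats.ttest_ind(q_5min, q_60min)
corr = np.corrcoef(q_5min, q_60min)[0, 1]
print(f"p-value = {p_value:.3f}, correlation = {corr:.4f}")
```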

Figure 6.6. Total annual delay. (The figure plots total delay in thousands of minutes by time interval for TH-100 and I-94.) In addition to computing the total annual delay, the analysis requirements for the different time intervals were quantified and are shown in Figure 6.7. This is a helpful graph, because it illustrates the staff and computing resources required, which decrease sharply from 1, 2, 3, and 5 minutes and flatten out at 10 to 60 minutes. Figure 6.7. Analysis requirements. (The figure plots processing time and download time in hours and file size in megabytes by time interval.) 191

The team concluded that the ideal time interval appears to be in the 10- to 15-minute range. These time intervals optimize the trade-off between accuracy of aggregate performance measures and staff and computing resources. In addition, intervals over 5 minutes may help to smooth the speed data collected by loop detectors at lower volumes. Disaggregation of Delay Causes Throughout the pilot testing process, the team was concerned that pie charts developed early in the study often had an unreasonable amount of delay attributed to nonrecurring factors, particularly events (see Figure 6.8). This is because a portion of the event delay was likely due to normal delay; however, all delay occurring during an event time period was attributed to events. Figure 6.8. Delay distribution. Two methods were developed to separate this recurring delay from the nonrecurring delay. In the first method, regression was used to estimate the normal conditions travel time based on a speed-flow plot similar to the one shown in Figure 6.9 for westbound I-94 in 2012. 192

Figure 6.9. Speed-flow plot example. This method was found not to be highly successful, due to high variability in underlying travel times and impacts of nonrecurring conditions. It was determined that more detailed research is needed for this method to be useful in mainstream reliability analysis. The second method used a simplified approach of computing the average travel time for each 5-minute time period on weekdays and weekends and subtracting that value from the travel time for weather, crash, incident, event, and road work conditions. The average travel time for normal conditions was computed for 5-minute intervals by time of day. This graph is shown in Figure 6.10 for I-94 in 2012. Figure 6.10. Average weekday travel times for I-94. (The figure plots the average, maximum, and minimum travel time in minutes by time of day.) 193
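A minimal sketch of the simplified second method is shown below: average travel times are computed for each 5-minute time-of-day slot, separately for weekdays and weekends, and subtracted from the observed travel times, with negative differences treated as zero, as noted in the discussion of Figure 6.11 that follows. The column names, the condition labels, and the synthetic data are placeholders for the team's TTRMS database.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
idx = pd.date_range("2012-01-01", "2012-12-31 23:55", freq="5min")
obs = pd.DataFrame({
    "timestamp": idx,
    "travel_time_min": rng.lognormal(2.6, 0.15, size=len(idx)),
    # condition assigned to each interval by the TTRMS: normal, weather, crash, event, ...
    "condition": rng.choice(["normal", "weather", "crash", "event"],
                            p=[0.9, 0.05, 0.03, 0.02], size=len(idx)),
})

# Average travel time for each 5-minute time-of-day slot, weekdays and weekends separately.
obs["is_weekend"] = obs["timestamp"].dt.weekday >= 5
obs["tod"] = obs["timestamp"].dt.time
baseline = obs.groupby(["is_weekend", "tod"])["travel_time_min"].transform("mean")

# Extra (nonrecurring) delay = observed minus baseline, floored at zero, matching the
# note that negative differences were treated as no contribution from the factor.
obs["extra_delay_min"] = (obs["travel_time_min"] - baseline).clip(lower=0)
obs.loc[obs["condition"] == "normal", "extra_delay_min"] = 0.0

print(obs.groupby("condition")["extra_delay_min"].sum().round(1))
```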

Figure 6.11 shows the database the team used to separate the delay caused by nonrecurring factors. The difference between the observed travel time under the nonrecurring congestion (Column B: yellow) and the average travel time (Column E: first pink column) gives the delay caused by the nonrecurring factor (Column F: second pink column). Note that if this difference was less than zero, the team assumed that the factor did not contribute to the delay. Using this method, the team was able to separate the delay that was previously attributed exclusively to the nonrecurring factors into the normal delay and the actual delay caused by those factors.

Figure 6.11. Disaggregation of delay (sample database records with columns for time stamp, travel time, VMT, delay, base travel time, extra travel time, event delay, total extra delay, normal delay, and total delay).

After disaggregating the delay, updated pie charts, similar to Figure 6.12, were developed for the various time intervals. In these updated pie charts, the proportion of delay caused by the nonrecurring factors is reduced significantly, particularly for special events. This is especially apparent in the combinations category, because the original pie chart included a large amount of delay due to recurring congestion. After reviewing these results, the team agreed that this approach produced a more reasonable distribution of delay.
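A minimal sketch of this subtraction logic is shown below, assuming the records live in a pandas DataFrame with columns named timestamp, travel_time, and condition. The column names, the weekday/weekend split, and the "normal" condition label are illustrative assumptions rather than the structure of the team's actual database.

```python
# Sketch of the second disaggregation method: subtract the time-of-day average
# travel time from the travel time observed during a nonrecurring condition,
# flooring negative differences at zero.
import pandas as pd

def disaggregate_delay(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["weekend"] = df["timestamp"].dt.dayofweek >= 5
    df["tod_bin"] = df["timestamp"].dt.floor("5min").dt.time

    # Average travel time for each 5-minute period of the day, computed from
    # normal-condition observations, with weekdays and weekends kept separate.
    base = (
        df[df["condition"] == "normal"]
        .groupby(["weekend", "tod_bin"])["travel_time"]
        .mean()
        .rename("base_tt")
        .reset_index()
    )
    df = df.merge(base, on=["weekend", "tod_bin"], how="left")

    # Delay attributed to the nonrecurring factor; a negative difference means
    # the factor did not contribute, so it is set to zero.
    df["nonrecurring_delay"] = (df["travel_time"] - df["base_tt"]).clip(lower=0)
    return df
```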

Figure 6.12. Updated pie chart.

Demand Regimes

The Project L02 guide, along with several other examples the team researched, frequently divided travel time observations into high, medium, and low regimes; an example is shown in Figure 6.13. The purpose and methodologies associated with identifying these regimes were not clear based on this information, so treatment of demand regimes was not incorporated in the early stages of TTRMS development. To gain an understanding of separating travel time observations into demand regimes, the pilot team explored this topic as part of the Task 7 effort. The process and findings of this work are summarized in this section.

Figure 6.13. Project L02 example CDF graph with high, medium, and low demand regimes.

A process was developed to identify the critical speed, density, and capacity at which congestion is observed to begin. First, the team converted the volume data to flow and created a flow versus speed plot of westbound I-94 data for 2012, which is shown in Figure 6.14. This plot was used to determine the critical speed. The critical speed corresponds to the highest capacity (or the 95th percentile capacity) that the test segment can handle. The purpose of deriving the critical speed is to test whether speed changes as traffic approaches capacity under different traffic scenarios.

Figure 6.14. Flow vs. speed plot.

Next, density was calculated using the flow and the number of lanes. A density versus flow plot was developed to determine the critical density, which is shown for 2012 I-94 data in Figure 6.15. The critical density also corresponds to the highest flow (or 95th percentile flow) of the test segment. Generally, congestion occurs after traffic reaches this critical density, so volumes above this point, derived from normal conditions, were classified as high demand volumes.
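The sketch below shows one way the critical values described above could be derived from interval data, assuming a DataFrame with flow (veh/h) and speed (mph) columns and a known lane count. Treating the 95th percentile flow as the capacity proxy follows the description in the text; using the median speed and density among near-capacity observations is an additional simplifying assumption.

```python
# Sketch: derive critical speed and density from flow-speed observations.
import pandas as pd

def critical_thresholds(df: pd.DataFrame, lanes: int):
    """Return (95th percentile flow, critical speed, critical density)."""
    df = df.copy()
    # Density (veh/mi/lane) from the fundamental relation flow = speed * density,
    # applied to per-lane flow.
    df["density"] = df["flow"] / lanes / df["speed"]

    flow_95 = df["flow"].quantile(0.95)
    near_capacity = df[df["flow"] >= flow_95]

    # Speed and density typical of near-capacity operation (one interpretation).
    critical_speed = near_capacity["speed"].median()
    critical_density = near_capacity["density"].median()
    return flow_95, critical_speed, critical_density
```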

Figure 6.15. Density vs. flow plot.

The capacity thresholds for the crash, incident, event, weather, road work, and combination regimes were lower than the threshold for normal conditions. Volumes below these points were classified as low demand volumes (see Figure 6.16). This analysis was repeated for each of the nonrecurring factors.

Figure 6.16. Demand thresholds.

Volumes between the high and low thresholds were assigned to the medium demand regime, with the rationale that volumes in this range would not cause congestion under normal conditions but would likely cause congestion in the presence of nonrecurring factors. The team concluded that this is the range where the Minnesota Department of Transportation (MnDOT) has the potential to address reliability concerns. The thresholds for the high, medium, and low demand regimes are shown in Table 6.3.

Table 6.3. Demand Regime Thresholds

Demand Regime    Volume Threshold
Low              < 4,800
Medium           4,800–6,000
High             > 6,000

The volumes observed on I-94 were summarized in a CDF curve (see Figure 6.17) to illustrate the proportion of time spent in the different volume ranges. Along I-94, volumes from roughly the 90th to the 100th percentile fell in the medium range; that is, about 10 percent of the time.
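Classifying observed volumes against the Table 6.3 thresholds and summarizing the share of time in each regime is a small computation; a sketch is shown below. The thresholds are taken from Table 6.3, while the use of a pandas Series of interval volumes and the handling of boundary values are assumptions for illustration.

```python
# Sketch: classify interval volumes into the Table 6.3 demand regimes and
# report the share of observations in each regime.
import pandas as pd

LOW_MAX, HIGH_MIN = 4800, 6000  # Table 6.3 thresholds

def regime_shares(volumes: pd.Series) -> pd.Series:
    regimes = pd.cut(
        volumes,
        bins=[0, LOW_MAX, HIGH_MIN, float("inf")],
        labels=["low", "medium", "high"],
        right=False,
    )
    # Share of observations in each regime; a large "medium" share flags a
    # facility with high potential for reliability improvement.
    return regimes.value_counts(normalize=True).sort_index()
```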

Figure 6.17. Demand regime thresholds.

A potential use of this approach in future evaluations would be to identify facilities that have a high percentage of traffic in the medium demand regime. This would indicate a high potential to improve reliability along such a facility.

SMART Signal Traffic Data

Evaluation of travel time reliability performed as part of the pilot testing focused almost exclusively on freeway facilities in the Twin Cities metropolitan area. This approach was largely based on the availability of data. Freeways in the Twin Cities are instrumented with loop detectors every half mile, and the data collected are available through online access utilities such as DataExtract and TICAS. There was also a desire to expand the evaluation to include signalized highways; however, this would require alternative methods of data collection and analysis.

TH-13, between County Road 5 in Burnsville and Yankee Doodle Road in Eagan, was identified as a promising candidate for this analysis. While this highway is not fully instrumented in the same way as the freeways, it is equipped with loop detectors between signals and interconnected signal controllers. The University of Minnesota affiliate startup company SMART Signal Technologies, Inc., has developed data processing technologies to turn this information into performance measures, including travel time, number of stops, queue length, intersection delay, and level of service. SMART Signal Technologies has also developed a web interface to make this information available to MnDOT and its partners. This application, called iMonitor, allows users to view travel time and volume data for all, or a portion, of the facility at specific time stamps.

The pilot team explored this interface; however, some limitations were identified that reduce its utility for reliability evaluation. First, each data record must be downloaded individually, requiring thousands of manual steps to obtain a meaningful sample of data over months or years.

Second, along the portion of TH-13 the team was analyzing, only 8 days of data were found to be available in the 2012 to 2013 time frame.

The pilot team contacted SMART Signal Technologies to discuss alternative methods for processing and collecting the desired travel time and volume data. SMART Signal Technologies agreed that the iMonitor program was not the appropriate tool for this application and noted that a custom program could generate these specific outputs for the study highway and time period. The cost associated with this work, while quite reasonable, could not be accommodated within the pilot testing effort, but the Minnesota pilot team will seek opportunities to utilize SMART Signal data in future evaluations of signalized highways.

Updated L07 Benefit-Cost Tool

The updated L07 tool (obtained April 2014) includes a number of improvements implemented since the previous version. First, an initial project interface improves the ease of use; it enables the user to create new projects or open previous projects when the tool is activated. Second, the updated L07 tool allows the user to save projects in dedicated files (with L07 extension), and those files are easily reloaded for future use. This feature was not available in the previous version, where data would be lost once the tool was closed. Third, users are able to develop demand data with L07DemandGen, a demand-generating companion utility to the L07 analysis tool. This reduces manual typing of volume data into the tool, automates the calculations, and exports a demand file that can be read into the L07 tool directly.

Despite these useful improvements, some issues with the L07 tool remain. The primary concern is that some information entered in the incident input pane is not properly used by the tool. This was observed for non-crash incidents, where results showed no difference between the default and the user-defined number of non-crash incidents. Upon further investigation, it was discovered that when different numbers of crashes and incidents are specified by the user, the percentage of crashes and non-crash incidents does not update correctly. Users must calculate the percentage themselves and enter it into the tool to use a specified number of incidents. Finally, the tool could be made more convenient to use if weather, event, and work zone data could be imported directly rather than entered manually, similar to the demand comparison tool.

To test the L07 tool’s sensitivity to default and detailed input values, three road segments were selected along westbound I-94 from downtown Minneapolis to downtown Saint Paul, and 12 scenarios were run for each road segment. The construction of the scenarios is displayed in Table 6.4. All scenarios use the same geometry, demand inputs, and number of crashes; they use defaults for all other inputs unless otherwise specified. The default scenario is defined as Scenario 1.
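The workaround described above, in which the analyst computes the crash and non-crash incident percentages and enters them manually, is simple arithmetic; a sketch follows. No L07 tool internals are assumed, and the example counts are placeholders loosely consistent with the roughly 55 percent non-crash share observed on the test segments.

```python
# Sketch of the manual workaround: compute the crash / non-crash split from
# observed counts so it can be typed into the L07 incident input pane.
def incident_percentages(n_crashes: int, n_noncrash_incidents: int):
    total = n_crashes + n_noncrash_incidents
    if total == 0:
        return 0.0, 0.0
    pct_crash = 100.0 * n_crashes / total
    return pct_crash, 100.0 - pct_crash

# Hypothetical counts for one segment and year.
print(incident_percentages(n_crashes=45, n_noncrash_incidents=55))  # (45.0, 55.0)
```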

Table 6.4. L07 Tool Test Scenarios

Scenario      Specified Information
Scenario 1    Number of crashes (default scenario)
Scenario 2    Number of crashes and crash duration
Scenario 3    Number of crashes and number of incidents
Scenario 3a   Number of crashes and number of incidents (percent)
Scenario 4    Number of crashes, crash duration, number of incidents, and incident duration
Scenario 4a   Number of crashes, crash duration, number of incidents (percent), and incident duration
Scenario 5    Number of crashes and detailed weather
Scenario 6    Number of crashes and event traffic
Scenario 7    Number of crashes and work zones
Scenario 8    Full detail (incident number)
Scenario 8a   Full detail (incident percent)

Total delay was the performance measure selected for testing the sensitivity of the tool. To compare the change in total delay across scenarios, the percent change in delay relative to the default scenario was calculated. The percent change in delay for the three test road segments is displayed in Figure 6.18.

Figure 6.18. Change in delay compared with default scenario (percent change by scenario for the three I-94 test segments, labeled I-394, I-35, and TH 280).

Scenario 2 shows that the L07 tool is sensitive to crash duration. The duration of crashes for I-94 near TH-280 is close to the L07 default, but the duration for the other two segments is much higher. The higher delay corresponds to the longer crash duration.
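The comparison plotted in Figure 6.18 is simply the percent change in total delay relative to the Scenario 1 default. A sketch of that calculation is shown below; the delay values are placeholders, not results from the report.

```python
# Sketch: percent change in total delay for each scenario relative to the
# Scenario 1 default. Delay totals are illustrative placeholders.
def percent_change(delay_scenario: float, delay_default: float) -> float:
    return 100.0 * (delay_scenario - delay_default) / delay_default

delays = {"1": 1000.0, "2": 1210.0, "3a": 960.0}  # hypothetical totals
for scenario, delay in delays.items():
    if scenario != "1":
        print(scenario, f"{percent_change(delay, delays['1']):+.1f}%")
```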

Scenario 3 shows no difference compared to the default scenario. Upon further investigation, it was determined that the L07 tool does not read the number of incidents if the incident duration is not specified. Scenario 3a works around the error identified in Scenario 3 by entering the incident share as a percentage, so the specified incident information is now applied. The negative change for Scenario 3a reflects the gap between the default and specified incident proportions: the default values overestimate the proportion of non-crash incidents (78 percent using the default values compared with an average of 55 percent for the test segments).

Scenario 4 indicates that the delay estimated by the tool is sensitive to the duration of both crash and non-crash incidents. Scenario 4a also corrects the incident proportion, as in Scenario 3a. A decrease in delay is detected compared with Scenario 4; it is presumed that this decrease occurs because the default incident duration is longer than the observed durations.

Scenarios 5, 6, and 7 reveal that the L07 tool is not sensitive to the weather, event, and road work inputs; changes in delay are less than 5 percent when these inputs are specified.

Scenarios 8 and 8a are fully detailed tests that include all of the specified inputs. Scenario 8 uses the default proportion of incidents, while Scenario 8a uses the actual proportion. As in Scenarios 3a and 4a, these results show that correcting the incident proportion decreases the delay estimated by the L07 tool on these segments. Overall, these scenarios show the greatest change in delay.

CHAPTER 7

Findings and Recommendations

This section highlights the key findings that were discovered through the pilot testing process. These topics are discussed in greater detail throughout this report; however, the following points provide a summary of critical takeaways for individuals and agencies considering adoption or exploration of the SHRP 2 reliability tools.

Project L02

Travel time and demand (expressed as facility VMT in the Minnesota case) were identified as the bare minimum data sources required to conduct a reliability evaluation. Other inputs required for a fully functioning travel time reliability monitoring system (TTRMS) include

• Weather: Obtained through MnDOT’s R/WIS, Weather Underground, and NOAA
• Crash: Readily available from Department of Public Safety crash records
• Incident: Aggregated from multiple sources coordinated through the RTMC
• Special Event: Aggregated from multiple sources such as sports schedules and concert venues
• Road Work: Aggregated from multiple sources including press releases and DMS messages

There is a spectrum for the level of effort that can be expended on travel time reliability evaluations. A minimal analysis consisting of only VMT and travel time records over a single year can be completed by a single analyst with approximately 1 day of work. The addition of more detailed elements such as weather, crash, and event data over multiple years can add significant time and effort to the evaluation. Understanding the trade-off between the level of effort and the resulting value of information is important for identifying the appropriate approach for any given reliability analysis.

A wide variety of graphical products and performance measures can be produced using results from the TTRMS database. The Minnesota pilot team found that the following items provide the greatest value to potential audiences:

• Surface plots
• Pie charts
• Cumulative distribution function (CDF) curves
• Reliability indices (a computational sketch follows this list)
• Comparison pie charts and bar charts
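As one example of the reliability indices item above, the sketch below computes three commonly used indices from a set of travel time observations. The definitions (travel time index, planning time index, buffer index) follow standard practice; the report does not specify which formulations the Minnesota team adopted, so this is illustrative only.

```python
# Sketch: common reliability indices derived from TTRMS travel time records.
import numpy as np

def reliability_indices(travel_times, free_flow_tt):
    tt = np.asarray(travel_times, dtype=float)
    mean_tt = tt.mean()
    tt95 = np.percentile(tt, 95)
    return {
        # Average conditions relative to free flow.
        "travel_time_index": mean_tt / free_flow_tt,
        # Near-worst-case (95th percentile) conditions relative to free flow.
        "planning_time_index": tt95 / free_flow_tt,
        # Extra buffer a traveler needs beyond the average trip.
        "buffer_index": (tt95 - mean_tt) / mean_tt,
    }
```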

Project L07

The L07 project evaluation tool is applicable to freeway facilities and is capable of analyzing one segment with uniform geometry and volume characteristics. When considering use of the L07 tool, it is critical to identify the primary bottleneck location along a congested freeway facility: the tool was found to misrepresent speed profiles in locations that are influenced by upstream or downstream bottlenecks. The Minnesota team recommends first using the L02 TTRMS to identify high-potential locations to be evaluated in the L07 tool, such as locations with high travel time variability due to crashes and incidents. This will maximize the likelihood of identifying cost-effective treatments through the L07 evaluation.

While defaults are provided for many inputs required by the tool, some are more critical than others. Specifically, model results can be sensitive to the occurrence and duration of crashes and incidents; therefore, customized inputs reflecting local conditions are recommended to accurately capture the effects of nonrecurring congestion. Conversely, the default regional weather inputs are regarded as entirely adequate, as evaluation results are comparatively less sensitive to changes in rain and snow frequencies.

Additional supporting data for the treatments in the tool are desired, as some stakeholders have expressed concern that the actual effects of some treatments are not fully understood. Additional treatment options are also desired, to test more options for operational and geometric improvements.

The 2013 version of the L07 tool allowed the user to modify financial variables in the graphical user interface (GUI); however, this ability was restricted in the 2014 update. The Minnesota team recommends restoration of this feature to provide more flexibility for performing benefit-cost analyses.

The tool does not provide any functionality for traffic growth over the project lifetime. The addition of a feature to capture future traffic growth and its impacts on travel time and financial outcomes is recommended.

Project L05

A variety of important feedback was gathered through the stakeholder outreach process. Key points included the following:

• Stakeholders strongly supported the inclusion of travel time reliability in project-level evaluations. They found that the results resonated with and reinforced their experience of conditions in the corridor.
• Existing data sources were found to be adequate for evaluating travel time reliability on the Twin Cities freeway system. Initial concerns about inadequate data were eased through successful use of loop detector, crash, weather, and other available data sources.

• There is concern over the level of effort required to conduct reliability evaluations. In particular, the lack of consistency in crash, incident, and road work data sources makes linking these congestion causes to unreliable travel times time-consuming. Refined data collection and storage techniques and streamlined analysis tools will be needed to bring reliability evaluation into the mainstream.
• There is a desire to include the contribution of nonrecurring congestion in benefit-cost analysis, but there are reservations about whether the state of the practice is ready for this integration. More demonstration and proof will be needed to convince decision makers that this is the next step.
• A disconnect remains between urban and rural applications of reliability evaluation. The urban environment benefits from widely deployed instrumentation and active traffic management that facilitate reliability evaluation, but these are not available in rural areas. Furthermore, nonrecurring congestion may be the only cause of delay in rural areas, underscoring the importance of capturing these impacts on those roadways.
• Different types of information and presentation techniques are needed to communicate reliability performance to different audiences. For example, regional planners are interested in basic reliability indices at the facility or system level, whereas traffic engineers or operations managers may benefit from detailed surface plots and CDF curves along shorter highway segments.
• More education is needed to define travel time reliability. The survey conducted at the workshop showed that a number of participants had in fact seen reliability used in previous project evaluations but did not realize that was what they had seen.

Following the pilot testing technical work and outreach, MnDOT is committed to advancing reliability evaluation in its business practices. This was most clearly demonstrated by the success of the reliability evaluation used in the I-94 traffic study conducted alongside the pilot testing work. Project stakeholders found that the reliability evaluation enhanced the project study process and are now seeking similar information on future projects.

In addition to project-level evaluation, MnDOT will also seek to implement reliability evaluation in a programming context, starting with the Congestion Management and Safety Plan (CMSP). The CMSP is a subset of highway mobility funds allocated in regional investment plans to deploy lower-cost/high-benefit solutions to address congestion and safety problems. The next CMSP prioritization process is expected to use reliability as a key performance measure.

Further, department leadership sees strong potential for travel time reliability to drive additional investment in highway operations. Understanding the causes and magnitude of nonrecurring congestion, such as weather, crashes, and incidents, will enable more effective use of snow plowing, incident response, and traffic management resources. Finally, success in these areas will be carried forward as reliability becomes more widely accepted and appreciated, and ultimately adopted in decision-making structures throughout the organization.

References

Cambridge Systematics, Inc., Texas A&M Transportation Institute, University of Washington, Dowling Associates, Street Smarts, H. Levinson, and H. Rakha. SHRP 2 Report S2-L03-RR-1: Analytical Procedures for Determining the Impacts of Reliability Mitigation Strategies. Transportation Research Board of the National Academies, Washington, D.C., 2013.

Cambridge Systematics, Inc. SHRP 2 Report S2-L05-RW-1: Incorporating Reliability Performance Measures into the Transportation Planning and Programming Processes. Transportation Research Board of the National Academies, Washington, D.C., 2014.

Institute for Transportation Research and Education; Iteris/Berkeley Transportation Systems, Inc.; Kittelson & Associates, Inc.; National Institute of Statistical Sciences; University of Utah; Rensselaer Polytechnic Institute; J. Schofer; and A. Khattak. SHRP 2 Report S2-L02-RR-1: Establishing Monitoring Programs for Travel Time Reliability. Transportation Research Board of the National Academies, Washington, D.C., 2014.

Potts, I. B., D. W. Harwood, C. A. Fees, J. M. Hutton, and C. Kinzel. SHRP 2 Report S2-L07-RR-2: Design Guide for Addressing Nonrecurrent Congestion. Transportation Research Board of the National Academies, Washington, D.C., 2014.

Potts, I. B., D. W. Harwood, J. M. Hutton, C. A. Fees, K. M. Bauer, and L. M. Lucas. SHRP 2 Report S2-L07-RR-1: Identification and Evaluation of the Cost-Effectiveness of Highway Design Features to Reduce Nonrecurrent Congestion. Transportation Research Board of the National Academies, Washington, D.C., 2014.

APPENDIX A

Study Facility Reliability Reports

