CHAPTER 3

Research Findings

As detailed in the previous chapters, this research involved testing three different methods of estimating an airport's annual operations and five different aircraft traffic counting technologies (which can be used to take samples that are then extrapolated into an annual operations estimate for the airport). The methods for estimating annual operations that were tested included the following:

• Multiplying based aircraft by an estimated number of operations per based aircraft (OPBA),
• Applying a ratio of FAA instrument flight plans filed to total operations (IFPTO), and
• Expanding a sample count into an annual estimate through extrapolation.

Aircraft traffic counters tested included the following:

• AAC (portable acoustic counter),
• SMAC (portable acoustic counter),
• S/TC (portable camera with infrared night vision), and
• VID system with ADS-B transponder receiver.

The results of the tests are described in the following sections: Methods for Estimating Annual Airport Operations and Aircraft Traffic Counters Evaluated.

Methods for Estimating Annual Airport Operations

Estimates of annual operations for non-towered airports using three methods are analyzed in this research: (1) multiplying based aircraft by an estimated number of OPBA, (2) applying a ratio of FAA IFPTO, and (3) extrapolating a sample count. Because non-towered airports have no tower counts on which to base operations data, a dataset containing information on small, towered airports was developed for use in the analysis of these estimating methods. (See Chapter 2, Estimating Methods, for a full description of the dataset.) Data reported by these small, towered airports were used to compare their reported annual operations to their annual operations as estimated using the three methods above.

Summary of Data Sources and Descriptions

Since there is no valid source for counts of operations data at non-towered airports, data on small, towered airports were used as a proxy for non-towered airports in the analysis of methods for estimating annual operations. Chapter 2 includes a description of the small, towered airport dataset (STAD) developed for this research project.

The sources for the data on the STAD airports used in this analysis were the FAA TAF and the FAA OPSNET databases from 2006 to 2010. To more accurately describe the operations at a non-towered airport, total general aviation operations (Total GA OPS) at small, towered airports were used in the analysis rather than total operations.

Table 3-1 identifies the name of each variable, its description, and the sources used in this analysis. This table may be referred to while reading the analysis that follows.

Averaging the Data for Years 2006-2010

For each of the 205 airports, data from each of the 5 years from 2006 to 2010 were collected and stored in the STAD. The research team analyzed the data to determine whether an average of the 5 years of data for each airport could be used instead of the data from each year. An average of the 5 years of airport data allows for statistically accurate analysis of the 205 airports in the dataset and simplifies the statistical analyses and outputs. Based on the results of statistical tests described in Appendix A, the research team determined that the average of the operations data for each airport was acceptable for use in the analysis. As a result, the Total GA OPBA ratios for the 5 years for each airport were averaged to obtain the Average GA OPBA (AvgOPBA in Table 3-1) for each of the small, towered airports. AvgOPBA was used in the following regression analyses.

OPBA Method to Estimate Annual Airport Operations

The first method analyzed for estimating operations is the OPBA method, in which the number of based aircraft at an airport is multiplied by an estimated number of operations per based aircraft. To use this method, the estimated number of OPBA is needed. FAA Order 5090.3C, Field Formulation of the National Plan of Integrated Airport Systems (NPIAS), gives the following general guidelines for OPBA values (a short sketch applying these values follows Table 3-1):

• 250 OPBA for rural general aviation airports with little itinerant traffic.
• 350 OPBA for busier general aviation airports with more itinerant traffic.
• 450 OPBA for busy reliever airports.
• 750 OPBA in unusual circumstances (e.g., a busy reliever with high itinerant operations).

The objective of this research task was to determine whether there is a consistent number of OPBA at small, towered airports (i.e., the STAD airports), whether it varies by climate or population, and whether having a flight school affects this number. Initial analysis revealed that an extremely large range of OPBAs exists for the STAD airports, both overall and by region, and that practical use of any averages would not produce confident results. (See Table 3-2.) With this in mind, the research team attempted to model total OPBA through regression analysis to determine if an equation could be produced that offered better results. To do this, the research team modeled total OPBA at non-towered airports from operations data at small, towered airports using information about population, NOAA climate region, and flight schools.

Table 3-1. Variables and descriptions of sources.

AvgOPBA: Average general aviation OPBA for each airport, 2006-2010. Source: OPBA calculated from OPSNET and TAF data.
Enp: Enplanements or revenue passenger boardings. Source: TAF.
OPS: Average general aviation operations for each airport, 2006-2010. Source: OPSNET data.
AvgPop: The average population for the years 2006-2010 for the city or town surrounding the airport. Source: United States Census Bureau, Population Estimates 2000-2009 (http://www.census.gov/popest/data/cities/totals/2009/SUB-EST2009-4.html) and 2010 Population Finder (http://www.census.gov/popfinder/index.php).
Pop Scaled: The average population for the years 2006-2010 for the city or town surrounding the airport, scaled by 10,000. Source: AvgPop/10,000.
NFS: The number of flight schools at the airport. Source: AOPA (Training and Safety), http://www.aopa.org/learntofly/school/index.cfm.
FS Y/N: The presence of a flight school (1 = Yes, 0 = No). Source: AOPA (Training and Safety), http://www.aopa.org/learntofly/school/index.cfm.
CTHrs: Yearly hours of control tower operations. Source: FAA Airport/Facility Directory (March 2013 data, as no historical data were available).
C: 1 for Central; 0 for other regions. Source: Definition from NOAA and data from OPSNET.
EN: 1 for East North Central; 0 for other regions. Source: Definition from NOAA and data from OPSNET.
NE: 1 for Northeast; 0 for other regions. Source: Definition from NOAA and data from OPSNET.
NW: 1 for Northwest; 0 for other regions. Source: Definition from NOAA and data from OPSNET.
S: 1 for South; 0 for other regions. Source: Definition from NOAA and data from OPSNET.
SE: 1 for Southeast; 0 for other regions. Source: Definition from NOAA and data from OPSNET.
SW: 1 for Southwest; 0 for other regions. Source: Definition from NOAA and data from OPSNET.
CM: 1 for Commercial airport; 0 for GA or RL. Source: National Plan of Integrated Airport Systems.
RL: 1 for Reliever airport; 0 for CM or GA. Source: National Plan of Integrated Airport Systems.

Note: West is not defined here, but it occurs when all other regions are set to 0. GA is not defined here, but it occurs when CM and RL are set to 0. West North Central is not included because no airports from this region met the criteria for inclusion in this dataset.

Prepared by: Purdue University.
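In its simplest form, the OPBA method is a single multiplication. The sketch below (in Python) applies the FAA Order 5090.3C guideline values listed above; the airport inputs and the helper function are hypothetical, for illustration only.

```python
# Hypothetical illustration of the basic OPBA method:
# annual operations = based aircraft x assumed OPBA.
# Guideline OPBA values are from FAA Order 5090.3C (see the list above).
GUIDELINE_OPBA = {
    "rural_ga": 250,        # little itinerant traffic
    "busier_ga": 350,       # more itinerant traffic
    "busy_reliever": 450,
    "unusual": 750,         # e.g., busy reliever with high itinerant operations
}

def estimate_annual_ops(based_aircraft: int, airport_class: str) -> int:
    """Estimate annual operations by multiplying based aircraft by OPBA."""
    return based_aircraft * GUIDELINE_OPBA[airport_class]

# Example: a busier GA airport with 80 based aircraft (hypothetical numbers).
print(estimate_annual_ops(80, "busier_ga"))  # 28,000 estimated annual operations
```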

Table 3-2 summarizes the OPBAs for the 205 STAD airports that were used in this study.

Analysis

Regression analysis was performed to determine whether there is a consistent number of OPBA at STAD airports. If there is a consistent OPBA, then that factor could be applied to non-towered airports to estimate annual operations. The regression analysis also considered whether the OPBA varied by climate or population and whether having a flight school affected this number. Regression analysis of the data was used to determine the effect these variables have on AvgOPBA in the STAD. The analysis includes:

A. Full model and reduced model using AvgOPBA.
B. Transformation of AvgOPBA and average based aircraft (AvgBA).
C. Full model and reduced model using transformed data.
D. Full model and reduced model using operations (OPS).

A. Full model and reduced model using AvgOPBA. First, the full model regression was created using AvgOPBA as the variable to be estimated by the regression equation. The variables used in the full model regression analysis are:

• AvgOPBA,
• AvgBA,
• Number of flight schools at the airport (NFS),
• Flight school yes/no (FS Y/N),
• Based aircraft (BA),
• Population (Pop Scaled),
• Yearly hours of control tower operations (CTHrs),
• Central (C), East North Central (EN), Northeast (NE), South (S), Southeast (SE), and Southwest (SW) climate regions,
• Commercial airport (CM), and
• Reliever airport (RL)

(see Table 3-1 for descriptions of these variables).

A reduced model was also developed. A reduced model is used to filter out uninformative variables and thereby simplify the model. While the regression equations appeared significant in statistical terms, further analysis revealed that the equations found to estimate OPBA did not explain enough of the airport data to be practically useful. In addition, the regression did not meet the necessary assumptions for statistical validity (e.g., normality, linearity, independence, etc.). Therefore, the full model and reduced model regression using AvgOPBA were rejected. (See Appendix A for details on the full statistical analysis.)

B. Transformation of AvgOPBA and AvgBA. Since the full model and reduced model regression described above did not meet the necessary assumptions for statistical validity, the data were "transformed" to see if they would better meet the required statistical assumptions. (Note: Transforming data changes the scale and may make relationships more visible than with non-transformed data.)

Table 3-2. Summary of small, towered airport data by region used in this study.

NOAA Climate Region | No. of Airports | AvgBA per Region | Avg Ops per Region | AvgPop | OPBA Mean | OPBA Median | 95% Confidence Interval for the Median | OPBA Range (Low) | OPBA Range (High)
Alaska | 1 | 965.8 | 152,018 | 283,382 | 157.40 | 157.40 | NA | NA | NA
Central | 33 | 141.01 | 49,187 | 162,441 | 429.54 | 360.13 | (298.02, 426.85) | 201.75 | 1,015.54
E. N. Central | 13 | 188.52 | 67,823 | 260,933 | 473.92 | 462.29 | (266.65, 550.52) | 177.42 | 798.85
Hawaii | 1 | 22.80 | 104,224 | 13,689 | 4,771.68 | 4,771.68 | NA | NA | NA
Northeast | 28 | 187.06 | 72,081 | 353,687 | 432.95 | 408.37 | (351.95, 504.20) | 225.91 | 828.52
Northwest | 8 | 202.90 | 80,577 | 224,704 | 382.95 | 779.38 | (264.80, 453.03) | 219.87 | 779.38
South | 41 | 154.19 | 65,312 | 352,947 | 597.89 | 338.00 | (302.52, 522.53) | 132.17 | 2,481.89
Southeast | 38 | 212.66 | 95,457 | 171,804 | 561.74 | 439.42 | (338.62, 572.66) | 190.89 | 2,491.54
Southwest | 15 | 394.01 | 16,802 | 391,318 | 487.23 | 396.66 | (336.31, 646.39) | 192.52 | 819.86
West | 27 | 381.98 | 124,391 | 388,546 | 370.13 | 326.30 | (282.28, 362.85) | 139.69 | 875.89
W. N. Central | 0 | NA | NA | NA | NA | NA | NA | NA | NA
Overall | 205 | 222.35 | 85,890 | 394,118 | 501.68 | 377.78 | (350.30, 412.86) | 132.17 | 4,471.68

Legend: Avg = average; BA = based aircraft; Ops = operations; OPBA = operations per based aircraft; NA = not applicable.
Note: There are no airports from the West North Central region that meet the selection criteria for airports to be included in the dataset.
Prepared by: Purdue University.
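For readers who want to reproduce this kind of regression screening, the sketch below shows a generic full-model workflow with NumPy on synthetic stand-in data. The dimensions mirror the STAD (205 airports, 14 predictors), but the data and code are illustrative assumptions, not the research team's actual analysis (which is documented in Appendix A).

```python
import numpy as np

# Synthetic stand-in data: rows = airports, columns = predictors from
# Table 3-1 (AvgBA, NFS, FS Y/N, Pop Scaled, CTHrs, region/category dummies).
rng = np.random.default_rng(0)
n, p = 205, 14                      # 205 STAD airports, 14 predictors (full model)
X = rng.random((n, p))
y = rng.random(n)                   # AvgOPBA would go here

# Ordinary least squares with an intercept column.
X1 = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)

# R-squared and adjusted R-squared; as the text notes, the adjusted value
# penalizes for the number of variables in the model. In practice one would
# also check residual plots for normality and linearity (see Appendix A).
resid = y - X1 @ beta
ss_res = float(resid @ resid)
ss_tot = float(((y - y.mean()) ** 2).sum())
r2 = 1 - ss_res / ss_tot
r2_adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(f"R-sq = {r2:.3f}, R-sq(adj) = {r2_adj:.3f}")
```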

Based on the statistical analyses detailed in Appendix A, the data for AvgOPBA and AvgBA were changed algebraically in a way that allowed the statistical assumptions to be met. Instead of AvgOPBA and AvgBA, the logarithms of these numbers were used. Analysis of the transformed data determined that it met the required statistical assumptions and, therefore, was valid to use in building a model, which is described in Part C.

C. Full model and reduced model using transformed data. Based on the findings in Part B, two models were developed:

1) Full model regression using logarithm data and all of the variables described earlier:

log10(AvgOPBA) = 3.95 - 0.681 log10(AvgBA) + 0.000215 (Pop Scaled) + 0.0246 NFS + 0.0206 (FS Y/N) + 0.000036 CTHrs - 0.153 C - 0.0921 EN - 0.0716 NE - 0.0421 NW - 0.0704 S + 0.0079 SE + 0.118 SW - 0.0652 CM - 0.0176 RL

2) Reduced model regression using logarithm data with certain variables removed:

log10(AvgOPBA) = 3.94 - 0.621 log10(AvgBA) + 0.000232 (Pop Scaled) + 0.0279 NFS - 0.0797 C + 0.0631 SE + 0.169 SW

The regressions for the full and reduced models were statistically significant at the 95% level (alpha = 0.05). The R-Sq(adj) equaled 51% and 50.4%, respectively. (Note: The adjusted R-squared is the proportion of the total variation of outcomes explained by the model, taking into consideration the number of variables in the model.) The analysis of the transformed data using a reduced model regression is valid based on the residual plots (refer to Appendix A for more detail), along with its ability to meet the other regression assumptions.

The reduced model has a slightly lower R-Sq(adj) than the full model (50.4% compared to 51%). However, the reduced model is preferable to the full model because it uses only six variables, while the full model uses 14 variables. Practically speaking, to use this equation to estimate the OPBA, the only data a person needs are the number of based aircraft, the population (divided by 10,000) of the city or town surrounding the airport, the number of flight schools at the airport, and the NOAA region of the airport. (An example of a calculation is provided in Appendix A, and a short worked sketch follows below.) However, this equation only accounts for approximately 50% of the behavior of annual OPBA and, therefore, may not provide useful estimates in a practical application. If only approximately 50% of the variation in AvgOPBA is explained by the variables in the equation (i.e., flight schools, population, climate, and airport category), then large variations from actual to estimated operations are likely to occur. Therefore, use of this model is not recommended.
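As a minimal worked sketch of the reduced transformed model (the equation above, with hypothetical airport inputs; note that the prediction must be raised to the power of 10 because the model estimates log10(AvgOPBA)):

```python
import math

def estimate_opba_reduced(avg_ba: float, pop_scaled: float, nfs: int,
                          central: int, southeast: int, southwest: int) -> float:
    """Reduced transformed model from Part C; returns estimated OPBA.

    Region flags are 0/1 dummy variables as defined in Table 3-1.
    """
    log10_opba = (3.94
                  - 0.621 * math.log10(avg_ba)
                  + 0.000232 * pop_scaled
                  + 0.0279 * nfs
                  - 0.0797 * central
                  + 0.0631 * southeast
                  + 0.169 * southwest)
    return 10 ** log10_opba  # undo the log10 transformation

# Hypothetical airport: 60 based aircraft, surrounding population 150,000
# (Pop Scaled = 15), 2 flight schools, located in the Southeast region.
opba = estimate_opba_reduced(60, 15.0, 2, central=0, southeast=1, southwest=0)
print(f"Estimated OPBA: {opba:.0f}; estimated annual ops: {opba * 60:.0f}")
```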
D. Full model and reduced model using OPS. Because using AvgOPBA did not prove to be a relatively accurate way to estimate operations using the variables described in Part A, the research team chose to explore a different approach. While the research problem was to determine whether there was a consistent number of OPBA that could be used to estimate an airport's annual OPS, the ultimate goal is to estimate the annual OPS, not the OPBA. Therefore, analysis of a regression model for estimating OPS rather than OPBA was performed. Previous research (GRA, Inc. 2001) has shown that statistical models of operations may be more descriptive than models of OPBA. In this analysis, full and reduced regression models using OPS were analyzed using the same variables as described in Part A.

Full model equation:

OPS = 8321 + 185 AvgBA + 5185 NFS + 1315 (FS Y/N) + 43.3 (Pop Scaled) + 3.39 CTHrs - 19462 C - 11778 EN - 9125 NE + 3418 NW - 9397 S + 5062 SE + 45472 SW - 2670 CM + 3353 RL

Reduced model equation:

OPS = 16535 + 199 AvgBA + 5174 NFS + 44.1 (Pop Scaled) + 14880 SE + 52389 SW

Both the full and reduced model regression equations were statistically significant at the 95% level (alpha = 0.05). The R-Sq(adj) was found to be 64.6% and 65.3%, respectively. However, these equations only account for approximately 65% of the behavior of annual operations and, therefore, may not provide useful estimates in a practical application. In addition, neither equation met the necessary assumptions for statistical validity. Therefore, the full model and reduced model regression using OPS were rejected. (See Appendix A for details on the full statistical analysis.)

Conclusion

Overall, the research team concludes that, based on the study objectives and data, there were no practical and consistent OPBAs found or modeled at STAD airports that can be used to estimate annual operations nationally or by climate region at non-towered airports, even when considering the number of flight schools based at the airport. Of all the models analyzed, only the full and reduced models using transformed data (i.e., log10(AvgOPBA) and log10(AvgBA)) met the necessary assumptions for statistical validity. However, the two regression equations developed for them only accounted for about 50% of the behavior of annual operations; that is, they did not explain a high proportion of the variability in the airport operations data tested and, therefore, are unable to predict airport operations with high certainty. (See Appendix A for details on the full statistical analysis.)

IFPTO Method to Estimate Annual Airport Operations

The second method analyzed for estimating operations is calculating them as a ratio of instrument flight plans filed to total operations. The objective of this research task was to determine whether a consistent ratio of IFR flight plans filed with the FAA to total operations (IFPTO) occurs at small, towered airports, and whether it varies by climate. Chapter 2 includes a description of the STAD developed for this research project.

Analysis

The total operations over the years 2006 to 2010 were averaged to obtain Avg GA OPS for each of the STAD airports. The general aviation IFR flight plans over the years 2006 to 2010 were also averaged to obtain Average Total General Aviation IFR (Avg GA IFR). The IFPTO was calculated by dividing Avg GA IFR flight plans by Avg GA OPS.

In this task, the airports in the STAD were reduced from 205 to 202 for the following reasons. Alaska and Hawaii were removed because there was only one airport in each region. Additionally, the West North Central climate region was removed because it had no airport in the dataset. One airport in the South climate region (San Marcos Municipal-HYI) was removed because it had no IFR flight plans; therefore, an IFR to total operations ratio could not be computed for it. Table 3-3 contains the analysis of the 202 STAD airports in the final dataset.

Figure 3-1 is a summary of the descriptive statistics for the dataset. The average IFPTO of all the airports analyzed is approximately 0.13. The lowest IFPTO of all the airports was 0.003, while the highest was 0.55. This range is about four times the average IFPTO in the dataset. It is suspected that this range would not be considered consistent or useful to airport managers because of its wide span.

Figure 3-1. Graphical summary for the IFR to total GA OPS for 202 towered airports (Anderson-Darling normality test: A-squared = 3.61, p-value < 0.005). Prepared by: Purdue University.

For instance, if a non-towered airport determines that its number of IFR flight plans for a year is 1,000, then an estimate of total operations using the average of 0.13 IFPTO would be calculated as 7,692 total operations. Using the low end of the IFPTO range (0.003), total operations would be calculated as 333,333. Using the high end of the IFPTO range (0.55), total operations would be calculated as 1,818.

By region, the average IFPTO spans from a low of 0.05 to a high of 0.18, which is a very wide range. Again, the IFPTO does not appear consistent or useful because the IFPTO within each region has a very wide range. Because the range of IFPTO is very large for each region, similar ranges of total operations estimates, as detailed above, are found for each region.

Conclusion

Overall, the research team concludes that, based on the study objectives and data, there are no practical and consistent IFPTOs found at the STAD airports that could then be used to estimate annual operations nationally or by climate region.
Table 3-3. Summary of the ratio of average GA IFR flight plans to total GA operations.

Region | Number of Airports | IFR/Total GA OPS Mean | IFPTO Range (Low) | IFPTO Range (High)
Central | 33 | 0.1842 | 0.0134 | 0.4442
East North Central | 13 | 0.1232 | 0.0572 | 0.3469
Northeast | 28 | 0.1195 | 0.0400 | 0.3234
Northwest | 8 | 0.0735 | 0.0174 | 0.1524
South | 40 | 0.1306 | 0.0057 | 0.5495
Southeast | 38 | 0.1656 | 0.0034 | 0.3759
Southwest | 15 | 0.0818 | 0.0102 | 0.2007
West | 27 | 0.0498 | 0.0057 | 0.1785
Overall | 202 | 0.1298 | 0.0034 | 0.5495

Note: Alaska, Hawaii, and West North Central regions are removed due to having 0 or 1 airport in the region. One airport from the South is removed due to no IFR operations.
Prepared by: Purdue University.
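This sensitivity is easy to see numerically. The sketch below reproduces the worked figures from the preceding paragraphs (1,000 hypothetical IFR flight plans divided by the mean and by the extremes of the observed IFPTO range):

```python
# Reproduces the worked example in the text: estimated total operations =
# annual IFR flight plans / IFPTO. The 1,000 flight plans are hypothetical.
ifr_flight_plans = 1_000

for label, ifpto in [("dataset mean", 0.13),
                     ("range low", 0.003),
                     ("range high", 0.55)]:
    estimate = ifr_flight_plans / ifpto
    print(f"IFPTO {ifpto} ({label}): {estimate:,.0f} estimated operations")
# Output: 7,692 (mean), 333,333 (range low), 1,818 (range high)
```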

Extrapolation Methods to Estimate Annual Airport Operations

The third method analyzed for estimating operations is expanding a sample count into an annual estimate. The counting of operations is time consuming. Sampling methods use statistical techniques to reduce the amount of time needed for counting samples while still providing accurate estimates. Estimating annual operations using sampling methods is typically done either by statistical extrapolation of airport-specific sample counts or by extrapolation using monthly/seasonal adjustment factors developed from towered airports. The process and results of testing these two methods using data from small, towered airports are described below.

Statistical Extrapolation

When sample counts of aircraft operations are taken at an airport, the number and times of the samples will affect the results. Ideally, statistical sampling provides a process in which all weekly operation counts have an equal chance of being sampled, because sampling relies on random choice. As a result, the sampling process ensures that the operations sampled are truly representative of the actual operations that occur throughout the year. This prevents certain factors from affecting the sample and skewing the resulting estimate (e.g., sampling only in good weather or sampling during a fly-in). The process of random sampling ensures that operations are sampled independently of the sampler's preferences and biases.

Since operations are estimated from samples and the end result may vary depending on the size of the sample and when the sample was taken (because airport activity will often vary with day of week, weather, and season), this study attempted to analyze the accuracy of different sample sizes and times. The objective of this exercise was to examine the accuracy of extrapolating different sample sizes and times using the statistical methods in FAA-APO-85-7, Statistical Sampling of Aircraft Operations at Non-Towered Airports. Specifically, estimates of annual operations at small, towered airports (i.e., the STAD airports) were calculated from different sample sizes and times using the methods in FAA-APO-85-7 and compared to the actual tower operations records.
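As an illustration of the random selection just described (a sketch only: FAA-APO-85-7 prescribes its own sampling worksheets, and the week numbering here simply assumes the report's four 13-week seasons):

```python
import random

# Four seasons of 13 weeks each, as assumed in this study (weeks 1-52).
SEASONS = {
    "winter": range(1, 14),
    "spring": range(14, 27),
    "summer": range(27, 40),
    "fall":   range(40, 53),
}

def pick_sample_weeks(weeks_per_season: int, seed: int = 42) -> dict:
    """Randomly choose which weeks to count in each season."""
    rng = random.Random(seed)
    return {season: sorted(rng.sample(list(weeks), weeks_per_season))
            for season, weeks in SEASONS.items()}

print(pick_sample_weeks(2))  # e.g., two randomly chosen count weeks per season
```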

This exercise included the following four elements:

1. Following FAA-APO-85-7, take random samples from 2010 FAA historical data for two randomly selected airports from the STAD in each climatic region for the following time periods:
   A. One week in each season (number of seasons depends on climate)
   B. Two weeks in each season (number of seasons depends on climate)
   C. One month in spring, summer, or fall
   D. One month in winter
   (Note: Four seasons of 13 weeks each were assumed for each year.)
2. Using the forms and equations provided in Report No. FAA-APO-85-7, estimate annual operations for each airport for each of the four sample periods.
3. Compare estimated operations to actual operations for the year and determine variances.
4. Compare and present the accuracy levels of the different sampling sizes and times of year.

Analysis. From the STAD, two towered airports from each of eight NOAA climatic regions were randomly selected using a random numbers table. This selection resulted in 16 towered airports being included in this analysis. These 16 airports are listed in Table 3-4. The West North Central region is excluded from this analysis because no airports from that region met the selection criteria for inclusion in the dataset. Alaska and Hawaii were also excluded from this analysis because there is only one airport in each of those regions in the dataset.

Random samples of daily historical 2010 tower operations from the FAA were collected for the four different timeframes presented. Using these random samples, estimates of annual operations for each of the 16 airports were computed using the statistical methods presented in FAA-APO-85-7. The estimated annual operations were then compared to the actual annual operations to gauge the reliability of the four sample sizes and timeframes. (The sampling process for each of the four timeframes is described in detail in Appendix A.) In practice, actual error rates will be unknown for a non-towered airport, but a percent sampling error can be calculated that measures the precision of the annual operations estimate (e.g., 27,430 operations ±17.5%).

Table 3-4 shows the annual operations estimated from the four sample sizes of operations data for each of the 16 small, towered airports selected. Table 3-5 shows the percent difference between each estimate of annual operations and the actual annual operations. At the bottom of Table 3-5, the highest and lowest percent differences are identified, along with the range between them.

Table 3-4. Estimated total annual operations using statistical extrapolation for four sample sizes and times of actual weekly operations. (See Appendix A for detailed information on how the sampling sizes and timeframes were structured.)

Airport (Climatic Region) | 1 Week per Season | 2 Weeks per Season | 1 Month (Winter), Seasonal Distribution | 1 Month (Spring, Summer, or Fall), Seasonal Distribution | 1 Month (Winter), 25% | 1 Month (Spring, Summer, or Fall), 25% | Month Sampled | Actual
CPS (Central) | 115,427 | 127,177 | 82,237 | 125,533 | 102,646 | 115,813 | Fall | 111,620
DPA (Central) | 104,377 | 88,472 | 69,128 | 90,041 | 86,285 | 85,166 | Spring | 89,989
ANE (ENC) | 68,978 | 82,833 | 59,084 | 95,807 | 73,747 | 90,620 | Spring | 79,603
MIC (ENC) | 32,695 | 44,305 | 35,895 | 62,250 | 44,804 | 54,990 | Summer | 44,229
ASH (N. East) | 72,644 | 85,816 | 44,466 | 69,107 | 55,502 | 63,756 | Fall | 74,111
RME (N. East) | 38,922 | 48,734 | 41,076 | 53,235 | 51,270 | 47,027 | Summer | 47,790
PDT (Northwest) | 12,194 | 12,013 | 11,035 | 10,897 | 13,774 | 10,054 | Fall | 12,994
TIW (Northwest) | 43,914 | 51,514 | 39,486 | 57,986 | 49,286 | 54,847 | Spring | 53,960
FTW (South) | 86,268 | 80,397 | 66,528 | 77,128 | 83,039 | 71,156 | Fall | 78,499
GLS (South) | 27,599 | 33,687 | 22,787 | 40,154 | 28,443 | 35,472 | Summer | 31,652
HEF (Southeast) | 81,744 | 100,170 | 65,652 | 109,057 | 81,946 | 90,339 | Summer | 92,394
OPF (Southeast) | 100,763 | 99,433 | 82,794 | 103,351 | 103,342 | 97,756 | Spring | 98,708
BJC (Southwest) | 113,048 | 114,955 | 88,248 | 88,248 | 110,150 | 110,150 | Fall | 120,363
HOB (Southwest) | 15,639 | 14,701 | 13,574 | 19,940 | 16,943 | 18,860 | Spring | 16,637
CMA (West) | 150,319 | 149,633 | 178,355 | 158,211 | 168,688 | 168,927 | Spring | 146,863
TOA (West) | 118,716 | 105,617 | 98,010 | 118,442 | 122,334 | 104,630 | Summer | 106,438

Prepared by: Purdue University.

Table 3-5. Percent differences between statistical extrapolation of operations estimates and actual annual operations using four sample sizes and times.

Airport (Climatic Region) | 1 Week per Season | 2 Weeks per Season | 1 Month (Winter), Seasonal Distribution | 1 Month (Spring, Summer, or Fall), Seasonal Distribution | 1 Month (Winter), 25% | 1 Month (Spring, Summer, or Fall), 25% | Month Sampled | Actual
CPS (Central) | 3.4% | 13.9% | -26.3% | 12.5% | -8.0% | 3.8% | Fall | 111,620
DPA (Central) | 16.0% | -1.7% | -23.2% | 0.1% | -4.1% | -5.4% | Spring | 89,989
ANE (ENC) | -13.3% | 4.1% | -25.8% | 20.4% | -7.4% | 13.8% | Spring | 79,603
MIC (ENC) | -26.1% | 0.2% | -18.8% | 40.7% | 1.3% | 24.3% | Summer | 44,229
ASH (N. East) | -2.0% | 15.8% | -40.0% | -6.8% | -25.1% | -14.0% | Fall | 74,111
RME (N. East) | -18.6% | 2.0% | -14.0% | 11.4% | 7.3% | -1.6% | Summer | 47,790
PDT (Northwest) | -6.2% | -7.5% | -15.1% | -16.1% | 6.0% | -22.6% | Fall | 12,994
TIW (Northwest) | -18.6% | -4.5% | -26.8% | 7.5% | -8.7% | 1.6% | Spring | 53,960
FTW (South) | 9.9% | 2.4% | -15.2% | -1.7% | 5.8% | -9.4% | Fall | 78,499
GLS (South) | -12.8% | 6.4% | -28.0% | 26.9% | -10.1% | 12.1% | Summer | 31,652
HEF (Southeast) | -11.5% | 8.4% | -28.9% | 18.0% | -11.3% | -2.2% | Summer | 92,394
OPF (Southeast) | 2.1% | 0.7% | -16.1% | 4.7% | 4.7% | -1.0% | Spring | 98,708
BJC (Southwest) | -6.1% | -4.5% | -26.7% | -26.7% | -8.5% | -8.5% | Fall | 120,363
HOB (Southwest) | -6.0% | -11.6% | -18.4% | 19.9% | 1.8% | 13.4% | Spring | 16,637
CMA (West) | 2.4% | 1.9% | 21.4% | 7.7% | 14.9% | 15.0% | Spring | 146,863
TOA (West) | 11.5% | -0.8% | -7.9% | 11.3% | 14.9% | -1.7% | Summer | 106,438
High | 16.0% | 15.8% | 21.4% | 40.7% | 14.9% | 24.3% | |
Low | -26.1% | -11.6% | -40.0% | -26.7% | -25.1% | -22.6% | |
Range | 42.1% | 27.4% | 61.4% | 67.4% | 40.0% | 47.0% | |

Prepared by: Purdue University.
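A simplified sketch of the two-weeks-per-season extrapolation arithmetic appears below. The actual FAA-APO-85-7 procedure uses its published forms and also yields the percent sampling error mentioned above; the function and weekly counts here are illustrative assumptions showing only the core idea (mean weekly count per season, scaled by the 13 weeks assumed per season).

```python
# Core idea of seasonal extrapolation: average the sampled weekly counts in
# each season, scale by the 13 weeks assumed per season, and sum the seasons.
WEEKS_PER_SEASON = 13

def extrapolate_annual(sample_weeks: dict) -> float:
    """sample_weeks maps season name -> list of weekly operation counts."""
    return sum(WEEKS_PER_SEASON * (sum(counts) / len(counts))
               for counts in sample_weeks.values())

# Hypothetical two-week-per-season counts for one airport.
samples = {
    "winter": [610, 585],
    "spring": [820, 865],
    "summer": [910, 880],
    "fall":   [790, 750],
}
estimate = extrapolate_annual(samples)   # 40,365 estimated annual operations
actual = 41_000                          # hypothetical tower count for comparison
print(f"Estimate: {estimate:,.0f}; % difference: {(estimate - actual) / actual:+.1%}")
```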

Conclusion. Based on this analysis of the objectives and the dataset, the best statistical extrapolation method for these 16 airports is the two weeks per season sample, because it provides the overall lowest variations from estimated to actual operations. This is consistent with the previous research results discussed in ACRP Synthesis 4: Counting Aircraft Operations at Non-Towered Airports.

Extrapolation Using Monthly/Seasonal Adjustment Factors from Towered Airports

Another method to extrapolate sampled operations to an annual estimate is the use of monthly or seasonal adjustment factors. The objective of this research exercise was to examine the accuracy of extrapolating annual operations using different sample sizes and times. This research exercise consisted of three elements:

• Calculate the percentage of operations that occur in each month for small, towered airports, and use these percentages to create monthly factors and seasonal factors for each region;
• Use those monthly and seasonal factors to extrapolate annual operations for two randomly selected airports in each NOAA climatic region; and
• Present and compare the accuracy levels of this extrapolation process using different sampling sizes and times of year.

Analysis. The analysis performed in this research task also used the STAD airports. The analysis included three steps:

1. Determine regional monthly and seasonal factors using all airports in the STAD, by region.
2. Extrapolate annual operations using the monthly and seasonal factors from the STAD.
3. Compare actual operations to the estimates.

1. Determine regional monthly and seasonal factors using all airports in the STAD. As stated before, the first step in the analysis consisted of calculating monthly and seasonal factors for aircraft operations by region. To do this, the total operations for each month of 2010 were recorded for each airport in the STAD, and then monthly and seasonal factors for each region were calculated. Table 3-6 includes the monthly and seasonal factors for each region calculated from all airports in the STAD; a short sketch of applying these factors follows the table. (See Appendix A for detailed information on this analysis.)

This analysis assumes that all airports in a region have the same monthly and seasonal factors, that there are four seasons, and that each season has 13 weeks. To maintain seasonal representation and to fit all 12 months into four seasons for that calendar year, the seasons were identified as Winter (January-March), Spring (April-June), Summer (July-September), and Fall (October-December). In this way, the 2010 annual operations could be compared to the estimates of annual operations developed using seasonal factors. It is important to note, however, that in practice, climatic conditions may vary widely between regions and even within each region.

Table 3-6. Monthly and seasonal factors per region using all STAD airports.

Month | Northeast | Northwest | South | Southeast | Southwest | West | Central | East North Central
January | 0.07 | 0.06 | 0.07 | 0.08 | 0.08 | 0.07 | 0.05 | 0.05
February | 0.05 | 0.07 | 0.07 | 0.08 | 0.08 | 0.07 | 0.06 | 0.06
March | 0.08 | 0.09 | 0.09 | 0.09 | 0.09 | 0.09 | 0.09 | 0.09
April | 0.09 | 0.09 | 0.09 | 0.10 | 0.08 | 0.08 | 0.09 | 0.08
May | 0.10 | 0.10 | 0.09 | 0.09 | 0.08 | 0.09 | 0.09 | 0.09
June | 0.10 | 0.10 | 0.09 | 0.08 | 0.09 | 0.09 | 0.09 | 0.10
July | 0.10 | 0.10 | 0.09 | 0.08 | 0.08 | 0.09 | 0.10 | 0.12
August | 0.10 | 0.10 | 0.09 | 0.08 | 0.08 | 0.09 | 0.10 | 0.10
September | 0.08 | 0.09 | 0.09 | 0.08 | 0.09 | 0.09 | 0.09 | 0.09
October | 0.08 | 0.08 | 0.09 | 0.09 | 0.09 | 0.08 | 0.10 | 0.09
November | 0.08 | 0.05 | 0.08 | 0.08 | 0.08 | 0.08 | 0.08 | 0.07
December | 0.06 | 0.05 | 0.07 | 0.07 | 0.08 | 0.07 | 0.06 | 0.05

Season | Northeast | Northwest | South | Southeast | Southwest | West | Central | East North Central
Winter | 0.20 | 0.22 | 0.23 | 0.24 | 0.25 | 0.23 | 0.20 | 0.20
Spring | 0.28 | 0.29 | 0.27 | 0.27 | 0.25 | 0.26 | 0.27 | 0.27
Summer | 0.28 | 0.30 | 0.26 | 0.23 | 0.25 | 0.27 | 0.29 | 0.31
Fall | 0.23 | 0.19 | 0.24 | 0.25 | 0.24 | 0.23 | 0.24 | 0.22

Prepared by: Purdue University.
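As a minimal sketch of applying the Table 3-6 factors, assuming a hypothetical Northeast airport sampled for the month of July: the annual estimate is simply the sample count divided by the corresponding factor.

```python
# Monthly factors for the Northeast region, taken from Table 3-6.
NORTHEAST_MONTHLY = {
    "Jan": 0.07, "Feb": 0.05, "Mar": 0.08, "Apr": 0.09, "May": 0.10,
    "Jun": 0.10, "Jul": 0.10, "Aug": 0.10, "Sep": 0.08, "Oct": 0.08,
    "Nov": 0.08, "Dec": 0.06,
}

def annual_from_month(count: int, month: str) -> float:
    """Extrapolate a one-month sample count to an annual estimate."""
    return count / NORTHEAST_MONTHLY[month]

# Hypothetical sample: 4,800 operations counted in July (factor 0.10),
# so the annual estimate is 4,800 / 0.10 = 48,000 operations.
print(f"{annual_from_month(4_800, 'Jul'):,.0f}")
```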

2. Extrapolate annual operations using the monthly and seasonal factors from the STAD airports. The next step in this research was to extrapolate annual operations using the monthly and seasonal factors developed in Table 3-6. Two STAD airports were randomly selected from each of the eight regions (16 total), and samples from the following time periods were extrapolated into annual counts (using Table 3-6):

A. One week in each season
B. Two weeks in each season
C. One month (either spring, summer, or fall)
D. One month in winter

Each airport's actual operations and extrapolated operations were then compared to determine the accuracy of the time periods and monthly factors, as outlined in the section below. (See Appendix A for detailed information on the sampling scenarios and airports.) Table 3-7 provides the results of the extrapolation.

3. Compare actual operations to the estimates. The final task included a comparison of the actual operations of the 16 test airports to the estimated operations. As shown in Table 3-7, the percent difference between each test airport's estimated annual operations and the actual annual operations was calculated. A summary of the percent differences between actual operations and operations estimated with the monthly and seasonal factors is shown in Table 3-8, which includes the average, the average of the absolute values, the highest, the lowest, and the range for each of the four sampling scenarios.

As may be seen in Table 3-8, estimates made using the sampling scenario of two weeks per season provided, on average, the estimates closest to actual operations for the test airports. The sampling scenarios of two weeks per season and one month (spring, summer, or fall) were also the closest to actual operations in terms of the range of the percent differences.

Conclusions. When extrapolating a sample count using monthly or seasonal factors, the sampling scenario of two weeks in each season is preferred by the research team. The statistical analyses did not find a significant difference between the sampling scenarios (e.g., one week in each season, two weeks in each season, etc.), except between one month in winter and one month in spring, summer, or fall; nevertheless, there is a difference in the average percent difference and in the range of percent differences that may be observed in Table 3-8. Additionally, Table 3-6 does show a difference in the seasonal factors calculated for the seasons, which would result in a slight difference in the outcome if the seasons comprised different months. However, the statistical comparison is between the computed and the actual operations, and that range is so great that changing the months would not improve the outcome. The difference of the averages cannot be seen statistically because the variance within the dataset for these airports is so large. Of the four sampling scenarios, the two weeks in each season scenario has a combination of reported statistics that indicates preference over the other methods in this analysis. More airports would need to be tested in a future research project to determine if this preference is statistically significant for a larger variety of small, towered airports. (See Appendix A for more details on the statistical analysis performed.)

Overall Conclusions for Methods of Estimating Annual Airport Operations

Overall, the research team concludes that, based on the study objectives and data, there were no practical and consistent OPBAs found or modeled at small, towered airports nationally or by climate region, even when considering the number of flight schools based at the airport. Therefore, the research team cannot recommend an OPBA for estimating annual operations at non-towered airports. Additionally, based on the data and study objectives, the research team concluded that there were no practical and consistent IFPTOs found at small, towered airports nationally or by climate region. Therefore, the research team cannot recommend an IFPTO for estimating annual operations at non-towered airports.

Accordingly, to estimate an airport's operations, the team recommends taking a sample of actual operations and extrapolating annual operations from the sample. (See the following section for technology that can be used for sampling/counting aircraft operations.) When taking a sample count, the research team recommends sampling for two weeks in each season. This sample can be extrapolated either by a statistical extrapolation process or by use of seasonal/monthly adjustment factors developed from small, towered airports. The latter process assumes that the monthly and seasonal variations in traffic at small, towered airports are representative of non-towered airports. For this reason, the research team recommends using the statistical extrapolation process and performing sample counts for two weeks each season. This removes the need for additional data and the influence of outside forces on the extrapolation process.

The statistical extrapolation method may appear more mathematically difficult than the monthly/seasonal extrapolation method. However, step-by-step instructions, examples, and forms are available in FAA-APO-85-7, Statistical Sampling of Aircraft Operations at Non-Towered Airports. Appendix B includes an example of how this is done. The following section describes the different technologies that can be used to sample operations.

Table 3-7. Estimates of annual operations using monthly/seasonal extrapolation and four sampling scenarios.

Airport | Region | 1 Week each Season | 2 Weeks each Season | 1 Month (Spring, Summer, or Fall) | Season Selected | 1 Month (Winter) | Winter Month(s) Selected | Actual Operations (OPSNET) | % Diff, 1 Week | % Diff, 2 Weeks | % Diff, 1 Month (Spr/Sum/Fall) | % Diff, 1 Month (Winter)
CPS | Central | 113,764 | 126,605 | 97,938 | Fall | 126,909 | Feb. | 111,620 | 2% | 13% | -12% | 14%
DPA | Central | 101,692 | 82,865 | 72,858 | Spring | 72,360 | Mar. | 89,989 | 13% | -8% | -19% | -20%
ANE | East North Central | 80,256 | 78,920 | 79,473 | Spring | 78,928 | Feb. and Mar. | 79,603 | 1% | -1% | 0% | -1%
MIC | East North Central | 30,029 | 40,558 | 35,739 | Summer | 45,481 | Feb. and Mar. | 44,229 | -32% | -8% | -19% | 3%
ASH | Northeast | 68,659 | 82,627 | 57,111 | Fall | 61,563 | Jan. | 74,111 | -7% | 11% | -23% | -17%
RME | Northeast | 49,531 | 47,908 | 35,943 | Summer | 73,128 | Feb. | 47,790 | 4% | 0% | -25% | 53%
PDT | Northwest | 12,106 | 12,440 | 14,016 | Fall | 13,034 | Feb. and Mar. | 12,994 | -7% | -4% | 8% | 0%
TIW | Northwest | 48,266 | 48,837 | 42,199 | Spring | 54,603 | Jan. and Feb. | 53,960 | -11% | -9% | -22% | 1%
FTW | South | 83,370 | 81,069 | 72,014 | Fall | 91,839 | Feb. | 78,499 | 6% | 3% | -8% | 17%
GLS | South | 28,646 | 33,290 | 30,301 | Summer | 27,556 | Feb. and Mar. | 31,652 | -9% | 5% | -4% | -13%
HEF | Southeast | 81,030 | 100,971 | 92,411 | Summer | 80,306 | Feb. and Mar. | 92,394 | -12% | 9% | 0% | -13%
OPF | Southeast | 94,524 | 96,819 | 82,483 | Spring | 101,658 | Jan. and Feb. | 98,708 | -4% | -2% | -16% | 3%
BJC | Southwest | 115,364 | 113,461 | 114,742 | Fall | 106,536 | Jan. and Feb. | 120,363 | -4% | -6% | -5% | -11%
HOB | Southwest | 14,941 | 14,233 | 16,914 | Spring | 14,974 | Feb. and Mar. | 16,637 | -10% | -14% | 2% | -10%
CMA | West | 151,100 | 148,393 | 165,637 | Spring | 174,536 | Feb. and Mar. | 146,863 | 3% | 1% | 13% | 19%
TOA | West | 118,025 | 79,103 | 85,326 | Summer | 115,623 | Mar. | 106,438 | 11% | -26% | -20% | 9%

Note: The % differences compare the estimated annual operations to the actual annual operations; positive values indicate that the estimated operations are larger than the actual operations, and negative values indicate that they are smaller.
Prepared by: Purdue University.
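The summary statistics reported in Table 3-8 (below) are straightforward to reproduce. This sketch computes them for the "1 week each season" scenario from the rounded Table 3-7 percent differences; because the inputs are rounded and the sign of the average depends on how the percent difference is defined, the results will not match Table 3-8 exactly.

```python
# Percent differences for the "1 week each season" scenario, hard-coded
# from the Table 3-7 column above (rounded, whole-percent values).
diffs = [2, 13, 1, -32, -7, 4, -7, -11, 6, -9, -12, -4, -4, -10, 3, 11]

avg = sum(diffs) / len(diffs)                       # average of real values
avg_abs = sum(abs(d) for d in diffs) / len(diffs)   # average of absolute values
high, low = max(diffs), min(diffs)
print(f"avg {avg:.1f}%, avg |.| {avg_abs:.1f}%, "
      f"high {high}%, low {low}%, range {high - low}%")
# -> avg -3.5%, avg |.| 8.5%, high 13%, low -32%, range 45%
```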

Table 3-8. Summary of the percent difference between estimates using monthly/seasonal factors and OPSNET annual operations.

% Difference from OPSNET Annual Operations | 1 Week each Season | 2 Weeks each Season | 1 Month (Spring, Summer, or Fall) | 1 Month (Winter)
Average of real values | 4% | 2% | 9% | 2%
Average of absolute values | 9% | 8% | 12% | 13%
Highest | 13% | 13% | 13% | 53%
Lowest | -32% | -26% | -25% | -20%
Range | 45% | 39% | 38% | 73%

Aircraft Traffic Counters Evaluated

As detailed under Task 3 in Chapter 2, four different aircraft counting technologies were evaluated in a multiple case study using four airports. The technology included the following:

• AAC (portable acoustic counter),
• SMAC (portable acoustic counter),
• S/TC (portable camera with infrared night vision), and
• VID with ADS-B transponder receiver (stationary).

Please refer to Chapter 2, Research Approach, for detailed information on the technology, the equipment evaluated, and the evaluation process. While the results of the analysis are detailed in the following pages, Table 3-9 below provides an overview of the findings.

It is important to note that the goal of this research was not to develop a new method to count aircraft operations. Rather, it was to evaluate existing methods and technology for obtaining this information. These existing technologies and methods were identified in Tasks 1 and 2 in Chapter 2. The equipment tested represents typical technology used in the field at the time the evaluation program was developed. (New technological advances continue to result in new ways to count aircraft, and this report briefly discusses them and their potential in a section towards the end.)

It is also important to note that all research has a certain level of uncertainty that limits the conclusions that can be drawn from it. This research is no different. While one may be able to effectively eliminate many of the factors that can affect the accuracy of a piece of equipment in a lab setting, this research did not attempt to do that. One of the primary goals of this project was to evaluate the equipment and methods as they are typically used in practice, and to use the equipment in the field tests in the same types of situations in which it typically would be used, without eliminating natural factors that may affect the results. Natural factors include such things as wind direction, preferred runway, aircraft type and user experience, aircraft engine type, airport configuration, environmental influences, etc. Since these natural factors cannot be controlled in practice, no attempt was made to control or quantify them in this research. For example, on any given day, the wind may shift from favoring the use of one runway to favoring the use of another. One would not continually relocate counting equipment in practice based on wind direction, so this was not done during evaluation. Because natural factors are so numerous and vary from airport to airport, they are virtually unquantifiable; therefore, the results shown here are only applicable to their respective test airports and should be considered case studies. The results in the field at other airports would likely differ depending on their unique characteristics. However, the information obtained from this research provides great value in understanding the limitations of the equipment and applying that understanding to its practical use in the field.

Automated Acoustical Counter

Principle(s) of Operation and Intended Use

The AAC tested was a portable acoustic counter that operates by analyzing sounds for specific characteristics.

Table 3-9. Counting equipment evaluation matrix.

Principle(s) of Operation
• Automated Acoustical: Embedded 32-bit, 72-megahertz ARM 7 microprocessor and system software.
• Sound-Level Meter Acoustical: Class 2 sound-level meter and analyzing software.
• Security/Trail Camera: Passive infrared motion detection, nighttime infrared illuminator, and digital camera.
• VID Service Provider: Electronic aircraft tracking using advanced video tracking with proprietary software, aircraft sensor systems, digital camera equipment, and Aircraft Situation Display to Industry data.
• ADS-B Transponder Receiver Service Provider: Receiver collects information periodically broadcast from ADS-B-equipped aircraft on their position obtained from satellite navigation.

Intended Use
• Automated Acoustical: Aircraft counting.
• Sound-Level Meter Acoustical: Aircraft counting.
• Security/Trail Camera: Security, wildlife monitoring.
• VID Service Provider: Automated landing fee collection, airport security, operations monitoring.
• ADS-B Transponder Receiver Service Provider: Air traffic and airport surface surveillance.

Computer Requirements
• Automated Acoustical: Typical Microsoft Windows-based computer with a USB port and Microsoft Excel® will allow the user to view the data.
• Sound-Level Meter Acoustical: Typical Microsoft Windows-based computer with an SD card slot and Microsoft Excel® will allow the user to view the data; ASNL software provided.
• Security/Trail Camera: Typical Microsoft Windows-based computer with an SD card slot.
• VID Service Provider: Typical Microsoft Windows-based computer with Internet access to view the service provider website.
• ADS-B Transponder Receiver Service Provider: Typical Microsoft Windows-based computer with Internet access to view the service provider website.

Event Recorded
• Automated Acoustical: Takeoff.
• Sound-Level Meter Acoustical: Takeoff.
• Security/Trail Camera: Taxi to or from runway.
• VID Service Provider: Taxi to or from runway.
• ADS-B Transponder Receiver Service Provider: Takeoff, landing, overflight.

Typical Data Provided
• Automated Acoustical: Date, time.
• Sound-Level Meter Acoustical: Date, time.
• Security/Trail Camera: Date, time, temperature, moon phase, image.
• VID Service Provider: Date, time, image, aircraft N-number, aircraft make, aircraft model, weight, design group, wingspan.
• ADS-B Transponder Receiver Service Provider: Date, time, aircraft N-number.

Ease of Portability
• Automated Acoustical: Easy; small, light, compact (weighs approx. 20 lbs.).
• Sound-Level Meter Acoustical: Easy; small, light, compact (weighs approx. 20 lbs.).
• Security/Trail Camera: Easy; small, light, compact (camera weighs approx. 2 lbs.).
• VID Service Provider: Although it is a standalone unit, it is not portable; requires installation by technician.
• ADS-B Transponder Receiver Service Provider: Not portable; requires installation by technician.

Durability
• Automated Acoustical: PVC housing for the microphone and microprocessor and the solar panel were sturdy and durable. With the addition of a sealed bucket housing the components and battery, the unit proved weather resistant.
• Sound-Level Meter Acoustical: Equipment is housed in a sturdy Pelican® case, making it durable and weather resistant.
• Security/Trail Camera: Camera is housed in a rugged weatherproof enclosure, making it sturdy and durable.
• VID Service Provider: Equipment is housed in a sturdy all-weather casing, which makes it durable.
• ADS-B Transponder Receiver Service Provider: The receiver used by the service provider failed during the test.

Ease of Installation and Airport Impacts
• Automated Acoustical: FAA Form 7460 filing required; required to stay clear of RSA and TSA; portable, self-contained unit resulted in easy installation.
• Sound-Level Meter Acoustical: FAA Form 7460 filing required; required to stay clear of RSA and TSA; portable, self-contained unit resulted in easy installation.
• Security/Trail Camera: FAA Form 7460 filing required; required to stay clear of RSA and TSA; portable, self-contained unit resulted in easy installation.
• VID Service Provider: FAA Form 7460 filing required; required to stay clear of RSA and TSA; self-contained unit, but not portable and requires installation by company technician.
• ADS-B Transponder Receiver Service Provider: Small unit and roof-top antenna; portable, but requires installation by company technician.

Table 3-9. (Continued).

Maintenance and Operation
- AAC: Little maintenance required; solar panel was cleared of snow and grass removed from blocking the microphone.
- SMAC: Required changing batteries on a frequent basis, replacing the windscreen, clearing snow and grass from blocking the microphone, and calibrating the sound-level meter.
- S/TC: Little maintenance required; solar panel was cleared of snow and grass removed from blocking the lens.
- VID Service Provider: No maintenance required other than ensuring cameras were not blocked by snow.
- ADS-B Receiver Service Provider: No maintenance required.

Ease of Data Retrieval
- AAC: Simple; USB connection for direct upload to computer. When multiple counters are used on the same runway, manual removal of duplicate counts is required.
- SMAC: Simple; SD card slot for upload into computer. When multiple counters are used on the same runway, manual removal of duplicate counts is required.
- S/TC: Simple; SD card slot for upload into computer. Removal of duplicate pictures is required for the count because more than one picture is needed per motion detection to ensure the tail number is viewable.
- VID Service Provider: Simple; computer with Internet service.
- ADS-B Receiver Service Provider: Simple; computer with Internet service.

Performance in Various Weather and Lighting Conditions
- AAC: No impacts from lightning, thunder, or frigid temperatures encountered; lighting issues not a factor.
- SMAC: No impacts from lightning or thunder encountered; frigid temperatures deplete the battery quickly and there is no solar panel charging option; lighting issues not a factor.
- S/TC: No impacts from low/no lighting encountered; frigid temperatures deplete the battery in approx. 2 weeks, but the addition of a solar panel solves this; night photos exceeded the 70-ft. range limit of the specifications.
- VID Service Provider: No impacts from low/no lighting or frigid temperatures encountered.
- ADS-B Receiver Service Provider: The receiver used by the service provider failed during the test.

Service Contract Requirements
- AAC: No contract required.
- SMAC: No contract required.
- S/TC: No contract required.
- VID Service Provider: Contract required.
- ADS-B Receiver Service Provider: Contract required from service provider, who writes specific algorithms to identify operations.

Cost
- AAC: Approximately $4,800 each at time of test.
- SMAC: Approximately $4,800 each at time of test.
- S/TC: Approximately $1,000 each at time of test.
- VID Service Provider: Approximately $31,000 for lease of two cameras and data analysis service for 7 months at time of test.
- ADS-B Receiver Service Provider: Approximately $5,000 for lease and data analysis for 7 months at time of test.

Best Accuracy Obtained During Case Studies
- AAC: Multiple counters needed for longer runways; 92% using 3 counters on a single 5,500-ft. runway.
- SMAC: Multiple counters needed for longer runways; 94% using 1 counter on a single 2,800-ft. runway.
- S/TC: 100% for taxis to and from the runway at an airport with a simple configuration and centralized terminal area. All touch-and-goes missed; error rate dependent on number of touch-and-goes at the airport.
- VID Service Provider: 90% for taxis to and from the runway. All touch-and-goes missed; error rate dependent on number of touch-and-goes at the airport.
- ADS-B Receiver Service Provider: 0% during testing; unit failed during study. When working, it identified only 5 aircraft that were not already identified by the VID.

Other
- AAC: Only counts takeoffs, which requires doubling to estimate operations; exceptionally quiet aircraft are missed; premise is based on missed takeoffs (false negatives) being approximately offset by false positives.
- SMAC: Only counts takeoffs, which requires doubling to estimate operations; exceptionally quiet aircraft are missed; premise is based on missed takeoffs (false negatives) being approximately offset by false positives.
- S/TC: Does not count touch-and-goes.
- VID Service Provider: Does not count touch-and-goes.
- ADS-B Receiver Service Provider: As of February 24, 2014, only 2% of the U.S. fleet had ADS-B Out (Lee-Lopez 2014). With this low equipage rate, ADS-B is not a viable solution for counting aircraft at this time.

(See Figure 3-2.) The system had an embedded 32-bit, 72-megahertz ARM 7 microprocessor and system software programmed to detect the sounds associated with a takeoff. If the correct characteristics are detected, the microprocessor records the time, date, and acoustic characteristics of the event in its internal memory (Basil Barna). To conserve power, the AAC system tested is programmed to listen for activity at one-second intervals. The system tested was first developed in the late 1990s for counting aircraft operations at secondary and backcountry airports (Basil Barna).

Computer Requirements

A typical Microsoft Windows-based computer with a USB port and Microsoft Excel® will allow the user to view the data.

Data Provided

The AAC tested provided the user with the date and time of the event recorded, its loudness, and its duration in seconds. No individual aircraft characteristics were provided. Since the device only records takeoffs, the total events recorded were doubled to determine operations under the premise that for every takeoff there is a landing and vice versa.

Ease of Portability

The AAC was completely portable. It consisted of a polyvinyl chloride (PVC) plastic cylinder housing, four gigabytes of internal memory, a 12-volt sealed lead-acid battery, a 5-watt solar panel, and a USB 2.0 connection. The heaviest item was the battery. The total weight of the entire unit was approximately 20 pounds.

Although it was shipped in a durable Pelican® case, the case was not designed for use in the field. The initial installation consisted of simply placing all the pieces on the ground (see center picture in Figure 3-2), but it quickly became apparent that this would not protect the equipment from the elements. The Indiana Department of Transportation, Office of Aviation staff, who utilize similar equipment, advised housing the unit inside a five-gallon bucket. Accordingly, a hole was cut into the side of the bucket a few inches from the base for the microphone, and everything but the solar panel was placed inside with the microphone extending through the hole. (See lower two pictures in Figure 3-2.) There were no user-serviceable parts inside the unit. The microprocessor and microphone slide into the PVC weather-protection sleeve. The solar panel and the electronics package power cable plug into the connector on the battery.

Durability

Despite its lack of housing for all the individual components, the AAC tested was designed for hardy use in outdoor conditions. The PVC housing for the microphone and microprocessor was sturdy and effectively sheltered the internal components. The maintenance-free battery ran the equipment continuously. The solar panel recharged the battery regardless of weather, although snow was cleared away at times during the winter. With the addition of the bucket for housing the components, the unit proved quite durable.

Figure 3-2. AAC.
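As a concrete illustration of the doubling premise described under Data Provided above, the following sketch turns an AAC event log into an operations estimate. The CSV layout (one recorded takeoff per row, with date, time, loudness, and duration fields) is an assumption for illustration; the counter's exact export schema is not reproduced here.

    import csv

    def estimate_operations(csv_path):
        # Count recorded takeoff events in an AAC export and double them,
        # per the premise that every takeoff has a matching landing.
        # The file layout (one takeoff per row under a header line) is
        # assumed for illustration, not taken from the manufacturer.
        with open(csv_path, newline="") as f:
            takeoffs = sum(1 for _ in csv.DictReader(f))
        return 2 * takeoffs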

Ease of Installation and Airport Impacts

The FAA determined that any equipment installation on the airport, even if temporary and outside the runway safety area (RSA), required FAA approval (through the filing of FAA Form 7460) in order to be in compliance with Title 14 of the CFR Part 77. In the case of this research, a Form 7460 airspace determination was filed for each location where the AAC was evaluated. Since the AAC is portable and simply sits on the ground, there were no permanent installation requirements. As such, there was no impact to the airport infrastructure.

The user manual for the equipment instructed that it be located adjacent to the runway near a typical lift-off point, with the best location being one that maximized the sound of a takeoff and minimized all other sounds. It additionally instructed that the equipment be close to, but a safe distance away from, the runway, typically 10 to 20 feet. However, to receive a non-objectionable airspace determination from the FAA on the Form 7460 submittal, the equipment had to be located outside of the RSA of the airports where it was evaluated. Typical RSAs at non-towered airports range from 120 feet wide (60 feet each side of runway centerline) to 500 feet wide (250 feet each side of runway centerline), depending on the size of the aircraft that use the airport. At this distance the equipment is generally farther away from the runway than it was designed to be. (Note: FAA AC 150/5300-13A, Airport Design, provides the width for all runway classifications in Appendix 7, Runway Design Standards Matrix. Although some are wider than 500 feet, the maximum width of the RSA where the acoustic equipment was tested was 500 feet.)

Maintenance and Operation

The AAC required little maintenance in the field. The solar panel provided ample power to recharge the battery during the seven months the equipment was deployed. The AAC had an internal battery for the internal clock that provided backup power when there was no external power. During the winter it was necessary to keep the cylinder unit clear of snow so its listening device was not blocked. It was also necessary to occasionally cut tall grass away during the other seasons for the same reason. (See Figure 3-3.)

Ease of Data Retrieval

When the power harness was connected to the AAC, it automatically started collecting and storing data. When the USB cable was connected, it provided the user with an opportunity to synchronize clocks and then access the internal storage device that contained the comma-separated values (i.e., .csv) data files. This generally worked fairly well, but retrieving any data required the counter to be disconnected from power and then connected via USB to the computer. There was no memory card or flash drive download option. The power connection proved difficult to disconnect when temperatures were below freezing and the user's fingers were cold. In the cold, the USB connection also had intermittent problems connecting with the laptop computer for the data download, due either to the cold weather's impact on the computer or to the USB connection itself. When more than one unit was used, the raw data had to be manually manipulated to remove duplicate counts. There was no automated feature for this, and the process was cumbersome, time consuming, and prone to human error. Once the sample is taken, the user has to extrapolate it into an annual count.
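Because the duplicate-removal step just described was manual and error-prone, it is a natural candidate for a small script. The sketch below is one possible approach, not part of the equipment's documented workflow: it assumes each counter's events have already been parsed into timestamps, and the 30-second matching window is a judgment call rather than a manufacturer value.

    from datetime import datetime, timedelta

    def merge_counts(event_lists, window_s=30):
        # Merge takeoff timestamps from several counters on one runway.
        # Events that fall within `window_s` seconds of the previously
        # kept event are treated as the same takeoff and counted once.
        # The window is a trade-off: too small and duplicates survive;
        # too large and distinct takeoffs merge.
        events = sorted(t for lst in event_lists for t in lst)
        merged = []
        for t in events:
            if not merged or (t - merged[-1]) > timedelta(seconds=window_s):
                merged.append(t)
        return merged

    # Two counters hear the same 09:15 takeoff a few seconds apart.
    a = [datetime(2014, 5, 1, 9, 15, 2), datetime(2014, 5, 1, 11, 40, 0)]
    b = [datetime(2014, 5, 1, 9, 15, 9)]
    print(len(merge_counts([a, b])))  # -> 2 distinct takeoffs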
Performance in Various Weather and Lighting Conditions

The temperature reached a low of -1°F during the study, and the AAC continued to work. While the laptop required for data download did not seem to work well in below-freezing temperatures, the AAC appeared to be undaunted by them. After several weeks below freezing, the AAC was still operating without interruption. Being acoustically activated, the device was unaffected by lighting conditions. Additionally, thunder had no discernible impact on it (i.e., thunder did not trigger it to record).

Service Contract Requirements

The AAC is a fully functioning, standalone unit that did not require any outside support. Once purchased, the user had the ability to operate the unit without the need for any type of service contract. The manufacturer was extremely helpful, personally delivering the device and teaching the researcher how to use it.

Figure 3-3. AAC deployed.

Cost Per Unit

The cost of the AAC will vary depending on when it is purchased, since the prices of its component pieces vary based on their respective markets. At the time of acquisition, two units were purchased for $4,800 each.

Accuracy Assessment

The AAC was evaluated at four airports in several different locations. Appendix D contains the airport diagrams for the four airports and the locations where the AAC equipment was placed. The results are shown by airport in terms of percent error. This error is defined as the difference between the measured results and the actual results. The percent error is the ratio of the error to the actual results, multiplied by 100. The smaller the error, the higher the accuracy; a percent error of 100% means there were no correct measurements.

A percent error was computed for all takeoffs correctly recorded by each counter in each location. This did not include any false positives. (See the next paragraph for more information on false positives.) In the case of the acoustical counter, the equipment is only supposed to count takeoffs, and the manufacturers indicate that takeoffs are to be doubled to calculate operations. Therefore, a theoretical percent error could be computed for operations where the takeoffs correctly recorded by the counter are doubled and compared to the sum of the actual takeoffs and actual landings. However, this was not done because of systematic errors during observation that could skew the results. For example, if the majority of aircraft consistently take off in the morning for business purposes while visual observations are being recorded, but return after visual observations have stopped for the day, those landings are never recorded. The assumption is made here that for every takeoff there is a landing, so the percent error would be the same for takeoffs as for total operations if the sample is taken over a long enough period to compensate for the reciprocal operations that occurred before or after the counter was deployed.

In addition to the percent error for takeoffs and operations, percent errors were also calculated with false positives included. A false positive can be a landing, a lawn mower, a taxi, or anything that is not a takeoff but that triggers the counter to record. Since in actual use of the equipment a user would be unable to remove any false positives, these were also tracked and percent errors were computed for takeoffs with the false positives included.

The manufacturer designed the AAC so that "the analysis algorithm is set at a point where missed takeoffs (false negatives) are approximately offset by false positives" (Basil Barna). The manufacturer claimed that, "on balance the recorded count will be within 10% of the actual number of take offs" (Basil Barna).

Observed errors are presented on the following pages for each airport where the AAC was studied. The most important information gained from the research on the AAC is summarized below:

• There is no one level of accuracy that can be achieved with this equipment.
• It is not a simple "plug and play" type of device. Significant time must be taken to test that the counter(s) is located correctly, but there is not an easy way to get the data from the counter. It has to be completely powered down and opened up, which makes testing a location for accuracy time consuming.
• There is no one location that can be identified for the best performance (i.e., location is dependent on airport configuration, favored runway, and typical aircraft users).
• Multiple units may be needed to achieve acceptable performance at many airports because the distance the equipment is located perpendicular to the rotation (lift-off) point of the aircraft affects accuracy. The use of multiple units also requires removal of duplicate counts from the raw data, which takes additional time.
• Airports with multiple/crossing runways prove extremely challenging to count accurately.
• FAA Forms 7460 were required to be filed for each piece of equipment, and to receive a determination of no hazard, the equipment had to be located outside of the RSA.

The longest study with the most sampling occurred at TYQ. This case study included visual observation over 15 days spanning seven months. Table 3-10 shows the overall results of this study. (Appendix D includes the airport diagram.)

Although the manual does not discuss the use of two counters, the length of TYQ's runway (5,500 feet), as compared to the length of the runways for which this counter was initially designed and tested, suggested more than one counter might be needed. Therefore, the AAC was located at various positions along the runway to determine the best location and whether more than one counter was needed to adequately cover the runway.

As described earlier, the user manual instructs that the equipment be located adjacent to the runway near a typical lift-off (rotation) point, with the best location being one that maximizes the sound of a takeoff and minimizes all other sounds. It additionally instructs that the equipment be close to, but a safe distance away from, the runway, typically 10 to 20 feet. Based on TYQ's RSA, all positions were required to be 250 feet from the runway centerline.

Initial field evaluation determined that the counters performed best when located as close as possible perpendicular to the aircraft's rotation point, just as the manual instructs. As stated before, the manufacturer was extremely helpful and loaned a third counter for the case study to help find the optimal locations.

When located in the middle of the runway, the counter almost always picked up at least half of the takeoffs, but many takeoffs were missed because the point of rotation was either too far beyond or behind the counter. Therefore, these positions were augmented by locations approximately halfway between the midpoint and the ends. The results were as expected: when Runway 18 was favored, the counters at the midpoint and close to the end of Runway 36 produced better results than the one near the end of Runway 18, and vice versa. The results indicate that the use of three counters gives the best performance for a runway of this length. In most locations, however, the counters recorded fewer events than were visually observed (i.e., the equipment undercounted operations). When false positives were included, the percent error decreased. The manufacturer's claim of ±10% was achieved only by the use of three counters on a runway of this length, and only with the inclusion of false positives. The number of false positives recorded was similar for each location, with the majority being from low approaches. Each counter missed touch-and-goes roughly equally, just under half the time. (See Tables 3-10 and 3-11.)

During the study, single engine piston (SEP) aircraft were the most often missed takeoffs, but they were also the most prevalent aircraft activity at the airport (see Table 3-12).

A case study on the AAC was also performed at EYE that included visual observations over three days. EYE's RSA allowed the counters to be located 75 feet from the runway centerline, which was 175 feet closer than at TYQ. EYE's runway is also 1,300 feet shorter than TYQ's. Because of its length, the hypothesized location for the best results was the midpoint of the runway (i.e., most aircraft would rotate within 2,100 feet). However, at this location the AAC missed takeoffs more than half the time. (See Tables 3-13 and 3-14.)

Overall, the midpoint at EYE likely performed worse than the midpoint at TYQ because the runway is shorter and, unlike at TYQ, the majority of aircraft are beyond the midpoint when they reach rotation speed. Because the midpoint performed poorly, the counters were moved to the first and second thirds of the runway to determine if these locations better represented the typical takeoff points of aircraft. The results were similar to those at TYQ in that the counter performed worse when it was located on the third of the runway closest to the end that the winds favored, because aircraft were well beyond that point at rotation speed. The opposite was also true: the counter on the opposite end of the favored runway performed better. Additionally, when the middle counter's results were added to the results of the more optimally located counter, the total error rate was lower. Overall, however, the counters did not perform as well at EYE as at TYQ, though the testing time was significantly shorter.

During the EYE case study, SEP aircraft were the most often missed, but they were also the most prevalent aircraft activity at the airport. MEP aircraft were the next most often missed takeoffs. (See Table 3-15.)
Table 3-10. Overall results for Indianapolis Executive Airport—runway 18-36 (5,500 ft. × 100 ft.)

AAC percent error when placed 250 ft. from runway centerline.
Location A = 1,800 ft. from Runway 18 end; Location B = 1,800 ft. from Runway 36 end; Location C = midpoint of runway.

Location:                                          A    B    C    A & B  A & C  B & C  A, B, & C
Percent Error for Takeoffs                        42%  32%  35%  20%    28%    20%    17%
Percent Error for Takeoffs with False Positives   35%  26%  27%  13%    20%    13%    8%

Touch-and-Go Percent Error Rate by Counter:  A = 48%, B = 48%, C = 42%

INTERPRETATION EXAMPLE: A touch-and-go error rate of 42% means 42% of the touch-and-goes were missed by the AAC at that location.
INTERPRETATION EXAMPLE: A combination of counters positioned at Locations A, B, and C produced operations counts 8% less than what actually occurred.

Prepared by: Woolpert, Inc.
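The percent error figures in Table 3-10 and the tables that follow can be computed directly from the definition given under Accuracy Assessment above. The sketch below shows the calculation; the tallies in the example are hypothetical, chosen only so the outputs line up with the combined A, B, and C percentages (17% and 8%).

    def percent_error(measured, actual):
        # Percent error as defined in the text: the difference between
        # measured and actual results, as a ratio of actual, times 100.
        # The report tracks direction (over- vs. undercount) separately
        # via table shading, so only the magnitude is returned here.
        return abs(measured - actual) / actual * 100

    # Hypothetical tallies: 100 observed takeoffs, 83 recorded
    # correctly, plus 9 false positives (landings, mowers, etc.).
    correct, false_pos, actual = 83, 9, 100
    print(percent_error(correct, actual))              # takeoffs only: 17.0
    print(percent_error(correct + false_pos, actual))  # with false positives: 8.0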

Table 3-11. Favored runway results for Indianapolis Executive Airport—runway 18-36 (5,500 ft. × 100 ft.)

AAC percent error when placed 250 ft. from runway centerline.
Location A = 1,800 ft. from Runway 18 end; Location B = 1,800 ft. from Runway 36 end; Location C = midpoint of runway.

Favored Runway = 18:
Location:                                          A    B    C    A & B  A & C  B & C  A, B, & C
Percent Error for Takeoffs                        54%  28%  43%  24%    37%    23%    21%
Percent Error for Takeoffs with False Positives   47%  21%  34%  16%    28%    13%    11%

Favored Runway = 36:
Location:                                          A    B    C    A & B  A & C  B & C  A, B, & C
Percent Error for Takeoffs                        19%  49%  16%  16%    11%    16%    11%
Percent Error for Takeoffs with False Positives   16%  46%  14%  14%    8%     14%    8%

Favored Runway = NA:
Location:                                          A    B    C    A & B  A & C  B & C  A, B, & C
Percent Error for Takeoffs                        0%   13%  13%  0%     0%     13%    0%
Percent Error for Takeoffs with False Positives   25%  0%   13%  25%    25%    0%     25%

Note: A shaded cell with black text means the measured result was higher than the actual.
INTERPRETATION EXAMPLE: A combination of counters positioned at Locations A, B, and C produced operations counts 11% less than what actually occurred when Runway 18 was favored by the winds.
Prepared by: Woolpert, Inc.

Table 3-12. Indianapolis Executive Airport missed takeoff analysis.

AAC missed percent takeoffs by type:

Type   Percent of Activity (takeoffs, landings, taxis, etc.)   Percent Takeoffs Missed
SEP    80.8%                                                   85.1%
J      6.3%                                                    5.3%
MEP    5.7%                                                    4.3%
H      1.3%                                                    3.2%
GYRO   0.8%                                                    1.1%
METP   1.7%                                                    1.1%
GV     2.3%                                                    NA
SETP   1.1%                                                    0.0%

SEP = single engine piston; MEP = multi-engine piston; J = jet; GYRO = gyrocopter; GV = ground vehicle; SETP = single engine turbo prop; H = helicopter; METP = multi-engine turbo prop.
INTERPRETATION EXAMPLE: 80.8% of the activity during the test was by SEP; 85.1% of the takeoffs missed by the AAC were performed by SEP.
Prepared by: Woolpert, Inc.

Table 3-13. Overall results for Eagle Creek Airport—runway 3-21 (4,200 ft. × 75 ft.)

AAC percent error results when placed at midpoint of runway, 75 ft. from centerline:
  Percent Error for Takeoffs                        63%
  Percent Error for Takeoffs with False Positives   63%
  Touch-and-Go Percent Error Rate                   75%

AAC percent error results when placed 1,400 ft. from runway end, 75 ft. from centerline:
                                                    RW 21   RW 3
  Percent Error for Takeoffs                        41%     95%
  Percent Error for Takeoffs with False Positives   36%     95%
  Touch-and-Go Percent Error Rate                   67%     100%

Prepared by: Woolpert, Inc.

Table 3-14. Results by favored runway, Eagle Creek Airport—runway 3-21 (4,200 ft. × 75 ft.)

AAC percent error results when placed 75 ft. from runway centerline.

Locations: 1,400 ft. from Runway 21 end and midpoint (Favored Runway = 3):
                                                    RW 21   Midpoint   Combined
  Percent Error for Takeoffs                        41%     68%        36%
  Percent Error for Takeoffs with False Positives   36%     68%        32%

Locations: midpoint and 1,400 ft. from Runway 3 end (Favored Runway = 3):
                                                    Midpoint   RW 3   Combined
  Percent Error for Takeoffs                        57%        95%    57%
  Percent Error for Takeoffs with False Positives   57%        95%    57%

Prepared by: Woolpert, Inc.

Table 3-15. Eagle Creek Airpark missed takeoff analysis.

AAC missed percent takeoffs by type:

Type   Percent of Activity (takeoffs, landings, taxis, etc.)   Percent Takeoffs Missed
SEP    77.0                                                    85
MEP    9.3                                                     10
J      6.2                                                     2
METP   2.5                                                     2
G      0.0                                                     0
SETP   0.0                                                     0
H      0.6                                                     0
GV     4.3                                                     NA

SEP = single engine piston; MEP = multi-engine piston; J = jet; G = gyrocopter; GV = ground vehicle; SETP = single engine turbo prop; H = helicopter; METP = multi-engine turbo prop.
Prepared by: Woolpert, Inc.

A case study on the AAC was also completed at I42. This airport was chosen for its shorter runway and narrower RSA, which allowed the counter to be located closer to the runway. Additionally, this airport better represented the type of runway for which the AAC was originally developed. This study included visual observations over three days. Because of the short length of the runway, the vast majority of takeoffs occurred near the midpoint, so the AAC was located at the midpoint at varying distances from the centerline. (See Appendix D for airport diagrams.) Because of the low traffic during the first day of testing, a local aircraft and pilot were enlisted to perform several hours of takeoffs, landings, and touch-and-goes during the second and third days.

While this was the type of runway the AAC was designed for, it did not perform well. However, this may be because the majority of the operations were performed in a Cessna 172G with a Continental O-300 SER 145-hp engine. (See Table 3-16.) The counters seemed to function better when moved farther from the runway centerline, which is contrary to expectations, but the Continental O-300 SER significantly affected the results. It did not seem to matter where the aircraft with this engine was when it rotated; the AAC registered it less than 10% of the time, even though, at 1,400 feet from either end, the units were optimally located for catching the rotation point. During testing, this aircraft consistently lifted off the ground within approximately 200 feet of a point perpendicular to the counter location, but was not detected. The manufacturer's website states that the AAC may miss a takeoff if the aircraft is exceptionally quiet, and this proved true at I42. When the Continental O-300 was removed from the evaluation, two of the AAC units caught every takeoff.

Because three counters were located side-by-side at I42, this case study also looked at the consistency of the AAC. Although all three counters were the same, they did not perform exactly the same. However, none recorded any false positives, so the error rates were the same with and without false positives. As at TYQ, the AAC undercounted operations.

Finally, a case study of the AAC was performed at LAF to determine its effectiveness at an airport with crossing runways. The study included visual observation over six days. The RSA for LAF's primary runway required the equipment to be located no closer than 250 feet from the centerline of Runway 10-28 and 150 feet from the centerline of Runway 5-23. When the study was developed, two counters were thought to be needed because of the two runways, and various locations were approved by the FAA based on the need for two counters. However, two counters were insufficient to track traffic at this airport. In all probability, even three counters would not perform sufficiently if the winds did not consistently favor their locations. Table 3-17 shows the overall percent errors for each location studied, while Table 3-18 shows the results based on favored runway. Note that a shaded cell with black text means that the measured result was higher than was visually observed (i.e., the counter overcounted). The locations that produced the overall best results before false positives were included were a combination of B, C, and D. These results were the best because these locations had no error rate when Runway 10 was favored.
They also had the lowest error rate when Runway 5 was favored, but when Runway 28 was favored, they did not correctly record any takeoffs. When false positives were included, a combination of Locations C and D produced the best results overall.

Table 3-16. Paoli Municipal Airport—runway 2-20 (2,800 ft. × 50 ft.)

AAC percent error results from side-by-side evaluation at midpoint of runway at varying distances from runway centerline; Continental O-300 SER comprising 81% of activity.

Location A = 50 ft. from runway centerline:
                                                    AAC#1   AAC#2   AAC#3
  Percent Error for Takeoffs                        81%     94%     94%
  Percent Error for Takeoffs with False Positives   81%     94%     94%

Location B = 75 ft. from runway centerline:
                                                    AAC#1   AAC#2   AAC#3
  Percent Error for Takeoffs                        83%     87%     87%
  Percent Error for Takeoffs with False Positives   83%     87%     87%

Location C = 125 ft. from runway centerline:
                                                    AAC#1   AAC#2   AAC#3
  Percent Error for Takeoffs                        71%     43%     43%
  Percent Error for Takeoffs with False Positives   71%     43%     43%

Note: If an engine larger/louder than the Continental O-300 SER had been in the aircraft performing the majority of operations during this test, the equipment would likely have performed better.
Prepared by: Woolpert, Inc.

Table 3-17. Overall results for Purdue University Airport—two runways (runway 10-28: 2,793 ft. × 50 ft.; runway 5-23: 6,600 ft. × 150 ft.)

AAC percent error results.
Location A = midpoint of Runway 10-28, 250 ft. from centerline
Location B = 2,000 ft. from Runway 28 end, 250 ft. from centerline
Location C = 1,200 ft. from Runway 23 end, 150 ft. from centerline
Location D = midpoint of Runway 10-28, 250 ft. from centerline; 1,000 ft. from Runway 5-23 centerline

Location:                                          A    B    C    A & B  A & C  B & C  A, B, & C
Percent Error for Takeoffs                        53%  82%  99%  49%    53%    82%    49%
Percent Error for Takeoffs with False Positives   44%  78%  99%  38%    44%    78%    38%

Location:                                          B    C    D    B & C  B & D  C & D  B, C, & D
Percent Error for Takeoffs                        52%  41%  70%  36%    52%    28%    21%
Percent Error for Takeoffs with False Positives   30%  25%  52%  9%     27%    5%     6%

Note: A shaded cell with black text means the measured result was higher than the actual.
Prepared by: Woolpert, Inc.

Table 3-18. Results by favored runway for Purdue University Airport—two runways (runway 10-28: 2,793 ft. × 50 ft.; runway 5-23: 6,600 ft. × 150 ft.)

AAC percent error results. Location definitions as in Table 3-17.

Favored Runway = 23:
Location:                                          A    B    C     A & B  A & C  B & C  A, B, & C
Percent Error for Takeoffs                        93%  93%  96%   93%    93%    93%    93%
Percent Error for Takeoffs with False Positives   85%  89%  96%   85%    85%    89%    85%

Favored Runway = 28:
Location:                                          A    B    C     A & B  A & C  B & C  A, B, & C
Percent Error for Takeoffs                        33%  77%  100%  27%    33%    77%    27%
Percent Error for Takeoffs with False Positives   23%  73%  100%  13%    23%    73%    13%

Favored Runway = 28:
Location:                                          B     C     D     B & C  B & D  C & D  B, C, & D
Percent Error for Takeoffs                        100%  100%  100%  100%   100%   100%   100%
Percent Error for Takeoffs with False Positives   100%  100%  100%  100%   100%   100%   100%

Favored Runway = 5:
Location:                                          B    C    D    B & C  B & D  C & D  B, C, & D
Percent Error for Takeoffs                        69%  32%  88%  22%    69%    25%    22%
Percent Error for Takeoffs with False Positives   59%  24%  81%  8%     54%    12%    7%

Favored Runway = 10:
Location:                                          B    C    D    B & C  B & D  C & D  B, C, & D
Percent Error for Takeoffs                        15%  40%  35%  40%    15%    13%    0%
Percent Error for Takeoffs with False Positives   30%  8%   3%   13%    33%    30%    53%

Prepared by: Woolpert, Inc.

Either Locations C and D or Locations B, C, and D achieved the manufacturer's claimed error rate, and while it may be tempting to assume they would achieve this universally, these locations only work if the winds favor them, which they did over the six days the units were tested. Again, when the winds did not favor them, error rates of 100% were reached. In summary, while error rates of 5% and 6% were obtained with three counters, this was only because the equipment had counted non-takeoffs 20% of the time and the winds favored the counters' positions during the evaluation. Adding more counters may reduce the percent error rate, but likely only because they would be counting more false positives. The more counters included, the more confusing it is to analyze the results and remove double or triple counts, and the process becomes increasingly susceptible to human error.

During the study, SEP aircraft were again the most often missed takeoffs, but they were also the most prevalent aircraft activity at the airport. MEP aircraft were the next most often missed takeoffs. (See Table 3-19.)

Mowing is a major function at all airports, and mowers have the potential to trigger an acoustically activated aircraft traffic counter. Since no mowing was done during any of the evaluations, a separate mowing study was performed to determine if and when a mower might trigger the counter. The results of the mower evaluation revealed that two of the three counters were triggered by the mower at 15 feet in front of the unit. All three were triggered by the mower at five-foot increments from 10 feet in front of to 15 feet behind the units. (See Table 3-20.)

Table 3-19. Purdue University Airport missed takeoff analysis.

AAC missed percent takeoffs by type:

Type   Percent of Activity (takeoffs, landings, taxis, etc.)   Percent Takeoffs Missed
SEP    95.3                                                    95.2
MEP    4.3                                                     3.0
J      0.0                                                     1.8
GV     0.0                                                     0.0
GYRO   0.0                                                     0.0
SETP   0.0                                                     0.0
H      0.4                                                     0.0
METP   0.0                                                     NA

SEP = single engine piston; MEP = multi-engine piston; J = jet; GYRO = gyrocopter; GV = ground vehicle; SETP = single engine turbo prop; H = helicopter; METP = multi-engine turbo prop.
Prepared by: Woolpert, Inc.

Table 3-20. Mower evaluation—AAC side-by-side results.

False positives recorded:                AAC#1   AAC#2   AAC#3
Mower 60 ft. in front of counter         0       0       0
Mower 55 ft. in front of counter         0       0       0
Mower 50 ft. in front of counter         0       0       0
Mower 45 ft. in front of counter         0       0       0
Mower 40 ft. in front of counter         0       0       0
Mower 35 ft. in front of counter         0       0       0
Mower 30 ft. in front of counter         0       0       0
Mower 25 ft. in front of counter         0       0       0
Mower 20 ft. in front of counter         0       0       0
Mower 15 ft. in front of counter         1       1       0
Mower 10 ft. in front of counter         1       1       1
Mower 5 ft. in front of counter          1       1       1
Mower 5 ft. behind counter               1       1       1
Mower 10 ft. behind counter              1       1       1
Mower 15 ft. behind counter              1       1       1
Total False Positives                    6       6       5

Prepared by: Woolpert, Inc.

Sound-Level Meter Acoustical Counter

Principle(s) of Operation and Intended Use

The SMAC tested included a sound-level meter and a special software package for identifying aircraft takeoffs. (See Figure 3-4.) Paired together, the system is supposed to record sounds and then differentiate those that are takeoffs from other events. Once the appropriate parameters are set on the sound meter, data are recorded and stored on the instrument's memory card inside the unit. The software then identifies which noise events were aircraft takeoffs. These data are then sent to an electronic database. Since the system works with a Class 2 sound-level meter, it can also be used to measure noise in general.

Figure 3-4. SMAC.

Computer System Requirements

A typical Microsoft Windows®-based computer with an SD memory card reader and Microsoft Excel® is needed to access and manipulate the data. The ASNL software reads the files from the SD card, computes the number of takeoffs, and then saves the results in an electronic database.

Data Provided

The SMAC provided the user with the date and time of the event (i.e., takeoff) recorded and its Lmax (maximum sound pressure level). No individual aircraft characteristics were provided. Since the device only recorded takeoffs, the total events recorded were doubled to determine operations under the premise that for every takeoff there is a landing and vice versa.

Ease of Portability

The SMAC was completely portable. The noise meter and two 6-volt sealed lead-acid batteries were housed in a durable case, with a total unit weight of 20 pounds. To move the unit, one needed only to grab the handle of the case and go.

Durability

The SMAC came housed in a sturdy Pelican® case. The only component exposed to the elements was the microphone, which was covered by a foam windscreen. This windscreen disappeared a few times during the study, likely the result of a curious animal.

Ease of Installation and Airport Impacts

As indicated previously, the FAA determined that any equipment installation on the airport, even if temporary and outside the RSA, required FAA approval (through the filing of an FAA Form 7460) in order to be in compliance with Title 14 of the CFR Part 77. In the case of this research, a Form 7460 airspace determination was needed for each location where the SMAC was evaluated. Since the SMAC was portable and simply sat on the ground, there were no permanent installation requirements. As such, there was no impact to the airport infrastructure.

Several parameters have to be set on the sound meter before it can be left to count. Close attention was required when setting these or the system would not work correctly. To interpret the data from the SMAC, software was required, which was provided by the manufacturer. Initial installations were unsuccessful and required assistance from a company representative.

The user manual instructs that the equipment be located close to the runway. To receive a non-objectionable airspace determination from the FAA on the Form 7460 submittal, the equipment had to be located outside of the RSA of the airports where it was evaluated. Typical RSAs at non-towered airports range from 120 feet wide (60 feet each side of runway centerline) to 500 feet wide (250 feet each side of runway centerline), depending on the size of the aircraft that use the airport. (Note: FAA AC 150/5300-13A, Airport Design, provides the width for all runway classifications in Appendix 7, Runway Design Standards Matrix. Although some are wider than 500 feet, the maximum width of the RSA where the acoustic equipment was tested was 500 feet.) This distance was generally farther away than the equipment was designed to be used.

Maintenance and Operation

The SMAC required very little maintenance in the field outside of changing the batteries. Since there was no external power source and the unit did not come with a solar power option, the batteries required changing every one to two weeks, if not sooner in cold weather. During the winter, the external microphone stayed above the snow, so it was never blocked. However, tall grass had to occasionally be cut away during the other seasons so as not to interfere with the microphone. A snowfall of approximately 15 inches or more would begin to block the microphone. Additionally, the microphone can become a bird perch, which makes occasionally changing the windscreen necessary due to the buildup of bird droppings. (See Figure 3-5.)

Ease of Data Retrieval

Except for opening the case, data retrieval from the SMAC was fairly easy. While the units are well protected from the weather, this protection proves cumbersome when the batteries need to be changed or data downloaded. To open the case, the mounting system for the microphone had to be removed. This required removing a small wing nut that was in tight quarters, which was almost impossible to do while wearing gloves.

Figure 3-5. SMAC deployed.

When the wing nut is removed with cold fingers, it can easily be dropped and lost in the snow.

The data were stored on an SD card inside the sound meter. This card was easily swapped with an empty card in the field and then uploaded to the computer once in the office, out of the elements. The software included with the equipment outputs the total number of events recorded. It then follows a statistical process for estimating annual operations from the sample, which is virtually the same statistical process described earlier. More specifically, the software read the files from the sound-level meter, saved the results in a Microsoft Excel® file, and used Visual Basic for Applications (VBA) macros in an Excel® template to produce a final report of the estimated annual operations.

Performance in Various Weather and Lighting Conditions

The temperature reached a low of -1°F during the study, and while the SMAC worked in these temperatures, the batteries only lasted a few days before they had to be replaced. Being acoustically activated, the device was unaffected by lighting conditions. Thunder also had no discernible impact on it (i.e., it did not trigger the unit to record).

Service Contract Requirements

The SMAC was a fully functioning, standalone unit that did not require any outside support. Once purchased, the unit can be operated without the need for any type of service contract.

Cost Per Unit

The cost of the SMAC will vary depending on when it is purchased, since the prices of its component pieces vary based on their respective markets. At the time of acquisition, two units were purchased for approximately $4,800 each.

Accuracy Assessment

The longest study with the most sampling occurred at TYQ. This case study included visual observations over 14 days spanning seven months. Table 3-21 shows the overall results of this study. (Appendix D includes the airport diagrams.) The most important information gained from the research on the SMAC is similar to what was learned about the AAC. It is summarized below:

• There is no one level of accuracy that can be achieved with this equipment.
• It is not a simple "plug and play" type of device. Significant time must be taken to test that the counter(s) are located correctly, but there is not an easy way to determine if the equipment counted the aircraft without opening up the case.
• There is no one location that can be identified for the best performance (i.e., location is dependent on airport configuration, favored runway, and typical aircraft users).
• Multiple units may be needed to achieve acceptable performance at many airports because the distance the equipment is located perpendicular to the rotation (lift-off) point of the aircraft affects accuracy. The use of multiple units also requires removal of duplicate counts from the raw data, which takes additional time.
• Airports with multiple/crossing runways prove extremely challenging to count accurately.
• FAA Forms 7460 were required to be filed for each piece of equipment, and to receive a determination of no hazard, the equipment had to be located outside of the RSA.

The user manual for the SMAC did not indicate the need for more than one counter per runway, but experience with the AAC indicated more might be needed, as proved to be the case.
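The annualization step described above, which the SMAC's software performs automatically and which was done by hand for the AAC, can be sketched in its simplest form as follows. The actual templates follow the more detailed statistical process described earlier in this report, so the version below, with hypothetical numbers, deliberately ignores the seasonal and day-of-week weighting a real expansion would apply.

    def annualize(sample_ops, sample_days, days_in_year=365):
        # Simplest possible expansion of a short-duration count:
        # average operations per sampled day times days in the year.
        # A real expansion would weight by season and day of week.
        return round(sample_ops / sample_days * days_in_year)

    print(annualize(sample_ops=420, sample_days=14))  # -> 10950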
As described earlier, the user manual instructs that the equipment be located close to the runway where it will detect the most takeoffs. Based on TYQ's RSA, all positions were required to be 250 feet from the runway centerline. Two SMAC counters were located side-by-side at the midpoint of the runway, where they produced percent error rates of 87% each. When false positives were added in at this location, the error rate was reduced to 83% for both counters. (See Table 3-21.)

Table 3-21. Midpoint results for Indianapolis Executive Airport—runway 18-36 (5,500 ft. × 100 ft.)

SMAC percent error when placed 250 ft. from runway centerline; side-by-side evaluation at runway midpoint. Favored Runway = 18.

                                                    SMAC#1   SMAC#2
Percent Error for Takeoffs                          87%      87%
Percent Error for Takeoffs with False Positives     83%      83%

Prepared by: Woolpert, Inc.

This evaluation seemed to indicate that the majority of aircraft were not rotating close enough to the counter's location perpendicular to the runway to trigger the equipment; they were rotating either too far before or after the counter.

The user manual states that the parameters used in the SMAC were established based on an accuracy study performed in November and December 2005 at Tipton Airport in Maryland. (Note: It does not state the accuracy obtained from that study.) It is important to note that the runway at Tipton is 3,000 feet in length, which is 2,500 feet shorter than the one at TYQ. This is obviously a contributing factor to why the SMAC did not perform as well at the midpoint of TYQ's runway. Both units did, however, perform the same.

The two SMAC counters were also placed 1,800 feet from each runway end. At these locations they produced better results—error rates of 72% and 82%. (See Table 3-22.) As was the case with the AAC, two units performed better than one because they were able to "listen" over more area. When both counters were used together at these two locations, they produced a combined percent error rate of 63%, which was almost cut in half—to 38%—when false positives were included. (See Table 3-23 for results based on favored runway.)

This might seem to indicate that many landings were being counted at these locations, but close review of the data revealed that the false positives were mostly comprised of non-events (i.e., NOT a landing or a low approach). There were instances when one counter recorded several events just a minute or two apart. While this happened on both counters, it did not happen at the same time. One might conclude that the counters were triggered by birds sitting on the windscreen, as these screens were covered in bird droppings. Regardless, the counter was cutting its error rate almost in half not by counting engine noise as it was supposed to, but by counting mostly non-events. Although three counters were not used in the case study, deploying counters at all three locations would reduce the error rate. However, the counter was clearly missing a significant number of takeoffs at this distance from the runway centerline.

A case study on the SMAC was also performed at EYE that included visual observations over three days.

Table 3-22. Overall results on first and last third of runway at Indianapolis Executive Airport—runway 18-36 (5,500 ft. × 100 ft.)

SMAC percent error when placed 250 ft. from runway centerline.
Location A = 1,800 ft. from Runway 18 end; Location B = 1,800 ft. from Runway 36 end.

Location:                                          A    B    A & B
Percent Error for Takeoffs                        72%  82%  63%
Percent Error for Takeoffs with False Positives   61%  68%  38%

Prepared by: Woolpert, Inc.

Table 3-23. Results by favored runway at first and last third of runway—Indianapolis Executive Airport—runway 18-36 (5,500 ft. × 100 ft.)

SMAC percent error when placed 250 ft. from runway centerline.
Location A = 1,800 ft. from Runway 18 end; Location B = 1,800 ft. from Runway 36 end.

Favored Runway = 18:
Location:                                          A    B    A & B
Percent Error for Takeoffs                        81%  84%  74%
Percent Error for Takeoffs with False Positives   68%  71%  48%

Favored Runway = 36:
Location:                                          A    B    A & B
Percent Error for Takeoffs                        49%  81%  38%
Percent Error for Takeoffs with False Positives   46%  62%  16%

Prepared by: Woolpert, Inc.

EYE's RSA allowed the counters to be located 75 feet from the runway centerline, which was 175 feet closer than at TYQ. EYE's runway is also 1,300 feet shorter than TYQ's, so different locations were selected to determine whether the runway could be counted with one counter or more would be needed. Based on the instruction manual, the hypothesized location for the best results was the midpoint of the runway. At this location the SMAC caught takeoffs 70% of the time (a 30% error rate), and when false positives were included, the percent error rate was reduced to 16%. (See Table 3-24.)

The runway was also divided into thirds, with counters located in the middle of the first and last third to see if performance was better. (See Table 3-25 for results based on favored runway.) The error rate of the counter on the third closest to Runway 21 was significantly reduced when false positives were included. On closer look, the SMAC counted a few taxis, landings, and low approaches at this location, which offset its undercount of takeoffs. Therefore, two counters on this runway would likely produce viable results when the winds changed to either's favor. Although using three counters would likely produce a lower error rate, a significant number of takeoffs could be double counted. As described earlier, editing the raw data is time consuming, cumbersome, and prone to human error. The false positives at EYE were equally distributed among landings, low approaches, and non-events.

During the EYE case study, SEP aircraft were the only aircraft missed, but as at the other airports, they were also the most prevalent aircraft activity. (See Table 3-26.)

A case study on the SMAC was also completed at I42. Its smaller RSA allowed the counter to be located closer to the runway.

Table 3-24. Overall results for Eagle Creek Airport—runway 3-21 (4,200 ft. × 75 ft.)

SMAC percent error when placed at midpoint of runway, 75 ft. from centerline:
  Percent Error for Takeoffs                        30%
  Percent Error for Takeoffs with False Positives   16%
  Percent Error Rate for Touch-and-Goes             63%

SMAC percent error when placed at first and last third of runway, 75 ft. from centerline:
                                                    1,400 ft. from RW 21   1,400 ft. from RW 3
  Percent Error for Takeoffs                        27%                    24%
  Percent Error for Takeoffs with False Positives   4%                     24%
  Percent Error Rate for Touch-and-Goes             33%                    80%

Note: A shaded cell with black text means the measured result was higher than the actual.
Prepared by: Woolpert, Inc.

Table 3-25. Results by favored runway for Eagle Creek Airport—runway 3-21 (4,200 ft. × 75 ft.)

SMAC percent error when placed 75 ft. from runway centerline.

Locations: 1,400 ft. from Runway 21 end and runway midpoint (Favored Runway = 3):
                                                    RW 21   Midpoint   Combined
  Percent Error for Takeoffs                        30%     30%        17%
  Percent Error for Takeoffs with False Positives   4%      9%         13%

Locations: runway midpoint and 1,400 ft. from Runway 3 end (Favored Runway = 3):
                                                    Midpoint   RW 3   Combined
  Percent Error for Takeoffs                        29%        24%    19%
  Percent Error for Takeoffs with False Positives   24%        24%    14%

Note: A shaded cell with black text means the measured result was higher than the actual.
Prepared by: Woolpert, Inc.

This airport better represented the type of runway at which the SMAC was initially tested in Maryland. This study included visual observation over three days. Because of the length of the runway and the type of aircraft that typically use the facility, the vast majority of takeoffs occurred near the midpoint, so the SMAC was located at the midpoint at varying distances from the centerline. A local aircraft and pilot were enlisted to perform several hours of takeoffs, landings, and touch-and-goes. The SMAC performed better the closer it was to the runway. However, its consistency was questionable in this side-by-side evaluation: while one counter had only a 6% error rate at 50 feet from the runway, the other was at 0% (both with false positives added into the total). As was the case with the AAC, the SMAC had trouble picking up the Cessna 172G with a Continental O-300 SER 145-hp engine. At 75 feet and greater from the runway centerline, it missed this aircraft the vast majority of the time. (See Table 3-27.)

Finally, a case study of the SMAC was performed at LAF to determine its effectiveness at an airport with multiple runways. The study included visual observations over six days. The RSA for LAF's primary runway required the equipment to be located no closer than 250 feet from the centerline of Runway 10-28 and 150 feet from the centerline of Runway 5-23. When the study was developed, two counters were thought to be needed because of the two runways, and various locations were approved by the FAA based on the need for two counters and the manufacturer's instructions. However, two counters were insufficient to track traffic at this airport. (See Table 3-28.) Locations B and C provided the best results because the winds favored Runway 5 and Runway 10 during much of those evaluations, but the error rate was still greater than 50%. (See Table 3-29.) It is difficult to assume that even three counters would have produced much better results. When multiple runways are involved, it becomes increasingly difficult to locate a counter in a position that will optimally serve one runway without producing substantial false positives from the other runway. And with runways as long as those at LAF, even two counters could not adequately count one runway.

Mowing is a major function at all airports, and mowers have the potential to trigger an acoustically activated aircraft traffic counter. Since no mowing was done during the study, a separate mowing evaluation was performed to determine if and when a mower might trigger the SMAC. The results of the mower study revealed that a mower has the potential to trigger the counter when within 5–10 feet of the equipment. (See Table 3-30.)

Table 3-26. Eagle Creek Airpark missed takeoff analysis.

SMAC missed percent takeoffs by type:

Type   Percent of Activity (takeoffs, landings, taxis, etc.)   Percent Takeoffs Missed
SEP    80.4%                                                   100%
MEP    9.3%                                                    0.0%
J      6.2%                                                    0.0%
GV     2.5%                                                    NA
GYRO   0.0%                                                    0.0%
SETP   0.0%                                                    0.0%
H      0.6%                                                    0.0%
METP   4.3%                                                    0.0%

SEP = single engine piston; MEP = multi-engine piston; J = jet; GYRO = gyrocopter; GV = ground vehicle; SETP = single engine turbo prop; H = helicopter; METP = multi-engine turbo prop.
Prepared by: Woolpert, Inc.

Table 3-27. Paoli Municipal Airport—runway 2-20 (2,800 ft. × 50 ft.)

SMAC percent error results from side-by-side evaluation at midpoint of runway at varying distances from runway centerline; Continental O-300 SER comprising 81% of activity.

Location A = 50 ft. from runway centerline:
                                                    SMAC#1   SMAC#2
  Percent Error for Takeoffs                        13%      6%
  Percent Error for Takeoffs with False Positives   6%       0%

Location B = 75 ft. from runway centerline:
                                                    SMAC#1   SMAC#2
  Percent Error for Takeoffs                        79%      79%
  Percent Error for Takeoffs with False Positives   79%      79%

Location C = 125 ft. from runway centerline:
                                                    SMAC#1   SMAC#2
  Percent Error for Takeoffs                        71%      43%
  Percent Error for Takeoffs with False Positives   57%      14%

Note: If an engine larger/louder than the Continental O-300 SER had been in the aircraft performing the majority of operations, the equipment may have performed better at farther distances from the runway centerline.
Note: A shaded cell with black text means the measured result was higher than the actual.
Prepared by: Woolpert, Inc.

Table 3-28. Overall results for Purdue University Airport—two runways (runway 10-28: 2,793 ft. × 50 ft.; runway 5-23: 6,600 ft. × 150 ft.)

SMAC percent error.
Location A = midpoint of Runway 10-28, 250 ft. from centerline
Location B = 2,000 ft. from Runway 28 end, 250 ft. from centerline
Location C = 1,200 ft. from Runway 23 end, 150 ft. from centerline

Location:                                          A    C    A & C
Percent Error for Takeoffs                        94%  99%  94%
Percent Error for Takeoffs with False Positives   91%  99%  91%

Location:                                          B    C    B & C
Percent Error for Takeoffs                        95%  72%  69%
Percent Error for Takeoffs with False Positives   94%  71%  65%

Prepared by: Woolpert, Inc.

Table 3-29. Results by favored runway for Purdue University Airport—two runways (runway 10-28: 2,793 ft. × 50 ft.; runway 5-23: 6,600 ft. × 150 ft.)

SMAC percent error. Location definitions as in Table 3-28.

Favored Runway = 23:
Location:                                          A    C     A & C
Percent Error for Takeoffs                        89%  96%   89%
Percent Error for Takeoffs with False Positives   81%  96%   81%

Favored Runway = 28:
Location:                                          A    C     A & C
Percent Error for Takeoffs                        96%  100%  96%
Percent Error for Takeoffs with False Positives   96%  100%  96%

Favored Runway = 28:
Location:                                          B    C     B & C
Percent Error for Takeoffs                        90%  100%  90%
Percent Error for Takeoffs with False Positives   90%  100%  90%

Favored Runway = 5:
Location:                                          B    C     B & C
Percent Error for Takeoffs                        97%  61%   58%
Percent Error for Takeoffs with False Positives   95%  59%   54%

Favored Runway = 10:
Location:                                          B    C     B & C
Percent Error for Takeoffs                        95%  83%   80%
Percent Error for Takeoffs with False Positives   93%  80%   75%

Prepared by: Woolpert, Inc.

Security/Trail Camera

Principle(s) of Operation and Intended Use

The S/TC tested was a digital camera with a passive infrared (PIR) motion detector and a nighttime infrared illuminator, all contained in a weather-resistant case. (See Figure 3-6.) The particular camera tested was originally designed for covert operations, such as security and wildlife study. The system operated on 12 AA batteries or from a 12-volt power pack charged by a solar panel. Images were stored on an internal memory card. An optional additional solar panel was added for long-term use in cold weather, and a cable box was added to the system to make it less conspicuous; neither was required. The motion detector consisted of two horizontal detection bands, each divided into six zones. The manual indicated the camera would capture movement up to 100 feet away in daylight and 70 feet at night.

These claims were either met or exceeded during our evaluation.

Since the camera works on passive infrared motion detection, it is designed to record any heat-based movement. Accordingly, it works well in detecting wildlife, one of the uses for which it was initially designed. Given the FAA requirements for wildlife hazard assessments and mitigation programs, cameras like these may be useful for wildlife monitoring on an airport. (See Figure 3-7.)

Computer System Requirements

A typical computer with a memory card reader is needed to access the images on the S/TC. Images can be uploaded to a computer with any standard image viewing software or with the viewing software provided with the equipment. The S/TC could accept memory cards up to 32 GB.

Data Provided

The S/TC evaluated produced images with a resolution of 1080p, or 3.1 MP, with a .jpg file extension. The images included a date, time, temperature, and moon phase stamp as well as the image number in the series (e.g., 1 of 3). The images could be viewed with any standard image viewing software. The viewing software that came with the camera provided an easy way to catalog photos by camera location and aircraft type. The database could then be searched for all aircraft of a certain type, make, or model. The database was limited only by how much information the user tagged to each photo. Although the software did appear useful for cataloging, there did not appear to be a tally feature, so the aircraft had to be counted manually. The software did allow for the creation of a video from the pictures, which, in addition to appearing impressive, can provide a quick overview of the type of traffic an airport experiences. Figure 3-8a and Figure 3-8b show some of the images captured by the S/TC.

Table 3-30. Mower evaluation—SMAC side-by-side results.

False positives recorded:                SMAC#1   SMAC#2
Mower 60 ft. in front of counter         0        0
Mower 55 ft. in front of counter         0        0
Mower 50 ft. in front of counter         0        0
Mower 45 ft. in front of counter         0        0
Mower 40 ft. in front of counter         0        0
Mower 35 ft. in front of counter         0        0
Mower 30 ft. in front of counter         0        0
Mower 25 ft. in front of counter         0        0
Mower 20 ft. in front of counter         0        0
Mower 15 ft. in front of counter         0        0
Mower 10 ft. in front of counter         1        0
Mower 5 ft. in front of counter          1        1
Mower 5 ft. behind counter               0        1
Mower 10 ft. behind counter              0        0
Mower 15 ft. behind counter              0        0
Total False Positives                    2        2

Prepared by: Woolpert, Inc.

Figure 3-6. S/TC.
Figure 3-7. Wildlife caught on the S/TC.
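Since the viewing software offers no tally feature and each motion trigger produces several photos, counting had to be done by hand; a small script can at least collapse each multi-photo burst into a single detection before counting. The sketch below assumes the image timestamps have already been extracted from the date/time stamps; the three-minute burst window is an assumption to be tuned per site, not a manufacturer value.

    from datetime import datetime, timedelta

    def tally_detections(timestamps, gap_s=180):
        # Count motion-detection events from S/TC photo timestamps.
        # The camera takes several pictures per trigger so a tail
        # number is readable in at least one, so photos closer
        # together than `gap_s` seconds are collapsed into one event.
        # The gap is a per-site guess, not a manufacturer value.
        events, last = 0, None
        for t in sorted(timestamps):
            if last is None or (t - last) > timedelta(seconds=gap_s):
                events += 1
            last = t
        return events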

Ease of Portability

The S/TC was completely portable. The camera could be mounted on a stake or inside the optional cable box. The solar panel did not come with a mounting pole, so one had to be fabricated. Outside of fabricating that pole, both were easy to deploy and relocate where needed. (See Figure 3-6.) The camera weighed approximately two pounds; the solar panel and sealed battery weighed approximately 20 pounds.

Durability

The S/TC came housed in a sturdy all-weather case. The unit continued to work even when dropped and, with the addition of the solar panel, performed continuously for the duration of the study.

Ease of Installation and Airport Impacts

As indicated previously, the FAA determined that any equipment installation on the airport, even if temporary and outside the RSA, required FAA approval (through the filing of FAA Form 7460) in order to comply with Title 14 CFR Part 77. In the case of this research, a Form 7460 airspace determination was needed for each location where the S/TC was evaluated. Since the S/TC is portable, self-contained, and simply sticks into the ground on a short stake, there were no permanent installation requirements and no impact to the airport infrastructure.

Since there was no way to monitor the runway without stationing cameras to adequately cover every location where an aircraft might touch down or take off, the cameras were located on the taxiways/taxilanes to capture aircraft entering or exiting the runway. This was the same concept used for pneumatic counters in the past. The S/TC cameras were evaluated at four different airports at varying distances from the taxiway/taxilane centerlines. To receive a non-objectionable airspace determination from the FAA on the Form 7460 submittal, the equipment had to be located outside the TSA of the airports where it was tested. Typical TSAs at non-towered airports range from 49 feet wide (24.5 feet each side of centerline) to 118 feet wide (59 feet each side of centerline), depending on the size of the aircraft that use the airport. (Note: FAA AC 150/5300-13A, Airport Design, provides the widths for all taxiway classifications in Chapter 4, Taxiway and Taxilane Design. The maximum distance at which the S/TC was tested was 300 feet, which is farther than its purported range of 100 feet.)

Several parameters had to be set on the camera before it could be left to monitor movement. Since the camera records images based on infrared movement, close attention was required when setting these parameters, or the camera might not capture pictures at night well enough to read the aircraft registration numbers. Additionally, the number of pictures taken per event had to be enough to ensure the aircraft registration number was in view in at least one of the pictures. The user manual instructed that the equipment be located within 100 feet of the desired subject in the daytime and 70 feet at night. The field of view was approximately 40°, and a walk test was encouraged with each installation to ensure the unit was working.

Maintenance and Operation

The S/TC required very little maintenance in the field once the solar panels were installed. The location where the unit was installed was maintained by the airport ground crew, so no additional mowing or grass removal was needed to keep the lens from being blocked. The solar panel had to be cleared of snow a few times to ensure the battery was charging.
Ease of Data Retrieval

Except for opening the cable box, data retrieval from the S/TC was fairly easy. While the units were inconspicuous inside the cable box, removing and replacing the box top to get to the camera was cumbersome. The data was stored on an SD card inside the camera, which was easily swapped for an empty card in the field and then uploaded to a computer once in the office, out of the elements. However, determining whether a particular target was captured required removing the card and reading it on a computer, which makes initial testing of a location time-consuming.

Performance in Various Weather and Lighting Conditions

The temperature reached a low of -1°F during the durability testing, and while the S/TC worked at these temperatures, the cameras functioned in the field for only about two weeks on lithium batteries at below-freezing temperatures.

Figure 3-8a. Typical image captured by the S/TC.

Figure 3-8b. Images captured by the S/TC.

With no rechargeable lithium AA batteries on the market, battery usage would be expensive in cold weather because each camera requires 12 batteries. Rechargeable NiCd batteries can likely be used in milder temperatures, but they do not last long in the cold. The manual states that NiMH batteries will operate at temperatures down to -20°F and lithium batteries down to -40°F. Temperatures never reached those lows, and lithium batteries were used initially; however, solar panels were eventually added, and the units worked continuously thereafter. Occasionally, snow had to be cleared from the panels to ensure charging.

The S/TC purports a range of 70 feet at night, which was easily achieved. While the camera's default settings generally produced good results, a few adjustments were made: a 15-second trigger quiet period reduced the number of times the same aircraft event was recorded, and the fast-shutter night mode provided clearer night images. (See Figure 3-9 for low-light images.) There were times when the aircraft registration number was not visible or the picture was not clear enough (due to low light, fog, rain, or snow) to determine the N-number. However, the aircraft type (e.g., SEP, jet) was almost always discernible in one of the three to five images taken per event (the user can set the number of images the camera captures once it is triggered).

Surprisingly, besides taxiing aircraft, the cameras also caught a few helicopters approaching and departing (see Figure 3-8b), even though the helicopters appeared to be beyond the sensor's maximum distance. However, like the VID equipment, the cameras are not capable of capturing touch-and-goes without stationing some undetermined number of cameras to cover every location where an aircraft could touch down. Accordingly, the cameras have to be strategically located along taxiways, and some type of touch-and-go factor would have to be determined and added to the total to achieve an accurate estimate (a sketch of this adjustment follows the accuracy tables below). The night evaluation resulted in 95% accuracy for taxis recorded.

Service Contract Requirements

The S/TC was a fully functioning, standalone unit that did not require any outside support. Once purchased, the unit could be operated without any type of service contract.

Cost Per Unit

The cost of the S/TC will vary depending on when it is purchased, since the prices of its component parts vary with their respective markets. At the time of acquisition, the units cost $550 each. The cable box was $150, and the additional solar panel was $300.

Accuracy Assessment

The S/TC was evaluated at four different airports. The longest study, with the most sampling, occurred at TYQ, where the cameras were located approximately 70 feet from the taxiway centerline. The test included visual observation over 13 days spanning seven months. This study used two cameras purchased from the manufacturer, installed at the two locations aircraft must pass to enter or exit the airport terminal area. (Note: For an airport with more entry and exit points, additional cameras would be needed.) Table 3-31 shows the overall results of this study.
The user manual for the S/TC tested indicated that an object with a temperature different from the ambient temperature had to move into, or out of, at least one of six motion-detection zones in one of two detection bands, and that a walk test should be performed. While the walk test worked with the north-facing camera, that camera did not always catch aircraft.

Figure 3-9. S/TC images.

The instruction manual also indicated that as the ambient temperature approaches the temperature of the subject, the strength of the signal decreases and the range of the camera is reduced. Another point made in the manual was that a subject moving very slowly will not always trigger the motion sensor. The cameras generally worked well when set on high sensitivity, except for SEP aircraft at the north-facing camera. The north-facing camera was on the taxiway that led directly to the end of Runway 18; SEP aircraft often taxied at lower speeds past this camera as they entered the runway, as compared to the south camera, which was on the taxiway that led to the parallel taxiway. Misses at the north camera appeared to increase at temperatures around 70°F and 80°F versus those at 30°F and 40°F. The south camera did not have this problem; in fact, it did not miss a single target. Finally, with 21% of TYQ's operations during the study period coming from touch-and-goes, the percent error for recording operations for the equipment as a whole would increase by this amount.

A short-term, one-day study was also performed on the S/TC at EYE. Units were located at 75 feet and 100 feet from one taxiway centerline, and all aircraft that passed that location were observed and recorded. The two cameras detected every aircraft that passed in front of them. (See Table 3-32.)

A short-term study was also performed on the S/TC at I42. Units were located at 35 feet, 50 feet, 75 feet, and 100 feet from one taxiway centerline at the only entrance to and from the terminal area, and all aircraft that passed that location were observed and recorded. The cameras detected every aircraft that passed in front of them at this airport as well. (See Table 3-33.)

A short-term study was also performed on the S/TC at LAF, where the camera was attached to the terminal building to track every aircraft that taxied across the apron. While the apron is approximately 300 feet wide, most of the aircraft taxied in the middle third and were detected by the camera. At this location, the S/TC missed 19% of the aircraft that taxied in front of it. (See Table 3-34.)

Table 3-31. Indianapolis Executive Airport.

S/TC Percent Error Results
Location A - North Facing: Percent Error for Taxis Recorded   43%
Location B - South Facing: Percent Error for Taxis Recorded    0%

Missed Taxis Analysis
Single Engine Piston (SEP)        92.5%
Multi-Engine Turbo Prop (METP)     5.0%
Gyro                               2.5%

Note: The north-facing camera was located on the taxiway that leads directly to the end of Runway 18. SEP aircraft often taxied at slower speeds past this camera because it led directly to the runway end.

Touch-and-Go Activity: 15% of the activity observed during the study consisted of touch-and-goes, which are not recorded by the S/TC. Therefore, the error rate for actual operations would be 15% greater than the taxis-recorded results.

Prepared by: Woolpert, Inc.

Table 3-32. Eagle Creek Airport—runway 3-21.

S/TC Percent Error Results from Side-by-Side Evaluation at Varying Distances from Taxiway Centerline
Location A = 75 ft. from Taxiway Centerline: Percent Error for Taxis Recorded    0%
Location B = 100 ft. from Taxiway Centerline: Percent Error for Taxis Recorded   0%

Prepared by: Woolpert, Inc.

Table 3-33. Paoli Municipal Airport—single runway.

S/TC Camera Results from Side-by-Side Evaluation at Varying Distances from Taxiway Centerline
Location A = 35 ft. from Taxiway Centerline: Percent Error for Taxis Recorded    0%
Location B = 50 ft. from Taxiway Centerline: Percent Error for Taxis Recorded    0%
Location C = 75 ft. from Taxiway Centerline: Percent Error for Taxis Recorded    0%
Location D = 100 ft. from Taxiway Centerline: Percent Error for Taxis Recorded   0%

Touch-and-Go Activity: 18% of the activity observed during this study consisted of touch-and-goes, which are not recorded by the S/TC.

Prepared by: Woolpert, Inc.

Table 3-34. Purdue University Airport.

S/TC Percent Error Results across 300-ft.-Wide Apron
Attached to Terminal Building: Percent Error for Taxis Recorded   19%

Note: This location is 200 feet wider than the purported detection range of the S/TC. While many aircraft taxied outside the reported range, several were still detected by the camera.

Prepared by: Woolpert, Inc.
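The arithmetic behind these tables is simple enough to script. The sketch below is illustrative only; it assumes percent error is the share of visually observed taxis the equipment failed to record, and that the touch-and-go share is added to the taxi-based error rate, as the table notes describe:

# Illustrative sketch of the error arithmetic in the accuracy tables.
# Assumption: percent error = missed taxis / observed taxis, with the
# touch-and-go share added on top, per the table notes.

def taxi_percent_error(observed: int, recorded: int) -> float:
    # Share of visually observed taxi events the equipment missed.
    return (observed - recorded) / observed

def operations_percent_error(taxi_error: float, touch_and_go_share: float) -> float:
    # Approximate error for total operations, since taxiway-based
    # equipment never records touch-and-goes.
    return taxi_error + touch_and_go_share

# Using the Table 3-31 figures for TYQ (reported rates, not raw counts):
north_error = 0.43   # north-facing camera, taxis recorded
tg_share = 0.15      # touch-and-go share reported in Table 3-31
print(f"{operations_percent_error(north_error, tg_share):.0%}")  # 58%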

VID System/ADS-B Transponder Receiver

Principle(s) of Operation and Intended Use

The VID system tested was originally developed to automate the billing process for landing fees; spinoff uses of the VID are aircraft traffic counting and airport security. The VID system tested combines electronic-based tracking and advanced video tracking. (See Figure 3-10.) One source of the electronic tracking is the FAA's near real-time traffic data from the National Airspace System (NAS), known as the Aircraft Situation Display to Industry (ASDI); the data includes information on aircraft operating under radar control. The video tracking data comes from VID equipment (i.e., cameras) installed at a particular airport. For the system tested, the VID software and aircraft sensor systems worked together to provide a more comprehensive depiction of airport activity than either technology alone would. The ASDI feed provided detailed aircraft data, or the VID equipment captured an image of the aircraft registration number as it passed the camera and the service provider analyzed the image. From both feeds, the VID system service provider delivered detailed information about the aircraft via a web portal. To augment the capability of its video image detection system, the VID system tested also included a simple transponder receiver programmed to detect ADS-B and Mode S transmissions.

Figure 3-10. VID system.
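The value of the system lies in fusing the two feeds. As a hypothetical illustration of the concept (not the vendor's actual algorithm), the sketch below merges camera and ASDI detections and treats reports of the same registration number within a short window as a single movement:

from datetime import datetime, timedelta

def merge_detections(camera_events, asdi_events, window_minutes=5):
    # Each event is a (timestamp, registration) tuple. Reports of the
    # same tail number within the window are treated as one movement,
    # so an aircraft seen by both the camera and the ASDI feed is not
    # double-counted. Illustrative only; the tested system's fusion
    # logic is the service provider's own.
    window = timedelta(minutes=window_minutes)
    last_seen = {}
    merged = []
    for ts, reg in sorted(camera_events + asdi_events):
        if reg in last_seen and ts - last_seen[reg] < window:
            last_seen[reg] = ts
            continue  # duplicate report of the same movement
        last_seen[reg] = ts
        merged.append((ts, reg))
    return merged

# Example: one aircraft reported by both feeds a minute apart counts once.
cam = [(datetime(2014, 5, 1, 9, 0), "N12345")]
asdi = [(datetime(2014, 5, 1, 9, 1), "N12345")]
print(len(merge_detections(cam, asdi)))  # 1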

Computer System Requirements

A typical computer with Internet access was used for viewing the VID web portal and downloading data.

Data Provided

Detailed information about each aircraft was made available on a web portal, including the date, time, aircraft registration number, call sign if applicable, activity type (e.g., arrival, departure), aircraft make and model designator (e.g., LJ40 for Learjet 40), maximum landing weight, runway design group, wingspan group, aircraft type (e.g., jet, piston), operator information (e.g., contact name, telephone number, address), and source of the data (e.g., camera, ASDI, or transponder receiver). Detailed activity reports could be produced, including all activity by a particular aircraft, all arrivals, all departures, and so on.

Ease of Portability

The VID system tested was not permanently installed, but it was also not portable. Professional installation by the service provider was required.

Durability

The VID camera system was housed in a sturdy all-weather casing. The unit worked continuously for the duration of the study. The transponder receiver failed; when it did work, it provided very little useful information that was not already available from another source.

Ease of Installation and Airport Impacts

Since there was no way to monitor the runway without stationing a large number of cameras along its length to adequately cover every location where an aircraft might touch down, the more common practice is to locate cameras on the taxiways/taxilanes to capture aircraft entering or exiting the runway. As indicated previously, the FAA determined that any equipment installation on the airport, even if temporary and outside the TSA, required FAA approval (through the filing of FAA Form 7460) in order to comply with Title 14 CFR Part 77. In the case of this research, a Form 7460 airspace determination was needed for the locations where the VID was installed. To receive a non-objectionable airspace determination from the FAA on the Form 7460 submittal, the equipment had to be located outside the TSA of the airport where it was evaluated. Typical TSAs at non-towered airports range from 49 feet wide (24.5 feet each side of taxiway centerline) to 118 feet wide (59 feet each side of taxiway centerline), depending on the size of the aircraft that use the airport. (Note: FAA AC 150/5300-13A, Airport Design, provides the widths for all taxiway classifications in Chapter 4, Taxiway and Taxilane Design. The maximum distance at which the VID equipment was studied was approximately 100 feet.) Two units were located outside the TYQ taxiway safety areas at the only two taxiway entrance points to the parallel taxiway; all aircraft entering or exiting the runway for takeoff or landing must pass one of these points.

The VID equipment itself (apart from the web portal and antenna) was self-contained and freestanding, with no external power supply needed. No digging was required, so there was no risk of hitting lighting or navigational aid (NAVAID) cabling. The system included cameras (each with two batteries, a solar panel, and a night illumination source) and an Ethernet bridge antenna to communicate with the cameras. In the field, the cameras were located outside the taxiway object free areas and pointed toward the taxiways exiting the runway and the parallel taxiway.
The cameras initially had trouble distinguishing aircraft movement converging on and diverging from the camera, so the viewing fields were adjusted so that aircraft passed through them from left to right and vice versa.

Maintenance and Operation

The VID system required no maintenance from the user, and none was required from the vendor during the evaluation program. As stated earlier, the transponder receiver equipment failed during the study.

Ease of Data Retrieval

The web portal to the activity data was straightforward and easy to navigate. Reports and data could be easily downloaded as CSV files that could be imported into an electronic database.
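Because the portal exports plain CSV, moving the activity reports into a local database takes only a few lines. A minimal sketch, assuming hypothetical column headers (the portal's actual export headers are not documented here):

import csv
import sqlite3

def load_vid_export(csv_path, db_path="airport_activity.db"):
    # Load a VID portal CSV export into a SQLite table for querying.
    # The column names below are assumptions for illustration only.
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS activity
                   (date TEXT, time TEXT, registration TEXT,
                    activity_type TEXT, aircraft_type TEXT)""")
    with open(csv_path, newline="") as f:
        rows = [(r["date"], r["time"], r["registration"],
                 r["activity_type"], r["aircraft_type"])
                for r in csv.DictReader(f)]
    con.executemany("INSERT INTO activity VALUES (?, ?, ?, ?, ?)", rows)
    con.commit()
    con.close()

# Example query after loading: all departures by a given aircraft.
# SELECT date, time FROM activity
#   WHERE registration = 'N12345' AND activity_type = 'departure';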

Performance in Various Weather and Lighting Conditions

The temperature reached a low of -1°F during the durability testing, and the VID performed without impact. There were times when the aircraft registration number was not visible or the picture was not clear enough for the VID service provider to return the detailed aircraft information; however, the type of aircraft (e.g., SEP, jet) was still provided. Figure 3-11 shows typical images recorded by the VID. The night evaluation resulted in 94% accuracy for taxis recorded.

The ADS-B transponder receiver did not perform as expected. During actual visual observations, the transponder receiver had a 100% error rate. The unit did, however, record some aircraft: over the time the receiver resided at the airport, a total of 20 events were recorded from five different aircraft, yet these same aircraft operated at the airport 129 times over the course of the study. When the VID service provider first quoted the equipment, the company had a supported ADS-B transponder receiver product; by the time of installation, however, the company was no longer using the transponder equipment because of the low share of the U.S. aircraft fleet actually equipped with ADS-B. Because of this and other technical and software algorithm problems, the transponder receiver never performed well and provided no information that was not already provided by the VID or the ASDI data.

Service Contract Requirements

The VID system required a service contract. The equipment was not purchased outright but was installed for an initial deployment price, and the information was provided through the web portal under a contract that typically covered a year. If the contract is not renewed, the equipment is removed.

Cost Per Unit

The cost of the VID system tested over the seven months of this study, which included two cameras and a transponder receiver unit, totaled $36,000. Without the transponder receiver, the seven-month cost was $31,000. The cost of the service will vary from airport to airport depending on the airport's configuration and the number of cameras needed to adequately cover runway entrance and exit points.

Figure 3-11. VID images.

ADS-B Technology

By January 1, 2020, all aircraft must be equipped with ADS-B Out technology to operate in the following airspace:

1. Class A, B, and C airspace.
2. Class E airspace within the 48 contiguous states and the District of Columbia at and above 10,000 feet mean sea level (MSL), excluding the airspace at and below 2,500 feet above the surface.
3. Class E airspace at and above 3,000 feet MSL over the Gulf of Mexico from the coastline of the United States out to 12 nautical miles.
4. Around those airports identified in 14 CFR Part 91, Appendix D. (FAA 2014)

According to the FAA Aerospace Forecast Fiscal Years 2013-2033, the U.S. fleet is made up of 7,024 commercial aircraft and 217,533 general aviation aircraft. As of February 24, 2014, only 3,391 aircraft had ADS-B Out (FAA), which equates to less than 2% of the U.S. fleet. With this low equipage rate, ADS-B is not a viable solution for counting aircraft at non-towered airports at this time, but it may prove useful closer to the 2020 deadline. (Lee-Lopez 2014)
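The sidebar's equipage claim, and the receiver's observed performance, can both be checked directly from the figures quoted above:

# Fleet equipage per the sidebar (FAA Aerospace Forecast FY 2013-2033;
# ADS-B Out count as of February 24, 2014).
fleet = 7_024 + 217_533   # commercial + general aviation aircraft
equipped = 3_391          # aircraft with ADS-B Out
print(f"ADS-B Out equipage: {equipped / fleet:.1%}")  # about 1.5%, under 2%

# Transponder receiver performance observed in this study: 20 events
# recorded from five aircraft that actually operated 129 times.
print(f"Receiver capture rate: {20 / 129:.1%}")       # about 15.5%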

Accuracy Assessment

The VID equipment study at TYQ included visual observations over 16 days spanning seven months. Like the S/TC, the VID equipment was located at the only two entrance and exit points from the terminal area. (See Appendix D for a diagram of the equipment locations.) Table 3-35 shows the overall results of this study.

Like the S/TC, the VID camera facing north had a greater error rate, which may be a result of the same factors. Also like the S/TC, the VID equipment cannot count touch-and-goes. With 21% of TYQ's operations during the study period coming from touch-and-goes, the percent error for the equipment as a whole would increase by this amount. While the ADS-B transponder receiver had the potential to count touch-and-goes if the correct computer algorithms were programmed, the unit did not perform well because of the very low number of aircraft equipped with ADS-B technology and because of technical issues with the equipment and software. When the receiver and its associated algorithms did work, they identified only five aircraft that were not already identified by the cameras, and during visual observations the receiver had a 100% error rate for operations recorded. As with all the other equipment, the most frequently missed aircraft were SEP aircraft, but they were also the most prevalent operations at the airport.

Table 3-35. Indianapolis Executive Airport.

VID Percent Error Results
North Facing: Percent Error for Taxis Recorded   17%
South Facing: Percent Error for Taxis Recorded   10%

Missed Taxis Analysis
Single Engine Piston (SEP)        78.3%
Multi-Engine Turbo Prop (METP)     8.7%
Jet (J)                           13.0%

Note: The north-facing camera was located on the taxiway that leads directly to the end of Runway 18. SEP aircraft often taxied at higher speeds past this camera because they did not have to slow down for a turn, as compared to the south camera.

ASDI Feed
Operations reported on the ASDI feed that were not on camera: 5
Non-events recorded by ASDI (aircraft not detected visually): 2

Touch-and-Go Activity: 21% of the activity observed during the study consisted of touch-and-goes, which are not recorded by the VID or ASDI. The error rate for actual operations would be 21% greater than the taxis-recorded results.

Transponder Receiver: Percent Error for Operations Recorded by Transponder Receiver   100%

Prepared by: Woolpert, Inc.
