
Advanced Ground Vehicle Technologies for Airside Operations (2020)

Chapter: Appendix A - Enabling Technologies

Suggested Citation:"Appendix A - Enabling Technologies." National Academies of Sciences, Engineering, and Medicine. 2020. Advanced Ground Vehicle Technologies for Airside Operations. Washington, DC: The National Academies Press. doi: 10.17226/26017.


This Appendix includes a discussion of the enabling technologies that may be used for AGVT, followed by a description of the technologies that will be included in the AGVT applications that are evaluated in greater detail.

Evaluation of Enabling Technologies

The proposed AGVT rely on information provided by the sensing and enabling technologies described below. Also discussed below are enabling technologies that may not be explicitly included in the applications evaluated but may play an important role in future airside AGVT, including existing airport technologies such as Airport Surface Detection Equipment, Model X (ASDE-X), Airport Surface Surveillance Capability (ASSC), the Wide Area Augmentation System (WAAS), and Driver Enhanced Vision Systems (DEVS).

LiDAR. LiDAR (light detection and ranging) is a sensor that measures the distance to a specific point by illuminating the point with modulated laser pulses and measuring the reflected pulses in terms of both timing and phase shift in the frequency domain. Multiple points can be measured simultaneously by aligning an array of sensor pixels at different angles. Depending on whether it involves a rotating part, LiDAR is divided into two types, mechanical and solid-state, which represent different strengths and readiness levels. Mechanical LiDAR arranges sensor pixels vertically and spins them around a fixed axis, such that each pixel forms a high-resolution scanning plane covering the whole circumference. Scanning planes are parallel with each other because the pixels are mounted on the same axis. Due to limitations of pixel size, calibration difficulty, and data bandwidth, the number of scanning planes (which sets the vertical resolution) usually does not exceed 128, although mechanical LiDAR can obtain more than four thousand sample points around the scanning plane.
As a result, the vertical resolution of mechanical LiDAR is significantly sparser than the horizontal resolution, which is a disadvantage for airport applications since airplane wings span horizontally and could potentially be missed if they fall between two scanning planes. Solid-state LiDAR uses a phased array of pixels instead; pixels are aligned into a two-dimensional (2D) array and interference is used to achieve a directional beam. Solid-state LiDAR can achieve point-wise scanning in a 2D field of view, with more uniform resolution in both the vertical and horizontal directions. Although mechanical LiDAR is weak with respect to vertical resolution, it is more technologically ready than solid-state devices. Because the laser used in LiDAR has a shorter wavelength than the radio waves used in radar, LiDAR can obtain sub-centimeter resolution, which is more favorable for high-definition 3D mapping and localization.

Millimeter Wave Radar. Millimeter wave radar is a sensor that implements a 2D phased array of emitters to beam a pointed pulse of modulated microwave with a wavelength in the millimeter range. The system then receives and analyzes the reflections from obstacles in that direction. Timing and phase shift in the frequency domain provide information about the distance from the obstacle to the sensor. Two wavelength standards are commonly used today, with frequencies of 24 GHz and 77 GHz. The 24 GHz radar has a relatively low price but is large in size and has low resolution; its detection range is relatively small, typically within 65 ft (20 m). The 77 GHz radar (76 to 81 GHz range) can be further divided into two groups: mid- and long-range radar can detect objects from approximately 100 ft up to 650 ft (30 m to 200 m) away from the sensor, while short- and mid-range radars can measure distances up to 100 ft (30 m) (FTA, 2018). The resolution is much higher for 77 GHz radar systems (4 cm for 77 GHz radar versus 75 cm for 24 GHz radar), because the smallest distinguishable dimension is proportional to the working wavelength. Additionally, since each antenna element should be separated by at least half the wavelength, for the same detection power the smaller dimensional profile of a 77 GHz radar means higher integration and easier deployment. Future systems and the systems recommended for deployment will be 77 GHz, since the 24 GHz standard is gradually being phased out by the Federal Communications Commission (FCC) and the European Telecommunications Standards Institute (ETSI) (Shaffer, 2017).

Inertial Measurement Unit (IMU). An inertial measurement unit (IMU) is a combination of an accelerometer and a gyroscope (gyro). It is a motion sensor rigidly mounted to the vehicle body to measure the linear acceleration and angular velocity of the vehicle. The acceleration information can be used to determine the vehicle's velocity and position relative to the initial location and velocity.
Similarly, integration of the angular velocity provides orientation information for the vehicle. The accelerometer and gyro information can also be fused for smaller error, and techniques for sensor fusion, such as Kalman filtering, are reliable and have been implemented in the aerospace industry. Because it works according to Newton's laws, an IMU does not rely on any external perception and is therefore not susceptible to environmental interference. These sensors are available as integrated circuits or modules, require low power, and are easy to install. Notably, the measurement of position and orientation involves integration of sensor data, which means the estimate can drift as errors propagate over time. Therefore, other sensors independent of the IMU, such as GPS and visual odometry (e.g., counting lane markings and correcting the vehicle's position based on the coordinates of a known marker), are required to correct such measurement error. Inexpensive consumer IMUs require correction every second, while commercial-grade IMUs can maintain tolerable error for up to half a minute. Aerospace- and military-grade IMUs have outstanding accuracy but are expensive, and it may be difficult to maintain an adequate supply.

Encoders. An encoder is a sensor that measures the rotation of an axle. An encoder can be mounted on the wheel axle of the vehicle and used as an odometer, since it measures how many revolutions the wheel has rotated since its original location. By adding another encoder at the steering wheel, and relating the steering radius to the steering wheel rotation, it is possible to generate the vehicle path by propagating the odometer path with the curvature derived from the steering wheel input. However, information derived from the encoders cannot detect slipping of the wheel, nor can it recognize the vehicle being lifted, towed, or otherwise moved by another carrier. As a result, the system needs to be initialized periodically using fused information from other sensors, such as the IMU and GPS, to correct for the error introduced by tire skid. Commercial vehicles currently have standard wheel speed sensors for each wheel, based on a wheel encoder or vortex generator, which output a voltage proportional to the angular speed of the wheel axle. The encoders can be used to measure and verify vehicle motion (speed and position). In sensor fusion generally, the more sensors or sources of information obtained, the more robust and accurate the final results will usually be.
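As a rough illustration of how an odometer path can be propagated with curvature derived from steering input, the sketch below applies a simple bicycle model. The function name, tick counts, and geometry parameters are illustrative assumptions, not a real wheel-speed sensor interface, and the no-slip assumption is exactly the limitation noted above.

```python
import math

def propagate_path(wheel_radius_m, wheelbase_m, ticks_per_rev, samples):
    """Dead-reckon vehicle pose from wheel-encoder ticks and steering angle.

    samples: list of (encoder_ticks, steering_angle_rad) per time step.
    Returns a list of (x, y, heading) poses starting from the origin.
    Assumes a simple bicycle model and no wheel slip.
    """
    x = y = heading = 0.0
    poses = [(x, y, heading)]
    circumference = 2.0 * math.pi * wheel_radius_m
    for ticks, steer in samples:
        ds = (ticks / ticks_per_rev) * circumference   # distance travelled
        heading += ds * math.tan(steer) / wheelbase_m  # curvature from steering
        x += ds * math.cos(heading)
        y += ds * math.sin(heading)
        poses.append((x, y, heading))
    return poses
```

For example, two steps of one full wheel revolution each with the steering centered advance the vehicle straight ahead by two wheel circumferences; any nonzero steering angle curves the path.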

Global Positioning System (GPS). The global positioning system (GPS) sensor uses satellites to localize itself in the world frame, providing longitude, latitude, altitude, and velocity. The position data are within the global frame and do not drift with time. Accuracy is on the order of five meters for commercial-grade sensors; however, it may be compromised by obstacles overhead (it cannot work well indoors) and on the ramp, where it may be compromised by reflections from the terminal building. To further increase accuracy, calibrated static ground localizers can be added (differential GPS), which boosts GPS accuracy to the centimeter level. Differential GPS requires a static ground station set up with real-time kinematics (RTK) technology; coverage for RTK is much larger than the area of most airports, with approximately one-inch accuracy within a six-mile radius. Differential GPS technology may share similar components with the Wide Area Augmentation System (WAAS) or Ground-Based Augmentation System (GBAS), which also use ground differential stations to correct satellite localization errors.

Camera Systems On Board Vehicle. A series of cameras on board the vehicle is used for capturing the surrounding environment and to support L2 and L3 automation. A front-facing camera is usually used for applications such as lane departure warning, which uses a filter to extract the lane line markings and then measures the relative position of the lane line with the camera. The flow chart for a lane departure warning system is illustrated in Figure A-1 (Wang et al., 2004). A hardware-accelerated implementation of the activities in this flow chart has been demonstrated (Pankiewicz et al., 2008), which means the algorithm is suitable for real-time implementation and a specialized hardware accelerator can be designed for the task.
This could relieve the computational burden on the on-board general-purpose computer and make it feasible to use a low-power plug-in solution to improve functionality, which enhances the capability to retrofit new capabilities into existing vehicles. For example, if the main in-vehicle on-board central processing unit (CPU) is inadequate to perform the real-time computations required for lane detection, a hardware-accelerated implementation allows hardware to be plugged in to provide additional computational capability, facilitating the retrofit of lane detection safety assist capabilities onto lower-end and older CPUs.

Stereo cameras are a set of cameras mounted on the vehicle frame that mimic the distance-sensing mechanism of animal eyes. The computer uses the images from the two cameras and computes the parallax, i.e., the position of an object in the different camera views compared to that of a virtual object at an infinite distance. A set of computer vision algorithms then computes the distance of the object from the camera array. This requires distinct markers within the image to successfully locate the corresponding points. Application of this technology is demonstrated in commercial products such as Tesla Autopilot, which combines three front-facing cameras with different focal lengths to provide both wide coverage and distance measurement. Though higher accuracy is theoretically ensured, stereo sensing also calls for a more complicated calibration process and more powerful computation hardware, which increases the price to some extent. At the other end of the spectrum, in some applications the distance of an object can be estimated based on a learning process with a carefully calibrated mono-camera system rather than a stereo camera system. A single camera may be used to enable adaptive cruise functionality.
Figure A-1. Flow chart for lane departure warning: camera input of road; camera calibration and perspective transform; region of interest extraction from the camera image; greyscale conversion, Canny edge detection, and Hough transform; determination of lane line edges. Based on information presented by Wang et al., 2004.
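For an ideal rectified stereo pair, the parallax computation described above reduces to the textbook relation depth = focal length × baseline / disparity. The sketch below is a minimal illustration under that assumption (pixel-perfect feature matching, no lens distortion); the parameter names and values are illustrative, not drawn from the report.

```python
def stereo_depth(focal_px, baseline_m, x_left_px, x_right_px):
    """Estimate distance to a matched feature from stereo parallax.

    focal_px: camera focal length in pixels (from calibration)
    baseline_m: distance between the two camera centers
    x_left_px / x_right_px: horizontal pixel position of the same feature
        in the left and right images.
    Returns depth in meters; a feature "at infinity" has zero disparity.
    """
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("no measurable parallax; object too far or mismatched")
    return focal_px * baseline_m / disparity
```

With a hypothetical 700-pixel focal length and 12 cm baseline, a 10-pixel disparity corresponds to a depth of 8.4 m, which also shows why distant objects (small disparities) are measured less accurately.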

Modeling the AGVT Operating Environment. The perception of a comprehensive environmental model for an automated vehicle in motion breaks down into four main challenges (Mobileye Corp., 2018):

• Free space: determining the drivable area and its delimiters. In the airside environment, this would also include designation of areas with additional operating rules or restrictions, such as the movement area or runway threshold, which require authorization from ground control or air traffic control, collectively referenced as Air Traffic Control (ATC) for simplicity.
• Driving Paths: determining the geometry of the routes within the drivable area.
• Moving Objects: recognizing all aircraft and ground vehicles within the drivable area or path.
• Scene Semantics: correctly interpreting the vast vocabulary of visual cues (explicit and implicit) in the environment. In the roadway environment, this would include traffic lights and their color, traffic signs, turn indicators, pedestrian gaze direction, on-road markings, etc. In the airside environment, this would include airfield markings and signs, runway guard lights, and the presence and activity of other ground vehicles, aircraft, and ramp workers.

Higher automation levels require more precise perception of the information above, which requires more credible sensor output and thus allows less human interference and correction. Vision-based sensors include LiDAR, millimeter wave radar, and camera systems. For sensors such as LiDAR and millimeter wave radar, which are essentially arrays of distance-measuring devices, sensing of free space and moving objects can be achieved directly; however, due to a lack of color, it may be difficult to interpret scene semantics accurately. On the other hand, camera systems can capture color, which gives another perspective on how things are segmented in the real world.
However, a single camera may not measure distance as accurately as LiDAR and radar, and multiple cameras (stereo cameras) involve much higher data and computation bandwidth, which means the sensor types should be combined to achieve both high accuracy and low computational burden.

Vehicle Localization. Accurate vehicle location information is critical to ensure safe operation as the automation level increases. None of the vision-based sensors can measure the motion path of the vehicle directly. They either have to take the difference of two consecutive frames of measurement (the optical flow method) or have to pass the data through simultaneous localization and mapping (SLAM) algorithms to extract the motion of the sensor. Such algorithms are based on a statistical optimization approach, so errors may be relatively large, and it may be difficult to deal correctly with situations with few reference points (e.g., a straight and open road with continuous markings, an open field, or a large ramp area). Therefore, direct motion-based sensors should be incorporated to provide accurate information about the driving paths of the vehicle. These direct motion-based sensors include IMUs, encoders, and GPS.

• IMUs: While IMUs can provide acceleration and angular velocity information, the integration process may induce accumulated errors.
• Encoders: Encoders can provide position information relative to the starting point of the vehicle, described in wheel rotations; however, they may introduce high error when the vehicle slips, i.e., when wheel motion does not represent body motion.
• GPS: GPS can provide accurate correction of the IMU and encoder data by providing longitude, latitude, and altitude information; however, it may not provide speed information directly, and it will not work if the satellite signals are weak or if there are reflections, as would be expected near the terminal building. GPS cannot penetrate buildings and is susceptible to interference.
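The complementary roles of dead reckoning and GPS in the bullets above can be sketched as a one-dimensional predict-and-correct loop. A production system would use a Kalman filter that computes the blending gain from modeled sensor noise; the fixed gain and the one-dimensional state here are simplifying assumptions for illustration only.

```python
def fuse_position(dr_deltas, gps_fixes, gain=0.5):
    """Blend dead-reckoned motion with intermittent GPS fixes (1-D sketch).

    dr_deltas: per-step displacement from IMU/encoder dead reckoning.
    gps_fixes: per-step GPS position, or None when the signal is blocked
        (e.g., by reflections near the terminal building).
    gain: how strongly a GPS fix pulls the estimate (0 = ignore GPS,
        1 = trust GPS completely); a Kalman filter would derive this
        from the sensor noise models instead of fixing it.
    Returns the fused position estimate at each step.
    """
    est = 0.0
    track = []
    for delta, fix in zip(dr_deltas, gps_fixes):
        est += delta               # predict: propagate dead reckoning
        if fix is not None:        # correct: blend in the GPS fix
            est += gain * (fix - est)
        track.append(est)
    return track
```

While GPS is unavailable the estimate drifts with the dead reckoning; each valid fix pulls the estimate partway back toward the absolute position, bounding the accumulated error.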
System Updates. Some airfield and operating information that affects the operation of AGVT may change over time, such as the operation of a new aircraft model at the airport, changes to the airfield operating areas due to construction projects, or the designation of a new airport hot spot. These changes will require updates of the database that the AGVT utilize. For vehicles with networking capabilities, such as Dedicated Short Range Communications (DSRC) technology, broadband cellular network technology (3G, 4G, or 5G), or a combination of both in a wireless mesh network (WMN or mesh network), these updates can be accomplished with Over-the-Air (OTA) updates, in which case the update is applied to the information stored on the central server and each vehicle checks for and downloads the updated information on a regular (daily) basis. For vehicles without networking capabilities, information can be updated manually using a wired connection during regular vehicle maintenance.

Human Machine Interface (HMI) using Voice Recognition Software. There are a variety of voice recognition (also called speech recognition) application programming interfaces (APIs). These programs perform speech-to-text and text-to-speech tasks and have improved significantly since their inception decades ago. Examples include CMU Sphinx (offline, optimized for mobile devices) (Walker et al., 2004), the Google Speech API (internet connection required) (Google, 2019), and open source options such as Automotive Grade Linux (Romoff, 2019). Because much of the required airport communication involves standard terminology, such as the aviation standard phraseology used by ATC, standard aviation phrases such as "Cleared to," "Turn right/left," and "Hold," with correlating position keywords such as "Taxiway Bravo" and "Runway 23," would be sufficient for most tasks and to assure authorization for entry to and operation in the movement area and past the runway threshold or ILS boundary. The pipeline shown in Figure A-2 would be sufficient for speech recognition as a human machine interface for most AGVT airport applications.
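A minimal sketch of the standard-phraseology extraction step might look like the following. The phrase table and location pattern are illustrative assumptions built only from the example phrases quoted above, not the report's specification; a deployed system would implement the full ATC phraseology vocabulary.

```python
import re

# Hypothetical minimal grammar for the standard phrases quoted above.
COMMANDS = {
    "cleared to": "CLEARED",
    "turn right": "TURN_RIGHT",
    "turn left": "TURN_LEFT",
    "hold": "HOLD",
}
LOCATION = re.compile(r"\b(taxiway\s+\w+|runway\s+\d+[lrc]?)\b")

def parse_instruction(transcript):
    """Map a speech-to-text transcript to a (command, location) pair.

    Returns (None, None) when no standard phrase is recognized, in which
    case the vehicle should request a repeat or defer to a human.
    """
    text = transcript.lower()
    command = next(
        (tag for phrase, tag in COMMANDS.items()
         if re.search(r"\b" + re.escape(phrase) + r"\b", text)),
        None)
    match = LOCATION.search(text)
    return command, match.group(1) if match else None
```

The word-boundary matching prevents, for example, the "hold" inside "threshold" from being misread as a hold instruction, and the explicit failure value gives the downstream logic an unambiguous trigger for the acknowledge-by-playback step.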
In the long term, HMI with voice recognition may become less important, since the FAA is increasingly utilizing Data Comm, which provides instructions from controllers as text rather than voice communications. Both radio voice communications and Data Comm must be explicitly accepted, through either a read back or a button push, respectively. Data Comm is operational at 62 ATC towers, and deployment has been faster and less expensive than expected (FAA, 2019).

Operations during Emergency or Irregular Operations. Vehicle actions associated with emergency commands such as "Clear the Runway" would need to be recognized, and appropriate actions, which may vary depending on vehicle location, would need to be programmed. Furthermore, in many emergencies non-standard phraseology would be used. For vehicles with a safety driver, the human can take control and take the appropriate action. For vehicles without a safety driver, it would be appropriate to provide a means for remote human control of the vehicle (which may simply be for the vehicle to stop in place, or to move to a designated location and stop) if needed due to an emergency situation. This could be enabled by connecting the vehicle emergency mode to detection of a hardware emergency button in the ATC tower, ground operations, and other locations, as appropriate. It would also be possible to link into existing emergency systems, as appropriate, in which case a detection device can be connected to the line of the existing emergency button or telephone line and monitored based on its voltage level. When the button is pressed, the device will publish a message via the mesh network, as applicable, which will switch all networked vehicles into emergency mode (i.e., stop or leave the movement area). For vehicles not connected to the mesh network, the information could be transmitted via radio frequency commands at an approved frequency.
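The button-line monitoring described above can be sketched as an edge-triggered threshold check. The voltage polarity, threshold value, and message format below are assumptions for illustration; a real installation would have to match the electrical characteristics of the existing emergency button circuit.

```python
def watch_emergency_line(voltage_samples, threshold_v=1.0, publish=print):
    """Poll an emergency-button line and broadcast once per button press.

    voltage_samples: iterable of line-voltage readings; here a press is
        assumed to pull the monitored line below `threshold_v`.
    publish: callback that sends the stop message over the mesh network.
    Returns the number of distinct presses detected. Detection is
    edge-triggered so a held button does not flood the network with
    duplicate messages.
    """
    pressed = False
    presses = 0
    for v in voltage_samples:
        if v < threshold_v and not pressed:
            presses += 1
            publish({"type": "EMERGENCY_STOP", "action": "stop_or_clear"})
        pressed = v < threshold_v
    return presses
```

Networked vehicles receiving the published message would switch into emergency mode (stop in place or leave the movement area, depending on location), mirroring the behavior described above.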
In the event of a radio failure at an airport, ATC may use light signals to communicate with ground vehicles; different colored lights and sequences have different meanings: steady green (cleared to cross, proceed, or go), steady red (stop), flashing red (clear the runway or taxiway), flashing white (return to starting point on airport), and alternating red and green (use extreme caution). It would be necessary to confirm that the AGVT can recognize the signals and respond accordingly, or have provisions for a person to intervene, as appropriate. These signals may also be used for aircraft, so it is important that the AGVT can differentiate the intended recipient of the signal.

Figure A-2. Voice recognition process: input voice; speech to text; standard phraseology extraction; command composition; text to speech; acknowledgment by playback.

Modular Redundancy for Improved Reliability. For safety-critical systems, it is common to have redundant hardware to ensure the system is generating valid results. One method of achieving high reliability is called Triple Modular Redundancy (TMR) (Lyons and Vanderkulk, 1962), in which each functional sensor is required to have two additional backups. With three modules working together and feeding data into a microprocessor, software called a "voter" runs on the processor to check the data, setting the output to the majority result (i.e., if two out of three sensors have similar results but the remaining one has a far-differing reading, the output of the voter will be the average of the two similar results, allowing for sensor noise). This method can tolerate the failure of one of the three parallel components, which could be critical to the vehicle's normal operation, and it ensures data integrity while also warning the operator about the failing component.

Path Planning and Control Methods for Vehicles. Vehicles at L3 and above must have the capability to independently plan a path to move from Point A, which is usually the current position, to Point B, which is usually the target position for a specific task. The path planning algorithm will consider the vehicle's kinematic constraints (e.g., minimum steering radius, minimum clearance on left and right, maximum deceleration rate, etc.).
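The voter logic described under Modular Redundancy above can be sketched as follows. The tolerance-based agreement test is one plausible reading of "similar results," not a certified implementation, and a safety-critical voter would need far more rigorous treatment.

```python
def tmr_vote(a, b, c, tolerance):
    """Triple-modular-redundancy voter over three redundant sensor readings.

    Returns (value, failed_index): the average of the two readings that
    agree within `tolerance`, plus the index (0, 1, or 2) of the outlying
    sensor so the operator can be warned, or None when all three agree.
    Raises ValueError when no two readings agree (no majority means no
    trustworthy output).
    """
    readings = [a, b, c]
    pairs = [(0, 1), (0, 2), (1, 2)]
    agreeing = [(i, j) for i, j in pairs
                if abs(readings[i] - readings[j]) <= tolerance]
    if not agreeing:
        raise ValueError("no two sensors agree; cannot form a majority")
    if len(agreeing) == 3:
        return sum(readings) / 3.0, None        # all three healthy
    i, j = agreeing[0]
    failed = ({0, 1, 2} - {i, j}).pop()         # the outlier to report
    return (readings[i] + readings[j]) / 2.0, failed
```

Averaging the two agreeing readings (rather than picking one) smooths sensor noise, matching the behavior the text describes, while the returned index supports the warning to the operator about the failing component.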
Research is underway by vendors such as Team Eagle and partners to vary the vehicle deceleration and braking in real time based on the current vehicle speed, location, and the pavement friction as measured by vehicle sensors. For many situations (e.g., perimeter security, aircraft pushback or tug, snow removal, FOD management, and mowing), a set of standard paths may be predefined, which could simplify this technology to path following in normal situations. However, to further refine performance (e.g., to adapt the path or speed of a tug for different aircraft, or to adapt to atypical situations), it may be appropriate for the vehicle to have the capability for alternative or independent local path planning. Path planning will generate a series of via points (waypoints) for the vehicle to follow step by step to reach the target position and orientation. A controller will monitor the relative position of the waypoint to the vehicle and adjust the inputs to the actuators (steering and throttle) to ensure convergence of the vehicle's position to the waypoint. The most commonly used controller is a proportional-integral-derivative (PID) controller, which takes the position error, its derivative (speed), and its integral over time, multiplies these values by three separate parameters, and outputs the sum. Because a wheeled vehicle is non-holonomic, i.e., it cannot directly control its motion in the lateral direction without moving forward (since wheels typically do not skid sideways), there are special methodologies in the literature to deal with this coupling (e.g., Rösmann et al., 2017).

There are additional considerations for path planning for vehicles in a platoon. Path planning for platoons must address safe departure from formation under system failure.
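The PID control law described above can be written out directly: the output is the weighted sum of the error, its integral, and its derivative. The gains in the usage note are placeholders; real values must be tuned for the specific vehicle and actuator, and a practical controller would also clamp the integral and output.

```python
class PID:
    """Proportional-integral-derivative controller for waypoint tracking."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        """error: signed distance from the vehicle to the current waypoint;
        dt: time since the previous update, in seconds."""
        self.integral += error * dt
        if self.prev_error is None:
            derivative = 0.0                    # no history on first call
        else:
            derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error                 # react to the current error
                + self.ki * self.integral      # remove steady-state offset
                + self.kd * derivative)        # damp the approach
```

For example, `PID(1.0, 0.1, 0.05)` (hypothetical gains) could drive a steering actuator from the lateral waypoint error; separate PID instances would typically run for steering and throttle.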
Previous research (e.g., Balch and Arkin, 1998) has studied formation control strategies, which guide the vehicles' relative movements when maintaining the original formation is compromised by turning of the leading vehicle, an unexpected obstacle in front of a following vehicle, etc. Path planning while cornering is another specific issue that must be addressed for platooning in formation; this is especially important for longer vehicle units, including tugs with aircraft, snowplows, and tugs with multiple baggage carts (e.g., snowplows have relative displacements in both longitudinal and lateral directions). It is helpful for formation planning if a predefined pattern of motion is pre-issued to all vehicles, so that turning and cornering maneuvers can be planned and verified in advance. Pre-planned paths are practical since the designation of Priority 1 (the first to be cleared), Priority 2, and Priority 3 areas is consistent for a given airport (FAA, 2016). Small changes to the pre-planned paths would be required when the lead driver adjusts the offset between the lead and following vehicle due to different snow conditions. The localization of the following vehicle can be based on the position relative to the lead vehicle using GPS, or on the relative position obtained by a radar sensor. Although not evaluated, formation departure becomes increasingly important for larger platoons, in which an abrupt departure may cause a domino effect for following vehicles. Additional challenges may result from situations such as icy conditions, which, combined with unexpected stopping and close following distances, could cause a following vehicle to hit a malfunctioning vehicle. Depending on the situation, it may be most effective to have vehicles leave the formation (veer to the right while stopping) rather than merely stop.

Automatic Dependent Surveillance-Broadcast (ADS-B). ADS-B is a radio broadcasting and receiving transponder device used primarily by aircraft to broadcast information such as identification, location, and heading. All aircraft operating in national airspace must have ADS-B by January 1, 2020. ADS-B can also be used for ground vehicles per AC 150/5220-26 (FAA, 2011b). Ground vehicle use of ADS-B is compatible with the ASDE-X and ASSC airport surveillance systems, which are used at 44 of the nation's busiest airports. ADS-B on ground vehicles may also be used for fleet management and to support Runway Incursion Warning Systems (RIWS) (FAA, 2012), which warn of upcoming runway thresholds, hot spots, and other locations that may require enhanced situational awareness.
ADS-B regularly broadcasts the vehicle's identification, position, speed, and heading information in the radio signal it emits, which is picked up by ground antennas and aircraft antennas, indicating the vehicle's presence and enhancing situational awareness. Airport ground vehicle ADS-B squitter units can operate on either the 1090 ES link or the 978 MHz/UAT link; because the 1090 ES link is used by various other systems, and to avoid congestion caused by too many users, the FAA strongly prefers the use of the 978 MHz/UAT link. Only airport operations vehicles that will be in the movement area can be equipped with ADS-B; ground service equipment and other ramp operation vehicles are not authorized to be equipped with an ADS-B out squitter (FAA, 2011b). Currently 27 airports use ADS-B for ground vehicles, and there are an estimated 1,700 ADS-B vehicle movement area transponders (VMAT) deployed. Not every vehicle has a transponder; for example, a platoon of snowplows may use a single transponder for the lead vehicle, or two transponders for the lead and last vehicles. The largest U.S. airports receive federal support for the purchase of ADS-B systems for ground vehicles to increase safety. It is also possible to integrate vehicle location, movement, and identification data from other GPS-based transponder and fleet management systems into ASDE-X and ASSC systems. Moreover, air traffic controllers often do not "turn on" the ground vehicle information in their displays, so they can focus their equipment resources and attention on the aircraft. If it is desirable to share ground vehicle information directly with ATC using an ADS-B out capability, it is technically feasible to have information from over 600 ground vehicles transmitted via a single ADS-B unit, which eliminates the need for each vehicle to have its own ADS-B transponder (each vehicle would require another GPS-based transponder).
GPS and transponder systems have advanced significantly since the ADS-B technology was developed and certified by FAA, so there are advantages to using another kind of GPS-based transponder rather than an ADS-B transponder for ground vehicles. More recently developed GPS-based transponder systems for ground vehicles typically offer greater location accuracy and are less expensive, since they incorporate newer technology and did not incur the costs associated with FAA certification requirements.

Airport Geographic Information Systems (GIS). Airport geographic information systems (GIS) provide a tool for gathering, managing, and analyzing data. The GIS is primarily used to manage information about the different functional zones of the airport (e.g., movement vs.

non-movement area), their shapes, boundaries, and interconnections, which provides valuable information for mission planning and geofencing for AGVT as well as for fleet management. FAA currently collects GIS data for airports, with supporting information provided in AC 150/5300-18B (FAA, 2014).

Cellular Networks. Cellular networks provide a means of wireless communication by setting up base stations in an array, with each station serving an area within a given diameter. The mobile vehicle connects to the station with the best signal and establishes a network link dynamically. Based on the frequencies and modulation technologies used, current networks are classified as 2G (second generation), 3G, or 4G, and 5G communication is at the edge of commercialization (NHTSA, 2018). Cellular networking generally requires an array of ground base stations and relays (repeaters) and a central server. For 4G communication, a typical microcell repeater can cover a range of 1,000 to 2,000 yards (0.6 to 1.2 mi or 1 to 2 km), while a macrocell repeater/base station can serve an area within a radius of up to 20 mi (35 km) (Zhang, 2012).

Dedicated Short Range Communication (DSRC). Dedicated Short Range Communication (DSRC) is a WiFi-like technology based on the IEEE 802.11p standard. According to NHTSA (NHTSA, 2019), DSRC utilizes frequencies within the 5.9 GHz range and involves seven channels for short-range V2X communications, which allow a vehicle to connect to everything. In the proposed applications, only vehicle-to-vehicle and vehicle-to-infrastructure communications would be needed (described subsequently). DSRC is currently used for truck platooning in the roadway sector since it is less vulnerable to dead spots where a cellular signal or other communications are not available (Marshall, 2019), and it is also less vulnerable to signal interference and weather degradation.
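The functional-zone boundaries maintained in an airport GIS can drive geofencing directly: given a zone polygon exported from the GIS, a vehicle's position can be tested against it each cycle. A minimal sketch using the standard ray-casting point-in-polygon test (the rectangular boundary coordinates below are purely illustrative):

```python
def in_area(point, polygon):
    """Ray-casting point-in-polygon test.  `polygon` is a list of (x, y)
    vertices for, e.g., a movement-area boundary exported from the
    airport GIS; `point` is the vehicle position in the same coordinates."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the edge straddle the horizontal ray through the point?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Illustrative boundary in local coordinates (not real airport data)
movement_area = [(0, 0), (100, 0), (100, 50), (0, 50)]
```

In a fleet-management system, a vehicle crossing out of its authorized zone would trigger an alert or a stop command; real boundaries would come from the AC 150/5300-18B GIS data rather than hand-entered vertices.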
Inter-vehicular (Vehicle-to-Vehicle, V2V) and (Vehicle-to-Infrastructure, V2I) Connectivity. Vehicle-to-infrastructure (V2I) connectivity is useful so vehicles can obtain current and accurate information regarding authorization (e.g., runway status lights) or nearby hazards (e.g., runway incursion warning systems). Inter-vehicular connectivity (also known as vehicle-to-vehicle, or V2V) is critical when multiple vehicles are working in close proximity or in a coordinated effort, such as platooning applications. For platooning, V2V communications are necessary for operations and safety, sharing information such as driver inputs (e.g., braking or steering), failure information, and obstacle information among vehicles. V2V communications may utilize a mobile wireless network such as DSRC, which can be established by placing transmitters on vehicles. For example, a transmitter can be placed on the lead vehicle, and the following vehicle can connect to it to share sensor information. The vehicles can also connect to a 4G or mesh network hosted by a central server. There are industrial platforms for data transmission over heterogeneous networks, such as ROS (Robot Operating System) (Quigley et al., 2009), such that programs running on different machines can communicate through unified Transmission Control Protocol (TCP) connections, a mature standard for modern computer networks. To monitor and transmit information to and from remotely operated vehicles, it is possible to use a wireless local area network (LAN) via one or more physical interfaces, including DSRC (or other WiFi) or a cellular network, enabled by wireless mesh network (WMN) technology. In terms of evaluation, the system performance is technology agnostic in this case and would work equally well using DSRC, other WiFi, 4G, or 5G, so the selection would depend on local conditions, pricing, and preferences.

Ultrasonic Sensors (Sonar).
Similar to radar and LiDAR, which measure the time delay and frequency shift of reflected waves, an ultrasonic sensor (sonar) uses an ultrasonic wave instead of an electromagnetic wave. Ultrasonic sensors have a much shorter sensing distance and are less directional. These characteristics make ultrasonic sensors suitable for short-range detection for imminent collision avoidance and for situations where slow motion and a precise (less

than a yard or less than a meter) maneuver is required. Currently, ultrasonic sensors are used on baggage loaders and can be retrofitted to existing systems to prevent aircraft damage (Textron, 2019). It is anticipated that this technology will be used in other equipment that must move in close proximity to the aircraft, such as cargo loaders (e.g., ULD loaders that raise loads from cargo dollies to the aircraft cargo hold).

Gated Aperture. A gated aperture may be used to reduce glare, as demonstrated by studies at Mitsubishi Electric Research Labs that evaluated 4D ray sampling of camera charge-coupled device (CCD) sensors to reduce the glare effects of camera lenses (Raskar et al., 2008). A gated aperture adds an additional layer of pinhole array filters in front of the CCD sensor such that scattered rays from extremely bright objects, as well as inner reflections from lenses, are captured as noise pixels with high spatial frequency in the raw image captured by the CCD. A digital filter is then applied to the raw image to remove unevenness of brightness caused by the pinhole array, along with a bandpass filter to reduce noise caused by the glare. This method has significantly improved imaging quality under bright lights and glare, such that ghost images and spots are removed with minimal sacrifice of image clarity. It is currently proposed for reducing problems with reflection, particularly in conditions such as snow or glare. Gated apertures can also be used for LiDAR.

Force Sensors. Force sensors convert mechanical strains caused by external forces to an electric signal. The strain gauge, for example, maps the intensity of the external force to a change in resistance, which can be picked up and magnified using a Wheatstone bridge circuit (Hoffman, 1974). Force sensors can be used for a wide range of applications, ranging from snowplow blades to impact detection.

Existing Airport Technologies.
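As a worked example of the strain-gauge and Wheatstone bridge relationship just described: for a quarter bridge with one active gauge and small resistance changes, the output ratio is approximately V_out/V_in ≈ (GF · ε)/4, where GF is the gauge factor and ε the strain. The gauge factor of 2.0 below is a typical metal-foil value, an assumption rather than a figure from the report.

```python
def strain_from_bridge(v_out, v_in, gauge_factor=2.0):
    """Approximate strain from a quarter Wheatstone bridge with one
    active strain gauge.  For small resistance changes,
    V_out / V_in ~ (GF * strain) / 4, so strain ~ 4 * V_out / (GF * V_in).
    gauge_factor=2.0 is typical for metal-foil gauges (an assumed value)."""
    return 4.0 * v_out / (gauge_factor * v_in)

# 5 mV output on a 10 V excitation -> 0.001 strain (1,000 microstrain)
eps = strain_from_bridge(0.005, 10.0)
```

In a snowplow-blade or impact-detection application, this conversion would run on the data-acquisition side, with a threshold on the computed strain triggering an alert.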
There are a number of technologies currently deployed at U.S. airports that may be synergistic or complementary to the deployment of future AGVT. These technologies include surface surveillance systems, NextGen technologies (such as Data Comm, previously discussed), remote sensing technologies such as LiDAR for airport surveys (FAA, 2017), GIS for airport mapping (previously discussed), and enhanced vision systems for drivers (DEVS). A number of airports currently use ASDE-X and ASSC for surface monitoring and SMGCS and A-SMGCS to provide guidance in low visibility conditions. ASDE-X and ASSC are used by ATC to support situational awareness. These systems fuse information from ADS-B, radar, and other sources to provide a real-time map of aircraft approaching and at the airport. Although ATC could view ground vehicles on these systems, the ground vehicle layer is typically not used so the limited bandwidth can be used to display the entire airport. Including the ground vehicles in the display would necessitate reducing the field of view; moreover, it would also increase the workload for ATC, who must focus significant attention on ensuring safe aircraft separation and operation during take-off and landing. Surface Movement Guidance and Control Systems (SMGCS) and Advanced Surface Movement Guidance and Control Systems (A-SMGCS) use surface movement radar, microwave, and other sensors in conjunction with multilateration systems to enhance the safe movement of aircraft in low visibility conditions; at least 64 U.S. airports have A-SMGCS. As part of NextGen, many airports have implemented a wide area augmentation system (WAAS) and, prior to that, many airports implemented a Ground Based Augmentation System (GBAS), which was previously called a Local Area Augmentation System (LAAS).
These systems all provide differential correction for GPS, and although not explicitly addressed in this evaluation, at some airports it may be possible to leverage these technologies for AGVT.

The Driver's Enhanced Vision System (DEVS) provides pilots and ground vehicle operators with vision enhancement, navigation, and tracking on the airfield in limited visibility conditions

due to darkness, fog, smoke, snow, or other environmental constraints. DEVS increases safety by providing enhanced situational awareness. Similar systems have been implemented in other sectors, including augmented reality technologies that have been integrated into the Enhanced Vision System by General Motors (Burns, 2010); this system projects overlaid schematics onto the windshield or augmented reality headsets (such as Microsoft HoloLens) and can be used to highlight information to drivers, such as objects alongside the runway or taxiway, designated hot spots, etc. DEVS may be used by snow removal teams to enhance safety during blizzards with white-out conditions, and by aircraft rescue and fire fighting (ARFF) responders, who may use a system enhanced by thermal imaging capabilities. The requirements and standards for DEVS include components for vision enhancement, navigation, and tracking, which may be useful in the development of requirements for AGVT. FAA has also conducted research to investigate thermal camera applications for ARFF, including DEVS applications, to increase the visibility of people and to identify hot spots on aircraft (Short, Torres, and Kreckie, 2017).

Summary of Enabling Technologies

Table A-1 provides a summary of the enabling technologies proposed for the airside applications that are evaluated in detail. Per the previous discussion, AGVT can utilize a variety of technologies to accomplish the same task; this is recognized by the designation "Alt" in Table A-1,

Table A-1. Summary of enabling technologies and infrastructure.

which provides examples of alternative technologies that could also be used for the proposed applications. Technologies identified as "Alt" are not included in the applications that undergo more detailed evaluation.

Table A-2 provides an estimated overall technology readiness level (TRL) for each of the applications evaluated. Typically, some components required for deployment are relatively mature (e.g., radar for obstacle avoidance at close distances for slow-moving equipment), while other components require significant development and validation (e.g., LiDAR capabilities and supporting hardware and software for obstacle recognition and avoidance in the airside environment, where a wide range of aircraft may be present).

Table A-2. Overall estimated TRL for airside AGVT applications.

References

Balch, T., and Arkin, R. C. (1998). Behavior-based formation control for multirobot teams. IEEE Transactions on Robotics and Automation, 14(6), 926–939.

FAA. (2009, June 12). Driver's Enhanced Vision System (DEVS). Advisory Circular 150/5210-19A. https://www.faa.gov/documentLibrary/media/Advisory_Circular/AC_150_5210-19A.pdf.

FAA. (2011b, November 14). Airport Ground Vehicle Automatic Dependent Surveillance—Broadcast (ADS-B) Out Squitter Equipment. Advisory Circular 150/5220-26. https://www.faa.gov/documentLibrary/media/Advisory_Circular/150-5220-26-consolidated-chg2.pdf.

FAA. (2012, August 28). Performance Specification for Airport Vehicle Runway Incursion Warning System (RIWS). Advisory Circular 150/5210-25. https://www.faa.gov/documentLibrary/media/Advisory_Circular/AC_150_5210-25.pdf.

FAA. (2014, February 24). General Guidance and Specification for Submission for Aeronautical Surveys to NGS: Field Data Collection and Geographic Information System (GIS) Standards. Advisory Circular 150/5300-18B. https://www.faa.gov/documentLibrary/media/Advisory_Circular/150-5300-18B-chg1-consolidated.pdf.

FAA. (2016, July 29). Airport Field Condition Assessments and Winter Operations Safety. Advisory Circular 150/5200-30D. https://www.faa.gov/documentLibrary/media/Advisory_Circular/150-5200-30D.pdf.

FAA. (2017, August 29). Standards for Using Remote Sensing Technologies in Airport Surveys. Advisory Circular 150/5300-17C. https://www.faa.gov/documentLibrary/media/Advisory_Circular/150-5300-17C-Chg1.pdf.

FAA. (2018a, February 2). Specification for L-853, Runway and Taxiway Retroreflective Markers. Draft Advisory Circular 150/5345-39E. https://www.faa.gov/documentLibrary/media/Advisory_Circular/draft-150-5345-39E-industry.pdf.

FAA. (2018b). Runway Status Lights. https://www.faa.gov/air_traffic/technology/rwsl/.

FAA. (2019, March 11). Fact Sheet—Data Communications. https://www.faa.gov/news/fact_sheets/news_story.cfm?newsId=21994.

FTA. (2018). Strategic Transit Automation Research Plan. FTA Research. https://www.transit.dot.gov/sites/fta.dot.gov/files/docs/research-innovation/114661/strategic-transit-automation-research-report-no-0116_0.pdf.

Google. (2019). Cloud Speech-To-Text API. https://cloud.google.com/speech-to-text/docs/.

Hoffman, K. (1974). Applying the Wheatstone Bridge Circuit. Germany: HBM.

LEOS, Lufthansa. (2013). Pilot controlled dispatch towing—without engines running. http://www.lufthansa-leos.com/taxibot.

Lyons, R. E., and Vanderkulk, W. (1962). The use of triple-modular redundancy to improve computer reliability. IBM Journal of Research and Development, 6(2), 200–209.

Marshall. (2019, March 20). A Cab's-Eye View of How Peloton's Trucks "Talk" to Each Other. WIRED. https://www.wired.com/story/cabs-eye-view-how-pelotons-trucks-talk/?mbid=email_onsiteshare.

Mobileye Corp. (2018). The Sensing Challenge. https://www.mobileye.com/our-technology/sensing/.

NHTSA. (2016). Federal Automated Vehicles Policy: Accelerating the Next Revolution in Roadway Safety. Washington, DC: U.S. Department of Transportation.

NHTSA. (2018). U.S. Department of Transportation Releases Request for Comment (RFC) on Vehicle-to-Everything (V2X) Communications. https://www.nhtsa.gov/press-releases/us-department-transportation-releases-request-comment-rfc-vehicle-everything-v2x.

Pankiewicz, P., Powiertowski, W., and Roszak, G. (2008). VHDL implementation of the lane detection algorithm. 15th International Conference on Mixed Design of Integrated Circuits and Systems, Poznan, Poland, pp. 581–584.

Quigley, M., Gerkey, B., Conley, K., et al. (2009). ROS: an open-source Robot Operating System. IEEE International Conference for Robotics and Automation.

Raskar, R., Agrawal, A., Wilson, C., and Veeraraghavan, A. (2008). Glare Aware Photography: 4D Ray Sampling for Reducing Glare Effects of Camera Lenses. 2008 ACM SIGGRAPH.

Romoff, R. (2019, March 1). Automotive Grade Linux Releases Open Source Speech Recognition APIs. The Linux Foundation. https://www.linuxfoundation.org/news/2019/03/automotive-grade-linux-releases-open-source-speech-recognition-apis/.

Rösmann, C., Hoffmann, F., and Bertram, T. (2017). Kinodynamic Trajectory Optimization and Control for Car-Like Robots. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, British Columbia, Canada.

Shaffer, B. (October 2017). Why are automotive radar systems moving from 24GHz to 77GHz? Texas Instruments E2E Community Blogs. https://e2e.ti.com/blogs_/b/behind_the_wheel/archive/2017/10/25/why-are-automotive-radar-systems-moving-from-24ghz-to-77ghz.

Short, M., Torres, J., and Kreckie, J. (2017). Thermal Imaging for Aircraft Rescue and Fire Fighting Applications. No. DOT/FAA/TC-17/27.

Textron. (2019). Smart Sense. Textron Ground Service Equipment. https://textrongse.txtsv.com/vehicles/smart-sense.

U.S. DOT. (2018). Preparing for the Future of Transportation, Automated Vehicles 3.0. https://www.transportation.gov/sites/dot.gov/files/docs/policy-initiatives/automated-vehicles/320711/preparing-future-transportation-automated-vehicle-30.pdf.

Walker, W., Lamere, P., Kwok, P., Raj, B., Singh, R., Gouvea, E., Wolf, P., and Woelfel, J. (2004). Sphinx-4: A flexible open source framework for speech recognition.

Wang, Y., Teoh, E. K., and Shen, D. (2004). Lane detection and tracking using b-snake. Image Vision Computing, 22(4):269–280.

Zhang, J. (2012). Tutorial on Small Cell/HetNet Deployment Part 1: Evolutions towards small cell and HetNet. IEEE Globecom'12 Industry Forum, 2012. http://globecom2012.ieee-globecom.org/downloads/t1/1SmallCellTutorialGlobecom12Jie_v1b.pdf.

