Ad Hoc Teams—Teams that form rapidly and uniquely for short-term tasks and missions.
Adaptable Automation—Automation that can be activated or have its level of automation modified by the human in real time during system operation.
Adaptive Automation—Automation that automatically changes its performance or level of automation based on time, human performance or state, or other predefined characteristics of team performance.
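The distinction between adaptable and adaptive automation can be sketched in a few lines: in the adaptive case it is the system, not the human, that changes the level of automation (LOA) in response to a measured trigger such as operator state. The LOA names and the workload thresholds below are illustrative assumptions, not drawn from any fielded system.

```python
# Sketch of adaptive automation: the system raises or lowers its level
# of automation (LOA) based on a measured operator state. LOA names and
# thresholds are illustrative assumptions.

LOA_LEVELS = ["manual", "decision_support", "supervisory", "full_auto"]

def adapt_loa(current_loa: str, operator_workload: float) -> str:
    """Shift LOA one step up when workload is high, one step down when low."""
    i = LOA_LEVELS.index(current_loa)
    if operator_workload > 0.8 and i < len(LOA_LEVELS) - 1:
        return LOA_LEVELS[i + 1]   # offload an overloaded operator
    if operator_workload < 0.3 and i > 0:
        return LOA_LEVELS[i - 1]   # re-engage an underloaded operator
    return current_loa             # within the predefined band: no change

# High measured workload triggers an automatic step up in automation.
print(adapt_loa("decision_support", 0.9))  # -> supervisory
```

Adaptable automation would instead expose `current_loa` as a setting the human changes directly at runtime.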
Agile Software—An approach to software development that proceeds through rapid, iterative increments, or “sprints,” each of which defines requirements for a limited set of functions and then develops, integrates, and tests the associated software. Agile software-development approaches feature a focus on (1) individuals and interactions over processes and tools; (2) working software over comprehensive documentation; (3) customer collaboration over contract negotiation; and (4) responding to change over following a plan (Abrahamsson et al., 2017; Beck et al., 2001; Cockburn, 2002).
Artificial Intelligence (AI)—Systems that seek to provide the intellectual processes characteristic of people, such as the ability to reason, discover meaning, generalize, or learn from past experience (Copeland, 2021). AI systems may be applied to parts of a task (e.g., perception and categorization, natural language understanding, problem solving, reasoning, system control), or to a combination. AI software approaches may involve symbolic approaches (i.e., rule-based or case-based reasoning), often taking form as decision-support systems; may apply other advanced algorithms such as Bayesian belief-nets, fuzzy systems, and connectionist or machine learning-based approaches (e.g., logistic regression, decision trees, or neural networks); or may incorporate hybrid architectures that include more than one algorithmic approach.
AI Auditability—The ability to document and assess the data and models used in developing an AI-embedded system.
AI Explainability—The ability to provide satisfactory, accurate, and efficient explanations of the results (i.e., recommendations, decisions, and/or actions) of an AI system.
Automation—A device that performs functions independently, without continuous input from an operator (Groover, 2020). Automation can be fixed (mechanical) or programmable (based on defined rules and feedback loops to ensure proper execution), either via a static set of software commands, or involving flexible, rapid customization by a human operator. Tasks may be fully automated (i.e., autonomous) or semi-automated, requiring human oversight and control for portions of the task. It is also often defined as “the execution by a machine agent (usually a computer) of a function that was previously carried out by a human. What is considered automation will therefore change with time” (Parasuraman and Riley, 1997, p. 231).
Automation Conundrum—“The more automation is added to a system, and the more reliable and robust that automation is, the less likely that human operators overseeing the automation will be aware of critical information and able to take over manual control when needed. More automation refers to automation use for more functions, longer durations, higher levels of automation, and automation that encompasses longer task sequences” (Endsley, 2017, p. 8).
Autonomy—Systems that have a set of intelligence-based capabilities that can respond to situations that were not explicitly programmed or were not anticipated in the design (i.e., systems that can generate decision-based responses). Autonomous systems have a degree of self-government and self-directed behavior (serving as a human’s proxy for decisions) (USAF, 2013). Systems may be fully autonomous or partially autonomous (i.e., requiring human actions or inputs for portions of the task).
Bias—A preference toward certain information or options. Bias is created through systematic error introduced by selecting or encouraging one outcome or answer over others (Merriam-Webster, 2021). In the case of AI, bias may be introduced through a limited set of training data that fails to consider the wider range of circumstances in which the system may be employed, or by algorithms that focus on features in the datasets that may be incidental to performance.
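The training-data form of bias can be demonstrated with a deliberately trivial model: a majority-class predictor fit on a sample drawn from only one operating condition errs systematically once deployed on the wider population. All data here is synthetic and the condition labels are illustrative assumptions.

```python
# Illustrative sketch of training-data bias: a trivial majority-class
# "model" fit on a narrow sample systematically errs on the wider
# deployment population. All data is synthetic.

from collections import Counter

def fit_majority(labels):
    """Return the most frequent label in the training data."""
    return Counter(labels).most_common(1)[0][0]

# Training data drawn almost entirely from one operating condition.
train = ["clear"] * 90 + ["fog"] * 10
model = fit_majority(train)                      # learns "clear"

# Deployment conditions the training sample under-represented.
deploy = ["clear"] * 50 + ["fog"] * 50
accuracy = sum(model == y for y in deploy) / len(deploy)
print(model, accuracy)                           # -> clear 0.5
```

The same structural failure occurs, less visibly, when a complex model latches onto incidental features of a skewed dataset.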
Black-Box AI—AI systems in which the reasoning and processes are not transparent or observable.
Brittleness—The inability of automation to perform at the limits of its designed performance envelope, often resulting in unexpected system failures.
Common Ground—Mutual knowledge, beliefs, and assumptions required to support interdependent actions in teams (Klein, Feltovich, and Woods, 2005, p. 146) (see also shared situation awareness and shared mental models).
Context of Use—Includes characteristics of the users, the activities they perform, how the work is distributed across people and machine agents, the range and complexity of situations that can arise, as well as the broader sociotechnical environment in which the system will be integrated (NRC, 2007).
Cooperation—“Negotiating and aligning individual goals when they differ from a joint goal” (Chiou and Lee, 2021, p. 10), with individual teammates willing to give up individual benefit to achieve greater benefits for the team.
Directability—“One’s ability to influence the behavior of others and complementarily be influenced by others” (Johnson and Bradshaw, 2021, p. 390).
Distributed Teams—Teams that are distributed spatially (e.g., blocked from view by objects, in separate rooms, or separate geographical areas) or temporally.
Explainability—Support for understanding the logic, process, factors, or reasoning upon which a system’s actions or recommendations are based.
Flexible Autonomy—Automation in which the level of automation can change dynamically over time for different functions, using either adaptive or adaptable approaches.
Granularity of Control (GOC)—The degree of specificity of control inputs that are required to interact with the system. GOC can range “from (a) manual control; to (b) programmable control, requiring the programming of each task parameter and specification; (c) Playbook control, selecting from a Playbook of preset, yet adaptable, behaviors (Miller, 2000); and (d) goal-based control, where only a high-level goal needs to be provided to the system (USAF, 2015)” (Endsley, 2017, p. 17).
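The four granularities quoted above can be made concrete by contrasting the input a human must supply for the same notional surveillance task at each level. The class and field names below are illustrative assumptions, not part of the cited taxonomy.

```python
# Sketch of the four granularities of control (GOC) named above, shown
# as the input a human must supply for the same task. Field names are
# illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class ManualControl:            # (a) continuous stick-and-throttle inputs
    stick: float = 0.0
    throttle: float = 0.0

@dataclass
class ProgrammableControl:      # (b) every task parameter specified
    waypoints: list = field(default_factory=list)
    altitude_m: float = 1500.0
    sensor_mode: str = "EO"

@dataclass
class PlaybookControl:          # (c) a preset, adaptable behavior selected
    play: str = "orbit_and_observe"
    overrides: dict = field(default_factory=dict)

@dataclass
class GoalBasedControl:         # (d) only a high-level goal provided
    goal: str = "maintain surveillance of the assigned area"
```

Moving from (a) to (d), the human supplies progressively less specification and the system fills in progressively more.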
Human-AI Team—A team consisting of “one or more people and one or more AI systems requiring collaboration and coordination to achieve successful task completion” (Cuevas et al., 2007, p. 64).
Human-Systems Integration (HSI)—Addresses human considerations within the system design and implementation process, with the aim of maximizing total system performance and minimizing total ownership costs (Boehm-Davis, Durso, and Lee, 2015). HSI incorporates human-centered analyses, models, and evaluations throughout the system lifecycle, starting from early operational concepts, through research, design, and development, and continuing through operations (NRC, 2007). Within the DOD, HSI is divided into seven domains: manpower, personnel, training, human factors engineering, safety and occupational health, force protection and survivability, and habitability.
Ironies of Automation—The more advanced the automation, the more crucial the contribution of the human; the less likely the human is to have the manual skills necessary; and the more likely that workload will be high and more advanced cognitive skills will be needed when humans take over task performance (Bainbridge, 1983).
Level of Automation (LOA)—The amount of control or authority that is granted to the automation (or AI system) for a given task or function.
Lumberjack Effect—“More automation yields better human-system performance when all is well but induces increased dependence, which may produce more problematic performance when things fail” (Onnasch et al., 2014, p. 477).
Meaningful Human Control—“The ability to make timely, informed choices to influence AI-based systems that enable the best possible operational outcomes” (Boardman and Butcher, 2019, p. 7-1).
Mental Model—“Mechanisms whereby humans are able to generate descriptions of system purpose and form, explanation of system functioning and observed system states, and predictions of future states” (Rouse and Morris, 1985, p. 7).
Model Drift—Occurs when the relationship between input and output data changes over time, negatively affecting the accuracy of the model’s predictions (Widmer and Kubat, 1996).
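A common operational response to model drift is to monitor a model's rolling accuracy on recently labeled data and flag drift once it falls below a threshold. The window size and threshold below are illustrative assumptions; production monitors often use statistical tests on the input distribution as well.

```python
# Minimal sketch of model-drift monitoring: track rolling accuracy on
# recent labeled data and flag drift when it falls below a threshold.
# Window size and threshold are illustrative assumptions.

from collections import deque

class DriftMonitor:
    def __init__(self, window: int = 100, threshold: float = 0.7):
        self.recent = deque(maxlen=window)   # sliding window of hit/miss
        self.threshold = threshold

    def record(self, prediction, truth) -> None:
        self.recent.append(prediction == truth)

    def drifted(self) -> bool:
        """True once rolling accuracy drops below the threshold."""
        if not self.recent:
            return False
        return sum(self.recent) / len(self.recent) < self.threshold

monitor = DriftMonitor(window=10, threshold=0.7)
for _ in range(10):
    monitor.record(1, 1)          # input-output relationship stable
assert not monitor.drifted()
for _ in range(6):
    monitor.record(1, 0)          # the relationship has shifted
print(monitor.drifted())          # -> True
```

Once the flag trips, typical remedies are retraining on fresh data or rolling back to a prior model.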
Multi-Domain Operations (MDO)—Dynamic and distributed combinations of actions across the traditionally separate air, land, maritime, space, cyberspace, information, and electromagnetic spectrum domains to achieve synergistic and combined effects with improved mission outcomes.
Multi-Domain Operations Command and Control (MDC2)—Connected and “distributed sensors, shooters, and data from all domains to joint forces, enabling coordinated exercise of authority to integrate planning and synchronize convergence in time, space, and purpose” (USAF, 2020, p. 6). Also called joint all-domain command and control (JADC2).
On-the-Loop Control—Operations in which people oversee a system that is operating at a high level of automation at very fast timeframes and/or volumes exceeding human capacity. There is no expectation that people will be able to monitor operations or intervene before automation errors occur; however, it may be possible to take actions to turn off the automation or change automation behaviors in an outer control loop.
Out-of-the-Loop (OOTL)—The tendency for people working with automated systems to be slower to detect a problem with system performance and slower to understand the problem once detected.
Playbook—A set of plays that are templates of behavior for automation, known to be effective at accomplishing specific goals (Miller, 2000).
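A Playbook can be sketched as a set of named play templates whose parameters the human may adapt at call time, sitting between fully programmable control and goal-based control. The play names and parameters below are illustrative assumptions, not drawn from Miller (2000).

```python
# Sketch of a Playbook: named play templates with adaptable parameters
# that the human can override when calling a play. Play names and
# parameters are illustrative assumptions.

PLAYBOOK = {
    "orbit_and_observe": {"radius_km": 5.0, "altitude_m": 3000, "sensor": "EO"},
    "route_recon":       {"speed_mps": 60.0, "altitude_m": 1500, "sensor": "IR"},
}

def call_play(name: str, **overrides):
    """Instantiate a play template, applying any human-supplied overrides."""
    play = dict(PLAYBOOK[name])            # copy, so the template is untouched
    unknown = set(overrides) - set(play)
    if unknown:
        raise ValueError(f"unknown parameters: {unknown}")
    play.update(overrides)
    return play

# The human selects a preset behavior and adapts a single parameter.
print(call_play("orbit_and_observe", radius_km=8.0))
```

Calling a play thus requires far less specification than programming each task parameter, while still allowing adaptation.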
Resilient Teams—Groups of people and/or automated agents that have the capacity to respond to change and disruption in a flexible and innovative manner to achieve successful outcomes.
Responsivity—“The input–output gain of a detector system, reflecting an ability to adjust to sudden, altered conditions in the environment and to resume stable operation” (Chiou and Lee, 2021, p. 6). Automation or AI responsivity refers to the “degree to which the automation effectively adapts to the person and situation” (Chiou and Lee, 2021, p. 6).
Shared Mental Model—A consistent understanding and representation of how systems work across teammates (i.e., the degree of agreement of one or more mental models). This includes models of the technology and equipment, models of taskwork, models of teamwork, and models of teammates (e.g., knowledge, skills, attitudes, preferences) (Cannon-Bowers, Salas, and Converse, 1993).
Shared Situation Awareness—“The degree to which team members possess the same SA on shared SA requirements” (Endsley and Jones, 2001, p. 48).
Situation Awareness (SA)—“The perception of the elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future” (Endsley, 1988, p. 97).
Situation Structure—A 2 x 2 matrix that “specifies the choices available to each actor in a dyad, the outcomes of their choices, and how those outcomes depend on the choices of the other agent” (Chiou and Lee, 2021, p. 9).
Social Intelligence—An “aggregated measure of self- and social-awareness, evolved social beliefs and attitudes, and a capacity and appetite to manage complex social change” (Ganaie and Mudasir, 2015, p. 23).
Supervisory Control—Control by a human operator of automation, which, at a lower level, is controlling a dynamic system. The human operator handles higher-level tasks and determines the goals of the overall system, monitors the system to determine whether operations are normal and proceeding as desired, and diagnoses difficulties and intervenes in the case of abnormality or undesirable outcomes (Sheridan, 1986; Sheridan and Johannsen, 1976).
Taskwork—Activities, skills, and knowledge associated with carrying out the tasks required for a job (i.e., the functioning, operating procedures, and capabilities and limitations of equipment and technology; task procedures, strategies, constraints; relationships between components; and likely contingencies and scenarios). Taskwork is directly related to the goals of a team (Cannon-Bowers et al., 1993), and is often contrasted with teamwork.
Team—A “distinguishable set of two or more people who interact dynamically, interdependently, and adaptively toward a common and valued goal/objective/mission, who have each been assigned specific roles or functions to perform, and who have a limited lifespan of membership” (Salas et al., 1992, p. 4).
Teammate—A fellow member of a team. Teammates may be human or non-human (e.g., an animal, bird, robot, or autonomous software agent).
Teamwork—An interrelated set of knowledge, skills, and attitudes that facilitate coordinated, adaptive performance in teams. This includes an understanding of roles, responsibilities, interdependencies and interaction patterns, communications, and information flow (Cannon-Bowers et al., 1993). Teamwork is often contrasted with taskwork.
Team Mental Model—A mental model of one’s teammate(s) that provides an understanding of teammates’ capabilities, limitations, current goals and needs, and current and future performance (Cannon-Bowers, Salas, and Converse, 1993).
Team Situation Awareness—“The degree to which every team member has the SA required for his or her responsibilities” (Endsley, 1995, p. 39).
Theory of Mind—The mental capacity to understand other people and their behavior by ascribing mental states to them.
Transparency—The understandability and predictability of the system (Endsley, Bolte, and Jones, 2003), including its “abilities to afford an operator’s comprehension about an intelligent agent’s intent, performance, future plans, and reasoning process” (Chen et al., 2014a, p. 2).
Trust—The attitude that an “agent will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability” (Lee and See, 2004, p. 2). Trust can mediate the degree to which people rely on each other as well as on a technology, such as AI.
White-Box AI—AI approaches that can explain how they behave, how they produce predictions, and what the influencing factors are (i.e., transparent approaches).