6
Augmentative and Alternative Communication and Voice Products and Technologies
The ease and simplicity of use of typical natural speech mask the complexity of a speech production process that involves precise control and coordination of respiration, voice, articulation, and language comprehension and expression (van der Merwe, 2009). For many, speech is the external expression of language, and the motor skills involved are performed with accuracy and speed, without conscious control (Netsell, 1982). With impairment, alterations in speech subsystems become apparent and the complexity revealed. The primary rationale for individuals’ electing to use augmentative and alternative communication (AAC) is the inability of their natural speech to meet all of their daily communication needs. Although the severity of impairment plays a role in determining AAC needs and appropriate interventions, other factors include level of communication complexity, skills of communication partners, communication environments and environmental factors, rate of communication, and proficiency at strategic communication, among others. The delicate balance that yields automaticity of natural speech planning, programming, and execution is not replaced by AAC systems, nor does AAC fully mitigate impairments in natural speech production.
Although the primary focus of this chapter is AAC systems, the discussion also briefly addresses voice restoration technologies that support communication associated with head and neck cancer treatments. AAC refers to all types of communication other than oral speech (e.g., pictures, symbols, writing, hand gestures) (ASHA, 2016a). AAC systems may be unaided (e.g., signing, gestures) or aided (Beukelman and Mirenda, 2013). Aided AAC systems include nontechnology assistive products (e.g., communication
boards, books) and technology-based products (e.g., speech-generating devices [SGDs], mobile technologies). This chapter begins with an overview of the conditions benefiting from the use of AAC technologies, which is followed by a detailed taxonomy of AAC and voice products and technologies. Next is a review of the clinical considerations entailed in comparing natural speech and technology-based voice output systems. Evaluation and monitoring, training and adaptation, and access and availability are then addressed in turn. The chapter next considers voice restoration following head and neck surgery. The final section presents findings and conclusions. Before proceeding, it is important to note that the research in this field often has focused on specific areas and populations, making generalizations across studies problematic and highlighting the need for AAC-specific research across adult populations (Bourgeois, 2013).
OVERVIEW OF CONDITIONS BENEFITING FROM AAC TECHNOLOGIES
Prevalence of AAC Need
An estimated 1.3 percent of Americans (about 4 million people) cannot reliably meet their daily communication needs using natural speech (Beukelman and Mirenda, 2013), and the prevalence and complexity of communication disorders increase with age (Yorkston et al., 2010a). Additionally, many individuals with other disabilities (e.g., developmental, physical) have co-occurring communication disabilities (Lawthers et al., 2003; Perry et al., 2004). Although datasets on the prevalence of AAC use are limited, increases in the number of individuals requiring AAC have been observed (Light and McNaughton, 2012). Factors contributing to this increase include the rising incidence of autism spectrum disorders (CDC, 2011, 2014); advances in medical intervention that have resulted in improved survival, albeit with lifelong disability (Durkin et al., 2016; Hustad and Miles, 2010; Vincer et al., 2006); increased life spans of individuals with communication disability (Balandin and Morgan, 2001); and increased overall life expectancy (Gaskin et al., 2016; Segalman, 2011). Improvements in AAC technology that better account for the unique cognitive and linguistic skills of persons with physical and cognitive disabilities have resulted in new opportunities for the appropriate provision of AAC services (Beukelman and Mirenda, 2013; Light and McNaughton, 2012).
Medical Conditions Benefiting from AAC
Prevalent conditions leading to a need for AAC include Alzheimer’s disease, Parkinson’s disease, autism spectrum disorder, learning difficulties, stroke, cerebral palsy, head/brain injury, profound and multiple learning
disabilities, and motor neuron disease/amyotrophic lateral sclerosis (ALS) (Perry et al., 2004; Wodka et al., 2013). Other conditions include, but are not limited to, head and neck cancers (Sullivan et al., 2007b), aphonia/voice impairment (Rousseau et al., 2015), progressive illnesses (e.g., multiple sclerosis, Huntington’s disease) (Beukelman et al., 2007c), dementia (Bourgeois, 1992; Bourgeois et al., 2001), primary progressive aphasia (King et al., 2007), brainstem impairment/locked-in syndrome (Culp et al., 2007), genetic associations/syndromes (e.g., Prader-Willi, Williams, Rett, Angelman, fragile X, Down, 22q11 deletion) (Brady et al., 2006; McDuffie et al., 2016), and other neuromuscular diseases (e.g., muscular dystrophy, spinal muscular atrophy) (Ball et al., 2012, 2016a; Fried-Oken et al., 2015). In an Australian sample, the largest number of people with complex communication needs were aged 19 to 40 years. Most individuals with such needs as a result of congenital conditions were in the same age range, with cerebral palsy (46 percent), genetic/congenital syndromes (37 percent), and autism spectrum disorder (48 percent) predominating. The same study found that some conditions associated with complex communication needs increase with age (e.g., stroke, dementia, laryngectomy, Parkinson’s disease, Huntington’s disease) (Perry et al., 2004).
Data on 2014 Medicare services (see Appendix C) indicate that the majority (168/227, or 73 percent) of SGDs funded were in the E2510 category (SGD, synthesized speech output, multiple message formulation methods). Although the reason for the predominance of this category is unknown, funding, professional training, availability of AAC assessment teams, and public awareness likely contribute. Many individuals use this type of AAC device to produce complex language, while others use the sophisticated features of the device to support beginning communication skills (Brock et al., 2017; Ganz et al., 2015).
A potential misalignment exists between clinician perceptions of the need for AAC and actual need (Hustad and Miles, 2010). This misalignment may lead to underestimates of the number of individuals who would benefit from AAC when identification relies on clinicians (e.g., speech-language pathologists [SLPs], physicians) alone. There is no evidence that persons with complex communication needs who undergo AAC evaluation fail to receive a recommendation for AAC technology. The greater challenge is that few SLPs are available to provide AAC evaluation and treatment services, as is discussed later in this chapter.
TAXONOMY
AAC systems are used to establish functional communication when natural speech methods are insufficient to achieve daily communication goals and meet communication needs (Beukelman and Mirenda, 2013). Aided AAC systems can be categorized into nontechnology and technology-based
products. Nontechnology products are nonelectronic boards or books that contain images that the individual selects to convey messages (e.g., picture symbols, alphabet boards, photograph books). Technology-based systems employ hardware and software to produce visual output, that is, digitally displayed messages (i.e., dynamic or static displays) or voice output (verbal messages [SGDs and mobile AAC technologies]). For the purposes of this report, the term “AAC technology” refers generally to technology-based communication systems with voice output, and it includes both SGDs and mobile AAC technologies. Voice output may be digitized, synthesized, or a combination of the two. Box 6-1 summarizes the definitions relevant to the AAC taxonomy used in this chapter (see also Table 6-1 and Annex Table 6-1 at the end of this chapter).
Technology-based AAC systems include a number of features that need to be considered when these systems are selected for particular individuals (see Table 6-1). Table 6-2 summarizes the ways in which vocabulary and messages are represented and generated for communication using technology-based AAC systems. To optimize a particular individual’s communication performance, any number of features may need to be personalized or customized by an SLP or other qualified team member. While careful selection of these features may partially mitigate a communication impairment, training in use of the selected AAC technology alone cannot eliminate environmental and personal barriers that may impact use.
AAC Software
Important features of AAC software include (1) language/message representation methods, (2) vocabulary selection and organization based on communication needs and personal preferences, and (3) language/message generation options (Hill and Corsi, 2012). The features shown in Table 6-2 are not mutually exclusive, and multiple methods are often integrated into communication (e.g., a combination of direct selection for typical use and scanning for selection when fatigued; word-by-word message formulation strategies for novel utterances combined with preformulated messages for rapid access to frequently used utterances). One consideration in the selection of software features is the additional cognitive tasks associated with each option or combination of options; successful communication in the context of the cognitive, visual, and learning demands of complex AAC systems is influenced by an individual’s language and cognitive status (Light and McNaughton, 2013; Rowland et al., 2003). While extensive evidence supports the benefits of some software and apps for language and access methods, little evidence exists for others as yet (Caron and Light, 2016) (see also Annex Table 6-2 at the end of this chapter).
Software Message Management Features
To communicate with AAC, individuals employ formulation, storage, and retrieval (words, codes, messages) strategies (Beukelman and Mirenda, 2013). A variety of software options are used to manage and generate messages, including but not limited to spelling letter-by-letter, using symbols
TABLE 6-1
Communication-Related Features of Aided AAC Systems
Feature | No-Technology AAC | Digitized SGD | Digitized SGD | Digitized SGD | Digitized SGD | Synthesized SGD | Synthesized SGD | Mobile AAC Technology
---|---|---|---|---|---|---|---|---
Minutes of recording time | | ≤8 | 9–20 | 21–40 | >40 | >40 | >40 | >40
Digitized voice output | X | X | X | X | X | |||
Synthesized voice output | X | X | X | |||||
Message banking | X | X | X | X | X | X | X | |
Voice banking | X | X | X | X | ||||
Visual output | X | X | X | X | X | X | X | |
Preprogrammed messages | X | X | X | X | X | X | X | X |
Message formulation | X | X | X | X | X | |||
Battery operated | X | X | X | X | X | X | X | |
Rechargeable | X | X | X | X | X | X | ||
Fixed display | X | X | X | X | X | |||
Dynamic display | X | X | X | |||||
Physical contact/direct selection | X | X | X | X | X | X | X | X |
Scanning | X | X | X | X | X | X | X | |
Multiple access options | X | X | X | X | X | |||
Eye gaze access | X | X | X | |||||
Graphic/symbol representation | X | X | X | X | X | X | X | X |
Text representation | X | X | X | X | X | |||
Photo/visual scene representation | X | X | X | X | X | X | X | X |
Rate acceleration: encoding | X | X | X | X | X | X | X | X |
Rate acceleration: prediction | X | X | X | |||||
Visual adjustments (spacing, font, brightness) | X | X | X | X | X | X | ||
Auditory adjustments (volume) | X | X | X | X | X | X | X | |
Funding (Medicare/Medicaid, insurers, VA) | X | X | X | X | X | X | ||
Assessment and training | X | X | X | X | X | X | X
NOTE: AAC = augmentative and alternative communication; SGD = speech-generating device; VA = U.S. Department of Veterans Affairs.
TABLE 6-2
Components of AAC Technology
LANGUAGE COMPONENTS
Language Representation | Vocabulary | Method of Utterance Generation
HARDWARE AND SOFTWARE COMPONENTS
Display Features | Control and Selection Methods | Outputs
OTHER COMPONENTS AND SERVICES
System Options | Manufacturer Options | Other Supports
SOURCES: Adapted from Hill, 2010; Hill and Corsi, 2012.
to represent words and messages, sequencing icons to represent words and messages, selecting individual words from a display to generate word-byword messages, and selecting partial and full messages that have been programmed and stored for retrieval. Each variation is appropriate for some individuals, and given the complex cognitive demands of these systems, careful consideration is required across a wide range of technologies to match individuals with the most appropriate systems (Higginbotham et al., 2007; Light and McNaughton, 2013; Mizuko et al., 1994; Ratcliff, 1994; Rowland et al., 2003; Thistle and Wilkinson, 2013; Wagner and Jackson, 2006).
Aided symbol representation includes the visual, auditory, or tactile presentation of communicative messages, symbols, and codes from which the
individual selects (Beukelman and Mirenda, 2013). Aided symbols include two-dimensional symbols that can represent other items, and may include tangible objects (e.g., miniatures, partial objects), textures (e.g., a piece of spandex to represent swimming/a swimming suit), picture symbols (e.g., photographs, drawings, codes), and orthographic symbols (e.g., alphabet, Braille) (Beukelman and Mirenda, 2013). Most AAC technologies use aided symbols with visual displays of pictures, alphabet, pictorial symbols, or codes. For individuals with visual or other impairments, AAC technologies may present spoken messages or offer tactile representation of items (e.g., objects, textures, shapes). Although a wide array of strategies is used with communication software, methods for representing language or messages can be identified as (1) alphabet- or text-based methods, (2) single-meaning picture symbols, (3) visual scenes, or (4) multimeaning icons or semantic compaction (Beukelman et al., 2015; Ganz et al., 2015; Gevarter et al., 2014; Light and McNaughton, 2012; Therrien and Light, 2016).
Keystroke and Rate Manipulation
People who rely on AAC often select components of messages one at a time from the display. A number of strategies have been developed to reduce the time and effort this process requires (Beukelman and Mirenda, 2013; Hoag et al., 2009).
Encoding In natural speech production, encoding refers to the sequential building of sounds into words (Hartsuiker et al., 2005). In AAC technologies, encoding involves converting electronic data into a standard format that can be sent within the device and later decoded as communication output (Barrett and King, 2005). Encoding strategies typically found in AAC technologies involve word and message features in the communication software.
Stored words Alphabet- or text-based methods, including alpha, alphanumeric, letter-category, and numeric codes, may be used to represent words. Alpha codes typically employ truncation (e.g., use the first few letters of a word, such as sched = schedule) or contraction (e.g., use the most salient letters, such as schdl = schedule). Alphanumeric codes use letter–number combinations (e.g., sched1 = work schedule, sched2 = travel schedule, sched3 = home schedule). Letter-category codes involve indicating a category with the first letter and then the word with the second letter (e.g., S = my schedules, SW = work schedule, ST = travel schedule). Numeric codes have limited use but are helpful when display space is limited (e.g., assignment of an arbitrary number, such as 15 = work schedule). Morse code is another encoding system available in some AAC technologies; dot and dash combinations are used to access the alphabet, punctuation, numbers, and computer functions (King, 2000).
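The alpha, alphanumeric, and letter-category strategies described above all amount to a lookup from a short code to a stored word or message. The following minimal Python sketch illustrates the idea; the code table and vocabulary are hypothetical examples for illustration, not drawn from any particular AAC product.

```python
# Hypothetical code table illustrating truncation, contraction,
# alphanumeric, and letter-category encoding strategies.
CODES = {
    "sched": "schedule",          # truncation: first few letters
    "schdl": "schedule",          # contraction: most salient letters
    "sched1": "work schedule",    # alphanumeric: letter-number combination
    "sched2": "travel schedule",
    "sw": "work schedule",        # letter-category: S = schedules, W = work
    "st": "travel schedule",
}

def expand(code: str) -> str:
    """Return the stored word/message for a code, or the input itself
    if no code is defined (falling back to letter-by-letter spelling)."""
    return CODES.get(code.lower(), code)

print(expand("SCHED"))   # schedule
print(expand("sw"))      # work schedule
print(expand("hello"))   # hello (no code defined; passed through)
```

In a real system the table is personalized for each user, and the trade-off is memorability of codes versus keystrokes saved.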
Symbols Symbols and icons may be used to represent words. In the case of single-meaning symbols, one symbol represents one word. Symbol representation of words may result in keystroke savings over the course of a conversation. Since an individual’s vocabulary typically includes several thousand words, however, this method requires the availability of an equal number of symbols. Sequencing two or three icons to access a word is another keystroke-saving approach that offers rate enhancement value for some individuals with severe physical disabilities, such as cerebral palsy (McNaughton et al., 2002). Use of multimeaning icons or semantic compaction (Baker, 1986; Chang et al., 1992) entails combining teachable icon sequences based on semantic relationships to represent a word. This representation method involves sequencing a small set (single display) of icons to reduce navigation among symbols/text.
Messages Alpha, alphanumeric, letter-category, and numeric codes and single- and multimeaning symbols/icons also may be used to represent phrases and messages. Some individuals use salient letter codes to indicate the relevant message content (e.g., OD = please open the door for me). Color encoding also may represent contextual (e.g., red = body parts, blue = work supplies) or linguistic (e.g., green = nouns, orange = verbs) categories (Thistle and Wilkinson, 2009). Communication software programs using symbols and icon sequences may contain prestored messages. Thus, one picture symbol or icon sequence may produce a greeting, provide the individual’s name/address, or access a prestored presentation for a workplace meeting or conference.
Prediction Letter, word, and message prediction involves active retrieval in which options change according to the portion of the word/message already formulated. As with the now commonplace texting keyboard on many smartphones and tablets, algorithms predict content based on the probability of letter occurrence, letter combinations, and linguistic context to provide a set of options for the target message. Types of prediction include word completion, next-word prediction, linguistic prediction, message prediction, and icon prediction (Dowden, 2016).
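Word completion of the kind described above can be approximated by filtering a frequency-ranked vocabulary by the prefix typed so far. The sketch below uses an invented vocabulary with invented frequency counts; production systems also weight letter combinations and linguistic context, as noted above.

```python
# Invented frequency-ranked vocabulary; real systems learn these counts
# from the user's own message history and from language models.
VOCAB = {"work": 120, "word": 95, "world": 60, "schedule": 45, "worth": 20}

def predict(prefix: str, n: int = 3) -> list[str]:
    """Return up to n candidate completions for a typed prefix,
    most frequent first."""
    matches = [w for w in VOCAB if w.startswith(prefix.lower())]
    return sorted(matches, key=lambda w: -VOCAB[w])[:n]

print(predict("wor"))  # ['work', 'word', 'world']
print(predict("sch"))  # ['schedule']
```

Each keystroke re-ranks the candidate set, so the user can select a full word after typing only a few letters.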
Hardware Components
AAC technologies offer myriad hardware options for the message display, selection method, and output and input.
Display
The AAC system components used to present language content to the person with communication needs are commonly referred to as
the display. AAC displays generally are of one of four types: fixed, dynamic, visual scene, or hybrid (Beukelman and Mirenda, 2013).
Fixed display In fixed, or static, displays, graphic symbols are displayed in fixed locations, typically in a grid layout with symbols shown in cells that have fixed locations (Drager et al., 2003). The number of symbols or messages that a fixed display can present to the individual is limited (fewer than 150 in the largest displays) because each available item is visible at all times; as a result, some AAC systems utilize multiple, often hierarchical displays to accommodate various communication needs, environments, and listeners (Bruno and Trembath, 2006; Hochstein et al., 2003). Nonelectronic-aided AAC systems and most digitized AAC technologies employ fixed displays.
Dynamic display AAC technologies typically employ computer-based dynamic displays that change to a new set of symbols (pages) automatically when activated. Multiple levels of displays accommodate myriad individual vocabulary and linguistic needs (Drager et al., 2003). As with fixed displays, the majority of dynamic displays are presented in a grid or matrix, with items arranged in rows and columns. In contrast with fixed displays, however, the number of symbols or messages that a dynamic display can present to the individual is not limited by what is visible; such displays offer symbols that are not visible but can be accessed through page linking (Drager et al., 2003). Dynamic displays provide a range of organizational strategies that make complex language constructions possible (Bruno and Trembath, 2006; Drager et al., 2003). Some digitized AAC technologies and most synthesized devices employ dynamic displays (Beukelman and Mirenda, 2013).
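Page linking in a dynamic display can be modeled as a graph of named pages, where each cell maps either to a spoken message or to a link that replaces the current page. A minimal sketch follows; the page names and vocabulary are invented for illustration.

```python
# Invented page set: each cell maps to a spoken message ("say:...")
# or to a link that loads another page ("goto:...").
PAGES = {
    "home":  {"food": "goto:food", "hello": "say:Hello!"},
    "food":  {"drink": "goto:drink", "pizza": "say:I'd like pizza, please."},
    "drink": {"water": "say:Water, please.", "back": "goto:home"},
}

def select(page: str, cell: str) -> tuple[str, str]:
    """Activate a cell: return ('say', message) or ('page', next_page)."""
    action = PAGES[page][cell]
    kind, _, value = action.partition(":")
    return ("say", value) if kind == "say" else ("page", value)

print(select("home", "food"))   # ('page', 'food')
print(select("food", "pizza"))  # ('say', "I'd like pizza, please.")
```

This is why dynamic displays are not limited by visible space: the reachable vocabulary grows with the number of linked pages rather than the number of on-screen cells.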
Visual scene display In contrast with the grid format of many dynamic displays, visual scene displays provide context for the user by integrating a picture, photograph, or virtual environment within a visual image (e.g., showing people, objects, and events against the background in which they occur) (Beukelman et al., 2015; Dietz et al., 2006; Thistle and Wilkinson, 2015). Visual scene displays may be used across a wide range of AAC technologies (e.g., photographic images placed on a digitized device, digital images placed on devices that support programming “hot spots”). Visual scene displays support interactive communication across a variety of ages and disability groups (Beukelman et al., 2015; Brock et al., 2017; Ganz et al., 2015; Gevarter et al., 2014; Therrien and Light, 2016; Ulmer et al., 2016).
Hybrid display Hybrid displays typically consist of a fixed display with a dynamic component (e.g., indicator lights that highlight items, word prediction on alphabetic displays) (Beukelman and Mirenda, 2013). In other
cases, a visual scene display may be embedded with dynamic hotspots that move the display away from the visual scene to a text or grid display (Gevarter et al., 2016). Hybrid displays may be used across the full range of AAC technology types.
Selection Method
AAC systems typically provide two methods for selecting elements on the display and producing messages: direct selection and scanning.
Direct selection Direct selection, available as an option on most AAC systems, allows the user to select a desired item without intervening supports. The most common form of direct selection involves a finger point or pressure (i.e., physical contact); however, direct selection may also involve pointing with another body part or activating an item without physical contact (e.g., head/mouth stick, eye gaze, head mouse, eye-safe laser) (Ball et al., 2010b; Fager et al., 2012; Hanson et al., 2016). Brain–computer interfaces show promise but are still under study; they are currently available only in selected clinics primarily as components of research programs (Akcakaya et al., 2014; Barreto et al., 2000; Fried-Oken et al., 2015; Hill et al., 2014; Wolpaw et al., 2000).
Scanning Scanning is an alternative selection method commonly used by individuals who are unable to choose items directly, most commonly because of impaired motor control (Beukelman and Mirenda, 2013). Scanning involves presenting items on the display by moving progressively through a predetermined pattern (e.g., row-column, circular, linear, group-item). Scanning requires that the user wait while the system sequentially presents undesired items before reaching the item of choice; a switch is used to activate the scanning movement and select the item/message (Beukelman and Mirenda, 2013).
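The cost of row–column scanning can be illustrated by counting the highlighting steps and switch activations needed to reach a target cell. The sketch below is a simplification under stated assumptions (one switch, rows scanned top to bottom, then columns left to right); real systems add timing, re-scan, and error-correction options.

```python
def row_column_steps(row: int, col: int) -> tuple[int, int]:
    """For a 0-indexed target cell, return (scan_steps, switch_presses)
    under simple row-column scanning: the system highlights each row in
    turn until the user presses the switch, then each column in that row."""
    scan_steps = (row + 1) + (col + 1)  # rows highlighted, then columns
    switch_presses = 2                  # one to pick the row, one the cell
    return scan_steps, switch_presses

# Selecting the item in row 3, column 5 of a grid:
steps, presses = row_column_steps(3, 5)
print(steps, presses)  # 10 2
```

Because scan steps grow with position in the grid, vocabulary layout (placing frequent items in early rows and columns) directly affects communication rate for scanners.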
Output Capabilities
Aided AAC systems provide a variety of message output modes, including digitized and synthesized speech, nonelectronic-aided symbols, and print (Beukelman and Mirenda, 2013). Visual output (e.g., aided symbols or text viewed on a display) is used primarily to support messages when natural, digitized, or synthesized speech is not understood or available. Individuals with impaired natural speech may use synthesized or digitized speech to gain listeners’ attention, produce utterances at a distance, communicate in group conversations, and talk on the telephone, among many other activities (Alamsaputra et al., 2006; Hanson et al., 2016; Hill, 2010).
AAC technologies provide speech output using digitized, synthesized, or combined digitized and synthesized speech (Drager et al., 2010). Digitized and synthesized speech incorporated into electronic communication devices has resulted in significant advances in AAC (Alamsaputra et al., 2006).
Digitized speech Digitized speech refers to human voice stored as segments of sound waves (Schlosser, 2003). It consists of natural speech that has been recorded with a microphone, converted to a digital signal, and stored and retrieved in word or message form (Beukelman and Mirenda, 2013). Message banking, a strategy that involves storing digitized speech, is used primarily by individuals who, retaining intelligible speech but anticipating its loss (e.g., because of degenerative disease or head/neck cancer), want to record their own voice for use in future communication systems (Costello, 2011, 2014). Message banking strategies are not effective for individuals who are referred late for AAC evaluation, already presenting with moderate to severe speech impairment (Nordness et al., 2010). Although most technology-based AAC systems provide a feature for digitally recording a message, many provide a limited amount of storage space for such recordings (see Table 6-1). Prerecorded stored messages cannot be modified for spontaneous or real-time communication.
Synthesized speech Synthesized speech is computer-generated according to a set of rules in a mathematical algorithm (Drager et al., 2010). Text-to-speech synthesis, a common method for generating synthetic speech for AAC technologies, involves extracting speech sound components from words and then combining them to form natural-sounding synthetic voices (Beukelman and Mirenda, 2013; Drager et al., 2010). In contrast with the stored messages from digitized speech, synthesized speech systems allow the user to generate speech for each utterance and therefore provide greater novel message flexibility.
New options for creating a personalized synthetic voice that combine components of digitized and synthesized output have emerged and appear promising. However, the intelligibility and effectiveness of these options currently remain under study (Bunnell et al., 2015; Jreige et al., 2009; Patel et al., 2015; Yamagishi et al., 2012).
CLINICAL CONSIDERATIONS
Overall, people who rely on AAC for daily interactions value situational flexibility, reliability, learning ease, and intelligibility of output in their communication devices, as reflected in characteristics described by individuals who use AAC and their facilitators as research priorities (O’Keefe et al., 2007). In the study by O’Keefe and colleagues (2007),
AAC consumers with cerebral palsy and spinal cord injury indicated as priority needs (1) preparing people who use AAC to participate and have success in social relationships (e.g., friendships, dating) and employment; (2) improving AAC technologies and optimal, rapid service delivery; and (3) improving literacy among people who use AAC (O’Keefe et al., 2007). SLPs typically measure the function of speech subsystems, including intelligibility, comprehensibility, and efficiency, in conducting evaluations (ASHA, 2004c). One method for evaluating AAC systems and factors impacting their use involves comparing the effectiveness of natural speech with AAC options. Evidence from pediatric populations suggests that the extent of AAC use is directly related to the extent of communication need that is managed effectively with natural speech (Oommen and McCarthy, 2014). Although research is lacking for the full range of populations that may benefit from AAC (Light and Drager, 2007), considerations for comparing natural speech and technology-based voice output are summarized below for message and overall communication intelligibility, comprehensibility and listener comprehension, efficiency, and effectiveness.
Intelligibility
Broadly considered the measured understandability of speech, intelligibility is defined as the degree to which a person’s natural speech is understood by a communication partner (Yorkston et al., 1992). Intelligibility is a key criterion for determining the severity of speech-production disorders (Yorkston et al., 2010b), as reduced intelligibility may critically limit vocational, educational, and social participation (Hustad, 2008). It follows that intelligibility is a key criterion for AAC output, as reductions result in increases in communication breakdowns (Ball et al., 2001, 2002). Many AAC speakers retain some functional natural speech with limited degrees of intelligibility; as a result, they may use AAC technology in some speaking situations and natural speech in others.
Standard procedures for intelligibility assessment commonly involve transcription (identification) of individual sounds, words, or sentences from decontextualized utterances (Yorkston et al., 1992). Each intelligibility assessment type yields task-specific information: phoneme intelligibility measures the understandability of discrete sound productions; word intelligibility measures the understandability of single-word utterances; and sentence intelligibility measures speech production in longer utterances characteristic of typical occupational interactions (Kent et al., 1989; Yorkston and Beukelman, 1981; Yorkston et al., 1992). Research has identified factors contributing to the intelligibility of synthesized speech, including quality of synthesis (Greene et al., 1986; McNaughton et al., 1994), message length (Mirenda and Beukelman, 1987, 1997), and rate
(Higginbotham et al., 1994). Continued advances in the quality of synthesized speech are expected to result in output that is increasingly comparable to natural speech.
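Transcription-based intelligibility scoring of the kind described above is typically reported as the percentage of target words a listener transcribes correctly. The sketch below shows only the core arithmetic, using simplified word-by-word matching; standardized clinical tools (e.g., sentence intelligibility tests) use controlled stimuli and more detailed scoring rules.

```python
def intelligibility(target: str, transcript: str) -> float:
    """Percent of target words correctly identified by the listener,
    compared word by word in order (a simplification of clinical scoring)."""
    t_words = target.lower().split()
    h_words = transcript.lower().split()
    correct = sum(1 for t, h in zip(t_words, h_words) if t == h)
    return 100.0 * correct / len(t_words)

# Listener transcribed 4 of 5 words correctly:
print(intelligibility("the meeting starts at noon",
                      "the meeting starts at note"))  # 80.0
```

The same arithmetic applies whether the signal is natural, digitized, or synthesized speech, which is what makes these scores useful for comparing AAC output with natural speech.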
Comprehensibility and Listener Comprehension
In contrast with intelligibility, in which the speech signal is extracted from context, comprehensibility is the degree to which speech is understood when combined with available relevant information (e.g., linguistic context, physical environment, gestures, and conversational topic) (Duffy, 2013; Yorkston et al., 1996). Assessment of comprehensibility, like that of intelligibility, involves transcribing verbal productions, except for the supplementation of verbal productions with contextual information (Hustad, 2008). When speech impairment is present, the addition of this contextual information usually results in comprehensibility scores superior to those for intelligibility (Hustad, 2008). Research has identified factors contributing to the comprehensibility of synthesized speech, including environment (noise, quiet) (Nelson et al., 2005), native language (Alamsaputra et al., 2006; Reynolds et al., 1996), message predictability and meaningfulness (Hoover et al., 1987; Oshrin and Siders, 1987; Slowiaczek and Nusbaum, 1985), and linguistic context (Beukelman and Mirenda, 2013; Drager and Reichle, 2001; Marics and Williges, 1988). Quantifying the comprehensibility of AAC output involves providing structured opportunities for transcription of messages in specific, functional contexts that are relevant to the individual (e.g., employment environments, topics, messages).
Separately, comprehension measures the ability of listeners to interpret the meaning of messages produced, which is evaluated by examining a listener’s ability to answer questions about the message or utterance content (Hustad, 2008). Unlike scores on intelligibility and comprehensibility, comprehension scores do not reflect the severity of speech impairment; indeed, these scores tend to be higher than intelligibility scores, particularly for individuals with moderate to severe speech disability (Hustad, 2008).
Efficiency
Communication efficiency, often quantified by measures of speaking rate (i.e., intelligible words per minute [wpm], comprehensible wpm), refers to the rate at which understandable information is conveyed (Duffy, 2013). Measures of communication efficiency are key indicators of perceived normalcy of communication in social contexts because intelligibility efficiency reflects functional limitations, while comprehensibility efficiency reflects the ability to participate effectively in daily interactions (Duffy, 2013). Little
research has addressed efficiency of comprehension for natural, digitized, or synthesized speech, although it likely influences such participation.
Typical speaking rates for people unaffected by speech-language disability (i.e., unimpaired intelligibility and comprehensibility) vary by task: paragraph reading rates range from 160 to 170 wpm (Fairbanks, 1960); sentence reading rates are approximately 190 wpm (Yorkston and Beukelman, 1981); and a much wider range of 150 to 250 wpm is noted for conversational utterances (Goldman-Eisler, 1986) because these utterances are influenced by the cognitive load of the task at hand (Yorkston et al., 2010b). Speakers with dysarthria tend to speak at slower rates, reflected in mean syllable durations of 246-249 milliseconds, compared with a typical 198 milliseconds for unimpaired speakers (Darley et al., 1975; Yorkston et al., 2010b).
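The syllable-duration figures above can be related to the speaking rates reported earlier with a rough back-of-envelope conversion. The sketch below is purely illustrative: the syllables-per-word average is an assumption introduced here, not a figure from the source.

```python
# Rough conversion from mean syllable duration to speaking rate.
# SYLLABLES_PER_WORD is an assumed average (not from the source) and
# varies with the speaking material.

SYLLABLES_PER_WORD = 1.5

def wpm_from_syllable_ms(ms_per_syllable):
    """Approximate words per minute given a mean syllable duration in ms."""
    syllables_per_minute = 60_000 / ms_per_syllable
    return syllables_per_minute / SYLLABLES_PER_WORD

print(round(wpm_from_syllable_ms(198)))  # unimpaired ~198 ms/syllable -> ~202 wpm
print(round(wpm_from_syllable_ms(246)))  # dysarthric ~246 ms/syllable -> ~163 wpm
```

Under this assumption, the 198-millisecond unimpaired figure lands near the roughly 190 wpm sentence-reading rate cited above, while the dysarthric durations imply a markedly slower rate.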
Efficiency of AAC output also is impacted by measures of rate, which are influenced by the same factors associated with spoken messages but also by the interaction with AAC technology and by physical (e.g., motor, sensory, perceptual) ability and access methods (Higginbotham et al., 2007). AAC communication rates reflect such factors as message formulation and message delivery time. Communication rates 15-25 times slower than those of natural speech are common for AAC (Beukelman and Mirenda, 2013), and a rate of roughly 10 wpm has been reported even when alphabet-based rate acceleration strategies are used (Newell et al., 1998). A significant objective in selecting an individual’s optimal AAC system is to bring message communication rates closer to those typical of natural speech, permitting more efficient communication (Wisenburn and Higginbotham, 2009). The ideal balance of rate and content for AAC selection remains under study (Haidet et al., 2012; Lesher et al., 1998; Trnka et al., 2008; Wisenburn and Higginbotham, 2009).
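The cited slowdown factor implies a rough range for AAC communication rates. The sketch below is illustrative arithmetic only: it simply applies the 15-25 times slowdown to the 150-250 wpm conversational range for natural speech given above.

```python
# Illustrative arithmetic only: apply the cited 15-25x AAC slowdown to the
# 150-250 wpm range reported for natural conversational speech.

natural_wpm = (150, 250)   # natural conversational speaking-rate range
slowdown = (15, 25)        # reported AAC slowdown factors

slowest = natural_wpm[0] / slowdown[1]  # slowest natural rate, largest slowdown
fastest = natural_wpm[1] / slowdown[0]  # fastest natural rate, smallest slowdown
print(f"Implied AAC rate: {slowest:.0f}-{fastest:.0f} wpm")
# -> Implied AAC rate: 6-17 wpm
```

The implied 6-17 wpm range brackets the roughly 10 wpm rate reported even with alphabet-based rate acceleration strategies.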
Effectiveness
Considered a component of participation in daily interactions, self-perceived communication effectiveness also may reflect efficiency (McAuliffe et al., 2010). Ratings of communication effectiveness have demonstrated a positive correlation with intelligibility (Ball et al., 2004). However, efficiency and intelligibility factors are not the sole contributors to effectiveness (Donovan et al., 2008; Dykstra et al., 2015; McAuliffe et al., 2010). In some research, speakers with ALS and their listeners have expressed similar perceptions of communication effectiveness (Ball et al., 2004), whereas speakers with Parkinson’s disease and traumatic brain injury have perceived their communication effectiveness as higher relative to listeners and expressed the view that intelligibility is not significantly related to effectiveness (Donovan et al., 2008; McAuliffe et al., 2010). Additionally, speakers with ALS have
rated their communication effectiveness as poor even though the intelligibility of their utterances remained above 90 percent, potentially a reflection of the influence of effort and fatigue on perceptions of communication effectiveness (Ball et al., 2004). A research focus on the effectiveness of AAC communication is emerging (Beukelman et al., 2015; Fried-Oken et al., 2012; Higginbotham et al., 2007). Focus group participants in a study by O’Keefe and colleagues (2007, p. 95) highlighted the need for emphasis on aspects of participation, stating, “Don’t make the use of technology an end goal; instead show me how to communicate satisfactorily to get and keep a job” and “Don’t make the design and use of [AAC] our center of attention; concentrate on how I use communication to find a wife.”
EVALUATION AND MONITORING
The ultimate goal of an AAC assessment is to recommend an AAC system and design treatment that will assist the individual in achieving “the most effective interactive communication possible” (ASHA, 2016b). Successfully matching an individual to the appropriate communication technology is a complex process. The following subsections describe required elements of comprehensive evaluation and monitoring of the achievement of functional communication goals.
Team Approach
The dynamic and multidimensional nature of disability results in complexities that are best addressed by interdisciplinary assessment teams (Fried-Oken and Granlund, 2012; Raghavendra et al., 2007; WHO, 2002). The members of the AAC team vary depending on individual user abilities, expectations, and communication needs and the availability of services. At a minimum, AAC team members include the individual with a communication disability; key communication partners (e.g., caregivers, partner, adult children); an SLP; and the individual’s physician (Beukelman et al., 2008; Binger et al., 2012; Dietz et al., 2012). The SLP typically is the lead professional in the AAC team evaluation process and is likely to provide the intervention with AAC technology. The roles of the team members in AAC may be filled by many different people and may overlap. Importantly, the team approach makes the individual and family central contributors, interacting with the rest of the team to ensure their full participation and information sharing (Binger et al., 2012; Hill et al., 1998). Table 6-3 illustrates the roles of personnel involved in AAC assessment and treatment beyond the individual with communication needs, who is involved in every aspect listed in the table (Beukelman et al., 2008; Binger et al., 2012).
TABLE 6-3
Personnel Involved in AAC Assessment and Treatment
Personnel | Assessment and Treatment Involvement
---|---
AAC Finder | Identify and refer; report case history
General-practice speech-language pathologist | Identify and refer; acquire and evaluate case history; evaluate speech-language capacity and related domains; identify and recommend AAC options; acquire funding; establish and provide treatment
AAC clinical specialist | Evaluate case history; determine diagnostic questions^a; identify and recommend AAC options; acquire funding; establish and provide treatment; provide AAC technical support
AAC facilitator (communication partner) | Identify and refer; report case history; contribute to diagnostic questions; participate in evaluation and treatment; advocate for individual; provide support across transitions; provide AAC technical support
Collaborating professional (e.g., occupational therapist; physical therapist; vision, hearing, rehabilitation medicine specialists) | Identify and refer; report/evaluate case history; contribute to diagnostic questions; participate in evaluation and treatment; access troubleshooting
AAC manufacturer/vendor | Identify AAC options; assist in evaluation process; provide equipment loans/trials; facilitate funding of selected AAC; provide AAC devices and accessories; provide AAC technical support
AAC technology training agency | Facilitate AAC evaluation and intervention; identify and recommend AAC options; establish and provide treatment; provide equipment loans/trials; provide AAC training; provide AAC technical support
^a AAC clinical specialists may perform testing associated with a collaborating professional (e.g., vision, hearing, physical skills, cognition) as related specifically to communication and access.
Assessment
An AAC assessment requires integration of a broad scope of information to determine an appropriate recommendation and its implementation (Beukelman and Mirenda, 2005). The complexity of the assessment is influenced by such factors as the user’s characteristics (e.g., skills, communication needs, environments), AAC team dynamics, rapidly occurring changes in technology, limited preprofessional training, and limited research on AAC clinical decision making (Dietz et al., 2012).
AAC assessment identifies daily communication needs, details functional communication goals, outlines individual/family supports, and generates treatment recommendations (Beukelman and Mirenda, 2013; Light and McNaughton, 2013; Williams et al., 2008). To enable comprehensive participation, evidence supporting communication needs for educational endeavors, vocational training, transition activities, and employment is integrated into the assessment. In addition, many funding sources require demonstration of medical necessity for communication; therefore, communication needs for medical/health interactions are often identified. A sequential process of AAC assessment includes: (1) identifying communication needs and completing subsequent referral, (2) collecting information relevant to communication status and needs, (3) determining diagnostic questions and communication goals, (4) developing and completing evaluation procedures, (5) ascertaining and recommending AAC interventions, (6) securing funding, and (7) repeating steps 2-6 as additional needs arise (Binger et al., 2012). Assessments typically involve dynamic procedures designed to identify individual skills and strengths that can be used to support functional communication, gauge the impact of modifications on performance, and determine effort required for successful interactions (King et al., 2015). Although various models, frameworks, and guidelines are used in AAC assessments, feature matching (i.e., matching the user to AAC technology) and system trials are standard components (Beukelman and Mirenda, 2013; Cook and Polgar, 2008; Hill, 2004; Scherer and Craddock, 2002; Zabala et al., 2005). A brief description of typical assessment processes used to determine communication abilities, needs, and AAC options follows.
Medical and communication diagnosis, prognosis, communicative needs, and functional abilities provide the basis for matching individuals with appropriate AAC systems by creating a customized system that supports communication based on the individual’s skills. There are no prerequisite skills (e.g., cognitive, motor, language/literacy) for using AAC technologies (Light and McNaughton, 2012; Snell et al., 2010).
Demographics, Background, and Communication Needs
In addition to demographic and diagnostic information, the individual’s educational, vocational, and previous clinical experiences are noted; this history informs assessment procedures and the AAC options presented. For example, an individual with long-standing developmental disabilities (e.g., cerebral palsy, cognitive impairment) may have previous experience with AAC systems that will reveal prior successes/failures with specific devices, while other individuals may have no prior AAC experience. For some individuals (e.g., those with ALS), a delay in referral for an AAC assessment creates an urgency to identify an immediate means for communicating (Nordness et al., 2010). Interviews and questionnaires provide information about the individual’s (and his or her primary communication partners’) values, beliefs, motivations, and expectations regarding AAC; current communication status and communication necessary to support daily communication activities; and environmental factors that may influence successful AAC implementation (Binger et al., 2012; Romski and Sevcik, 2005).
Speech/Oral Motor Skills
Individuals are candidates for AAC intervention if their natural speech is not sufficiently functional to meet all of their daily communication needs (Beukelman and Mirenda, 2013). A person may find that his or her dysarthric natural speech is functional for interactions with a spouse at home in a quiet or context-rich environment, but that this same speech will not support vocational interactions; therefore, natural speech fails to meet all of the person’s communication needs. Unintelligible speech or significantly reduced rate of speech influences functionality, thus supporting a person’s need for AAC technology.
Cognitive–Linguistic Considerations
Beyond current language ability, conversational needs, and communication contexts, AAC techniques and symbols and/or strategies are evaluated to identify optimal communication performance (Hill and Corsi, 2012; Hill et al., 2010; Romich et al., 2005). Frequently, linguistic evaluations are conducted prior to the AAC assessment as part of a standard speech-language evaluation. Important considerations include the individual’s
- receptive (comprehension), expressive (speaking and writing), and pragmatic (social) language skills, which influence the selection of an optimal language representation and messaging system; and
- cognitive and executive functioning skills (e.g., attention, focus, orientation, organization, and sequencing), which influence the selection of an optimal AAC system and the individual’s functional and strategic implementation of that system.
Fine/Gross Motor Skills and Mobility
Assessment of physical skills in the context of an AAC system includes identifying input selection techniques, transporting the device, and ensuring proper seating and positioning (Cooper et al., 2009; Costigan and Light, 2010). The identification of appropriate input selection techniques is influenced by body or extremity (e.g., finger, hand, knee, toe, head) range of motion, accuracy and consistency of movement, degree of force required to activate the device or a switch, the speed at which the individual can activate and release the device or switch, and the length of time and frequency with which the individual can repeat the movement before becoming fatigued. Assessment of input selection may entail evaluating the person’s access to the device using available movements (e.g., digit of the hand, foot, eye gaze, stylus, mouse or head mouse, joystick, head stick or mouth stick). Many device features may be modified to improve selection accuracy and efficiency (e.g., accept or release time, display size or orientation, touch guides, key guards). Likewise, assessment includes identifying impacts of such modifications on communication performance, device transport (e.g., weight, size), and effective interactions. When direct selection is ineffective as an access technique, the individual’s ability to scan using one or multiple switches is assessed, which entails evaluating access to AAC technologies indirectly by means of switch activation. Options include activation using body part movement or function (e.g., hand or arm, foot or leg, head, blink, motion, voice). Device features may be modified to improve selection accuracy and efficiency (e.g., scan method, scan rate, highlighting, repeat scans), and the assessment includes comparing positive and negative impacts of the various options.
Although many individuals who communicate with AAC ambulate independently, many have complex disability that requires the use of rollators, wheelchairs, or alternative seating and positioning. Many changes in AAC technologies that have occurred in recent years influence how they are both used and transported (McNaughton and Light, 2013). Alternative seating may require a means of mounting the AAC system to enable access as the person is positioned throughout the day and during transport; the AAC team makes such decisions about enhanced access as part of the assessment process (Beukelman et al., 2008; Binger et al., 2012). Mounting systems can be fitted to a wheelchair, and commercial mounting products offer a variety of features (e.g., swing-away, folding, rolling). The assessment includes
making comparisons to identify features that will benefit an individual or are needed to maximize use of the AAC.
Vision and Hearing
Assessment of functional vision and hearing influences the selection and use of an AAC system: visual skills inform decisions regarding the size, type, and placement of symbols, while hearing informs decisions regarding voice output system needs (Beukelman and Mirenda, 2013; Hill, 2010; Hill and Corsi, 2012). Assessment includes identifying the appropriate number of locations on a display to accommodate vision abilities and needs and controls for auditory output (e.g., volume, voice output, speech rate, or pitch).
Choice of an Appropriate AAC Technology for the Individual
AAC assessment and prescription entails a systematic approach to matching an individual’s abilities, communication needs, and expectations to specific AAC features. The assessment team identifies the user’s current communication needs and then attempts to anticipate the future by considering potential changing needs and skills (e.g., transitions, skill development, degeneration). The intent is to optimize functional interactions in all communication situations (ASHA, 2004b; Glennen and DeCoste, 1997; Scherer, 2002, 2005; Scherer and Craddock, 2002).
Previous AAC Experience
Identification of previous AAC interventions is helpful in determining categories and features of AAC devices that may meet the individual’s abilities, needs, and expectations. The effectiveness of previously implemented features also can be evaluated. In some cases, updates to AAC technologies may impact performance.
Selection of AAC Device Features
Informing the individual and family of various AAC technology options is a critical step in feature matching, helping to remove bias from the selection process. The taxonomy of AAC devices presented earlier and in Annex Tables 6-1, 6-2, and 6-3 at the end of this chapter reveals the complexity of AAC features and their combinations. The AAC team seeks to identify AAC device features that support identified communication goals, which may involve medical, social, educational, and/or vocational interactions. Device features identified as important to effective AAC implementation by SLPs and individuals who use AAC include ease of use (e.g., efficiency,
reliability, suitability, adjustability), design (e.g., comfort, size, portability, durability), performance (e.g., battery life, rapid use, simple maintenance, rechargeable during use), integrated software and layout (e.g., ability to find words and messages easily and produce spontaneous messages), and voice output (e.g., rapid speech production, having an alternative output method) (Judge and Townend, 2013; O’Keefe et al., 2007).
The rise in the use of mobile technologies (e.g., smartphones, tablets) in the United States and the vast array of applications available for download have opened the door to the use of such technologies by individuals with complex communication needs (McNaughton and Light, 2013). Mobile AAC technologies may be a good match for some individuals and can offer certain benefits over traditional AAC systems such as SGDs. Often the mobile technologies are smaller and less expensive than traditional AAC systems, and they offer the myriad features typical of such devices that go well beyond the AAC function (McNaughton and Light, 2013). In addition, because they are mainstream technologies, their use as AAC devices promotes social acceptance. As one AAC user noted, “Using an iPad, Blackberry, or iPhone . . . is not another thing that makes me different. It wasn’t using a strange, unfamiliar device to communicate with this group” (Hyatt, 2001, p. 25; McNaughton and Light, 2013). Yet, while ease of access to and social acceptance of AAC mobile technologies are benefits attending such devices, they come with a downside. Focusing on the technologies themselves ignores the most important element of any communication device for someone with complex communication needs—its ability “to facilitate effective communication and fuller participation in society” for that individual (McNaughton and Light, 2013, p. 110). Regardless of the category of devices being considered, whether a variety of SGDs or different mobile AAC technologies, it is important that consumers and providers be aware of all of the available options and engage in a process of evaluation and decision making that will result in matching the individual with the most appropriate device to meet his or her communication needs (McNaughton and Light, 2013).
AAC Technology Trials
Practice with an assortment of AAC technologies that have been matched to the user’s needs optimizes assessment outcomes for both the individual and the AAC team and illustrates the strengths and weaknesses of various options or combinations thereof. The format and methods used for practice, known as trials, are at the discretion of the SLP, the individual and family, and the other AAC team members. Upon completion of system trials, team members reach informed consensus on the optimal system.
Trial Decisions
As the lead professional on the AAC team, the SLP typically selects an array of AAC technologies to be used during trials. Individuals may have suggestions about products they have heard about, seen, and hope to try, which are integrated into the evaluation process to the extent possible (e.g., based on availability, individual access options, and/or appropriate representation). Trials may reflect professional experience and preferences with respect to AAC technology; therefore they tend to be idiosyncratic across SLPs (Glennen, 2000). Selecting AAC technologies for trial includes ensuring that the individual is aware of various options and the personal abilities and communication needs they address. Summarizing trial results may serve to highlight evidence of individual communication performance with the different technologies and features, thus adding support for device selection by providing a personalized performance profile.
Rationale for AAC Technology Selection
Integrating information on clinical implementation, personal performance, and external participation (e.g., in the community, home, and workplace) gleaned from AAC technology trials guides decision making and optimizes the selection of an AAC system. Clinical and personal evidence are based on comprehensive assessment of communication ability and the domains influencing communication, daily communication needs, functional communication goals, and personal preferences. The trial process allows the AAC team to identify specific components, features, and tools necessary to meet the individual’s communication needs.
Monitoring
AAC clinical services are intended to support the myriad communication needs of the augmented communicator (Higginbotham and Engelke, 2013). To enable the user to accomplish the most effective communication, quantitative and qualitative performance measures are gathered intermittently. Performance measurement typically includes examining clinical evidence and communication performance in a clinical setting. Participation and outcome measures also are used to monitor progress toward achieving optimal use of the recommended AAC system (ASHA, 2004b).
AAC outcome measurement involves evaluating AAC technology-based interactions during specific communication activities, then comparing achieved outcomes with the intended results or desired goals. Measuring the effectiveness of AAC communication requires having appropriate measurement instruments and methodologies available (Anderson et al., 2016;
Smith, 1996). Measures of real-time communication functioning and research investigating the impact of AAC technologies on communicative participation are sparse. Various instruments collect measures of satisfaction and self-reported outcomes for various assistive products and technologies; some include AAC, while others can be modified for evaluating AAC results (Anderson et al., 2016; Demers et al., 2002; Jutai et al., 1996; Scherer and Craddock, 2002). Similarly, some AAC devices have a data logging feature that automatically records the communicator’s utterances (Higginbotham et al., 2002; Hill, 2004) and provides a file for analysis and tracking of communication trends (Hill, 2010).
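As a hypothetical illustration of how such a data log might be analyzed for communication trends, the sketch below computes an overall words-per-minute figure from timestamped utterances. The log format, timestamps, and function names are invented for illustration; actual device logs vary by manufacturer.

```python
# Hypothetical sketch: computing an overall communication rate from an AAC
# device data log. The log format and values here are invented for
# illustration; actual device logs vary by manufacturer.
from datetime import datetime

LOG_FMT = "%Y-%m-%d %H:%M:%S"

log = [  # (timestamp, utterance) pairs, as a logger might record them
    ("2016-05-01 09:00:00", "good morning"),
    ("2016-05-01 09:00:45", "coffee please"),
    ("2016-05-01 09:02:00", "thank you very much"),
]

def words_per_minute(entries):
    """Total words produced divided by elapsed minutes across the session."""
    start = datetime.strptime(entries[0][0], LOG_FMT)
    end = datetime.strptime(entries[-1][0], LOG_FMT)
    minutes = (end - start).total_seconds() / 60
    total_words = sum(len(text.split()) for _, text in entries)
    return total_words / minutes

print(f"{words_per_minute(log):.1f} wpm")  # -> 4.0 wpm
```

Tracking such a figure across sessions would let the AAC team monitor trends in communication rate over time, one of the purposes data logging serves.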
Monitoring also includes tracking acceptance and abandonment of AAC devices. High levels of acceptance of AAC technologies have been documented for a variety of individuals across disability groups. Among individuals with communication impairment resulting from traumatic brain injury using devices based on assessment recommendations, acceptance of synthesized high-tech devices was more than 94 percent, and acceptance of digitized low-tech devices was 100 percent (Fager et al., 2006). Likewise, individuals with ALS have been found to have an acceptance rate of synthesized high-tech devices of approximately 96 percent (Ball et al., 2004, 2007). Other individuals with progressive disease also have demonstrated acceptance of AAC technologies (Beukelman et al., 2007a), including those with primary progressive aphasia (Fried-Oken et al., 2015), spinal muscular atrophy (Ball et al., 2012), and dementia (Bourgeois, 1991; Fried-Oken et al., 2015). Among individuals with aphasia, communication partner strategies have been shown to improve acceptance of AAC technologies (Ball and Lasker, 2013).
Prominent factors in acceptance of AAC include intervention timing (e.g., early referral, regular reevaluations, and continual treatment); involvement of communication partners from the onset (e.g., to establish AAC acceptance and use); and ongoing monitoring and adjustment over time (e.g., integration of new strategies, accommodation of changes in technology or personal ability, integration of multiple modalities to capitalize on strengths) (Fried-Oken et al., 2015). Factors potentially influencing acceptance of AAC mobile technologies include functionality and interconnectivity, consumer empowerment in accessing AAC options, social acceptance of AAC in the mainstream, ease of acquisition, and affordability (McNaughton and Light, 2013).
Data are lacking on abandonment, or the inappropriate discontinuation of AAC technology determined appropriate by the AAC team. Factors influencing abandonment have been reported to include communication partners’ belief that they can understand natural speech; insufficient opportunities to engage in conversation; lack of communication partners’ motivation; individual preference for other communication methods; and
insufficient or inadequate education/preparation for use or maintenance of the device (e.g., programming, generation of timely and appropriate messages, system upkeep) (Johnson et al., 2006). Lack of support from a communication facilitator or partner has been cited as influential in the abandonment of AAC interventions for people with traumatic brain injury (Fager et al., 2006). Factors influencing potential abandonment of AAC mobile technologies include a possible shift in the essential focus on communication to a focus on the technology; the lack of a structured assessment process to identify optimal features for communication and support for a wide variety of communication functions and contexts; and access restricted to mainstream options (Kagohara et al., 2013; McNaughton and Light, 2013).
TRAINING AND ADAPTATION
Depending on the complexity of the AAC system, training and adaptation requirements range widely, from systems the individual simply “turns on and uses” to those requiring multiple learning sessions and ongoing system programming to support interactions with new communication partners, topics, and/or situations. Little information is available regarding specific training needs and adaptation times across AAC systems.
The need to rely on AAC may result from a wide range of developmental, physical, cognitive, and/or social impairments (Ball et al., 2010a). For many individuals, these impairments are chronic, requiring AAC across the life span and through numerous life transitions (Lilienfeld and Alant, 2009; Mirenda, 2003). During transitions, AAC strategies and system features that have been effective in one communication environment may become less effective in new ones (Hamm and Mirenda, 2006; Lund and Light, 2006). Likewise, depending on the type of disability, individuals who rely on AAC to communicate may find that existing AAC strategies become less effective, generally as a result of the natural course of the medical condition that has resulted in limited spoken communication (either degenerative or gradually improving and eventually stabilizing) (Beukelman et al., 2007b). Degenerative conditions include ALS, primary progressive aphasia, and dementia, among others. For individuals with degenerative conditions, AAC systems are managed so as to maintain effective communication through speech, language, cognitive, or motor control decline. Improving and stabilizing conditions include stroke/aphasia, traumatic brain injury, cerebral palsy, cognitive impairment, and locked-in syndrome, among others. As with degenerative conditions, AAC supports interactions across multiple settings in the context of improving speech, language, cognitive, or motor control (Beukelman et al., 2007b).
AAC system adaptations occur after the original evaluation for four
primary reasons: (1) physical changes that result in a need for a new access method, (2) cognitive changes that result in a need for new/updated message representations, (3) changes to other equipment that result in a need for new/updated mounting of the AAC system, and (4) living or vocational setting changes that result in the need to interact in new communication contexts with different partners. Individuals with degenerative, improving, and relapsing-remitting conditions require frequent adjustments to AAC access, commonly to accommodate physical changes. Some adjustments to the AAC software presentation may be necessary based on increasing/decreasing vocabulary and linguistic complexity (e.g., someone with Alzheimer’s disease wants to maintain a key skill at work, or someone with cognitive impairment is promoted and wishes to train for new interactions).
AAC Use and Prognosis for Occupational Success
Speech and language disorders encompass a wide range of impairments (e.g., congenital, acquired, degenerative) that affect an individual’s ability to communicate functionally using natural speech (ASHA, 2016a; Perry et al., 2004; Wodka et al., 2013). Communication competence with AAC is complicated not simply by the need to have knowledge of and skills in a native language but also by the need to learn the language software of the AAC system (Drager et al., 2003).
Although data on the subject are sparse, successful employment among individuals who require AAC tends to be dependent on the discrete job requirements and flexibility of employers; successful employment outcomes have been reported for professional jobs with text-centered interactions (i.e., written or text-generated speech) (Fried-Oken, 1993; McNaughton et al., 2001). Individuals have reported as benefits of employment personal expectations (e.g., desire for success, serving as a model for others with disability), finances (e.g., gaining independence), and positive workplace experiences (e.g., enjoying work activities and workplace interactions) (McNaughton and Richardson, 2013; McNaughton et al., 2002). Telework has been shown to have benefits for some individuals who communicate with AAC (e.g., work efficiency, flexible schedule, coworker interactions), although some problems exist (e.g., slow home Internet speeds, need to purchase one’s own office equipment, easy access to distractions) (McNaughton et al., 2014). A growing number of individuals who communicate with AAC (e.g., those with autism spectrum disorder or complex communication needs) expect to participate fully in community and workplace activities but require supports (e.g., training and experience valued by employers, academic and vocational training, identification of jobs that are a good match) (Bryen et al., 2007; Howlin et al., 2005; Light and McNaughton, 2012; McNaughton and Arnold, 2013; Wehman et al.,
2012). AAC technologies have been shown to increase employability ratings relative to natural dysarthric speech, based on perceived credibility, strength, and knowledgeability for highly skilled positions, as well as verbal ability and interactivity (Stern et al., 2017).
Individuals who communicate with AAC can obtain and maintain employment (Hourcade et al., 2004; Light et al., 1996; McNaughton and Bryen, 2007), but this is the case for only a small percentage of these individuals because of a number of barriers to their employment (Feinstein et al., 2013; Light et al., 1996). Successful employment has been documented primarily with government agencies and advocacy organizations (McNaughton et al., 2002). Because AAC technologies are subject to breakdown, they require technical support and repair, during which access to communication is lost; such a loss of effective communication, however temporary, impacts an individual’s ability to fulfill work responsibilities.
Language, literacy, and education are critical factors for the employment of individuals with physical disabilities, and communication competence for basic workplace interactions is essential for employment of individuals with developmental disabilities (Collier et al., 2012; McNaughton et al., 2002). Development of these skills must begin at an early age if academic, social, and communication skills are to be integrated successfully into the workplace (McNaughton et al., 2002).
Interpersonal communication (e.g., responding to others, participating in conversations, putting others at ease) is an important work-related social-relational skill (Light and McNaughton, 2014). One model of communication competence cites linguistic, operational, social, and strategic abilities, as well as motivation, attitude, confidence, and resilience, as influencing success with AAC technology (Light, 2003; Light and McNaughton, 2014; Thistle and Wilkinson, 2013). Training in social-relational interactions can have a positive impact on communication competence (Kent-Walsh and McNaughton, 2009; Light et al., 1999). Skill in such interactions is important for the communication partners of AAC speakers as well, yet most members of a community are unlikely ever to have had a conversation with someone who communicates with AAC. The lack of partner skill may limit communication effectiveness as much as, if not more than, the AAC technology. Indeed, as noted above, issues related to partner training and AAC technology supports are associated with abandonment of the technology (Johnson et al., 2006).
Integration of AAC software into mainstream technologies can enable easy and rapid interactions that are required for employment, particularly telework (AAC-RERC, 2011), while also providing access to a wide variety of other information (e.g., online services, entertainment, education, health care, public services, employment, health and safety, tools) (Shane et al., 2012). Barriers to control of mainstream technologies may be encountered by individuals with physical limitations who are unable to perform certain
movements (e.g., swipe, pinch, use a keyboard or touch screen) without integration of the alternative access available through the AAC technology or implementation of the adapted access options that are increasingly available (e.g., accelerometers, eye gaze, pattern recognition) (Shane et al., 2012).
Reduced communication rates associated with AAC likely interfere significantly with communication interactions, particularly in educational and employment contexts with speakers accustomed to exchanging information at a rapid pace (Higginbotham et al., 2007; McNaughton and Bryen, 2007). Even if an individual is matched with an appropriate device, receives extensive training, and becomes competent in using an AAC system, he or she may not engage adequately in a real-time discussion in a board room because of limitations imposed by the interrelationship among the method of communication; the AAC technology features; and the individual’s physical disability, cognitive/linguistic skills, and skills in interacting with a communication partner (Higginbotham et al., 2007). Similarly, various service industry positions require a certain (as yet unestablished) interaction pacing to sustain engagement. Communication inefficiencies (reduced comprehensibility) and message timing limitations (time required to formulate a message) interfere with effective communication on the part of many individuals who rely on AAC to communicate (Hanson et al., 2016; Rodriguez et al., 2016; Trnka et al., 2008). Communication applications offering various rate enhancement features and strategies may not generate utterances rapidly enough to support full participation (Newell et al., 1998), and little research published to date supports the notion that word prediction enhances rate (Yang et al., 2009). Other human factors, moreover, such as increased visual monitoring and motor control, influence communication rates when rate enhancement strategies are employed (Beukelman and Mirenda, 2013).
With few exceptions, digitized speech in AAC is associated with greater intelligibility relative to synthesized speech (Drager et al., 2006). Research has demonstrated that low-quality synthesized speech is sufficiently inferior to human speech to have significantly compromised value for functional AAC; however, the quality of synthesized speech has shown dramatic improvements in recent years (Drager and Reichle, 2001; Fucci et al., 1995; Venkatagiri, 2003). Still, multiple investigations have demonstrated that digitized and synthesized systems are not sufficiently intelligible for all listeners in all environments (Alamsaputra et al., 2006).
Communication with Natural Speech: Effects on Prognosis
Effective speakers produce appropriate messages and are active and efficient in relaying them to control, influence, and direct the environment (Yorkston et al., 2010b). People use speech in their daily environments and
have individually unique speaking demands that vary based on such factors as employment, life situation, recreational and community involvement, and particular communication preferences (Anderson et al., 2016). And evidence suggests that communication impairments often result in loss of independence and reduced quality of life (Müller et al., 2001).
The impact of employment cannot be overstated. Employment plays a key role in socioeconomic status, personal self-image, and quality of life (Blackstone, 1993; McCarthy, 1986; McNaughton et al., 2001). Emerging evidence indicates that perceived hireability may be limited when individuals communicate with even mild dysarthric natural speech instead of using AAC technologies for communication (Stern et al., 2017). Evidence indicates further that individuals who work for pay tend to report higher speech usage than those who are nonworking; indeed, a large percentage (74 percent) of those working for pay rank speech usage as the most important activity for work and describe it as either “extremely” or “very” important to their work (Anderson et al., 2016). It is difficult, however, to identify the need for communication associated with various jobs. Positions labeled as requiring no verbal communication may nonetheless have a speaking requirement that was not identified by the employer (e.g., a surveillance system operator may watch monitors to prevent shoplifting in a business, but in some way he or she must report incidents when observed).
Social Security Administration Disability Evaluation: Natural Speech and AAC
Based on regulations, the U.S. Social Security Administration (SSA) considers both natural speech and speech supported by AAC in disability determinations. SSA disability evaluation considers “the use of speech by any means and includes the use of mechanical or electronic devices” in determining whether an individual’s speech disorder is “severe enough to prevent an individual from doing any gainful activity” (SSA, n.d., 2.00 Special Senses and Speech). The category of impairment defined as “loss of speech due to any cause, with inability to produce by any means speech that can be heard, understood, or sustained” relates to persistent ineffective speech or communication (e.g., SSA, n.d., 2.09 loss of speech, 11.04A aphasia), significant interference with communication (e.g., SSA, n.d., 11.07 cerebral palsy), or unintelligible speech (e.g., SSA, n.d., 11.11 post-polio syndrome).1
The Program Operations Manual System Policy for Evaluation of Speech Impairments (SSA, 2017) identifies three attributes pertinent to
___________________
1 This sentence has been revised to reflect the updated Listing of Impairments for Neurological Disorders.
evaluation of speech proficiency: (1) audibility, (2) intelligibility, and (3) functional efficiency. Audibility encompasses loudness or intensity of speech in such contexts as quiet, noise, and riding in automobiles, as well as voice that becomes inaudible with use (as might be experienced with some conditions impacted by fatigue or respiratory insufficiency). Intelligibility, or the ability to articulate accurately, encompasses frequency of articulation errors, the extent to which the person is asked to repeat utterances, and how well the person is understood by strangers (the policy refers specifically to esophageal speech understood by people unfamiliar with this type of speech production). Finally, functional efficiency encompasses the ability to sustain consecutive speech, the number of words spoken without interruption/hesitancy, and the time lapse prior to speaking fatigue. Although specific measures are not indicated, the policy notes that if at least one of these attributes is missing, overall speech is not considered effective.
ACCESS AND AVAILABILITY
AAC Clinician Expertise
The Scope of Practice in Speech-Language Pathology of the American Speech-Language-Hearing Association (ASHA) (ASHA, 2016e) provides the conceptual framework within which SLPs provide clinical services. Although SLP graduate training programs meet multimodal communication standards, many universities do not offer a dedicated course in AAC, many AAC courses are not required components of the curriculum, few programs offer more than one such course, and students often graduate without having a supervised AAC clinical experience.
ASHA’s Knowledge and Skills document outlines the responsibilities, knowledge, and general skills for SLPs in the area of AAC (ASHA, 2016c). Proficiencies required of an SLP for providing AAC services include the following:
- Knowledge of the broad array of . . . [current] devices that are designed specifically for AAC purposes and their respective features.
- Knowledge of the performance differences of the broad array of [AAC technologies] (e.g., different forms of computer hardware and software, as well as adaptations such as touch screens and expanded keyboards that are intended for purposes that include but are not limited to communication) and their respective features.
- Knowledge of how language is generated on AAC systems during communication.
- Matching features of AAC systems to capabilities of individuals being considered for those same systems.
- Customizing AAC systems to meet individuals’ needs and skills.
- Modifying AAC systems as individuals’ communication abilities and needs change and new technologies arise. (ASHA, 2016c)
Individuals who communicate with AAC experience significant barriers to obtaining and learning to use AAC technology. As discussed below, funding is a concern for these individuals, but the greater barrier at present appears to be the lack of trained SLPs to provide assessment and intervention services. Persons who provide daily support to these individuals often do not receive needed training (Beukelman et al., 2009; McNaughton et al., 2001; Ratcliff and Beukelman, 1995), and as noted above, many graduates of SLP programs fail to receive sufficient training in AAC (Collier and Blackstein-Adler, 1998; Costigan and Light, 2007; Crema and Moran, 2012; Koul and Lloyd, 1994; Matthews, 2001; Robinson and Sadao, 2005). ASHA’s 2015 end-of-year membership report cites 156,254 certified SLP members (ASHA, 2015a), whereas the AAC Special Interest Group had 3,239 members, reflecting approximately 2 percent of the association’s total membership.
Few, if any, structured programs offer AAC training to SLPs beyond entry-level (Certificate of Clinical Competence) education (Koul and Lloyd, 1994). As a result, SLPs must obtain such training by attending numerous AAC-specific conferences and workshops, completing training with a variety of AAC technologies, reading AAC journals and periodicals, and participating in professional organizations with a focus on AAC (e.g., the ASHA AAC Special Interest Group, the International Society for Augmentative and Alternative Communication) (Beukelman et al., 2009).
AAC Funding Factors
Chapter 7 addresses major sources of coverage and funding for assistive products and technologies, including AAC. This section highlights a few funding considerations that are specific to AAC technology. Funding policies and practices can affect the adequacy of AAC evaluation, as well as funding approval for prescribed AAC technologies. Funding obstacles also may impact receipt of AAC training to maximize employment potential.
Current Procedural Terminology (CPT) codes for speech-language services are both time- and procedure-based (ASHA, 2016d). The time-based codes relevant to AAC include the first hour of an AAC evaluation, each additional 30 minutes of the evaluation, standardized cognitive performance testing per hour, and aphasia evaluation per hour. All other codes are procedure-based; the CPT code is reported once for the procedure and is based on a typical session regardless of the appointment length. Codes include evaluation of speech sound production; evaluation of language comprehension and expression; behavioral and qualitative analysis of voice and
resonance; therapeutic services for the use of non-speech-generating AAC; therapeutic services for the use of AAC technology, including programming and modification; and repair/modification of AAC devices. Although modification, repair, or replacement of irreparable systems often is fundable, no provision is made for interim communication support while these processes are taking place. As computer-based equipment, AAC systems are subject to breakdown and thus require technical support and repair; as noted earlier, loss of access to communication, albeit temporary, will likely impact an individual’s ability to fulfill work responsibilities.
Funding mechanisms for the purchase of prescribed AAC technology may influence the prescription of a particular system. Typical funding sources for adults who would benefit from AAC technology include private insurance companies, the Veterans Health Administration, and Medicare or Medicaid. Additional funding mechanisms available to some individuals include state telephone equipment distribution programs, vocational rehabilitation programs, private pay, and charitable programs. Medicare Advantage plans are implemented under contract with private insurers through policies that provide Medicare (Parts A and B) benefits (CMS, 2017a), and individual policies may differ regarding coverage of SGDs. Medicare Supplemental Insurance (Medigap) policies are designed to cover some health care costs associated with Medicare (e.g., coinsurance, deductibles, copayments); these policies may be purchased from private insurers (CMS, 2017b). For funding of SGDs, Medigap policies cover supplemental costs associated with obtaining items covered by Medicare (Medicare typically covers 80 percent of approved SGD costs, and a Medigap policy will cover the 20 percent copayment). Medigap policies do not cover items not approved by Medicare (e.g., mobile AAC technologies, hearing aids) (CMS, 2017b).
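The Medicare/Medigap cost sharing just described reduces to simple arithmetic. The sketch below is purely illustrative (the function name, dollar figures, and scenarios are hypothetical and not drawn from CMS policy); it shows how the 80/20 split, and Medigap's exclusion of non-approved items, determine who pays what for an SGD:

```python
# Illustrative sketch of Medicare/Medigap cost sharing for an SGD,
# per the 80/20 division described above. All figures hypothetical.

def sgd_cost_split(approved_cost, medicare_approved=True, has_medigap=True):
    """Return (medicare_pays, medigap_pays, individual_pays) in whole dollars."""
    if not medicare_approved:
        # Items not approved by Medicare (e.g., mobile AAC technologies)
        # are covered by neither Medicare nor a Medigap policy.
        return (0, 0, approved_cost)
    medicare = approved_cost * 80 // 100   # Medicare typically covers 80 percent
    copay = approved_cost - medicare       # remaining 20 percent copayment
    if has_medigap:
        return (medicare, copay, 0)        # Medigap covers the copayment
    return (medicare, 0, copay)            # individual owes the copayment

# Hypothetical $5,000 approved SGD
print(sgd_cost_split(5000))                          # (4000, 1000, 0)
print(sgd_cost_split(5000, has_medigap=False))       # (4000, 0, 1000)
print(sgd_cost_split(800, medicare_approved=False))  # (0, 0, 800)
```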
Not all AAC technology solutions have been assigned a Healthcare Common Procedure Coding System (HCPCS) code. Delays in assigning codes may occur when technology innovations are added quickly to the market, as is common for rapidly changing technology. If uncoded technology is identified as the best match for and preferred by the individual, funding for that technology may or may not be available. Moreover, ongoing costs associated with mobile technologies (e.g., data rates, access to the Internet, cellphone fees) reduce the use of these AAC technologies for some individuals who could benefit from them (AAC-RERC, 2011), while others may opt for an SGD because it is covered by insurance even though it provides fewer features and is a poorer match for their communication needs (McNaughton and Light, 2013). Conversely, some individuals may purchase a mobile AAC technology thinking it will save them money, only to discover that it is not a good match for their AAC needs. Regardless of
the funding source, a range of appropriate AAC technology solutions are presented as part of the evaluation, with the ultimate goal of achieving the best communication match and meeting personal preferences. It remains the case, however, that funding options, cost, and affordability often influence which device is prescribed.
One benefit of Centers for Medicare & Medicaid Services (CMS) funding programs is the establishment of an assessment procedure and specific requirements (CMS, 2001). Some alternative funding options (e.g., equipment lending libraries, private purchase of mobile AAC technologies) do not link appropriate evaluations with AAC selection, trained providers, communication specialists, or indeed any criteria (AAC-RERC, 2011). At times, obtaining equipment in such a way results in a substantial cost savings and an appropriate communication solution; however, chance often determines whether the individual makes an inappropriate purchase that may ultimately prove more costly in terms of money, motivation, and effort.
One concern associated with CMS funding programs with respect to multimodal communication and employment is the requirement that all other forms of treatment be “considered and ruled out” before an SGD is selected (CMS, 2001), effectively requiring the individual to abandon all other forms of communication.
Individuals with disabilities also face challenges with funding for AAC technologies as they go through transitions. Youth transitioning from education-based services may face questions regarding ownership of AAC products and technologies; if a school system made the purchase, the AAC technology currently used by the individual may be retained by the school. In such situations, an AAC reevaluation and funding approval are required for the individual to have access to an AAC system that meets his or her communication goals and supports the person’s continued education, vocational training, and employment.
Although many individuals in need of communication systems have funding available for the purchase of AAC technology, most clinical providers do not. Notably, many clinical facilities do not provide AAC evaluations because of the high cost of purchasing and maintaining AAC technologies, software, and access options. Anecdotal evidence from providers nationwide indicates that assessment sites maintaining updated equipment most commonly are those affiliated with university educational/research programs. Few hospitals maintain evaluation centers with current equipment. Many individual clinicians will arrange to borrow equipment from other evaluation centers (e.g., state offices for assistive technology, disability-specific loan closets) and manufacturers to gain access to appropriate equipment. These funding factors impact access to appropriate evaluations and equipment needed to implement trials.
VOICE RESTORATION FOLLOWING HEAD AND NECK SURGERY
Prevalence of Need
The American Cancer Society estimated 59,000 cases of head and neck cancer in the United States in 2015 (American Cancer Society, 2015). Individuals with head and neck cancer acquire communication needs as a result of various cancer treatments, including surgical resection, radiation, and chemotherapy. Surgical treatments may involve resection of head/neck structures and tissue that may result in partial or complete removal of the larynx, vocal cords, and articulatory structures, in turn resulting in loss of voice and/or speech.
Voice Restoration Taxonomy
Some individuals with head and neck cancer may benefit from various categories of AAC technology, including mobile technologies, SGDs (HCPCS: E2500-E2510), and communication software and apps (HCPCS: E2511) (Ball et al., 2016b; Beukelman and Mirenda, 2013; Happ et al., 2004; Sullivan et al., 2007a,b):
- Mobile technologies—Communication applications are available for use on both iOS and Android platforms, although the number of options is currently greater on the former (Ball et al., 2016b). Such mobile technologies are now intrinsic to daily life for people from many cultures, languages, and traditions, and as such may provide a readily accessible means of supporting communication without adding to visible disability (McNaughton and Light, 2013). These technologies are summarized in Annex Tables 6-1 and 6-3.
- Speech-generating devices—Designed specifically for communication, SGDs may provide the most effective means of meeting communication needs through highly customizable and variable features (Beukelman and Mirenda, 2013; McNaughton and Light, 2013). Other SGD features that may be of particular importance for individuals with communication needs relate to available language options and options for connectivity to other computer technologies (Ball et al., 2016b). These technologies are described in detail in Tables 6-1 and 6-2 and Annex Tables 6-1 and 6-3.
- Communication software and apps—Communication software programs and apps provide options for individuals with head and neck cancer to communicate using direct access. They also may be used to design and print low-tech communication displays that can often be practical for communicating basic messages in acute
care or other temporary settings. Software may represent language using symbols other than traditional text (e.g., pictures, drawings) and therefore may be helpful to individuals with literacy and/or cognitive limitations (Ball et al., 2016b). The framework for these technologies is described in Table 6-2, while details are provided in Annex Table 6-2.
Although some individuals benefit from the specified AAC technologies, individuals typically are supported with voice restoration after undergoing head/neck cancer interventions (Tang and Sinclair, 2015). Similarly, those having undergone a tracheostomy, who retain the body structures and functions necessary to produce speech but whose respiratory flow is redirected away from the vocal cords, are supported with speech restoration (Lichtman et al., 1995). For some, speech becomes functional when diminished speech intensity is supported by amplification (Andreetta et al., 2016). The various options are described below and summarized in Table 6-4.2
Functional Speech Following Head/Neck Cancer Surgery and Radiation
For individuals postsurgery, “speech outcomes are the strongest predictor of health-related quality of life, inhibiting a person’s ability to return to work, establish or maintain relationships, or participate in everyday activities” (Bolt et al., 2016, p. E1). Psychosocial quality of life decreases as a result of loss of voice after head and neck cancer treatments. Individuals may experience feelings of solitude, limitations in social relationships that result in social withdrawal, and reduced sexual enjoyment (Babin et al., 2009; Singer et al., 2008; Tang and Sinclair, 2015). Key factors in participation in communication include severity of speech loss, cognitive function (perhaps associated with cancer-related cognitive impairment), and extent of surgical resection (Bolt et al., 2016).
One study found that at 3 months following head/neck cancer treatment, 63 percent of individuals postsurgery (55 percent postsurgery with radiation) described broadly functional speech (i.e., perceived as possibly distorted but 100 percent intelligible) with natural speech or when accessing a tracheoesophageal voice prosthesis (TEP), an artificial/electrolarynx (AL/EL), and/or esophageal speech (ESS) (Perry and Shaw, 2000). Another 22 percent of individuals postsurgery (26 percent postsurgery with radiation) reported at least moderate speech disabilities (i.e., perceived as intelligible only when communication partners knew the message context) using the same assistive methods of communication (i.e., TEP, AL/EL, ESS), citing
___________________
2 The images in Table 6-4 serve as examples of device categories only and should not be considered an endorsement of specific products or manufacturers.
a frequent need to repeat spoken messages and use writing to supplement speech to convey intended meaning (Perry and Shaw, 2000). Finally, 12 percent of individuals postsurgery (19 percent postsurgery with radiation) reported poor speech (occasional to no functional communication and/or at least 50 percent unintelligible) with the same assistive communication methods (i.e., TEP, AL/EL, ESS) (Perry and Shaw, 2000). These reports thus indicate a range of 34-45 percent of individuals undergoing head and neck cancer treatments who, although receiving benefit from voice restoration strategies, will likely require AAC technology to achieve fully functional communication.
Functional Speech Following Laryngectomy
For individuals who produce functional speech following a laryngectomy, four primary voice restoration methods are used: (1) esophageal speech, (2) tracheoesophageal voice prosthesis, (3) artificial or electrolarynx (Perry and Shaw, 2000), and (4) voice amplification (see p. 252).
Esophageal Speech
In ESS (see Figure 6-1), air from the mouth is transferred into the upper esophagus, where the released air causes the pharyngo-esophageal tissue to vibrate and produce a low-pitched voice (Enderby et al., 2009). This voice restoration strategy does not involve assistive technology; instead, ESS is produced by the individual’s injecting (essentially swallowing) air into the esophagus and then releasing it in a controlled manner to cause the soft tissue to vibrate and produce voicing (Tang and Sinclair, 2015).
Tracheoesophageal Voice Prosthesis
At present, the most common voice restoration strategy for individuals with a complete laryngectomy is the placement of a one-way valve in the tracheoesophageal wall that allows respiratory air to flow from the lungs to the esophagus, where soft tissue vibrates and produces substitute voicing. This voice restoration method involves fitting a prosthesis through a surgically created puncture between the trachea and the esophagus (Enderby et al., 2009). Voice is created by occluding the stoma with one’s fingers or a hands-free valve (see Figure 6-2).
Artificial/Electrolarynx
This voice restoration strategy involves an electrolarynx, an external device that produces vibrations in the oral cavity or pharyngeal mucosa
TABLE 6-4 Voice and Speech Restoration and Amplification Taxonomy

| Feature | AL/EL (L8500) | TSV (L8501) | TEPpt (L8507) | TEPpr (L8509) | AMP (L8510) | ESS |
|---|---|---|---|---|---|---|
| Total laryngectomy | X | | X | X | X | X |
| Partial laryngectomy | | | | | X | |
| Tracheostomy, intact larynx | | X | | | | |
| Professional insertion | | | | X | | |
| Patient insertion | | | X | | | |
| Hands-free use | | X | X | X | X | X |
| Handheld use | X | | | | X | |
| Shunt air into esophagus | | | X | X | | X |
| Vibrate head/neck tissue | X | | X | X | | X |
| Surgical placement | | | X | X | | |
| Voice amplification | | | | | X | |
| Battery, rechargeable | X | | | | X | |
NOTE: AL/EL = artificial/electrolarynx; AMP = voice amplifier; ESS = esophageal speech; TEP = tracheoesophageal voice prosthesis; TEPpr = TEP, provider inserted; TEPpt = TEP, patient inserted; TSV = tracheostomy speaking valve.
SOURCES: A. Servox Digital electrolarynx, Bruce Medical; B. PMV 2001 (purple®), Passy Muir, Inc.; Image courtesy of Passy Muir, Inc. Irvine, CA. C. Blom-Singer® Duckbill–Patient Changeable Voice Prosthesis, InHealth Technologies. Image courtesy of InHealth Technologies, www.inhealth.com; D. Blom-Singer® Classic™ Voice Prosthesis–Clinician Placed (non-sterile), InHealth Technologies. Image courtesy of InHealth Technologies, www.inhealth.com; E. UltraDisk DVA 10W Portable Voice Amplifier, UltraDisk, www.ultradisk.co.uk.
FIGURE 6-1 Esophageal speech.
SOURCE: THANC Foundation, 2017. Copyright © 2017 Jill Gregory & Kellie Holoski, Head & Neck Cancer Guide. All rights reserved. Available at: www.headandneckcancerguide.org.
FIGURE 6-2 Tracheoesophageal voice prosthesis.
SOURCE: THANC Foundation, 2017. Copyright © 2017 Jill Gregory & Kellie Holoski, Head & Neck Cancer Guide. All rights reserved. Available at: www.headandneckcancerguide.org.
(Tang and Sinclair, 2015). An electrolarynx is a small handheld, battery-operated device that, when activated by pressing buttons on the device, vibrates air in the oral cavity to approximate the sound of voicing (Enderby et al., 2009). The device may be positioned on the neck, under the chin (see Figure 6-3), or on the cheek; it also may be used with an oral adapter to vibrate air in the oral cavity.
FIGURE 6-3 Artificial/electrolarynx.
SOURCE: THANC Foundation, 2017. Copyright © 2017 Jill Gregory & Kellie Holoski, Head & Neck Cancer Guide. All rights reserved. Available at: www.headandneckcancerguide.org.
Functional Speech Following Tracheostomy
In tracheostomy, the vocal mechanism typically remains fully functional; however, respiratory air is directed through the tracheostomy tube instead of upward through the vocal cords. As a result, voicing is difficult to impossible without use of a tracheostomy speaking valve (TSV) (Hoffman et al., 2008). A TSV (see Figure 6-4) is a small one-way valve prosthesis placed on the end of a tracheostomy tube and designed to redirect exhaled air upward through the intact vocal cords in the larynx to produce natural voicing (Hoffman et al., 2008). All TSVs have similar components, but their engineering/design varies: the diaphragm either (1) remains open at rest and closes on expiration or (2) remains closed at rest and opens when inspiratory effort is applied. In either design, the valve is closed during expiration, and all TSVs attach to the hub of a tracheostomy tube (Leder, 1994).
FIGURE 6-4 Tracheostomy speaking valve.
SOURCE: Illustration courtesy of Passy Muir, Inc., Irvine, CA.
Functional Speech with Diminished Vocal Intensity
Voice Amplification
Individuals who retain function of the vocal cords may benefit from voice amplification to address dysphonia or hypophonia (Andreetta et al., 2016). Typically a speaker-type voice amplifier fitted with a head-mounted or lavaliere microphone, this device is designed to amplify the fundamental frequency of voice. It has been shown to increase the intelligibility of speech in noisy situations and when the individual produces insufficiently loud speech (Andreetta et al., 2016). Similarly, those having undergone total laryngectomy who communicate with TEP or ESS often benefit from amplification (Happ et al., 2004; Hilgers et al., 1990).
Functionality of Voice Restoration Technologies
Considered the gold standard, TEPs are deemed effective for many individuals postlaryngectomy and are associated with low occurrence of medical complications (Calkovsky and Hajtman, 2015; Tang and Sinclair, 2015). Still, the resulting voice is sometimes considered monotonous and unpleasant.
Studies have shown that voice restoration with tracheoesophageal puncture is superior to that with an electrolarynx and ESS (Clements et al.,
1999; Eadie et al., 2016; Finizia and Bergman, 2001; Ward et al., 2003). In addition to surgery and radiation, factors associated with head and neck cancer influence speech intelligibility. Decreases in intelligibility are associated with (1) increases in tumor size; (2) increases in the volume of tissue resected; (3) the need for reconstructive surgery; and (4) tumor site, with poorer intelligibility in cases involving the floor of the mouth or lower alveolar crest (Blyth et al., 2014; Borggreven et al., 2007). One recent study found lowest stress and perceived handicap with ESS, followed by an electrolarynx and then tracheoesophageal puncture (Saltürk et al., 2016). One critical consideration is that all voice restoration methods (TEP, ESS, and electrolarynx) rely on the articulatory musculature to produce speech. Thus, loss of articulatory musculature during surgical resection has significant consequences for the production of intelligible speech (Tang and Sinclair, 2015).
Voice Restoration and AAC
Given developments in communication systems for people with unmet communication needs, ESS and electrolarynges are no longer the sole options available, and they may not be acceptable to some individuals. The increased availability and acceptability of AAC technologies and of communication applications for mobile devices have significantly changed functional communication intervention (e.g., McNaughton and Light, 2013). These factors play an important role in determining functional communication interventions using AAC technologies. The situational effectiveness of communication with ESS, an electrolarynx, or both ranges from 80 to 100 percent intelligibility (Sullivan et al., 1993).
Caregivers of adults with complex communication needs have identified as highly important the need for viable modes of communication to (1) regulate the behavior of others for basic wants and needs (e.g., getting needs met, giving instructions/directions, providing clarifications); (2) stay connected with friends and family members (e.g., social closeness); and (3) discuss important issues (e.g., information transfer) (Fried-Oken et al., 2006). AAC supports are necessary given that the overwhelming majority of individuals with severe speech impairments have no access to appropriate communication modalities when hospitalized. They therefore struggle to provide medical information and to have their medical needs met, and they are at increased risk for poor health outcomes (Blackstone et al., 2015; Hemsley and Balandin, 2014).
Factors Affecting Device Use
Speakers have reported using multiple communication methods depending on the complexity of communication in various environments, supplementing spoken communication with writing, gestures, and/or interpreters when speaking in situations with background noise or via intercoms (Sullivan et al., 1993). Although written supplementation of spoken communication may be useful in some situations, it does not produce audible output and is limited by the literacy skills of some individuals. Indeed, 17.1 percent of individuals with head and neck cancer read at or below the 8th-grade level (Jesse et al., 2015). Because individuals with reduced speech intelligibility tend to experience reduced quality of life, timely identification of such individuals is an important component of their cancer treatment so that they can be provided with appropriate communication options that facilitate their overall recovery (Borggreven et al., 2007).
Voice Restoration Evaluation and Monitoring
Rapid, effective voice and speech restoration is associated with preventing psychosocial and economic consequences of loss of speech (Blom, 2000). Optimal levels of communication support for individuals with head and neck cancer need to be identified throughout the phases of cancer treatment, with consideration of variations/transitions in medical status and personal needs over time. Targeted interventions need to be developed in the context of the cancer site (e.g., tongue, maxilla, larynx), phase of recovery (e.g., presurgical, acute postsurgical, speech restorative), preexisting communication skills and demands, and ongoing communication needs (Sullivan et al., 2007a). Each voice restoration method has specific monitoring and evaluation procedures. To ensure that individuals with head and neck cancer can successfully meet all their communication needs, AAC assessment and intervention procedures are implemented in conjunction with voice restoration strategies (Ball et al., 2016b). AAC service-delivery intervals for these individuals are established to support presurgical care, acute care (immediately postsurgery), initial outpatient care, and ongoing outpatient AAC support (e.g., following a treatment change or new disease state) (Sullivan et al., 2007a).
The goal of presurgical AAC assessment is to identify communication needs, determine communication options for implementation immediately postsurgery, and evaluate the potential effectiveness of various AAC options. At this stage, a communication needs assessment is completed, individual patterns of communication are established (e.g., interest in and use of communication), and potential supports and needs following surgery are identified (Ball et al., 2016b). The goal of acute care AAC assessment
is to evaluate the effectiveness of the short-term communication techniques that have been identified and to continue evaluating AAC options for longer-term implementation. The goal of outpatient AAC assessment is to identify daily communication needs that are not being met by the selected voice restoration procedures (Ball et al., 2016b). As voice restoration procedures are implemented, communication breakdowns, intelligibility and comprehensibility, and communication efficiency (intelligible words per minute) are monitored (Sullivan et al., 1993). Finally, the goal of ongoing AAC intervention is to evaluate new communication needs and any sources of communication breakdown, as well as to identify communication options for addressing these issues. Cancer recurrence or new health conditions, for example, may require additional medical treatments that impact communication (Ball et al., 2016b).
AAC assessment for individuals with head and neck cancer often differs from a typical lengthy AAC assessment process that yields a communication system following a series of assessment sessions and trials (Ball et al., 2016b). Instead, the focus is on supporting communication in a rapid, just-in-time manner (i.e., methodically targeting communicative supports as needed). When long-term use of AAC technology is indicated, a comprehensive AAC evaluation may be required.
Individuals with head and neck cancer receiving AAC commonly require (1) lightweight portability (independent, unimpaired ambulation); (2) direct access (full use of hands, sufficiently large keyboard to provide accurate message selection); (3) high-quality display (visibility in multiple environments); (4) traditional orthography (if literacy supports message formulation, native language text); (5) message formulation (few predetermined messages with some repeated/personal messages recorded prior to surgery, formulation of new messages with text-to-speech); (6) rate acceleration (features that speed rate of communication); and (7) ease of use (brief period of time required to learn how to use the device) (Beukelman and Mirenda, 2013). Desired features of mobile communication systems include those mentioned above (e.g., portability, high-quality display). Other desired features include options to obtain extended battery life (e.g., communication during an 8-hour work shift); durability and protection (e.g., a case with screen protection that increases the durability of the system without compromising access); and voice output amplification. If the user places a high premium on small devices but cannot isolate individual items on the display because of hand/finger size or mobility, it is also important to identify a stylus that will provide access to the keyboard and a means of keeping track of it (e.g., a storage slot).
Voice Restoration Training and Adaptation
Individuals report that ESS is more difficult to learn than other communication methods, and success depends on individual motivation and length of time practicing. Other factors include training method, timing of training postsurgery, and type of training (group versus individual) (Kresic et al., 2015; Staffieri et al., 2006).
Individuals with communication problems following head/neck cancer interventions but without other speech/language problems have an undamaged language system (Enderby et al., 2009). These individuals can participate in an AAC assessment to choose a system that best addresses their needs (Fox and Rau, 2001). The goals of AAC for individuals with head and neck cancer are to augment intelligibility, decrease communication breakdowns or miscommunications, enable repair when communication breakdowns occur, and provide alternative means of communication when the voice restoration methods employed result in ongoing unmet communication needs (Ball et al., 2016b).
Often, recommended communication strategies involve simple methods, and clinicians may not see a need for direct instruction; however, not all individuals adapt to their lack of communication and the implementation of new communication methods without instruction (Sullivan et al., 2007b). Moreover, most medical professionals (e.g., nurses, physicians) receive no instruction in interacting with individuals who are unable to communicate effectively via natural speech (Hemsley and Balandin, 2014). Therefore, some form of instruction and therapeutic support for both individuals and providers is likely to yield improved patient–provider communication, which in turn can influence satisfaction with and outcomes of treatment (Downey and Happ, 2013; Hemsley and Balandin, 2014).
Voice Restoration Access and Availability
Caregivers of individuals with head and neck cancer have reported that they primarily taught themselves communication strategies for identifying problems and meeting individual needs, which required intensive effort and creativity on their part. These reports indicate that, postsurgically, these individuals and their caregivers are in critical need of assistance in meeting communication needs (McGrory, 2011). Similarly, nurses have attributed nurse–patient communication breakdowns to the lack of readily manageable and interpretable communication systems (Happ et al., 2004).
The complexity of funding policies impacts access to voice restoration in many cases. Some insurers will not pay for an electrolarynx or voice prosthesis, the latter of which often must be replaced on a routine basis (as often as monthly, although commonly every 2-3 months). As a result, clinicians or treating facilities must provide voice prostheses at their own expense, or the patient must bear the cost. Also, non-indwelling voice prostheses are considered durable medical equipment by CMS, whereas indwelling prostheses are not. The result can be problems with respect to training of SLPs in how to manage non-indwelling prostheses and how to instruct individuals in their insertion and long-term use. Medicare administrative contractors require that a TEP not be distributed directly to an individual but instead to a professional, one device at a time, and that a provider visit occur at the time the TEP is distributed and billed (Satterfield, 2015). A separate funding issue is that, at present, Medicare payment for a TEP covers approximately one-half the cost of actually obtaining the device; as a result, many clinical practices no longer provide TEPs (Satterfield, 2015).
Access to voice restoration methods depends on the availability of appropriately trained professionals, which varies by region. ASHA certification (Certificate of Clinical Competence-SLP) is necessary; however, meeting certification requirements is not sufficient to qualify an SLP to perform TEP care as outlined by preferred practice standards (ASHA, 2004a). SLPs require extensive additional training to manage voice restoration options (e.g., anatomy and physiology, instrumentation, TEP and related materials, instruction of individuals in the use of ESS, identification of appropriate TEP candidates, TEP sizing/removal/reinsertion, safety issues) (ASHA, 2004a). Limited numbers of these experts are available, and there have been anecdotal reports of situations in which local surgeons have provided laryngectomy and primary TEP care when no trained SLP was available in the region to provide the necessary pre-/postoperative assessment and interventions, potentially leaving the individual with no voice restoration options. Few SLPs specialize in voice assessment and intervention. A survey of providers indicated that 5 percent of SLPs’ adult service delivery time was spent in the area of voice (including but not limited to voice restoration), with SLPs in outpatient clinics spending significantly more time, although still limited (12 percent; p < .001), than those in other medical facilities (e.g., skilled nursing facility, U.S. Department of Veterans Affairs facility, hospital, long-term acute care facility) (ASHA, 2015b).
FINDINGS AND CONCLUSIONS
Findings
Need for Augmentative and Alternative Communication
6-1. Severe impairments of natural speech result in complex communication needs that interfere with daily interactions and employment outcomes.
6-2. Research in the field of augmentative and alternative communication (AAC) often focuses on specific areas and populations, making generalizations across studies problematic.
6-3. Individuals with amyotrophic lateral sclerosis (ALS) may be referred for AAC assessment and treatment beyond the time when they could remain at or return to work; this and other factors may increase the urgency of the need for AAC and/or limit AAC acceptance.
6-4. Individuals receiving voice restoration as part of head and neck cancer treatment may also require AAC to achieve fully functional communication.
6-5. The complexity of AAC systems is demonstrated by the multiple features and components that must be identified, evaluated, and manipulated to address the specific abilities, needs, and expectations of each individual.
6-6. Individualized, contextual needs are variable and cannot be generalized within a specific disability group (i.e., individuals have communication skills and needs that are not based on a diagnosis such as cerebral palsy, ALS, or head and neck cancer).
6-7. Individuals who require AAC have complex communication needs, which often change over the course of their impairment (e.g., improving or degenerating communication capabilities) so that individuals require ongoing monitoring and/or intervention to maintain or improve their communication performance.
6-8. Different considerations are entailed in communicating with an electrolarynx or tracheoesophageal voice prosthesis, which requires operational competence, versus AAC, which requires language representation, cognitive and device-based message formulation, and social and operational competencies.
Prognosis for Occupational Success
6-9. Individuals who communicate with AAC can obtain and maintain employment if they are provided early educational preparation; attain high levels of language competence, literacy, and education; and achieve competency in workplace communication interactions.
6-10. Established measures of real-world communicative functioning are sparse, and research investigating the impact of AAC products and technologies on real-world communicative functioning is extremely limited.
6-11. Direct instruction in communication techniques improves clinical outcomes for persons with AAC needs.
6-12. As technologies or computer-based equipment, AAC systems are subject to breakdown, thus requiring technical support and repair; loss of access to communication in the interim will likely impact an individual’s ability to fulfill work responsibilities.
6-13. Occupational title listings may indicate no need for speaking, but an occupation often has a speaking requirement nonetheless.
Access and Availability
6-14. Original Medicare benefits are based on medical necessity and cover 80 percent of an approved device’s fee schedule amount, and some individuals may be unable to afford the 20 percent copayment. The result is that the cost of a speech-generating device (SGD) may remain prohibitive for many people (e.g., a $20,000 AAC system would require a $4,000 out-of-pocket expense). If the SGD is not approved based on the fee schedule, the entire cost falls to the individual.
6-15. Medicaid funding varies by state, with some states having specific criteria for assessments; limiting access to treatment; and/or providing insufficient funding, especially for higher-cost AAC technologies.
6-16. Private health insurance may exclude coverage of AAC systems, even when other types of durable medical equipment are covered.
6-17. Some funding options (e.g., equipment lending libraries, private purchase of mobile technologies) do not link appropriate evaluations with AAC selection, resulting at times in inappropriate recommendations and purchases.
6-18. Some coverage requires that an individual abandon attempts to improve natural speech before qualifying for AAC support.
6-19. School districts that have provided AAC systems for children often retain the devices; as a result, children transitioning from school into postsecondary/vocational settings must navigate the transition while completing the AAC assessment, funding, and new learning processes. Some children may even have to learn entirely new language representation, messaging, and access methods before they can engage in essential communication.
6-20. Required education for preprofessional speech-language pathologists (SLPs) is limited, as a number of university programs still do not have a required AAC course.
6-21. The 2015 American Speech-Language-Hearing Association end-of-year membership report showed that there were 156,254 certified SLP members, although the AAC Special Interest Group comprised only 3,239 members, approximately 2 percent of the total membership. These numbers are indicative of the relatively small number of SLPs with AAC expertise.
6-22. High equipment costs and continual technology developments result in limited availability of AAC systems for use in the assessment, equipment trial, and intervention processes in clinical settings.
Conclusions
Prognosis for Occupational Success
6-1. Data on the prevalence and use of AAC systems by adults are fragmented and limited, resulting in incomplete knowledge of employability, vocational effectiveness, and overall employment outcomes. [Findings 6-2, 6-9]
6-2. Establishing objective measures of real-world communicative functioning will promote improved understanding of the effects of AAC products and technologies on actual practical and interactive communicative function. [Finding 6-10]
6-3. Although great progress has been achieved in AAC systems, use of an SGD does not fully mitigate the impact of a severe communication impairment. In addition, even when provided with optimal assessment, funding resources, AAC systems, interventions, and supports, individuals may not achieve their potential because of any number of environmental and personal factors that influence communication performance in employment contexts. [Findings 6-5, 6-6, 6-11, 6-12]
Access and Availability
6-4. Access to SLPs and other professional members of an AAC team with relevant knowledge, skills, and expertise is necessary and currently limited. [Findings 6-4, 6-5, 6-6, 6-7, 6-10, 6-19, 6-20, 6-21]
6-5. Limited availability of AAC systems in the clinical setting impedes proper assessment, equipment trial, and intervention processes to the detriment of the individual’s participation in educational and vocational settings. [Finding 6-22]
6-6. Differences in funding policies among various programs significantly limit access to AAC technology and clinical services. [Findings 6-14, 6-15, 6-16, 6-17, 6-18, 6-20]
REFERENCES
AAC-RERC (Augmentative and Alternative Communication-Rehabilitation Engineering Research Center). 2011. Mobile devices and communication apps: An AAC-RERC white paper. http://aac-rerc.psu.edu/index.php/pages/show/id/46 (accessed February 22, 2017).
Akcakaya, M., B. Peters, M. Moghadamfalahi, A. R. Mooney, U. Orhan, B. Oken, D. Erdogmus, and M. Fried-Oken. 2014. Noninvasive brain-computer interfaces for augmentative and alternative communication. IEEE Reviews in Biomedical Engineering 7:31-49.
Alamsaputra, D. M., K. J. Kohnert, B. Munson, and J. Reichle. 2006. Synthesized speech intelligibility among native speakers and non-native speakers of English. Augmentative and Alternative Communication 22(4):258-268.
American Cancer Society. 2015. Cancer facts & figures 2015. Atlanta, GA: American Cancer Society. http://www.cancer.org/content/dam/cancer-org/research/cancer-facts-and-statistics/annual-cancer-facts-and-figures/2015/cancer-facts-and-figures-2015.pdf (accessed January 22, 2017).
Anderson, L., C. R. Baylor, T. L. Eadie, and K. M. Yorkston. 2016. Describing speech usage in daily activities in typical adults. Journal of Voice 30(1):42-52.
Andreetta, M. D., S. G. Adams, A. D. Dykstra, and M. Jog. 2016. Evaluation of speech amplification devices in Parkinson’s disease. American Journal of Speech-Language Pathology 25(1):29-45.
ASHA (American Speech-Language-Hearing Association). 2004a. Knowledge and skills for speech-language pathologists with respect to evaluation and treatment for tracheoesophageal puncture and prosthesis: Knowledge and skills. http://www.asha.org/policy (accessed February 13, 2017).
ASHA. 2004b. Preferred practice patterns for the profession of speech-language pathology [preferred practice patterns]. http://www.asha.org/policy/PP2004-00191 (accessed November 16, 2016).
ASHA. 2004c. Roles and responsibilities of speech-language pathologists with respect to augmentative and alternative communication: Technical report. http://www.asha.org/policy/TR2004-00262 (accessed January 22, 2017).
ASHA. 2015a. Highlights and trends: Member and affiliate counts, year-end 2015. http://www.asha.org/uploadedFiles/2015-Member-Counts.pdf (accessed January 18, 2017).
ASHA. 2015b. SLP Health Care Survey 2015 caseload characteristics. http://www.asha.org/uploadedFiles/2015-SLP-Health-Care-Survey-Caseload.pdf (accessed February 13, 2017).
ASHA. 2016a. Augmentative and alternative communication (AAC). http://www.asha.org/public/speech/disorders/AAC (accessed October 25, 2016).
ASHA. 2016b. Augmentative and alternative communication decisions. http://www.asha.org/public/speech/disorders/CommunicationDecisions (accessed October 25, 2016).
ASHA. 2016c. Augmentative and alternative communication: Knowledge and skills for service delivery [knowledge and skills]. http://www.asha.org/policy/KS2002-00067.htm (accessed November 16, 2016).
ASHA. 2016d. Current procedural terminology (CPT) codes: Speech language pathology. http://www.asha.org/Practice/reimbursement/coding/SLPCPT (accessed December 31, 2016).
ASHA. 2016e. Scope of practice in speech-language pathology [scope of practice]. http://www.asha.org/policy/SP2016-00343 (accessed November 16, 2016).
Babin, E., D. Beynier, D. Le Gall, and M. Hitier. 2009. Psychosocial quality of life in patients after total laryngectomy. Revue de Laryngologie—Otologie—Rhinologie 130(1):29-34.
Baker, B. 1986. Using images to generate speech. Byte 11(9):160-168.
Balandin, S., and J. Morgan. 2001. Preparing for the future: Aging and alternative and augmentative communication. Augmentative and Alternative Communication 17(2):99-108.
Ball, L. J., and J. Lasker. 2013. Teaching partners to support communication for adults with acquired communication impairment. Perspectives on Augmentative and Alternative Communication 22(1):4-15.
Ball, L. J., A. Willis, D. R. Beukelman, and G. L. Pattee. 2001. A protocol for identification of early bulbar signs in amyotrophic lateral sclerosis. Journal of the Neurological Sciences 191(1-2):43-53.
Ball, L. J., D. R. Beukelman, and G. L. Pattee. 2002. Timing of speech degeneration in people with amyotrophic lateral sclerosis. Journal of Medical Speech-Language Pathology 10(4):231-235.
Ball, L. J., D. R. Beukelman, and G. L. Pattee. 2004. Communication effectiveness of individuals with amyotrophic lateral sclerosis. Journal of Communication Disorders 37(3): 197-215.
Ball, L. J., D. R. Beukelman, and L. Bardach. 2007. Amyotrophic lateral sclerosis. In Augmentative communication strategies for adults with acute or chronic medical conditions, edited by D. R. Beukelman, K. L. Garrett, and K.M. Yorkston. Baltimore, MD: Paul H. Brookes Publishing Co. Pp. 287-316.
Ball, L. J., K. Stading, and D. Hazelrigg. 2010a. AAC considerations during the transition to adult life. In Transition strategies for adolescents & young adults who use AAC, edited by D. McNaughton and D. R. Beukelman. Baltimore, MD: Paul H. Brookes Publishing Co. Pp. 201-218.
Ball, L., A. Nordness, S. Fager, K. Kersch, B. Mohr, G. L. Pattee, and D. R. Beukelman. 2010b. Eye-gaze access of AAC technology for persons with amyotrophic lateral sclerosis. Journal of Medical Speech-Language Pathology 18(3):11-23.
Ball, L. J., S. Fager, and M. Fried-Oken. 2012. AAC in progressive neuromuscular disease. Physical Medicine and Rehabilitation Clinics of North America 23(3):689-699.
Ball, L. J., S. Evans, S. Chavez, M. Leach, and K. Smart. 2016a (unpublished). Communication of children diagnosed with spinal muscular atrophy type 1: A parent survey.
Ball, L. J., J. Kent-Walsh, and N. A. Harrington. 2016b. Consideration of communication options in head and neck cancer: Augmentative and alternative. In Cases in head and neck cancer: A multidisciplinary approach, edited by B. H. Ruddy, H. Ho, C. Sapienza, and J. J. Lehman. San Diego, CA: Plural Publishing, Inc. Pp. 207-218.
Barreto, A. B., S. D. Scargle, and M. Adjouadi. 2000. A practical EMG-based human-computer interface for users with motor disabilities. Journal of Rehabilitation Research & Development 37(1):53-63.
Barrett, D., and T. King. 2005. Computer networking illuminated. Sudbury, MA: Jones and Bartlett Publishers.
Beukelman, D. R., and P. Mirenda. 2005. Augmentative and alternative communication: Supporting children and adults with complex communication needs. 3rd ed. Baltimore, MD: Paul H. Brookes Publishing Co.
Beukelman, D. R., and P. Mirenda. 2013. Augmentative and alternative communication: Supporting children and adults with complex communication needs. 4th ed. Baltimore, MD: Paul H. Brookes Publishing Co.
Beukelman, D. R., S. Fager, L. J. Ball, and A. Dietz. 2007a. AAC for adults with acquired neurological conditions: A review. Augmentative and Alternative Communication 23(3): 230-242.
Beukelman, D. R., K. M. Yorkston, and K. L. Garrett. 2007b. An introduction to AAC services for adults with chronic medical conditions: Who, what, when, where, and why. In Augmentative communication strategies for adults with acute or chronic medical conditions, edited by D. R. Beukelman, K. L. Garrett, and K. M. Yorkston. Baltimore, MD: Paul H. Brookes Publishing Co. Pp. 1-16.
Beukelman, D. R., K. L. Garrett, and K. M. Yorkston. 2007c. Augmentative communication strategies for adults with acute or chronic medical conditions. Baltimore, MD: Paul H. Brookes Publishing Co.
Beukelman, D. R., L. J. Ball, and S. Fager. 2008. An AAC personnel framework: Adults with acquired complex communication needs. Augmentative and Alternative Communication 24(3):255-267.
Beukelman, D. R., E. K. Hanson, E. Hiatt, S. Fager, and D. Bilyeu. 2009. AAC technology learning part 3: Regular AAC team members. Augmentative and Alternative Communication 21(3):187-194.
Beukelman, D. R., K. Hux, A. Dietz, M. McKelvey, and K. Weissling. 2015. Using visual scene displays as communication support options for people with chronic, severe aphasia: A summary of AAC research and future research directions. Augmentative and Alternative Communication 31(3):234-245.
Binger, C., L. J. Ball, A. Dietz, J. Kent-Walsh, J. Lasker, S. Lund, M. McKelvey, and W. Quach. 2012. Personnel roles in the AAC assessment process. Augmentative and Alternative Communication 28(4):278-288.
Blackstone, S. W. 1993. Employment in the AAC community: A problem for some . . . but not for everyone. In Proceedings of the Pittsburgh Employment Conference for Augmented Communicators, edited by R. V. Conti and C. Jenkins-Odorisio. Pittsburgh, PA: SHOUT Press. Pp. 3-6.
Blackstone, S. W., D. R. Beukelman, and K. M. Yorkston. 2015. Patient-provider communication: Roles for speech-language pathologists and other health care professionals. San Diego, CA: Plural Publishing, Inc.
Blom, E. D. 2000. Current status of voice restoration following total laryngectomy. Oncology 14(6):915-922.
Blyth, K. M., P. McCabe, R. Heard, J. Clark, C. Madill, and K. J. Ballard. 2014. Cancers of the tongue and floor of mouth: Five-year file audit within the acute phase. American Journal of Speech-Language Pathology 23(4):668-678.
Bolt, S., T. Eadie, K. Yorkston, C. Baylor, and D. Amtmann. 2016. Variables associated with communicative participation after head and neck cancer. JAMA Otolaryngology—Head & Neck Surgery 142(12):1145-1151.
Borggreven, P. A., I. M. Verdonck-de Leeuw, M. J. Muller, M. L. C. H. Heiligers, R. de Bree, N. K. Aaronson, and C. R. Leemans. 2007. Quality of life and functional status in patients with cancer of the oral cavity and oropharynx: Pretreatment values of a prospective study. European Archives of Oto-Rhino-Laryngology 264(6):651-657.
Bourgeois, M. S. 1991. Communication treatment for adults with dementia. Journal of Speech, Language, and Hearing Research 34(4):831-844.
Bourgeois, M. S. 1992. Evaluating memory wallets in conversation with persons with dementia. Journal of Speech, Language, and Hearing Research 35(6):1344-1357.
Bourgeois, M. S. 2013. Restricted literature base limits interpretation of meta-analysis of the effectiveness of communication-enhancing interventions in dementia. Evidence-based Communication Assessment and Intervention 7(1):1-3.
Bourgeois, M. S., K. Dijkstra, L. Burgio, and R. Allen-Burge. 2001. Memory aids as an augmentative and alternative communication strategy for nursing home residents with dementia. Augmentative and Alternative Communication 17(3):196-210.
Brady, N., D. Skinner, J. Roberts, and E. Hennon. 2006. Communication in young children with fragile X syndrome: A qualitative study of mothers’ perspective. American Journal of Speech-Language Pathology 15(4):353-364.
Brock, K., R. Koul, M. Corwin, and R. Schlosser. 2017. A comparison of visual scene and grid displays for people with chronic aphasia: A pilot study to improve communication using AAC. Aphasiology 1-25.
Bruno, J., and D. Trembath. 2006. Use of aided language stimulation to improve syntactic performance during a weeklong intervention program. Augmentative and Alternative Communication 22(4):300-313.
Bryen, D. N., B. B. Potts, and A. C. Carey. 2007. So you want to work? What employers say about job skills, recruitment and hiring employees who rely on AAC. Augmentative and Alternative Communication 23(2):126-139.
Bunnell, H. T., J. Chandlee, J. Lilley, J. Gray, B. Moyers, and B. Warren. 2015. ModelTalker. https://www.modeltalker.org (accessed January 24, 2017).
Calkovsky, V., and A. Hajtman. 2015. Primary prosthetic voice rehabilitation in patients after laryngectomy: Applications and pitfalls. Advances in Experimental Medicine and Biology 852:11-16.
Caron, J. G., and J. Light. 2016. Social media experiences of adolescents and young adults with cerebral palsy who use augmentative and alternative communication. International Journal of Speech-Language Pathology 1-13.
CDC (U.S. Centers for Disease Control and Prevention). 2011. Autism and Developmental Disabilities Monitoring (ADDM) Network. Atlanta, GA: CDC.
CDC. 2014. Prevalence of autism spectrum disorder among children aged 8 years: Autism and Developmental Disabilities Monitoring Network, 11 sites, United States, 2010. Morbidity and Mortality Weekly Report 63(2).
Chang, S. K., G. Costagliola, S. Orefice, G. Polese, and B. R. Baker. 1992. A methodology for iconic language design with application to augmentative communication. Proceedings of the 1992 IEEE Workshop on Visual Languages, Seattle, WA, September 15-18.
Clements, K., C. Rassekh, H. Seikaly, J. A. Hokanson, and K. H. Calhoun. 1999. Communication after laryngectomy: An assessment of patient satisfaction. Archives of Otolaryngology—Head & Neck Surgery 123(5):493-496.
CMS (Centers for Medicare & Medicaid Services). 2001. CMS Pub. 100-3, Medicare National Coverage Determinations Manual, Chapter 1, Section 50.1. LCD Title Speech Generating Devices, ID number L108. Baltimore, MD: CMS.
CMS. 2017a. Medicare Advantage Plans cover all Medicare services. https://www.medicare.gov/what-medicare-covers/medicare-health-plans/medicare-advantage-plans-cover-allmedicare-services.html (accessed April 6, 2017).
CMS. 2017b. What’s Medicare supplemental insurance (Medigap)? https://www.medicare.gov/supplement-other-insurance/medigap/whats-medigap.html (accessed April 6, 2017).
Collier, B., and S. Blackstein-Adler. 1998. Building competencies in augmentative and alternative communication among professionals. Augmentative and Alternative Communication 14(4):250-260.
Collier, B., S. W. Blackstone, and A. Taylor. 2012. Communication access to businesses and organizations for people with complex communication needs. Augmentative and Alternative Communication 28(4):205-218.
Cook, A. M., and J. M. Polgar. 2008. Cook & Hussey’s assistive technologies: Principles and practice. St. Louis, MO: Mosby Elsevier.
Cooper, L., S. Balandin, and D. Trembath. 2009. The loneliness experiences of young adults with cerebral palsy who use alternative and augmentative communication. Augmentative and Alternative Communication 25(3):154-164.
Costello, J. M. 2011. Last words, last connections: How augmentative communication can support children facing end of life. The ASHA Leader 1-6.
Costello, J. M. 2014. Message banking, voice banking and legacy messages. Boston, MA: Boston Children’s Hospital.
Costigan, F. A., and J. Light. 2007. A research review of preservice training in augmentative and alternative communication for speech-language pathologists, special education teachers, and occupational therapists. Poster presented at the Annual Convention of the American Speech-Language-Hearing Association, Boston, MA.
Costigan, F. A., and J. Light. 2010. A review of preservice training in augmentative and alternative communication of speech-language pathologists, special education teachers, and occupational therapists. Assistive Technology 22(4):200-212.
Crema, C., and N. Moran. 2012. Training speech-language pathologists of adult clients on the implementation of AAC into everyday practice. Perspectives on Augmentative and Alternative Communication 21(2):37-42.
Culp, D., D. R. Beukelman, and S. Fager. 2007. Brainstem impairment. In Augmentative communication strategies for adults with acute or chronic medical conditions, edited by D. R. Beukelman, K. L. Garrett, and K. M. Yorkston. Baltimore, MD: Paul H. Brookes Publishing Co. Pp. 59-90.
Darley, F., A. Aronson, and J. Brown. 1975. Motor speech disorders. Philadelphia, PA: Saunders.
Demers, L., R. Weiss-Lambrou, and B. Ska. 2002. The Quebec User Evaluation of Satisfaction with Assistive Technology (QUEST 2.0): An overview and recent progress. Technology and Disability 14(3):101-105.
Dietz, A., M. McKelvey, and D. R. Beukelman. 2006. Visual scene displays (VSD): New AAC interfaces for persons with aphasia. Perspectives on Augmentative and Alternative Communication 15(1):13-17.
Dietz, A., W. Quach, S. K. Lund, and M. McKelvey. 2012. AAC assessment and clinical-decision making: The impact of experience. Augmentative and Alternative Communication 28(3):148-159.
Donovan, N. J., D. L. Kendall, M. E. Young, and J. C. Rosenbek. 2008. The communicative effectiveness survey: Preliminary evidence of construct validity. American Journal of Speech-Language Pathology 17(4):335-347.
Dowden, P. A. 2016. UW augmentative and alternative communication. http://depts.washington.edu/augcomm (accessed November 5, 2016).
Downey, D., and M. B. Happ. 2013. The need for nurse training to promote improved patient-provider communication for patients with complex communication needs. Perspectives on Augmentative and Alternative Communication 22(2):112.
Drager, K. D., and J. E. Reichle. 2001. Effects of age and divided attention on listeners’ comprehension of synthesized speech. Augmentative and Alternative Communication 17(2):109-119.
Drager, K. D., J. C. Light, J. C. Speltz, K. A. Fallon, and L. Z. Jeffries. 2003. The performance of typically developing 2 1/2-year-olds on dynamic display AAC technologies with different system layouts and language organizations. Journal of Speech, Language, and Hearing Research 46(2):298-312.
Drager, K. D., E. A. Clark-Serpentine, K. E. Johnson, and J. L. Roeser. 2006. Accuracy of repetition of digitized and synthesized speech for young children in background noise. American Journal of Speech-Language Pathology 15(2):155-164.
Drager, K. D. R., J. Reichle, and C. Pinkoski. 2010. Synthesized speech output and children: A scoping review. American Journal of Speech-Language Pathology 19(3):259-273.
Duffy, J. R. 2013. Motor speech disorders: Substrates, differential diagnosis, and management. 3rd ed. St. Louis, MO: Elsevier Mosby.
Durkin, M. S., R. E. Benedict, D. Christensen, L. A. Dubois, R. T. Fitzgerald, R. S. Kirby, M. J. Maenner, K. Van Naarden Braun, M. S. Wingate, and M. Yeargin-Allsopp. 2016. Prevalence of cerebral palsy among 8-year-old children in 2010 and preliminary evidence of trends in its relationship to low birthweight. Paediatric and Perinatal Epidemiology 30(5):496-510.
Dykstra, A. D., S. G. Adams, and M. Jog. 2015. Examining the relationship between speech intensity and self-rated communicative effectiveness in individuals with Parkinson’s disease and hypophonia. Journal of Communication Disorders 56:103-112.
Eadie, T. L., D. Otero, S. Cox, J. Johnson, C. R. Baylor, K. M. Yorkston, and P. C. Doyle. 2016. The relationship between communicative participation and postlaryngectomy speech outcomes. Head & Neck 38(Suppl. 1):E1955-E1961.
Enderby, P., C. Pickstone, A. John, K. Fryer, A. Cantrell, and D. Papaioannou. 2009. Resource manual for commissioning and planning services for SLCN. London, UK: Royal College of Speech & Language Therapists. https://www.rcslt.org/speech_and_language_therapy/commissioning/aphasia_plus_intro (accessed January 22, 2017).
Fager, S., K. Hux, D. R. Beukelman, and R. Karantounis. 2006. Augmentative and alternative communication use and acceptance by adults with traumatic brain injury. Augmentative and Alternative Communication 22(1):37-47.
Fager, S., L. Bardach, S. Russell, and J. Higginbotham. 2012. Access to augmentative and alternative communication: New technologies and clinical decision-making. Journal of Pediatric Rehabilitation Medicine 5(1):53-61.
Fairbanks, G. 1960. Voice and articulation drillbook. 2nd ed. New York: Harper & Row.
Feinstein, C. S., J. Lemanowicz, M. Cunningham, R. Whiting, and The Institute on Disabilities. 2013. Independent monitoring for quality (IM4Q): A statewide summary for the Pennsylvania Office of Developmental Programs Statewide Steering Committee on Independent Monitoring. Philadelphia, PA: Institute on Disabilities, Temple University.
Finizia, C., and B. Bergman. 2001. Health-related quality of life in patients with laryngeal cancer: A post-treatment comparison of different modes of communication. The Laryngoscope 111(5):918-923.
Fox, L. E., and M. T. Rau. 2001. Augmentative and alternative communication for adults following glossectomy and laryngectomy surgery. Augmentative and Alternative Communication 17(3):161-166.
Fried-Oken, M. 1993. Do AAC users with degenerative neurological disease remain or return to the work force? In The First Annual Pittsburgh Employment Conference for Augmented Communicators, edited by R.V. Conti and C. Jenkins-Odorisio. Pittsburgh, PA: SHOUT Press. Pp. 73-78.
Fried-Oken, M., and M. Granlund. 2012. AAC and ICF: A good fit to emphasize outcomes. Augmentative and Alternative Communication 28(1):1-2.
Fried-Oken, M., L. Fox, M. T. Rau, J. Tullman, G. Baker, M. Hindal, N. Wile, and J.-S. Lou. 2006. Purposes of AAC device use for persons with ALS as reported by caregivers. Augmentative and Alternative Communication 22(3):209-221.
Fried-Oken, M., D. R. Beukelman, and K. Hux. 2012. Current and future AAC research considerations for adults with acquired cognitive and communication impairments. Assistive Technology 24(1):56-66.
Fried-Oken, M., A. Mooney, and B. Peters. 2015. Supporting communication for patients with neurodegenerative disease. Neurorehabilitation 37(1):69-87.
Fucci, D., M. Reynolds, R. Bettagere, and M. D. Gonzales. 1995. Synthetic speech intelligibility under several experimental conditions. Augmentative and Alternative Communication 11(2):113-117.
Ganz, J. B., E. R. Hong, W. Gilliland, K. Morin, and N. Svenkerud. 2015. Comparison between visual scene displays and exchange-based communication in augmentative and alternative communication for children with ASD. Research in Autism Spectrum Disorders 11:27-41.
Gaskin, J., J. Gomes, S. Darshan, and D. Krewski. 2016. Burden of neurological conditions in Canada. Neurotoxicology [E-pub ahead of print].
Gevarter, C., M. F. O’Reilly, L. Rojeski, N. Sammarco, J. Sigafoos, G. E. Lancioni, and R. Lang. 2014. Comparing acquisition of AAC-based mands in three young children with autism spectrum disorder using iPad applications with different display and design elements. Journal of Autism and Developmental Disorders 44(10):2464-2474.
Gevarter, C., M. F. O’Reilly, M. Kuhn, L. Watkins, R. Ferguson, N. Sammarco, L. Rojeski, and J. Sigafoos. 2016. Assessing the acquisition of requesting a variety of preferred items using different speech generating device formats for children with autism spectrum disorder. Assistive Technology 1-8.
Glennen, S. 2000. AAC assessment myths and realities. Paper presented at ASHA SID 12 Leadership Conference on Augmentative and Alternative Communication, Sea Island, GA.
Glennen, S., and D. C. DeCoste. 1997. The handbook of augmentative and alternative communication. San Diego, CA: Singular Publishing Group, Inc.
Goldman-Eisler, F. 1986. Psycholinguistics: Experiments in spontaneous speech. New York: Academic Press.
Greene, B. G., J. S. Logan, and D. B. Pisoni. 1986. Perception of synthetic speech produced automatically by rule: Intelligibility of eight text-to-speech systems. Behavior Research Methods, Instruments, & Computers 18(2):100-107.
Haidet, P., R. E. Levine, D. X. Parmelee, S. Crow, F. Kennedy, P. A. Kelly, L. Perkowski, L. Michaelsen, and B. F. Richards. 2012. Perspective: Guidelines for reporting team-based learning activities in the medical and health sciences education literature. Academic Medicine 87(3):292-299.
Hamm, B., and P. Mirenda. 2006. Post-school quality of life for individuals with developmental disabilities who use AAC. Augmentative and Alternative Communication 22(2):134-147.
Hanson, E. K., E. Goldhammer, and T. Bethard. 2016. Telephone talk: Effects of two access methods on phone call success. Augmentative and Alternative Communication 32(3):219-226.
Happ, M. B., T. Roesch, and S. H. Kagan. 2004. Communication needs, methods, and perceived voice quality following head and neck surgery. Cancer Nursing 27(1):1-9.
Hartsuiker, R., R. Bastiaanse, A. Postma, and F. Wijnen. 2005. Phonological encoding and monitoring in normal and pathological speech. New York: Psychology Press.
Hemsley, B., and S. Balandin. 2014. A metasynthesis of patient-provider communication in hospital for patients with severe communication disabilities: Informing new translational research. Augmentative and Alternative Communication 30(4):329-343.
Higginbotham, D. J., and C. Engelke. 2013. A primer for doing talk-in-interaction research in augmentative and alternative communication. Augmentative and Alternative Communication 29(1):3-19.
Higginbotham, D. J., A. L. Drazek, K. Kowarsky, C. Scally, and E. Segal. 1994. Discourse comprehension of synthetic speech delivered at normal and slow presentation rates. Augmentative and Alternative Communication 10(3):191-202.
Higginbotham, D. J., G. W. Lesher, B. J. Moulton, and G. J. Rinkus. 2002. Automated data logging in augmentative communication. In Emerging and accessible telecommunications, information and healthcare technologies—engineering challenges in enabling universal access, edited by J. Winters, C. Robinson, R. Simpson, and G. Vanderheiden. Arlington, VA: RESNA Press. Pp. 82-94.
Higginbotham, D. J., H. Shane, S. Russell, and K. Caves. 2007. Access to AAC: Present, past, and future. Augmentative and Alternative Communication 23(3):243-257.
Hilgers, F., A. Ackerstaff, N. Aaronson, P. Schouwenburg, and N. Van Zandwijk. 1990. Physical and psychosocial consequences of total laryngectomy. Clinical Otolaryngology & Allied Sciences 15(5):421-425.
Hill, K. 2004. Augmentative and alternative communication and language: Evidence-based practice and language activity monitoring. Topics in Language Disorders 24(1):18-30.
Hill, K. 2010. Advances in augmentative and alternative communication as quality-of-life technology. Physical Medicine and Rehabilitation Clinics of North America 21(1):43-58.
Hill, K., and V. Corsi. 2012. The role of speech-language pathologists in assistive technology assessments. In Assistive technology assessment: A handbook for professionals in disability, rehabilitation and health professions, edited by M. J. Scherer and S. Federici. London, UK: Taylor & Francis Group. Pp. 301-336.
Hill, K., R. Lytton, and S. Glennen. 1998. The role of manufacturers’ consultants in delivering AAC services [poster]. Paper presented at 1998 International Society for Augmentative and Alternative Communication (ISAAC) Conference, Dublin, Ireland, August 24-27.
Hill, K., B. Romich, and G. Vanderheiden. 2010. Augmentative communication. In The biomedical engineering handbook, 4th ed., edited by J. D. Bronzino. Boca Raton, FL: CRC Press.
Hill, K., T. Kovacs, and S. Shin. 2014. Reliability of brain-computer interface language sample transcription procedures. Journal of Rehabilitation Research & Development 51(4):579-590.
Hoag, L., J. Bedrosian, and K. McCoy. 2009. Theory-driven AAC practices with adults who use utterance-based systems: The case of conversational rule violations. Perspectives on Augmentative and Alternative Communication 18(4):146-153.
Hochstein, D. D., M. A. McDaniel, S. Nettleton, and K. H. Neufeld. 2003. The fruitfulness of a nomothetic approach to investigating AAC: Comparing two speech encoding schemes across cerebral palsied and nondisabled children. American Journal of Speech-Language Pathology 12(1):110-120.
Hoffman, L., J. Bolton, and S. Ferry. 2008. Passy-Muir Speaking Valve use in a children’s hospital: An interdisciplinary approach. Perspectives on Voice and Voice Disorders 18:76-86.
Hoover, J., J. Reichle, D. Van Tassel, and D. Cole. 1987. The intelligibility of synthesized speech: Echo II versus Votrax. Journal of Speech, Language, and Hearing Research 30:s425-s431.
Hourcade, J., T. E. Pilotte, E. West, and P. Parette. 2004. A history of augmentative and alternative communication for individuals with severe and profound disabilities. Focus on Autism and Other Developmental Disabilities 19(4):235-244.
Howlin, P., J. Alcock, and C. Burkin. 2005. An 8 year follow-up of a specialist supported employment service for high-ability adults with autism or Asperger syndrome. Autism 9(5):533-549.
Hustad, K. C. 2008. Comprehension and intelligibility scores for speakers with dysarthria. Journal of Speech, Language, and Hearing Research 51:562-573.
Hustad, K. C., and L. K. Miles. 2010. Alignment between augmentative and alternative communication needs and school-based speech-language services provided to young children with cerebral palsy. Early Childhood Services (San Diego, California) 4(3):129-140.
Hyatt, G. W. 2011. The iPad: A cool communicator on the go. Perspectives on Augmentative and Alternative Communication 20(1):24-27.
Jesse, M. T., N. Fei, E. Goldstein, I. Rakitin, L. Shama, F. Hall, and T. Ghanem. 2015. Head and neck cancer screenings and human papillomavirus knowledge across diverse suburban and urban populations. American Journal of Otolaryngology 36(2):223-229.
Johnson, J. M., E. Inglebret, C. Jones, and J. Ray. 2006. Perspectives of speech language pathologists regarding success versus abandonment of AAC. Augmentative and Alternative Communication 22(2):85-99.
Jreige, C., R. Patel, and H. T. Bunnell. 2009. VocaliD: Personalizing text-to-speech synthesis for individuals with severe speech impairment. ASSETS ’09: Proceedings of the 11th International ACM SIGACCESS Conference on Computers and Accessibility, Pittsburgh, PA, October 25-28.
Judge, S., and G. Townend. 2013. Perceptions of the design of voice output communication aids. International Journal of Language and Communication Disorders 48(4):366-381.
Jutai, J., N. Ladak, R. Schuller, S. Naumann, and V. Wright. 1996. Outcomes measurement of assistive technologies: An institutional case study. Assistive Technology 8(2):110-120.
Kagohara, D., L. van der Meer, S. Ramdoss, M. F. O’Reilly, G. E. Lancioni, T. N. Davis, M. Rispoli, R. Lang, P. B. Marschik, D. Sutherland, V. A. Green, and J. Sigafoos. 2013. Using iPods® and iPads® in teaching programs for individuals with developmental disabilities: A systematic review. Research in Developmental Disabilities 34(1):147-156.
Kent, R. D., G. Weismer, J. F. Kent, and J. C. Rosenbek. 1989. Toward phonetic intelligibility testing in dysarthria. Journal of Speech and Hearing Disorders 54(4):482-499.
Kent-Walsh, J., and D. McNaughton. 2009. Communication partner instruction in AAC: Present practices and future directions. Augmentative and Alternative Communication 21(3):195-204.
King, T. 2000. Modern Morse code in rehabilitation and education. Boston, MA: Allyn & Bacon.
King, J. M., N. Alarcon, and M. A. Rogers. 2007. Primary progressive aphasia. In Augmentative communication strategies for adults with acute or chronic medical conditions, edited by D. R. Beukelman, K. L. Garrett, and K. M. Yorkston. Baltimore, MD: Paul H. Brookes Publishing Co. Pp. 207-241.
King, M. R., C. Binger, and J. Kent-Walsh. 2015. Using dynamic assessment to evaluate the expressive syntax of children who use augmentative and alternative communication. Augmentative and Alternative Communication 31(1):1-14.
Koul, R., and L. Lloyd. 1994. Survey of professional preparation in augmentative and alternative communication (AAC) in speech-language pathology and special education programs. American Journal of Speech-Language Pathology 3(3):12-22.
Kresic, S., M. Veselinovic, G. Mumovic, and S. M. Mitrovic. 2015. Possible factors of success in teaching esophageal speech. Medicinski Pregled (Medical Review) 68(1-2):5-9.
Lawthers, A. G., G. S. Pransky, L. E. Peterson, and J. H. Himmelstein. 2003. Rethinking quality in the context of persons with disability. International Journal for Quality in Health Care 15(4):287-299.
Leder, S. B. 1994. Perceptual rankings of speech quality produced with one-way tracheostomy speaking valves. Journal of Speech, Language, and Hearing Research 37(6):1308-1312.
Lesher, G., B. Moulton, and D. J. Higginbotham. 1998. Techniques for augmenting scanning communication. Augmentative and Alternative Communication 14(2):81-101.
Lichtman, S. W., I. L. Birnbaum, M. R. Sanfilippo, J. T. Pellicone, W. J. Damon, and M. L. King. 1995. Effect of a tracheostomy speaking valve on secretions, arterial oxygenation, and olfaction: A quantitative evaluation. Journal of Speech, Language, and Hearing Research 38(3):549-555.
Light, J. C. 2003. Shattering the silence: Development of communicative competence by individuals who use AAC. In Communicative competence for individuals who use AAC: From research to effective practice, edited by J. C. Light, D. R. Beukelman, and J. Reichle. Baltimore, MD: Paul H. Brookes Publishing Co. Pp. 3-38.
Light, J. C., and K. Drager. 2007. AAC technologies for children with complex communication needs: State of the science and future research directions. Augmentative and Alternative Communication 23(3):204-216.
Light, J. C., and D. McNaughton. 2012. The changing face of augmentative and alternative communication: Past, present, and future challenges. Augmentative and Alternative Communication 28(4):197-204.
Light, J. C., and D. McNaughton. 2013. Putting people first: Re-thinking the role of technology in augmentative and alternative communication intervention. Augmentative and Alternative Communication 29(4):299-309.
Light, J. C., and D. McNaughton. 2014. Communicative competence for individuals who require augmentative and alternative communication: A new definition for a new era of communication? Augmentative and Alternative Communication 30(1):1-18.
Light, J. C., B. Stoltz, and D. McNaughton. 1996. Community-based employment: Experiences of adults who use AAC. Augmentative and Alternative Communication 12(4):215-229.
Light, J. C., C. Binger, T. L. Agate, and K. N. Ramsay. 1999. Teaching partner-focused questions to individuals who use augmentative and alternative communication to enhance their communicative competence. Journal of Speech, Language, and Hearing Research 42(1):241-255.
Lilienfeld, M., and E. Alant. 2009. The social interaction of an adolescent who uses AAC: The evaluation of a peer-training program. Augmentative and Alternative Communication 21(4):278-294.
Lund, S. K., and J. Light. 2006. Long-term outcomes for individuals who use augmentative and alternative communication: Part I—What is a “good” outcome? Augmentative and Alternative Communication 22(4):284-299.
Marics, M. A., and B. H. Williges. 1988. The intelligibility of synthesized speech in data inquiry systems. Human Factors 30(6):719-732.
Matthews, R. 2001. A survey to identify therapists’ high-tech AAC knowledge, application, and training. International Journal of Language and Communication Disorders 36(Suppl.):64-69.
McAuliffe, M. J., S. Carpenter, and C. Moran. 2010. Speech intelligibility and perceptions of communication effectiveness by speakers with dysarthria following traumatic brain injury and their communication partners. Brain Injury 24(12):1408-1415.
McCarthy, H. 1986. Making it in able-bodied America: Career development in young adults with physical disabilities. Journal of Applied Rehabilitation Counselling 17(4):30-38.
McDuffie, A., A. Oakes, W. Machalicek, M. Ma, L. Bullard, S. Nelson, and L. Abbeduto. 2016. Early language intervention using distance video-teleconferencing: A pilot study of young boys with Fragile X syndrome and their mothers. American Journal of Speech-Language Pathology 25(1):46-66.
McGrory, A. 2011. Communicating with head and neck cancer patients. ORL—Head and Neck Nursing 29(3):7-11.
McNaughton, D., and A. Arnold. 2013. Supporting positive employment outcomes for individuals who use AAC. Perspectives on Augmentative and Alternative Communication 22(3):164-172.
McNaughton, D., and D. N. Bryen. 2007. AAC technologies to enhance participation and access to meaningful societal roles for adolescents and adults with developmental disabilities who require AAC. Augmentative and Alternative Communication 23(3):217-229.
McNaughton, D., and J. Light. 2013. The iPad and mobile technology revolution: Benefits and challenges for individuals who require augmentative and alternative communication. Augmentative and Alternative Communication 29(2):107-116.
McNaughton, D., and L. Richardson. 2013. Supporting positive employment outcomes for individuals with autism who use AAC. Perspectives on Augmentative and Alternative Communication 22(3):164-172.
McNaughton, D., K. Fallon, J. Tod, F. Weiner, and J. Neisworth. 1994. Effect of repeated listening experiences on the intelligibility of synthesized speech. Augmentative and Alternative Communication 10(3):161-168.
McNaughton, D., J. Light, and L. Groszyk. 2001. “Don’t give up”: Employment experiences of individuals with amyotrophic lateral sclerosis who use augmentative and alternative communication. Augmentative and Alternative Communication 17(3):179-195.
McNaughton, D., J. C. Light, and K. Arnold. 2002. “Getting your wheel in the door”: Successful full-time employment experiences of individuals with cerebral palsy who use augmentative and alternative communication. Augmentative and Alternative Communication 17(2):59-76.
McNaughton, D., T. Rackensperger, D. Dorn, and N. Wilson. 2014. “Home is at work and work is at home”: Telework and individuals who use augmentative and alternative communication. Work 48(1):117-126.
Mirenda, P. 2003. Toward functional augmentative and alternative communication for students with autism: Manual signs, graphic symbols, and voice output communication aids. Language, Speech, and Hearing Services in Schools 34(3):203-216.
Mirenda, P., and D. R. Beukelman. 1987. A comparison of speech synthesis intelligibility with listeners from three age groups. Augmentative and Alternative Communication 3(3):120-128.
Mirenda, P., and D. R. Beukelman. 1997. A comparison of intelligibility among natural speech and seven speech synthesizers and listeners from three age groups. Augmentative and Alternative Communication 6(1):61-68.
Mizuko, M., J. Reichle, A. Ratcliff, and J. Esser. 1994. Effects of selection techniques and array sizes on short-term visual memory. Augmentative and Alternative Communication 10(4):237-244.
Müller, J., G. K. Wenning, M. Verny, A. McKee, K. R. Chaudhuri, K. Jellinger, W. Poewe, and I. Litvan. 2001. Progression of dysarthria and dysphagia in postmortem-confirmed parkinsonian disorders. Archives of Neurology 58(2):259-264.
Nelson, P., K. Kohnert, S. Sabur, and D. Shaw. 2005. Classroom noise and children learning a second language: Double jeopardy? Language, Speech, and Hearing Services in Schools 36(3):219-229.
Netsell, R. 1982. Speech motor control and selected neurologic disorders. In Speech motor control, edited by S. Grillner, B. Lindblom, J. Lubker, and A. Persson. Oxford, UK: Pergamon Press. Pp. 247-261.
Newell, A., S. Langer, and M. Hickey. 1998. The role of natural language processing in alternative and augmentative communication. Natural Language Engineering 4(1):1-16.
Nordness, A. S., L. J. Ball, S. Fager, D. R. Beukelman, and G. L. Pattee. 2010. Late AAC assessment for individuals with amyotrophic lateral sclerosis. Journal of Medical Speech-Language Pathology 18(1):48-54.
O’Keefe, B. M., N. B. Kozak, and R. Schuller. 2007. Research priorities in augmentative and alternative communication as identified by people who use AAC and their facilitators. Augmentative and Alternative Communication 23(1):89-96.
Oommen, E., and J. McCarthy. 2014. Natural speech and AAC intervention in childhood motor speech disorders: Not an either/or situation. Perspectives on Augmentative and Alternative Communication 23(3):117-123.
Oshrin, S. E., and J. A. Siders. 1987. The effect of word predictability on the intelligibility of computer synthesized speech. Journal of Computer-Based Instruction 14(3):89-90.
Patel, R., G. Meltzner, M. Suen, A. Nilsson, G. Rahhal, S. Bhandari, and R. Schweiker. 2015. VocaliD™ human voicebank app. Belmont, MA: VocaliD™.
Perry, A. R., and M. A. Shaw. 2000. Evaluation of functional outcomes (speech, swallowing and voice) in patients attending speech pathology after head and neck cancer treatment(s): Development of a multi-centre database. The Journal of Laryngology & Otology 114(8):605-615.
Perry, A., S. Reilly, S. Cotton, K. Bloomberg, and H. Johnson. 2004. A demographic survey of people who have a disability and complex communication needs in Victoria, Australia. Asia Pacific Journal of Speech, Language, and Hearing 9(3):259-271.
Raghavendra, P., J. Bornman, M. Granlund, and E. Björck-Åkesson. 2007. The World Health Organization’s International Classification of Functioning, Disability, and Health: Implications for clinical and research practice in the field of augmentative and alternative communication. Augmentative and Alternative Communication 23(4):349-361.
Ratcliff, A. 1994. Comparison of relative demands implicated in direct selection and scanning: Considerations from normal children. Augmentative and Alternative Communication 10(2):67-74.
Ratcliff, A., and D. R. Beukelman. 1995. Pre-professional preparation in augmentative and alternative communication: State-of-the-art report. Augmentative and Alternative Communication 11(2):61-73.
Reynolds, M. E., Z. S. Bond, and D. Fucci. 1996. Synthetic speech intelligibility: Comparison of native and non-native speakers of English. Augmentative and Alternative Communication 12(1):32-36.
Robinson, N. B., and K. C. Sadao. 2005. Person-focused learning: A collaborative teaching model to prepare future AAC professionals. Augmentative and Alternative Communication 21(2):149-163.
Rodriguez, C. S., M. Rowe, L. Thomas, J. Shuster, B. Koeppel, and P. Cairns. 2016. Enhancing the communication of suddenly speechless critical care patients. American Journal of Critical Care 25(3):e40-e47.
Romich, B., G. Vanderheiden, and K. Hill. 2005. Augmentative communication. In The biomedical engineering handbook, 3rd ed., edited by J. D. Bronzino. Boca Raton, FL: CRC Press.
Romski, M. A., and R. A. Sevcik. 2005. Augmentative communication and early intervention: Myths and realities. Infants and Young Children 18(3):174-185.
Rousseau, B., M. Gutmann, I. F. T. Mau, D. Francis, J. Johnson, C. K. Novaleski, K. N. Vinson, and C. G. Garrett. 2015. Randomized controlled trial of supplemental augmentative and alternative communication versus voice rest alone after phonomicrosurgery. Otolaryngology—Head and Neck Surgery 152(3):494-500.
Rowland, C., and P. D. Schweigert. 2003. Cognitive skills and AAC. In Communicative competence for individuals who use AAC: From research to effective practice, edited by J. C. Light, D. R. Beukelman, and J. Reichle. Baltimore, MD: Paul H. Brookes Publishing Co. Pp. 241-275.
Saltürk, Z., A. Arslanoglu, E. Özdemir, G. Yildirim, I. Aydogdu, T. L. Kumral, G. Berkiten, Y. Atar, and Y. Uyar. 2016. How do voice restoration methods affect the psychological status of patients after total laryngectomy? HNO 64(3):163-168.
Satterfield, L. 2015. Voice of reimbursement: Medicare coverage for vocal disorders. Perspectives on Voice and Voice Disorders 25(3):116-123.
Scherer, M. J. 2002. Assistive technology: Matching device and consumer for successful rehabilitation. Washington, DC: American Psychological Association.
Scherer, M. J. 2005. Living in the state of stuck: How assistive technology impacts the lives of people with disabilities. 4th ed. Brookline, MA: Brookline Books.
Scherer, M. J., and G. Craddock. 2002. Matching person and technology (MPT) assessment process. Technology and Disability 14(3):125-131.
Schlosser, R. W. 2003. Outcome measurement in AAC. In Communicative competence for individuals who use AAC, edited by J. Light, D. R. Beukelman, and J. Reichle. Baltimore, MD: Paul H. Brookes Publishing Co. Pp. 479-513.
Segalman, R. 2011. AAC, aging, and telephone relay access technology. Disability Studies Quarterly 31(4).
Shane, H., S. Blackstone, G. Vanderheiden, M. Williams, and F. DeRuyter. 2012. Using AAC technology to access the world. Assistive Technology 24(1):3-13.
Singer, S., H. Danker, A. Dietz, B. Hornemann, S. Koscielny, J. Oeken, C. Matthaus, H. J. Vogel, and O. Krauss. 2008. Screening for mental disorders in laryngeal cancer patients: A comparison of 6 methods. Psychooncology 17(3):280-286.
Slowiaczek, L. M., and H. C. Nusbaum. 1985. Effects of speech rate and pitch contour on the perception of synthetic speech. Human Factors 27(6):701-712.
Smith, R. O. 1996. Measuring the outcomes of assistive technology: Challenge and innovation. Assistive Technology 8(2):71-81.
Snell, M. E., N. Brady, L. Mclean, B. T. Ogletree, E. Siegel, L. Sylvester, B. Mineo, D. Paul, M. A. Romski, and R. Sevcik. 2010. Twenty years of communication intervention research with individuals who have severe intellectual and developmental disabilities. American Journal of Intellectual and Developmental Disabilities 115(5):364-380.
SSA (U.S. Social Security Administration). 2017. DI 24535.015 Evaluation of speech impairments. Program Operations Manual System (POMS). https://secure.ssa.gov/poms.nsf/lnx/0424535015 (accessed January 18, 2017).
SSA. n.d. Disability evaluation under Social Security—listing of impairments—adult listings (Part A). https://www.ssa.gov/disability/professionals/bluebook/AdultListings.htm (accessed January 18, 2017).
Staffieri, A., B. E. Mostafea, B. T. Varghese, E. D. Kitcher, M. Jalisi, J. J. Fagan, C. Staffieri, and G. Marioni. 2006. Cost of tracheoesophageal prostheses in developing countries. Facing the problem from an internal perspective. Acta Oto-Laryngologica 126(1):4-9.
Stern, S. E., C. M. Chobany, A. A. Beam, B. N. Hoover, T. T. Hull, M. Linsenbigler, C. Makdad-Light, and C. N. Rubright. 2017. Use of speech generating devices can improve perception of qualifications for skilled, verbal, and interactive jobs. Work [E-pub ahead of print].
Sullivan, M., D. Beukelman, and P. Mathy-Laikko. 1993. Situational communicative effectiveness of rehabilitated individuals with total laryngectomies. Journal of Medical Speech-Language Pathology 1(1):73-80.
Sullivan, M., C. Gaebler, and L. J. Ball. 2007a. AAC for people with head and neck cancer. In Augmentative communication strategies for adults with acute or chronic medical conditions, edited by D. R. Beukelman, K. L. Garrett, and K. M. Yorkston. Baltimore, MD: Paul H. Brookes Publishing Co. Pp. 347-367.
Sullivan, M., C. Gaebler, and L. J. Ball. 2007b. Supporting persons with chronic communication limitations: Head & neck cancer. Paper presented at American Speech-Language-Hearing Association Convention, Boston, MA.
Tang, C. G., and C. F. Sinclair. 2015. Voice restoration after total laryngectomy. Otolaryngologic Clinics of North America 48(4):687-702.
THANC (Thyroid Head & Neck Cancer) Foundation. 2017. Head & neck cancer guide—Speech and swallowing rehabilitation. http://www.headandneckcancerguide.org/adults/cancer-diagnosis-treatments/surgery-and-rehabilitation/surgeries-to-aid-breathing-and-eating/speech-and-swallowing-rehabilitation/ (accessed April 6, 2017).
Therrien, M. C., and J. C. Light. 2016. Using the iPad to facilitate interaction between preschool children who use AAC and their peers. Augmentative and Alternative Communication 32(3):163-174.
Thistle, J. J., and K. M. Wilkinson. 2009. The effects of color cues on typically developing preschoolers’ speed of locating a target line drawing: Implications for augmentative and alternative communication display design. American Journal of Speech-Language Pathology 18(3):231-240.
Thistle, J. J., and K. M. Wilkinson. 2013. Working memory demands of aided augmentative and alternative communication for individuals with developmental disabilities. Augmentative and Alternative Communication 29(3):235-245.
Thistle, J. J., and K. M. Wilkinson. 2015. Building evidence-based practice in AAC display design for young children: Current practices and future directions. Augmentative and Alternative Communication 31(2):124-136.
Trnka, K., J. McCaw, C. Pennington, and K. McCoy. 2008. Word prediction and communication rate in AAC. Proceedings of the IASTED International Conference on Telehealth/Assistive Technologies, Baltimore, MD, April 16-18.
Ulmer, E., K. Hux, J. Brown, T. Nelms, and C. Reeder. 2016. Using self-captured photographs to support the expressive communication of people with aphasia. Aphasiology 1-22.
van der Merwe, A. 2009. A theoretical framework for the characterization of pathological speech sensorimotor control. In Clinical management of sensorimotor speech disorders, 2nd ed., edited by M. R. McNeil. New York: Thieme. Pp. 3-18.
Venkatagiri, H. S. 2003. Segmental intelligibility of four currently used text-to-speech synthesis methods. Journal of the Acoustical Society of America 113(4 Pt. 1):2095-2104.
Vincer, M. J., A. C. Allen, K. S. Joseph, D. A. Stinson, H. Scott, and E. Wood. 2006. Increasing prevalence of cerebral palsy among very preterm infants: A population-based study. Pediatrics 118(6):e1621-e1626.
Wagner, B., and H. M. Jackson. 2006. Developmental memory capacity resources of typical children retrieving picture communication symbols using direct selection and visual linear scanning with fixed communication displays. Journal of Speech, Language, and Hearing Research 49(1):113-126.
Ward, E. C., S. K. Koh, J. Frisby, and R. Hodge. 2003. Differential modes of alaryngeal communication and long-term voice outcomes following pharyngolaryngectomy and laryngectomy. Folia Phoniatrica et Logopaedica 55(1):39-49.
Wehman, P., S. Lau, A. Molinelli, V. Brooke, K. Thompson, C. Moore, and M. West. 2012. Supported employment for young adults with autism spectrum disorder: Preliminary data. Research and Practice for Persons with Severe Disabilities 37(3):160-169.
WHO (World Health Organization). 2002. Towards a common language for functioning, disability and health: ICF. http://www.who.int/classifications/icf/training/icfbeginnersguide.pdf (accessed November 16, 2016).
Williams, M. B., C. Krezman, and D. McNaughton. 2008. “Reach for the stars”: Five principles for the next 25 years of AAC. Augmentative and Alternative Communication 24(3):194-206.
Wisenburn, B., and D. J. Higginbotham. 2009. Participant evaluations of rate and communication efficacy of an AAC application using natural language processing. Augmentative and Alternative Communication 25(2):78-89.
Wodka, E. L., P. Mathy, and L. Kalb. 2013. Predictors of phrase and fluent speech in children with autism and severe language delay. Pediatrics 131(4):e1128-e1134.
Wolpaw, J. R., N. Birbaumer, W. J. Heetderks, D. J. McFarland, P. H. Peckham, G. Schalk, E. Donchin, L. A. Quatrano, C. J. Robinson, and T. M. Vaughan. 2000. Brain-computer interface technology: A review of the first international meeting. IEEE Transactions on Neural Systems and Rehabilitation Engineering 8(2):164-173.
Yamagishi, J., C. Veaux, S. King, and S. Renals. 2012. Speech synthesis technologies for individuals with vocal disabilities: Voice banking and reconstruction. Acoustical Science and Technology 33(1):15.
Yang, C. S., C. H. Yang, L. Y. Chuang, and C. H. Yang. 2009. A wireless Internet interface for person with physical disability. Mathematical and Computer Modelling 50(1-2):72-80.
Yorkston, K. M., and D. R. Beukelman. 1981. Communication efficiency of dysarthric speakers as measured by sentence intelligibility and speaking rate. Journal of Speech and Hearing Disorders 46(3):296-301.
Yorkston, K. M., P. A. Dowden, and D. R. Beukelman. 1992. Intelligibility measurement as a tool in the clinical management of dysarthric speakers. In Intelligibility in speech disorders: Theory, measurement and management, edited by R. D. Kent. Philadelphia, PA: John Benjamins Publishing Company. Pp. 265-285.
Yorkston, K. M., E. A. Strand, and M. R. T. Kennedy. 1996. Comprehensibility of dysarthric speech. American Journal of Speech-Language Pathology 5(1):55.
Yorkston, K. M., M. S. Bourgeois, and C. R. Baylor. 2010a. Communication and aging. Physical Medicine and Rehabilitation Clinics of North America 21(2):309-319.
Yorkston, K. M., D. R. Beukelman, E. A. Strand, and M. Hakel. 2010b. Management of motor speech disorders in children and adults. 3rd ed. Austin, TX: ProEd.
Zabala, J., G. Bowser, and J. Korsten. 2005. SETT and ReSETT: Concepts for AT implementation. Closing the Gap 23(5):1, 10-11.
ANNEX TABLE 6-1*
Summary of Aided Augmentative and Alternative Communication (AAC) Products and Technologies^a
| AAC/AAC Technology Type, CMS Code | Cost Range |
|---|---|
| NO TECHNOLOGY — Healthcare Common Procedure Coding System (HCPCS) code not applicable (NA). Examples: alphabet board, symbol sets, transparent gaze board/eye transfer (ETRAN), topic boards. Images: A. E-TRAN Topic Board; B. EZ Board™ | $1–$100 |
* The images in Annex Table 6-1 serve as examples of device categories only and should not be considered an endorsement of specific products or manufacturers.
| Indications for Use | Relative Contraindications | Benefits | Limitations |
|---|---|---|---|
| Basic messaging; visual output; establish topic; spell messages; partner-supported communication | Visual impairment; requires verbal output; complex or detailed messaging needs | Lightweight; simple to create and use; can be replaced when damaged; can be created for multiple contexts; digitized speech is highly intelligible; Plexiglass board has increased durability | No voice output; no telephone interaction; limited independence; difficult with limited literacy; fixed display; printer required and may require symbol software; limited clinical support available; limited funding available |
| AAC/AAC Technology Type, CMS Code | Cost Range |
|---|---|
| LOW TECHNOLOGY — HCPCS NA. Example: MegaBee Eye Gaze Communication Device. Image: C. MegaBee Eye Gaze Communication Device | $1,260 |
DIGITIZED VOICE OUTPUT

(HCPCS codes below are assigned by the Centers for Medicare & Medicaid Services [CMS].)
| Indications for Use | Relative Contraindications | Benefits | Limitations |
|---|---|---|---|
| Eye gaze selection; text output | Low literacy | Lightweight; portable; battery-operated; dual LCD screen lets both communicators see the message; reduces the communication partner's effort by displaying the message on the LCD screen rather than writing it | Partner-dependent communication; limited funding available |
| Indications for Use | Relative Contraindications | Benefits | Limitations |
|---|---|---|---|
| Basic, brief messages; prerecorded messages; supports greeting, name/labeling, simple requesting, protesting | Need to formulate novel messages; unimpaired adult cognitive function; literate; complex communication | Lightweight; portable; relatively inexpensive; battery-operated; durable design; multilingual; simple message recording; assessment and treatment codes established for speech-language pathologist (SLP) | Requires an age- and gender-matched communication partner to record messages (partner-dependent); limited conversations; no spontaneous messages |
| AAC/AAC Technology Type, CMS Code | Cost Range |
|---|---|
| ≤8 minutes recording time — HCPCS E2500. Examples: BIGmack, LITTLE Step-by-Step, Talking Brix, Sequencer. Image: D. LITTLE Step-by-Step | $130–$3,000 |
| 9–20 minutes recording time — HCPCS E2502. Examples: SuperTalker, QuickTalker23, VoicePal Levels. Image: E. SuperTalker | $300–$500 |
| Indications for Use | Relative Contraindications | Benefits | Limitations |
|---|---|---|---|
| (≤8 minutes) Single messages; limited communication needs; initiate interactions; social comments; call attention; familiar communication partners | Long messages; multiple messages; message formulation; multiple environments | | |
| (9–20 minutes) Limited need for multiple messages; supports choice making from array; combine thoughts into utterances; direct others | Multiple conversation partners | | |
| AAC/AAC Technology Type, CMS Code | Cost Range |
|---|---|
| 21–40 minutes recording time — HCPCS E2504. Example: GoTalk Express32. Image: F. Express32 | $600–$1,500 |
| >40 minutes recording time — HCPCS E2506. Examples: Talara32, Logan ProxTalker, Smart/128VSD. Image: G. Smart/128VSD | $400–$4,100 |
| Indications for Use | Relative Contraindications | Benefits | Limitations |
|---|---|---|---|
| (21–40 minutes) Multiple basic messages; narrative storage and retell; use messages to describe known places/activities | | | |
| (>40 minutes) Multiple basic and detailed messages; supports introductions; lengthier message content | Unknown contexts, activities | | |
SYNTHESIZED VOICE OUTPUT

| AAC/AAC Technology Type, CMS Code | Cost Range |
|---|---|
| Physical contact and spelling — HCPCS E2508. Examples: LightWriter SL40, Allora2, TextSpeak TS04. Image: H. LightWriter SL40 Connect | $400–$7,000 |
| Indications for Use | Relative Contraindications | Benefits | Limitations |
|---|---|---|---|
| Provides full formulation for individually unique messages; supports telephone interaction, conversation, complex message formulation, personal narratives, past-event messaging, clarification, self-talk | Inability to formulate or comprehend complex interactions; distractibility with dynamic displays | High-quality synthesized voice output; text-to-speech; some multilingual; unlimited messages, contexts, and communication partners; independent message formulation; formulate and interact with complex language; personal choice of synthesized voice; rechargeable battery and/or AC connection | Slow communication rate in time-sensitive interactions; some reduction in intelligibility of synthesized speech; wet, dusty conditions problematic |
| Keyboard skills (most QWERTY); typed message formulation; message formulation by (a) spelling every word, (b) device speaking word by word, or (c) person selecting "Enter" to deliver full message | Upper-extremity movement limitations; limited literacy | Familiar format; small, lightweight; portable; minimal training necessary; text representation; rate acceleration; assessment and treatment codes established for SLP (92607, 92608, 92609) | Direct keyboard access only; limited accommodation to access in progressive disease; communication rate limited by typing rate; hands occupied for talking, unavailable for other activities |
| AAC/AAC Technology Type, CMS Code | Cost Range |
|---|---|
| Multiple formulation and access — HCPCS E2510. Examples: Wego, NovaChat, T7-15, Accent, ComLink, ProSlate, Enable Eyes. Image: I. Accent 1400 | $2,000–$16,000 |
| Multiple formulation and access — HCPCS NA. Examples: Apple iPad, Android tablet. Image: J. iPad running Predictable™ | $50–$1,300 |
| Indications for Use | Relative Contraindications | Benefits | Limitations |
|---|---|---|---|
| Touchscreen, keyboard, alternative access; display options for visual/cognitive needs; message formulation by spelling, using word prediction, or selecting from a message array, with the device speaking word by word or upon selection to deliver the full message | Limited language needs; difficulty navigating dynamic display | Unlimited content; symbol, photo, visual scene, and text representation; rate acceleration strategies; dynamic display; direct and scanning access; assessment and treatment codes established for SLP | Many are large, with limited portability unless mounted to a structure (table, wheelchair) for transport; synthesized output can be supplemented with digitized messages; communication software is integrated in speech-generating device (SGD) |
| Touchscreen, some alternative access options; mainstream disability access options | | Unlimited content; symbol, photo, and text representation; rate acceleration strategies; dynamic display; relatively inexpensive | Limited funding options; limited assessment, treatment, and professional support; synthesized output can be supplemented with digitized messages; physical disability access options are limited; limited device–app integration support |
SOFTWARE
| Indications for Use | Relative Contraindications | Benefits | Limitations |
|---|---|---|---|
| Provides language supports for communication based on language skills and needs | NA | Provides communication platform for devices; includes interface for alternative access, symbol and message management, and rate acceleration; some multilingual | Some compatibility issues (manufacturer- or OS-proprietary use); voice synthesizers offered as software; requires a device for voice output activation; professional knowledge of communication needs essential to selecting the most appropriate software/app |
| AAC/AAC Technology Type, CMS Code | Cost Range |
|---|---|
| AAC TECHNOLOGY software — HCPCS E2511. Examples: Unity, WordPower84, Communicator, Speaking Dynamically Pro, GoTalk, Boardmaker Plus. Images: K. Boardmaker Plus; L. WordPower84 | $100–$750 |
| Indications for Use | Relative Contraindications | Benefits | Limitations |
|---|---|---|---|
| Require message overlays for digitized devices; communication format on synthesized devices | | Provide symbol sets and a framework for communication; support digitized and synthesized communication methods; often packaged with SGD | Varied levels of training necessary for use on device; some individuals will require training to understand a new representational system (i.e., using pictures to communicate) |
| AAC/AAC Technology Type, CMS Code | Cost Range |
|---|---|
| Apps — HCPCS NA. Examples: Proloquo2Go, Proloquo4Text, Verbally, Compass, Predictable. Images: M. Proloquo2Go®; N. Proloquo4Text® | $0–$500 |
| Indications for Use | Relative Contraindications | Benefits | Limitations |
|---|---|---|---|
| Require message overlays for tablet systems | Because of frequent changes to apps, individuals with limited acceptance of new formats or updates may have difficulty | Readily available in online marketplace; relatively low cost | Some apps developed for a single individual, so quality varies in the market; limited access to support for use, training, and troubleshooting; with the large number of apps available, many professionals have difficulty staying updated on options |
ACCESSORIES — HCPCS E2599

| AAC/AAC Technology Type | Cost Range |
|---|---|
| Access switches. Images: O. Micro Light switch; P. Jelly Bean Twist switch | $20–$2,000 |
| Indications for Use | Relative Contraindications | Benefits | Limitations |
|---|---|---|---|
| Provide access to messaging; support for evaluating multiple access method needs (e.g., fatigue, disease progression, context) (Fager et al., 2012) | | Accommodate multiple physical disabilities to gain access to communication | Limited professional support available (e.g., SLP with AAC specialization, occupational/physical therapist [OT/PT] with switch experience) |
| Indirect access through scanning, by activating a switch when the desired message is reached; direct access through Morse code, by activating one or two switches to formulate a message; direct access through head movement and dwell on the desired message | May be cognitively taxing (scanning) | Relatively inexpensive | Slow message formulation, particularly with scanning; may need supports during transfers to remove accessories or mounts; minimal repairs and maintenance available, so most must be replaced when damaged |
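The row–column scanning access described above can be sketched as a simple simulation (illustrative only, with a hypothetical message grid; no vendor's implementation is implied): the device highlights each row in turn, a switch press selects a row, then each item in that row is highlighted until a second press selects the target. Counting highlight intervals makes the noted slowness concrete.

```python
# Minimal sketch of single-switch row-column scanning access.
# The grid and step counting are illustrative assumptions, not a
# specific AAC product's behavior.
def scan_steps(grid, target):
    """Return the number of highlight steps needed to reach `target`:
    rows scanned to select the row, then items scanned within it."""
    for r, row in enumerate(grid):
        if target in row:
            return (r + 1) + (row.index(target) + 1)
    raise ValueError("target not on grid")

grid = [["yes", "no", "help"],
        ["more", "stop", "drink"]]

# Reaching "drink" takes 2 row steps + 3 item steps = 5 highlight
# intervals, versus a single action with direct touch access.
print(scan_steps(grid, "drink"))  # 5
```

Each highlight interval also carries a fixed dwell time, which is why scanning rates fall far below direct-selection rates even on small displays.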
| AAC/AAC Technology Type | Cost Range |
|---|---|
| Eye gaze access. Examples: Eyespeak 12HD, I-series (12+/15+), NuEye™ Tracking System. Image: Q. Eyespeak 12HD | $2,000–$8,000 |

| Indications for Use | Relative Contraindications | Benefits | Limitations |
|---|---|---|---|
| Direct access through eye gaze selection | Blindness; eye movement impairment (e.g., apraxia); consistent inability to calibrate the gaze system | Enables individuals to control an AAC device using eye gaze interaction | Extraneous movements (e.g., those not under personal control, such as chorea/hyperkinetic movement) may interfere with calibration and accuracy |

MOUNTING SYSTEMS — HCPCS E2512
| Indications for Use | Relative Contraindications | Benefits | Limitations |
|---|---|---|---|
| Provide access to the device in various environments (e.g., wheelchair, desk, workstation); secure AAC technology and accessories to mobility devices and seating systems in the home/workplace | Direct access to touchscreen with body; portable device | Transport the communication device; limit fatigue associated with device transport; provide optimal position for access to the device | Limited professional support available (e.g., SLP with AAC specialization, OT/PT with AAC technology mounting experience); minimal repairs and maintenance available, so most must be replaced when damaged; may require support to remove/reposition for safe transfers |
| AAC/AAC Technology Type | Cost Range |
|---|---|
| Device mounts. Examples: wheelchair mount, desk mount, rolling floor mount. Image: R. DaeSSy Rigid Mount, Tech/Talk, Speak & Scan devices | $400–$1,500 |
| Switch/accessory mounts. Image: S. LIGHT-3D table mount with two tubes and three joints with lever (14.4052)^b | $50–$400 |
| Indications for Use | Relative Contraindications | Benefits | Limitations |
|---|---|---|---|
| Secure SGD to mobility devices and seating systems in home and community | Handheld portable device in use | Provide effective transport of heavy AAC devices | May require partner set-up for daily access and repositioning as body shifts location |
| Secure switches/eye gaze units to AAC technology, mobility devices, and seating systems in home and community | Flexible switch placement needed (e.g., on clothing, bedding) | Provide stable, consistent base of support for switch access | Limited switch placement flexibility when in use; may require partner set-up for daily access and with movement that removes the person from switch proximity |

NOTE: NA = not applicable.
^a Technologies depicted in these images were current at the time of this writing.
^b REHAdapt Engineering does not manufacture and has no claim to the switch shown on the mount.
SOURCES: A. Low Tech Solutions 2017; B. Vidatak, LLC; C. E2L Limited; D. AbleNet, Inc. Photo courtesy of AbleNet, Inc.; E. AbleNet, Inc. Photo courtesy of AbleNet, Inc.; F. Attainment Company, Inc.; G. Advanced Multimedia Devices, Inc.; H. Tobii Dynavox. © 2017 Tobii Dynavox. All rights reserved.; I. Prentke Romich Company. © Copyright 2016 Prentke Romich Company. All rights reserved.; J. Predictable™. Copyright © 2017 Therapy Box Limited. All rights reserved.; K. Tobii Dynavox/Mayer-Johnson. The Picture Communication Symbols © 1981–2015 by Mayer-Johnson LLC. All rights reserved worldwide. Used with permission.; L. Prentke Romich Company. © Copyright 2016 Prentke Romich Company. All rights reserved.; M. AssistiveWare. Proloquo2Go® is an AssistiveWare® product. Image used with permission.; N. AssistiveWare. Proloquo4Text® is an AssistiveWare® product. Image used with permission.; O. AbleNet, Inc. Photo courtesy of AbleNet, Inc.; P. AbleNet, Inc. Photo courtesy of AbleNet, Inc.; Q. Talk to Me Technologies; R. Advanced Multimedia Devices, Inc.; S. REHAdapt Engineering GmbH & Co. KG.
ANNEX TABLE 6-2
Augmentative and Alternative Communication Software and Hardware
AAC Language Representation: Text, Alphabetic Symbols

| | Letter-by-Letter Spelling | Word Prediction | Whole Word Display | Letter Coding | Morse Code |
|---|---|---|---|---|---|
| Communication Proficiency | YES | YES | YES | YES | YES |
| Literacy Proficiency | YES | YES | YES | YES | YES |
| Prerecorded Utterances | NO | NO | NO | NO | NO |
| Novel Utterances | YES | YES | YES | YES | YES |
| Vocabulary Selection | Individual recall memory | Individual recall and recognition memory supported by software-embedded dictionaries | Individual recognition memory supported by display of high-frequency words | Individual recall memory; typically personalized for individual | Individual recall memory; often supported with visual display of the code |

Software and Hardware Features

| | Letter-by-Letter Spelling | Word Prediction | Whole Word Display | Letter Coding | Morse Code |
|---|---|---|---|---|---|
| Symbol Set/System | Language-specific alphabet or characters, numbers, and punctuation | Alphabetic whole-word options presented dynamically based on the letter(s) entered | Whole words integrated with alphabet, numbers, and punctuation | Letters and numbers are used to create codes representing messages (e.g., asap = as soon as possible) | Dot and dash sequences represent alphabet, numbers, punctuation, and computer functions |
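The dot-and-dash coding described above can be illustrated with a minimal encoding sketch (the abbreviated letter table is an illustrative assumption covering only a few characters; it is not tied to any particular AAC product):

```python
# Minimal sketch: encoding letters as International Morse Code
# sequences, as used by AAC access methods in which one or two
# switches produce dots and dashes that the device decodes.
MORSE = {
    "a": ".-", "b": "-...", "c": "-.-.", "e": ".",
    "o": "---", "s": "...", "t": "-",
}

def encode(text: str) -> str:
    """Encode known letters as Morse, separating letters with spaces."""
    return " ".join(MORSE[ch] for ch in text.lower() if ch in MORSE)

print(encode("sos"))  # ... --- ...
```

Because frequent letters such as "e" and "t" have the shortest codes, Morse access can be efficient for proficient users despite relying entirely on recall memory.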
AAC Language Representation: Single-Meaning Picture Symbols (first four columns) and Multiple-Meaning Icons (last two columns)

| | One Display | Levels with Changeable Displays | Multiple Displays | Multiple Methods | Icon Sequencing | Multiple Methods |
|---|---|---|---|---|---|---|
| Communication Proficiency | NO | YES | YES | YES | YES | YES |
| Literacy Proficiency | NO | NO | NO | NO | NO | NO |
| Prerecorded Utterances | YES | YES | YES | YES | YES | YES |
| Novel Utterances | NO | NO | NO | NO | YES | YES |
| Vocabulary Selection | Professional/communication partner selection based on high-frequency vocabulary and/or customized based on personal choice, context, activity | Professional/communication partner selection based on high-frequency vocabulary and/or customized based on personal choice, context, activity | Professional/communication partner selection based on high-frequency vocabulary and/or customized based on personal choice, context, activity | Professional/communication partner selection based on high-frequency vocabulary and/or customized based on personal choice, context, activity | Individual recall and recognition memory supported by high-frequency words; extended vocabulary selected based on personal choice, context, activity | Individual recall and recognition memory supported by high-frequency words; extended vocabulary selected based on personal choice, context, activity |
| Symbol Set/System | Photos, line drawings, or color graphic drawings represent words, messages | Photos, line drawings, or color graphic drawings represent words, messages | Photos, line drawings, or color graphic drawings represent words, messages | Photos, line drawings, or color graphic drawings represent words, messages, paired with alphabet | Color graphic drawings that represent words and messages with more than one meaning | Color graphic drawings that represent words and messages with more than one meaning, with alphabet configurations |
AAC Language Representation: Text, Alphabetic Symbols (continued)

| | Letter-by-Letter Spelling | Word Prediction | Whole Word Display | Letter Coding | Morse Code |
|---|---|---|---|---|---|
| Number of Symbols | Language-specific letters or characters, numbers, punctuation, and other keyboard symbols | Not applicable (NA) | NA | NA | Based on language-specific symbol set |
| Organization | Language-specific organization: QWERTY, ABCDEF, AEIOU, DVORAK, etc. | Selection options may be positioned at different display locations | Typically organized based on frequency, with the alphabet on the same or a different page | Customized letter codes, most on a separate page or section of a page | Language-specific organization |
| Number of Display Locations | Based on individual letters or groupings; typically 26 or more, as numbers, punctuation, and control options are available | Typically in addition to the main display; groups of 6, 8, 12, or more words may be presented | Displays with 20 to 144 locations are common | Some are memorized and not displayed | Often a sheet of codes is used to support recall during learning |
AAC Language Representation: Single-Meaning Picture Symbols (first four columns) and Multiple-Meaning Icons (last two columns), continued

| | One Display | Levels with Changeable Displays | Multiple Displays | Multiple Methods | Icon Sequencing | Multiple Methods |
|---|---|---|---|---|---|---|
| Number of Symbols | Based on the number of display locations and the individual's vocabulary needs; typically several thousand symbols are available in communication software, and more may be added | Based on the number of display locations and the individual's vocabulary needs; typically several thousand symbols are available in communication software, and more may be added | Based on the number of display locations and the individual's vocabulary needs; typically several thousand symbols are available in communication software, and more may be added | Based on the number of display locations and the individual's vocabulary needs; typically several thousand symbols are available in communication software, and more may be added | A limited icon set is combined in sequences to represent core vocabulary; several thousand single-meaning symbols represent words | A limited icon set is combined in sequences to represent core vocabulary; several thousand single-meaning symbols represent words, including letters and numbers |
| Organization | Based on individual communication needs: frequency, grammar, activity, topics of conversation | Based on individual communication needs: frequency, grammar, activity, topics of conversation | Based on individual communication needs: frequency, grammar, activity, topics of conversation | Based on individual communication needs: frequency, grammar, activity, topics of conversation | Does not change with selections; icon sequences represent core vocabulary and parts of speech | Does not change with selections; icon sequences represent core vocabulary and parts of speech paired with other functions |
| Number of Display Locations | Number of locations ranges from 1 to 144 per display or page | Number of locations ranges from 2 to 144 per display or page | Number of locations ranges from 2 to 144 per display or page | Number of locations ranges from 2 to 144 per display or page | Number of locations ranges from 26 to 144 per display or page | Number of locations ranges from 26 to 144 per display or page |
AAC Language Representation: Text, Alphabetic Symbols (continued)

| | Letter-by-Letter Spelling | Word Prediction | Whole Word Display | Letter Coding | Morse Code |
|---|---|---|---|---|---|
| Visual Scene | NA | NA | NA | NA | NA |
| Color Coding | Not typically; individualized color coding is possible | NA | May custom color code (parts of speech, importance, visual needs, etc.) | NA | NA |
| Navigation; Number of Pages/Displays | Limited; typically a small number (1–6 pages/displays) | None; predictions appear on the current page/display | Range; depending on organization of words, multiple pages may be required based on activities or topics | None; NA | None; NA |
| Rate Enhancement | NA | Likely; reduced keystrokes | Likely; reduced keystrokes | Yes | Yes |
AAC Language Representation: Single-Meaning Picture Symbols (first four columns) and Multiple-Meaning Icons (last two columns), continued

| | One Display | Levels with Changeable Displays | Multiple Displays | Multiple Methods | Icon Sequencing | Multiple Methods |
|---|---|---|---|---|---|---|
| Visual Scene | Visual scenes may support single messages | Visual scenes may support single or multiple messages with embedded hotspots | Visual scenes may support single or multiple messages with embedded hotspots | Visual scenes may support single or multiple messages with embedded hotspots and may be paired with text or symbols | NA | Visual scenes may support single or multiple messages with embedded hotspots and may be paired with multi-meaning icons |
| Color Coding | May custom color code (importance, visual needs, etc.) | May custom color code (parts of speech, importance, visual needs, etc.) | May custom color code (parts of speech, importance, visual needs, etc.) | May custom color code (parts of speech, importance, visual needs, etc.) | May custom color code (parts of speech, importance, etc.) | May custom color code (parts of speech, importance, visual needs, etc.) |
| Navigation; Number of Pages/Displays | None; NA | Range; depending on organization of symbols, multiple pages may be required based on activities or topics | Range; depending on organization of symbols, multiple pages may be required based on activities or topics | Range; depending on organization of symbols, multiple pages may be required based on activities or topics | Range; icon sequencing minimizes navigation, but multiple pages may be required based on activities or topics | Range; icon sequencing minimizes navigation, but multiple pages may be required based on activities or topics |
| Rate Enhancement | Possible; performance data are not available | Possible; performance data are not available | Possible; performance data are not available | Word prediction evidence indicates keystroke savings, but not rate enhancement | Yes | Yes |
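The keystroke-savings point in the rate-enhancement rows above can be made concrete with a toy word-prediction sketch (the tiny dictionary and two-letter prefix are illustrative assumptions; real AAC software uses much larger, frequency-ranked dictionaries): selecting a predicted word saves the remaining letters, yet saved keystrokes do not automatically mean faster communication, because visually scanning the prediction list also takes time.

```python
# Toy word prediction: after a short typed prefix, offer dictionary
# words matching it; choosing one counts as a single selection.
# Dictionary, prefix length, and example words are illustrative.
def keystrokes_with_prediction(word, dictionary, prefix_len=2):
    """Keystrokes needed if `word` appears in the prediction list
    after `prefix_len` typed letters; otherwise spell it out."""
    prefix = word[:prefix_len]
    candidates = [w for w in dictionary if w.startswith(prefix)]
    if word in candidates:
        return prefix_len + 1  # typed prefix + one selection
    return len(word)  # no savings: letter-by-letter spelling

dictionary = ["communication", "community", "computer"]

# "communication" costs 3 keystrokes instead of 13 letters.
print(keystrokes_with_prediction("communication", dictionary))  # 3
```

This is why the table distinguishes "reduced keystrokes" from true rate enhancement: each prediction selection trades motor effort for visual search and decision time.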
AAC Language Representation: Text, Alphabetic Symbols (continued)

The following features depend on additional needs of the individual.

| | Letter-by-Letter Spelling | Word Prediction | Whole Word Display | Letter Coding | Morse Code |
|---|---|---|---|---|---|
| Computer, Environmental Controls | Yes | Yes | Yes | Yes | No |
| Training and Support | Varies depending on manufacturer and/or distributor and on clinical professional access (all methods) | | | | |
| Peripherals | Touch guides, key guards, switches, mounting systems, external speakers, protective cases (all methods) | | | | |
AAC Language Representation: Single-Meaning Picture Symbols (first four columns) and Multiple-Meaning Icons (last two columns), continued

| | One Display | Levels with Changeable Displays | Multiple Displays | Multiple Methods | Icon Sequencing | Multiple Methods |
|---|---|---|---|---|---|---|
| Computer, Environmental Controls | Yes | Yes | Yes | Yes | Yes | Yes |
ANNEX TABLE 6-3
Augmentative and Alternative Communication Technology Function

| AAC Technology Category Descriptors | Common Message Characteristics | Common Message/Language Functions | Common Communication Contexts | Primary Communication Purposes | Communication Control | Communication Barriers/Limitations |
|---|---|---|---|---|---|---|
| Digitized device; ≤8 minutes recording; single display | | | Home; workplace; car/public transport; community | Express wants and needs; engage in social etiquette; information transfer; social closeness/relationships | Initiate communication; direct action of another; social comments; call attention to self-achievement | No long interactions or conversational dialogue; prerecorded messages; poor fit for spontaneous utterances |
| Digitized device; 9–20 minutes recording; display modified manually by changing communication overlays | | | Home; workplace; car/public transport; community | Express wants and needs; engage in social etiquette; information transfer; social closeness/relationships | Initiate communication; direct action of another; social comments; call attention to self-achievement; combine thoughts into longer utterances | No long interactions or conversational dialogue; prerecorded messages; poor fit for spontaneous utterances |
| Digitized device; 21–40 minutes recording; display modified manually by changing communication overlays | | | Home; workplace; university; car/public transport; community | Express wants and needs; engage in social etiquette; gain and share information; build and sustain relationships; information transfer | Initiate communication; direct action of another; social comments; call attention to self-achievement; use words to describe location; ask simple questions | Few long interactions, limited conversational dialogue; prerecorded messages; poor fit for spontaneous utterances |
| Digitized device; >40 minutes recording; display modified manually by changing communication overlays | | | Home; workplace; car/public transport; community | Express wants and needs; engage in social etiquette; gain and share information; build and sustain relationships; information transfer | Initiate communication; direct action of another; social comments; call attention to self-achievement; use words to describe location; ask simple questions | Limited long interactions and conversational dialogue; prerecorded messages; poor fit for spontaneous utterances |
AAC Technology Category Descriptors |
Common Message Characteristics | Common Message/Language Functions |
---|---|---|
Synthesized Device Physical contact, spelling Display changes dynamically when activated, based on programming |
|
|
Synthesized Device Multiple formulation and access Display changes dynamically when activated, based on programming |
|
|
Synthesized Device Multiple formulation and access Tablet (Android/Win/iOS) Display changes dynamically when activated, based on programming |
|
|
Software Communication interface |
|
Described based on AAC technology categories above |
Common Communication Contexts | Primary Communication Purposes | Communication Control | Communication Barriers Limitations |
---|---|---|---|
Home Workplace University Car/public transport Community |
Express wants and needs Engage in social etiquette Gain and share information Build and sustain relationships |
Initiate and maintain communication dialogue Social comments Call attention for assistance, interactions Formulate and interact with complex language |
Slow message formulation, particularly for time-sensitive interactions (e.g., telephone, business meetings) Requires literacy Requires hand-contact keyboarding Some reduced intelligibility of synthesized speech |
Home Workplace University Car/public transport Community |
Express wants and needs Engage in social etiquette Gain and share information Build and sustain relationships Information transfer |
Initiate and maintain communication dialogue Social comments Call attention for assistance, interactions Formulate and interact with complex language |
Slow message formulation, particularly for time-sensitive interactions (e.g., telephone, business meetings) Some reduced intelligibility of synthesized speech Many are large and heavy, require mount for transport |
Home Workplace University Car/public transport Community |
Express wants and needs Engage in social etiquette Gain and share information Build and sustain relationships |
Initiate and maintain communication dialogue Social comments Call attention for assistance, interactions Formulate and interact with complex language |
Slow message formulation, particularly for time-sensitive interactions (e.g., telephone, business meetings) Some reduced intelligibility of synthesized speech Requires selection of access methods from mainstream options Not considered DME |
Home Workplace University Car/public transport Community |
Express wants and needs Engage in social etiquette Gain and share information Build and sustain relationships Information transfer |
Initiate and maintain communication dialogue Social comments Call attention for assistance, interactions Formulate and interact with complex language |
Slow message formulation, particularly for time-sensitive interactions (e.g., telephone, business meetings) Some reduced intelligibility of synthesized speech Require programming of communication display and messaging Require device for voice output Varying levels of communication based on developer |