
Human Factors in Automated and Robotic Space Systems: Proceedings of a Symposium (1987)

Chapter: Change in Human-Computer Interfaces on the Space Station: Why it Needs to Happen and How to Plan for It


CHANGE IN HUMAN-COMPUTER INTERFACES ON THE SPACE STATION: WHY IT NEEDS TO HAPPEN AND HOW TO PLAN FOR IT

Philip J. Hayes

OVERVIEW

The space station is unique in the history of manned space flight in its planned longevity. Never before have we had to deal with a manned space system that was expected to perform for twenty-five years or longer. The implications of this requirement are far-reaching. This paper attempts to explore some of those implications in the area of human-computer interfaces.

The need for hooking (designing software for future extension and modification) is already well established in the space station program as a whole. The paper explores in some detail why hooking is an important requirement for human-computer interfaces on the space station. The reasons are centered around the rapid rate of expansion in the kinds and combinations of modalities (typing, graphics, pointing, speech, etc.) available for human-computer interaction and in the interaction and implementation techniques available for them. Many of these modalities and associated interaction techniques are well-developed; others are in embryonic stages. Different modalities (or combinations of modalities) are appropriate to different situations. The paper therefore also looks at the appropriateness of the modalities according to task, user, and the space station environment. An appropriate matching of interface modalities, task, and user is essential to maximizing the potential of on-board computer systems in their primary goal of supporting and amplifying human abilities.

A second rationale for providing hooking in human-computer interfaces is related to the currently developing possibilities for intelligent interfaces. So the paper discusses methods of achieving intelligent interfaces, and the circumstances in which intelligence is desirable. The issue of intelligence is also related to the distinction between conversational/agent-type systems and machine/tool-like systems; the current culture at NASA is highly oriented towards the latter. The paper explores the tradeoffs between the two approaches and discusses the circumstances in which a more conversational/agent-style system could fit space station goals and NASA culture.

After examining the need for hooking in human-computer interfaces, the paper turns to the question of how to achieve it.

The discussion here centers around methods of achieving a clean separation between the interface and the underlying application (space station system) it interfaces to. The key advantage of this kind of separation is that it allows the interfaces to be changed independently of the applications, so that a new interface (possibly employing different modalities from the old one) can be rolled in without altering the application in any way. In an environment such as the space station, where the underlying applications may be complicated, mission critical, and highly integrated with other applications, such separation becomes all the more important.

The feasibility of a completely clean separation between interface and application is unclear at the moment. The question is currently being addressed by the major subarea of human-computer interaction that deals with user interface management systems (UIMSs). Unfortunately, it is infeasible to wait for research on this topic to reach full maturity. Unless the original applications and interfaces are built with separation in mind, retrofitting separation is likely to be impossible. So the paper discusses what kind of interface/application separation is feasible for the space station initial operating capability (IOC), and looks at how this will constrain the overall possibilities for human-computer interaction.

Separation of interface from application has two other important advantages in addition to hooking. First, it promotes consistency between interfaces to different applications. Most of the work on UIMSs emphasizes a common set of tools for construction of the separated interfaces, and this inevitably leads to considerable consistency of (at least fine-grained) interface behavior between interfaces. The importance of consistency in interfaces has been appropriately emphasized by Polson in the preceding paper. Secondly, the hooking made possible through separation also makes it easier to alter interfaces during their initial development. The only effective way of developing excellent human-computer interfaces is to build interfaces, see how users perform, and then repeatedly alter them to deal with problems. This process is much more effective if the interfaces are easy to modify. The paper explores these two other aspects of interface/application separation further.

APPROPRIATE INTERFACE MODALITIES

The need for change in human-computer interfaces on the space station, and the consequent need for hooking, arises out of the rapid development that has occurred and continues to occur in interface modalities (typing, graphics, pointing, speech, etc.) and the interaction techniques used with them. This section discusses what interface modalities (or combinations of modalities) and techniques are appropriate for different kinds of interface tasks. An appropriate matching of interface modalities, task, and user is essential to maximizing the potential of on-board computer systems in their primary goal of supporting and amplifying human abilities.

Interface Requirements for the Space Station

The basic considerations in designing good human-computer interfaces for the space station are the same as for any human-computer interface on Earth. In particular, the interfaces should be:

- easy to learn
- easy to use
- efficient to use

Much has been written, e.g. (Hansen, 1971), about this and similar lists of attributes. For present purposes, we can treat them as self-evident, though of different relative importance in different interface situations. There are, however, some special characteristics of the space station environment that require further discussion before looking at the relative utility of the different available interface modalities. These special characteristics include the following.

· Weightlessness: In addition to being the most obvious special characteristic of the space station environment, zero-g causes specific problems for human-computer interfaces. The problem is that movement by humans in a weightless environment induces other movement. This is particularly true if the movement involves pressure against another object, such as in typing or pointing on a touch-sensitive screen, but it is also true for any kind of gesture, such as with a non-touch light pen. A person employing such interface modalities will tend to drift away from or change orientation with respect to the workstation he is using. The simplest solution to involuntary movement induced by human-computer interaction is simply to tether the user physically to the workstation. This, however, has the obvious disadvantage of inconvenience, especially if the interaction session will not last long. Also, the tethering would have to be relatively complex, and therefore intrusive, to solve completely the problem of changing orientation.

· Analogue/continuous interaction: Many interactions on the space station require (or could benefit from) command input which can be given rapidly and/or in an analogue/continuous manner. Obvious examples include any kind of docking or remote manipulation activity. Less obvious ones include manipulation of continuous variables in, for instance, systems controlling the life-support environment. Analogue/continuous interactions require different kinds of interaction modalities and techniques from those used in more traditional computer command languages.

· Varied groups of users: Although the most mission-critical systems will continue to be operated by highly trained personnel, the sheer number of systems likely to be available in the space station suggests that this will not be true for all systems.

Some less mission-critical or time-critical systems in, for instance, the areas of personal comfort, provisions, or inter-crew communication are likely to have to interact with users of varying degrees of sophistication and experience with respect to those systems. To avoid negative transfer effects between different systems, interfaces need to be as consistent as possible across the various systems. To deal with users who are inexperienced (for that system), interfaces also need to be as self-evident, self-explanatory, and self-documenting as possible. The goal should be for experience with some subset of the non-mission-critical systems, plus appropriate knowledge of the domain the system deals with, to serve as sufficient experience for the accomplishment of straightforward tasks with any of the other non-mission-critical systems.

· Hands-free operation: There are many situations in the space station environment in which hands-free interaction would be useful. An obvious example is extra-vehicular activity, but more frequent examples might arise when it was important to avoid the induced motion problems mentioned above (in the weightlessness bullet) or when it was useful to have an additional I/O channel in the context of a complex hands-on analogue activity such as remote manipulation. The most natural hands-free modality is speech, but other possibilities include control through eye movement or, in specialized circumstances, use of feet or other body parts.

Having looked at some of the special factors which might influence choice of interface style and modality, we now look at the appropriateness and range of applicability of the various modalities. Some of the discussion presupposes certain styles of interface for each type of modality. The presuppositions are not always necessarily valid, but are characteristic of the way the modalities have typically been used.

Character-Oriented Interfaces

The vast majority of human-computer interfaces currently in use are character-oriented. The users of these interfaces provide input by typing on a keyboard, and the systems provide output through a screen with a fixed number of character positions (typically 24 lines of 80 characters). Interfaces of this kind do not have a great deal to commend them for the space station environment. Reasons include:

· The physical pushing motion involved in typing leads to the induced motion problem mentioned above. Typing sessions of any length require some kind of tethering arrangement.

· Typed input is unsuitable for analogue/continuous interaction.

· In character-oriented interaction, the user typically issues commands through expressions in a line-oriented artificial command language. Such languages generally require significant learning effort, making them difficult to use for initial or casual users. Some command languages, such as the one for DEC's TOPS-20 operating system, have shown that it is possible, through uniformity and carefully thought out help facilities, to reduce the difficulty of use for non-expert users. However, command line interaction is inherently more limited in its perspicuity than the direct manipulation style described in the section titled "Graphically-Oriented Interaction".

· Although some of the learnability and ease of use problems with command-line interaction can be overcome through selection from menus via the keyboard, this can be seen as an attempt to overcome the limitations of the modality by use of an interaction technique borrowed from another modality, i.e. pointing input. It seems more appropriate to use the pointing modality directly.

· Character-oriented interaction is essentially an old, though very well worked out (see e.g. Martin, 1973), technology.

Graphically-Oriented Interaction

A recently developed and increasingly popular style of interaction is based on the use of a high-resolution graphical display and a pointing device such as a mouse or joystick. A well-known system exemplifying this scheme is the Macintosh personal computer (Williams, 1984). Interaction in this style is based on techniques such as menu selection, icon selection and movement, and other kinds of graphically-oriented operations. This style of interaction is also known as direct manipulation (Hutchins et al., 1986; Shneiderman, 1981), indicating that ideally the user should feel that he is directly manipulating the objects represented by the computer system. An example of this kind of direct manipulation analogy is deleting a file by using a mouse to "pick up" the icon representing the file and move it onto an icon depicting a wastepaper basket.

There are many interfaces that are graphical in nature, but fall well short of the direct manipulation ideal of providing the user with the illusion of operating directly on the "world" of the underlying application. Interfaces that rely on menus, for instance, often do not support such an illusion. Interaction will have more of the flavor of direct manipulation if the user can perform an operation by moving an icon, for instance, as in the file deletion example above, than by selecting the name of the operation from a list in a menu. To the extent that they can be maintained, the metaphors implicit in direct manipulation interfaces make the interfaces more easily learnable and reduce the need for help systems. This is important for the varied groups of users that will be using non-mission-critical systems.

The Xerox Star (Smith et al., 1982) and Macintosh (Williams, 1984) have given some idea of what is possible in this line in the office and personal filing areas. More research is needed to provide more interaction metaphors on which to build direct manipulation interfaces. The creation of such metaphors will be aided by the existence of new and innovative I/O devices (see section titled "Novel I/O Modalities").

Graphically-oriented or direct manipulation interfaces are in many ways superior to character-oriented interfaces for the space station environment, but some problems remain to be considered. In particular, some of the standard pointing devices used on Earth are not well adapted to a weightless environment. This is particularly true of the mouse, which is intended to be used on a flat surface under the influence of gravity. The light pen and the tracker ball both require pressure against a surface and so have an induced motion problem. The joystick may be better adapted from the point of view of induced motion, since the user grips it while operating it, raising the possibility that correction of the induced motion might be possible through the user's grip. There are obvious problems with this approach for fine-grained movements, but there is a great deal of experience with the use of joysticks in weightless environments from such tasks as remote manipulation. A better approach may be provided by further development of innovative pointing devices specifically aimed at use in a weightless environment. One possibility is a freely movable hand-held "mouse" which induces 2-D motion on a screen. Of course, the full six degrees of freedom of motion with such a device also open up the possibility of control of three-dimensional simulations or real actions. Devices of this kind are available, and investigations into their use and refinement should be encouraged.

Another innovative kind of pointing technology even better adapted for space is eye tracking. Eye tracking has the dual advantages of no significant induced motion and hands-free operation. It has the disadvantage of intrusive apparatus. It may be particularly appropriate for activity in a space suit, where the eye-tracking apparatus can be incorporated into the helmet with no increment in discomfort or inconvenience. Further work is needed both to develop less intrusive forms of eye tracking and on the use of eye-tracking control in extra-vehicular activity.

Earth-based direct manipulation interfaces generally operate within the context of fixed workstations. While there are many space station tasks for which this is perfectly appropriate, there are others where a more portable arrangement is required or preferable. EVA is the most common, but other examples include inventory, inspection, and communication tasks. Work on in-helmet displays is needed for EVA to complement the work on eye tracking. Other work on hand-held or otherwise portable display and pointing devices is needed for the on-board tasks requiring mobile interactive devices.
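To make the direct manipulation style concrete, the following is a minimal sketch in Python of the drag-a-file-to-the-wastebasket interaction described above. All names, the geometry, and the windowing details are hypothetical illustrations, not part of the original paper or any particular toolkit; a real interface would sit on top of an actual display and input layer.

    # Toy sketch of direct manipulation: dropping a file icon on the
    # wastebasket icon deletes the file; dropping elsewhere just moves it.
    from dataclasses import dataclass

    @dataclass
    class Icon:
        name: str          # e.g. "report.txt" or "wastebasket"
        x: int
        y: int
        w: int = 32
        h: int = 32

        def contains(self, px: int, py: int) -> bool:
            return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

    def drop(dragged: Icon, px: int, py: int, wastebasket: Icon, files: set) -> str:
        """Interpret the release of a dragged icon at point (px, py)."""
        if wastebasket.contains(px, py):
            files.discard(dragged.name)     # the "world" changes directly
            return "deleted " + dragged.name
        dragged.x, dragged.y = px, py       # otherwise simply reposition the icon
        return "moved " + dragged.name

    files = {"report.txt"}
    basket = Icon("wastebasket", 200, 200)
    doc = Icon("report.txt", 10, 10)
    print(drop(doc, 210, 210, basket, files))   # -> deleted report.txt

The point of the sketch is that the user's action (a drag and a drop) maps onto the operation itself, with no command name ever typed or selected from a list.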

Natural Language Interaction Via Keyboard

Typed natural language input and output is not a modality in its own right, but a variation on character-oriented interaction. However, it is sufficiently different from typical command language interaction that it is worth considering separately. A low-level, but nevertheless significant, artifact of the redundancy of human language is that natural language will usually require many more keystrokes than a command language designed for a specific interaction task. This means that the remarks above about the undesirability of the significant amounts of typing involved in command language interaction apply with greater strength to typed natural language interaction. Also, for rapid interaction or interaction with an expert user, the amount of typing involved typically makes natural language interfaces unacceptably slow.

Natural language interaction, however, has the important advantage over command language interaction that it allows the user to express things in a way that is natural for him, rather than having to learn an artificial (and frequently arcane) command language. It is thus more suitable for casual users and could help to meet the goal of making a wide variety of space station systems accessible to many different users of varying skill levels.

This argument in favor of natural language interaction presupposes that the interface can handle any form of expression that a user cares to come up with that is relevant to the underlying application. At the current state of the art, this is an invalid assumption. In practice, natural language interfaces fall well short of full coverage on syntactic, semantic, and pragmatic grounds, even for the restricted domain of discourse implied by a specific underlying application. This leads to the habitability problem (Watt, 1968), in which many of the advantages of naturalness and lack of learning disappear because the user has to learn what is still essentially a subset of English (or whatever natural language is being used) artificially restricted by the limitations of the natural language processing system. This problem can sometimes even make the language more difficult to learn than a simple command language because the limitations are less easy for the user to identify and remember. On the other hand, these problems can be minimized by appropriate human engineering for interfaces to appropriately limited applications. However, this is very time-consuming and expensive at the time the interface is developed, since it involves detailed observations of many users interacting with the system and repeated extensions of the natural language coverage until all the commonly occurring syntax, semantics, and pragmatics are handled.

Perhaps the most important reason for not using natural language interaction is that most interaction can be handled more easily by direct manipulation or other graphically-oriented means. Moreover, as the section titled "Graphically-Oriented Interaction" points out, graphical interaction is likely to be more suitable for the space station environment than character-oriented interaction in general.

Whenever the user is trying to select between a limited number of alternatives, or is trying to manipulate objects or access information that can be presented to him in an intuitive spatially-distributed manner, natural language interaction (or any other form of keyboard interaction) is likely to prove inferior to graphical interaction.

There are, however, some circumstances in which natural language or command language interaction is preferable to graphical interaction, including:

· When there is a large range of options to choose between, especially when the options can be composed in a combinatorially explosive kind of way;

· When there is no convenient way to distribute the information in a two-dimensional space;

· When a suitable spatial distribution exists, but the resulting space of information is so large that only a small fraction of it can be presented to the user at any one time;

· When the user is looking for information that is distributed across several spatially-distinct items, so that retrieval of the information by direct manipulation would require iterative examination of each of the relevant interface components.

These conditions are not true for most interactive situations, but come up frequently enough for natural language to be considered as a secondary mode of interaction for many applications to supplement a largely direct manipulation interface. To be effective in this role, the natural language interaction has to be suitably integrated with the direct manipulation interaction. Some work has been done in this area on how to use visual context to help interpret pronouns and other anaphoric and deictic references by the user, and also to allow intermixing of pointing and natural language input (Bolt, 1980; Hayes, 1987a). However, integrated natural language and graphical interfaces could provide significant further benefits given an appropriate research effort.

Speech Interaction

Although a combination of typed natural language and graphical interaction offers some attractive advantages, natural language interaction through speech offers many more. While the habitability problems mentioned in the section titled "Natural Language Interaction Via Keyboard" remain, spoken input is much more rapid and natural than typing the same words. Moreover, the voice and ears offer channels of communication quite separate from the hands and eyes. Speech input leaves the hands free, and speech output leaves the eyes free for other tasks (either computer interaction or interaction with the physical world).

In terms of suitability for speech interaction, the space station environment has one specific advantage and one specific disadvantage. The advantage is the absence of any need for speaker-independent speech recognition.

At the present state of the art in speech processing, considerably better results can be obtained if the speech recognition system has been trained in advance on the specific characteristics of a speaker's voice (through recordings of the speaker saying a predetermined set of words several times). Given the relatively small number of people that will be on board the space station at any given time, their relatively long training period, and their relatively long stay, such system training is unlikely to be a problem. The specific disadvantage of the space station environment is the relatively high level of ambient noise that can be expected inside it, at least if the experience of the Shuttle is a guide. Ambient noise is problematic for speech recognition. At the current state of the art, resolving this problem would probably require the use of a close-speaking microphone of some kind. This itself has the disadvantage of being intrusive and inconvenient to take off and put back on.

The current state of the art in speech processing is still fairly limited. In addition to the speaker-dependence and ambient noise limitations mentioned above, the better commercially available systems tend to be able to handle only small vocabularies (less than a thousand words is typical) and to require pauses between each word or group of words that the system recognizes as a lexical unit (so-called connected speech recognition, as opposed to continuous speech recognition, in which no pauses are needed). However, this is a field where rapid advances are occurring, and new commercial developments plus a very active academic research program are pushing back all of these limitations. In fact, speaker-independent, large (10,000 word plus) vocabulary, continuous speech recognition in noisy environments is likely to be available within the lifetime of the space station, and systems in which a subset of these restrictions have been relaxed are likely in the early part of the space station's lifetime. Given these prospects for advancement and the inherent advantages of speech interaction, it seems natural for NASA both to plan on a significant role for voice in space station human-computer interfaces and to keep track of, or actively support, research on speech processing.

Nevertheless, even if the underlying speech technology advances as projected above, other problems remain that will require solution before speech can make its full contribution to human-computer interaction on the space station. First, speech interaction on its own is quite unsuitable for some kinds of interaction, particularly analogue/continuous commands: it would be very difficult to control a remote manipulation device through a series of "left a bit", "down a bit" kinds of commands. Moreover, even in situations where speech could be used, such as the specification of discrete commands in an inventory tracking system, it may not always be the preferred mode of interaction. For instance, if the arguments to a particular command all have relatively complex verbal descriptions, but there are only four of them, it is probably simpler, more economical, and more reliable to let the user input the arguments by pointing at a menu or set of icons representing them. Both of these situations indicate the need for techniques for integrating speech interaction with other modalities, including pointing and 3-D manipulation.

Speech can then be seen as a complementary channel for issuing discrete commands during continuous/analogue manipulations while both hands are occupied, such as releasing catches during a remote manipulation task. It can also be seen as a supplementary channel for issuing whatever commands or portions of commands are convenient during a discrete command interaction, and as a stand-alone interaction medium for discrete commands whenever hands-free operation is necessary or convenient.

Many of the same research issues arise in integrating speech with other modalities as were described in the section titled "Natural Language Interaction Via Keyboard" for the integration of typed natural language and graphical interaction. These issues include resolution of deictic phrases ("this one", "that") and other pronouns, use of the user's visual context in interpreting what he says, and methods of combining input from pointing and speech to form a single command. Although interesting explorations have already been undertaken in this area (Bolt, 1980; Hayes, 1986), these issues all require further research.

In addition to problems of integration with other input modalities, speech interaction raises some interesting problems of its own related to managing the dialogue between human and computer. The first problem concerns when the computer should listen, i.e. when it should try to interpret the speech that its users are producing. The users will speak to other people (or sometimes to themselves) as well as to the machine, and attempts by the machine to interpret speech not directed at it are only likely to cause trouble. Techniques that have been explored here include physical switches (typically foot switches on Earth) or switches based on key phrases (such as "listen to me" and "stop listening") that have to be uttered to start and stop the machine trying to interpret speech. These devices are clumsy and detract from the feeling of naturalness that spoken interaction should provide, but will probably be necessary until speech systems become sophisticated enough to make positive determinations that spoken input is not being directed at them. The prospect of such an ability is well beyond the horizon of current research.

Another dialogue issue with special implications for speech is that of ensuring reliable communication. An interactive speech interface must ensure that it understands the user accurately; that the user is confident of this; that the user becomes aware when the system has failed to understand correctly; and that the user is able to correct such errors when they occur. Humans have developed sophisticated conventions (Sacks et al., 1974; Schegloff et al., 1977) for ensuring that communication is indeed robust in this way. Unfortunately, many of these conventions rely on a level of understanding and intelligence that is unrealistic for machines. However, to have smooth conversations, ways must be found to perform the above functions that are both suitable for the limited intelligence of current machines and fit reasonably well with human conventions. A limited amount of work has been done in this area, e.g., (Hayes and Reddy, 1983), but much more is needed.

Finally, there is the same problem of habitability that arises for typed natural language interfaces.

For speech, however, the problem can be even worse, since the user is less well able to be deliberate and precise in his choice of words and phrasings while speaking than while typing. Moreover, when speech is used as a stand-alone human-computer interaction modality, there is no possibility of reminding the user through a display about the limitations of the domain of discourse or the phrasings that can be used. Work is needed here to find better ways of developing a reasonably habitable subset of a natural language for a restricted domain, to develop ways for the system to encourage the user to stay within the bounds of the restricted language through appropriate output of its own, to devise methods for partial understanding when a user strays outside the bounds of the restricted language, and to develop interaction methods for steering the user back on track when he does stray, as he inevitably will.

Novel I/O Modalities

The interaction modalities discussed so far are conventional in the sense that they have already been widely used (this is least true of speech) in Earth-based interfaces and other space systems. However, the numerous challenges posed for human-computer interaction by the space station and the recent emergence of some novel and innovative interaction modalities suggest that it is worthwhile also to consider some of these less-developed modalities for use in the space station.

An innovative input modality of potentially considerable utility on the space station is the use of gesture. The conventional use of a mouse or other pointing device in conjunction with a display screen is a limited form of gesture, but it is possible to sense and interpret a much broader range of human gesture by machine. Large-scale gestures involving whole limbs are not practical for the space station because of the constraints of a weightless environment, but smaller-scale gestures are quite suitable. The least problematic form of gesture from the point of view of the induced motion problem is eye motion. As already discussed in the section titled "Graphically-Oriented Interaction", eye tracking can be used as a substitute for pointing via a mouse or other conventional pointing device. It is particularly well suited for use with in-helmet displays.

A more radical departure from conventional technology is the interpretation of hand and finger gestures. Technology is emerging that will allow a machine to recognize a full range of small manual gestures made in a restricted spatial context. There is a large range of gestures that have associated conventional meanings (such as yes, no, get rid of it, move it from place to place, etc.). This suggests that interfaces that accepted such gestures as input could be very easy and intuitive to learn and natural to use. It might even be possible to resolve any motion problems induced by gesturing through the use of balanced symmetrical gestures which employ two equal and opposite motions.

We have discussed two ways in which gesture can be used in innovative ways for computer input. There may well be others.

In general, there is a need for imaginative exploration of the whole range of ways in which human movement compatible with a weightless, noisy environment can most readily be sensed by machine.

Another potentially promising area for innovation in interaction techniques involves output by means other than fixed screens and simple audio feedback. In-helmet displays hold significant promise in this direction. Although such displays are most natural in circumstances in which the user has to wear a helmet anyway, such as EVA, they can also improve human-computer interaction in other circumstances. Current investigations, including some at NASA-Ames, have shown the utility of in-helmet displays for presenting a complex 3-D world view to the user. This work involves the use of direct-eye projection, rather than an actual display screen inside the helmet. It provides the illusion of a 3-D world by sensing the direction in which the user's head is pointing and adjusting the projection accordingly. This is a good example of the kind of innovative work in novel interaction modalities that needs to be undertaken to exploit fully the potential for human-computer interaction on the space station.

Other kinds of novel output modalities on which further research could bring useful results include force or tactile feedback on joystick-type direct manipulation or analogue tasks, and acceptably unobtrusive speech output. Force and tactile feedback has been used regularly in flying and remote manipulation tasks, but has been little explored for use in human-computer interaction for more abstract tasks, such as manipulating a set of computer files. Force or tactile feedback through a joystick on such problems could enhance the directness of the "feel" of direct manipulation interfaces and also be useful as an indicator of urgency, importance, or difficulty. Speech output has also been used before, but a recurring difficulty is getting the speech output to fit naturally into the flow of an interaction. Speech output is by its nature transitory and must be given at just the right point in the interaction and be repeatable by the user if desired. Moreover, the speech output should not occur so frequently that it becomes distracting to the user. Just as in the case of input modalities, much work is needed in the form of imaginative explorations over a large range of untried and speculative output modalities.

Finally in this section, we turn to the idea of expert interfaces, i.e. interfaces that require considerable expertise and training to operate, but offer high rates of very efficient interaction in return. The high degree of training that will be undergone by many space station personnel provides a good opportunity for the use of innovative expert interfaces, involving coordinated use of multiple limbs, eyes, etc. in multiple modalities for high-efficiency interaction. Flying is the best explored example of such an activity, and many of the techniques developed with flying have been successfully transferred to docking and other such maneuvers in space. Another source of ideas for expert interfaces can come from musical performance (Buxton, 1986). Players of such instruments as the organ learn, after a long period of training, to use all four limbs in a coordinated fashion to produce an enormously high rate of command input to the instrument.

For interaction tasks that are important enough to justify the large training periods involved and that could benefit from a high data transfer rate, interfaces which draw on the experience of flying and of musical performance are well worth investigating.

INTELLIGENT INTERFACES

The need to plan for change in interfaces comes not only from the possibility of advances in interface modalities and the techniques used with them, but also from the increasing possibility of the development of intelligent interfaces. Intelligent interfaces are still a research area, rather than a set of proven interface techniques, but the potential benefits of truly intelligent interfaces in terms of ease of use make them an area worthy of investigation for future space station interfaces. Intelligent interfaces also fit very well with the increasing development of intelligent, autonomous application systems for space use. If an application exhibits intelligent task behavior, then it should also behave intelligently in its interaction with its user.

An initial fundamental distinction to be made in considering the potential of intelligent interfaces is the distinction between conversational or agent-like systems and tool- or machine-like systems. Almost all current interfaces are of the tool/machine-like kind. Users of such systems accomplish a task by controlling a (hopefully) responsive, but essentially unintelligent, system. Direct manipulation interfaces (see section titled "Graphically-Oriented Interaction") are the archetype of this kind of interface, since they encourage the user to feel that he is directly controlling the world that the underlying system deals with. However, command language interfaces can also be thought of as tool/machine-like, since they respond in predictable ways to a fixed set of commands. The user is left feeling firmly in control.

Conversational/agent interfaces, on the other hand, are intended to give the user an entirely different feeling. Users of conversational/agent systems are intended to feel that they are negotiating with a subservient, but intelligent, system. They accomplish their tasks through negotiation with, and through the agency of, the system, rather than through direct actions of their own. Conversational systems thus have much greater possibilities for intelligent interaction than machine-like systems. Conversational systems do not fit well with the direct manipulation or command language styles of interface, but fit much better with natural language or speech interfaces, which naturally lend themselves to a dialogue style. Interfaces to intelligent, autonomous application systems can also make good use of a conversational style of interaction. The user of a conversational equipment reservation system might, for instance, request (in natural language) the reservation of a certain piece of equipment and then be engaged by the system in a dialogue concerning the period of the reservation and, if the equipment was unavailable, the possibility of substitute equipment or substitute times.

The user of a tool/machine-like interface to the same underlying functionality would, on the other hand, expect to be forced to specify the reservation times through constraints on the interaction enforced by the interface. If equipment was unavailable at the specified time, he would also expect to have to initiate a search himself through alternative times and substitute equipment.

It is clear that the culture within NASA is very much oriented to tool/machine-like interfaces, and moreover to interfaces in which the degree of control exercised by the user is very high. There are historical reasons for this, related to the importance placed from early on in the space program (Loftus, 1986) on having as much human control as possible available, so that there would be the maximum chance of fixing any problems that arose. As systems increase in complexity, the tool/machine-like interfaces have tended to reduce the amount of complexity (and therefore fine control) available to the user without, however, crossing over the line that separates tools from agents. At the current state of the art, this approach is entirely as it should be. There are no successful operational interfaces anywhere that could fairly be described as true conversational/agent systems. However, the promise of intelligent conversational systems remains. If this promise is successfully realized, then it offers an attractive way of achieving the goal of having a large variety of non-mission-critical space station systems easily available to a broad class of users.

The key to the development of conversational/agent interfaces lies in the development of detailed models of the task and the user. To produce intelligent agent behavior, it is necessary to use artificial intelligence techniques to model what tasks the user can accomplish through the interface, how he can achieve his goals, and what his current goals and state of knowledge are. Previous work that has tried to do this includes (Huff and Lesser, 1982; Mark, 1981; Card et al., 1983). This detailed level of modelling is necessary for intelligent agent-like behavior because, without it, the interface can only respond to the user's individual actions and the very local context. Using our equipment reservation example, knowledge of what purpose the user might be trying to achieve through use of a particular piece of equipment could allow the system to suggest a suitable alternative. Without that knowledge, the system can only respond on the availability of a particular piece of equipment.

This kind of modelling becomes much harder when the user is pursuing a goal that involves several system actions. An agent system then has to determine the nature of the higher-level goal from observation of the individual actions. An electronic mail system, for instance, might observe that the user is trying to write a message out to a file and then use the contents of the file as the body of a message to another system user. If it recognized from this that the user was simply trying to forward the message to the other user, it could suggest an abbreviated method of doing so. Since individual system actions can often fit into many plans, and since system users often interleave plans to achieve several goals, the detection of such larger-scale goals out of lower-level actions is a very hard task.

A system that has such an ability can, however, assist the user in a variety of ways, including suggesting simpler ways of doing things (as in the example above), warning about pitfalls that it can foresee could lead to the user's current plan not achieving his overall goal, offering to take over and complete the plan it believes the user to be following, or offering to perform the next action or actions in the plan whenever it becomes clear what they are.

The kinds of task and user modelling abilities mentioned above could be used in conjunction with any kind of interface, not just one that uses natural language. However, agent-like interfaces fit particularly well with natural language for two reasons. First, natural language is a natural medium for the kinds of negotiation that arise when a system is trying to respond to the goals it believes its user to have, rather than to direct commands. Second, the goal and task models themselves can be very useful in natural language and speech understanding. The biggest single problem in natural language processing is handling ambiguity of various kinds (syntactic, semantic, referential, etc.), and if one reading of an ambiguous input makes sense in the context of the user model and the other does not, then the one that does not fit can be eliminated.

The whole area of conversational modelling is still in its infancy. Much work remains to be done to produce usable systems. However, progress in this field is necessary for truly intelligent interfaces, whether or not they are based on natural language. Given the potential benefits of intelligent interfaces to the space station, it is an area of research well worth pursuing.

The same kinds of techniques that go into pure conversational systems can also be used in conjunction with more conventional interaction techniques to produce a hybrid kind of interface that incorporates both conversational/agent and tool/machine-like components. The basic flavor of such an interface is essentially tool/machine-like. The conversational component serves as a medium through which the system and user can exchange comments about what is going on in the central tool/machine-like component. The user can also use the conversational component to instruct the system indirectly to perform actions or present information that he could perform or request directly (though perhaps more tediously) through the tool/machine-like component.

A system of this kind has several advantages. First, pure conversational systems are unsuitable for any task that can be performed effectively through direct manipulation techniques, and particularly for tasks that involve continuous/analogue interaction. Adding a conversational/agent component to a tool/machine-like direct manipulation interface for performing such tasks allows the basic task to be performed in the most efficient manner, but also allows components of the task that could benefit from a conversational approach to do so. Examples of conversational interaction in such a situation include: the user requesting information that would require multiple actions to retrieve through the direct manipulation interface; the user asking questions about how to use the direct manipulation interface component; the system volunteering information about more efficient ways to use the direct manipulation component; and the user requesting the system to achieve a higher-level goal that would require extensive interaction with the direct manipulation component.
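To make the plan recognition idea discussed above concrete, here is a toy sketch in Python of an agent observing low-level actions and volunteering a shortcut, using the mail-forwarding example. The plan library, action names, and matching scheme are hypothetical and far simpler than anything a real system would need; in particular, real systems must cope with interleaved plans and with actions that fit many plans at once.

    from typing import List, Optional

    # Hypothetical plan library: higher-level goal -> observable action sequence.
    PLAN_LIBRARY = {
        "forward_message": ["save_message_to_file", "send_file_as_message"],
    }

    def recognize_goal(observed: List[str]) -> Optional[str]:
        """Return a goal whose full action sequence the user has just completed."""
        for goal, plan in PLAN_LIBRARY.items():
            if observed[-len(plan):] == plan:
                return goal
        return None

    history = []
    for action in ["save_message_to_file", "send_file_as_message"]:
        history.append(action)
        if recognize_goal(history) == "forward_message":
            # The agent volunteers a simpler way of achieving the inferred goal.
            print("You appear to be forwarding a message; "
                  "the FORWARD command does this in one step.")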

A second advantage of this kind of hybrid system is that the conversational component does not have to be used at all if the user does not so desire. This kind of arrangement may be the best way to introduce conversational systems into a culture like NASA's that has good reason to be cautious about such systems. The unproven nature of conversational/agent systems suggests that they be introduced in a way that gives their users alternative methods of accomplishing all their tasks. This kind of hybrid agent/machine-like interface requires the same technological underpinnings as pure conversational systems and hence the same research program. However, it also requires additional work on how to integrate the two components in a smooth way. Some work (Negroponte, 1981; Bolt, 1980; Hayes, 1987b) has already been done in this area, but much more is needed.

PLANNING FOR CHANGE IN INTERFACES

The previous two sections have discussed some of the potential developments in interface modalities and techniques that will generate the need for change in human-computer interfaces during the life of the space station. In this section, we turn to the issue of how to deal with such change.

User Interface Management Systems

The essence of the approach discussed here is based on hooking, i.e. designing software for future extension and modification. The kind of hooking envisaged is determined by the assumption that it is unnecessary, and probably infeasible, to rewrite the underlying application systems whenever interfaces change. This means that the application systems need to be hooked in such a way that new interface systems can be developed for them without changes to the applications themselves. This in turn means that applications and interfaces must be written in as separate a way as possible, with communication between them as narrow and as tightly defined as possible.

There is already a substantial body of work in the human-computer interaction literature on this kind of separation between application and interface, e.g. (Tanner and Buxton, 1983; Hayes and Szekely, 1983; Hayes et al., 1985; Wasserman and Shewmake, 1984; Jacob, 1984; Yunten and Hartson, 1984). The systems developed to achieve this kind of separation are known as user interface management systems (UIMSs). However, work to date is far from achieving a consensus on the best way to achieve the desired separation, or indeed on the degree of separation that is desirable, appropriate, or possible. This is unfortunate from the point of view of building the software for the space station IOC, since to achieve any useful degree of separation both interface and application must be built using a strict model of the kinds of communication that can occur between application and interface.

Decisions made now on this kind of communication will affect the possibilities for interface/application separation for the life of the space station. Since research work in this area is far from reaching a conclusion about what is the best model of communication, whatever model is adopted now is likely to be considerably less than optimal. However, adopting some model may be better than none at all, so the remainder of this section reviews current research and future directions in the area of UIMS work.

The basic model adopted by most work on user interface management systems is shown in Figure 1. The user communicates with the UIMS, which in turn communicates with the application. Communication between the UIMS and the application is achieved through a carefully defined protocol which limits the kind of interaction that can occur. A typical repertoire of communication events might include:

· request from the UIMS to the application to perform a particular operation with a certain set of parameters

· notification by the application of completion of an operation

· update by the application of a variable indicating progress towards completion of an operation

· error message from the application

· request from the UIMS for a check on the semantic validity of a proposed parameter for an application operation

· reply from the application to such a request

[Figure 1 here: the user communicates with the User Interface Management System, which consults an Interface Specification Database and an Application Specification Database and in turn communicates with the Application.]

FIGURE 1 Model of communication in a UIMS.
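This repertoire can be made concrete as a small set of typed messages. The following sketch is one plausible encoding, not a standard; all type and field names are assumptions:

# A plausible encoding of the communication events listed above as typed
# messages flowing between UIMS and application. Purely illustrative.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class OperationRequest:        # UIMS -> application
    operation: str
    parameters: dict = field(default_factory=dict)

@dataclass
class OperationComplete:       # application -> UIMS
    operation: str
    result: Any = None

@dataclass
class ProgressUpdate:          # application -> UIMS
    operation: str
    fraction_done: float

@dataclass
class ErrorMessage:            # application -> UIMS
    text: str

@dataclass
class ValidityCheck:           # UIMS -> application: is this parameter acceptable?
    operation: str
    parameter: str
    value: Any

@dataclass
class ValidityReply:           # application -> UIMS
    valid: bool
    reason: str = ""

# Example exchange: the UIMS asks whether a proposed parameter makes sense
# before ever issuing the operation itself.
check = ValidityCheck(operation="point_antenna", parameter="azimuth", value=361.0)
reply = ValidityReply(valid=False, reason="azimuth must lie in [0, 360)")
print(check, reply, sep="\n")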

The precise content of the messages that flow between UIMS and application is defined by a declarative data base, the Application Specification Data Base of Figure 1, which specifies what actions and operations the application is capable of.

This model is not the one adopted by the most usual approach to interface standardization, that of providing a set of standard subroutines for high-level interface actions, such as getting the user to choose a value from a fixed set by presenting him with a pop-up menu. A typical interface subroutine for this task would take a set of choices as a parameter and return one of the choices. The subroutine would take care of the details of presenting the user with the menu and interpreting his mouse movements in making a choice from it. A disciplined use of a comprehensive package of such subroutines can thus provide a significant degree of low-level consistency across applications which use it. However, it cannot provide some of the other advantages of the kind of separation between interface and application described above, as we shall see.

The kind of separation between application and interface shown in Figure 1 can allow the interface to change without any alteration to the underlying application, whether or not the interface is provided by a UIMS. A UIMS goes further by defining the behavior of the interface itself through another declarative data base (possibly integrated with the application specification data base). This interface specification data base governs the details of the way the user is able to issue commands to the application. It would govern, for instance, whether commands are selected from menus, from an array of icons, through a command language line, etc., or whether a particular parameter to a specific command would be selected from a menu, from a row of "radio buttons", or typed into a field on a form, etc. The UIMS provides a basic set of facilities to perform these various kinds of interaction, and the interface developer chooses the desired kind of interaction out of this cookbook by an appropriate interface specification.
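The contrast between the two approaches can be seen in miniature in the following sketch (the specification formalism and all names are invented): the subroutine package is called from application code, while a UIMS interprets a declarative description and does the calling itself.

# Left: the subroutine-package style -- the application calls a standard
# menu routine, so presentation decisions live in application code.
# Right (below): the UIMS style -- a declarative specification says HOW each
# parameter is presented, and a UIMS interprets it. Both are invented.

def menu_choice(prompt, choices):
    # Standard subroutine: handles presentation and returns one choice.
    print(prompt)
    for i, c in enumerate(choices, 1):
        print(f"  {i}. {c}")
    return choices[0]   # stand-in for real user input

# Subroutine style: the application itself decides a menu is used here.
mode = menu_choice("Select mode:", ["manual", "automatic"])

# UIMS style: the same decision lives in a declarative interface
# specification that can change without touching the application at all.
interface_spec = {
    "command": "set_mode",
    "parameters": {
        "mode": {"presentation": "menu",          # could become "radio_buttons"
                 "choices": ["manual", "automatic"]},
        "label": {"presentation": "typed_field"},
    },
}

def uims_collect(spec):
    # A toy UIMS interpreter: walks the spec and gathers parameter values.
    values = {}
    for name, p in spec["parameters"].items():
        if p["presentation"] == "menu":
            values[name] = menu_choice(f"{name}:", p["choices"])
        else:
            values[name] = f"<typed {name}>"
    return values

print(uims_collect(interface_spec))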

This arrangement has several advantages:

· Consistency: Since interfaces for different applications use the same basic set of UIMS-provided facilities, the interfaces will be consistent at the level of interaction details (how menus work, how icons are selected, etc.). Careful design of the UIMS interface specification formalism can also lead to consistency at a higher level. Consistency of this kind is very important in the space station, particularly for those less mission-critical interfaces where not all users may be fully expert. The transfer effects made possible through consistent interface behavior will greatly facilitate interaction with unfamiliar interfaces. Moreover, consistency avoids the negative transfer effects that can impair operation of even familiar interfaces.

· Ease of interface development: Specifying an interface through the interface specification formalism of a UIMS should be significantly less effort than programming one from scratch. The UIMS formalism should provide high-level abstractions that allow the interface developer to specify the interface in terms that relate to the functionality of the interface as perceived by the user, rather than having to program it in a conventional manner at a level of detail more closely related to the implementation of the interface. This remains true even if the conventional implementation uses a high-level subroutine package of interface operations; using a subroutine package still places the emphasis on implementation, rather than abstract interface operations.

· Faster convergence on good interfaces: Despite all the advances in human-computer interaction that have occurred and continue to occur, the only known way to produce an excellent interface that fully meets the needs of its users is to build (or realistically simulate) the interface, let users interact with it, and modify it to resolve the problems that are observed. It is generally necessary to go around this loop many times before the interface performs satisfactorily, so anything that makes the loop easier and cheaper to follow is likely to improve the quality of the resulting interface by allowing more iterations. The UIMS model can speed up the modification part of the loop, since interface modification can be done through modification of the declarative interface specification, rather than reprogramming in a conventional sense. This leads to a speed-up in the loop as a whole.

· Ease of involvement of human factors experts: Since the UIMS model does not require programming to specify interface behavior, the interface specification can be done directly by people who are specialists in human-computer interaction, rather than by programmers. This allows better division of labor during interface/application development. Also, since programmers often think in terms of implementation ease and efficiency, rather than thinking about the interface from the user's point of view, better initial interfaces are likely to result if they are produced mainly by human factors specialists.

Of this set of advantages, only the first, consistency, and that at a relatively low level, is shared by the alternative approach of using a set of standardized interface subroutines. The other advantages all rely on a level of separation between interface and application that the subroutine approach does not provide.

Given this significant set of advantages for the UIMS approach, the natural question is why all interfaces are not produced through UIMSs. The answer is that current UIMS systems approach the ideal described above only imperfectly. There are several specific problems.

The primary problem is that the constraints imposed by the need for an interface specification make it hard to provide ways of specifying interfaces that are carefully tailored to the needs of an individual application. Solutions to this problem (Szekely, 1987) have tended to introduce a procedural component into the interface specification formalism. The ability to program interaction allows the interface builder to tailor interface behavior to individual interface needs. The problem with this solution is that it tends to negate the benefits of the UIMS approach, such as consistency and ease of interface modification, that depend on the interface being specified declaratively. The way around this difficulty may be to include a procedural component in the interface specification formalism, but organize it at as high a level of abstraction as possible from the interface point of view. The procedural component could then be seen as a highly specialized programming language for interface specification. Such a language could conceivably maintain consistency by encouraging through its available constructs a particular style of interaction. Ease of use for rapid interface development and use by human-computer interaction specialists would be promoted by the high level of the abstractions involved. A great deal more research would be needed to bring this idea to fruition, but the potential payoff could be great.

A second problem with current UIMS work is that the model of communication between application and interface is too limited. Many UIMS models allow only a subset of the list of message types listed above as flowing over the UIMS/application link. And even that list is insufficient for a sizable portion of applications, especially those involving graphical or analogue manipulation, which need a much closer coupling with their interfaces than that list of communication events allows. Again, the solutions that have been explored (Szekely, 1987; Myers and Buxton, 1986) tend to change the model in the direction of tailoring the UIMS/application link to the needs of particular applications through use of a specialized programming language, a move away from the cleanest form of the UIMS model. A compromise here may be to develop several general UIMS/application communication protocols for large classes of applications with similar needs, while still leaving open the possibility of specialized communication protocols for particular applications.
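One way to picture this compromise, with invented names and no claim about any real UIMS design, is a small base protocol shared by all applications, extended for a class of applications that needs tighter coupling:

# Sketch of the compromise: one general base protocol for all applications,
# with a richer specialized protocol for a class (here, graphical ones) that
# needs closer coupling with its interface. Names are illustrative only.

class BaseProtocol:
    """Messages every application must understand."""
    def request(self, operation, **params):
        raise NotImplementedError

    def error(self, text):
        print(f"[error] {text}")


class GraphicalProtocol(BaseProtocol):
    """Extension for applications needing continuous/analogue interaction."""
    def drag_update(self, object_id, x, y):
        # High-rate positional events that the base protocol never carries.
        print(f"[drag] {object_id} -> ({x}, {y})")


class PlotApp(GraphicalProtocol):
    def request(self, operation, **params):
        print(f"[plot] {operation} {params}")


app = PlotApp()
app.request("zoom", factor=2)      # ordinary, narrow traffic
app.drag_update("cursor", 10, 42)  # class-specific, tightly coupled traffic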

A final problem with current UIMS work concerns the potential discussed earlier for interfaces employing multiple interaction modalities in effective coordination. The coordination of the different modalities increases the challenge for the UIMS model, and the use of a UIMS approach with multiple modalities has not been explored.

Work is needed to overcome all these problems if the UIMS approach is to be practical for the space station. Unfortunately, if the UIMS approach is to be used at all, a UIMS/application communication model must be adopted before the underlying applications are developed. Since meeting the needs of complex applications through a UIMS model is still a research problem with no clear solution, the only practical way a UIMS approach can be adopted for the space station IOC is to choose that (probably quite large) subset of simpler space station applications that can be adequately serviced by currently well-developed UIMS/application communication protocols.
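Selecting that subset could amount to a simple capability check along the following lines (entirely hypothetical message-type names):

# Toy capability check: an application declares the message types it needs,
# and is accepted for the UIMS route only if a current protocol covers them.
# Entirely hypothetical; real selection would involve much more judgment.

SUPPORTED = {"request", "complete", "progress", "error", "validity_check"}

def fits_current_uims(app_name, needed):
    missing = set(needed) - SUPPORTED
    if missing:
        print(f"{app_name}: needs {sorted(missing)} -- fall back to subroutines")
        return False
    print(f"{app_name}: serviceable by a current UIMS protocol")
    return True

fits_current_uims("inventory_log", {"request", "complete", "error"})
fits_current_uims("manipulator_control", {"request", "drag_stream"})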

Research in extending the limits of applicability of these protocols could nevertheless be useful for new systems developed after IOC. If these practical difficulties of adopting a UIMS approach appear too formidable for IOC, the fall-back position would be disciplined use of a comprehensive package of interface subroutines. This fall-back approach would provide the major advantage of a significant level of consistency across applications.

Interface Development Environments for Rapid Prototyping

A topic highly related to the UIMS approach to interfaces is that of interface development environments. Since the only known way to generate excellent interfaces is through an iterative process of creation, testing with users, and modification, a rapid prototyping facility for interfaces can materially improve the quality of interfaces produced by making it easier and faster to go around this loop. The rapid prototyping facilities most useful from this point of view allow interfaces to be seen and interacted with as they are developed, rather than forcing the interface developer to create the interface through working in a programming language or other formalism distinct from the interface itself. Examples of this approach include (Gould and Finzer, 1984; Myers and Buxton, 1986). They can be thought of as interface editors analogous to what-you-see-is-what-you-get (wysiwyg) text editors. Such interface editors are a relatively new arrival on the human-computer interaction scene; their utility means they deserve a great deal more research attention.

Although rapid prototyping facilities can exist independently of the UIMS approach to interface design, they fit well with it. The cleanness of the basic separation between application and interface in the UIMS model makes an interface development environment particularly useful in conjunction with a UIMS approach. A UIMS interface can be developed before the real application is available (or without incurring the expense of running the real application) by creating a dummy application that operates according to the same UIMS/application protocol as the real application. Coupled with a rapid prototyping facility, this capability allows rapid development of interface mock-ups to provide cheap and fast initial "sanity checks" on interfaces as they are developed.
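Because the UIMS reaches the application only through the agreed protocol, a stub that answers the same messages can stand in for the real application. A sketch, again with invented names:

# Sketch: a dummy application that obeys the same UIMS/application protocol
# as the real one, so interface mock-ups can be exercised before the real
# application exists. Hypothetical protocol and names.

class DummyExperimentApp:
    """Answers the protocol with canned responses instead of real computation."""

    def perform(self, operation, **params):
        print(f"[dummy] pretending to run {operation}({params})")
        return {"status": "complete", "result": 0}

    def check_parameter(self, operation, name, value):
        # Always accept -- good enough for an early interface "sanity check".
        return True


def drive_interface(app):
    # The UIMS-side code is identical whether app is real or a dummy.
    if app.check_parameter("start_run", "duration", 30):
        print(app.perform("start_run", duration=30))


drive_interface(DummyExperimentApp())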

Another intriguing possibility with wysiwyg interface development environments is their use (probably in restricted mode) by end users to reconfigure interfaces to their personal needs or preferences. So long as the interface modification facilities are made as easy to operate as the interfaces themselves, and so long as they do not interfere with the normal operation of the interfaces, this kind of facility could serve to improve significantly the level of personal satisfaction that space station users find with their interfaces.

Work in the area of wysiwyg interface development facilities has been almost entirely concentrated on graphic direct manipulation interfaces. This is natural in that it is the visual aspect of the interfaces that is most natural to specify in this manner. However, additional work is needed both to develop techniques for this kind of interface further, and to extend the natural interface specification techniques to multi-modal interfaces as well.

CONCLUSIONS

This paper has focussed on change in space station interfaces - the reasons that it must be expected and ways to plan for it. We have identified several topic areas associated with these two aspects of change in space station interfaces in which further research effort would be beneficial. We conclude by listing several broad areas in which we particularly recommend the support of further work:

· investigation of speech recognition techniques and natural language processing techniques for use with spoken input, plus the integration of both of these modalities with direct manipulation interfaces;

· exploration of innovative I/O devices suitable for the space station environment;

· work on the user and task modelling needed to support conversational interfaces and the integration of such interfaces with machine-like direct manipulation interfaces;

· continued development of the UIMS concept, coupled with highly interactive interface development environments for all interface modalities.

NOTES

1. The complementary concept of scarring (designing hardware for future extension and modification) is also well established, but is not addressed in this paper.

2. Though see Mark (1981), Carbonell et al. (1983), and Douglass and Hegner (1982) for examples of successful experimental agent systems.

REFERENCES

Bolt, R. A.
1980 Put-that-there: voice and gesture at the graphics interface. Computer Graphics 14(3):262-270.

Buxton, W.
1986 There's more to interaction than meets the eye: some issues in manual input. Pp. 319-337 in D. A. Norman and S. W. Draper, eds., User Centered System Design. New Jersey: Erlbaum.

Carbonell, J. G., Boggs, W. M., Mauldin, M. L., and Anick, P. G.
1983 The XCALIBUR project: a natural language interface to expert systems. Proceedings of the Eighth International Joint Conference on Artificial Intelligence, August, Karlsruhe.

Card, S. K., Moran, T. P., and Newell, A.
1983 The Psychology of Human-Computer Interaction. Hillsdale, N.J.: Erlbaum.

Douglass, R. J. and Hegner, S. J.
1982 An Expert Consultant for the UNIX Operating System: Bridging the Gap Between the User and Command Language Semantics. Los Alamos National Laboratory.

Gould, L. and Finzer, W.
1984 Programming by rehearsal. Byte 9(6):187-210.

Hansen, W. J.
1971 User engineering principles for interactive systems. Pp. 523-532 in Proceedings of the AFIPS Fall Joint Computer Conference.

Hayes, P. J.
1986 Steps towards integrating natural language and graphical interaction for knowledge-based systems. Pp. 456-465 in Proceedings of the Seventh European Conference on Artificial Intelligence, Brighton, July.
1987a Using a knowledge base to drive an expert system interface with a natural language component. In J. Hendler, ed., Expert Systems: The User Interface. New Jersey: Ablex.
1987b Intelligent interfaces to expert systems. In T. Bernold, ed., User Interfaces: Gateway or Bottleneck? North Holland.

Hayes, P. J. and Reddy, D. R.
1983 Steps toward graceful interaction in spoken and written man-machine communication. International Journal of Man-Machine Studies 19(3):211-284.

Hayes, P. J. and Szekely, P. A.
1983 Graceful interaction through the COUSIN command interface. International Journal of Man-Machine Studies 19(3):285-305.

Hayes, P. J., Szekely, P. A., and Lerner, R. A.
1985 Design alternatives for user interface management systems based on experience with COUSIN. Proceedings of CHI'85, San Francisco, April.

Huff, K. E. and Lesser, V. R.
1982 Knowledge-based command understanding: an example for the software development environment. Computer and Information Sciences, University of Massachusetts, Amherst.

Hutchins, E. L., Hollan, J. D., and Norman, D. A.
1986 Direct manipulation interfaces. Pp. 87-124 in D. A. Norman and S. W. Draper, eds., User Centered System Design. New Jersey: Erlbaum.

Jacob, R. J. K.
1984 An executable specification technique for describing human-computer interaction. In H. R. Hartson, ed., Advances in Human-Computer Interaction. New Jersey: Ablex.

Loftus, J. P.
1986 Space: Exploration-Exploitation and the Role of Man. Johnson Space Center: NASA.

Mark, W.
1981 Representation and inference in the Consul system. Pp. 375-381 in Proceedings of the Seventh International Joint Conference on Artificial Intelligence, August, Vancouver.

Martin, J.
1973 Design of Man-Computer Dialogues. New Jersey: Prentice-Hall.

Myers, B. A. and Buxton, W.
1986 Creating highly interactive and graphical user interfaces by demonstration. Pp. 249-258 in Computer Graphics: SIGGRAPH '86 Conference Proceedings, August, Dallas, Texas.

Negroponte, N.
1981 Media room. Proceedings of the Society for Information Display 22(2):109-113.

Sacks, H., Schegloff, E. A., and Jefferson, G.
1974 A simplest systematics for the organization of turn-taking for conversation. Language 50(4):696-735.

Schegloff, E. A., Jefferson, G., and Sacks, H.
1977 The preference for self-correction in the organization of repair in conversation. Language 53(2):361-382.

Shneiderman, B.
1981 Direct manipulation: a step beyond programming languages. Computer 16(8):57-69.

Smith, D. C., Irby, C., Kimball, R., Verplank, W., and Harslem, E.
1982 Designing the Star user interface. Byte 7(4):242-282.

Szekely, P. A.
1987 Separating User Interface and Application. Ph.D. dissertation, Carnegie-Mellon University Computer Science Department.

Tanner, P. and Buxton, W.
1983 Some issues in future user interface management system (UIMS) development. IFIP Working Group 5.2 Workshop on User Interface Management, Seeheim, West Germany, November.

Wasserman, A. I. and Shewmake, D. T.
1984 The role of prototypes in the user software engineering (USE) methodology. In H. R. Hartson, ed., Advances in Human-Computer Interaction. New Jersey: Ablex.

Watt, W. C.
1968 Habitability. American Documentation 19(3):338-351.

Williams, G.
1984 The Apple Macintosh computer. Byte 9(2):30-54.

Yunten, T. and Hartson, H. R.
1984 Supervisory methodology and notation (SUPERMAN) for human-computer system development. In H. R. Hartson, ed., Advances in Human-Computer Interaction. New Jersey: Ablex.
