Intelligent Manufacturing Control
MANUFACTURING CONTROL historically has been adaptive, using sensors to detect out-of-tolerance conditions, feeding the information to a controller, and changing process parameters to bring output back within tolerance limits. This highly localized approach is no longer sufficient. As processes grow in complexity and as intense, increasingly global competition drives firms more frequently to introduce products with more variations, the need to augment existing process control techniques has grown apace. This chapter describes the tight coupling of sensor technologies and microprocessor-based software systems that manifests intelligence by learning from experience and exhibiting some degree of synergy with a human interface. Here, we present a framework for thinking about intelligent manufacturing control (IMC) in terms of a compressed organizational hierarchy and shorter feedback time.
IMC is a distributed, hierarchical approach to the control of manufacturing processes. It employs electrically coupled computer-based hardware controllers and process sensors in conjunction with a trained, self-directed work force to process physical state and historical data derived from the manufacturing environment. IMC has a twofold objective: (1) to satisfy product quality and process control requirements for existing products and processes, and (2) to be adaptable enough to do the same for future products and processes by providing a way not only to control the manufacturing process, but also to promote learning that will lead to process improvement.
In a broader context, IMC encompasses all systems that affect the manufacturing floor, including:
product and process design systems, including engineering and vendors;
facility systems, including environmental and maintenance support;
personnel systems, including training and subcontracting;
order entry and requirements forecasting by sales and marketing personnel, dealers, and distributors; and
physical distribution systems, including warehousing and transportation.
In practice, these systems usually are considered only at the interfaces to the manufacturing environment, where their effects are described heuristically and statistically.
To describe IMC in the broadest context, five levels of traditional plant hierarchy are here compressed into three, which permits a simpler organization and provides an avenue for establishing interactive links with manufacturing areas examined in other chapters. This compression is consistent with the current trend in industry toward a general flattening of organizational hierarchies.1 The requirements for IMC include a temporal dimension.
Figure 2-1 uses a logarithmic scale to correlate 10 levels of feedback time—ranging from 0.01 second to one quarter of a year—with three domains of intelligent control. These domains are shown opposite the three levels of the factory hierarchy on the vertical axis.
In the domain of process control, a precisely stated contingency procedure operates in real time at the machine level without human intervention. In the domain of observation and pattern recognition, the efficacy of procedures defined in the domain of process control is observed; contingencies in the behavior of procedures are studied; and improvements are made. Problems are solved at the cell level. The domain of learning and improvement is one of choice, where the options available for improving a system are assumed to be numerous and available resources to be limited. It is in this third domain, at the plant level, that economic choices are made about which avenues of process improvement to pursue in view of supply and demand, resource utilization, and other production management functions. IMC spans all three domains.
Finally, this chapter assesses the human skills and machine complements that are needed to achieve IMC. In general, high-speed technologies tend to push decision making down to the
unit level. Increased decision making at this level requires more highly skilled and autonomous workers, and more effective computer-based, real-time control diagnosis and decision support tools. (See case studies in the section on present and future practice on pages 42-49.)
Current sensor technology encompasses visual, ultrasonic, thermal, chemical, inertial, electrical, tactile, and audio sensors. These can be used singly or in combination to
provide highly detailed macroscopic information on dimension, position in space, shape, velocity and acceleration, global or local temperature, and compositional distribution;
detect a variety of other physical, chemical, electrical, optical, and magnetic properties;
probe internal macro- and microstructure to measure parameters such as grain size, texture, and the presence, size, and distribution of voids or other defects; and/or
determine material characteristics at the atomic and molecular levels.
Used as transducers, to trigger as well as sense signals, and to perform analog-to-digital conversions, such sensors can monitor and control a wide range of manufacturing operations and processes. IMC is concerned with using these devices in concert with human knowledge and automatically learned relationships to provide closed loop, real-time control of critical engineering and manufacturing processes. This control is essential for process stability and for maximizing quality, performance, and reliability while keeping costs low.
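The closed-loop control described here can be reduced to a simple feedback step. The sketch below assumes a single measured output and a single adjustable setting; the target, gain, and function names are illustrative only, not drawn from any particular control system.

```python
# A minimal sketch of one pass through a closed control loop, assuming
# one process parameter adjusted from one sensor reading. TARGET and
# GAIN are hypothetical values an engineer would tune.

TARGET = 100.0   # desired process output (illustrative units)
GAIN = 0.5       # proportional gain (illustrative tuning)

def control_step(measurement: float, setting: float) -> float:
    """Adjust the process setting to reduce the measured error."""
    error = TARGET - measurement
    return setting + GAIN * error

# A low reading raises the setting: error = 5.0, so the setting
# increases by GAIN * 5.0 = 2.5.
setting = control_step(95.0, 10.0)
```

In a real controller this step would run at the machine level, inside the fastest feedback band of Figure 2-1, with no human intervention.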
In sum, IMC implies an ability to (1) assimilate and validate sensory information from a variety of sources, (2) make reasonable assumptions about an operating environment, and (3) execute suitable action plans based on both scientific models of a process and experience gained from executing prior action plans. A model that can learn from prior actions and adjust itself accordingly must thus be devised.
IMC has the earmarks of a science that today is uncharted, but that has tremendous implications for the future. Its extensive use of computer technology builds on U.S. strengths in cognitive science, computers, and systems and provides an avenue for strengthening the transfer of knowledge from the laboratory to the manufacturing floor. This will occur as the distinction between laboratory and factory fades, with the latter necessarily becoming the locus of experimentation. IMC will change the way people think about knowledge transfer. It will change the culture of improvement, shifting emphasis away from transferring technology from the laboratory to creating and exploiting technology in the factory.
The need for IMC systems is being driven by the increased precision and decreased cycle times demanded by today's intense competition and by business needs for improved product quality. Given the increasing use of technology in manufacturing, and the growing volume and complexity of information and information sources, unaided human decision making is becoming less and less optimal—decisions made by people simply take too long and fail to reflect the richness of available data. But to leapfrog to the concept of the “lights-out” factory is to overlook the value of the people on
the floor. In order to process increasingly complex information in a shorter time, new tools must be developed to leverage human cognitive abilities. This is precisely the vision for IMC.
Even within the limited context of the manufacturing floor, the issues associated with IMC are numerous and technically challenging. Yet to be resolved are issues faced by early users and suppliers of such systems—issues related to data acquisition, correlation, presentation, and quality control, simulation of control decisions, learning from process disruptions, understanding process complexity, standardization in system implementation, and more efficient user training. A description of each of these issues follows.
Data acquisition. U.S. manufacturing is characterized by thousands of types of data, interfaces, and sensor requirements. One issue is the infrequent use of computerized machine controllers by U.S. manufacturers relative to manufacturers in other countries. Without such controllers, data for IMC are simply not available. Another issue in process measurement is the low reliability of gauges, which can corrupt control data. A third issue is the need for communication protocols for moving data to control points.
Data correlation. The problem of combining different types of data must be solved. When the underlying process physics are understood (e.g., in steel rolling), data fusion can be algorithmically described and handled by computer-based controllers. In most manufacturing today (both discrete parts and process industry), data fusion is performed by a human operator or supervisor.
Data presentation. Presentation of data becomes critical when humans act as controllers. X̄-R charts for statistical data are one approach—well known in manufacturing—for reducing the time to operator action.2 More such presentation methods are needed to reduce the enormous amount of available manufacturing data to actionable information.
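The limits for such a chart are computed from subgroup means and ranges. The sketch below uses the standard Shewhart control-chart constants for subgroups of five; the measurement values themselves are invented for illustration.

```python
# An illustrative X̄-R chart computation. A2, D3, and D4 are the
# standard control-chart constants for subgroups of size n = 5; the
# sample data are made up.

subgroups = [
    [10.2, 9.8, 10.1, 10.0, 9.9],
    [10.0, 10.3, 9.7, 10.1, 10.0],
    [9.9, 10.0, 10.2, 9.8, 10.1],
]

A2, D3, D4 = 0.577, 0.0, 2.114   # constants for subgroup size n = 5

xbars = [sum(g) / len(g) for g in subgroups]    # subgroup means
ranges = [max(g) - min(g) for g in subgroups]   # subgroup ranges
xbarbar = sum(xbars) / len(xbars)               # grand mean (center line)
rbar = sum(ranges) / len(ranges)                # mean range (center line)

# Control limits for the X̄ chart and the R chart
xbar_ucl = xbarbar + A2 * rbar
xbar_lcl = xbarbar - A2 * rbar
r_ucl = D4 * rbar
r_lcl = D3 * rbar
```

A point falling outside these limits is the chart's signal to the operator that an assignable cause should be sought.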
Data quality control. Collection and correlation of data is necessary, but not sufficient, to meet the needs of IMC. Data from which knowledge is to be extracted must be of uniformly high quality; this is generally not true of raw data. Automatic methods of identifying and eliminating errors, gaps, and redundancies in data are needed.
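A first step toward such automatic screening might look like the following sketch, which flags timestamp gaps and physically implausible readings. The bounds and the gap threshold are assumptions an engineer would supply for a particular sensor.

```python
# A minimal sketch of automatic data screening, assuming timestamped
# sensor readings sorted by time. The plausibility bounds (lo, hi) and
# maximum acceptable gap are illustrative engineering inputs.

def screen(readings, lo, hi, max_gap):
    """Split raw readings into clean values and flagged problems."""
    clean, problems = [], []
    prev_t = None
    for t, v in readings:
        if prev_t is not None and t - prev_t > max_gap:
            problems.append(("gap", prev_t, t))       # missing data
        if not (lo <= v <= hi):
            problems.append(("out_of_range", t, v))   # likely gauge error
        else:
            clean.append((t, v))
        prev_t = t
    return clean, problems

# The spike at t=5 and the 1-to-5 gap are both flagged, not passed on.
data = [(0, 10.1), (1, 10.2), (5, 99.9), (6, 10.0)]
clean, problems = screen(data, lo=5.0, hi=15.0, max_gap=2)
```

Flagged problems would feed a diagnosis step rather than the control model, so that gauge faults do not corrupt control data.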
Simulation of control decisions. Data presented to a human controller must be analyzed and a control option selected. To optimize the control decision, historical information in the form of expert advice or root cause analysis is required. It should be possible to simulate a critical control decision to ensure that it is the optimum selection.
Learning from process disruptions. Each disruption of a process needs to be recorded, the problem identified, its cause determined, and a way devised to prevent it from happening again. This task requires access to processing data that may be highly precise, at the microsecond interval and across the entire system, and include historical data on similar disruptions. In addition, such data, being problem-specific, would change from one disruption to another. Process learning relies on an ability to adjust rapidly to the changing needs of different problems and to link information from a wide variety of data bases. To be useful, such data must be statistically significant and thus may require variable recording times.
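One minimal realization of such a disruption record, with a lookup of previously diagnosed similar cases, might be sketched as follows; the fields and the similarity rule are illustrative assumptions, not a prescribed schema.

```python
# A sketch of a disruption log that supports lookup of similar
# historical cases. Field names and the matching rule are illustrative.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Disruption:
    process_step: str
    symptom: str
    cause: Optional[str] = None            # filled in once diagnosed
    countermeasure: Optional[str] = None   # filled in once devised

log: List[Disruption] = []

def record(d: Disruption) -> List[Disruption]:
    """Store a new disruption and return prior diagnosed similar cases."""
    similar = [p for p in log
               if p.process_step == d.process_step
               and p.symptom == d.symptom
               and p.cause is not None]
    log.append(d)
    return similar

# A diagnosed case is recorded; a recurrence then retrieves it.
record(Disruption("rolling", "thickness drift",
                  cause="worn roll", countermeasure="replace roll"))
matches = record(Disruption("rolling", "thickness drift"))
```

A production system would of course match across many data bases and variable time scales, as the text notes; the point of the sketch is only that each disruption record links symptom, cause, and countermeasure for reuse.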
Understanding process complexity. In certain industries, such as the integrated circuit industry, the complexity and number of processes overwhelm all other considerations. In some chip manufacturing situations, for example, the process limits are so strictly adhered to that a yield of only a few percent is allowable. In such industries, the basic need is for a much better understanding of the processes involved. A scientific point of view might dictate explicit modeling of the processes. From a control perspective, it might be enough to understand the interaction parameters sufficiently well to provide appropriate control compensation in real time.
Standardization in system implementation. Because every manufacturing environment is unique, standardization of control architectures, communications, data base structures, information graphics, and computer applications software is necessary to reduce the cost of implementing IMC. Creative financial programs and approaches to technology upgrades are needed to encourage start-up.
More efficient user training. Because control is distributed and centralized decision making is often inefficient, IMC systems require a new type of operator and new organizational structure. The work force must be trained to use computer-based information tools to make local control decisions. Management must be reorganized and its role changed from traditional decision making to coaching. Managers, particularly in smaller companies, must be shown that new manufacturing technologies not only are available, but also are vital to their firms' long-term health.
To imagine manufacturing with intelligent control, it is helpful to recall the Martian Rover. Deposited on the surface of Mars, this
highly reliable mobile robot manipulated a variety of tools, responded to numerous sensors, performed experiments, and learned from and adapted to its environment. Its use of hierarchical, local, decentralized control to operate untended and deal with contingencies in an unpredictable environment was observable remotely (from another planet!), allowing its control algorithms to be studied, improved, and effectively reintroduced into the environment. The notion of remote control—of running a process from behind a wall, without seeing, touching, or feeling any part of it—is implicit in IMC.
The human observer brings to the analysis of an unfamiliar scene broad knowledge tied to perceptual skills, and the ability to review, and perhaps rethink, a scene from different perspectives. Such dynamic interplay among sensors, analysis, and knowledge is necessary for learning about a sensed environment to be able to respond to changes in it. Lacking this integration, current machine vision and sensor systems have limited ability to work in uncontrolled or changing environments. Many plant installations of sensor technology have ultimately failed, not because the sensor technology was wanting, but because conditions changed relative to those for which the sensor and control strategy were optimized.
Current inspection and robotic guidance analysis systems must accept input in whatever form sensors provide it. The sensors do not know whether the data they provide are of any use, and the analysis system does not know what specific changes in the data might mean (e.g., that a light bulb has just burnt out). The basic dynamic interplay employed by a learning child can close the loop around data acquisition and analysis; the development of computer-based analogs, such as the Cooperative Hierarchical Image Learning Dynamics (CHILD) begun and partially completed by the Industrial Technology Institute, could be a major step toward truly autonomous robotic capability.3 It is this dynamic interplay that leads to adaptation and modification of knowledge, which, in turn, opens the way to novel inquiries and greater understanding. Application of such image learning dynamics would produce truly flexible manufacturing systems that could identify or sort mixed parts or direct robotic applications in less controlled environments.
CHILD entails development of four basic building blocks as a foundation for the required interplay of dynamic interaction. These are:
an adjustable image acquisition module, which includes adaptable imaging, lighting (irradiance), and/or any other sensor systems to be used;
an image (or other data) analysis module, which will be the computer complement most suited to analyzing the data provided and controlling the parameters at hand;
a knowledge base module, which will make a broad base of knowledge —of the sensor systems and what they can do, of the environment and natural laws, and of the object (be it a production part, a machine, or a satellite)—available on a hierarchical level; and
an executive control module, which will be able to solve simple problems, learn from experience, and coordinate the activities of the other modules.
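The cooperation of these four modules might be sketched schematically as follows. All class and method names are illustrative assumptions, since the source describes the modules only at this level of abstraction.

```python
# A schematic sketch of the four CHILD building blocks as cooperating
# objects. Every name and return value here is invented to show the
# division of labor, not taken from the actual CHILD system.

class AcquisitionModule:
    """Adjustable image acquisition: imaging, lighting, other sensors."""
    def acquire(self, exposure):
        return {"image": "raw-frame", "exposure": exposure}

class AnalysisModule:
    """Analyzes the data provided and extracts candidate features."""
    def analyze(self, frame):
        return {"features": ["edge", "hole"], "confidence": 0.6}

class KnowledgeBase:
    """Broad knowledge of sensors, environment, and the object."""
    def interpret(self, features):
        return "part-A" if "hole" in features else "unknown"

class ExecutiveControl:
    """Coordinates the other modules and learns from experience."""
    def __init__(self):
        self.acq = AcquisitionModule()
        self.ana = AnalysisModule()
        self.kb = KnowledgeBase()

    def identify(self):
        frame = self.acq.acquire(exposure=1.0)
        result = self.ana.analyze(frame)
        return self.kb.interpret(result["features"])
```

The essential point is the loop the executive closes: when interpretation fails, it could re-command acquisition (e.g., change lighting) rather than accept whatever input the sensors happen to provide.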
Learning about the environment and effectively responding to contingencies involves a synthesis between human and machine, local control, and central intelligence. An intelligent controller must be able to communicate with heterogeneous data bases, learn from similar or related instances, and incorporate a model of control for different contingencies. This model must be autonomous, i.e., capable of operating with or without human intervention. An even more powerful paradigm for control exploits the human powers of perception, pattern recognition, and problem solving and the intelligent manufacturing system's ability to manipulate vast quantities of procedural knowledge. Manufacturing control systems built on this paradigm exhibit synergy between human and machine, enable both to learn, and incorporate a dynamic model of the world.
The vision for intelligent control is of control across the breadth of the nation's manufacturing systems. It is not implementing optimal control for achieving stated processing goals for a unit operation, but rather creating systemwide views of processing that take into account the interrelationships among individual unit operations in the manufacturing cycle. Given such views, it is possible to affirm the goals for any one of a chain of processes and clearly recognize the implications of material, process, or product deviations. With linkages among individual intelligent controllers in the manufacturing chain, important process revisions, quality information, and overall system objectives can be shared with minimal lag time.
The intelligent controller implicit in this vision must be capable of establishing and executing process plans that both reflect operator know-how and model-derived principles and are aimed specifically at controlling product attributes. To maintain processing flexibility in an environment characterized by rapid maturation of process, material, and product applications, IMC must exploit control
technologies that do not (1) restrictively prejudge interrelationships among key process parameters, (2) assume certainty of control objectives, or (3) prematurely determine linkages between process behavior and product quality.
Adaptive control, though it responds to environmental change, is based on some fixed model of a process and is local in terms of the information it gathers and the control it exerts. IMC, in contrast, analyzes and uses historical information about its own actions, together with systemwide information from many sensors, to adjust its model of the world and effect novel action plans.
Knowledge-based systems that clone the knowledge of one or more experts to improve understanding and control of a process already are in use in some applications such as steam turbine generators, diesel locomotives, and wave-soldering machines.4 The next steps are to effect a synergy between these expert systems and the humans whose knowledge they embody and to implement methods for learning by example. This will optimally be achieved by using all available information. This information might take the form of known rules and algorithms or patterns of response learned by example. Learning must take place with every decision and situation. The characteristics that make manufacturing environments ideal for learning—information sources that are multiple, complex, dynamic, and accessible—are also responsible for the difficulty of learning.
IMC is more than knowledge-based systems for process control; its purpose is not to exploit operating expertise, but to convert operating experience into a manufacturing science. This conversion will require very close interaction between the scientific community and the manufacturing plant. The plant must be the locus of research because only there can entire processes, rather than isolated steps of a process, be studied. At the same time, it is the scientific process that provides the methods of learning by which a logical model capable of representing entire processes can be built. In plants, pressure for control overwhelms scientific understanding. The scientific community's interest in the development of a manufacturing science must be turned into the resources required to build needed learning systems within the context of the plant.
PRESENT AND FUTURE PRACTICE
The implementation of IMC depends on both machine-related and business- or environment-related manufacturing assumptions,
greater intelligence in the form of precise and complete sets of contingent procedures, and versatility and generalization of deep process knowledge. It relies on the replacement of scale economies by learning economies and the existence of a systemwide architecture that encompasses information structure, organization of human resources, and process structure at all levels of the hierarchy and on the nature of the decision making at each level.
Manufacturing assumptions today are vastly different from those of only a few decades ago. A fundamental shift in the paradigm of production is taking place—from managing materials processing to managing information—in which machines are seen increasingly as extensions of the human mind. Any discussion of IMC, whether in discrete or process industries, must consider two broad sets of manufacturing assumptions, machine-related and business- or environment-related.
Modern systems characterized by integration and intelligence are appropriately viewed as human-machine cooperatives. To understand the significance of this shift, imagine the technology in the extreme. Consider a small group of engineers using a connected system of workstations to design and write the software for producing a product on any defined configuration of equipment anywhere in the world. Once the procedures are created, machine capacity and materials become commodities to be bought and sold at whatever price can be obtained. What is preventing such a development is not the absence of mechanization, but the need for greater intelligence in the form of precise and complete sets of contingent procedures. Developing these intellectual assets is today's technological imperative.5
The development of intellectual assets rests on a fundamental, machine-related assumption—the versatility and generalization of deep process knowledge. Newer manufacturing systems have distinct advantages: they can be used to produce many different products, they are adaptable to changes in design or recipe, and they can operate untended. Investment in such systems—in people, equipment, and software—must be made before production begins. The versatility of manufacturing systems and transportability of procedures make the market for production capacity highly competitive and, consequently, the capacity itself a commodity. The only premium that can be extracted lies in the creation of new procedures for improved processes and products.
Scale economies are replaced by learning economies, and firms compete by trying to create performance advantages and introduce new products more quickly.
The versatility and generalization of deep process knowledge rests on another machine-related assumption—the existence of a systemwide architecture that encompasses information structure, organization of human resources, and process structure at the plant, cell, and machine levels.6 The architecture of factory work that integrates information and materials processing, automated auxiliary functions, and extreme flexibility and versatility would probably be hierarchical. The system would use specialized, dedicated computers to operate machines, material-handling facilities, and inspection processes and to manage cells throughout the plant in an integrated manner. Such hierarchical organization is used in almost all existing automated manufacturing systems and corresponds to the vertical organization of most present-day factories.
Levels of decision making associated with these factories also are present in the hierarchical computer system (see Figure 2-1). At the lowest level, concerned with actual machine operation, is process information. Decision making at this level can employ adaptive control. At the intermediate level, where manufacturing operations are managed and contingencies and conflicts are resolved, are cell controllers. The principal decision-making function at this level is coordination and management of resources. At the plant level, the principal decision-making function is management of experimental procedures, capacity, and knowledge bases, and the resource allocation methods for learning and control are strategic.
Other manufacturing assumptions related to the machinery of production are high reliability, failure recovery (graceful degradation and restart), the existence of workstations and massive amounts of information, and consistency of data.
The fundamental business-related assumption of the new manufacturing environment is an economic representation of a world model, a complete description of contingent procedures for manufacturing control viewed from the perspectives of factory, product, and process. These descriptions may reflect different levels of abstraction and precision, but must be consistent across the hierarchy.
The increasingly innovation-based nature of competition demands quick start-up and transition with minimum waste of materials, time, and human resources. This situation gives rise to business-related manufacturing assumptions associated with the development and management of intellectual assets, including:
person-machine interaction (operator training and instruction, and ease of use in a more complex world);
economic management of disruptions (a new architecture for process control costing);
explicit modeling of disruptions; and
intergenerational learning.
Contingencies are typically brought to light by a product's failure to conform to specifications. Discrepancies—whether in consistency, location, shape, size, surface finish, or volume—can result from combinations of changes in four broad categories: mechanical, thermal, operational, and feedstock properties. Also of concern is whether changes are systematic (occurring with approximately the same magnitude each time), random (occurring each time with different magnitudes and without apparent pattern), or both. Recognizing, diagnosing, and learning from contingencies requires high technical intelligence, both human and machine, and grounding in the scientific method.
In the following section, a model for intelligent manufacturing control is developed, and the consequences of the various machine- and business-related manufacturing assumptions are examined in the areas of integration, control, and intelligence.
An Architecture for IMC—the World Model
Figure 2-2 shows a classical model of adaptive control. Ideally, inputs are fed into a black box and subjected to procedures that produce an expected output. Environmental effects on the procedures may result in an actual output that differs from the expected output. Adaptive control consists of adjusting procedures to compensate for the detected difference between actual and expected outputs, thus moving actual output closer to expected output. Adaptive control assumes that differences between actual and expected outputs arise from environmental conditions and are therefore random; its only response is to make whatever adjustments are necessary to get back to the target value.
A simple example of adaptive control is thermostatic control of room temperature. If the temperature rises above the upper setpoint on the thermostat, the heat is shut off; if it falls below the lower setpoint, the heat is turned on. The system cannot discern external causes of variation, such as the window being left open. It simply continues to turn the heat on and off when the temperature falls below or rises above the established setpoints. Beyond certain tolerances, differences between actual and expected outcomes become disruptions. IMC views these disruptions as having both random and systematic elements, and treats the latter as
assignable to a cause. Returning to the example of thermostatic control, an intelligent system would attempt to identify a systematic element at work in too-rapid fluctuations in temperature. To do so, the system would need a logical model capable of representing cause and effect relationships associated with the disruption. This model might consider, for example, outside temperature, building insulation, and inside temperature and incorporate heat transfer equations and a mechanism for controlling doors and windows. Because it must associate a cause and effect relationship with every disruption, and because disruptions are evolutionary, such a model can never be complete. It is thus always a model of search with a virtual structure.
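The contrast between the plain thermostat and its intelligent counterpart can be sketched as follows. The setpoints and the threshold for a "too-rapid" temperature drop are assumed values standing in for a real building model.

```python
# A sketch contrasting plain thermostatic (adaptive) control with an
# intelligent check for a systematic, assignable cause. Setpoints and
# the drop-rate threshold are illustrative assumptions.

LOW, HIGH = 19.0, 21.0    # thermostat setpoints (degrees C)
MAX_DROP_RATE = 0.5       # degrees C per minute; faster drops suggest
                          # an assignable cause such as an open window

def thermostat(temp, heat_on):
    """Plain adaptive control: hysteresis between two setpoints.
    It cannot discern external causes of variation."""
    if temp < LOW:
        return True        # turn heat on
    if temp > HIGH:
        return False       # turn heat off
    return heat_on         # between setpoints, hold current state

def diagnose(history):
    """Intelligent layer: flag temperature falling faster than the
    (assumed) building model allows with doors and windows closed."""
    alerts = []
    for (t0, x0), (t1, x1) in zip(history, history[1:]):
        rate = (x0 - x1) / (t1 - t0)
        if rate > MAX_DROP_RATE:
            alerts.append((t1, "assignable cause suspected"))
    return alerts

# The thermostat just cycles; the diagnosis layer notices the
# too-rapid drop and attributes it to a systematic element.
alerts = diagnose([(0, 20.0), (2, 18.5)])
```

The diagnosis layer is where the logical model of cause and effect (outside temperature, insulation, heat transfer) would attach.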
The logical model supplies the structure needed to relate a disruption to its possible causes and to refine the model progressively as more is learned about the process. Beginning with a simple matrix of causes and effects, progressive learning can lead to a scientific/mathematical model that can predict from a change in one parameter its consequences for downstream processes.
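The initial cause-and-effect matrix might be sketched as a simple tally that is refined with each confirmed diagnosis; the effects and causes shown are invented examples, and a mature system would replace the tally with a predictive scientific/mathematical model.

```python
# A minimal sketch of the cause-and-effect matrix that seeds the
# logical model. Confirmed diagnoses refine the matrix, so the most
# likely cause proposed for an effect improves with use.

from collections import defaultdict

matrix = defaultdict(lambda: defaultdict(int))   # effect -> cause -> count

def confirm(effect, cause):
    """Record a diagnosed disruption, refining the matrix."""
    matrix[effect][cause] += 1

def likely_cause(effect):
    """Return the most frequently confirmed cause, or None if the
    matrix has nothing yet for this effect."""
    causes = matrix.get(effect)
    if not causes:
        return None
    return max(causes, key=causes.get)

confirm("surface defect", "worn tool")
confirm("surface defect", "worn tool")
confirm("surface defect", "coolant failure")
```

Progressive learning in this sense is exactly the transition the text describes: from a matrix of observed associations toward a model that predicts downstream consequences of a parameter change.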
Figure 2-3 shows the architecture of a world model for IMC. It has a process control loop for each process step and a logical model for every disruption that can occur at that step. This part of the world model extends the simple input/output model of adaptive control to take into account all known parameters of a process step and to deal with disruptions that occur over a number of process steps.
The progression from one logical model to the next is the
essence of IMC. At different stages of knowledge about a process, one would require different information and might run different experiments. These experiments may be explicit or implicit in the operation of the process.
At the plant level, manufacturing control consists in systematically choosing which disruptions to address. Here, an economic representation of the effects of disruptions is required to guide product and process choices for the plant. In the world model, this representation takes the form of a logical model for deciding what resources to allocate for problem solving and learning. Because the factory is a dynamic, uncertain world, the world model is always evolving. Disruptions provide opportunities for learning, which in turn require the commitment of people and money. Modern, microprocessor-based manufacturing systems make available an enormous amount of disaggregated data. IMC supports the development of an evolutionary world model by allowing these data to be organized along the domains of process control identified earlier.
The following sections examine the world model in terms of integration, control, and intelligence.
Integration in IMC involves data accumulated over time, including data on past disruptions, and implies the ability to relate current disruptions to earlier, similar disruptions. In this temporal context, IMC exists at three levels: (1) between machines and the flow of processes within a factory; (2) among different functions, such as design, engineering, and manufacturing; and (3) between human knowledge and machine intelligence.
Machine–machine integration relies on an information structure that is capable of supporting the breakdown of tasks in an uncertain, dynamic environment. Necessary are standards for data communication to support the comprehensive information objectives of the firm and a consistent world model that represents the factory as a system. The latter requires that performance levels be consistent across the hierarchy of control from plant to machine.
Functional integration implies that information from the factory can be used as a basis for learning, improving processes, and assisting in the design of the next generation of products and processes. Process models need to be constructed that can express different levels of abstraction and different views of the same circumstances (i.e., the manufacturing view, the product view, and the process view). Such models of design and manufacturing management must be consistent across all three views and be able to adapt to change.
The third level of integration, between human knowledge and machine intelligence, is discussed in the section on intelligence.
Process control in a manufacturing plant relates to how a product is made rather than to what is made or when it is made. Process control generally takes place at the machine and the system levels. At both levels, control may be open loop, which requires one to recognize patterns and take action, or closed loop, which removes the human decision maker. The typical view of control presumes the existence of sensors that measure process outcomes, a model that recognizes discrepancies between expected and actual outcomes, and an algorithm that determines which process parameters must be changed and how (either to feed forward to correct the process in subsequent steps or feed back to correct subsequent output, or both).
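The feed-forward and feedback options can be sketched for a hypothetical two-step process; all functions and numbers below are illustrative, standing in for the process model and sensors the text presumes.

```python
# A sketch of the feed-forward/feedback distinction, assuming a
# two-step process in which step A has developed an unknown drift.
# Every value here is invented for illustration.

def step_a(x, bias):
    return x + bias                 # upstream step with drifting offset

def step_b(x, correction=0.0):
    return x + 1.0 - correction     # downstream step can compensate

target_after_a = 5.0
raw = step_a(4.0, bias=1.3)         # expected 5.0, actual 5.3

# Feed forward: pass the measured deviation downstream so step B
# compensates on the SAME workpiece.
deviation = raw - target_after_a
out = step_b(raw, correction=deviation)

# Feed back: correct step A's input on the NEXT cycle, so subsequent
# output is fixed at its source.
next_raw = step_a(4.0 - deviation, bias=1.3)
```

Both corrections presume the three elements named above: a sensor for the outcome, a model of the expected value, and an algorithm relating deviation to a parameter change.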
In a noisy world with faulty sensors, incomplete knowledge of cause-and-effect relationships between process parameters and outcomes, and dynamic changes in the environment, closed-loop control is difficult at best. Even at the machine level it is not very effective. Control theory has provided some impressive results, but these generally have been applied to very restrictive domains. This situation is changing, however, with the growing ability to gather, assimilate, and make sense of masses of information in a variety of forms using developing techniques in artificial intelligence and with the building of intelligent systems that exploit available information and synergies between people and machines.
To use the flexibility inherent in modern manufacturing systems competitively, the design process must be speeded significantly. The rapid learning implicit in doing so can be facilitated by using machine intelligence as an adjunct to human knowledge. Intelligent systems foster such synergy.
The criteria for an intelligent system are that it be used for learning, that it be focused on technological know-how, and that its intelligence be the joint product of a person and a machine. This definition encompasses a variety of systems available today, including decision support, expert, and optimization systems.
To exploit aspects of cognitive science that can lead to a technology of problem solving—the foundations of an applied cognitive science—it is not enough to model the mind as a machine, as in early artificial intelligence research, or to replicate human problem-solving processes in computer programs, as is done in expert systems. Instead, the knowledge of a given problem domain must be exploited by (1) separating well understood or formal elements from poorly understood or informal elements, (2) using the formal elements to enhance understanding of the informal elements, and (3) continuously transforming the informal elements into formal ones.
The separation and subsequent reintegration of formal and informal elements is the essence of cognitive activity and the function of any intelligent system. As the progress of technology is marked by an increase in formal abstraction, the architecture of an intelligent system can be judged by its inherent degree of formality and the extent to which this can be increased. To understand how formality is enhanced in an intelligent system, the problem-solving process—how problems are recognized, posed, and solved—must be studied.
How are problems recognized in technological development? The traditional view is that they are recognized in the design stage and steps are taken to forestall them. In practice, problems are more often recognized after the fact, as contingencies arise and causes are sought. Problems can only be recognized at the design stage in manufacturing environments that are well understood, which implies that they are static. More generally, problems arise unexpectedly in uncertain, ambiguous, and dynamic environments.
How problems are posed is inextricably linked to the way knowledge of a problem is organized and represented. The traditional view is that knowledge is organized in categories, and that theories are developed that relate these categories, creating new ones or collapsing several into one. The dynamic view is that knowledge is organized around prototypes that have implicit internal relationships, and that problems are posed as searches aimed at identifying the degree of similarity to a typical member of a prototype and elaborating on the internal structure of that prototype.
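The prototype-based posing of problems can be illustrated with a minimal similarity search. The fracture prototypes and feature names below are hypothetical, invented only to show the mechanism of matching a new case against the typical member of each prototype:

```python
# Sketch of the "dynamic" view: a problem is posed by asking which prototype a
# new case most resembles. Prototypes and features are illustrative assumptions.

def similarity(case, prototype):
    """Fraction of the prototype's typical features that the new case shares."""
    shared = sum(1 for k, v in prototype.items() if case.get(k) == v)
    return shared / len(prototype)

# Hypothetical wire-fracture prototypes, each a bundle of typical feature values.
prototypes = {
    "die_wear":   {"surface": "scored", "diameter_drift": "up",   "stage": "dry"},
    "bad_anneal": {"surface": "clean",  "diameter_drift": "none", "stage": "wet"},
}

new_case = {"surface": "scored", "diameter_drift": "up", "stage": "wet"}
best = max(prototypes, key=lambda name: similarity(new_case, prototypes[name]))
print(best)
```

Elaborating the internal structure of the winning prototype (here, `die_wear`) is then the starting point for diagnosis.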
How are problems solved? Here, again, the extreme views can be characterized as traditional and dynamic. The traditional view holds that precise procedures can be written for solving problems through logic and reasoning. The dynamic view is that problems are solved by means of some combination of experience, judgment, experimentation, intuition, and skill.
These views have a certain consistency. The traditional approach to problem recognition, the organization of knowledge into categories, and the formal method of problem solving are consistent with the notion of technology as science. The dynamic approach to problem recognition, the organization of knowledge around prototypes, and informal methods of problem solving are consistent with the notion of technology as expertise. These two perspectives lie at the upper and lower bounds of process knowledge. An intelligent system can simultaneously take both perspectives and exploit the synergy between science and expertise to move progressively to higher planes of knowledge.
An architecture of control for an intelligent system relies on five central premises:
problem solving is begun with partial knowledge of the problem domain;
this knowledge comes in chunks;
these chunks can be formally represented and manipulated;
relationships between chunks can be seen, theorized about, and tested in the external environment; and
human rationality is bounded, and judgment is value-laden and biased.
Two cases—one involving a wire-drawing operation, the other a chemical plant—are used here to illustrate issues of control, integration, and intelligence in IMC.
Both cases involve numerous processes that are subject to disruptions. Because a disruption in one process can originate in an earlier process, IMC must be able to establish such relationships. Traditionally, this has been done by searching historical data for “similar” discrepancies between recipes and effects. The concept of the logical cell (introduced with Figure 2-1, expanded upon in the wire-drawing case, and implicit in the chemicals case) provides a structure for searching process control data bases and for controlling process parameters through closed-loop feedback. The objective is ultimately to move from a rule-based system to a system that begins to approach a science. IMC at this level serves the development of recipes and products, as seen in both of the following cases.
It is readily apparent from these cases that much work is needed to achieve the vision of IMC described in this report. The state of the art will not move beyond that depicted in these cases unless industry and academe mount a concerted effort and commit the necessary resources.
IMC in the Wire-Drawing Industry
The 1985 introduction of microprocessor control in the wire-drawing industry and that industry's rapid adoption of computer-integrated manufacturing (CIM) have afforded an opportunity for cooperative development of an architecture for IMC. Preliminary results of the efforts of one firm working jointly with academic researchers to introduce IMC in one of its wire-drawing plants are reported below.
To preserve the firm's anonymity, this account is set in a generic, pre-1985 installation. A typical wire-drawing plant has two large pickling machines, 200 dry wire-drawing machines, 20 heat-treating installations, 1,000 wet wire-drawing machines, and 100 finishing lines, all laid out functionally. Such a plant makes between 150 and 1,000 different products. Wire rod received on spools is tested for properties related to the raw material and stored. The spools are subsequently pickled (cleaned in an acid bath), then loaded into an unwinding spool bin, and the wire is pulled through a series of progressively smaller dies to lengthen it and reduce its diameter by adding stress and changing its crystalline structure. Next, the wire is heat treated to relieve the stress and then coated with different substances to change its surface structure. The wire is wound back onto spools, transported to a wet wire-drawing operation, and then to finishing operations such as cutting, galvanizing, and coating with adhesives.
Wire fractures, the most vexing problem in the wire-drawing process, result from process variance and are reflected in poor-quality end products. Process variables number in the hundreds, as do possible responses to a wire fracture. To enable operators to cope with these many degrees of freedom, attention-focusing and control mechanisms are needed.
Today, different steps in the process are located in different parts of the plant under their own supervisory structures. Information about the impact of heat treating, which is done in one part of the plant, on wire-drawing, which is done in another, is not captured. In fact, process variance is not analyzed systematically; a process that is out of control is handled by ad hoc engineering analysis. Consequently, no learning occurs and history repeats itself.
IMC could provide the mechanisms needed to identify problems and adjust process parameters, both upstream to prevent problems from recurring, and downstream to correct for process deficiencies. It could also help management make trade-offs between managing production and running experiments to isolate process problems.
An automated, continuous wire-drawing process is currently being developed that places process flows under the control of one system for similar products, significantly reducing the required machine complement. The typical automated plant will probably still have two pickling machines, but only 10 dry wire-drawing machines, one heat-treating installation, 500 wet wire-drawing machines, and 50 finishing lines (representing a 95-percent decrease in dry wire-drawing and heat-treating equipment, and a 50-percent decrease in wet wire-drawing machines and finishing lines).
When all of these microprocessor-controlled machines and the processes that run on them are integrated under a hierarchical control structure, a decision maker will be able to track every process parameter that operates on every meter of wire that goes through the line. For example, during heat treating, the location of a given meter of wire can be known when different furnace burners are opened and closed. Thus, the decision maker can know not only the ambient temperature of the furnace at any given time, but also the finite states of a number of operating parameters. This means that the decision maker will have data, never before available, that describe the sequence of events that acted on a meter of wire precisely when a fracture occurs.
Consider just the wire-drawing process. Certain parameters relate to the incoming material, certain parameters to the outgoing material, and certain parameters are fixed for the entire spool. Control parameters change with every meter of wire. One can use the information gathered on fractures to make changes in the microprocessor controller during the process and affect key process variables in the outcome. The algorithm for control can be changed, the results of experiments observed, and new changes introduced. The decision maker can use this information in both day-to-day production decisions and in the development of algorithms to create new process capability over the long term.
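One way to picture this per-meter traceability is a log keyed by meter position, from which the exact sequence of events acting on the wire just upstream of a fracture can be retrieved. The field names, window size, and simulated values below are assumptions for illustration:

```python
# Sketch of per-meter traceability: every parameter change is logged against
# the meter of wire it acted on, so the event history preceding a fracture can
# be recovered. Parameter names and values are hypothetical.

from collections import defaultdict

history = defaultdict(list)   # meter position -> list of (parameter, value)

def record(meter, parameter, value):
    history[meter].append((parameter, value))

# Simulated pass through the line, with a brief draw-speed excursion.
for m in range(100):
    record(m, "draw_speed", 5.0 + (0.3 if 40 <= m < 45 else 0.0))
    record(m, "die_temp", 180.0)

def events_at_fracture(meter, window=3):
    """Return every logged event on the meters just upstream of a fracture."""
    return {m: history[m] for m in range(max(0, meter - window), meter + 1)}

trace = events_at_fracture(43)
print(sorted(trace))
```

Searching such a log for "similar" event sequences across past fractures is what turns the data into a basis for changing the control algorithm.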
The systemwide structure of IMC permits a plant to be organized into virtual cells for problem solving. These cells can be either horizontal (i.e., logical groupings of different machines) or vertical (i.e., logical groupings of machines of the same kind). The actual configuration will be related to the point of view (i.e., product, process, or manufacturing) and the problem being solved rather than to the location of machines.
Consider the horizontal cell in Figure 2-4, in which the output of several dry wire-drawing machines is passed through furnaces to a chemical bath. If suboptimal procedures in the furnaces can be detected, that information can be fed forward to make compensating adjustments to parameters associated with the bath. Or consider the vertical cell in Figure 2-5, which shows a number of wet wire-drawing machines that produce strands that are wound into cable. Current practice with wet wire-drawing machines is to make the output as consistent as possible and wind strands together randomly. With IMC, it becomes possible to know the precise diameter of each strand from each machine and thus to wind strands selectively so as to produce cable of just the right gauge. At the dies themselves, process control today relies on heuristics rather than understanding. With sensors, information can be fed back within a single die, thus gaining another level of intelligence.
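The selective-winding idea in the vertical cell amounts to a small combinatorial search: given measured strand diameters, pick the combination whose summed cross-section best hits the target gauge. The diameters and the three-strand cable below are illustrative assumptions:

```python
# Sketch of selective strand winding: choose the set of strands whose combined
# cross-sectional area comes closest to the target cable gauge. All numbers
# are illustrative, not plant data.

from itertools import combinations
from math import pi

def cross_section(d):
    """Cross-sectional area of a strand of diameter d."""
    return pi * (d / 2) ** 2

# Measured diameters (mm) of candidate strands from seven wet-drawing machines.
strands = [1.02, 0.98, 1.00, 1.03, 0.97, 1.01, 0.99]
target = 3 * cross_section(1.00)   # target: three nominal 1.00 mm strands

best = min(combinations(range(len(strands)), 3),
           key=lambda idx: abs(sum(cross_section(strands[i]) for i in idx) - target))
picked = sorted(strands[i] for i in best)
print(picked)
```

Random winding would accept whatever three strands arrive next; the search above exploits the per-strand diameter data that IMC makes available.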
The IMC system described above is event-based, statistical in nature, and attention-focusing. It allows construction and subsequent refinement of models of production processes. These characteristics permit distinctions to be made between systematic and random events, detection of novel events and patterns of events, and evaluation of features of interest. A complete picture of events can be captured and compared to expectations and alternative procedures. Economic values can be assigned to these procedures, and relevant variations and experiments can be suggested.
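The distinction between systematic and random events can be sketched as a simple control-limit test; the three-sigma threshold and the data below are illustrative assumptions rather than the system's actual method:

```python
# Sketch of event classification: observations far outside the baseline's
# control limits are flagged as systematic; in-limit scatter is treated as
# random noise. Threshold and data are illustrative.

from statistics import mean, stdev

def classify_events(baseline, observations, k=3.0):
    """Label each observation 'systematic' if it lies more than k sample
    standard deviations from the baseline mean, else 'random'."""
    mu, sigma = mean(baseline), stdev(baseline)
    return ["systematic" if abs(x - mu) > k * sigma else "random"
            for x in observations]

baseline = [9.9, 10.1, 10.0, 9.8, 10.2, 10.0, 9.9, 10.1]   # in-control history
labels = classify_events(baseline, [10.1, 12.5, 9.9])
print(labels)
```

Only the systematic events warrant attention, which is how such a test focuses the operator on the few events worth investigating.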
Within eight months of developing an IMC system to control wire drawing on one line, fractures on that line were reduced fourfold. In addition, the firm expects to realize a tenfold increase in productivity. Because the architecture being used is general, many different implementations are possible.
IMC in the Chemicals Industry—Five Years from Today7
This case examines the future organization of a chemical plant that reflects the implementation of a host of existing or developing advanced process control technologies such as:
three-dimensional (3-D) computer-aided design (CAD) imaging with walk-through and a computer interface (X-window);
multiple, integrated views of project design data, including an integrated dynamic process model;
a standard interface that allows transfer of dynamic data between the plant control system and the CAD system;
expert systems that provide embedded explanations of design concepts and current states of control systems;
fiber optic sensing with embedded diagnostics;
chemometrics with neural computers providing on-line composition;
global access to process data; and
natural language translators.
These technologies provide the operator's only view of processes that take place entirely within pipes and tanks—the notion of operating from behind a wall. Hence, the case plays out entirely in the control room.
In this hypothetical chemical plant, operators no longer spend 15 or 20 minutes catching up at shift change. Everything they need to know is now in the control room log, which is integrated into the business and maintenance systems. An incoming operator can tell, for example, that the product the plant switched to last shift, Betafon-134, is right on target in terms of the forecast for orders, and that the Zexene column has fouled and will have to be cleaned, a condition that will warrant analysis during the operator's shift.
Interaction with the operating system has become much easier. A large, flat wall display has replaced the numerous cathode ray tubes that served systems with different user interfaces. The new display is used by a distributed control system and several host computer systems. It has a single user interface: a glove with which the operator can point at any part of the display, and a headset for giving instructions to the integrated control system.
The display uses 3-D CAD imaging instead of menus and faceplates and gives the operator a videolike view of the process.
To examine the fouled Zexene column, for example, the operator voices a command into the headset mike and an exploded view of the Zexene column appears in an overlay. After voicing another command that causes the column to flash in blue, confirming that it is the right one, the operator asks the expert system for current data. Pertinent information overlays the image.
Suddenly, the operator is alerted to a more immediate problem—the Topper distillation column is flashing yellow, with the rest of the process running normally. A high-level diagnostic indicates that the column's control system is no longer in the normal state. This multivariable controller, which usually regulates controls at both ends of the column to on-aim control, has diagnosed the failure of its tails analyzer. The sensor's on-line diagnostics indicate that the fiber-optic probe has failed, causing the control system to rely on its on-line model to predict composition. The system has already automatically sent a priority electronic mail message to the analyzer specialist. The operator asks the system to predict the cost of the analyzer outage and is told that the model-based control system is compensating well but significant degradation in quality is likely. The operator asks the system to analyze the cost of switching to another product while running under degraded control. The results show that a switch to Gammafon-39 will allow the model to predict the tails composition much more precisely. A check of the integrated production planning and scheduling system indicates that the switch will not disrupt customer shipments, although it will increase plant costs somewhat. The operator uses the special glove to select Gammafon-39 from the product menu, voices the command to make the change, and then dispatches an electronic mail message to the team leader.
Frequent problems attended the old analyzer's sample system. Because the system could not explain its changes, operators often went manual when it did something they did not understand. Often the plant ran for days with bad composition data without the operator being aware of the failure. The fiber-optic probes of the new chemometric sensor look at the process stream directly, eliminating the need for a sample system. The operator does not need to understand the calculations in the neural computer to be confident that the system will accurately predict product composition and diagnose any problems.
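The fallback behavior running through this scenario, trusting the sensor until its own diagnostics flag a failure and then substituting the on-line model's prediction, can be sketched as follows. The function names and the linear surrogate model are hypothetical:

```python
# Sketch of model-based fallback: prefer the direct chemometric measurement,
# but switch to the on-line model when the sensor's diagnostics report failure.
# The surrogate model and its inputs are illustrative assumptions.

def composition_estimate(sensor_reading, sensor_ok, model_inputs, model):
    """Return a composition estimate and the source it came from."""
    if sensor_ok:
        return sensor_reading, "sensor"
    return model(model_inputs), "model"

# Crude linear surrogate for the on-line composition model.
model = lambda inputs: 0.2 + 0.05 * inputs["reflux_ratio"]

# Failed fiber-optic probe: diagnostics say the reading cannot be trusted.
value, source = composition_estimate(None, False, {"reflux_ratio": 4.0}, model)
print(source)
```

The scenario's key point survives in the sketch: because the diagnostics, not the operator, decide when the reading is untrustworthy, the plant no longer runs for days on bad composition data.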
Another multivariable controller indicates a reduction in monomer recycle to the Step A reactor. The on-line expert system informs the operator that the control system has detected a subtle change in catalyst activity, necessitating a reduction in the recycle rate for a short time until the catalyst can recover.
The operator is interrupted by the plant support engineer, who has dropped in to review some preliminary information on a new project aimed at increasing plant capacity. On a corner of the wall display, the engineer brings up a 3-D CAD image of the upgrade design, which is a composite of a conceptual design prepared by the corporate engineering design division and some detail design from the local regional engineering office, both sent to the plant electronically. Additions to the process are highlighted in green, modifications in yellow, and the existing process in blue. The engineer pulls up a schematic of a new reactor on the screen, notes the calculated residence times, and sees the plots from a simulation done by a consultant. The operator observes that the feed to this reactor from the existing process is very erratic. The engineer, wondering what effect this would have on reactor effluent composition, obtains from the plant control system flow data from that part of the real process for the past two months and uses the data to drive a simulation embedded in the conceptual design. The results suggest a problem, and operator and engineer decide to send the conceptual design, together with the plant data, back to corporate engineering design.
The operator, gesturing with the glove and voicing commands to recall the Zexene column display, reviews various formats that suggest that increased plugging in the column may be related to reduced catalyst activity in the Step A reactor. The operator runs analyses of last year's plant data and of data from a similar plant in Japan and, finding a correlation in the Japanese data, sends an electronic mail message to the plant engineer.
The phone rings and the operator is soon involved in a conversation that is a mixture of English and Spanish. A Spanish swimwear manufacturer needs a new Betafon product for a new swimwear line that must meet some very tight specifications. The English-Spanish dialogue is made possible by a computer-based translator that provides each party with an interpretation of the conversation in the language of choice. Unable to provide an immediate solution, the operator calls an engineer at the Japanese plant and then calls the customer back. For several minutes, the operator (in the U.S.) and the engineer (in Japan) pan back and forth on the display screen, monitoring key variables throughout the process while the customer (in Spain) explains the problem. The customer sends the engineer a computer model—generated by a modeling package provided by the chemical company—describing the properties needed for the new line of swimwear and thanks the operator for making the connection with the Japanese engineer. In coming weeks, operator, engineer, and customer will become a tightly knit team as they work on the new Betafon product.
PRIORITIZED RESEARCH RECOMMENDATIONS
The panel believes that research in IMC must be directed at developing techniques for breaking down and refining knowledge as a foundation for building knowledge bases that are capable of adapting to change. Promoting synergy between people and machines is an essential part of this task. Research aimed at producing a world model for IMC should focus on high-level supervisory control that links both depth and breadth of knowledge. Research is also needed to develop data communication standards, sensor integration, and mechanisms for facilitating learning in an integrated environment. The necessary research must be jointly undertaken by industry and academe and must employ the factory as a laboratory, a theme shared with Chapter 3, Equipment Reliability and Maintenance.
In prioritizing its research recommendations, the panel concluded that productive areas of research in IMC lie at the cell level. In a world of dynamic product and process change, manufacturing must go beyond statistically controlling processes to building process capability.
To meet requirements for adapting intelligence to changing knowledge and organizational structures, knowledge bases must be developed that adapt to changing people, products, and external forces. Such development must be based on techniques for breaking down and refining knowledge. An information structure using these techniques, and operating in an uncertain, dynamic world cannot be built on machine intelligence alone; human–machine integration is essential. This synergy between people and machines must yield knowledge acquisition techniques that are capable of supporting rapid start-up.
A model that encompasses design, manufacturing, and management and can adapt to change also is needed, as is research on a variety of hybrid open-loop control systems.
The development of common standards for data communications, essential to the diffusion of IMC, implies creation of a specialized vocabulary for development and change. This approach to standardization must be technique-oriented and utilize standard physics, mathematics, business, and economics models.
Efforts at sensor integration should be guided by the need (1) to talk to the same world model as actuators, and (2) to integrate data, pattern recognition, and action models. Transparent algorithmic structures that facilitate ease of understanding and change and guide algorithmic development are very important for diffusion of the technology. Ancillary requirements include statistical control of process capability and an ability to reason from incomplete knowledge.
In addition, educational approaches must move away from training students and researchers, recognizing important work, and posing problems within specific disciplines. The laboratory—where theories are tested by experiments that control for potentially contaminating noise, the questions posed are narrow and well defined, and the results are unambiguous—is no longer sufficient as a locus of research. To make meaningful contributions today, research must reflect the union of engineering and manufacturing. It must recognize that the interfaces and interactions among processes have become as important as the processes themselves. Researchers must construct new methods of building knowledge and of unifying knowledge in different disciplines. The factory must become the laboratory because only in the factory can manufacturing be studied as a whole.
The context for research will demand close interaction between academe and industry. Research on systems that encompass entire factories cannot be done by academe alone and basic research in such areas would require a commitment of human resources beyond what most firms can afford. Interdisciplinary research, therefore, must be directed at this task.
This joint approach presents problems on both fronts. In academe, incentives for this kind of research are rare. Though the scope of such research is broad, it is field-based, and its development is tied to a particular site with its associated idiosyncrasies; consequently, it is not readily accepted and does not further an academic career. Furthermore, the track record for interdisciplinary research involving both engineering and management is not very encouraging.
Similar problems exist in industry. The factory is not viewed as a laboratory, and management of knowledge acquisition as an important, continuing activity is not part of the factory culture. Most U.S. factories operate incrementally, realizing only marginal improvements over the status quo. Their operation is not based on any vision of the future, let alone the vision presented here. While a firm may welcome a specific solution to a pressing problem, it is not likely to be interested in solving general problems of a basic nature in its factories (e.g., factory cost-accounting systems).8 Mechanisms are needed that encourage cooperation between academe and industry and can accommodate conflicting goals, such as scholarly publication versus proprietary considerations and the need to experiment with real processes in real factories. In addition, management must be made aware that the rich stream of information generated as a by-product of running its factories could be used to solve yield problems. The promise of IMC is to make the factory a more effective laboratory, capable of realizing both quantum and incremental improvements.
These barriers notwithstanding, industry perceives a need for training a new kind of engineer and for developing methods for managing learning and knowledge. Academe, in search of relevant initiatives, is preparing to meet the challenges of a broader playing field. Building intelligent systems that exploit person–machine synergies for learning in IMC is a significant challenge indeed.
In summary, research in IMC should aim at:
developing technique-oriented communication standards to facilitate the diffusion of IMC;
refining sensor technology in the areas of data integration, pattern recognition, and actionable models;
building knowledge bases of design, manufacturing, and management intelligence that can adapt to changing knowledge and organizational structures;
creating a dynamic world model of manufacturing;
identifying ways to utilize the human–machine interface to facilitate learning in an integrated environment; and
redefining its methods to accommodate holistic research in a production environment—the factory as laboratory.
MECHANISMS FOR DIFFUSION AND IMPLEMENTATION
Some of the larger Fortune 100 companies, in industries most threatened by foreign competition or in process industries that already use a high degree of closed-loop feedback control, may develop and build IMC systems independently. Such developments, however, will constitute isolated instances of technological proficiency that will diffuse only very slowly to the rest of the manufacturing world. (Witness the slow diffusion of robotics technology to the small-business community.) The panel believes that to move IMC to the larger manufacturing community, including enterprises with fewer than 100 people, rapid diffusion must be made an explicit focus of research on the architecture and development of the technology. Rapid diffusion can occur only if the building blocks become commodity-like elements that can easily be incorporated into the manufacturing system. In addition, the manufacturing community must have its eyes opened to the urgency of the competitive global challenge and to IMC's vital role in addressing issues that may not be amenable to purely technical solutions.
The slow diffusion of a similar technology, mechatronics (the integration of machine controls with electronics), suggests that its early developers paid little or no attention to this requirement. Machine builders used mechatronics to build sophisticated systems for large companies with specialized needs rather than general-purpose systems for the larger body of small users, for whom the infrastructure for effective and easy use of a technology is as important as the technology itself. IMC systems, by their very nature, integrate all of the process steps in a manufacturing plant and require deep knowledge of each of those steps. Any program that does not build the infrastructure for the development of intelligent systems alongside generic software for system integration will therefore fail.
It is very important that an effort be made to diffuse knowledge about IMC rapidly. A cadre of good researchers and research sites must be built that will promote effective collaboration between industry and academe. Missionary work is needed in building an infrastructure for diffusion and in emphasizing the importance of the problem.
Even more important are the cultivation of talent and the development of the necessary incentives for industry and academe. Scholarly publication of interdisciplinary research and effective peer review of such work are crucial to creating these incentives, as are research funding and matching funds from business. Still another need is the creation of incentives to promote the development of educational and training materials that will enable instructors in universities, community colleges, and technical institutes to further diffuse the requisite knowledge to appropriate users.
Consider the application of the personal computer to manufacturing problems. In less than 10 years, the personal computer has profoundly influenced the way many manufacturing-related processes are performed. The rapid acceptance of this technology is due not only to dramatic improvements in price/performance, but also, and more importantly, to the ease with which the average person can use the personal computer to solve problems and increase his or her productivity. The personal computer will surely become one of the building blocks of the future IMC system; to some extent, it already has. Other such standardized solutions must be found.
The panel recognizes that real-world management cannot abruptly move into the world of IMC—that move will have to be economically justified and made incrementally. IMC cannot simply be dropped into place in the manufacturing world. Its adoption is an evolutionary process that will have to be engineered to suit different environments. In a world economy, this is a vital process.
1. Bohn, R. and R. Jaikumar. 1989. The Dynamic Approach: An Alternative Paradigm for Operations Management. Harvard Business School Working Paper No. 88011. Boston, Mass.: Revised August 1989.
2. There is a special reason for concentrating on the statistical aspects when introducing a program of better quality at lower cost in a going operation. They are more tangible than other quality control aspects and can be presented in a more interesting and appealing manner. The preparation of a list of trouble spots converted to costs per unit period, and plotted as one would a curve of cumulative wealth, will point out the operations where X̄ (average) and R (range) charts should first be applied. (Juran, J.M. 1962. Quality Control Handbook. 2d ed. New York, N.Y.: McGraw-Hill Book Company.)
3. The project was abandoned due to lack of continuing funding. ITI worked with the NASA-sponsored Center for Autonomous Man-controlled Robotic and Sensing Systems, located at the Environmental Research Institute of Michigan. For further information, contact Dr. Robert J. Bieringer, Manager, Sensors and Control Systems Engineering, Industrial Technology Institute, Ann Arbor, Michigan.
4. Jaikumar, R. and R. Bohn. 1986. The Development of Intelligent Systems for Industrial Use: A Conceptual Framework. Research on Technological Innovation, Management, and Policy. 3:169-211. Boston, Mass.: JAI Press, Inc.
5. Jaikumar, R. 1986. Postindustrial Manufacturing. Harvard Business Review (November-December) 69-76. Reprint No. 86606.
6. Clark, K., R. Henderson, and R. Jaikumar. 1989. A Perspective on Computer Integrated Manufacturing Tools. Harvard Business School Working Paper No. 88-048. Boston, Mass.: Revised January 1989.
7. This scenario was adapted from material provided by E.I. du Pont de Nemours & Company and is used with that company's permission.
8. Jaikumar, R. 1990. An architecture for a process control costing system. Chapter 7 in Measures for Manufacturing Excellence, R. S. Kaplan, ed. Boston, Mass.: Harvard Business School Press. 193-222.