Suggested Citation:"3 Models Applied to Staffing." National Academies of Sciences, Engineering, and Medicine. 2020. Facilities Staffing Requirements for the Veterans Health Administration–Resource Planning and Methodology for the Future. Washington, DC: The National Academies Press. doi: 10.17226/25454.

3

Models Applied to Staffing

While there are a considerable number of definitions of “staffing,” their central ideas and scope are captured in the Staffing Organizations definition: “Staffing is the process of acquiring, deploying, and retaining a workforce of sufficient quantity and quality to create positive impacts on the organization’s effectiveness” (Heneman and Judge, 2006). To accomplish those ends, organizations typically require models that assess current staffing requirements and current staffing levels and, ideally, predict future staffing needs. This chapter reviews the basic characteristics of models in general and staffing models in particular, and provides a general checklist of features to evaluate when choosing among staffing models. In addition, staffing (or “manpower”) modeling and planning often occur in the context of a broader staffing strategy (Edwards, 2003); the application of models in this context is discussed as well. In this chapter, the committee drew heavily on two reports of the National Academies of Sciences, Engineering, and Medicine1 for the Federal Aviation Administration (FAA)—Staffing Standards for Aviation Safety Inspectors (NRC, 2007; hereafter the ASI report) and Assessment of Staffing Needs of Systems Specialists in Aviation (NRC, 2013; hereafter the ATSS report). The latter report examined staffing requirements for airway transportation systems specialists (ATSSs), who are responsible for FAA facilities maintenance.

MODELS

The Model Thinker: What You Need to Know to Make Data Work for You (Page, 2018) says that models have three primary characteristics. First, they simplify the real world. By removing details, aggregating across individual observations and cases, and eliminating variables that are unnecessary or have small impacts on the outcomes, models enable us to make accurate decisions about complex, multivariate processes. Second, models formalize. Models force us to more precisely define our goals, resources, and processes—typically using mathematical definitions—even when those definitions represent relatively fuzzy constructs such as human beliefs. Third, Page points out that no model exactly captures all of the features and processes of the domain being modeled. A good model comes with a list of assumptions and disclaimers—preferably sets of specific limitations as to accuracy and believability, sets of specific situations in which the model can produce misleading results (verification and validation), and some general guidance as to the costs and benefits of various model options and decisions. A model that prescribes staffing levels as a function of facility characteristics and stops there … is not making a refutable prediction. Models that relate staffing levels to performance measures and predict the risk of adverse events can be tested against levels of performance that are actually observed, and differences between the predictions and observations are the equivalent of admonitions regarding model accuracy. Assessment of model prediction against field results is essential.

___________________

1 Publications of the National Academies of Sciences, Engineering, and Medicine prior to July 1, 2015, were authored by the National Research Council (NRC).

In this section, the committee has included a list of primary model features and a set of characteristics that are generally desirable in any applied model. This section also includes several warnings about potential pitfalls in the development and use of models for staffing. The first and second characteristics of models can provide decision makers and stakeholders valuable and even unique benefits, and the mere pursuit of a good model can have positive organizational outcomes of its own. The ASI report cited above said that although “neither modeling per se nor any approach relying exclusively on manpower management tools can ensure optimal staffing” (NRC, 2007), models have the potential to provide critical information about the resources needed to meet organizational goals and to provide information to guide resource allocation decisions. The ASI report listed six distinguishing features of models, as follows (NRC, 2007):

  1. Models are representations of processes or systems. In general, representations of processes tend to be more straightforward, often less complex and more concise, and often more deterministic than representations of systems. Models of systems can be sufficiently complex to describe or predict emergent behaviors but also can be more difficult and expensive to develop. The ATSS report emphasized the need to identify the essential features of the situation captured in the model. A review of these essential features will guide the choice to pursue a model or models of specific processes or a broader model of the target system.
  2. Models are typically either descriptive or predictive. Per the ASI report: “Descriptive models typically document the structure and processes of a system, but they do not add a computational component to enable predictions about system behavior as a function of system design” (NRC, 2007, p. 30). This distinction is somewhat artificial. Good “descriptive” models can precisely characterize the current state of staffing in the target positions and functions relative to the actual needs in those positions and functions. If that descriptive model is well constructed, potential future changes can be used as inputs to infer potential future needs. Likewise, a good predictive model will include, typically as a way of documenting the computational components, accurate descriptions of the structures and processes that make up the model and of how those descriptions were operationalized mathematically.

    In general, models constructed with the goal of predicting future staffing needs and possible over- and understaffing, and the consequences of each for organizational outcomes, are more likely to reach the desirable level of specificity and accuracy, primarily because predictions often require those levels of specificity and accuracy.

  3. Models can be stochastic or deterministic. Models can either incorporate stochastic (i.e., probabilistic) elements or be deterministic (i.e., the input variables are treated as fully known and the model outputs are fully determined by those inputs and the model processes). Every complex system typically has stochastic elements. For most real-world systems, workable models contain a mixture of stochastic and deterministic parts. The essential choice for those searching for a useful model is to determine which elements must be represented stochastically (for accuracy of outputs and the ensuing decisions) and which elements can (or must) be represented deterministically in order to get a working model at a reasonable cost. This is one of the many decision points in which accuracy, utility (cost/benefit), and communicability of the model must be balanced.

    The ASI report also added that an advantage of a stochastic model is that it “allows one to better assess risk in the staffing decisions that are made” (NRC, 2007, p. 31). The committee heard multiple speakers mention the necessity for risk assessment in managing trade-offs among staffing decisions. This type of risk assessment, while desirable, also may come with considerable cost and complexity. Moreover, such a risk assessment may be practically feasible in some areas but not in others. The factors that determine risk are based on the causal chain between the input variables and the outcome variables—in this case, patient outcomes. Linking a variable of interest (e.g., maintenance staffing levels) and relatively distal outcomes that have multiple and potentially complex causes (e.g., patient incidents) can be very difficult. This issue is discussed in more detail in the sections “Desirable Model Characteristics” and “Staffing Model Outputs” below. In general, stochastic models are more complex (both in their input variables and in their algorithms) and therefore are likely to be more expensive to develop and maintain.

  4. Models depend on the quality of their input data. The colloquial expression “garbage in, garbage out” succinctly captures the importance of high-quality data for inputs into any model. The choice of data sources, and the costs and benefits of obtaining various types of input data, is a critical decision. Some would argue that it is the most critical decision for developing or choosing a model. Moreover, the choice of input data is not independent of the choice of model. Data inputs and their level of accuracy and specificity often interact in complex ways with the model processes and algorithms. Another of the primary difficulties in making this decision is the broad range of choices: everything from apparently simple, relatively easy-to-obtain information such as raw square feet to very cost-intensive methods such as job and task analysis, time and motion studies, and so on. Given the criticality of the input data, any proposed staffing model should have a complete explication of the necessary data, how they will be obtained, and a discussion of the data quality in terms of reliability, validity, and utility. This topic is discussed in greater detail below, in the section “Critical Characteristics of Input Data.”
  5. Models may be either decision support tools or summative evaluation tools. Decision support tools, as their name implies, are used in a staffing context to provide information about the likely consequences of various potential staffing decisions. Summative evaluation tools “tell the user how well the proposed system is going to achieve the specified goals” (NRC, 2007, p. 32). As the name implies, the primary goal of summative evaluation tools is to evaluate the performance of current organizational practices. This can be as much a diagnostic tool as a planning tool. Decision support, on the other hand, is more future-focused and anticipates the need for changes in organizational practices. Furthermore, summative evaluation is sometimes seen as an effort to characterize the effects of organizational practices on distal or ultimate organizational goals. When the causal chain linking model inputs (e.g., staffing levels) to outputs is long or complex, developing a valid model is likely to be difficult and expensive (again, see the discussions on risk below).
  6. Models may be either allocation models or sufficiency models. Allocation models distribute resources equitably … “irrespective of their collective adequacy” (NRC, 2007, p. 32), and the implication is that allocation models are appropriate when the resource stream is relatively fixed and, therefore, relatively independent of the outputs of any staffing model. Sufficiency models are designed to predict the resources needed to operate the organization at some defined target level of performance. This assumes that (a) a specific target level of performance can be defined, and (b) the staffing model can predict levels of performance. Developing requirements models that also predict performance can be challenging.

The ASI report argued strongly that a chosen model “should provide an estimate of the workload that can be accomplished with any given level of staffing” and that the demand side of the model should estimate staffing for a “satisfactory level of performance” (NRC, 2007, p. 42). Because sufficiency models can predict work performance levels given available staffing, and can estimate how much work will not be accomplished in understaffed situations, these capabilities argue for the greater usefulness of a sufficiency model.

The decision to pursue an allocation model or a sufficiency model is one of several primary decisions needed early in the model choice process. Interestingly, the ASI report commented: “To date, apart from one aborted effort, the staffing models … have been exclusively of the allocation variety” (NRC, 2007, p. 32). The authors suggested that a staffing model should serve both functions: “it should be able to estimate aggregate staffing demand, provide estimates regarding the consequences of alternative levels of staffing, and help guide the allocation of resources across functions, regions, and offices” (NRC, 2007, p. 40).

These comments support the view that the allocation versus sufficiency versus hybrid/both decision is critical in determining how to proceed in developing a model, especially a staffing model. In the staffing context, this is directly related to the distinctions among the required level of a workforce to meet a defined level of performance, the funded level of a workforce, and the current (“filled”) level of a workforce.
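The core computation of a sufficiency model can be made concrete with a small simulation. The sketch below, in Python, uses entirely hypothetical numbers (the function name, the hours-per-FTE figure, and the normal demand distribution are all illustrative assumptions, not part of any VHA model): given a staffing level, it estimates the expected share of demanded work that would go unaccomplished.

```python
import random

def simulate_backlog(staff_fte, mean_demand_hours, sd_demand_hours,
                     hours_per_fte=1800, n_trials=10_000, seed=42):
    """Monte Carlo estimate of the expected fraction of demanded work
    left unaccomplished at a given staffing level. Annual workload is
    drawn from a (truncated) normal distribution."""
    rng = random.Random(seed)
    capacity = staff_fte * hours_per_fte
    shortfall = 0.0
    for _ in range(n_trials):
        demand = max(0.0, rng.gauss(mean_demand_hours, sd_demand_hours))
        if demand > 0:
            shortfall += max(0.0, demand - capacity) / demand
    return shortfall / n_trials

# Hypothetical facility: demand averages 20,000 hours/year (sd 3,000);
# sweep staffing levels to see how the expected backlog share falls
for fte in (10, 11, 12):
    print(fte, round(simulate_backlog(fte, 20_000, 3_000), 3))
```

Sweeping the staffing level and reading off the resulting shortfall curve is the kind of output that supports both sufficiency judgments and allocation trade-offs, which is why a hybrid model can serve both functions.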


Desirable Model Characteristics

Previous National Academies reports on staffing models have listed several characteristics generally deemed desirable for useful models. These characteristics, not necessarily in order of importance, include transparency, scalability, usability, relevance, validity, adaptability, and validation and verification capability. The committee would add utility—Can the model’s inputs be measured accurately at a reasonable cost and are the model’s outputs in ranges that can realistically be implemented? The committee would also add communicability—Can the model be communicated in sufficient detail and in such a way that stakeholders can reasonably be expected to understand and have confidence in the gist of the model? These characteristics are discussed in more detail below (see Table 3.1). This list of factors could be considered as a “preflight checklist”—that is, each of these factors should be carefully considered when developing or choosing a model.

Validity

Clearly the most critical model characteristic is its accuracy. In this case, accuracy means: Does the model correctly characterize the levels and characteristics of the input variables needed to produce the desired organizational outcomes? For example, in workforce staffing, validity means that the model correctly yields workforce size and composition levels for the desired level of organizational performance. However, two points should be made: (1) The validity of a model is defined within a specific context—a model may be valid for one purpose but not valid for another. Choosing a valid model depends not only on the qualities of the model but also on how clearly and precisely the organizational goals associated with the modeling effort are defined. (2) The reliability and the validity of the input data are inextricably mixed with the quality and accuracy of the model itself.

TABLE 3.1 A Checklist for Evaluating Staffing Models

  1. Primary design/goal choices:
    1. Primarily descriptive or predictive?

      Predictive models are generally more expensive but have higher long-run utility.

    2. Stochastic, deterministic, or hybrid?

      Stochastic models are more difficult to set up and more difficult to communicate to stakeholders but more accurately represent real staffing processes.

    3. Source of model inputs?

      More “micro” levels of analysis are more expensive, but all inputs should be closely vetted—early investment in input quality will pay off.

    4. Decision support tool or summative evaluation tool?

      Costs are roughly equivalent—this depends on anticipated use; however, a decision support tool implies a longer timeline/forecast horizon, which will likely increase model complexity and therefore cost.

    5. Allocation model or sufficiency model?

      Sufficiency models are generally preferable but likely more expensive and difficult to develop.
  2. Assess the model for desirable characteristics:
    1. Validity
    2. Relevance
    3. Usability
    4. Scalability
    5. Adaptability
    6. Utility
    7. Validation and verification capability
    8. Transparency
    9. Communicability
  3. Examine the proposed input variables for:
    1. Reliability
    2. Validity
    3. Sufficiency
    4. Contamination
  4. Examine the proposed model for:
    1. Mathematical adequacy
    2. Realism
    3. Communicability
  5. Examine the proposed output variables for:
    1. Utility
    2. Usability
    3. Transparency
    4. Communicability

First, even agreeing on the definition of the ultimate criterion can be a decidedly problematic task. For example, often it is heavily multivariate. In a hospital setting, patient outcomes can be characterized in a variety of ways and may go beyond patient safety or quality of care to include clinical patient experience, timeliness of care, the experience of the patient’s family, and so on. For example, the Government Accountability Office report on oversight of Veterans Affairs Medical Center (VAMC) facility conditions, while referencing Joint Commission goals such as “managing utility systems to ensure operational reliability” and “minimizing fire hazards,” listed outcomes such as stained ceiling tiles and “scrapes not patched/painted” in its list of top five most often identified “condition deficiencies.”

Second, measurement of these variables, which would be critical to an “ultimate criterion model,” is a nontrivial task. A well-known finding in the industrial-organizational psychology literature is that performance measures are often highly affected by “criterion contamination” and “criterion deficiency.” These issues in measurement are discussed in more detail below, in the section “Staffing Model Outputs.”

Third, many of the variables that directly affect patient outcomes are typically only distantly affected by maintenance staffing. Modeling of complex cause and effect links, especially those with many intervening steps, can often introduce errors into predictive models. Even small errors can sometimes propagate through the many links to create large errors. It may be possible to include facility performance in a larger model that includes clinical and other inputs that can predict patient outcomes.

On the other hand, in the health-care domain there have been a number of efforts to relate patient outcomes to staffing, largely in the clinical staffing area: intensive care unit (ICU) physicians, nurses, and so on. These studies are valuable guides to some of the difficulties inherent in connecting staffing to patient outcomes, and the lessons they teach should be considered in facilities maintenance staffing. For example, Wang et al. (2013) attempted to estimate the number of patient incidents caused by omissions in medical equipment maintenance. The maintenance-related categories examined included “the physical environment,” “general safety,” “fire safety,” “utilities management,” and so on. Part of the authors’ challenge in this study was the difficulty of isolating medical equipment maintenance failures from other types of failures (“some assumptions need to be made to obtain ballpark estimates for events caused by equipment maintenance omissions”—note the qualifiers). Interestingly, Wang et al. also attempted to characterize such incidents in Six Sigma terms (defects per million opportunities [DPMO]; also citing the 3.4 DPMO goal), a characterization that was mentioned in committee discussions. However, this characterization was based on the analogy of treating each patient interaction with a medical device as an opportunity for a defect, and even that required several assumptions to generate a “ballpark estimate” (to use their term).
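For readers unfamiliar with the Six Sigma convention, the DPMO arithmetic itself is simple; the difficulty Wang et al. faced lies in defining and counting the “opportunities.” A minimal sketch with made-up counts (the incident and interaction figures below are invented for illustration, not drawn from the study):

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities, the Six Sigma convention."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Hypothetical: 3 maintenance-linked incidents across 50,000 patient-device
# interactions, each interaction counted as one opportunity for a defect
print(round(dpmo(3, 50_000, 1), 1))  # 60.0 DPMO, far above the 3.4 DPMO goal
```

Every term in the denominator rests on a modeling choice (what counts as a unit, what counts as an opportunity), which is why such figures remain ballpark estimates.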

Utility

Utility, as it is used in the staffing modeling literature, refers to the value of an organizational system or process—in this case, workforce planning—to the goals of the organization. It is essentially the “bang for the buck” yield of any organizational system. To paraphrase the classical definition, the utility of a model is the degree to which an organizational system (e.g., an operational staffing model) improves the performance of an organization (per unit cost) beyond what would have occurred if that model had not been used. As such, it is incumbent upon the organization to try to choose and implement the model with the highest utility.


Relevance

The ASI report defined relevance as the extent to which a model includes “the issues for which it is designed” and excludes unnecessary and “marginally relevant” factors (NRC, 2007). Relevance can be seen as the intersection of validity and utility. Relevance can also be seen as a characteristic in which the model is as efficient as possible while still providing the information needed for real-world organizational decisions and providing that information at the optimally usable level of detail.

Determining the marginally relevant factors can be challenging with some modeling techniques. Typically, they are identified in an interactive and iterative process that examines the relationships between the input variables and indices of validity of the output variables. But relevance analysis is necessary to produce a parsimonious and cost-effective model.

Usability

Usability refers to a broad range of issues in the practical implementation of any model. Usability includes the ability of organizational personnel to use the systems and procedures demanded by the model inputs (e.g., data-gathering systems) and the model operations (e.g., model parameter specifications, model option selection, etc.). This includes being able to use those systems and procedures with the minimal necessary training (i.e., a minimal learning curve to accomplish the relevant tasks). Usability may be encapsulated in a short checklist: Can the input data be obtained? Can the model be implemented and run? Can the outputs be understood and applied to the problem for which it was designed?

Scalability

Scalability, as used here, generally refers to the ability of the model to adapt to the range of VHA facilities and functions for which it is predicting workforce levels. In the facilities maintenance domain, there may be a broad range of facilities services that the organization must supply to customers. The committee heard from several speakers about the broad range of facilities (see the facilities complexity index discussed in Chapter 4) and the increasingly broad range of services that VHA must supply to veterans.

A good model will be able to handle these wide variations in the magnitudes of inputs and still yield valid outputs, even at the extremes/tails of the distributions. It is also in the area of scalability that nonlinearities in the relationships may occur and may need to be accommodated in the model.

Adaptability

Adaptability is similar to scalability in the sense that it refers to the capability of the model to handle nonstandard situations. The two differ in that scalability typically has a quantitative focus—Can the model handle unusual ranges of inputs?—while adaptability typically refers to qualitative changes across situations. Specifically, adaptability is used here to refer to the ability of the model to handle variances: characteristics peculiar to a single facility, service, or client base (or to only a few), arising from mission, environmental, or technology differences that are not captured by the core modeling technique. A good model will demonstrate a high tolerance for outliers in the input variables and will have the capability to incorporate variances into the modeling processes without a serious loss of validity.

A critical aspect of adaptability is the ability of model inputs to incorporate or act as proxies for unanticipated input variables. In the health-care domain, changes in patient demographics, changes in the types of services provided, and advances in medical technology may all affect model outcomes (in this case, predicted staffing levels). An adaptable model has inputs or elements that can incorporate these changes or new variables without the need for major changes in the model.


Validation and Verification Capability

Another desirable characteristic in planning models is a set of capabilities or features that facilitate validation and verification. Carson et al. (2002) define verification as the process in which the model developer runs the model to find and fix errors, and validation as the joint process among the model developer, model customer, and other stakeholders to ensure that the model “represents the real system … to a sufficient level of accuracy” (p. 52). Validation is primarily the comparison of the model’s outputs and predictions against actual results. While validation of the model’s outputs against the ultimate criterion of patient outcomes would be ideal, it may not be feasible or practical, as noted above.

This last feature is important largely because of the human tendency toward overconfidence (see Johnson and Fowler, 2011), especially when metrics are available. The validation process, as a joint discussion among all the stakeholders, will likely help to appropriately calibrate the stakeholders’ confidence in the model outputs. In addition, the ATSS report suggested using indicators of staffing sufficiency such as the use of overtime, work backlogs, and the use of shortcuts. Indicators such as these may be very useful in the validation and verification process (NRC, 2013).

Transparency

Transparency, as used here, refers to the ability to accurately portray the model’s internal processes to interested stakeholders. The point of transparency is to engender trust in the workforce planning process by making the modeling processes sufficiently clear so that stakeholders have confidence in those processes. In essence, transparency supports perceptions of procedural justice in VHA staffing decisions.

This feature of a good model is also important because most organizational models do not operate in isolation. Rather, they are often part of organizational change/organizational development processes (discussed in more detail in Chapter 5). As such, acceptance of the changes associated with the model is critical to its actual implementation and its use in the workforce, especially when that implementation has effects over long periods and on substantial numbers of employees. Staffing models certainly fit in this category. Unfortunately, some modeling techniques, especially those associated with machine learning approaches, can produce algorithms whose inner workings are very difficult to interpret or decode. They may produce usable, even impressive, results, but how those results are derived can be a “black box.” Even well-known techniques such as multiple linear regression (perhaps one of the earliest forms of machine learning) can have such issues. In multiple regression models, several predictor variables may themselves be correlated. This can lead to the statistical anomaly of the coefficient of one predictor being negative because part of its contribution was already covered by a more powerful predictor, which can mask or mislead about the actual nature of the relationships. While this can sometimes be remedied with larger samples and re-estimation, model builders with relatively small or fixed sample sizes should be alert for this possibility.
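The sign-flip anomaly is easy to reproduce. In the sketch below (all numbers invented for illustration), a predictor that correlates strongly and positively with the outcome nevertheless receives a negative regression weight once a near-duplicate predictor is included:

```python
# All numbers are invented. x2 tracks x1 closely (a near-duplicate predictor);
# y is constructed so its exact no-intercept weights are +1.5 on x1, -0.5 on x2.
x1 = [0, 1, 2, 3, 4, 5]
wobble = [0.1, -0.1, 0.1, -0.1, 0.1, -0.1]
x2 = [a + w for a, w in zip(x1, wobble)]
y = [1.5 * a - 0.5 * b for a, b in zip(x1, x2)]

def pearson(u, v):
    """Pearson correlation of two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = (sum((a - mu) ** 2 for a in u) * sum((b - mv) ** 2 for b in v)) ** 0.5
    return num / den

# Least-squares fit (no intercept) via the 2x2 normal equations
S11 = sum(a * a for a in x1)
S22 = sum(b * b for b in x2)
S12 = sum(a * b for a, b in zip(x1, x2))
S1y = sum(a * c for a, c in zip(x1, y))
S2y = sum(b * c for b, c in zip(x2, y))
det = S11 * S22 - S12 ** 2
b1 = (S1y * S22 - S2y * S12) / det
b2 = (S2y * S11 - S1y * S12) / det

print(round(pearson(x2, y), 3))    # pairwise, x2 looks strongly beneficial
print(round(b1, 2), round(b2, 2))  # yet its multiple-regression weight is negative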

“White box” models—those with understandable, transparent (and communicable) internal mechanisms, such as discrete event simulation models—are desirable for transparency. Historically, most staffing models probably qualify as white box models.

Communicability

Communicability is, as the term implies, the extent to which the validity and details of the operation of the model inputs, processes, and outputs can be communicated to stakeholders. While communicability supports transparency, the two are distinct desirable characteristics. Communicability is the broader characteristic: the ability to provide stakeholders with an accurate, if simplified, mental model of the planning system. It removes or minimizes perceptions of workforce planning models as black boxes, and the critical issue is providing information in such a way that it optimizes subsequent decisions.

For example, it is likely that, explicitly or implicitly, the model will communicate levels of risk to the decision makers and stakeholders. While the research on communicating the outputs of complex models to stakeholders is not fully mature, there is sufficient evidence to conclude that how and in what forms the model outputs are given can have a substantive effect on perceptions and decisions. The risk communication literature indicates that issues such as absolute versus relative risk reduction, individual versus cumulative probabilities, context and framing, and other factors can alter how model outputs are understood and used. A good model will have a clear and well-defined set of outputs that will improve rather than bias decision making.

Critical Characteristics of Input Data

Given the criticality of the input data, a primary focus of any evaluation of proposed staffing models is the evaluation of the input data. These data must simultaneously be of sufficient quality to produce reliable, accurate outputs and have a practical cost/benefit ratio (in time, money, and less tangible factors such as the degree of disruption to operations during data collection). A variety of types of data have been used or could potentially be used as staffing model inputs, but all types must have the primary characteristics discussed below.

First, it should be noted that input variables are measurements—these measurements quantify the model elements that drive the model predictions. The essence of good measurement requires that the assigned numerical values correctly represent the relations among the phenomena being measured. To preserve those relations, any measurement must have two interrelated characteristics: reliability and validity. These terms as used here have specific mathematical definitions, but they may have slightly different, sometimes less precise, definitions in other areas and domains. Reliability, as used here, refers to consistency in measurement operations across observations. This is the “rubber ruler” problem: variability in the observed variable due to variability in the measurement process itself, not in the thing being measured. Validity, as used here, means accuracy. Do the numbers obtained precisely reflect the phenomena being observed (and do the numbers correctly tell us about the relations among those real-world phenomena)? Validity is also the correlation between the observed value and the true value of a measurement.

A very important point to note is that reliability puts an upper bound on validity. Inconsistencies in the operational measurement process will invariably reduce accuracy. This relationship is often described in the psychometrics literature as r_xy = ρ√r_xx, where r_xy is the operational (observed) validity of the measurement, ρ is the theoretical maximum validity of the measurement, and r_xx is the reliability of the measurement.
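This correction-for-attenuation relationship can be sketched numerically; the validity and reliability values below are hypothetical:

```python
import math

def attenuated_validity(max_validity: float, reliability: float) -> float:
    """Observed validity r_xy given the theoretical maximum validity (rho)
    and the measurement reliability r_xx: r_xy = rho * sqrt(r_xx)."""
    return max_validity * math.sqrt(reliability)

# A measure whose theoretical maximum validity is 0.60:
print(attenuated_validity(0.60, 1.00))  # perfect reliability: 0.6
print(attenuated_validity(0.60, 0.64))  # reliability 0.64: 0.48
```

Even a moderately unreliable measure (r_xx = 0.64) cuts the usable validity from 0.60 to 0.48, before any other source of error is considered.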

The practical gist of this relationship is that every potential input into a model must be evaluated for its quality in terms of both reliability and validity. Even relatively straightforward variables, such as building gross square footage, should be examined carefully for adequate measurement characteristics before use. For example, recorded total square footage might include outbuildings housing backup generators, sewage lift stations, and so on at one facility, while another facility might include only primary care buildings in its total square footage. This inconsistency in the operationalization of a measured variable directly reduces the validity of the model input and thereby the validity of the model output and any conclusions or decisions based on that output.

As an example, the International Facilities Management Association (IFMA, 2013) benchmarking report defined gross square feet (specified as exterior gross area, but denoted as GSF) as follows: “Exterior gross area is defined as the area of the floor measured to the outside face of the walls that enclose the floor(s) of the building.” The report also included this comment: “Numbers reported vary greatly based upon the types of services and supporting departments located within the reporting health care organization” (IFMA, 2013, p. 19). They also asked respondents for operating suite, parking structure, and data center square footage and noted that all of these were self-report measures.

Related to this issue is the “hidden variable” problem. For example, facility age may be a useful variable. But it may hide an important moderator of the target relationship (e.g., facility age to staffing requirements). In two facilities with the same average age, one may consist of two buildings of 25 years each, whereas the other facility may consist of a dozen buildings with an age range of 75 years. That variability in building age might be a critical input variable, hidden by using average age.
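A minimal numeric illustration of the hidden-variable problem, using made-up building ages:

```python
from statistics import mean, pstdev

# Two hypothetical facilities with the same average building age (25 years):
facility_a = [25, 25]                                       # two buildings
facility_b = [1, 4, 7, 10, 14, 18, 22, 27, 33, 41, 48, 75]  # a dozen buildings

# The means are identical, but the age variability (potentially the
# critical input) differs dramatically:
print(mean(facility_a), pstdev(facility_a))
print(mean(facility_b), round(pstdev(facility_b), 1))
```

A model fed only the 25-year average would treat these two facilities as identical, hiding the moderator entirely.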

Very high correlations between measured variables such as square footage and current staffing levels may appear to imply that the input variables are adequately reliable and valid. However, if those variables were used in whole or in part to determine staffing levels in the past, without an explicit examination of their validity, the subsequent squared multiple correlation values could be very high even with relatively invalid input variables. In other words, if there has been a previous causal link between the input variables and the dependent variable, this can artificially inflate the apparent relationship. Again, the input data should be examined carefully for this possibility.

Utility of the input data, while it does not have the same level of precise definition, is also of great practical importance. When evaluating the usefulness of a model, the cost, time, and impact of the measurement process on the stakeholders must be considered. Again, to use an apparently straightforward variable, accurate building square footage information, operationalized in a consistent way, may sometimes be expensive and time-consuming to collect. Historic records may not be accurate or may not have been kept up to date, exact operational definitions may vary from facility to facility (as above), and so on. The best practice here is that close examination of every input variable is at least as important as the algorithms used in the model.

It should also be noted that many of the measures used as inputs or parameters into staffing models are essentially measurements of human performance by proxy. For example, Wang et al.’s (2012) study of measures of hospital clinical engineering staff productivity included not only the traditional “time worked/chargeable hours” measures but also measures such as number of operating beds, number of patient days and discharges, number of pieces of medical equipment, number of work orders (scheduled and unscheduled), and so on. While this study found reasonable correlations between these indices and clinical engineering full-time equivalents (FTEs; average r = 0.80), they also listed a series of shortcomings for these “objective” measures.

The intent in this section is not to present a negative or pessimistic view of the use of models. Rather, the intent is to provide a checklist or set of items to be closely examined when developing a model. Careful scrutiny of these items early in the process will yield a more accurate and more useful model.

MODELS APPLIED TO STAFFING

Context

Staffing models typically operate in the context of a staffing strategy (Bechet, 2008). The ATSS report noted that its task fell largely in step 2 of the Office of Personnel Management’s (OPM’s) workforce planning cycle (see Figure 3.1): “Analyze workforce, identify skill gaps, and conduct workforce analysis” (NRC, 2013). This model ideally will also aid in steps 3 and 5 in the OPM strategic staffing cycle: “Develop an action plan” and “Monitor, evaluate, and revise,” respectively. Likewise, Heneman and Judge (2006) list nine strategic staffing decisions as part of developing a staffing strategy. Of those, several are relevant to choosing a staffing model including staffing to needs (“lag”) versus staffing to goals (“lead”), core workforce versus contract labor, focus on current facilities (“attract”) versus focus on new facilities (“relocate”), and of course, “overstaff versus understaff.” Several other decisions from this list may be important to choosing a staffing model in the near future. These might include hiring versus retraining (especially in the case of technologically changing jobs) and “national versus global.” In a U.S. organization, this might translate to “How broad is our target labor market?” for the supply portion of a staffing strategy.

Model Complexity

A review of the literature on employment staffing models found a wide range of published work. In general, this literature was notable for its bimodal nature. There were a substantial number of general works on staffing organizations with good, albeit very general, advice about staffing (e.g., each unit should compare forecasts of the needed headcount to forecasts of workforce availabilities, identify goals before choosing metrics, etc.). Some adapted general business practices to staffing—for example, using supply chain management as a model for staffing (Cappelli, 2009). At the other end of the spectrum were very sophisticated mathematical models. These were primarily from the operations research literature—for example, Van den Bergh et al. (2013) describe an extensive list of models applied to staff scheduling. These models ranged from relatively simple regression analyses to sophisticated optimization approaches such as mixed-integer linear programming, genetic algorithms, discrete-event simulations, and many others. However, De Bruecker et al. (2015, p. 1) noted that “technical research regarding workforce planning usually focuses on the mathematical model and neglects the real-life implications of the simplifications that were needed for the model to perform well.”

FIGURE 3.1 Office of Personnel Management workforce planning model. SOURCE: National Research Council (2013).

Several speakers to the committee made the point that there is a “sweet spot” in terms of mathematical sophistication and complexity in staffing models (multivariate regression was used as an example more than once). For example, Cruz and Guarín (2016) used the number of biomedical devices, technology management hours, and number of patient discharges (weighted by patient case complexity) in a multivariate regression model to account for 74 percent of the variance in FTEs in clinical engineering departments. The mathematical complexity of the model must be matched to the complexity of the staffing problem, but it must also be weighed against the other desirable characteristics of models delineated above.
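The flavor of such a regression model can be sketched as follows. The data, coefficients, and workload drivers below are fabricated for illustration and do not reproduce the Cruz and Guarín analysis:

```python
import random

random.seed(0)

# Fabricated facility data: three workload drivers per facility.
n = 60
devices = [random.uniform(500, 5_000) for _ in range(n)]
hours = [random.uniform(1_000, 20_000) for _ in range(n)]       # mgmt hours
discharges = [random.uniform(2_000, 30_000) for _ in range(n)]  # weighted

# Hypothetical "true" staffing relationship plus noise:
ftes = [
    2 + 0.002 * d + 0.0004 * h + 0.0001 * q + random.gauss(0, 1)
    for d, h, q in zip(devices, hours, discharges)
]

# Ordinary least squares via the normal equations X'X b = X'y:
X = [[1.0, d, h, q] for d, h, q in zip(devices, hours, discharges)]
k = 4
XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
Xty = [sum(r[i] * y for r, y in zip(X, ftes)) for i in range(k)]

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    A = [row[:] + [bi] for row, bi in zip(A, b)]
    m = len(A)
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, m):
            f = A[r][c] / A[c][c]
            A[r] = [a - f * ac for a, ac in zip(A[r], A[c])]
    x = [0.0] * m
    for c in reversed(range(m)):
        x[c] = (A[c][m] - sum(A[c][j] * x[j] for j in range(c + 1, m))) / A[c][c]
    return x

beta = solve(XtX, Xty)
pred = [sum(b * xi for b, xi in zip(beta, row)) for row in X]
ybar = sum(ftes) / n
ss_res = sum((y - p) ** 2 for y, p in zip(ftes, pred))
ss_tot = sum((y - ybar) ** 2 for y in ftes)
r2 = 1 - ss_res / ss_tot
print([round(b, 4) for b in beta], round(r2, 2))
```

The fitted coefficients recover the planted relationship, and the R² value summarizes how much of the FTE variance the drivers explain, which is the same quantity reported in the studies above.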

Staffing Model Inputs

A variety of types of data have been used (or could potentially be used) as staffing model inputs; the quality and cost/benefit considerations discussed above apply to each. The list that follows is neither comprehensive nor an in-depth examination of each type, but rather offers examples of common (and uncommon) inputs and some discussion of the critical characteristics of each. The intent is only to provide exemplars of the kinds of issues to be examined when evaluating potential model inputs. In general, the list proceeds from more “micro” measures (at very specific levels of analysis and often more expensive) to more global, sometimes more easily obtained measures.

Tasks and Task Durations

The ATSS report cites task duration as a primary input and lists three sources of task duration data: subject-matter expert (SME) estimates, historical data, and direct time studies (NRC, 2013).

SME estimates—Self-report measures of task duration (as well as related measures such as work order throughput) are widely used in workforce planning. The dangers of self-report data are well documented (Kahneman et al., 1982). However, three factors probably account for the widespread use of such data: (1) the data are acquired with relative ease; (2) even with the inherent drawbacks of human memory and judgment, the data generally provide a first approximation to the actual time spent on tasks; and (3) because these data are the result of human judgment, they often can automatically incorporate other factors that more “objective” measures might miss—for example, travel time, variability in the availability of supplies and parts, and other factors that the U.S. Air Force refers to as “indirect productive time” (USAF, 1995).

Technological advancements may provide a substitute for or enhancement of SME estimates. Wearable devices, mobile devices, location tracking, and so on may augment or replace SME estimates in some types of jobs. Human resources systems such as KRONOS have already implemented real-time tracking of employee locations. However, these data must be carefully reviewed to determine whether they capture all (or most) important aspects of work performance and are not overly influenced by factors that have little to do with task performance. For example, location tracking may overestimate task times (e.g., by including non-working time on the task/job site) or underestimate task times (e.g., by not including time necessary to complete the task when it is done off-site). Also, such data collection methods (as with time studies) must be ethical and transparent; employee consent is specifically discussed in Chapter 2.

Another factor that must be evaluated in considering model inputs is the data collection system. The ATSS report stated that “the cost of developing and sustaining data collection systems to feed into modeling must be considered” (NRC, 2013, p. 54). Unfortunately, these costs, especially when developing the system from scratch or even upgrading it to fit the requirements of staffing modeling, may be prohibitive.

Historical data refers to data typically available in existing databases—either labor/human resources management (HRM) databases or production/quality control databases. While the data in these sources often were not explicitly intended to yield task duration information, they can often be adapted to that purpose (keeping in mind general warnings about the characteristics of data obtained for other purposes). Bisantz and Drury (2004) point out that this type of data has the advantage (as noted above) of being “minimally reactive”—that is, providing minimal disruption to operations.

Direct time studies are traditionally known as time and motion studies, although here the focus is on the time study processes used to determine task duration. Time studies are typically conducted using some form of observation of the task performance (direct observation by humans, cameras, or other mechanical means of recording task times). Typically, these studies are fairly unobtrusive except that the employees know that their work is being observed (and must give their consent to be observed and recorded). As with self-report measures, this factor may influence the observed behaviors and consequently the recorded task durations. Brannick et al. (2007) list several variants of time studies including work sampling, stopwatch studies, synthesizing task duration from knowledge of the times associated with each task element, using industry standard data, and others.

Lopetegui et al. (2014) did an extensive review of the use of time motion studies in health care. They called for the use of the term “continuous observation time motion studies” to refer to traditional direct time studies, further dividing that technique into “single duration measurement,” “milestones timing,” and “workflow time study.” Zheng et al. (2011) provided a set of methodological guidelines for conducting time motion studies. It should be noted that the variability of task duration is almost as important as its average, as usually measured. Modeling techniques that can capture the variability of task cycle time and other key factors are what differentiate a probabilistic (i.e., variable) model from a deterministic (i.e., fixed) one. Also note that in traditional time studies (Mital et al., 2016) the variability of task times is measured and then used to calculate the sample size needed to ensure that the mean is known to within a specific range. However, the variability of task times is then discarded, with just the mean being retained, reverting the model from probabilistic to deterministic.
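The sample-size calculation used in traditional time studies can be sketched as follows. The pilot observations and precision targets are hypothetical; the formula is the standard n = (zs / ex̄)² form, where e is the acceptable relative error:

```python
import math

def time_study_sample_size(mean_time: float, std_dev: float,
                           rel_error: float = 0.05, z: float = 1.96) -> int:
    """Observations needed so the observed mean task time falls within
    +/- rel_error of the true mean at the confidence level implied by z."""
    n = (z * std_dev / (rel_error * mean_time)) ** 2
    return math.ceil(n)

# Pilot observations of one task (hypothetical, in minutes):
pilot = [12.0, 15.5, 11.2, 14.8, 13.1, 16.0, 12.7, 14.2]
m = sum(pilot) / len(pilot)
s = math.sqrt(sum((x - m) ** 2 for x in pilot) / (len(pilot) - 1))
print(time_study_sample_size(m, s))
```

Note that the variability (s) drives the required sample size but is then discarded once the mean is established, which is exactly the reversion from probabilistic to deterministic described above.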

Predetermined Motion-Time Systems (PMTS)—If a task is broken into elements (as recommended for most direct time studies), then the same elements may well occur in other tasks. This potentially allows for reuse of time standards at the element level, and a database of element standard times can be created to find time standards for new or different tasks. This idea of reuse has long been extended to a set of elements potentially applying to all tasks in all industries. This is the basis for PMTS, which claim to reduce the time needed to determine time standards. Commercial PMTS packages have been available since the 1940s and have been widely used to set time standards for tasks in manufacturing industry (e.g., Zardin, 2002). However, these PMTS packages typically (although not exclusively) cover physical work rather than the more complex physical/cognitive tasks required in maintenance and office activities, and so are potentially less applicable to facilities management (engineering) functions at VHA.

Knowledge, Skills, Abilities, and Other Characteristics

Typically, staffing models focus almost entirely on matching numbers of persons in each job class to the tasks and task durations required in those job classes. The requirements of each position are usually defined almost solely in terms of tasks and task durations. An alternative view expands this to matching persons and knowledge, skills, abilities, and other characteristics (KSAO) requirements, persons and contextual job requirements, persons and extrinsic and intrinsic rewards, and persons and organizational characteristics and values (Heneman and Judge, 2006). While this expanded view is beyond the purview of this report, it should be noted that critical steps in implementing staffing include recruitment and retention. Trends such as the aging of the maintenance workforce (and attendant potential loss of expertise) and its counterpart—the potential movement of maintenance jobs toward more cognitively complex tasks—may make recruitment and retention critical to maintaining adequate staffing. Likewise, given that (as some of the speakers mentioned) VHA facilities maintenance salaries are often not competitive with those of local for-profit firms, job satisfaction and motivation may also come to play critical roles in adequate staffing.

Similarly, operations researchers have been making efforts to go beyond strict task modeling and incorporate skills into workforce planning. De Bruecker et al. (2015) give an extensive review of efforts in this area. However, at this stage of development, the incorporation of KSAOs may not be practically feasible. One drawback of this approach is the difficulty of accurately determining the KSAO requirements for a specific job. There is an extensive literature on the measurement of KSAOs in job analysis from industrial-organizational psychology and a related literature on cognitive task analysis from human-systems integration researchers. These sources make it clear that accurate determination of KSAOs is a nontrivial (in other words, time-consuming and often expensive) task. However, it should also be noted that if facilities maintenance jobs follow the trend (observed in other job domains) of increasing cognitive complexity, incorporating evaluations of the KSAOs demanded for acceptable organizational performance may become a necessity, not an option.

Facility characteristics—A very common type of variable in workforce analyses is facility characteristics. In the facilities maintenance domain, this is a very logical and potentially useful input variable. In the IFMA (2017) report on operations and maintenance benchmarks, several facility characteristics were cited, including facility age, setting (urban, rural, industrial park, etc.), days/hours of heating and cooling, developed acres, gross square feet, and several others. In a related example, Cruz and Guarín (2016) noted that the most common input variables in staffing models for clinical engineering were number of beds, number of pieces of equipment, and total acquisition cost. Their analysis of total number of devices, total technology management hours, and hospital complexity found that these factors predicted current FTEs with considerable accuracy. This implies that, operationalized correctly, facility characteristics have very high potential as model input variables.

Workforce productivity—In addition to total staff time, another consideration is whether there will be any changes in productivity. That is, will the workforce be more productive or less productive? If it will be more productive, then the organization can handle an increased workload with a smaller increase in staffing. Alternatively, if productivity will likely decline, then the organization will need a larger number of personnel. Several factors influence productivity, including changes in technology, the degree of workforce experience, the degree of workforce skills, or formal productivity programs. It is the organization’s task to look at these major factors for workload and productivity and to include them in its logic process for predicting the needs for personnel (Bulla and Scott, 1987).
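A minimal sketch of folding a productivity adjustment into an FTE requirement; all figures are hypothetical:

```python
def required_ftes(workload_hours: float, hours_per_fte: float,
                  productivity_change: float = 0.0) -> float:
    """FTEs needed for a given annual workload after a fractional
    productivity change (e.g., +0.05 means 5 percent more productive)."""
    return workload_hours / (hours_per_fte * (1 + productivity_change))

# 90,000 annual workload hours, 1,800 productive hours per FTE:
print(round(required_ftes(90_000, 1_800), 1))        # no change
print(round(required_ftes(90_000, 1_800, 0.05), 1))  # 5 percent gain
```

Even a modest productivity gain shifts the requirement by several FTEs, which is why the factors listed above belong in the prediction logic rather than being treated as constants.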

Staffing Model Outputs

While the most common outputs of staffing models are FTEs of many types, categories, organizational levels, time frames, and so on, the committee often discussed ultimate outcomes, primarily patient outcomes (as introduced in the discussion of validity). Presumably, these outcomes are the results of proper or improper numbers of FTEs. One of the primary issues in choosing the appropriate staffing model is deciding what model outputs are feasible. As a framework for that decision, consider the theoretical structure of staffing model development.

In general, a staffing model is an attempt to reverse-engineer desired staff levels from desired organizational outcomes. In a hospital setting, the relevant causal chain is shown in Box 3.1.

Risk, Benchmarks, and a Caution About Metrics

As mentioned above, performance measures are subject to “criterion contamination” and “criterion deficiency.” In criterion contamination, observed variance in measures of job performance is not only a function of the actual job performance itself but may also be a function of external factors outside the scope of the job. For example, a real estate salesperson’s commission sales are not only a function of that person’s sales ability and effort but also a function of the state of the economy, the location in which the person works, the advertising budget of the real estate company, and so on. In a health-care setting, attempting to isolate the source of the relationship between staffing levels and clinical outcomes would be difficult at best.

Criterion deficiency refers to situations in which performance metrics fail to measure (or are not designed to measure) important aspects of the job (e.g., using commissions as a metric of real estate sales ability may miss outcomes important to the organization such as reputation, good will, etc.). Typically, observed patient outcomes may not capture important aspects of organizational performance that are more directly tied to staffing levels. For example, one speaker from the maintenance department of a children’s hospital told the committee that a highlight of the maintenance staff’s work was to build stages for musical performances, puppet shows, and so on for the hospital lobby. This is a valuable outcome for the organization, and it is very likely affected by staffing levels. But it probably is not captured in typical patient outcome measures. It is even difficult to link staffing of direct caregivers to patient outcomes (see McGillis-Hall et al., 2003). The Wang et al. (2013) study noted that measures such as number of operating (staffed) beds can be skewed by the mix of long-term versus acute care or by hospitals reducing clinical staff but keeping “large amounts of unused equipment.”

Another caution regards the use of benchmarks. Benchmarks are widely used in examining staffing levels, but it cannot be overemphasized that benchmarks require all of the characteristics of good measurements discussed above and they have the additional drawback of being drawn from organizations that are inevitably different from the organization they are being compared to. Whether those differences are inconsequential, “marginally relevant,” or absolutely critical can be determined only by careful examination of the sources of the benchmarks (preferably with close attention to the operationalization of the measurements) and by careful search for critical differences between the benchmark organizations and the target organization. The common practice of aggregating across different organizations to produce the benchmarks may exacerbate this problem.

The point here is that evaluations of staffing models should include not only a close examination of the costs of obtaining input data but also a close examination of potential contaminants and deficiencies in the metrics used as inputs. Likewise, any input metric should be examined closely for its fundamental measurement properties: reliability (consistency in measurement) and validity (accuracy).

Modeling Techniques

Various modeling techniques have been developed and deployed to handle the increasing complexity and diversity of predicting talent needs. These techniques are typically grouped into three categories: demand modeling techniques, supply modeling techniques, and integrated manpower planning models (see Table 3.2).

The focus in this report is on the demand side. It is beyond the scope of this report to discuss each of these techniques in depth. Regression analysis, time series analysis, simple percentage changes in staff levels, and project-driven staffing estimates appear to be the most popular demand modeling techniques (Bechet and Maki, 1987).
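Two of the simpler demand techniques from this family, a moving average and simple exponential smoothing, can be sketched as follows with a fabricated demand history:

```python
def exponential_smoothing(series, alpha=0.3):
    """Simple exponential smoothing; returns the one-step-ahead forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def moving_average(series, window=3):
    """Forecast as the average of the last `window` observations."""
    return sum(series[-window:]) / window

# Hypothetical annual maintenance FTE demand over six years:
demand = [110, 114, 118, 121, 127, 131]
print(round(moving_average(demand), 1))
print(round(exponential_smoothing(demand), 1))
```

Both methods lag a steadily rising series, one reason technique selection must consider the behavior of the quantity being forecast and not just the convenience of the method.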

Each technique has its special use, and care must be taken to select the correct technique or combination of techniques for a particular application. The organization and modeler play a significant role in this selection, and the better they understand the range of modeling possibilities, the more likely it is that the organization’s modeling efforts will yield positive results. The selection of a technique, as noted above, depends on many factors, such as the context of the forecast; the relevance and availability of historical data; the desired degree of detail, accuracy, credibility, and defensibility; the time period of the forecast; the cost to develop, apply, and maintain the model; and the time available for making the analysis (Norman, 2018).

TABLE 3.2 Modeling Techniques for Human Resource Planning

Demand Models:
  • Trend/percentage estimates
  • Deterministic relationships based on other organization variables
  • Regression/correlation analysis (simple, multiple, step-wise)
  • Time series analysis (moving averages, exponential smoothing, Box–Jenkins)
  • Delphi techniques
  • Econometric models

Supply Models:
  • Replacement charts/forward planning
  • Markov models (supply push)
  • Renewal models (demand pull)

Integrated Models:
  • Linear programming
  • Goal programming
  • Network models
  • Dynamic programming
  • Simulation models

SOURCE: Committee generated from information in Bechet and Maki (1987).

Modeling is still an art, and as such the modeler’s role is to constantly weigh these factors on a variety of levels to choose the right technique. At a minimum, the modeler should choose a technique that makes the best use of available data. For example, if the modeler can readily apply one technique of acceptable accuracy, the modeler should not add features or refinements by using a more advanced technique that offers potentially greater accuracy but requires nonexistent data or data that are costly to obtain. This kind of trade-off is relatively easy to make, but others may require more consideration (Chambers et al., 1971).

Furthermore, some applications may call for a mix of techniques to model different functions (Norman, 2019). For example, if continuous staffing of a boiler plant is mandated, then there is a deterministic number of staff required to provide that coverage, so it is probably a yes/no variable in the model. In contrast, the staff needed to perform highly unpredictable tasks, such as emergency response to a major utility outage, will be inherently probabilistic and require a different modeling technique. The overall model may be a summation of elements with different types of techniques.
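This mixing of techniques can be sketched as follows; the coverage hours, productive hours per FTE, outage rates, and labor-hour distributions are all assumptions for illustration:

```python
import math
import random

random.seed(1)

# Deterministic element: mandated 24/7 boiler-plant coverage.
COVERAGE_HOURS_PER_WEEK = 24 * 7
PRODUCTIVE_HOURS_PER_FTE = 36.0   # assumed productive hours per FTE per week
boiler_ftes = math.ceil(COVERAGE_HOURS_PER_WEEK / PRODUCTIVE_HOURS_PER_FTE)

# Probabilistic element: emergency utility-outage response, estimated by
# Monte Carlo simulation. Weekly outage counts are approximated by 10
# Bernoulli trials; each outage consumes a normally distributed number
# of labor hours.
def weekly_emergency_hours_p95(weeks=10_000, mean_outages=0.5,
                               hours_mean=40.0, hours_sd=15.0):
    totals = []
    for _ in range(weeks):
        outages = sum(1 for _ in range(10)
                      if random.random() < mean_outages / 10)
        totals.append(sum(random.gauss(hours_mean, hours_sd)
                          for _ in range(outages)))
    totals.sort()
    return totals[int(0.95 * len(totals))]  # 95th-percentile weekly hours

emergency_ftes = math.ceil(weekly_emergency_hours_p95()
                           / PRODUCTIVE_HOURS_PER_FTE)
total_ftes = boiler_ftes + emergency_ftes
print(boiler_ftes, emergency_ftes, total_ftes)
```

The deterministic element is a fixed yes/no headcount, while the probabilistic element is sized against a chosen service level (here, the 95th percentile of simulated weekly demand); the overall model is the summation of the two.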

Table 3.3 illustrates various techniques that modelers have at their disposal when making a selection. This is not meant to be an exhaustive list, but it exemplifies a range of options across various factors. Techniques vary in their costs, as well as in scope and accuracy. The organization must determine the level of inaccuracy it can tolerate—in other words, how the decision will vary, depending on the range of accuracy of the output (Chambers et al., 1971). This allows the modeler to trade off cost against the value of accuracy in choosing a technique for a particular application. It is important to highlight that the techniques profiled do not represent a single method, but rather a family of methods. Therefore, the descriptions may not capture all the variations of a technique. They should only be viewed as a general description of the basic elements of the technique. Similarly, some of these techniques, such as time series analysis, may not have an application for the purpose of this study.

New techniques such as machine learning (ML), artificial intelligence (AI), and deep learning (deep neural networks) are fast becoming a major area of interest for organizations, and they are being implemented in businesses to accelerate automated decision making. These techniques bring new capabilities not imagined before, such as building a system that can evaluate not just a few but thousands of models. The system can choose not only which model is best but also which subset of those thousands can best be combined into an ensemble of models to minimize prediction error (Kosiba, 2018).

While discussions of these benefits have caught the attention of many organizations, deciding how to implement such techniques can be challenging. They require significant amounts of data to be robust, and they are black box models: highly nonlinear in nature and generally harder to explain. Users can observe only the input–output relationship; the underlying reasons or processes that produce the output are not available. Black box models often achieve greater accuracy than white box models, but at the expense of transparency and accountability (Tannam, 2019).

One aspect of data and measures that bears future consideration is that the sheer amount of data collected from the sources noted above may be large enough to support analysis by the various techniques of big data analytics (BDA; e.g., Mayer-Schönberger and Cukier, 2013) or data mining (e.g., Ye, 2004). A characteristic of big data is not just its volume but also the fact that it comprises almost the whole population of data rather than the usual sample. In fact, Halevy et al. (2009) showed how language translation using billions of Internet pages produced better results than translations using carefully chosen samples. BDA uses data collected for other purposes, often from different sources and in different databases, to obtain fresh insights into complex problems (e.g., Drury, 2015), and thus would seem applicable to VHA's needs as it models staffing requirements. In its staffing model efforts, VHA does indeed have large amounts of disparate data, collected for other purposes and stored in different databases.
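
The cross-database idea can be illustrated in miniature. In this sketch, two record sets stand in for separate databases (work orders and staffing rosters); the field names, facilities, and numbers are hypothetical, not VHA schemas. Joining them on a shared key yields a metric—hours per full-time equivalent (FTE)—that neither source holds on its own.

```python
from collections import defaultdict

# Hypothetical stand-ins for two separate databases.
work_orders = [
    {"facility": "A", "hours": 120},
    {"facility": "A", "hours": 80},
    {"facility": "B", "hours": 200},
]
staffing = {"A": 5, "B": 4}  # current FTEs per facility

# Aggregate one source, then join on the shared facility key.
hours = defaultdict(float)
for wo in work_orders:
    hours[wo["facility"]] += wo["hours"]

hours_per_fte = {f: hours[f] / staffing[f] for f in staffing}
```

At big data scale the join crosses databases and formats rather than two in-memory lists, but the analytic move—repurposing data collected for other reasons to answer a staffing question—is the same.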

This combination of big data and advanced ML techniques to analyze it holds great promise. But as with any analytic technique, care must be taken in using this approach. Big data quality drives maximum information usability, which enables high-performance, scalable management of tasks that are initiated by front-end data input. Such tasks include workforce prediction algorithms. Even with big data, the utility (accuracy and assessment) of prediction algorithms depends directly on the quality of the input data that drive them. Data may be classified as structured, unstructured, or semi-structured (Holmes, 2017). Structured data are usually stored electronically as spreadsheets or databases; they are relatively easy to manage and amenable to statistical analysis. In contrast, unstructured data are much more challenging to subject to statistical analyses because they often contain material that is not easily broken into discrete, quantifiable components or that has meaning beyond numerical values; examples include photos and videos. Examples of semi-structured data include emails or other text in which recognizable markers make parts of the content relatively easy to extract and analyze. It is important to note, however, that all of these data approaches work only with existing data, which include the actual or current staffing levels. Those levels may not be a good measure of the optimal levels of staffing, so the construction of the model should never be left entirely to the ML algorithm.
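
The structured/semi-structured distinction can be made concrete with a small parsing sketch. The message below is invented for illustration: a maintenance request email is semi-structured because markers such as the `Subject:` line and a building code can be extracted into fields, while the body remains unstructured free text.

```python
import re

# A hypothetical semi-structured maintenance request.
email = """Subject: Chiller fault, Bldg 7
The chiller in Bldg 7 tripped twice this week; please schedule a repair."""

def to_structured(message):
    # Extract the fields the markers expose; the body stays unstructured.
    subject = re.search(r"^Subject:\s*(.+)$", message, re.MULTILINE)
    building = re.search(r"Bldg\s+(\d+)", message)
    return {
        "subject": subject.group(1) if subject else None,
        "building": int(building.group(1)) if building else None,
        "body": message.split("\n", 1)[1] if "\n" in message else "",
    }

record = to_structured(email)
```

Once such records are reduced to structured fields, they become amenable to the statistical analyses described above; whatever meaning stays in the free-text body requires different (and harder) techniques.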

Big data quality is enhanced through timely and accurate data acquisition, preparation, and quality assurance (Loshin, 2011), which demands both data governance and quality assurance processes. Firm and consistent monitoring of data at the points of selection greatly enhances data quality, as does redundancy at those points. Error propagation due to poor data quality at the points of entry may be controlled by applying an analytical system of checks and balances at each stage of exercising prediction algorithms, although such a complex analytical system faces quality challenges of its own. Finally, even perfect data processed by an imperfect prediction algorithm will produce meaningless predictions. In short, while big data holds great promise, there are potential pitfalls of which any user should be aware. O'Neil (2016) describes many of those pitfalls and offers suggestions for avoiding them. In using big data, the advantages of white box over black box models (as in the discussion of transparency above) may be even larger.
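
A checks-and-balances stage can be as simple as a table of named validation rules applied at the point of entry. The sketch below is illustrative; the check names and record fields are hypothetical, not an actual VHA schema.

```python
def validate(record, checks):
    # Run every check and collect the names of those that fail, so problems
    # are caught at entry rather than propagating into the algorithm.
    return [name for name, check in checks.items() if not check(record)]

# Illustrative quality rules for a daily labor record.
checks = {
    "hours_nonnegative": lambda r: r.get("hours", -1) >= 0,
    "hours_plausible":   lambda r: r.get("hours", 0) <= 24,
    "has_facility":      lambda r: bool(r.get("facility")),
}

clean_failures = validate({"facility": "A", "hours": 8}, checks)
dirty_failures = validate({"facility": "", "hours": 40}, checks)
```

Collecting all failures, rather than rejecting on the first, gives the data stewards a fuller picture of where quality breaks down; the same rule table can be reapplied at each later stage of the pipeline.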

SUMMARY

The goal of this chapter was to provide an overview of the critical distinctions among types of models, an overview of models as they typically apply to the staffing domain, and a checklist of features and dimensions to consider carefully when developing or choosing a staffing model. A number of potential pitfalls and problems were also described to provide guidance to decision makers. For any staffing model, whether the target is current staffing levels, short-term levels (e.g., 1-2 years), or long-term levels (e.g., 3+ years), the items reviewed in this chapter should be examined as early and as thoroughly as possible. A careful review of these factors, coupled with a clear vision of the goals of the staffing model and judicious balancing of competing priorities where necessary, should result in an accurate and useful model.


TABLE 3.3 Examples of Staffing Modeling Techniques

Delphi Method
Description: Several rounds of questions on workload estimation are given to a group of experts, and the anonymous responses are aggregated and shared with the group after each round. The experts can adjust their answers in subsequent rounds, based on how they interpret the "group response" that has been provided to them. The Delphi method seeks to reach the correct response through consensus.
Accuracy: Highly dependent on expert skill and experience. Reliability could be poor (e.g., a different choice of experts may well produce different results).
Cost and Data Required: Relatively inexpensive, with small data requirements. In some cases, the cost in subject-matter experts' time could be high.

Econometric Models
Description: A system of interdependent regression equations for estimating future demand based on past statistical data. Under these models, a relationship is established between the dependent variable to be predicted (e.g., manpower/human resources) and independent variables (e.g., sales, total production, workload). The parameters of the regression equations are usually estimated simultaneously.
Accuracy: Produces excellent results.
Cost and Data Required: Requires considerable historical data, although much of this may already be collected for other purposes.

Regression
Description: Similar to econometric models, regression identifies the movement of two or more interrelated series. It functionally relates y variables, such as the number of employees, to x variables, such as service delivery, by measuring the relationship that existed in the past. Relationships are primarily analyzed statistically, although any relationship should be selected for testing on rational grounds.
Accuracy: Produces accurate short- and long-term staffing forecasts. However, significant shifts among the variables' relationships and restriction of focus can weaken the accuracy (Georgoff and Murdick, 1986).
Cost and Data Required: Data requirements are the same as for econometric models.

Simulation
Description: A powerful technique in any system that includes variance, as it shows overall system response to different random choices of, for example, event times. It can produce meaningful estimates of the probability of demand exceeding the available resources—that is, failure of the system to provide the required service level. Simulation does, however, demand detailed models of how the resources contribute to fulfilling service demand, so that the effects of different values of each random variable can be estimated.
Accuracy: Produces very good results.
Cost and Data Required: Cost associated with obtaining the necessary data could be high.

Time Studies
Description: A technique for building up the workload from direct measurements of the time required to perform each element of a job. It can be a particularly valid method, provided the times found are representative both in average and in variability.
Accuracy: Yields very detailed data and uncovers facts not immediately obvious. However, it is reactive in that the task performers need to be told how and why they are being timed, which may change their regular behavior.
Cost and Data Required: Time-consuming and, hence, expensive.

Logistic Composite Model
Description: A technique developed for the U.S. Air Force that uses a simulation model to determine demand for staffing services and combines the resulting data with staffing availability—for example, total working hours minus nonavailability due to vacations, training time, etc. (Dahlman et al., 2002).
Accuracy: If the detailed demand model is available, these models can be highly accurate.
Cost and Data Required: Requires one of the techniques listed above (e.g., Time Studies or Delphi) and so may require considerable time and cost resources.

Time Series
Description: A technique that projects past trends into the future using "time" as the independent variable. It identifies an overall trend, the seasonal effect (fluctuations during specific periods such as summer or holidays), the cyclical effect (represented by the business cycle), and a residual effect (resulting from unpredictable natural events and the randomness of human action) (Meehan and Hamed, 1990). Examples of these techniques are Box–Jenkins, Moving Average, and Exponential Smoothing.
Accuracy: Predicts seasonal fluctuations in staffing requirements. Because these models are based on time rather than on organizational indicators, they are not sensitive to changes in organizational circumstances.
Cost and Data Required: Requires several years of data with clear, stable patterns and pattern changes.
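
To make the regression entry in Table 3.3 concrete, the following sketch fits a one-variable ordinary least squares line relating staff to a workload driver. The data are invented for illustration (square footage maintained, in thousands, versus FTEs), not actual VHA figures.

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a + b*x, via the closed-form sums.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b  # intercept a, slope b

# Hypothetical history: square footage maintained (thousands) vs. FTEs.
sqft = [100, 150, 200, 250, 300]
ftes = [12, 17, 22, 27, 32]

a, b = fit_line(sqft, ftes)
forecast = a + b * 350  # projected staffing for a 350k sq ft portfolio
# → 37.0
```

As the table notes, such a fit simply extrapolates the relationship that existed in the past; it says nothing about whether historical staffing was optimal, and it weakens if the relationship between the variables shifts.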