

3 Use Case Scenarios and Design Enhancements
Pages 31-62



From page 31...
... These can be single vaccines for each chosen disease or multiple candidate vaccines for a single disease (i.e., determining the ranking for vaccines with different bundles of attributes) or some combination thereof.
From page 32...
... The NYSDOH team included a group of health officials, an epidemiologist, a computer scientist, and immunization officers who supported the effort to compile disease burden and vaccine data. The use case scenario of the Serum Institute of India was spearheaded by its corporate medical director, who was supported by a project assistant.
From page 33...
... The groups' feedback is summarized by the consultant in Appendix A, and the committee's corresponding responses or actions are provided in Appendix B. The fourth use case scenario -- which will be discussed later in this chapter -- focused on using SMART Vaccines as a reverse engineering tool to determine the SMART Scores of potential vaccines for a single disease and thus offer guidance to vaccine developers concerning the most desirable bundles of attributes for potential vaccines.
From page 34...
... Just as in version 1.0, SMART Vaccines 1.1 provides detailed population data, including life-table information and average hourly wage rates, all of which are used in subsequent calculations for determining the effects of various vaccines (see Figure 3-3). On the page requesting information on disease burden (see Figure 3-4)
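The excerpt does not spell out the underlying formulas, but purely as an illustration of why average hourly wage data would enter such a calculation, the sketch below values the workdays lost to a disease episode. Every number and variable name is invented; this is not SMART Vaccines' actual computation.

```python
# Generic illustration (not SMART Vaccines' actual formula) of why average
# hourly wage data can enter a vaccine evaluation: valuing the workdays lost
# to a disease episode. All numbers below are invented.

avg_hourly_wage = 4.50          # hypothetical local wage rate
hours_per_workday = 8
workdays_lost_per_case = 5      # hypothetical duration of illness
cases_per_year = 120_000        # hypothetical annual disease burden

productivity_loss = (avg_hourly_wage * hours_per_workday
                     * workdays_lost_per_case * cases_per_year)
print(f"Annual productivity loss: {productivity_loss:,.0f}")  # 21,600,000
```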
From page 35...
... disease is specified (e.g., pneumococcal infection) and the relevant data are entered, users can save the data for subsequent use.
From page 36...
... in SMART Vaccines 1.1. Users need to estimate that total cost offline by using the best data and the best analytic approach that their local resources permit (which may range from an informed expert's best estimate to richly supported true cost data)
From page 37...
... Using the same four age brackets as used for the disease burden data, the Vaccines page asks users to indicate with a check box whether or not the vaccine targets each age group and to specify the percentage of each age group expected to receive the vaccination ("coverage") and the percentage of those vaccinated persons who will gain immunity ("effectiveness")
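Coverage and effectiveness combine multiplicatively to give the share of each targeted age group that actually gains immunity. The sketch below illustrates that arithmetic; the age brackets, percentages, and function name are hypothetical and are not taken from the tool.

```python
# Hypothetical illustration of how per-age-group targeting, coverage, and
# effectiveness combine. The brackets and numbers are invented for this sketch.

AGE_BRACKETS = ["0-4", "5-17", "18-64", "65+"]

def immunized_fraction(targeted: bool, coverage: float, effectiveness: float) -> float:
    """Share of an age group expected to gain immunity.

    coverage      -- share of the group expected to receive the vaccine (0-1)
    effectiveness -- share of vaccinated persons who gain immunity (0-1)
    """
    return coverage * effectiveness if targeted else 0.0

# Example: a vaccine targeting only the two youngest brackets.
profile = {
    "0-4":   dict(targeted=True,  coverage=0.80, effectiveness=0.90),
    "5-17":  dict(targeted=True,  coverage=0.60, effectiveness=0.90),
    "18-64": dict(targeted=False, coverage=0.0,  effectiveness=0.0),
    "65+":   dict(targeted=False, coverage=0.0,  effectiveness=0.0),
}

for bracket in AGE_BRACKETS:
    frac = immunized_fraction(**profile[bracket])
    print(f"{bracket:>6}: {frac:.0%} of the age group gains immunity")
```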
From page 38...
... This provides a ready mechanism to determine the value (as measured by the SMART Score) of vaccines with different design profiles.
From page 39...
... took this approach to reverse engineer the desirable set of attributes of pneumococcal vaccines for South Africa. Upon completing data entry to specify vaccines, users click the "Continue" button to proceed to the Evaluation section of the program.
From page 40...
... First, if many attributes are chosen, then the weights assigned to those at the bottom of the priority list will have little meaningful effect on the rankings of candidate vaccines. Second, even with the elimination of double counting with DALYs and QALYs, users can still select sets of attributes that could create additional double counting.
From page 41...
... The first step in the Weights section asks the user to rank the selected attributes by order of importance. Figure 3-8 shows a set of attributes that a hypothetical decision maker has selected.
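The first caution from page 40, that weights near the bottom of the priority list barely matter, can be made concrete with any rank-based weighting scheme. The sketch below uses rank-order centroid weights, one common way of turning a pure ranking into numbers; it is offered only as an illustration and is not necessarily how SMART Vaccines derives its weights.

```python
# Rank-order centroid (ROC) weights: one common way to convert a ranking of n
# attributes into numeric weights. Shown only to illustrate how quickly weight
# shrinks with rank; SMART Vaccines may elicit weights differently.

def roc_weights(n: int) -> list[float]:
    """Weight of the attribute ranked k (1 = most important) out of n:
    w_k = (1/n) * sum_{i=k}^{n} 1/i."""
    return [sum(1.0 / i for i in range(k, n + 1)) / n for k in range(1, n + 1)]

for n in (4, 8):
    print(f"{n} attributes:", [round(w, 3) for w in roc_weights(n)])
# With 8 attributes, the lowest-ranked ones receive only a few percent of the
# total weight, so they can barely move a candidate's final score.
```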
From page 42...
... On the Priorities page the user can select up to five vaccine candidates for simultaneous comparison. The limit of five candidates is determined by screen real estate, but users can always calculate SMART Scores for a set of five candidates, save the results using the Print button at the lower right corner of the page, and then proceed to define another set of
From page 43...
... As the attribute values are completed for a candidate vaccine, a SMART Score appears in the display box on the right side of this screen. In this hypothetical example, the user's selection of vaccine candidates for pneumococcal infection, human papillomavirus, and rotavirus results in ... This is possible because multi-attribute utility models are independent of irrelevant alternatives (IIA).
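At its core, a score of this kind is a weighted sum of normalized attribute values. The sketch below shows that additive calculation with invented attribute names, weights, and values; it is a generic multi-attribute example, not the committee's scoring code. Because each candidate's score depends only on its own attribute values, adding or removing other candidates leaves it unchanged, which is the independence from irrelevant alternatives mentioned above.

```python
# A minimal additive multi-attribute score: sum of weight * normalized value.
# All attribute names, weights, and values are hypothetical.

weights = {
    "premature_deaths_averted": 0.40,
    "qalys_gained":             0.35,
    "cost_effectiveness":       0.25,
}

# Attribute values already normalized to a 0-100 scale
# (100 = best-case outcome, 0 = worst-case outcome).
candidate = {
    "premature_deaths_averted": 70.0,
    "qalys_gained":             55.0,
    "cost_effectiveness":       40.0,
}

def additive_score(values: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted additive score; with weights summing to 1 and values in 0-100,
    the result normally lands in the 0-100 range as well."""
    return sum(weights[name] * values[name] for name in weights)

print(round(additive_score(candidate, weights), 2))  # 57.25
```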
From page 44...
... Similarly, the SMART Scores of one user do not correspond to those of another, but it still makes sense to speak of differences in SMART Scores within a single user's analysis.
From page 45...
... The vertical axis of the SMART Score graph dynamically adjusts to accommodate scores outside the 0 to 100 range. Attribute values -- and hence also SMART Scores -- can fall below 0 if an attribute value is worse than the "worst-case" outcome established for that attribute.
From page 46...
... In this case, the graph shows the total score including both the positive and negative components in the sum. This is perfectly legitimate within multi-attribute utility theory, but to alert the user that such a case exists, the SMART Score graph for that candidate vaccine will show hatched bars rather than the standard solid color bars.
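The behavior described on pages 45 and 46 follows from rescaling each raw attribute value against the user-supplied worst-case and best-case bounds: a value worse than the worst case maps below 0, and a value better than the best case maps above 100. The sketch below, with invented bounds and numbers, shows that rescaling together with a simple check for the mixed-sign situation that triggers hatched bars.

```python
# Hypothetical rescaling of a raw attribute value onto the 0-100 scale defined
# by user-chosen worst-case and best-case bounds. Values outside the bounds map
# below 0 or above 100; nothing here is taken from the tool's source code.

def rescale(raw: float, worst: float, best: float) -> float:
    """Linear rescale: worst -> 0, best -> 100. Deliberately not clamped."""
    return 100.0 * (raw - worst) / (best - worst)

# Example: deaths averted per year, with invented bounds of 0 (worst) and 5,000 (best).
print(rescale(6000, worst=0, best=5000))   # 120.0, better than the best case
print(rescale(-500, worst=0, best=5000))   # -10.0, worse than the worst case

def has_mixed_signs(weighted_components: list[float]) -> bool:
    """True when a candidate's weighted components mix positive and negative
    values, the situation flagged with hatched bars in the SMART Score graph."""
    return any(c > 0 for c in weighted_components) and any(c < 0 for c in weighted_components)

print(has_mixed_signs([28.0, -19.5, 10.0]))  # True
```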
From page 47...
... Key Insights from the User Groups
In this section, the committee summarizes key lessons learned, beginning with the broadest policy issues and then shifting to narrower issues in the application of SMART Vaccines to the settings of the three user groups and of the officials from the Mexican Ministry of Health who served as advisory consultants. All of these users fully understood that they were using a preliminary and evolving version of SMART Vaccines and that their feedback was to be applied toward improving the product.
From page 48...
... This can be done by vaccine developers using their best approximation of the attributes and weights that the public health community might use, by the public health community directly, or perhaps through a collaboration between vaccine developers and other stakeholders. To illustrate this approach to using SMART Vaccines 1.1 -- including new features not previously available in SMART Vaccines 1.0 -- the committee came up with three hypothetical vaccines for pneumococcal infection and used data from South Africa for the test case.
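One way to picture this target-product-profile use of the tool is as a sweep over a single design attribute: hold everything else fixed, vary one characteristic of the hypothetical vaccine, and watch the score respond. The sketch below does this for duration of immunity; the scoring function, bounds, and numbers are invented stand-ins, not the report's South Africa data.

```python
# Hypothetical sweep over one design attribute (duration of immunity) to see
# how a SMART-style score responds. The scoring function and all numbers are
# invented for illustration only.

def score_candidate(duration_years: float) -> float:
    """Toy additive score: one component rewards longer immunity (rescaled
    against invented worst/best bounds of 5 and 20 years, weight 0.6); a fixed
    second term stands in for the net effect of all other attributes."""
    immunity_component = 0.6 * 100.0 * (duration_years - 5) / (20 - 5)
    other_components = -20.0
    return immunity_component + other_components

for years in (5, 10, 15, 20):
    print(f"immunity of {years:>2} years -> score {score_candidate(years):6.1f}")
# Longer protection steadily raises the score, which is exactly the kind of
# "which design change buys the most value" question a developer can explore.
```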
From page 49...
... SMART Vaccines was created to allow differential disease burden and vaccine programmatic targeting not only for different age groups but also separately for males and females, as would be appropriate, for example, for an HPV vaccine or potential vaccines against breast cancer or prostate cancer if such were to arise. • The process of entering data for health care treatment costs in SMART Vaccines 1.0 seemed overly cumbersome to many users, forcing them to find and enter highly detailed sub-categories of health care use (e.g., office visits, clinic visits, emergency room visits, hospitalizations)
From page 50...
... Attributes, Weights, Priorities: For example, the "benefits women and children" attribute could be double counted if the disease burden data focused directly on women and children. Thus, they preferred to include the women and children attribute if there was special attention beyond that created by the patterns of disease burden.
From page 51...
... This arises because the multi-attribute utility model expects all attribute scores to have values between 0 and 100, and sets the weights accordingly. Within a reasonable range, allowing SMART Scores to go outside the 0 to 100 range deals with this issue, but there remains a more subtle issue if the boundaries are set so widely or so narrowly that individual attributes have values that diverge too far from the 0 to 100 range anticipated by the multi-attribute utility model.
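That subtlety can be made concrete: if the worst-to-best span for an attribute is set far wider than the values that actually occur, the rescaled values hug one end of the 0 to 100 scale and the attribute's weight is effectively diluted. A small sketch with invented numbers:

```python
# Invented illustration: the same raw attribute value rescaled against a
# reasonable bound and against a bound set far too wide. Weights chosen on the
# assumption that values span 0-100 then understate the attribute's influence.

def rescale(raw: float, worst: float, best: float) -> float:
    return 100.0 * (raw - worst) / (best - worst)

raw_deaths_averted = 4000    # hypothetical value for one candidate
weight = 0.4                 # weight chosen assuming values that span 0-100

reasonable = rescale(raw_deaths_averted, worst=0, best=5000)     # 80.0
too_wide   = rescale(raw_deaths_averted, worst=0, best=500000)   # 0.8

print("contribution with reasonable bounds:", round(weight * reasonable, 2))  # 32.0
print("contribution with too-wide bounds:  ", round(weight * too_wide, 2))    # 0.32
```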
From page 52...
... • User groups -- and other stakeholders -- requested a method to save evaluation results at any point in the process. SMART Vaccines 1.1 includes a "print" button that shows both key states of the program (e.g., all vaccine attributes, the choices of the user for attributes to be used in the evaluation, and the weights attached thereto) and the resulting SMART Scores for each vaccine candidate
From page 53...
... The committee believes that further research to study available methods to support the decision process would be desirable. ... which was targeted for a population excluding infants.
From page 54...
... Because of the positive attribute values for premature deaths averted per year and QALYs, the SMART Score for the PC vaccine is represented in a hatched bar. To demonstrate the target product profile concept more fully, some of the key attributes were varied and the resulting changes in the scores of the hypothetical conjugate vaccine were observed.
From page 55...
... In a second demonstration, increasing the potential length of immunity from 10 to 15 years while holding everything else constant produced a dramatic change in SMART Scores: from –27 to +4 (see Figure 3-19)
From page 56...
... From a hypothetical South African decision maker's perspective, this simulation demonstrated the following: • Although the PS30 vaccine has greater effectiveness than PS23 -- because it covers more serotypes of bacteria -- the added costs offset those health gains, making the two nearly identical in the eyes of the hypothetical decision maker involved in this exercise. • The PC conjugate vaccine -- in its original specification -- does not provide as much value as either of the polysaccharide vaccines and would not be the vaccine of choice.
From page 57...
... This sensitivity highlights the importance, when using SMART Vaccines, of agreeing on attributes and their weights at the beginning of any evaluation process rather than modifying those weights to achieve some preconceived result.
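That sensitivity is easy to reproduce in miniature: the same two candidates can switch order when the weights change, which is why the weights should be fixed before any scores are computed. The sketch below uses invented candidates, attribute values, and weight sets.

```python
# Two hypothetical candidates scored under two different weight vectors.
# Everything here is invented; it only illustrates why weights should be agreed
# on before the evaluation rather than tuned afterward.

candidates = {
    "Vaccine A": {"deaths_averted": 80.0, "cost_effectiveness": 30.0},
    "Vaccine B": {"deaths_averted": 40.0, "cost_effectiveness": 75.0},
}

weight_sets = {
    "health-weighted": {"deaths_averted": 0.7, "cost_effectiveness": 0.3},
    "cost-weighted":   {"deaths_averted": 0.3, "cost_effectiveness": 0.7},
}

def score(values: dict, weights: dict) -> float:
    return sum(weights[k] * values[k] for k in weights)

for label, weights in weight_sets.items():
    ranking = sorted(candidates, key=lambda name: score(candidates[name], weights), reverse=True)
    print(label, {name: round(score(candidates[name], weights), 1) for name in ranking})
# The health-weighted set favors Vaccine A; the cost-weighted set favors Vaccine B.
```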
From page 58...
... values on the final score.
From page 59...
... The initial score of –27 dropped to –28, indicating that the additional costs associated with increasing the coverage outweighed the benefits for this vaccine.
From page 60...
... feature shows that this one product design improvement was able to elevate the score from –27 to +4.
From page 61...
... , and the number of doses (decreased from 3 to 2) dramatically increased the SMART Score of the PC vaccine candidate from an initial score of –27 to +35, thus surpassing the scores of PS23 (31)

