
5
Program Management

5.1
INTRODUCTION

Each agency’s Small Business Innovation Research (SBIR) program operates under legislative constraints that set out program eligibility rules, the program’s three-phase structure, and (to a considerable extent) phase-specific funding limitations. Beyond these basic structural characteristics, programs have flexibility to decide how they select, manage, and track awards.

The similarities and differences across the agencies are discussed in detail in the following sections.

5.2
TOPIC GENERATION AND UTILIZATION

All applications for SBIR funding are made in response to published documents—termed solicitations—that describe what types of projects will be funded and how funding decisions will be made. Each agency publishes its solicitation separately. Subject areas eligible for SBIR funding are referred to as “topics” and “subtopics,” and each agency chooses its topics and subtopics in a different fashion with different goals in mind.

We have identified three distinct models of topic use at the SBIR agencies:

  1. Acquisition-oriented Topic Procedures. The Department of Defense (DoD) and the National Aeronautics and Space Administration (NASA) solicit for topics that channel applications toward areas that are high acquisition priorities for the awarding agency. For these agencies, the goal is to target R&D spending toward projects that will provide technologies that can eventually be acquired by the agency for agency use. Applications are not accepted outside topic areas defined in the solicitation.

  2. Management-oriented Topic Procedures. At the National Science Foundation (NSF) and the Department of Energy (DoE), topics are used to serve the needs of agency management, not as a method of acquiring technology. NSF primarily focuses on ensuring that the research agendas of the various NSF directorates (divisions) are closely served. This enhances agency staff buy-in for SBIR. At DoE, topics are also used as a screen to reduce the number of applications so that the agency’s limited SBIR staff is able to manage the program effectively. As a result, neither agency will accept applications outside topic areas defined in the solicitation.

  3. Guideline-oriented Topic Procedures. The National Institutes of Health (NIH) is the only agency that uses topics as guidelines and indicators, not hard delimiters. NIH issues annual topic descriptions, but emphatically notes that it primarily supports investigator-driven research. Topics are defined to show researchers areas of interest to the NIH Institutes and Centers (ICs), not to delimit what constitutes acceptable applications. Applications on any topic or subject are therefore potentially acceptable.

5.2.1
Acquisition-oriented Approaches

Acquisition-oriented approaches are designed to align long-range research with the likely needs of specific agencies and programs. How well this works depends to a great degree on the specific topic development mechanisms used at each agency.

5.2.1.1
Topic Development

In DoD, topics originate in Service Laboratories1 or in Program Acquisition Offices. Many awarding units within DoD do not have their own laboratories, and depend on the Service Laboratories for “in-house” expertise. They request topics from these experts, who thus become topic authors. These authors frequently become the technical monitors for the contracts that are awarded based on their topics.

Since 1999, DoD has made a number of efforts to ensure that topics are closely aligned with the needs of acquisition programs. For example, each major acquisition program has an SBIR liaison officer, who works with SBIR program managers within DoD and with the SBIR contractor community. Their job is to provide a mechanism through which contractors can communicate with end customers in acquisition programs. The liaison officers may author topics, or cause them to be authored.

1 The laboratories develop technologies to meet long-term agency needs.

5.2.1.2
Topic Review

The topic review process at an acquisition-oriented agency is stringent and extensive. Alignment with mission and with anticipated technology needs is crucial. After each individual DoD component has completed its own internal topic review process, the results go through the department-wide DoD review process. This multistage topic approval process takes a considerable amount of time, sometimes more than a year.

Intracomponent review varies in duration, roughly in proportion to component size (smaller components with fewer proposed topics take less time). The Army topic review is a centralized online process that takes 4 months, while the Air Force review is less centralized (but also online) and takes up to 15 months.

5.2.1.3
Broad versus Narrow Topics

There is a tension between the need to write topics tightly to ensure that they align well with acquisition programs, and the desire to write them broadly enough to fund novel mechanisms for resolving agency problems. Components have tended to adopt one or the other of these approaches. For example, the Missile Defense Agency (MDA) has until recently had only a small number of broad topics, evolving only gradually from year to year, which provides great flexibility to proposing firms.

DoD is addressing this tension by calling for all topics to be written to allow a bidder “significant flexibility” in achieving the technical goals of the topic.2 For example, the Navy might need corrosion-resistant fastenings for use on ships. Instead of writing a topic that specifies the precise kinds of technology to be utilized, managers are now pushing for topics that simply state the problem and leave the technology itself up to the applicant.

5.2.1.4
Quick Response Topics

Particularly since 9/11 and the start of the conflict in Iraq, DoD has had acquisition needs with short timeframes. Quick response topics offer a new way to short-cut the often lengthy topic review process.

As part of the DoD’s SBIR FY-2004.2 solicitation, the Navy added three special SBIR Quick Response Topics. The rules for these topics were slightly different: They offered a three- to six-month Phase I award of up to $100,000 (with no option), with successful Phase I firms being invited to apply for a Phase II award of up to $1,000,000.3 The topics were released more rapidly because they did not go through the DoD-level approval process.

2 This is a correction of the text in the prepublication version released on July 27, 2007.

5.2.1.5
Topic/Funding Allocation

The allocation of topics effectively controls the allocation of money. There are two basic models for agency funding allocation via topics:

  • Percentage of the Gross. All SBIR programs struggle with the problem of agency buy-in. Many agencies have traditionally viewed SBIR as a “small business tax” on their research budgets. This is even more important where the agency is itself the most important market for SBIR products and services, and acquisitions officers are the key customers. SBIR is funded through a set-aside of 2.5 percent of agency extramural research, and some senior agency staff have claimed that they could find better ways to spend that money. To address these concerns, some agencies and some components have designed their SBIR programs to tighten the link between the results of the SBIR research and the agency unit that has been “taxed” to fund the SBIR program. One way to do this is by allotting a specific number of topics to each funding unit, which then effectively controls the content of the research funded by “its” SBIR money. For example, in the Army and Air Force, the number of topics is allocated to agency laboratories specifically on the basis of each lab’s overall R&D budget—and hence its contribution to the SBIR program funding pool.4

  • Technology-driven. Topic decisions can be more centralized or otherwise divorced from funding unit control. This allows more flexible administration and more rapid response to urgent needs (via easier reallocation of funding between technical areas), but risks reduced unit buy-in.

5.2.1.6
DoD Pre-release

Federal Acquisition Regulations prohibit any SBIR applicant contact with the relevant agency after a solicitation opens, other than through written questions to the contracting officer, who must then make the question and the answer available to all prospective bidders.

Under its pre-release program, DoD posts the entire projected solicitation on the Internet about two months before the solicitation is to open. Each topic includes the name and contact information for the topic author. Firms may contact the authors and discuss the problem that the government wants to solve, as well as their intended approach.

3 The Navy’s quick review approach is discussed further in National Research Council, An Assessment of the SBIR Program at the Department of Defense, Charles W. Wessner, ed., Washington, DC: The National Academies Press, 2009.

4 The Air Force also allocates the number of funded Phase I projects.

Often, a short private discussion with the topic author can help a company avoid the cost of a proposal, or give the company a better idea of how to approach the SBIR opportunity. Small firms can learn of possible applications of their technology in acquisition programs, and determine which primes are involved in those programs. Firms new to SBIR often get procedural information, and are steered to the appropriate DoD Web sites for further information.

5.2.1.7
Company Influence on Topic Development

In some sense, private company influence on topic development is both positive and sought after. Companies often understand cutting-edge technologies better than the agencies, and may be able to suggest innovative lines of research. At DoD, involvement of prime contractors in topic development is one further way to help build linkages between the major acquirers of technology and the SBIR program.

There have been some criticisms that topics are “wired”—that they are written for a specific company. There is no simple way to determine whether topics are wired, at DoD or elsewhere. The fact that about 40 percent of DoD Phase I winners are new to the program does, however, suggest that new or unconnected firms have significant opportunities to participate in the SBIR program. It is also worth noting that while firms with long connections to DoD are best placed to help design topics friendly to their specific capacities, those long connections are in many cases, according to agency staff,5 the result of effective performance building to a strong track record in the course of previous SBIR awards.

5.2.2
Management-oriented Approaches to Topic Utilization

Management-oriented approaches differ from the acquisition-driven programs in that they do not evaluate applicants based on the government agency’s acquisition needs.

Instead, there are management priorities. The two agencies using this approach differ substantially in the origins of these priorities. At NSF, SBIR funding decisions are driven by the technical decisions of the major funding directorates. Each directorate provides one annual list of topics, and its objective is explicitly to direct SBIR funding into targeted research areas. DoE’s priorities are described in the case study below.

5 See National Research Council, SBIR and the Phase III Challenge of Commercialization, Charles W. Wessner, ed., Washington, DC: The National Academies Press, 2007.

BOX 5-1

Topics at DoE, by Technical Area

For the 2003 competition, the 11 program areas at DoE were as follows:

  • Defense Nuclear Nonproliferation (3 Topics: 82 applications received);

  • Biological and Environmental Research (6 Topics: 140 applications received);

  • Environmental Management (2 Topics: 38 applications received);

  • Nuclear Energy (1 Topic: 15 applications received);

  • Basic Energy Sciences (8 Topics: 222 applications received);

  • Energy Efficiency (6 Topics: 287 applications received);

  • Fossil Energy (6 Topics: 169 applications received);

  • Fusion Energy Sciences (3 Topics: 66 applications received);

  • Nuclear Physics (4 Topics: 56 applications received);

  • Advanced Scientific Computing Research (2 Topics: 62 applications received); and

  • High Energy Physics (6 Topics: 87 applications received).

SOURCE: Department of Energy.

5.2.2.1
Case Study: DoE

DoE attempts to provide each of its technical program areas with a “return on investment” equal to its SBIR contribution. Each year, for each technical program area, the SBIR office attempts to ensure that the dollar amount of all awards (Phase I plus Phase II) awarded to proposals submitted to that program area’s technical topics is equal to that program area’s contribution to the SBIR set-aside. Although the dollar amount is the primary consideration, the number of awards each program area gets—in both Phase I and Phase II—is also roughly proportional to the funding it provides to the SBIR program.

To generate a “fair” return, each program area is provided with a number of technical topics that is proportional to its share of the total contribution. For example, if Fossil Energy (FE) provided 10 percent of the DoE’s set-aside, FE would receive 10 percent of the technical topics, and also would receive Phase I and Phase II awards (from proposals submitted to those topics) whose dollar value equaled 10 percent of the set-aside.
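To make the arithmetic of this “fair return” rule concrete, the following is a minimal sketch. The proportional-allocation rule is from the text; the contribution figures, topic total, and function name are invented for illustration—the actual DoE allocation is an administrative process, not software.

```python
def fair_return_allocation(contributions, total_topics, total_award_dollars):
    """Allocate topic counts and award dollars in proportion to each
    program area's contribution to the SBIR set-aside."""
    total = sum(contributions.values())
    allocation = {}
    for area, paid_in in contributions.items():
        share = paid_in / total
        allocation[area] = {
            # Rounding means topic counts may not sum exactly to the total;
            # in practice the SBIR office would adjust such edge cases by hand.
            "topics": round(share * total_topics),
            "award_dollars": share * total_award_dollars,
        }
    return allocation

# Illustrative: if Fossil Energy provides 10 percent of the set-aside,
# it receives roughly 10 percent of the topics and of the award dollars.
contributions = {
    "Fossil Energy": 10_000_000,
    "Basic Energy Sciences": 50_000_000,
    "Nuclear Physics": 40_000_000,
}
for area, a in fair_return_allocation(contributions, 47, 100_000_000).items():
    print(f"{area}: {a['topics']} topics, ${a['award_dollars']:,.0f}")
```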

About six months before the fall publication of the annual program solicitation, the DoE SBIR Program Manager sends a “call for topics” memo to all Portfolio Managers responsible for technical program areas within DoE. This call also goes to all technical topic managers (TMs) from the prior year. The call includes the National Critical Technologies List,6 as well as guidelines for topic development.

6 As conveyed to DoE from the U.S. Small Business Administration.

Topics and subtopics are submitted to or generated by the portfolio managers, who determine which topics to forward to the SBIR Program Office. Portfolio managers develop a topic list that matches the amount of funding allocated to them, and this list is 100 percent funded by the SBIR Office, which performs no further review.

The DoE SBIR Program Manager then edits the topics into a common format, and may seek to narrow topics to limit the number of applications.7 About 50 percent of DoE topics change from year to year.

5.2.3
Investigator-driven Approaches

NIH is proud of its investigator-driven approach. The agency believes its approach is the best way to fund the best science, because it substitutes the decisions of applicants and peer reviewers for the views of agency or program staff.

A small but growing percentage of NIH SBIR funding, however, goes to Program Announcements (PAs) and Requests for Applications (RFAs), which are Institute-driven requests for research proposals on specific subjects; investigator-initiated applications make up the remainder. Discussions with agency staff suggest that this percentage may increase, perhaps substantially.8

5.2.3.1
Standard Procedure at NIH

The NIH Web site and the documents guiding SBIR applications are both replete with statements that investigators are free to propose research on any technical subject, and that the topics published in the annual solicitation are only guides to the current research interests of the Institutes and Centers.

Topics are developed by individual ICs for inclusion in the annual omnibus solicitation. Typically, the Program Coordinator’s office sends a request to the individual program managers (SBIR Point of Contact) at each IC. These PMs in turn meet with division directors, the locus of research assignments within the IC.

Division directors review the most recent omnibus solicitation (with their staff), and suggest changes and new topics based on recent developments in the field, areas of particular interest to the IC, and agency-wide initiatives with implications within the IC. The revised topics are then resubmitted for publication by the Office of Extramural Research (OER), which does not appear to vet the topics further.

7 Interview with Bob Berger, former DoE SBIR Program Manager, March 18, 2005.

8 Interview with National Cancer Institute staff on March 6, 2007.
5.2.3.2
Procedures for Program Announcements (PAs) and Requests for Applications (RFAs)

PAs and RFAs occupy a position closer to management-oriented approaches. Essentially, they add a parallel structure within the NIH SBIR program through which the ICs and the agency as a whole can fund their own research priorities.

PAs/RFAs are announcements of research funding areas that the IC expects to prioritize. PAs are areas of special interest that are still evaluated within the broad applicant pool for SBIR. RFAs are evaluated separately, and their applicants compete for a separate pool of SBIR funding set aside by the awarding IC.

In both cases, the announcement is published by one or more ICs as a reflection of top research priorities at the IC. It offers applicants a more directed path, aligned with agency priorities. NIH does still try to ensure that while PAs and RFAs define a particular problem, they are written broadly enough to encompass multiple technical solutions to the problem.

PAs and RFAs appear to be an effort to develop a middle ground between topic-driven and investigator-driven research. Essentially, by layering PA/RFA announcements on top of the standard approach, NIH seeks to focus some resources on problems that it believes to be of especially pressing concern, while retaining the standard approach as well.

5.2.4
Topics: Conclusions

There are some obvious advantages to an investigator-driven approach:

  • Applications are likely to respond to current concerns more rapidly (given the long lags involved in topics developed with other mechanisms).

  • Multiple deadlines are easier because an official solicitation does not have to be adjusted.

  • Better science may be attracted, as promising and truly innovative new technologies are not excluded.

Acquisition- and management-driven models have their own advantages:

  • Agencies can much more easily impose their own research agendas.

  • Agencies can better ensure that SBIR research is aligned with eventual agency acquisition needs.

  • Narrow technical “windows” mean that the agency can limit the technologies being addressed by a particular solicitation.

  • Narrower topic options mean fewer applications, limiting the amount of staff and reviewer time required.

The case for better science may be the most important point here. By definition, if topics are designed to sharply limit/focus research on particular problems, or in some cases on particular technologies, they are excluding other kinds of research. And, simple mathematics would suggest, if possible applicants are excluded a priori, the average quality of successful applications is likely to fall.

This suggests the following conclusions:

  • The acquisition-oriented model seems appropriate at the agencies where eventual technology acquisition by the agency is the primary objective.

  • The management-oriented model seems much less immediately defensible.

    • It is difficult to see how the narrowing of potential topics at NSF best serves the interest of science: in conjunction with the annual funding cycle, this ensures that excellent science may have to wait for several years before its number comes up in the NSF topics lottery.

    • It is even more difficult to defend the narrowing of selection areas on the basis of administrative convenience, as at DoE. Limited resources mean that all agencies limit the number of SBIR awards that are made, but it is hard to understand how doing so through a deliberate decision to reduce the scope of possible applications makes much sense.

  • The NIH investigator-driven model seems to be working effectively for NIH.

  • Hybrid models are worth further assessment. NIH appears to be evolving toward a hybrid model, as the share of awards going to RFAs and PAs is growing. This suggests possible options with which other agencies could experiment. For example, DoD components might reserve a small percentage of funding for “open” topics that are essentially investigator-driven.

It is worth noting that in other countries, hybrid models are used. For example, in Sweden, approximately half of Vinnova’s9 research funding is distributed to agency programs focused on the research needs of specific industries. The other half is allocated to proposals for research initiated by companies.10

5.3
OUTREACH MECHANISMS AND OUTCOMES

5.3.1
Introduction

As the program has become larger and more important to early-stage funding for high-tech companies, outreach activities by the programs have increasingly come to be complemented by outreach initiatives at the state level.

At the different agencies, outreach programs seem to share three key objectives:

  • Attracting new applicants.

  • Ensuring geographical diversity in applications by attracting new applicants from under-represented regions.

  • Expanding opportunities for disadvantaged and woman-owned companies.

9 VINNOVA is the Swedish national technology agency.

10 VINNOVA Web site, accessed at <http://www.vinnova.se>.

These objectives are implicit, rather than explicit, in the outreach activities of all the programs. With the growth in electronic access, it is unclear whether greater outreach efforts could be cost-effective.

5.3.2
Outreach Mechanisms

At the program level, all the agencies conduct outreach activities, though those efforts vary in kind and degree. Important types of outreach activities include:

  • National SBIR Conferences, which twice a year bring together representatives from all of the agencies, often at locations far from the biggest R&D hubs (e.g., the Spring 2005 national conference took place in Omaha, Nebraska).

  • Agency-specific Conferences, usually held annually, which focus on awardees from specific agencies (e.g., the NIH conference, which is usually held in Bethesda, Maryland).

  • Phase III Conferences, focused on bringing together awardees, potential partners, such as defense industry prime contractors, and potential investors (e.g., the Navy Opportunity Forum, held annually in the Washington, DC area).

  • The SWIFT Bus Tour, which makes an annual swing through several “under-represented” states, with stops at numerous cities along the way. Participants usually include some or all of the agency program managers/directors/coordinators.11

  • Web Sites and Listservs are maintained by all of the agencies. These Web sites contain application and support information. Some, such as the DoD’s site, are elaborate and comprehensive, and most allow users to sign up for a news listserv.

  • Agency Publications are used to spread information about the SBIR program. NASA, for example, has a variety of print and electronic publications devoted to technology and research promotion. The other departments use this publicity vehicle to a lesser degree.

  • Other, non-SBIR-focused Publications are sometimes used by the agencies. For example, DoD publishes solicitation pre-publication announcements in Commerce Business Daily.

  • Demographic-focused Outreach. Agencies acknowledge the need to ensure that woman- and minority-owned businesses know about SBIR and are attracted to apply. They have organized a series of conferences to encourage participation in the program by woman- and minority-owned businesses, often run in conjunction with other groups. For example, the DoD Southeastern Small Business Council held a conference for Woman-owned Small Businesses in Tampa, Florida, in June 1999.

11 Agencies have different titles for their primary SBIR program manager.

In general, these activities are the responsibility of the SBIR Program Coordinator at the agency, rather than other agency staff. They are also funded out of general agency funds, as the governing SBIR legislation does not permit programs to use SBIR funds for purposes other than awards (and a limited amount of commercialization assistance; see below).

Generally, agencies do not provide much funding for outreach, though there are pronounced differences between the agencies. NIH, for example, has developed a popular annual SBIR conference, whereas DoE has no funds for such an initiative. Within DoD, the Navy has provided significant extra operating funds for its SBIR program, which has therefore been able to take on outreach and other initiatives not open to the programs at other DoD components.

5.3.3
Outreach Outcomes

5.3.3.1
New Winners

As SBIR programs have become larger, with longer track records and better online support, it is increasingly less likely that potential candidate companies will remain unaware of the program, or that such companies will have difficulty understanding how to apply.

Still, new applicants remain an important component of the applicant pool (see Figure 5-1, for example). This may suggest that the current outreach activities are having the intended effect. Much learning about new opportunities such as SBIR is conveyed directly. Another interpretation of the data is that outreach activities have become increasingly unnecessary in a world where a wide range of economic development institutions at the state and local level automatically point small high-tech companies toward SBIR as a funding source, and where public information about SBIR is fairly widespread and online.

Anecdotal evidence from the SWIFT bus tour participants suggests that outreach activities do encourage new firms to enter the program. The program coordinator for NIH has noted that there is usually an upturn in applications from visited states after the tour, especially from states that have had relatively few applicants in the past.12

FIGURE 5-1 Phase I new winners at NSF (previously nonwinners at NSF).

SOURCE: National Science Foundation.

5.3.3.2
Applicants from Under-served States

The distribution of SBIR awards among states varies widely. California and Massachusetts consistently receive a large share of awards in federal R&D programs.13 This is also true for SBIR. Conversely, about a third of the states receive few awards.

At NASA, for example:

  • California and Massachusetts dominate, garnering 36.7 percent of Phase II awards.

  • The top five states account for 54.4 percent of all awards.

  • Nineteen states had ten or fewer Phase II awards from FY1992 to FY2003, and 16 had fewer than four.

As yet, no systematic research has been conducted that would assess the distribution of SBIR awards relative to those of other research programs, or to other R&D indicators such as the distribution of scientists and engineers in the population.

12 Jo Anne Goodnight, Personal Communication, May 15, 2004.

13 See National Science Board, Science and Engineering Indicators 2006, Arlington, VA: National Science Foundation, 2006, Chapter 8: State Indicators. Accessed at <http://www.nsf.gov/statistics/seind06/c8/c8.cfm>.

This issue is discussed in more detail in Chapter 3. In general, the evidence suggests that while the broad pattern of awards has not changed much, more awards are going to states that previously received none or only a few.

5.3.3.3
Applications from Woman- and Minority-owned Businesses

Congress has mandated that one objective of the SBIR program is to increase support for woman- and minority-owned businesses. Agencies track participation by these demographic groups in their programs. Unfortunately, application data from DoD is insufficiently accurate for use in this area.

Our analysis in Chapter 3 utilized the SBA data, and generated Figure 3-13.

Two core questions emerge:

First, the decline in the share of awards to minority-owned businesses is, on the surface, an issue, given that support for these businesses is one of the four Congressional objectives for the program. At a minimum, it will be important for agencies to determine why this trend is occurring, and—if necessary—to identify ways to address it.

Second, even though the share of awards going to woman-owned businesses has been increasing, it is worth noting that women now earn almost half of all doctorates in certain scientific and engineering fields—such as the life sciences—and that award rates lag this share considerably. Again, further assessment by the agencies is necessary to determine if there is a problem, and what should be done to address it.

5.3.4
Conclusions—Outreach

Information for potential applicants falls into two areas: (1) basic program information for potential applicants who have not heard of the program or have not yet applied and won awards; and (2) detailed technical information about specific solicitations and opportunities for those familiar with the program.

Interviews with agency staff and SBIR awardees indicate that the agencies have substantially upgraded their SBIR online information services in recent years. The Web has become the primary source of information for both existing awardees and potential new applicants.

While some of the Web sites are still not especially easy to navigate or utilize, collectively they represent a vast improvement in outreach capacity for the agencies. Moreover, online services help to ensure that potential applicants get up-to-date information.

For those with little information about the program, the states have increasingly come to play an important information-providing role. Almost all states have a technology-based economic development agency (TBED) of some kind. Many have TBEDs that are well-known and effective organizations. These organizations are aware of SBIR. They help to organize local and regional conferences with the SBIR agencies, they attend other SBIR events, and they are often in direct contact with agency staff.

Some TBEDs, such as the Innovation Partnership in Philadelphia, include SBIR as a key component. They center many of their activities around SBIR by helping local companies develop winning applications, and they integrate SBIR into a wider system for economic development. For example, Innovation Philadelphia has a multiple-funding model for early-stage financing, which includes a role for SBIR.14

Overall, most companies with some reasonable likelihood of success in applying to the SBIR program have heard about SBIR and can find out how to fill in the appropriate application forms. To the extent that they want help in the process, there is a small cottage industry of SBIR consultants eager to provide that service.

Thus, it appears that the general outreach function historically fulfilled by the SBIR Agency Coordinators/Program Managers may now be changing toward a more nuanced and targeted role. Outreach can focus on enhancing opportunities for under-served groups and under-served states, or on specific aspects of the SBIR program such as the transition to the market after Phase II (e.g., the July 2005 DoD “National Phase II Conference—Beyond Phase II: Ready for Transition” held in San Diego), while relying on the Web and other mechanisms to meet the general demand for information.

5.4
AWARD SELECTION

5.4.1
Introduction

SBA Guidelines require that an agency should use the same selection process for its SBIR program as it does for its other R&D programs; as award selection processes differ across agencies, so do selection procedures for SBIR.

The selection process is critical for the long-run success of any government award program. For SBIR, the selection process is complicated by the fact that the program serves many masters and is aimed at many objectives. A successful selection process must therefore meet a number of quite distinct criteria. Discussions with agency staff and award winners, and with the other stakeholders, suggest that the following are key criteria for judging the quality of the SBIR selection processes:15

  • Fair. Award programs must be seen to be fair, and the selection process is a key component in establishing fairness. Despite numerous complaints about details of the NIH selection process, there is uniform acceptance among stakeholders that, in general, the process is fair.

  • Open. A successful SBIR program must be open to new applicants. Agencies generally score high on this, with a third or more of Phase I awards at most agencies going to new winners.

  • Efficient. The selection process must be efficient, making the best use of the time of applicants, reviewers, and agency staff.

  • Effective. The selection process must achieve the objectives set for it. In practice, these can be summarized as selecting the applications that show the most promise for:

    • Commercializing results,

    • Helping to achieve the agency’s missions,16

    • Expanding new knowledge, and

    • Providing support for woman- and minority-owned businesses.

14 Innovation Philadelphia Web site, accessed at <http://www.ipphila.com>.

15 While these are in reality the implicit criteria against which agencies evaluate their selection procedures, these criteria are not explicitly recognized in any agency, and the agencies balance them quite differently.

The effectiveness of the agencies’ SBIR programs is discussed elsewhere, in Chapter 4: SBIR Program Outputs. The remaining components of the selection process are discussed in more detail below, along with some other selected topics, including the degree of centralization and a more detailed review of the NIH approach based on outside reviewers.

5.4.2
Approaches to Award Selection

Key questions in discussing award selection concern the use of outside reviewers, and who makes final decisions on funding.

5.4.2.1
Use of External Reviewers

DoD. DoD components do not use external reviewers.17 Two or three technical experts at the laboratory level review each proposal. Proposals are judged competitively on the basis of scientific, technical, and commercial merit, in accordance with the criteria listed in Box 5-2. Prior to the closing of the solicitation, the responsibility for each topic has been clearly established, so reviewers can access “their” proposals immediately after the closing. If a proposal is beyond the expertise of the designated reviewers, the person with overall topic responsibility will obtain additional reviewers.

16 The other mandated goals of the program tend to have less impact on award selection, emerging from the implementation of awards.

17 There are exceptions for support personnel working at government laboratories. The Air Force section of the solicitation states “Only government personnel and technical personnel from Federally Funded Research and Development Center (FFRDC), Mitre Corporation and Aerospace Corporation, working under contract to provide technical support to Air Force product centers (Electronic Systems Center and Space and Missiles Center respectively), may evaluate proposals.”


NIH. NIH relies exclusively on external reviewers to provide a technical and commercial assessment of proposals. A few applications received for specific technologies being sought by the agency (via the RFA mechanism) are evaluated at the proposing unit, but even then, outside reviewers provide much of the technical input.


NASA. NASA does not use outside reviewers. Initial technical reviews are done by a panel of technical experts convened at the relevant NASA Centers. Program staff at headquarters subsequently review the awards for alignment with NASA objectives and for other purposes.


DoE. DoE uses internal experts recruited for each specific proposal from across the agency to provide technical and commercial evaluations. After receiving the completed reviews, proposals are scored by the Technical Topic Manager (TTM). Only proposals scoring at least +2 (out of a possible score of +3) are eligible for selection. The SBIR office reviews for discrepancies between the average reviewer score and the TTM’s score, and resolves any conflict.

Portfolio Managers for technical program areas select the awardees from that program area. Different programs within DoE have differing philosophies and mechanisms for award selection. Some provide equal awards to each of the program area’s topics. Others select applications with the highest score. DoE tries to decentralize decision making, so that those with the greatest knowledge of the technology make the decisions.
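A minimal sketch of the DoE screening step described above may help. The +2-of-+3 eligibility floor is from the text; the reviewer/TTM discrepancy threshold, the data shapes, and the function name are illustrative assumptions.

```python
# Sketch of the DoE screening logic described above: reviewers score each
# proposal, the Technical Topic Manager (TTM) assigns a score, only
# proposals with a TTM score of at least +2 (of a possible +3) remain
# eligible, and the SBIR office flags reviewer/TTM discrepancies for
# resolution. The 1.0-point discrepancy limit is an invented placeholder.

def screen_proposal(reviewer_scores, ttm_score,
                    eligibility_floor=2.0, discrepancy_limit=1.0):
    avg_review = sum(reviewer_scores) / len(reviewer_scores)
    return {
        "eligible": ttm_score >= eligibility_floor,
        "flag_for_sbir_office": abs(avg_review - ttm_score) > discrepancy_limit,
    }

print(screen_proposal([2.5, 3.0, 2.0], ttm_score=2.5))
# {'eligible': True, 'flag_for_sbir_office': False}
print(screen_proposal([1.0, 1.5, 1.0], ttm_score=2.5))
# {'eligible': True, 'flag_for_sbir_office': True}  -> SBIR office resolves
```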


NSF. NSF uses at least three outside (non-NSF) reviewers for each application. Their ratings are then used to guide the Program Officer in making funding decisions. Final approval follows review by the Division of Grants and Agreements.

5.4.2.1.1
Outsider-dominated Processes: The NIH Approach

The peer review process at NIH is by far the most elaborate of all the agencies. It is operated primarily through the Center for Scientific Review (CSR), a separate IC at NIH which serves only the other ICs—it has no direct funding responsibilities of its own.


Study Sections. Applications are received at CSR and are assigned to a particular study section18 based on the technology and science involved in the proposed research. Panels can either be permanent panels chartered by Congress, or temporary panels designated for operation by NIH. Most SBIR applications are assigned to temporary panels, many of which specialize in SBIR applications only. This trend appears to be accelerating, as the requirements for assessing SBIR applications—notably the commercialization component—are quite different from those for the basic research conducted under most NIH grants.

18 As review panels are known at the National Institutes of Health.

Special emphasis panels (SEPs)—temporary panels—are reconstituted for each funding cycle. Almost all SBIR applications are addressed by SEPs, which have a broader technology focus, and less permanent membership, than chartered panels. Many SEPs tend to draw most of their applications from a subset of ICs. For example, the immunology Integrated Review Group (IRG) covers about 15 ICs, but 50 percent of its work comes from the National Institute of Allergy and Infectious Diseases (NIAID), with a further 33 percent from the National Cancer Institute (NCI).


Procedures. Applications are assigned to a subset of the panel—two lead reviewers and one discussant. These panelists begin by separating out the bottom half of all applications. These receive a written review, but are not formally scored. The top 50 percent are scored by the three reviewers and then, after discussion, by the whole panel.


Panel Makeup. NIH guidelines are that at least one panelist should have a small business background. One current panel, for example, had 13 small business representatives out of 25 panelists. This is a recent change; previous panels in this technical area had been dominated by academics. NIH guidelines mandate that panels have 35 percent female and 25 percent minority participation.


Scoring is based on five core criteria:

  • Significance

  • Approach

  • Innovation

  • Investigators

  • Environment

No set point values are assigned to each. Scores are averaged (no effort is made to smooth results, for example, by eliminating the highest and lowest scores), and range between 100 (best) and 500 (worst). Winning scores are usually in the 180-230 range or better, although this varies widely by IC and by funding year. Scores are computed and become final immediately.
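As a rough illustration of this unsmoothed averaging, consider the sketch below. It assumes the conventional NIH mapping of the era—individual reviewer scores of 1.0 (best) to 5.0 (worst), averaged and multiplied by 100 to give the 100-500 range—which is an assumption here, not stated in the text; the panel scores are invented.

```python
# Sketch of NIH-style priority scoring as described above: raw panel
# scores are averaged with no trimming of the highest and lowest values,
# so a single outlier reviewer moves the final score.
# Assumption: reviewers score 1.0 (best) to 5.0 (worst), mean scaled x100.

def priority_score(panel_scores):
    """Average panel scores and scale to the 100 (best)-500 (worst) range."""
    return round(100 * sum(panel_scores) / len(panel_scores))

print(priority_score([1.5, 2.0, 2.1, 2.2, 2.0]))  # 196 -> in the winning range
print(priority_score([1.5, 2.0, 2.1, 2.2, 4.5]))  # 246 -> likely unfunded
```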


Reviewers and Funding Issues. Reviewers are specifically not focused on the size of the funding requested; they are tasked to review whether the amount is appropriate for the research being proposed. As a result, the question of trade-offs between a single larger award and multiple smaller awards is not addressed. On the other hand, this approach permits applicants to propose projects of larger scope and value to the agency mission.


Reviewers and Outcomes of Prior Awards. If the proposing company has received more than 15 Phase II awards, it must note that on its applications. Otherwise, the application forms have no place to list previous awards. Companies with strong track records try to make sure that this is reflected in the text of their application. However, there is no formal mechanism for indicating the existence of, or outcomes from, past awards. Reviewers also do not know the minority or gender status of the PI or of the company’s owners.


Positive and Negative Elements of Outside Review. On the positive side, strong outside review generates a range of benefits. These include:

  • The strong positive endorsement for applications that comes from formal peer review.

  • The alignment of the program with other peer-reviewed programs at NIH.

  • The perception of fairness related to outside review in general.

  • The absence of claims that awards are written specifically or “wired” for particular companies.

  • The probability that reviewers will have technical knowledge of the proposed award area.

On the negative side, recent efforts to infuse commercialization assessment into the process have had mixed results at best. Problems include:

  • Quality of reviews decreasing as workload increases.

  • Perceptions of random scoring.

  • Conflict of interest problems related to commercialization.19

  • Substantial delays in processing.

  • Growing questions about the trade-offs between different size awards.

Overall, outside review adds fairness but also adds complexity and possibly delay. The generic NIH selection process has not yet been substantially adjusted to address the needs of companies trying to make rapid progress in an increasingly competitive environment. Delays that might be appropriate at an institution focused on academic and basic research may be less applicable to smaller businesses working on a much shorter development cycle.

19 One example of conflict of interest may arise when the commercialization reviewer has a similar or competitive product planned for or already in the market.

5.4.3
Fairness

The fairness of selection procedures depends on several factors. These include:

  • Transparency—is the process well known and understood?

  • Implementation—are procedures always followed?

  • Checks and balances—are outcomes effectively reviewed by staff with the knowledge and the authority to correct mistakes?

  • Conflict of interest—are there procedures in place and effectively implemented to ensure that conflicts of interest are recognized and eliminated?

  • Appeals and resubmissions—are there effective appeals and/or resubmission procedures in place?

  • Debriefings—do agency staff give both successful and unsuccessful applicants good feedback about their proposals?

Each agency has strengths and weaknesses in these areas. They are summarized below.

5.4.3.1
NIH

Transparency. At NIH, the review process is well known to many applicants because it is almost the same as selection procedures for all other NIH awards. Interviews with awardees suggested that many had sat on NIH review panels and were familiar with review procedures.


Implementation. The NIH review procedures are highly formalized and are implemented by a professional and independent review staff at CSR.


Checks and Balances. NIH scores somewhat lower on this factor than do other agencies, because so much power is placed in the hands of independent reviewers. Resubmission (see below) replaces appeals for practical purposes at NIH, but adds substantial delay. Thus, there are few options available for applicants who believe that reviewers misunderstood or misjudged their proposals.


Conflict of Interest. NIH does have clear conflict of interest regulations in place for reviewers, and also has procedures in place that allow applicants to seek to exclude individual panel members from reviewing their application. However, the extent to which this works in practice is not clear, and may depend on individual CSR officers. Conflict of interest has been raised repeatedly by awardees as a problem.


Resubmissions are the standard mechanism for improvement and/or appeal at NIH, and about one-third of all awards are eventually made after at least one resubmission. Awardees noted in interviews that this capacity to resubmit enhances perceptions of fairness.20

5.4.3.2
DoD

DoD’s SBIR program is regulation-driven (as is natural at a procurement-oriented agency). Its processes are governed by the Federal Acquisition Regulations (FAR), the Defense FAR Supplement (DFARS), and component-specific supplements.21 Interviewees among the companies presented few complaints that the program was unfair. Overall, DoD relies on the possibility of appeals, and on the self-interest of topic managers who are seeking to find technology that they can use, to ensure that the process is both fair and seen to be fair.


Transparency. DoD selection processes are less transparent than those at NIH. Information about selection criteria is available in the text of the solicitation, but there is no information there on selection processes—i.e., who will review the proposal, and how the review will be conducted. These procedures are agency specific, and in most cases minimal information is publicly available. For example, the Air Force has no information publicly posted on the agency’s Web site concerning selection procedures.22


Implementation. There are no data concerning the degree to which DoD SBIR procedures are followed. As the procedures themselves are far from transparent, neither applicants nor researchers can easily evaluate whether the agency plays by its own rules.


Checks and Balances. While practices vary widely between components, DoD always gives applicants several layers of effective proposal review, and it is reasonable to conclude that checks and balances are formally in place. It is not clear how well these work in practice. Some components have more checks in place than others—the Army uses its lead scientists as gatekeepers, for example, while other components have programs that are more decentralized.


Resubmission and Appeals. Resubmission is effectively impractical as topics change substantially between solicitations. The appeals process is much more important. GAO rules for appeals procedures are published on the DoD SBIR Web site, and appeals are utilized by aggrieved companies. In essence, the DoD system is the inverse of the NIH resubmission approach: at NIH, companies are free to propose any line of research they like, and can resubmit applications if they do not like the result of selection; at DoD, topics are restricted and resubmission is not permitted, so companies are directed strongly toward an appeals process instead. There are no public data about the extent of utilization.

20 For example, interview with Dr. Josephine Card, Sociometrics, Inc.

21 This is a correction of the text in the prepublication version released on July 27, 2007.

22 As of June 15, 2006.

BOX 5-2

DoD Evaluation Criteria

Evaluation Criteria—Phase I


The DoD Components plan to select for award those proposals offering the best value to the government and the nation considering the following factors.

  1. The soundness, technical merit, and innovation of the proposed approach and its incremental progress toward topic or subtopic solution.

  2. The qualifications of the proposed principal/key investigators, supporting staff, and consultants. Qualifications include not only the ability to perform the research and development but also the ability to commercialize the results.

  3. The potential for commercial (government or private sector) application and the benefits expected to accrue from this commercialization as assessed utilizing the criteria in Section 4.4.

Where technical evaluations are essentially equal in merit, cost to the government will be considered in determining the successful offeror.

Technical reviewers will base their conclusions only on information contained in the proposal. It cannot be assumed that reviewers are acquainted with the firm or key individuals or any referenced experiments. Relevant supporting data such as journal articles, literature, including government publications, etc., should be contained or referenced in the proposal and will count toward the 25-page limit.


Evaluation Criteria—Phase II


The Phase II proposal will be reviewed for overall merit based upon the criteria below.

  1. The soundness, technical merit, and innovation of the proposed approach and its incremental progress toward topic or subtopic solution.

  2. The qualifications of the proposed principal/key investigators, supporting staff, and consultants. Qualifications include not only the ability to perform the research and development but also the ability to commercialize the results.

  3. The potential for commercial (government or private sector) application and the benefits expected to accrue from this commercialization.

The reasonableness of the proposed costs of the effort to be performed will be examined to determine those proposals that offer the best value to the government. Where technical evaluations are essentially equal in merit, cost to the government will be considered in determining the successful offeror.

Phase II proposal evaluation may include on-site evaluations of the Phase I effort by government personnel.


SOURCE: Department of Defense SBIR Solicitation, FY2005.


5.4.3.3
NASA

Transparency. While the NASA SBIR Participation Guide23 provides a general description of the selection procedure, it could be more helpful to applicants if it added more details about the process. The decentralized character of the program—with selection being conducted initially at the center level—makes transparency hard to achieve.


Implementation. It appears that all the NASA Centers operate the selection procedures in similar ways, as their end-products must be sent to HQ for review by the SBIR Program Office and by representatives of the Mission Directorates.


Checks and Balances. There are multiple levels of checks and balances in the NASA selection system. Initially, proposals are reviewed by Center Committees, which collectively provide some balance against individual enthusiasm for a proposal. Tentative rankings are then submitted to fairly extensive review by SBIR program management and by other headquarters staff.


Conflicts of Interest. As with DoD, the absence of external reviews removes one potential source of such conflicts. Internally, champions of a proposal may have many reasons for supporting it, but the process is designed to balance out these justifications against objective criteria and other competing interests.

5.4.3.4
NSF

Transparency. All NSF SBIR reviewers are provided with instructions and guidance regarding the SBIR Peer Review process and are compensated for their time. NSF policy regarding the NSF review process and compensation can be found in the NSF Proposal and Grant Manual.

The NSF Report discusses the use of “additional factors.” These may include the balance among NSF programs; past commercialization efforts by the firm; excessive concentration of grants in one firm or with one principal investigator; participation by woman-owned and socially and economically disadvantaged small business concerns; distribution of grants across the states; importance to science or society; and critical technology areas. However, there is no further explanation of how these factors might be applied or when they might be applied.


Implementation. The relatively small size of the NSF program and the centralized management structure ensure that implementation is uniform.

23 National Aeronautics and Space Administration, “The NASA SBIR and STTR Programs Participation Guide,” June 2005, accessed at <http://sbir.gsfc.nasa.gov/SBIR/zips/guide.pdf>.

Checks and Balances. Applications are reviewed by an outside panel, and then a final decision is made by the SBIR staff and the program manager. This indicates a limited level of review.


Conflict of Interest. NSF appears to take seriously applicant views on appropriate reviewers. It does not appear that reviewers sign a conflict of interest form.

5.4.4
Efficiency

For programs to work well, they must be efficient in terms of their use of resources, both internal and external to the agency.

The efficiency objective must be balanced against other considerations. Hypothetically, having one awarding officer making instant decisions would be highly efficient, but would be neither fair nor necessarily effective.

Still, it is possible to construct indicators or measures of efficiency from the perspective of both the applicant and the agency. While many of the efficiency considerations listed in Box 5-3 are a factor in agency program management, it is also noteworthy that no agency has conducted an analysis of program efficiency from the perspective of applicants and award winners. Also, no agency has, to our knowledge, conducted any detailed surveys of customer satisfaction among these populations, focused on program improvement.

BOX 5-3

Possible Efficiency Indicators for the SBIR Selection Process

External: Efficiency for the Applicant

  • Time from application to award

  • Effort involved in application

  • Red tape involved

  • Output from application (not including award)

  • Re-use of applications

Internal: Efficiency for the Agency

  • Moves the money

  • Minimizes staff resources

  • Maximizes agency staff buy-in

  • Minimizes appeals and bad feelings

Timelines are discussed below, in Section 5.5 on funding cycles and timelines.


5.4.5
Other Issues

5.4.5.1
Electronic Submission

All agencies have now moved to electronic submission of SBIR applications, though DoE and NIH did so only relatively recently. Each agency uses its own home-grown electronic submission system, even though the existing systems at DoD and NASA have been highly regarded by recipients.24

5.4.5.2
Degree of Centralization

It is somewhat misleading to talk about the degree of centralization. Programs handle selection so differently that the same functions are not performed at all agencies. However, it is possible to identify some shared functions (see Table 5-1).

5.4.6
Selection—Conclusions

We return to our initial conceptual framework of assessment. Are the selection procedures operated at the various agencies meeting the core requirements in this area? Are they:

  • Fair.

  • Open.

  • Efficient.

  • Effective.

Fairness. From discussions with applicants and staff, it would appear that fairness is substantially enhanced by two key characteristics:

  • Transparency. The more applicants understand about the process, the fairer it appears. Generally, the debriefing programs provide sufficient feedback to meet this requirement, and agencies’ procedures are reasonably well known. NIH is unique in listing the names of outside panel members.

  • Conflict of Interest. Conversely, the more outsiders are used for assessment, the more conflict of interest appears to be a potential problem. Claims of problems at NIH cropped up regularly in interviews, and it appears that procedures to address conflicts of interest could be strengthened there.

Openness. There have been comments in interviews that the DoD program favors companies that are known to the program. However, the data (see Chapter 2) indicate that at least one-third of awards go to new winners every year, and interviewees stated that the program was sufficiently open to new ideas and companies. These comments did not appear in relation to other agencies’ SBIR programs.

24 Case study interviews.

TABLE 5-1 Comparing Selection Procedures Across Agencies

Procedure | NIH | NASA | NSF | DoE | Army | Navy | Air Force | MDA
Initial admin review | CSR | ? | ? | SBIR program | TPOC? | | |
Technical assessment | Outsiders | TPOC/Centers | Outsiders | TTM/Agency staff | TPOC/agency staff | TPOC/agency staff | TPOC/agency staff | TPOC/agency staff
Scoring mechanisms | Quant. | Qual. | Qual. | Quant. | Varies | Varies | Varies | Varies
Program alignment review | N/A | PM | PM | N/A | Lead Agency scientists | | |
Final rankings list | PM | SBIR dir. | PM | PM | Lead Agency scientists | | |
Effective funding approval | Div director | SBIR dir. | SBIR dir. | PM | SSA | SSA | SSA | PM/SSA

SSA = designated contracts officer/program liaison
TPOC = technical point of contact
SBIR dir. = SBIR Program Coordinator or similar function
Outsiders = outside technical reviewers
Quant. = quantitative (numerical) scoring system
Centers = NASA Centers (e.g., Goddard Space Center)
PM = Program Manager (technical area manager, not SBIR manager)


Efficiency. At most agencies, SBIR managers run the program with limited administrative funds. This limitation means lean program operations, but leanness does not necessarily mean that the needs of applicants and the agency are met as effectively as possible. In general, discussions with staff and awardees lead us to conclude that agency budgetary and efficiency concerns tend to trump the needs of applicants. For example, limiting the number of applications from a firm, or tightening topic definitions to reduce the number of applications, are both “efficient” in the sense that they allow agencies to process applications using the minimum amount of administrative funding. Yet both strategies run the risk of screening out firms and technologies of potential value.


Effectiveness. The effectiveness of the SBIR program is discussed in more detail in Chapter 4: SBIR Program Outputs. However, effectiveness can also be judged partly by the number of applications the program receives. Application numbers continue to rise.

5.5
FUNDING CYCLES AND TIMELINES

The agencies differ in the number of funding cycles they operate each year: DoD runs four (though only some components participate in all of them), NIH three, and DoE one. NSF operates two funding cycles, but any given technical topic will be available in only one of the two.

This has resulted in significant funding gaps between Phase I and Phase II, and after Phase II. The NRC Phase II Survey indicated that 73 percent of companies experienced a gap; the average gap reported by those respondents with a gap was 9 months, and 5 percent reported a gap of two or more years.

Agencies also differ substantially in their efforts to close the funding gaps between Phase I and Phase II, and after Phase II. In general, it seems reasonable to posit the following models:

  • “A standard annual award” model, used at NASA and DoE.

  • A “gap-reducing model,” favored at NIH and DoD.

NSF appears to primarily use the standard model, with some gap-reducing features.

5.5.1
The Standard Model

Agencies using the standard model differ in the timing of milestones and the handling of administrative material. DoE, for example, uses a 9-month Phase I; NASA staff note that use of a sophisticated electronic tool for submission and program management has worked to reduce the gap between Phase I and Phase II.

Yet despite these differences, which can have an important impact on both awardees and program performance, the basic structure of the system can be captured (see Figure 5-2).

Key features of this model include:

  • A single annual submission date (usually).

  • No gap-funding mechanisms between Phase I and Phase II.

  • No Fast Track program.

  • No Phase IIB funding.

FIGURE 5-2 Standard model for SBIR award cycles.

5.5.2
The Gap-reduction Model

The gap-reduction model looks quite different. It includes some or all of a range of features designed to reduce gaps and improve the time performance from initial conception to final commercial product.

Drawing on the efforts made in this direction at NIH and some components of DoD, as well as, in some respects, at NSF, we can extract the following elements of a gap-reduction approach to award cycles.

5.5.2.1
Multiple Annual Submission Dates

More submission dates reduce the time that companies must wait before applying: a single annual submission date means a potential wait of up to a year, while a quarterly schedule cuts the maximum wait to about three months.

NIH provides three annual submission dates for awards, in April, August, and December. DoD now offers a quarterly solicitation schedule, although few of the components participate in all four. SOCOM (Special Operations Command), for example, still participates in only two, and staff there are working to persuade the agency’s engineers and topic managers that more submission dates would result in better and more relevant applications.25

TABLE 5-2 Fixing the Phase I-Phase II Funding Gap

Agency Unit | Phase I-Phase II Gap Funding
NIH | 3 mo. at own risk
NSF | None
DoE | None
NASA | None
DoD:
  Army | 50k after Phase II selection
  Navy | 30k after Phase II selection
  Air Force | None
  DARPA | None
  DTRA | None
  MDA | None
  SOCOM | None
  CBD | 30k after Phase II selection
  OSD | 9 mo. for Phase I
  NIMA | None


5.5.2.2
Topic Flexibility

Topics have important implications for company research timelines. A company that wishes to pursue a particular technology may have a problem at agencies with “tight” topic boundaries, because the company must wait until the topic shows up in a published program solicitation. If it misses that window of opportunity, the topic may not show up again for several years—in which case the effective gap between opportunities for that company is several years, regardless of whether the agency provides multiple annual award deadlines.

This suggests that an approach that writes broad topics, and offers some opportunity for “out-of-topic” projects, is one that best fits with the gap-reduction model.

5.5.2.3
Phase I—Phase II Gap Funding

The gap between the end of Phase I and the beginning of Phase II is potentially difficult for companies; smaller companies in particular may not have the resources or the other contracts in hand to keep working, and may even lose critical staff.

25 Telephone interview with Special Operations Command (SOCOM) SBIR Program Manager, June 8, 2005.


Two mechanisms appear to have emerged to provide funds during the gap between the conclusion of Phase I and the start of Phase II funding.

Several DoD components use some version of an option arrangement. They reduce the size of the initial Phase I award (usually to $70,000) and add an “option” for 3 months of additional work, worth $30,000-50,000; a company selected for Phase II might thus receive $70,000 for the base Phase I effort plus a $30,000 bridge, returning the Phase I total to the $100,000 guideline level. Projects that win a Phase II award may become eligible for the option, which is designed to cover the funding gap. Because the “option” is activated only when the agency awards a Phase II contract, it is unclear as yet whether these options are applied to all Phase II winners, or how well they cover the gap, as the end of Phase I may well be some months before the start of a Phase II award at these components.

NIH does not use the option approach. Instead, companies that anticipate winning a Phase II can work for up to three months at their own risk, and the cost of that work will be covered if the Phase II award eventually comes through. If it does not, the company must swallow the cost. This may work reasonably well for companies that have sufficient cash flow to continue work, but less well for smaller companies with more limited resources.

5.5.2.4
Fast Track Models

Some agencies now offer some kind of Fast Track model to close the gap between Phase I and Phase II.

DoD

At DoD, starting in 1992, the Ballistic Missile Defense Organization (BMDO) developed a program called “co-investment,” under which applicants who could demonstrate additional funding commitments gained preferences during the Phase II application process and were also offered more rapid transitions. This effectively eliminated the gap. In 1992, fewer than half of all BMDO awardees had such commitments; by 1996 this figure had risen to over 90 percent.26

In 1995, DoD launched a broader initiative. Under the Fast Track policy, SBIR projects that attract matching cash from an outside investor for their Phase II effort can receive interim funding between Phases I and II and are evaluated for Phase II under an expedited process.

Companies submit a Fast Track application, including a statement of work and cost estimates, between 120 and 180 days after the award of a Phase I contract, along with a commitment of third-party funding for Phase II.

Subsequently, the company must submit its Phase I Final Report and its Phase II proposal no later than 210 days after the effective date of Phase I, and must certify, within 45 days of being selected for a Phase II award, that all matching funds have been transferred to the company.
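To make this calendar concrete, the short sketch below computes the Fast Track milestone dates described above from a Phase I effective date. It is a minimal illustration under the day counts stated in the text, not a DoD tool: the function name and the sample date are invented, and the 45-day matching-funds certification is omitted because it is counted from Phase II selection rather than from the Phase I effective date.

```python
from datetime import date, timedelta

def fast_track_milestones(phase1_effective: date) -> dict:
    """Illustrative DoD Fast Track milestones, counted in days from the
    effective date of the Phase I contract (day counts from the text)."""
    return {
        # Fast Track application, with third-party funding commitment:
        # between 120 and 180 days after the Phase I award
        "application_window": (phase1_effective + timedelta(days=120),
                               phase1_effective + timedelta(days=180)),
        # Phase I Final Report and Phase II proposal: day 210 at the latest
        "phase2_proposal_deadline": phase1_effective + timedelta(days=210),
    }

# Hypothetical example: a Phase I contract effective October 1, 2006
for milestone, due in fast_track_milestones(date(2006, 10, 1)).items():
    print(milestone, due)
```

One practical implication of these counts is that a company has at most about seven months from the start of Phase I to assemble both its Phase II proposal and a committed third-party investor.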

26 National Research Council, The Small Business Innovation Research Program: An Assessment of the Department of Defense Fast Track Initiative, Charles W. Wessner, ed., Washington, DC: National Academy Press, 2000, p. 27.


TABLE 5-3 NAVSEA Fast Track Calendar of Events

Responsible Party | Required Deliverable/Event | Due Date | Delivered to
SBIR Company | Fast Track application | 150 days after Phase I award | NAVAIR SBIR Program Office, TPOC, Navy Program Office, and DoD
SBIR Company | Five- to ten-page electronic summary | 150 days after Phase I award | NAVAIR SBIR Program Office
SBIR Company | Phase II proposal; Phase I Final Report | 181-200 days after Phase I award | Technical point of contact and NAVAIR Program Office
NAVAIR | Acceptance or rejection of Phase II proposal | 201-215 days after Phase I award | SBIR Company
SBIR Company | Proof that third-party funding has been received by SBIR company | 45 days after acceptance of Phase II proposal | Contract Specialist
SBIR Company | Final accounting of how investor’s funds were expended | Include in final Phase II Progress report | Technical point of contact

SOURCE: NAVSEA Web site.

At NAVAIR, a subcomponent of the Navy, for example, Fast Track proposals will be decided and returned to the company within a maximum of 40 days from submission, which may be a month before the end of Phase I.27 This provides “essentially continuous funding” from Phase I to Phase II. About 6-9 percent of DoD awards are Fast Track.28

Data from the NRC study indicate that Fast Track does reduce the gap between Phase I and Phase II. Survey data showed that more than 50 percent of Fast Track recipients reported no gap at all, and their average gap was 2.4 months; for a control group of awardees, only 11 percent reported no gap, and the average gap was 4.7 months—about double that for Fast Track projects.29

However, recent awards data indicate that DoD components and companies are focusing more on Phase II+ arrangements and less on Fast Track (see DoD Report: Chapter 2).30

27 Department of Defense, “NAVSEA Fast Track Calendar of Events,” accessed at <http://www.navair.navy.mil/sbir/ft_cal.htm>.

28 National Research Council, The Small Business Innovation Research Program: An Assessment of the Department of Defense Fast Track Initiative, op. cit., p. 28.

29 See National Research Council, The Small Business Innovation Research Program: An Assessment of the Department of Defense Fast Track Initiative, op. cit., pp. 66-67.

30 See National Research Council, An Assessment of the SBIR Program at the Department of Defense, op. cit.


The DoD model is premised on the view that matching funding adds both legitimacy to a project and value in the form of additional research, and that this should therefore qualify the project for more rapid approval.31

NIH

The NIH Fast Track program is quite different from that at DoD—in fact, the similarity of names is highly confusing. Essentially, the NIH program allows applicants to propose a complete Phase I-Phase II program of research. No matching funds are required, and projects are reviewed and selected within the normal timeframe for review at NIH—i.e., they are not expedited.

The advantage to the applicant is that NIH Fast Track awards are designed to dramatically reduce uncertainty by awarding a Phase II at the same time as a Phase I, and also by reducing the time needed between awards.

Under normal NIH procedures, success with a Phase I will be followed by a Phase II application, which will be decided between 8 months and 12 months after the end of Phase I. Under Fast Track, Phase II can start immediately.

To date, there is little evidence about the impact of the program. Initial administrative difficulties have led to substantial confusion for some awardees.

5.5.2.5
Shorter Selection Cycles

Some agencies appear to manage the selection process in a shorter time period than others. Relatively minor changes to the process could have a substantial impact for companies. For example, NIH projects that do not receive funding typically receive their formal review comments too late for those comments to be applied to a resubmitted application in the next funding round—which means another four months of delay until the following opportunity. NIH is aware of this and is initiating pilot programs to address the problem.

5.5.2.6
Closely Articulated Phase I-Phase II Cycles

Agencies that use different cycles for Phase I and Phase II can arrange them so that the end of Phase I is timed to meet the Phase II application process quite closely. DoD, NSF, and DoE already do this, but NIH does not. At NIH, Phase II applications have the same deadlines as Phase I.

31 To address the efficacy and impact of the DoD Fast Track program, the SBIR program management has asked the Academies to update the 2000 study. That study found that the program was having a positive effect on commercialization. The new study will survey participants in the Fast Track program, in order to determine subsequent outcomes. For the initial Academy review, see National Research Council, The Small Business Innovation Research Program: An Assessment of the Department of Defense Fast Track Initiative, op. cit.

5.5.2.7
Phase II+ Programs

Phase II+ programs are designed to help bridge the gap between the end of Phase II and the marketplace, or Phase III. The two agencies that use the standard model (DoE and NASA) also do not provide any Phase II+ program.

Various Phase II+ efforts at the other agencies are described in more detail elsewhere in this chapter (see Section 5.6.5). The point here is to note that the gap-reducing model extends beyond the Phase I-Phase II gap, into the gap after Phase II.

5.5.3
Conclusions

While the standard selection model has the virtues of simplicity and of close adherence to SBA guidelines, there are solid reasons for supporting efforts to close the funding gaps that recipient companies confront, especially where this can be done through program redesign with relatively minimal additional costs.

Best-practice analysis across agencies should enable significant improvements in this area if agencies commit to the gap-reduction model, though such a commitment may require the provision of additional administrative funding.

5.6
AWARD SIZE AND BEYOND

Both the relevant legislation and SBA guidelines make it appear that SBIR award sizes are tightly delineated. However, this is a misconception.

The “standard” awards from the SBIR program, as defined in the SBA Guidelines, are:

  • Phase I: $100,000 for 6 months, to cover feasibility studies; and

  • Phase II: up to $750,000 for 2 years to fund research leading toward a product.

No agency follows these guidelines precisely. Differences emerge in five areas:

  • Size/duration of Phase I.

  • Size/duration of Phase II.

  • Funding to cover the Phase I-Phase II gap.

  • Supplementary funding to cover unexpected costs.

  • Follow-on funding to help completed Phase II projects move closer to market.

The recent GAO report, focused specifically on DoD and NIH, noted that a majority of awards at NIH, and almost 20 percent of awards at DoD, were larger than the guidelines permitted.


5.6.1
Size/Duration of Phase I

As Table 5-4 shows, the formal size and duration of Phase I awards vary considerably at the five agencies and, in the case of DoD, between its components.

Most of the deviations from the $100,000/6-month norm are accounted for by DoD components that reserve $30,000 of the award for an option to cover the gap between the end of Phase I funding and the start of Phase II.

5.6.1.1
Larger Awards at NIH

NIH uses Phase I awards differently than the other agencies:

  • It has begun to make much larger awards in some cases.

  • It regularly extends Phase I awards to one year (at no additional cost), and has begun to offer a second year of Phase I support in some cases, compared to strict six- or nine-month limits at most other agencies.

  • It provides administrative supplements that boost Phase I awards, when additional resources are needed to complete the proposed research.

TABLE 5-4 Phase I Awards

Agency Component | Award Size ($) | Duration (months)
NIH | 150,000 | 12
NSF | 100,000 | 6
DoE | 100,000 | 6
NASA | 70,000 | 6
DoD:
  Army | 70,000 | 6
  Navy | 70,000 | 6
  AF | 100,000 | 6
  DARPA | 99,000 | 6
  DTRA | 100,000 | 6
  MDA | 100,000 | 6
  SOCOM | 100,000 | 6
  CBD | 70,000 | 9
  OSD | 100,000 | 9
  NIMA | 100,000 | 6
Education | 75,000 | 6
EPA | 70,000 | 6
Agriculture | 80,000 | 6
DoC:
  NOAA | n/a |
  NIST | 75,000 | 6
DoT | 100,000 | 6

SOURCE: Agency Reports and Web pages.

FIGURE 5-3 Average size of Phase I awards at NIH, FY1992-2005.

SOURCE: NIH Awards Database.

Figure 5-3 shows that the average size of NIH Phase I awards began climbing in 1999, reaching $171,000 in 2005. In a growing number of cases, NIH provided Phase I funding of more than $1 million, although the median size of award has grown much more slowly than the average.

No specific policy decision appears to have been taken to make these larger awards, and the large Phase I awards have been explained in various ways by different staff at NIH. (See NIH Report: Chapter 2 for detailed discussion.32)

The median award size has remained close to the guideline, although it continues to trend up, while the mean award size has increased quite rapidly. This implies that extra-large awards are (relatively) rare and (relatively) large.

5.6.1.2
Second Year Awards at NIH

Just as the size of awards has changed, NIH has extended the period of support as well. In FY2002 and FY2003, more than 5 percent of all Phase I awards received a second year of support, with a median value of about $200,000 for the second year alone (see Figure 5-4).

NIH staff and recipients noted in interviews that six months is often too short to complete many biomedical Phase I projects, and that at NIH it is standard practice to grant a “no-cost” extension to one year or even more. This extends the term of the award without providing additional funding. No other agency offers such a liberal extension program.

32 National Research Council, An Assessment of the SBIR Program at the National Institutes of Health, op. cit.

FIGURE 5-4 Phase I, year two awards at NIH.

SOURCE: NIH Awards Database.


5.6.1.3
Supplementary Funding

NIH offers a further form of funding flexibility. In principle, program officers can add limited additional funds to an award in order to help a recipient pay for unexpected costs. While practices vary at individual ICs, it appears that supplements of up to 25 percent of the annual funding awarded can be made by the program manager without further IC or NIH review; for a nominal $150,000 NIH Phase I award, for example, this would place up to $37,500 within the program manager’s discretion. More substantial supplements are possible, but these are reviewed more extensively.

For Phase I, supplementary funding remains relatively rare, averaging less than 20 cases per year in recent years.

5.6.2
Size/Duration of Phase II

While most agencies follow the $750,000 limit mandated by SBA, others such as NSF offer less, while NIH offers more. In each case, this is an agency decision, although a waiver from SBA is required for funding beyond $750,000. NIH has received a “blanket” SBA waiver for all oversized awards.

Beyond the formal limits, NIH once again is an important exception, as it has begun to offer some awards that are much bigger than the norm, and it also extends support far beyond 24 months.33

33 The design of the NIH database makes it impossible to comprehensively and systematically track awards across Phases. Thus, these NIH data reflect the sum of the average awards for Phase II year 1 support in FY2002 and year 2 or more support in FY2003. The correct average for a total Phase II award is now somewhat higher, as award sizes have continued to grow since FY2002. Data for other agencies reflect formal limits on award sizes.


TABLE 5-5 Phase II Awards

Agency Component | Award Size ($) | Duration (months)
NIH | 860,000 | n/a
NSF | 500,000 | 24
DoE | 750,000 | 24
NASA | 600,000 | 24
DoD:
  Army | 730,000 | 24
  Navy | 600,000 | 24
  AF | 750,000 | 24
  DARPA | 750,000 | 24
  DTRA | 750,000 | 24
  MDA | 750,000 | 24
  SOCOM | 750,000 | 24
  CBD | 750,000 | 24
  OSD | 750,000 | 24
  NIMA | 250,000 | 24
Education | 500,000 | 24
EPA | 225,000 | 24
Agriculture | 325,000 | 24
DoC:
  NOAA | |
  NIST | 300,000 | 24
DoT | 750,000 | 15

SOURCE: Data from agency Web sites.

GAO’s 2006 report, using a slightly different methodology than the NRC analysis, found that more than half of FY2001-2004 NIH awards were above the guidelines, as were about 12 percent of DoD awards (see Table 5-6).34

Table 5-6 indicates that oversized awards altogether accounted for 69.3 percent of all SBIR funding at NIH (2001-2004) and 22.6 percent of funding at DoD. The larger NIH awards may reflect the rapidly rising cost of medical research (exceeding the inflation rate) and the lack of any change in the guidance for award size by the SBA. The larger awards noted above, reflecting the needs of those agencies, are mirrored by the smaller awards made, for example, by NSF and NASA. In both cases, the award amounts reflect agency judgments on how to best adapt the program to meet their mission needs.


34 U.S. Government Accountability Office, Small Business Innovation Research: Information on Awards Made by NIH and DoD in Fiscal Years 2001 through 2004, GAO-06-565, Washington, DC: U.S. Government Accountability Office, 2006, p. 21.


TABLE 5-6 Awards Beyond Guidelines at NIH and DoD, FY2001-2004 (number or dollar value, with percentage)

 | NIH: Within the Guidelines | NIH: Above the Guidelines | DoD: Within the Guidelines | DoD: Above the Guidelines
Phase I
  Number of Awards | 1,738 (49%) | 1,842 (51%) | 6,826 (90%) | 740 (10%)
  Dollar Value of Awards (in Millions) | 171 (29%) | 411 (71%) | 587 (87%) | 91 (13%)
Phase II
  Number of Awards | 549 (43%) | 734 (57%) | 2,830 (83%) | 562 (17%)
  Dollar Value of Awards (in Millions) | 376 (31%) | 823 (69%) | 1,964 (75%) | 653 (25%)
Fast Track
  Number of Awards | 100 (51%) | 98 (49%) | a | a
  Dollar Value of Awards (in Millions) | 59 (28%) | 152 (72%) | a | a
Total
  Number of Awards | 2,387 (47%) | 2,674 (53%) | 9,656 (88%) | 1,302 (12%)
  Dollar Value of Awards (in Millions) | 606 (30%) | 1,386 (70%) | 2,550 (77%) | 743 (23%)

NOTE: a All DoD Fast Track awards are included with DoD’s Phase I and Phase II awards.
Almost all of DoD’s Phase I awards above the guidelines are attributable to the Army’s effort to provide funding for the transition from Phase I to Phase II. However, the amount above the guidelines generally reduces subsequent Phase II award amounts.

SOURCE: U.S. Government Accountability Office, Small Business Innovation Research: Information on Awards Made by NIH and DoD in Fiscal Years 2001 through 2004, GAO-06-565, Washington, DC: U.S. Government Accountability Office, 2006.

5.6.3
Extra-large Phase II Awards at NIH

The data recording mechanisms at NIH make it extremely difficult to compute overall award sizes with any degree of certainty, as awards are recorded on an annual basis and the tools for linking the different years of a given award are not always accurate.

FIGURE 5-5 Phase II, year one award size at NIH.

SOURCE: National Institutes of Health.

The data show that by 2000, more than two-thirds of Phase II awards at NIH received more than $350,000, and 20 percent received more than $500,000 during the first year of the award. While the average size of the first year of an NIH Phase II award has increased to approximately $500,000, this increase has been driven by some large awards—such as the $3.5 million award in FY2003. Median award sizes have changed less.
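This divergence between mean and median is what one would expect when a few very large awards sit atop a pool of guideline-sized ones. The sketch below illustrates the point with purely hypothetical figures (these are not NIH data): a single outsized award pulls the mean up sharply while leaving the median untouched.

```python
import statistics

# Hypothetical first-year Phase II amounts: nine guideline-sized awards
# plus one very large award (illustrative figures only, not NIH data)
awards = [400_000] * 9 + [3_500_000]

print(statistics.mean(awards))    # 710000.0 -- pulled up by the one outlier
print(statistics.median(awards))  # 400000.0 -- essentially unmoved
```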

Beyond larger awards, NIH also offers longer awards, as Phase II is sometimes continued beyond 2 years (see Figure 5-6).

FIGURE 5-6 Third year of support for Phase II awards at NIH.

SOURCE: NIH Awards Database.


The steadily rising numbers of third-year support awards in recent years suggest that they are becoming an important component of NIH SBIR activity. In FY2002 and FY2003, more than 10 percent of awards received a third year of support.

In a few cases, NIH goes further. Ten awards have received a fifth overall year of SBIR support, and a few have received support beyond the fifth year.

5.6.3.1
Conclusions

These trends indicate that the median and average size of NIH SBIR awards are both rising, with the average award size now comfortably exceeding the statutory limit. Large awards have been made in recent years and multi-year awards have also become increasingly common.

As discussed in the NIH volume, a shift away from standardized award sizes carries important implications for fairness and efficiency that NIH has not yet confronted. Varied award size implies a trade-off between a few larger awards and more smaller ones. Yet the award selection process at NIH does not seek guidance on this from the peer reviewers who make the key funding recommendations, and there is no standard process within the Institutes and Centers at NIH to ensure that this tradeoff is fully assessed.

Absent some balancing mechanism, some agency staff agreed that this could lead to a “race to upsize,” as larger projects tend—all other things being equal—to generate higher technical merit scores because they are more ambitious.

5.6.4
Supplementary Funding

NIH is also a leader in finding flexibility within the guidelines for supplementary funding. No other agency provides supplementary funding to cover unexpected costs.

At NIH, there are two kinds of supplementary funding:

  • Administrative supplements.

  • Competing supplements.

Each IC has its own guideline for supplements. Some will not permit them at all, and others allow them for only limited amounts. Administrative supplements of up to $50,000 can be authorized by an IC without going to the IC’s governing council.

Only a few ICs allow competing supplements, mainly because these must be re-reviewed by peer reviewers. Competing supplements reflect large adjustments arising from a change in scope relative to the original project.

All supplemental requests are considered formal and require documentation. A full application is required for competing supplements, and a budget page and a letter of justification are required for administrative supplements.

FIGURE 5-7 Supplementary Phase II, year one awards at NIH.

SOURCE: NIH Awards Database.

However, discussions with awardees who have received administrative supplements suggest that agency staff have considerable discretion.

The data indicate that both the number and the size of supplementary awards are growing at NIH (see Figure 5-7).

5.6.5
Bridge Funding to Phase III

Traversing the “Valley of Death” between the end of Phase II research funding and the commercial marketplace is the single most important challenge confronting SBIR companies as they seek to convert research into viable products and services. As a result, success in helping awardees traverse this crucial period should be a major consideration in assessing each agency’s program.

All the agencies are now using mechanisms for addressing the Phase III/commercialization problem, such as the training and support services described in the section below on commercialization.

In addition, though, some agencies are developing funding mechanisms to bridge the transition to the market. Two models can be identified:

  • The NSF matching funds model (also used in parts of DoD, where it is known as Phase II+).

  • The NIH focus on support for addressing regulatory requirements.

5.6.5.1
The NSF Matching Funds Model

NSF has taken perhaps the most aggressive approach to bridge funding. NSF Phase II awards are limited to $500,000. This leaves NSF with $250,000 of headroom under the SBA guideline maximum, which the agency uses to provide Phase IIB awards once companies can show matching third-party funding. In some cases, NSF is prepared to offer Phase IIB+ awards—larger than $250,000—but these require a 2:1 match rather than 1:1.

About 30 percent of Phase II grantees have applied for Phase IIB grants, and approximately 80 percent of these applicants have been successful in receiving one. It does not appear that any applicant who met the third-party investment requirement has been turned down.

Phase IIB funding is highly conditional (see the sketch following this list):

  • It requires a matching contribution from bona fide third parties;

  • It matches at a maximum rate of 50 percent, and provides a maximum of $250,000 in new funding, up to a total Phase II-IIB maximum of $750,000; and

  • It is not automatic—companies have to apply.
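The arithmetic behind these conditions can be sketched in a few lines. This is a minimal illustration under stated assumptions, not NSF’s actual formula: it reads the 50 percent maximum match rate as up to 50 cents of NSF funding per dollar of third-party investment, the larger Phase IIB+ awards with their 2:1 match are not modeled, and the function name and sample figures are invented.

```python
def phase_iib_supplement(third_party_funds: float,
                         phase_ii_award: float = 500_000.0,
                         new_funding_cap: float = 250_000.0,
                         combined_cap: float = 750_000.0) -> float:
    """Illustrative NSF Phase IIB supplement under the conditions in the
    text: a 50 percent maximum match rate, at most $250,000 in new
    funding, and a $750,000 combined Phase II plus IIB ceiling."""
    match = 0.5 * third_party_funds  # assumed reading of the 50 percent match rate
    match = min(match, new_funding_cap)  # at most $250,000 of new funding
    match = min(match, combined_cap - phase_ii_award)  # respect the combined ceiling
    return max(match, 0.0)

# A company raising $300,000 from a bona fide third party could receive up
# to $150,000; one raising $600,000 or more hits the $250,000 cap.
print(phase_iib_supplement(300_000.0))  # 150000.0
print(phase_iib_supplement(600_000.0))  # 250000.0
```

On this reading, the combined ceiling is never binding for a standard $500,000 Phase II award, since the $250,000 new-funding cap is reached first; it would matter only if the underlying Phase II award were larger.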

Other agencies are also using variants on the matching model.

The Army’s Phase II+ program is similar to the NSF model. Matching funds of up to $250,000 are made available on a competitive basis to companies with third-party funding, which can come either from the private sector or from an acquisition program at DoD.35 This funding must be generated after the Phase II contract is signed, and must be related to that contract. Applications must be submitted at least 90 days before the end of the Phase II contract.

All other DoD components use similar but not identical approaches, based on the provision of matching funds for outside investment. Some components offer advantageous matches for funds from within DoD, which is taken as a signal that the technology is finding interest within the acquisitions community.

Although NASA does not have a Phase II+ program, agency staff have indicated that Phase II award decisions themselves can be influenced by the existence of matching non-SBIR funds from a NASA research center.

5.6.5.2
The NIH Regulatory Support Model

NIH has noticed that some of its SBIR awardees have difficulty meeting the financial demands of the FDA regulatory approval process for medical products and devices. The process of regulatory approval is time-consuming and, in some cases, extremely expensive. Deep-pocket investors from the private sector often do not wish to invest in high-risk products before at least some of the regulatory hurdles have been overcome.

Figure 5-8 shows that of the 768 respondents to the 2003 NIH Phase II recipient survey, about 40 percent required FDA approval, but only about 15 percent had received it.

BOX 5-4

Characteristics of the Army Phase II Plus Awards

  • 126 total Phase II plus awards

  • 8 awards greater than the stated maximum

  • 51 awards at $250,000 or more

  • Median award size: $160,000

  • Average award size: $165,670

  • Average match: $198,416

  • Private investors—35

  • Other government agencies—6

It should be understood that clinical trials involve a whole series of activities, testing both the efficacy and the safety of proposed drugs and devices, in many cases using animal trials first and then human subjects. This is generally not a short or inexpensive process.

What remains is a substantial funding gap between the end of Phase II—which at NIH may not even create a product that is ready for first-phase clinical trials—and the possible adoption of the product by subsequent investors.

FIGURE 5-8 FDA requirements and NIH survey respondents.

SOURCE: National Institutes of Health, National Survey to Evaluate the NIH SBIR Program: Final Report, July 2003.



NIH has addressed this gap, in part, through a new program called Competing Continuation Awards (CCAs), which offer up to $1 million annually for up to 3 years to companies that have completed Phase II but require additional support as they enter the regulatory approval process.

The introduction of these awards is still too recent to develop a clear picture of their eventual success or utilization level at NIH. However, they have certainly been greeted with enthusiasm by the awardee community, although recent data from NIH show that only three were awarded in FY2004.36

5.7
REPORTING REQUIREMENTS AND OTHER TECHNICAL ISSUES

In general, reporting requirements for SBIR awards appear to be comparable to or better than those for other federal government R&D programs.

Award winners at all agencies must provide the following:

  • A Phase I application, nominally limited to 25 pages but in reality often closer to 40, including a detailed worksheet of proposed expenses.

  • A Phase I contract (if a contracting agency).

  • Phase I final report, showing outcomes of the research.

  • A Phase II application, which may run closer to 60-70 pages as letters of support and similar materials become more important. This application also includes the commercialization plan (about 15 pages), outlining plans for commercializing the output from Phase II.

  • Phase II final report.

  • Phase II+ application (if applicable).

  • Phase II+ final report (if applicable).

  • Longer-term reports on downstream results from the award (varies strongly by agency).

Each agency has its own mechanisms for paying awardees. In each case, however, it is important to note that funds are made available in the course of the research, not as reimbursement for specific expenses. The funding gap that companies must cover internally is therefore much smaller than it would be under a reimbursement model.

Discussions with awardees and with experienced program managers suggest that there is little dissatisfaction with the level of paperwork required by the various agencies.

36 This raises the question of whether this is an appropriate number, given the low rate at which successful laboratory discoveries get through clinical trials. Awards data from NIH, available online at <http://grants.nih.gov/grants/funding/award_data.htm>.


5.8
COMMERCIALIZATION SUPPORT

In the course of the most recent congressional reauthorization of SBIR, considerable emphasis was placed on the need to support commercialization efforts by awardees. Agencies have taken these comments as a direction to create or enhance quite a wide range of efforts to help companies commercialize their research.

The following major initiatives are under way at some or all of the agencies:

5.8.1
Commercialization Planning

All agencies now require applicants to submit a commercialization plan with their Phase II application. The extent to which this plan actually affects selection decisions varies widely. At DoE, it accounts for one-sixth of the final score.37 At NIH, the effect varies depending on the make-up and interests of the study section. Several program managers at NIH institutes and centers noted that their study sections paid little heed to commercialization in the selection process. However, they also observed that having to develop a commercialization plan was a valuable exercise for the companies, even if it had no impact on selection.

5.8.2
Business Training

As agency interest in Phase III outcomes has increased, the agencies have begun to see that the technically sophisticated winners of Phase II awards are in many cases inexperienced business people with only a limited understanding of how to transition their work into the marketplace. Most agencies have developed initiatives to address this area, although these efforts are constrained by limited funds. While the transfer of best practices does occur between agencies, the extent to which agencies invent their own solutions to what are, in general, common problems is striking.

5.8.2.1
DoD

The various components at DoD operate their own business training programs; the Navy is generally believed to have put the most effort and resources into this area.

The Navy’s Transition Assistance Program (TAP) has several distinct and important components. Possibly more important than any of the specific elements, however, is the clear commitment to supporting the transition of SBIR technologies into acquisition streams at the Navy. The TAP’s components are listed below.

37 DoE “Overview” PowerPoint presentation.

  • All Phase II companies must attend the TAP orientation, which kicks off the program and provides detailed information about the program and its possible benefits.

  • The program itself is optional—a service provided by a contractor (Dawnbreaker, Inc.) on behalf of the Navy.

The other DoD components do not offer a comparable commercialization assistance program, although change is clearly under way in some areas. The Defense Advanced Research Projects Agency (DARPA) has teamed with Larta Institute, a technology commercialization assistance organization, to develop and deliver a pilot program designed to assist DARPA’s SBIR Phase II awardees in the commercialization of their technologies. The program began in May 2006 and will comprise individual mentoring, coaching sessions, and a showcase event where SBIR awardees will present their technologies to the investment community and to potential strategic partners and licensees—an approach similar to the Navy’s. The program is available to current DARPA Defense Sciences Office SBIR Phase II awardees.38 DARPA is also funding technical assistance through the Foundation for Enterprise Development (FED).39

In FY2005, following the NRC meeting on this subject, Congress established a new commercialization pilot program (CPP), which permits DoD to use 1 percent of SBIR funding (about $11.3 million in FY2006) to pilot various approaches that will further support commercialization of successful Phase II research.40

  • The CPP was established in the 2006 Defense Bill and is still being defined by DoD.

  • The goal is to provide additional resources and emphasis on SBIR insertion into Acquisition Programs.

  • New incentives are to be established and CPP projects will receive additional assistance.

Under guidance from the Under Secretary of Defense for Acquisition, Technology & Logistics (USD(AT&L)), the Army, Navy, and Air Force began establishing commercialization pilot programs in FY2006. Subsequent USD(AT&L) guidance in FY2007 encouraged all remaining DoD components to establish CPP activities.41

38 Larta Institute Web site, accessed at <http://www.larta.org/darpa/>.

39 This is a correction of the text in the prepublication version released on July 27, 2007.

40 For a summary of the key points raised on this topic at the conference, see National Research Council, SBIR and the Phase III Challenge of Commercialization, op. cit.

41 This is a correction of the text in the prepublication version released on July 27, 2007.

5.8.2.2
NSF

NSF’s program emphasizes commercialization and begins support for it during Phase I. All grantees are required to attend a business development and training workshop during Phase I. Phase II grantees meet annually; at these meetings, they are briefed on Phase IIB opportunities and requirements and are provided with workshops intended to assist with commercialization. The Phase IIB program provides additional funding predicated on the companies obtaining third-party funding.

NSF’s SBIR program operates a “MatchMaker” program, which seeks to bring together SBIR recipient companies and potential third-party funders such as venture capitalists. Just recently, NSF has begun to support participation of its grantees in DoE’s SBIR Opportunity Forum, which brings grantees together with potential investors and partners.

5.8.2.3
NASA

Like other agencies, NASA does not formally market its SBIR-derived technologies. However, because the agency understands the difficulties facing companies as they approach Phase III, it has developed a range of mechanisms through which to publicize the technologies developed using SBIR (and other NASA R&D activities). These include:

  • Spin-off (an annual publication).

  • Technology Innovation (quarterly).

  • Technology Briefs (monthly).

  • Success Stories publications.

  • TechFinder, NASA’s heavily used electronic portal and database.

  • Other materials such as a 2003 DVD portfolio of then-current NASA SBIR projects.

NASA also sponsors nine small business incubators in different regions of the country (see Box 5-5). Their purpose is to provide assistance in creating new businesses based on NASA technology.

5.8.2.4
NIH

NIH has managed a successful annual conference for some years, with growing attendance and positive attendee feedback.

NIH has also completed two pilot programs, one at the National Cancer Institute and one agency-wide. The positive response to these programs has led the agency to roll out a larger Commercialization Assistance Program (CAP).


Larta Institute (Larta) of Los Angeles, California,42 was selected by a competitive process to be the contractor for this program.43 The Larta contract began in July 2004 and will run for five years. During the first three years, three cohorts of SBIR Phase II winners will receive assistance. Years four and five will cover follow-up work, as each cohort is tracked for 18 months after completion of the assistance effort.


CAP Program Details. The assistance process for each group typically includes:

  • Provision of consultant time for business planning and development.

  • Business presentation training.

  • Development of presentation materials.

  • Participation in a public investment event organized by Larta.

  • Eighteen months of follow-up and tracking.

It is still too soon to draw conclusions about the effectiveness of this effort, although NIH reports that of the 114 companies participating in the CAP program in 2004-2005, 22 have received investments totaling $22.4 million.44

5.8.2.5
DoE

To aid Phase II awardees that seek to speed the commercialization of their SBIR technology, the DoE has sponsored a Commercialization Assistance Program (CAP). Formally launched in 1989 by Program Manager Samuel Barish with a $50,000 contribution from 11 DoE research departments, the CAP is now the oldest commercialization-focused program among all federal SBIR programs.45 The CAP provides, on a voluntary basis, individual assistance in developing business plans and in preparing presentations to potential investment sponsors.

The CAP is operated by a contractor, Dawnbreaker, Inc., a private firm based in Rochester, New York. In order to make its Opportunity Forum a more attractive event for potential partners and investors (i.e., to present more business opportunities), the DoE SBIR program often partners with other agencies or with other DoE offices. For the FY2006 Forum, DoE SBIR has agreed to partner with the National Science Foundation’s SBIR Program and with DoE’s Office of Industrial Technologies.

42 Larta Institute Web site, accessed at <http://www.larta.org>.

43 Larta was founded by Rohit Shukla, who remains its chief executive officer. It assists technology-oriented companies by bringing together management, technologies, and capital to accelerate the transition of technologies to the marketplace.

44 Jo Anne Goodnight, NIH SBIR Coordinator, personal communication, March 16, 2007.

45 Interview with Sam Barish, DoE Director of Technology Research Division, May 6, 2005.


BOX 5-5

NASA’s Sponsored Business Incubators

  • Business Technology Development Center

  • Emerging Technology Center

  • Florida/NASA Business Incubation Center

  • Hampton Roads Technology Incubator

  • Lewis Incubator for Technology

  • Mississippi Enterprise for Technology

  • NASA Commercialization Center/California State Polytechnic University

  • University of Houston/NASA Technology Commercialization Incubator

  • NASA Illinois Commercialization Center

5.9
CONCLUSIONS

The wide variety of organizational arrangements used to implement the SBIR program at the various agencies—and the differences even within the two largest programs, at DoD and NIH—mean that generalized conclusions about the program should be treated with some caution. Nonetheless, the NRC research permits us to draw the following conclusions, some of which form the basis for recommendations.

5.9.1
Differentiation and Flexibility

It is difficult to overstress the importance of diversity within the SBIR program. In fact, given the differences, it is a mistake to talk about “the” SBIR program at all: There are agency SBIR programs, and they are—and should be—different enough that they must be considered separately, as the NRC has done through the companion agency volumes.

These differences—outlined above and emerging through a comparison of the agency volumes—stem, of course, partly from the differing agency missions and histories, and partly from the differing cultures within them. It is not our mandate to disentangle the sources of these complexities. The point here is to assert that the diversity of the program—which reflects the flexibility permitted within the SBIR structure, a flexibility largely enhanced by SBA interpretations of its guidelines—is fundamental to the program’s success. The fact is that DoD programs are managed to generate technologies for use within DoD. It would make little sense to manage them like NIH programs, which aim to provide technologies that will eventually find their way largely to private sector providers in health care. The different objectives impose different needs and constraints and hence different strategies.


The inescapable conclusion from this is that efforts to standardize the operation of different aspects of the program are likely to prove counterproductive.

A further corollary is that managers need substantial flexibility to adapt the program to the context that they confront within their own agency and to their own technical domains. Nothing in the following points should therefore be taken as asserting the need for a single one-size-fits-all approach. We believe that every agency will find value in considering the suggestions and recommendations in this report—but also that each agency must judge those recommendations in light of its own needs, objectives, and programs.

5.9.2
Fairness, Openness, and Selection Procedures

The evidence suggests that the programs are by and large operated in a fair and open manner. Opportunities for funding are widely publicized, and the process for applying and for selection is largely transparent at all agencies.

It is also true that at some agencies—in particular NIH, where peer reviewers from outside the agency have a predominant role in selection—concerns about possible conflict of interest were widely reported by interviewees. Conflicts of interest may occur in some areas, but the different selection procedures used by the agencies require that this issue be addressed at the agency level, as is recommended, for example, in the NIH volume. It is by no means a general problem.

While open, the agency selection procedures do differ, yet none of the agencies has made any effort to assess the success of its particular selection methodology, or to determine ways of piloting improvements. Linking outcomes to selection has been done only at DoD, through the use of the Commercialization Index, but efforts to integrate this into selection procedures have not been systematically analyzed.

5.9.3
Topics

In agencies where topics define the proposal boundaries, the topic development process is also an important part of outsider perceptions of the program, and here there is less transparency: at none of the agencies is the process of topic development spelled out clearly on the Web site, which at least at DoD results in perceptions that some topics are designed—wired—for specific companies. As noted above, there is no available evidence to support this claim; repeated wins may simply reflect the competitive advantage of companies that have developed relevant expertise. Increased transparency in this area might enhance perceptions of the program’s fairness more generally.

At DoD in particular, the agency began making efforts to reduce cycle times and to increase the relevancy of topics, starting in 1997 if not before. These efforts have had an important impact on relevancy: a growing number of topics—now more than 50 percent DoD-wide—are sponsored by acquisition offices.

Suggested Citation:"5 Program Management." National Research Council. 2008. An Assessment of the SBIR Program. Washington, DC: The National Academies Press. doi: 10.17226/11989.
×

fices. At NASA, DoE, and NSF, topics are all drawn from technical experts with area responsibilities in the agencies.

However, two other topic issues have emerged:

  • Cycle Time. The DoD process does reduce duplication and enhance relevance, but at the cost of timeliness. Recent initiatives to develop quick-response topics are therefore a promising way of balancing the needs for effectiveness and speed.

  • Breadth. At least two factors seem to drive agencies toward narrow definitions of topics, including definitions that mandate the technical solution to be used as well as the problem to be solved. At DoD, the effort to gain interest from acquisition and other non-research communities could lead to the use of SBIR as a form of off-budget contract research. At NSF and DoE, topics have been narrowed in an effort to reduce the flow of applications to what the agency staff believes to be manageable proportions. Neither of these motives seems justified, and we would suggest that agencies seek to keep their topics as broadly defined as possible, at least in terms of the technical solutions that might be acceptable.

5.9.4
Cycle Time

In interviews with awardees and other observers of the program, concerns about cycle time surfaced regularly, at every agency. The fact is that it takes time to develop topics, publish a solicitation, assess applications, make awards, and in the end sign a contract. And that is just for Phase I.

Despite efforts at some agencies—including DoD and, in other ways, DoE—the cycle time issue has not been fully addressed. Agencies have not in general committed to a "gap reduction" model in which every effort is made to squeeze days out of the award cycle. For small businesses, especially those with few awards and few other resources, cycle time is a major disincentive to applying for SBIR and a significant burden for companies in the program.

Every aspect of cycle time—from lags between phases, to the number of annual solicitations, to the breadth of topics and the permeability of topic borders, to contracting procedures—should be monitored, assessed, and, where possible, improved. Agencies are currently, in general, insufficiently focused on this issue, as detailed in the agency volumes.

The additional funding recommended below should, in part, be used to address this problem.

5.9.5
Award Size

The fact that award size has not been officially increased since the 1992 reauthorization, and has therefore not kept pace with inflation, in itself raises the question of whether the size of awards should be increased. But this is not a simple question. Related questions include: By what amount? One-time only, or possibly tied to inflation? For both phases? At all agencies?

The primary justification for raising the nominal limits is that the cost of research has increased with inflation; the current limits therefore do not buy the agencies the same amount of research that Congress intended when this guidance on award size was first introduced.
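To make the inflation argument concrete, the sketch below adjusts the 1992 guideline amounts ($100,000 for Phase I and $750,000 for Phase II) into 2008 dollars using a consumer price index ratio. The CPI values shown are rough placeholders rather than official figures, so the output is illustrative only; actual BLS index data would be needed for a precise figure.

```python
# Illustrative sketch: what the 1992 award-size guidelines would be in
# 2008 dollars. The CPI values below are rough placeholders, not official
# BLS figures -- substitute real index data before relying on the output.

CPI_1992 = 140.3   # assumed annual-average CPI-U for 1992 (placeholder)
CPI_2008 = 215.3   # assumed annual-average CPI-U for 2008 (placeholder)

PHASE_I_CAP_1992 = 100_000    # nominal Phase I guideline, 1992 reauthorization
PHASE_II_CAP_1992 = 750_000   # nominal Phase II guideline, 1992 reauthorization

def adjust_for_inflation(nominal: float, cpi_base: float, cpi_target: float) -> float:
    """Convert a nominal dollar amount from the base year to the target year."""
    return nominal * (cpi_target / cpi_base)

for label, cap in [("Phase I", PHASE_I_CAP_1992), ("Phase II", PHASE_II_CAP_1992)]:
    real = adjust_for_inflation(cap, CPI_1992, CPI_2008)
    print(f"{label}: ${cap:,.0f} (1992) is roughly ${real:,.0f} in 2008 dollars")
```

With these placeholder index values, the nominal caps would need to rise by roughly half simply to restore their original purchasing power.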

Agency staff have offered a range of additional justifications for larger awards. At NIH, which has been most active in experimenting with larger awards, justifications include the need to focus on the highest-quality research, the likelihood that more funding will lead to more commercial success, the impact of inflation, the need to support companies through regulatory hurdles, and the possibility that higher funding levels will expand the applicant pool by attracting, in particular, high-quality applicants who currently believe SBIR awards are too small to justify the effort of applying. Finally, and not insignificantly, administering a larger number of smaller awards carries relatively higher overhead costs.

This last point is tied to the minimal administrative funding for SBIR, discussed below. The other NIH-specific points are discussed in the NIH volume.

Awardees in interviews have also favored larger awards—until they are asked to make an explicit trade-off between the size of awards and the number of awards. At that point, awardees often become less supportive of larger awards.
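The arithmetic behind that trade-off is simple: with a fixed budget, the number of awards falls in direct proportion to any increase in award size. A minimal sketch, using a purely hypothetical budget figure:

```python
# Hypothetical illustration of the size-vs-number trade-off for a fixed
# Phase I budget. The $100M budget figure is invented for illustration.

BUDGET = 100_000_000  # hypothetical annual Phase I budget (illustrative)

for award_size in (100_000, 150_000, 200_000):
    n_awards = BUDGET // award_size
    print(f"award size ${award_size:,}: {n_awards:,} awards")

# A 50 percent increase in award size (from $100k to $150k) cuts the
# number of awards by one-third -- and, with applications held constant,
# cuts the applicant success rate by the same fraction.
```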

There are also arguments against larger (and fewer) awards. Because it is extremely difficult to predict which awards will generate large returns (commercial or otherwise), it may be wise to spread awards as widely as possible. SBIR awards also play a critical role (described in Chapter 4: SBIR Program Outputs) in supporting the transition of research from the academy to the marketplace, and supporting such transitions may not require more than the existing level of funding. Finally, there is as yet no evidence that larger awards generate larger returns, although further analysis at NIH might test that connection.

If we conclude—as we do—that the SBIR programs at the agencies work as intended by Congress and generate significant benefits, we should recommend change only with caution. While there is a case for increasing award size, there are risks involved, and it would be prudent for agencies taking this step to increase awards incrementally over, perhaps, three years. A phased increase would avoid a sharp contraction in the number of awards and would allow possible growth in R&D funding to mitigate the impact of larger awards on applicant success rates.

5.9.6
Multiple-Award Winners

Multiple-award winners (MAWs) do not appear to constitute a problem for the SBIR program at any agency. At all agencies except DoD, only a limited number of companies win a sufficiently large number of awards to meet even the loosest definition of a "mill."

Even at DoD, we find arguments aimed at limiting a company’s participation in SBIR to be unconvincing, for a number of reasons:

  1. Successful Commercialization. Aggregate data from the DoD commercialization database indicate that the basic charge against "mills," namely that they fail to commercialize, is simply incorrect. Companies winning the most awards are on average more successful commercializers than those winning fewer awards.

    • While data from this source are not comprehensive, they do cover the vast majority of MAWs—and they indicate that, on average, firms with the largest number of awards commercialize as much as or more than all other groups of awardees, and that, in the aggregate, there is no MAW problem of companies living off SBIR awards.

  2. For some multiple winners, at least, even though they continue to win a considerable number of awards, the contribution of SBIR to overall revenues has declined.46

  3. Case studies show that some of the most prolific award winners have successfully commercialized and have also met the needs of sponsoring agencies in other ways.

  4. Graduation. Some of the biggest Phase II winners have graduated from the program, either by growing beyond the 500-employee limit or by being acquired—in the case of Foster-Miller, for example, by a foreign-owned firm. Legislating to solve a problem with companies that are in any event no longer eligible seems inappropriate.

  5. Contract Research. Contract research can be valuable in and of itself. Agency staff indicate that SBIR fills multiple needs, many of which do not show up in sales data. For example, efficient probes of the technological frontier, conducted on time and on budget to test technical hypotheses effectively, may save extensive time and resources later.

  6. Spin-offs. Some MAWs spin off companies—like Optical Sciences, Creare, and Luna. Creating new firms can be a valuable contribution.

  7. Valuable Outputs. Some MAWs have provided the highly efficient and flexible capabilities needed to solve pressing problems rapidly.

  8. Compared to What? Other agency programs do not impose such limits. It is hard to see why small businesses should be subject to limits on the number of awards they win annually when successful universities and prime contractors are not.

46 At Radiation Monitoring, for example, SBIR's share of revenues has fallen steadily and now accounts for only 16 percent of total firm revenues.

All these points suggest that while some companies have depended on SBIR as their primary source of revenue for a considerable period of time, and some fail to develop commercial results, the evidence strongly supports the conclusion that there is no multiple-winner problem. Moreover, those who advocate a limit on the annual number of awards to a given company should explain how such a limit would be administered across multiple agencies, and why technologies that may be important and unique to a given company should be excluded on this basis.

Given that SBIR awards meet multiple agency needs and multiple congressional objectives, it is difficult to see how the program would be enhanced by the imposition of an arbitrary limit on the number of applications per year, as is currently the case at NSF. However, if agencies continue to see issues in this area, they should consider adopting some version of the DoD "enhanced surveillance" model, in which multiple winners are subject to additional scrutiny during the award process.

5.9.7
Information Flows

The shift toward Web-based information delivery has occurred unevenly across the agencies. DoD has perhaps moved farthest: it and NASA were the first agencies to require electronic submission of applications, and DoD's online support for applicants is strong. It is, moreover, well integrated with non-electronic information sources, and two innovations in particular have been well received by awardees:

  • The Pre-release Period, during which topics are released on the Web along with contact information for the topic authors. This enables potential applicants to determine directly how well their proposed research fits the agency's needs, and provides opportunities to tune applications for a better fit. This innovation also connects applicants specifically to the technical officers running a particular topic, who will, in the end, also make funding decisions.

  • The Help Desk, which is staffed by contractors and is designed to divert non-technical questions (for example, about contracts and contracting) to staff with relevant experience in the SBIR program, who may well know these matters better than, for example, topic authors. Program managers at NIH and NASA have complained in interviews that SBIR applicants require much more help than academic applicants for other kinds of awards. Some of that burden might be alleviated with a better-resourced help desk function.47

47 This is a correction of the text in the prepublication version released on July 27, 2007.

Overall, the growing size of the program, the clear interest exhibited by state economic development staff, and the increasingly positive view of SBIR at many universities suggest that knowledge about the program is being diffused ever more widely to potential applicants. The rise of the Internet—and the high-quality Web sites developed by the agencies—mean that general interest can be translated into specifics quickly and inexpensively for both applicants and agencies. This conclusion is buttressed by the continuing flow of new companies into the program; new companies account for more than 30 percent of all Phase I awards at every agency every year.

Thus it appears that the general outreach function historically fulfilled by the SBIR Agency Coordinators/Program Managers may now be changing toward a more nuanced and targeted role, focused on enhancing opportunities for underserved groups and underserved states, or on specific aspects of the SBIR program (e.g., the July 2005 DoD Phase III meeting in San Diego)—while relying on the Web and other mechanisms to meet the general demand for information. This seems entirely appropriate.

5.9.8
Commercialization Support

While some agencies have been working to support the commercialization activities of their companies for a number of years, this has clearly become a higher priority at most agencies in the recent past. Congress has always permitted agencies to spend a small amount per award ($4,000) on commercialization support, and most agencies have done so.

Commercialization support appears likely to have a significant payoff for the agencies, partly because many SBIR firms have limited commercial experience. They are often founded by scientists and engineers who are focused on the technology, and interviews with awardees, agency staff, and commercialization contractors all indicate that the business side of commercialization is often where companies experience the most difficulty.

Important recent initiatives include the extensive set of services provided at Navy through the Transition Assistance Program (TAP) and the NIH commitment to roll out its Commercialization Assistance Program (CAP). These add to the long-running program at DoE.

It is important to understand that the character of commercialization differs quite fundamentally among DoD, NASA, and the remaining nonprocurement agencies (DoE is partly a procurement agency, but it purchases so little SBIR output for internal consumption that, for our purposes here, it is best grouped with the nonprocurement agencies).

At DoD, where the agency itself provides a substantial market if companies can find a connection to the acquisition programs, the critical focus of commercialization is on bridging the gaps to adequate Technology Readiness Levels and on finding ways to align companies with potential downstream acquisition programs.

At NASA, where the internal market for technologies, though important, may not be large enough to sustain long-term development and profitability, the focus is increasingly on spinning technologies out into the private sector.

At NIH, NSF, and DoE, commercialization means finding markets in the private sector.

These distinctions mean that while all commercialization assistance programs generally provide help in formulating business plans, developing strategic business objectives, and tuning pitches for further funding, there are important differences in emphasis. In particular, DoD, which accounts for about half of the entire SBIR program, has commercialization programs that are largely (though not exclusively) focused on markets internal to DoD and on the particularly complex process of finding a way into the acquisition stream. This requires different training, different analysis, and different benchmarks than other commercialization programs do.

All of this suggests that it is important to find benchmarks appropriate to each agency's context against which to measure success.
