Science & Cents:
Exploring the Economics of Biotechnology

Proceedings of a Conference Sponsored by the
Federal Reserve Bank of Dallas
April 19, 2002

Edited by John V. Duca and Mine K. Yücel

Federal Reserve Bank of Dallas

The views expressed in this Proceedings are those of the authors and should not
be attributed to the Federal Reserve Bank of Dallas or the Federal Reserve System. With the exception of the articles by Darby et al. and Zucker et al., articles
in this Proceedings may be reprinted on the condition that the source is credited
and a copy is provided to the Research Department, Federal Reserve Bank of
Dallas, P.O. Box 655906, Dallas, TX 75265-5906. Additional copies of this publication can be obtained by calling 214-922-5254. It is also available on the Internet at www.dallasfed.org.
Published September 2003
Federal Reserve Bank of Dallas

Contents

Preface
About the Contributors

Introduction
Exploring the Economics of Biotechnology: An Overview
John V. Duca and Mine K. Yücel

Part One. An Economic Perspective on the Biotech Revolution
Growing by Leaps and Inches: Creative Destruction, Real Cost Reduction, and Inching Up
Michael R. Darby and Lynne G. Zucker

The Benefits to Society of New Drugs: A Survey of the Econometric Evidence
Frank R. Lichtenberg

Part Two. The Interdisciplinary Nature of Biotech Research
Harnessing New Technologies for the 21st Century
Malcolm Gillis

The Convergence of Disruptive Technologies Enabling a New Industrial Approach to Health Products
C. Thomas Caskey

Part Three. Legal and Regulatory Issues Facing Biotechnology
Patents and New Product Development in the Pharmaceutical and Biotechnology Industries
Henry G. Grabowski

Reaching Through the Genome
Rebecca S. Eisenberg

Part Four. Financing Biotech Research
Financing Biotechnology Research: A Firsthand Perspective
Timothy F. Howe

Biotechnology and Government Funding: Economic Motivation and Policy Models
Michael S. Lawlor

Part Five. Local Determinants of Biotech Research
Commercializing Knowledge: University Science, Knowledge Capture, and Firm Performance in Biotechnology
Lynne G. Zucker, Michael R. Darby, and Jeff S. Armstrong

Preface

The articles included in this Proceedings were presented at the Federal
Reserve Bank of Dallas conference “Science & Cents: Exploring the Economics of Biotechnology” on April 19, 2002.

We wish to thank a number of people who helped make the conference
possible. We especially thank Federal Reserve Bank of Dallas President Robert
D. McTeer, Jr. for encouraging us to organize a conference on the economics of
biotechnology. We are grateful to John Thompson of the Research Department
for his help in hosting the conference. We also thank the conference speakers
for sharing their insights and particularly Michael Lawlor for suggesting many of
his fellow participants.


About the Contributors
C. Thomas Caskey is president and CEO of Cogene Biotech Ventures Ltd., a
venture capital fund that supports early-stage biotechnology companies. He serves
on several national research review panels, including the Intramural Human
Genome Projects Special Review Committee at the National Institutes of Health,
and on the editorial boards of the Journal of the American Medical Association
and Science.
Michael R. Darby is Warren C. Cordner Professor of Money and Financial Markets at the University of California at Los Angeles. He is also director of UCLA’s
John M. Olin Center for Public Policy, chairman of The Dumbarton Group, a
research associate with the National Bureau of Economic Research, and adjunct
scholar with the American Enterprise Institute.
John V. Duca is a vice president and senior economist at the Federal Reserve
Bank of Dallas, where he heads and conducts research in financial-side macroeconomics.
Rebecca S. Eisenberg is Robert and Barbara Luciano Professor of Law at the
University of Michigan. Eisenberg, who teaches patent law, trademark law, and
torts, is on the advisory committee to the director of the National Institutes of
Health and on the Panel on Science, Technology, and Law at the National Academies.
Malcolm Gillis is president and Ervin Kenneth Zingler Professor of Economics
at Rice University. He is on the National Academy of Sciences Board on Sustainable Development and a board member of the National Council for Science
and the Environment, Houston Advanced Research Center, and Houston Technology Center.

Henry G. Grabowski is professor of economics and director of the Program in
Pharmaceuticals and Health Economics at Duke University. He has acted as an
adviser and consultant to the National Academy of Sciences, Institute of Medicine, Federal Trade Commission, Office of Technology Assessment, and General
Accounting Office.
Timothy F. Howe is founding partner of Collinson, Howe & Lennox, a venture
capital firm that provides management and advisory services to four closed-end
investment pools representing more than $100 million in committed capital. He
is also adjunct professor in the School of Business at Columbia University.
Michael S. Lawlor is a professor in the economics department and a research
associate in the Social Science Section of the Department of Public Health Sciences at Wake Forest University. He also directs the interdisciplinary minor in
health policy and administration. Lawlor is a financial economist and economic
historian who specializes in the economic behavior of health organizations and
the social influences on market behavior.
Frank R. Lichtenberg is Courtney C. Brown Professor of Business at Columbia University. He is also a research associate at the National Bureau of Economic Research. His research has examined how the introduction of new technology arising from research and development affects the productivity of
companies, industries, and nations.
Dennis K. Stone is vice president for technology development at the University
of Texas Southwestern Medical Center. He is also a professor of internal medicine, physiology, and biochemistry and holder of the NCH Corporation Chair in
Molecular Transport. Stone was a Searle Scholar and has received the Established Investigator Award of the American Heart Association.
Mine K. Yücel is a vice president and senior economist at the Federal Reserve
Bank of Dallas, where she heads and conducts regional economic research.
Lynne G. Zucker is professor of sociology and policy studies at the University
of California at Los Angeles. She is also director of the Organizational Research
Program at the Institute for Social Science Research and director of UCLA’s Center for International Science, Technology, and Cultural Policy.

Introduction
Exploring the Economics
of Biotechnology: An Overview
John V. Duca and Mine K. Yücel

The recent rapid pace of discovery in life sciences raises a host of economic
issues. Advances in biotechnology will likely affect the well-being of people
worldwide for years to come. While we can only speculate on the specific
form those advances will take, we can address many of the economic questions
raised by developments in the life sciences. What potential economic benefits
does biotechnology offer? How is the industry’s emergence similar to the
infancy of now-established industries? What legal and regulatory issues does the
industry face? How will biotechnology research be financed and what are the
funding hurdles? Where do biotechnology firms locate?
To address these and other important questions on the subject, the Federal Reserve Bank of Dallas hosted “Science & Cents: Exploring the Economics
of Biotechnology” on April 19, 2002. The conference brought together distinguished experts who discussed economic and scientific issues related to
biotechnology.
AN ECONOMIC PERSPECTIVE ON THE BIOTECH REVOLUTION
Professor Michael Darby of the University of California at Los Angeles
opened the conference with an economic perspective on whether biotech
advances will kindle a new industrial revolution. He emphasized that biotech
research appears to be a major, metamorphic revolution that is creating new
industries, rather than incremental progress that perfects existing products.
As with earlier metamorphic revolutions, a lack of data and history hampers our ability to gauge biotech’s importance. Another characteristic of such
revolutions is that many new firms enter an emerging industry that has few or
no incumbents, but just a fraction of these new firms succeed and thrive. The
biotech sector is still in its formative stage, so the number of firms will probably expand before declining during the shakeout phase that often occurs in an
industry’s development. Nevertheless, as Darby stressed, many of the economic
benefits to society accrue during the consolidation and maturation stages of an
industry’s life cycle.
Darby also noted that biotech research is hard to imitate and has a natural
excludability in that innovators have a profound advantage over imitators in creating successful applications from the research. In particular, success in biotech
is highly correlated with links to star scientists at universities, and these links are
empirically the most important factors affecting the probability of success. For
this reason, Darby stressed that drawing top scientific talent and expanding university research are critical to increasing biotech activity in areas like Texas.
Columbia University Professor Frank Lichtenberg reviewed some of the
limited evidence on biotech’s promise from studies of the economic benefits of
drugs, the most established biotech-related industry. These benefits include
lower overall medical costs, higher productivity, and increased longevity. Lichtenberg said that, combined, these savings imply that $34 spent on prescriptions
boosts output by roughly $152.
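Taken at face value, those figures imply a sizable payoff per dollar. A minimal arithmetic sketch (the dollar amounts come from the text above; the per-dollar multiplier is simply derived from them):

```python
# Implied output gain per prescription dollar, using the figures quoted above.
prescription_spending = 34.0  # dollars spent on prescriptions
output_gain = 152.0           # estimated resulting boost to output

multiplier = output_gain / prescription_spending  # ~4.47
print(f"each $1 of prescription spending raises output by about ${multiplier:.2f}")
```

That is, on this estimate every prescription dollar is associated with roughly $4.47 of output.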
Of course, these findings are based on past experience, and there is no
guarantee that future advances will pay off as handsomely. Nevertheless, the
track record for new pharmaceuticals is impressive and should be considered
when evaluating policy proposals that affect incentives for innovation. This caution also applies to other biotech industries, especially in light of the key role
highly risky research plays in biotech advances, a point stressed by other conference speakers.
THE INTERDISCIPLINARY NATURE OF BIOTECH RESEARCH
Two conference presentations spotlighted the complex, interdisciplinary
nature of biotechnology research. In his keynote address, Rice University President Malcolm Gillis focused on the critical roles nanotechnology and bioinformatics will likely play in biotech advances.
Gillis noted that the development of biotech will help accelerate growth
in dozens of other industries, thereby fostering overall economic growth.
Biotech innovations are generally the outcome of the interplay of a collection
of discoveries in different fields over a long period. In particular, Gillis stressed
how biotech progress is propelled by a synthesis of new technologies, not only
from the biosciences but also from other sciences, such as information technology and nanotechnology.
Gillis noted that mathematical, statistical, and computer methods are indispensable to analyzing biological, biochemical, and biophysical data. For example,
computational cancer research deals with an overwhelming number of possible
combinations and permutations of cancer-causing mutations, a problem bioinformatics is well suited to handle. Another subfield is pharmacogenomics, which
combines computational sciences with biochemistry and pharmacology and
offers the potential for customizing drugs to the genetic makeup of individuals
and developing new insights into disease prevention.
Gillis also described the growing research in the interface between
biotechnology and nanotechnology, such as developments in the design and
use of nanomaterials for biomedical engineering. He sees biotech as the principal arena for an ongoing, far-reaching synthesis of science and engineering.
Gillis noted that the interplay between bio-, nano-, and information technology
will have a striking impact on health maintenance, diagnosis, and treatment. He
also predicted that biotechnology will provide an array of products and services
to fuel sharp increases in living standards.
Tom Caskey, head of Cogene Biotech Ventures Ltd., a biotechnology
venture capital fund, stressed how technical innovations from several areas of
science are being used in the new field of proteomics. Caskey discussed the
convergence of new technologies that enable a new industrial approach to
health products. He noted that many different technologies in chemistry and
biology are being combined to develop new therapeutics. For example, recombinant DNA technology and genome sequencing have helped researchers
understand the structure of HIV and aided work on developing HIV vaccines
and treatments. More broadly, advances in recombinant DNA technology, the
study of cell growth, proteomics, and bioinformatics contribute to the development of proteins that can be used to prevent and treat diseases.
Caskey also briefly discussed the financial drivers of the biotech industry,
pointing out that the National Institutes of Health (NIH) and large pharmaceutical firms are the main sources of funds, with a small amount coming from venture capital. He then spoke of some developments in Texas and shared his ideas
about what is needed to foster biotech in the state. These include increasing the
number of new firms, improving the recruitment of pharmaceutical and large
biotech firms to the region, and enabling in-state and out-of-state firms to consolidate. Caskey concluded that achieving these goals requires upgrading business plans and management, recruiting biotech talent, bolstering venture capital funding, and improving state and regional incentives.
LEGAL AND REGULATORY ISSUES FACING BIOTECHNOLOGY
Two speakers addressed the legal and regulatory issues surrounding biotechnology. Duke University Professor Henry Grabowski emphasized that two
of the biggest hurdles for drug research are high risk and high costs. Only 22
percent of drugs that enter clinical trials eventually receive Food and Drug
Administration approval, he noted. Furthermore, even among approved drugs
there are few winners. Moreover, R&D costs are high and rising sharply. Adjusting out-of-pocket costs for risk and time, Grabowski and his research colleagues
estimate that developing a new drug costs roughly $800 million.
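The mechanics of such a risk-and-time adjustment can be sketched in a few lines. In the toy calculation below, everything is hypothetical except the 22 percent clinical-approval rate quoted above: the phase costs, attrition probabilities, timing, and the assumed 11 percent cost of capital are invented for exposition, not Grabowski's actual inputs.

```python
# Toy illustration of a risk- and time-adjusted ("capitalized") R&D cost
# calculation. All inputs below -- phase costs, attrition probabilities,
# timing, and the 11% cost of capital -- are hypothetical, chosen only to
# show the mechanics, not Grabowski's actual figures.

COST_OF_CAPITAL = 0.11  # assumed annual discount rate

# (phase, out-of-pocket cost in $M, probability a candidate reaches the
#  phase, years before approval that the outlay occurs)
phases = [
    ("preclinical", 100.0, 1.00, 10),
    ("phase I",      25.0, 0.70,  7),
    ("phase II",     50.0, 0.45,  5),
    ("phase III",   100.0, 0.25,  2),
]

def capitalized_cost_per_approval(phases, p_approval, r):
    """Expected outlay per approved drug, compounded forward to approval."""
    total = 0.0
    for _name, cost, p_reach, years in phases:
        expected_outlay = cost * p_reach             # average spend per candidate
        total += expected_outlay * (1 + r) ** years  # carry forward in time
    return total / p_approval                        # losers' costs borne by winners

cost = capitalized_cost_per_approval(phases, p_approval=0.22, r=COST_OF_CAPITAL)
print(f"capitalized cost per approved drug: ${cost:.0f} million")
```

Failures and forgone interest, not the out-of-pocket outlays alone, drive the headline number: here the raw outlays sum to $275 million, but the adjusted figure per approved drug is several times larger.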
Obviously, inventors need to capture enough of the economic returns to
make their investment worthwhile. In general, biotech firms defend their intellectual property through patents and an evolving set of legal strategies. Because
R&D costs and risks are high, patents need to last long enough for firms to
recoup their risk-adjusted R&D costs without unduly dissuading patent holders
or their potential competitors from conducting more research. Grabowski
pointed out that patents provide outsiders with information about new discoveries, which, in turn, spurs more research. He said that surveys of biotech firms
have shown that considerations surrounding patent protection are the most
important factor affecting R&D decisions.
Professor Rebecca Eisenberg of the University of Michigan also stressed
the importance of patent strategies for inventors to capture the returns to R&D
in biotech. An impediment to this is that existing patent practices may be unsuitable for the fast-changing biotech landscape because it takes time for the law to
catch up with science. Today, the value of many innovations lies not in the direct production of therapeutic or diagnostic products but in the use of those innovations
in research and product development. For this reason, many innovators pursue
reach-through strategies to claim a share of the value of future products. These
strategies include licensing agreements that allow others to use an invention in exchange for a share of future products and pursuing damages for the unlicensed
use of an invention that has led to the development of a profitable product.
Eisenberg argued that these reach-through strategies help with the valuation and financing of biotech research and tools. After discussing the pros and
cons of the different strategies, she concluded by observing that patent law has
traditionally limited patent protection to actual accomplishments and future variations that arise from work that is routine and predictable. She considers this a
sensible limitation that guides patent examiners away from granting patent
rights that would unreasonably cover future research. Eisenberg believes there
are good reasons for permitting prior innovators to capture a fair share of the
value their discoveries contribute to subsequent downstream innovation. Nevertheless, she is generally more comfortable with strategies in which licenses are
negotiated in the marketplace than with strategies that require negotiation in the
course of patent prosecution.
FINANCING BIOTECH RESEARCH
Funding expensive research that has highly risky returns is another hurdle
for biotech. Aside from pharmaceutical research, which is often done by established companies, much biotech research is conducted by new firms that are
partly funded by venture capitalists and other private equity investors. Much of
their applied research is based on basic or generic research that is either publicly funded or conducted at publicly funded universities and other institutions.
Given that future biotech research is likely to branch out beyond old-style pharmaceutical R&D, the session on funding biotech research focused on the roles
played by venture capital and the public sector.
Timothy Howe, a founding partner of the venture capital firm Collinson,
Howe & Lennox, emphasized several points about the role of venture capitalists. First, biotech venture capital firms combine managerial with scientific talent in picking, funding, advising, and even managing biotech start-ups. This
enables scientists at start-up firms to focus on inventing. A second point is that
most venture firms directly invest in young companies, without intermediaries.
The distribution of returns is highly skewed, with few big winners. Venture capital firms also have an incentive to diversify their investments across different
solutions to medical problems, which can be found not only in biotechnology
but also in medical devices and health service firms.
Howe sees a shift in the type of science funded by venture firms, from
conventional drug development in the 1980s and genomics in the 1990s to projects in proteomics, the study of how human genes produce proteins that act on
the body. Howe sees the pharmaceutical industry moving from being vertically
integrated to horizontally organized and dominated by a few major players in
distinct horizontal segments, such as research and target discovery, clinical testing, and distribution. Finally, Howe believes the rising share of gross domestic
product devoted to health and the related aging of the baby-boom generation
are big incentives for venture capital firms to enter the medical arena.
Another important source of funding for biotech research is government.
Wake Forest University Professor Michael Lawlor emphasized that the benefits
arising from certain types of research warrant some form of public subsidy.
Lawlor asked why returns to R&D have historically exceeded those on other
investments and have not been driven down to normal by increased investment.
One reason is that risk premiums on biotechnology investments are high
because there are few winners. Another is that the economic value generated
by inventors’ discoveries spills over to others, and inventors recoup only a part
of the economic value of their research.
Lawlor discussed three public policy options for addressing underinvestment, along with the drawbacks of each: an industrial policy (which invests
directly in the research and production of goods), tax incentives, and direct funding of R&D. Because of the drawbacks to the industrial policy and tax approaches,
the United States has mainly pursued a strategy of directly funding basic research through the National Institutes of Health, coupled with developing a system of patent and copyright protections for applied research. From Lawlor’s perspective, NIH’s approach yields many public benefits, while limiting some of the
pitfalls of government intervention. In particular, he noted that Congress sets the
overall budget of the NIH, but panels of scientists select the research projects
to fund.
Lawlor stressed that in recent decades public R&D funding has changed in
response to the increased complexity of research, which is more interdisciplinary and has blurred the lines between basic and applied research. Recognizing this and seeking to encourage the transfer of federally funded research to
the private sector, Congress passed legislation in the mid-1980s creating cooperative research agreements that allow federally funded laboratories to establish
profitable links with commercial firms. Lawlor noted that the complex, direct-funding approach that has evolved in this country has helped make the United
States the world leader in biotech research.
LOCAL DETERMINANTS OF BIOTECH RESEARCH
Dennis Stone, vice president for technology development at the University of Texas Southwestern Medical Center in Dallas (UTSW), focused on biotech
activity in the Dallas/Fort Worth metro area and emphasized the role of the university. Unlike the information technology industries, biotech depends on the
university as a technology source. Stone illustrated the scope of the University of
Texas’ biotech presence, using life science research expenditures and patent data.
Stone noted that Dallas has few biotech companies because of barriers to
entry facing start-ups. In his opinion, the main barriers include the lack of
biotech entrepreneurs, the lack of local venture capitalists, the academic culture
of local faculty, and the fact that UTSW cannot form companies. Stone said that
fostering the growth of seed capital, venture capital, and biotech space is
needed for biotech to flourish in Dallas. In addition, he sees a need to increase
the flexibility of firms to operate with public institutions such as UTSW and to
bolster cooperation among North Dallas stakeholders.*
The last conference speaker, UCLA Professor Lynne Zucker, discussed
broader patterns across the country. Zucker began with a glimpse of Texas’ science base. Using several gauges, she showed that Texas was below the high-tech-state average for a variety of measures of scientific prowess.
Zucker stressed that biotech has had few big winners and many losers, as
only 10 percent of biotech start-ups grow into reasonably large firms. Her
research shows that basic university science is integral to successful commercialization of scientific discoveries. Firms working with star scientists are much
more likely to be successful, controlling for other factors. Her findings also
show that local venture capital has been key to the industry’s growth, increasing the productivity of R&D and fueling firms’ expansion. Zucker concluded by
noting that Texas’ biotech success will be driven by the number and quality of
top research university bioscientists, especially those with ties to firms, and
stressed the need for more investment in the state’s scientific base.

* Dr. Stone’s presentation is not included in this Proceedings.
ECONOMIC IMPLICATIONS OF BIOTECHNOLOGY
Several broad implications arise from the conference. One is that if past
technological revolutions are any guide, more research is needed to develop
gauges of biotech activity. Also, the benefits of biotech advances are likely to
be felt long after the inevitable shakeouts that will cull firms’ ranks. In addition,
although health care premiums are growing rapidly and drug cost increases are
getting a lot of press, we should remember that the benefits of new drugs have
historically outweighed their higher cost.
Another broad implication is that while policymakers should spur basic
and generic research, they must ensure that incentives are appropriate for markets to perform efficiently. Intervention in the form of price controls or forcing
biotech firms to relinquish property rights could discourage innovation. Given the
high cost and risks of biotech research, emerging industries need a few big winners to justify investing in many new ideas. In addition, patent and royalty laws
need to catch up with technological innovations so markets can perform better.
Other implications concern the interdisciplinary nature of biotech research,
which encompasses a broad scientific base and may greatly affect other areas
and industries. Current biotech science draws on advances in chemistry, biology,
computational methods, and medicine to develop new therapeutics. Looking
ahead, the interplay of advances in biotechnology, informatics, and nanotechnology could extend biotech applications to an array of products and services
inconceivable only a short time ago, greatly improving quality of life and boosting economic growth. But to succeed, biotechnology firms must draw on specialists from different areas, foster technical collaboration among these scientists, and credibly communicate their findings to regulatory agencies, customers,
and investors.
The conference presentations also have implications for investors. Direct implications include recognizing the high risks in holding large stakes in individual
biotech firms. Given the difficulties in capturing the value of inventions, investors
should consider the risk that innovations could benefit end users more than inventors.
Perhaps the biggest implications for investors arise from the indirect effects
of biotech research on benefit costs and customer bases for all sorts of companies. In particular, biotech could increase longevity beyond most projections,
raising the risk to firms with large defined-benefit pension obligations and the
Social Security retirement system. On the other hand, medical advances might
help control the projected jump in Medicare benefits, which are expected to
produce bigger budget shortfalls than the looming Social Security problem.
Another demographic implication is that spending patterns could shift more
than expected if longevity increases more rapidly than projected, particularly if
medical advances reduce disabilities and improve the quality — as well as the
quantity — of life.
The conference presentations also have implications for local government
policies aimed at fostering biotech activity. The recipe for success in biotech
seems to be a strong scientific base built around top-rated academic institutions,
which provide groundbreaking research and draw star scientists to the region.
The second important element is the ability to commercially develop the innovations coming out of research institutions. To become a major player in the
biotech arena, Texas needs to not only continue to develop its strong research
base but also foster the venture capital investment needed to commercialize the
innovations from the state’s research institutions.

PART ONE

An Economic Perspective
on the Biotech Revolution
Growing by Leaps and Inches:
Creative Destruction, Real Cost Reduction,
and Inching Up
Michael R. Darby and Lynne G. Zucker

The Benefits to Society of New Drugs:
A Survey of the Econometric Evidence
Frank R. Lichtenberg

Growing by Leaps and Inches:
Creative Destruction, Real Cost
Reduction, and Inching Up
Michael R. Darby and Lynne G. Zucker

Drunk: Can you help me find my keys?
Passerby: Sure, where exactly did you drop them?
Drunk: Way over there by the trash can.
Passerby: Then why are you searching over here?
Drunk: The light’s much better under the lamppost.
— Milton Friedman (Economics 331, 1967)

The class laughed after hearing this joke, not yet realizing how well it
described the profession for which they were preparing. Even those present
who cannot carry memory of a joke home from the barbershop still remember
the day they first heard that little joke. The thesis of this article is that the economics profession has spent years looking for technological progress under the
familiar lamppost of research and development (R&D) by incumbent firms
aimed at improvement in existing commodities or productive methods. Such
perfective progress (as we call it) is amenable to hedonic measurement and
analysis of firm behavior and market equilibrium in terms of return on investment, public goods, and positive externalities. We show here that metamorphic
progress, associated with creation of new industries or technological transformation of existing industries, is of the same or higher order of magnitude as a
source of technological progress.
This article first appeared in Economic Inquiry, Vol. 40, No. 3. Reprinted by permission of the publisher, Western Economic Association International.

We believe that our approach complements Arnold C. Harberger’s recent
emphasis on the concentration of growth in a few companies in a few industries that are achieving dramatic real cost reductions. He began to formulate his
own schema in his 1990 Western Economic Association presidential address and
by his 1998 American Economic Association presidential address could report
considerable empirical evidence in support of this concentration (Harberger,
1998). Harberger distinguishes between yeast, which makes bread rise evenly,
and mushrooms, which pop up unexpectedly in the back yard. In titling this
article, we had in mind the Japanese picture of progress by inching up — or
Frank Knight’s (1944) Crusonia plant, which grows proportionately except as
parts are cut off and eaten.1 In contrast, we emphasize the process of this or that
industry leaping forward at any given time — a process that may have prompted
Schumpeter’s (1934) model of creative destruction.
Breakthrough discoveries in science and engineering—particularly invention
of a new way of inventing, such as corn hybridization, integrated circuits, and
recombinant DNA—typically drive metamorphic progress. These discoveries are
rarely well understood in the early years following them. As a result, natural
excludability is characteristic of these radical technologies due to the extensive tacit
knowledge required to practice them and the lengthy period of learning-by-doing-with at the lab bench required to transfer them. Thus, metamorphic progress cannot
be analyzed following Arrow's information-as-a-public-good paradigm.
The importance of metamorphic progress based on naturally excludable
technologies motivates a challenging and exciting research agenda to remove
the black box covering the linkages among scientific breakthroughs, high technologies, entry and success in nascent industries, and the movement toward
industrial maturity where government statistics and economic research are most
likely (coincidentally) to begin. There are real data problems in studying hundreds of private start-up companies in industries still lumped into one or
another classification ending in “n.e.c.” (not elsewhere classified). They are
manageable, however, if economists are willing to exploit unconventional
sources and methods more familiar to organizational theorists, such as industry
directories, financial practitioners’ online services, the ISI and other scientific literature databases, and sophisticated matching methods for linking firms and
individuals across databases.
Before addressing these central issues, we make a necessary digression in
the next section to clarify the relationship between metamorphic progress and
the supposed acceleration of secular productivity growth post-1995 labeled the
new economy by Federal Reserve Chairman Alan Greenspan (2000a, 2000b,
2001) and others.2 In section II, we review a large and important sociology of
organizations and management literature that has identified recurrent patterns of
industry formation. These patterns clearly indicate that the formation process
involves decades of change in numbers and average size of firms inconsistent
with standard microeconomic analyses of entry and exit for industries in and
around equilibrium. We also review equilibrium models of industrial organization, highlighting key points of difference and congruence. In the third section
we report in some detail on research on biotechnology by us and others,
emphasizing theoretically and empirically interesting results that appear to be

Growing by Leaps and Inches


generalizable to other industries during their formative and transformative
phases. The fourth section focuses on natural excludability, which is central to
understanding the slow diffusion of very profitable innovations. We then point
out the implications of these results for important issues in policy analysis and
welfare economics. In the concluding section we attempt to draft a collective
research agenda that suggests some next steps for economics and its sister disciplines in understanding growth and the wealth of nations.
I. METAMORPHIC PROGRESS AND THE NEW ECONOMY
Experience suggests that our arguments on the importance of metamorphic progress can be misread — and perhaps dismissed — as supporting or even
implying the new economy ideas discussed most significantly by Greenspan
(2000a, 2000b, 2001). We have no reason to believe that the processes driving
metamorphic progress have either accelerated or decelerated in the second half of
the 1990s and thus have no expectation of a change in either direction in overall
technical progress.
Little support for any extraordinary productivity growth in 1996 – 2000 is
found in the 1950 – 2001 record of U.S. nonfarm-business labor productivity
growth reported in Figure 1.3

Figure 1

U.S. Nonfarm-Business Labor Productivity Growth, 1950–2001

SOURCE: U.S. Department of Labor, Bureau of Labor Statistics.
NOTE: Nonfarm business sector: output per hour of all persons.

We believe that the years 1996 – 2000 are better
characterized as years of average productivity growth with one year moderately
above average. Despite his best efforts, Rudebusch (2000) was unable to find
any statistically significant increase in potential output (corrected for cyclical
movements using the demographics-adjusted unemployment rate).4 This sort of
new economy looks very much like the same old economy. Indeed, 1995 – 2000
productivity growth was considerably below that experienced in the period
1960 – 68 just preceding the great inflation. We believe that the evidence is fully
consistent with normal procyclical patterns.
In summary, although changes in the rate of metamorphic progress might
explain a new economy increase in potential-output growth, we do not believe
that has occurred in recent years. Landefeld and Fraumeni (2001) provide a nice
review of the debate and measurement issues in regard to the new economy
hypothesis.
II. PATTERNS OF INDUSTRIAL FORMATION
The typical pattern of formation of new industries involves a few firms initially entering, growing to many, and ultimately consolidating, producing the
curve shown in Figure 2 for number of firms. When the number of firms stabilizes
or begins to decline, that does not necessarily imply that the overall industry
size also declines.

Figure 2

The Four Stages of an Industry’s Life Cycle

What typically happens instead is that the remaining successful entrants grow fast enough that the overall size of the industry continues
to increase (as does the average size of the remaining firms), as shown in the
industry gross domestic product share curve in Figure 2. Costs of adjustment in
size are generally nonlinear, with fixed costs of adjustment rather than the standard assumption of convex adjustment costs, as the review of evidence in Haltiwanger (1997) shows.5 Thus the peak number of firms is reached at a time when
industry output is still growing. The general form of the industry life cycle
shown in Figure 2 has been strongly supported in empirical research.
We first review the findings relevant to our main line of argument in the
population or organizational ecology approach in the sociology/management
literature. We then do the same for the more familiar (to economists) industrial
organization literature concerned with learning by firms under competition. We
aim to place our own approach in a broader context, not to attempt a global
review.
Organizational Ecology
Populations of organizations emerge sharing the same organizational form,
meaning central or core design. Reviews by Baum (1996) and Singh and Lumsden (1990) identify a wide range of organizational forms, including savings and
loan associations, hotels, life insurance companies, day care centers, semiconductor firms, and California wineries. The mixture of private and public organizations is typical of ecological research and represents exploitation of available
data resources rather than systematic comparison across these two sectors.
Most ecological research gathers data on the initial or at least early growth
of each organizational form and sometimes captures the full life cycle of a population as shown in Figure 2. Organizational ecology focuses attention on the
founding/birth of firms and on the population dynamics that support moving
from the initial founding of a single firm to emergence of a new industry.
Clearly, a population is generally a significantly narrower group of firms than an
industry; studying populations thus has the advantage of capturing proto-industries
during the process of their development.
The hypothesized shape of the number of firms curve shown in Figure 2
has been broadly supported across strikingly different empirical settings, as
shown for trade associations by Aldrich and Staber (1988, Figures 7-2 to 7-5),
local units of Mothers Against Drunk Driving by McCarthy et al. (1988, Figures
5-1 and 5-2), labor unions by Carroll and Hannan (1989, Figure 1), telephone
companies by Barnett and Amburgey (1990, Figures 4.1 and 4.3), and Finnish
newspapers by Miner et al. (1990, Figures 1, 2, 3). But theory development has
not kept pace with empirical work, and the framework within which results can
or should be interpreted is often unclear, contradictory, or disconfirmed. Variables proliferate with few validity tests and tenuous relationships to theoretical


dimensions of central interest; central theoretical constructs often have no clear
empirical referents.6
Probably the most robust thread in ecological theory is organizational
form, introduced explicitly and developed in McKelvey (1982), McKelvey and
Aldrich (1983), and Romanelli (1991). Processes by which new forms are developed include imprinting at the period of emergence in Stinchcombe (1965), and
the source and emergence of varieties of forms in Brittain and Freeman (1986),
Marrett (1980), and Aldrich and Waldinger (1990).
What underlies the initial emergence and early growth of a new organizational population? Ecological research has only recently gone beyond measuring the effects of the number of prior births on the number of births in the next
period, called population dynamics, and the number of organizations in a population in the prior period, called population density. Zucker et al. (1998c) show
that fundamentals of resource reallocation and mobilization, coupled with
resource quality, provide significantly stronger predictive power than population
dynamics or density, especially in predicting the location of growth. We report
repeated dynamic simulations demonstrating that population ecology model
predictions are essentially uncorrelated with the panel data on biotechnology
entry by year and region, whereas our alternative model has correlation coefficients averaging above 0.8.
Industrial Organization
Most theory and research in industrial organization (hereafter, I/O) begins
where organizational ecology leaves off. Ecology-based research focuses on the
history of development of an organizational population — the process of industry emergence. I/O research has been primarily concerned with firms in mature
industries and processes central to mature industries’ life cycles, including
growth and turnover, as Caves’s (1998) recent review indicates. In mature industries, observed differences in profitability, productivity, industry output shares,
investment, and similar variables provide the basis for entry by the firm as well
as the basis for later changes in firm strategy, predicting growth and turnover in
industries.
Studies in industrial organization broadly support the pattern of change
shown in Figure 2 but only for a subset of companies operating in mature industries, as Caves summarizes (1998, 1958 – 59): “Hazard rates for incumbents are
lower than for entrants through all stages of the cycle in ‘non-technical’ products (where experience advantages might be great),” but “higher for ‘technical’
products, where entrants bring the continuing flow of innovations.” The latter
results come from Audretsch (1991). Klepper and Miller (1995) and Klepper
(1996) show that the number of firms offering a product reaches a long-run stable
equilibrium after declining from an early peak through a prolonged, steady


shakeout phase that suggests continuing competition among firms to reduce
costs rather than initial entry that overshoots the potential market.
I/O research is based directly on economic theories of competition. From the
I/O perspective (Caves, 1998, 1947, note 2), organizational ecology “suffers from
eschewing simple priors about business behavior: intended profit-maximization
and the need to cover costs to keep a firm’s coalition together.” Hence, the
orienting theories underlying population ecology and I/O are sufficiently different that there has been little cross-fertilization, despite empirical research on the
same or very similar underlying processes.7
Our research program seeks to build a bridge between these two related
approaches by bringing organizational ecology’s focus on industry emergence
into a model that includes wealth maximization and measures of resources
(e.g., intellectual human capital of the stars, venture capital), competencies
(e.g., main technology employed), and external environment (beyond other
firms to include top-quality universities and other local characteristics, as well
as quality of the local labor force and national cost of capital).
In standard I/O studies, two major theoretical approaches have developed
over the past two decades to deal with empirical inconsistencies with earlier
models, such as the law of proportionate growth. Central to both are the
processes of learning by and the characteristics of the information available to
firms in an industry. Learning about the decisions and success of other firms, as
well as about one’s own firm through its experience, improves the firm’s efficiency and
hence its growth and survival.
Most models of competition and growth are more suited to manufacturing
and other routinized production contexts where the main source of uncertainty
is arguably how an entering firm will perform relative to existing firms in that
same industry. In Jovanovic (1982) and Lippman and Rumelt (1982), firms learn
about their competitiveness only after entry through experience relative to that
of other firms. Because costs are random and differ across firms, a potential entering firm does not know its own expected cost but knows the distribution
of all firms’ costs in that industry. Firms differ in size because some discover that
they are more efficient than others, not because of fixity of capital. These models have proven themselves in numerous empirical studies of mature industries
as reviewed by Caves (1998).
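This passive-learning story is easy to sketch in code. The toy simulation below is our own illustration rather than the Jovanovic (1982) model itself; the market price, the lognormal cost distribution, and the cohort size are all assumed for the example.

```python
import random
import statistics

random.seed(42)

PRICE = 1.0          # market price, taken as given (hypothetical units)
N_ENTRANTS = 1000    # cohort of potential entrants (assumed)

# Before entry, a firm knows only the industry-wide cost distribution
# (here lognormal), not its own draw from it.
costs = [random.lognormvariate(0.0, 0.5) for _ in range(N_ENTRANTS)]

# After entry, experience reveals the firm's own cost; firms whose cost
# exceeds the market price exit, the rest survive.
survivors = [c for c in costs if c < PRICE]

print(f"entrants:  {N_ENTRANTS}")
print(f"survivors: {len(survivors)}")
print(f"mean cost, all entrants: {statistics.mean(costs):.3f}")
print(f"mean cost, survivors:    {statistics.mean(survivors):.3f}")
```

The survivors’ mean cost is necessarily below the entrants’ mean cost, which is the selection effect behind the observation that firms differ in size because some discover that they are more efficient than others.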
Recent large-scale research in I/O has documented the variability of the
performance path of individual firms, as shown especially in panel studies by
Davis and Haltiwanger (1992) and Pakes and Ericson (1998). A recent model
developed by Ericson and Pakes (1995) explicitly incorporates firm-specific
changes in investment in response to changes in uncertainty and to evolution
of competing firms and other industries. The success of the firm in terms of
profitability and value is determined by the stochastic outcome of its investment,
within the context of success by other firms in the same industry and the context of competitive pressures from new entry and other industries.
This model endogenizes the processes of selection in industry evolution
and thus both entry and exit. Industry-level dynamics are predicted to develop
over time in an increasingly regular way, spending more time in natural states,
including number of incumbents and entrants and exits, but failing to reach a
limit. The Ericson-Pakes approach provides a more complete model of firm
behavior in industries where production is not routine but where central tasks
are invented and reinvented as the frontiers of knowledge develop, whether
due to technological breakthrough or other kinds of invention, from quality circles to new financial instruments.
III. FINDINGS FOR BIOTECHNOLOGY AND OTHER SCIENCE-DRIVEN
TECHNOLOGICAL REVOLUTIONS
The process underlying metamorphic progress is defined by the introduction of a new breakthrough technology that either eliminates the ability of firms
practicing the old technology to survive or creates an entirely new industry.8 If
the technological breakthrough relies on the same scientific and engineering
base as the previous technology, incumbent firms are generally strengthened as
they readily convert to the new technology. Focusing on what happens to
incumbent firms, Tushman and Anderson (1986) refer to these changes as competence-enhancing. If the science and engineering base of the new technology
is disjoint from that of the existing technology, existing firms tend to shrink and
exit, and many new entrants arise practicing the new (incumbents’) competence-destroying technology (Tushman and Anderson, 1986; Henderson, 1993).
We emphasize whether the breakthrough technology is incumbent-enhancing
or entry-generating. Incumbent-enhancing breakthroughs are the same as Tushman and Anderson’s competence-enhancing breakthroughs. Entry-generating
breakthroughs include both their competence-destroying technologies and breakthroughs that create whole new industries. The key example of entry-generating
breakthroughs is the entrepreneurial start-up phase in high-technology industries characterized by a high valuation on ability to practice the new technology
while any incumbent firms’ expertise in a previous technology becomes obsolete and, often, a barrier to adoption of the new technology.
Much recent research — including ours — has concentrated on industries
being formed or transformed in response to entry-generating technological
breakthroughs. Nonetheless, Tushman and Anderson (1986) provide an impressive list of incumbent-enhancing breakthroughs, and the recent work by Harberger (1998) and his associates suggests that metamorphic progress of this type
is also a relatively frequent feature of a growing economy. In contrast, we
(Darby and Zucker, 2001; Zucker and Darby, 2001) found in Japan that the technological breakthroughs that led to a wave of entrepreneurial start-ups in the


United States were adopted more or less successfully either by established firms
with congruent scientific bases that took advantage of the opportunity to enter
new industries or by technological transformation of incumbent firms. The key
institutional difference that appears to have led to different metamorphic
processes in the two countries was the (recently relaxed) Japanese prohibition
on public offerings of stock in firms without an established record of substantial profitability. The extraordinarily long period of private financing implied by this
prohibition effectively eliminated the possibility of venture-capital-financed
Japanese start-up firms.
Research on the formation/transformation entrepreneurial phase in high-technology industries has proceeded far enough that we can begin to define and
(in some cases) tentatively answer key questions about processes that shape
metamorphic change and ultimately the total rate of technological progress in
the economy. We focus here on entry-generating breakthroughs, but incumbent-enhancing metamorphic change also may be important for technological progress.9
Many Are Called, but Few Are Chosen
Entry-generating breakthroughs are characterized by a formation phase of
perhaps 10 to 20 years (see Figure 2) during which many more firms enter the
industry than will survive in the long run. In the following consolidation or
shakeout phase of perhaps 10 to 30 years, most of these firms are either
absorbed by the industry’s winners or leave the industry at their owners’ initiative or that of their creditors.10 This occurs even as industry output continues to
grow dramatically; average (surviving) firm size grows even more rapidly. The
consolidation phase may be followed by an extended period of stability corresponding to the standard price-theory model of entry and exit maintaining zero economic profits and optimal firm size. A final phase of decline is not necessary
but often observed. Alternatively, the entire process may be interrupted in any
phase by another metamorphic breakthrough.
Why are so many more new firms or new operations of existing firms created than are really needed? Is their creation and destruction a case of organizational waste and entrepreneurial misjudgment or is firm-number overshooting
valuable and entry ex ante justified? Uncertainty about which entrants will be most
successful in implementing the new technology is sufficient for the observed pattern to be efficient, as shown by Jovanovic and MacDonald (1994) and Ericson
and Pakes (1995), and recently elaborated by Bernardo and Chowdhry (2002).
For incumbent-enhancing breakthroughs it is obvious that the most successful implementers will be among the incumbent firms where much expertise
relative to the technology and cooperative technologies is present. Indeed, one
or more of these firms is likely to be the source of the breakthrough. Though
inventing and early adopting incumbent firms are likely to improve their stand-

22

Michael R. Darby and Lynne G. Zucker

ing in the industry (Tushman and Anderson, 1986), there is no reason for any
outsiders to enter in the expectation that they will outcompete the incumbents.
Thus the overshooting of firm numbers is characteristic of only entry-generating
metamorphic progress.
Although there are many hopeful entrants in the latter case, few of them
typically survive. For example, Table 1 presents some data on new U.S. biotechnology firms in 1989 drawn from a study we did with Jeff Armstrong (Zucker et
al., 2002). The first of these firms was founded in 1976 to exploit the string of
technological breakthroughs in the life sciences, most of which followed directly
or indirectly from the invention of genetic engineering as reported by Cohen et
al. (1973). Firm formation accelerated after the 1980 U.S. Supreme Court decision
that upheld the patenting of engineered cells and cell parts and the underlying
recombinant-DNA technology covered by the Cohen-Boyer patent (1980). By
1990 over half of the employees in the industry were concentrated in the top 10%
of the firms, and over two-thirds of industry employment was in the top 20% of the firms.
Figure 3 illustrates these data and shows that the same top 21 firms (out of 211)
also accounted for 54% of the growth in employment from 1989 to 1994.11 More
generally, Lamoreaux and Sokoloff (2002, Table 6) show that U.S. patents have
been concentrated in a relatively few career inventors since the 1870s.
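The concentration arithmetic behind Table 1 is straightforward to reproduce for any firm-level employment list. The sketch below uses made-up employment figures for a small hypothetical industry, not the actual Zucker et al. (2002) data:

```python
# Toy employment figures for a young industry of 10 firms
# (hypothetical values, not the 1989 biotech database).
employment = sorted(
    [5, 8, 12, 15, 20, 25, 40, 60, 150, 400],
    reverse=True,
)

total = sum(employment)

def top_share(emp, fraction):
    """Share of industry employment held by the top `fraction` of firms."""
    k = max(1, round(len(emp) * fraction))
    return sum(emp[:k]) / total

print(f"top 10% of firms hold {top_share(employment, 0.10):.0%} of employment")
print(f"top 20% of firms hold {top_share(employment, 0.20):.0%} of employment")
```

Applied to a real firm-level list, the same helper computes the analogous shares for any cutoff.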
Academic Science Matters a Lot
Entry-generating metamorphic progress almost always arises from outside
the industry(ies) to which it will be applied. Many observers have pointed to
anecdotal evidence of the importance of research universities as a source of
breakthroughs that have created such regions as the Silicon Valley around Stanford; Route 128 around MIT and Harvard; and the Research Triangle Park around
Duke, the University of North Carolina–Chapel Hill, and North Carolina State
University.12 Mansfield (1995) documents the important role played by academic
research in even incremental industrial R&D, that is, in perfective progress.
Table 1

Concentration of Employment in New Biotech Firms, 1989

SOURCE: Calculations of the authors for the
biotech-using firms that disclosed employment for
1989 and 1994 and were formed after 1975 in the
Zucker et al. (2002) database.


Figure 3

Concentration of U.S. New Biotechnology Firms’ 1989 Employment and
Employment Change 1989–94

A stream of recent research on innovation in the United States has found
evidence of “geographically localized knowledge spillovers” occurring in areas
around major universities: Jaffe (1986, 1989), Jaffe et al. (1993), Audretsch and
Feldman (1996), and Henderson et al. (1998). The underlying assumption is that
proximity to a major university itself provides technological opportunity; the
localization is assumed to be due to the social ties between university and firm
employees or to firm employees’ access to seminars at the university. The
importance of distance is strengthened by Adams and Jaffe’s (1996) finding that
geographic distance is an important impediment to flow of technology even
within the firm.
Zucker et al. (1998b) and Darby and Zucker (2001) find that firms are
more likely to begin using biotechnology near where and when “star” bioscientists are actively publishing in the United States and Japan, respectively.
Although these findings have been cited as evidence of geographically localized
knowledge spillovers, we read our results — and those of the other authors
cited — as only demonstrating geographical localization of knowledge. Zucker
et al. (1998a, 2002) and Zucker and Darby (2001) show for California, the United
States, and Japan, respectively, that university effects on nearby firm R&D productivity are highly concentrated in the particular firms with bench-science
working relationships with top academic scientists and practically absent otherwise. We identify these academic-firm links by the academic scientist publishing a journal article that also has one or more firm-affiliated authors.13 Table 2
and Figure 4 indicate the close connection between links to top research university faculty and success: ranking firms by their linked articles up to 1989 does
about as well as ranking by 1989 employment at predicting the 1989 – 94
employment increase.

Table 2

Relation of Employment in New Biotech Firms to Links to High Science as
Represented by Articles Coauthored with Scientists in Top 112 Research Universities

SOURCE: Calculations of the authors for the biotech-using firms that disclosed employment for 1989 and
1994 and were formed after 1975 in the Zucker et al. (2002) database.
NOTES: Core links are a count of articles published through 1989 in journals directly related to biotechnology, indexed by the Institute for Scientific Information, with one or more authors affiliated with the firm and one or more authors affiliated with any of the top 112 U.S. research universities in terms of receipt of federal research funding. Other links are an analogous count for articles published through 1989 in journals not directly related to biotechnology.

Put another way, an investor who restricted his or her
biotech portfolio at the end of 1989 to only the 22.7% of firms with any linked
firm-research university core biotech publications or the 10.9% with more than
one or two of these would include all of the top 10 firms and nearly all of the
base-hit firms. The message of these simple correlations holds up in the context
of Poisson regressions, which allow for other determinants. Figure 5 reports the
strong estimated effects of these linked articles on firm research productivity in
California and Japan.14
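As a hedged illustration of the Poisson-regression machinery invoked here (not the authors’ actual specification or data), a count measure of firm success can be regressed on linked articles by Newton-Raphson on the log-likelihood. Everything below, from the coefficient values to the covariate range, is synthetic.

```python
import math
import random

random.seed(1)

def poisson_draw(lam):
    """Poisson random variate via Knuth's multiplication algorithm."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < threshold:
            return k
        k += 1

# Synthetic firm-level data (purely illustrative): x = linked articles,
# y = count outcome, generated with true coefficients a = 0.5, b = 0.8.
n = 500
x = [random.uniform(0.0, 2.0) for _ in range(n)]
y = [poisson_draw(math.exp(0.5 + 0.8 * xi)) for xi in x]

# Maximum likelihood for E[y|x] = exp(a + b*x), fit by Newton-Raphson
# on the concave Poisson log-likelihood.
a = math.log(sum(y) / n)  # start from the intercept-only fit
b = 0.0
for _ in range(25):
    lam = [math.exp(a + b * xi) for xi in x]
    ga = sum(yi - li for yi, li in zip(y, lam))                # dL/da
    gb = sum((yi - li) * xi for yi, li, xi in zip(y, lam, x))  # dL/db
    haa = sum(lam)                                     # -d2L/da2
    hab = sum(li * xi for li, xi in zip(lam, x))       # -d2L/dadb
    hbb = sum(li * xi * xi for li, xi in zip(lam, x))  # -d2L/db2
    det = haa * hbb - hab * hab
    a += (hbb * ga - hab * gb) / det  # 2x2 Newton step
    b += (haa * gb - hab * ga) / det

print(f"estimated a = {a:.2f}, b = {b:.2f} (true values: 0.5, 0.8)")
```

In practice one would use a packaged GLM routine with additional firm-level controls; the point of the sketch is only that the estimator recovers a positive effect of linked articles when one is present in the data.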
Fieldwork — supported by analysis of the timing of the academic scientists’
first articles with a firm and its founding — indicates that these academic-firm
copublishing relationships most often connote that the academic scientist was a
firm founder or at least presently has a significant financial interest in the firm.15
Indeed, Herbert Boyer, of the Cohen-Boyer team that discovered recombinant
DNA, or genetic engineering, and entrepreneur Robert Swanson founded the first
of the new biotech firms (Genentech). Similarly, Torero (1998) finds that a few
hundred top scientists and engineers account for a large part of the patenting in
the semiconductor industry, and firm success depends heavily on the degree of
involvement of those stars in a firm. Where and when these star semiconductor
scientists and engineers are working is an important determinant of where and
when new semiconductor firms are established (Torero et al., 2001).

Figure 4

Concentration of New Biotechnology Firms’ Links to Top Research Universities
for 1989 Employment Deciles

Figure 5

Estimated Effects of Number of University Star-Firm Linked Articles on
Success of Californian and Japanese Biotechnology-Using Firms
IV. NATURAL EXCLUDABILITY AND THE DIFFUSION OF METAMORPHIC BREAKTHROUGHS
The central role of a relatively small number of scientists and engineers in
determining success of high-technology firms forces us to rethink the nature of
technology. Economists have traditionally analyzed technology as if it were a
public good with a marginal cost of (re)production of zero (Nelson, 1959;
Arrow, 1962). Despite the seminal works of Stigler (1961) and Becker (1964)
spawning the vast literatures on the economics of costly information and human
capital, most analyses of technology, including the “new” endogenous growth
models, typically conceive of technology as information that can be recorded on
a floppy disk and then be costlessly reproduced and applied. Romer (1990), for
example, acknowledges that this nonrivalrous characterization is an idealization
but argues that it is much more expensive to create a new technology than to learn
it and that the idealization is harmless. We disagree.
When a major scientific breakthrough occurs and creates the opportunity
for a corresponding incumbent-enhancing or entry-generating technological
breakthrough, it may be very difficult for anyone other than the discovering scientists or their close working associates to reduce the discovery to technological practice. The ideas are far from codified and even the discovering scientists
are not sure exactly which parts of what they are doing are crucial. Published
results—including those in a patent—may not be reproducible unless the reproducing scientist goes to the discoverer’s lab and learns by doing with him or
her.16 In biotechnology, patent disclosures are often made by deposit of a cell
line with an independent agent so that they will be publicly available at the
expiration of the patent term: it is simply not possible to write down what a person skilled in the art would have to do to obtain the same organism.
Breakthrough discoveries leading to metamorphic growth are often of the
same nature as Griliches’s (1957, 1960) classic case of corn hybridization: an
invention of a way of inventing. Such platform technologies, involving new techniques and instrumentation, are typically hard to work with at first, and their
diffusion is based on learning-by-doing-with at the laboratory bench: that is, by
immediate observation and practice with someone who holds the tacit knowledge of how to make the technique work.
Not only are breakthrough discoveries often characterized by extensive
tacit knowledge; in addition, only a relatively few top scientists near the frontier of the area
are likely to be able to figure out how the discovery might be used to actually
produce something of economic value. Although everyone might want to pluck
the newly available low-lying fruit, not everyone can see where they are. The


late Robert Swanson, founding CEO of Genentech, liked to tell the story of how
the firm obtained such a favorable royalty deal for Humulin® (human insulin
produced by genetically modified bacteria) from the usually shrewd bargainers
at Eli Lilly and Co. The scientists there were so sure that Herbert Boyer and
Genentech were attempting the impossible that no serious bargaining was done
until Genentech notified Lilly that they were holding a press conference
announcing success in three days.
We say that this embodied knowledge — transferred slowly only by learning-by-doing-with — is characterized by natural excludability. Even if the university is assigned a patent to the discovery, most of the value accrues to the
discoverers because without their cooperation the patent cannot be used. Our
fieldwork for biotechnology and more general studies by Jensen and Thursby
(2001) and Thursby and Thursby (2002) support the natural excludability
hypothesis. For example, in the Jensen and Thursby (2001, 243) survey of Technology Transfer Office managers, “For 71 percent of the inventions licensed,
respondents claim that successful commercialization requires cooperation by
the inventor and licensee in further development.”
Diffusion with Natural Excludability
If new metamorphic technologies were really like software on a disk, diffusion of this highly profitable knowledge would be limited only by the speed
with which people realize the value of the new processes (Mansfield, 1961;
Griliches and Schmookler, 1963). In contrast to this potentially infinite rate of
adoption, natural excludability limits the extent of diffusion to an exponential
times the number of discoverers.
To see this, consider biotechnology in 1973 and suppose that six people
in two laboratories knew how to do genetic engineering (recombinant DNA).
Suppose one knowledgeable person can transfer the knowledge to at most one
person per year. Then the maximum number of potential practitioners of the art
in year t (t = 0 in 1973) is 6 · 2^t. Even if this rapid rate of diffusion were possible, there would only be 6 · 2^10 = 6,144 potential practitioners of genetic engineering in 1983, each of whom would still be earning a very large shadow
wage. Over time, the value of the knowledge declines as the number of practitioners increases until new apprentices earn only the normal human capital
return to their investment in learning the knowledge.
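The doubling bound above can be sketched in a few lines. In this minimal illustration, the six initial practitioners and the one-apprentice-per-year cap come from the text; the function name is ours:

```python
def max_practitioners(initial: int, years: int) -> int:
    """Upper bound on practitioners under natural excludability:
    each person holding the tacit knowledge can train at most one
    new person per year, so the stock can at most double annually."""
    return initial * 2 ** years

# Six people in two labs knew recombinant DNA techniques in 1973 (t = 0).
print(max_practitioners(6, 0))   # 1973: 6
print(max_practitioners(6, 10))  # 1983: 6144, the 6,144 bound in the text
```

Even at this maximal rate, a decade after discovery the knowledge is confined to a few thousand people, which is why early practitioners can continue to command a large shadow wage.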
Thus there is a varying period of time during which the discoverers and
early learners derive supranormal returns from practicing their knowledge and
also benefit from lower-cost assistants due to the implicit tuition chain. This
period of time can be long enough to significantly impact the formative period
of a new industry, such as biotechnology or nanotechnology, or transformative
periods, such as have occurred in semiconductors. We have formulated a much

28

Michael R. Darby and Lynne G. Zucker

more elaborate model involving multiyear learning in a lab with the number of
learners in the lab and their probability and lag to leading their own lab, all as
a function of the value of the knowledge, but the basic message of at most exponential growth from a small base remains intact. Zucker et al. (2002) illustrate
both the geometric growth in scientists publishing their first paper reporting a
genetic-sequence discovery and the continuing tacit nature of the knowledge.17
Discovering and other top scientists and engineers play a key role in metamorphic progress, as we have seen for biotechnology and semiconductors, for lasers as described by Sleeper (1998), and for nanotechnology (based on our
new research). We believe that natural excludability makes this role a frequent
feature of metamorphic progress. Note that even where university professors
follow the rules and promptly disclose inventions for patenting by the university under the Bayh-Dole Act, the value of those patents depends on the
usual necessity of licensing the patent to a firm on terms such that the discovering professors are willing to cooperate in the commercialization process.
V. UNSETTLED WELFARE AND POLICY ISSUES
Academic purists often express concerns about faculty involvement in
commercialization of their discoveries. These concerns include: (1) lost scientific productivity of the scientists, (2) reduction in the amount of science contributed to the common pool by publishing, (3) deflection of the development
of science toward more commercially relevant problems, (4) conflict of interest
leading to scientists’ distorting their findings, and (5) conflict of commitment to
the university. Our research can shed light on some of these concerns, but others remain open issues. We do not consider more radical objections to scientific
progress and productivity growth because we believe that these are well
answered in more general debates.
Lost Scientific Productivity of the Scientists
One of the initial motivations of the biotechnology study that spawned our
current larger growth, science, and technology project was to examine the cost
in lost scientific productivity of commercial involvement of the very best academic bioscientists. Surprisingly, we found robust evidence that scientific productivity of these scientists increases during their commercial involvement (as
compared to their own productivity before or after) on the standard measures
of publications and citations to those publications (Zucker and Darby, 1995,
1996). To give an extreme example, the most commercially involved star scientists (those ever affiliated with a firm and with patents) have nine times as many
citations as do star scientists who are never affiliated or linked to a firm and
have no patents. About half of that difference reflects the fact that those who
become involved are more energetic to begin with, and the rest reflects the
increase in publications per year and citations per publication during their years
of firm involvement.
In the half-decade since we first published those findings, we have further
tested them on an expanded U.S. data set using improved methodology and
replicated them for Japan. Because publishing increases robustly for scientists
working with firms, we were forced to reconsider our initial assumptions. First,
the delays in publication required for patenting by firms are typically on the
order of three months, and universities also require delay while they prepare
patent applications with possibly less efficiency. Furthermore, purely academic
scientists also may prefer to delay publication for strategic reasons; one respondent put it this way: “When I was a pure academic, I didn’t exactly throw away
my lead by publishing rich discoveries until I put together three or four articles
following them up.” We may not only have overestimated the increased returns
to secrecy but also missed two factors that seem to swamp any higher value for
secrecy.
The first countervailing factor is that commercial involvement gives the scientists far more resources to do their work. Not only are venture capitalists
and investment bankers easier funding sources (per dollar) than the National
Institutes of Health or the National Science Foundation, but commercial involvement also permits scientists
the luxury of research assistants who are highly experienced and skilled long-term employees instead of first-year graduate students performing an assay or
protocol for the first time.
The second countervailing factor is that the best scientists really love doing
science! That is, doing science is a luxury good for which the income elasticity
is greater than 1. When their company goes public, they consume not only more
Ferraris but more experiments.18
Reduction in the Amount of Science Contributed to the
Common Pool by Publishing
These concerns in part refer to publishing activities by scientists who are
commercially involved, and those concerns have been addressed above. There is, however, a
broader concern that the commercialization of science will reduce the amount
of publishing by scientists generally — thus reducing the positive externalities
that enrich the entire enterprise. Put another way, extensive faculty involvement
in the commercial world may import commercial norms of trade secrecy into the
academy. Our evidence suggests that just the opposite is true and that the new
biotech firms — largely started with active faculty as principals—have exported
academic values of publishing to the industries in which they are involved. The
new biotech firms were a major organizational form/design innovation that
forced the surviving incumbents to permit and reward journal publication to
compete for the best and brightest scientists who are needed for the firm to survive and prosper. As the top research executive at one of the largest pharmaceutical firms put it:
We see some danger of losing our competitive advantage by publishing, but
a much greater danger if we do anything that deters the best scientists from
coming here. Further, we need for our scientists to have great reputations in
order to bring others like them to [the firm]. We are the beneficiaries of
world-wide scientific research, and thus we also need to contribute to this
pool of scientific knowledge, creating a public good.... Relative to new
biotechnology firms, [we] may believe more strongly in the commonality of
research tools because we have a wider array of methodologies and products. (Zucker and Darby, 1997, 438 – 39).

Table 3 is an extract of the “Other References” (i.e., nonpatent references) section from Goeddel and Heyneker’s (1982) U.S. Patent 4,342,832,
assigned at issue to Genentech. The patent was applied for in July 1979 and
cites related work by the inventors (Goeddel et al., 1979). Note the extensive
citations to other work published in leading academic journals, indicating the
continuity between basic science and new intellectual property in the science-driven industries. Indeed, much research done at firms is openly published
either without a patent or shortly after one is applied for. In the most successful firms, world-class scientists are more likely to follow high-stakes, high-return
R&D strategies instead of more predictable incremental strategies, as indicated
by the larger jump-size in their stock price when success or failure is revealed
(Darby et al., 2002).
The evidence is clear that the involvement of university faculty in commercialization of their discoveries has widened the norms of publication of
research results into the very science-driven industries where there is the most
to be learned from firm research. It is hard to credit that other university scientists are publishing less while those directly involved are publishing more; so
we conclude that there has likely been an overall increase in the propensity to
publish research results rather than the hypothesized decrease.
Deflection of the Development of Science Toward More Commercially
Relevant Problems
We believe that the trajectory of science is bent to a degree toward more
commercially relevant problems. Just as provision of government research funding targeted to politically important issues would seem to have some impact on
the trajectory of science, we would expect that the availability of commercial
funding should also have an impact. However, it is very hard to develop a counterfactual trajectory for science, so our evidence is indirect: Zucker et al. (2002) find that bioscientists working in areas more directly relevant to human disease are more likely to become linked to firms, and, as noted, scientists who are linked to firms are both generally more productive of articles and citations to those articles and significantly more productive during their linkage than they were previously. Thus there must be some impact of commercial relevance on the course of science. However, because more science is being done in total and progress in one area depends partly on progress in other areas, we cannot conclude unambiguously that there is less progress in the less commercially applicable areas than there would have been in the absence of commercial involvement.

Table 3
Extract from U.S. Patent 4,342,832, Assigned to Genentech, Illustrating Close Ties to Academic Science


Even if there were less science in the less commercially applicable areas,
it does not follow that this is a cost rather than a benefit. In the case of biotechnology, it means that more people are being spared death and suffering from disease, and spared starvation due to high food costs. Possibly it is
appropriate that scientists weigh these benefits directly and in terms of their
financial implications in choosing which problems to work on. Even in economics, there are some distinguished practitioners who argue that their science
would be healthier if empirical relevance played a greater role in allocation of
rewards and hence choice of problems.
Conflict of Interest Leading to Scientists’ Distorting Their Findings
From time to time cases of scientific fraud emerge, and the fear is that their
frequency is inevitably increased where scientists can profit directly from selling products or shares of stock based on such claims. This is probably a very
small risk for star scientists who are likely at a robust corner solution due to reputations of immense value and realistic prospects for the Nobel and other major
prizes. Where reputation value is less, one would expect that fraud increases
with the returns. However, we do not normally argue against wealth creation
on the grounds that it increases the incentives for theft and fraud.
Conflict of Commitment to the University
Finally, there is an argument that the opportunity to commercialize discoveries distracts faculty from the roles for which they are paid: to instruct, do
research, and attend committee meetings. We can leave out any threat to
research because that unambiguously increases in quantity and quality during
commercial involvement, so the threat is concentrated in the areas of teaching
and collegiality. Even for teaching, the issues are complicated by the extraordinarily high value of training received by apprentice researchers in the
laboratories of scientists making valuable discoveries with natural excludability.
If the possibility of working with such scientists increases the applications to the
university in the relevant department(s) or school(s), can we truly say that their
teaching output has decreased?
Moreover, in addressing the question of diversion from commitment to the
university, we must face the issue that the roles or commitments of a professor
are not standardized and are traditionally subject to individual negotiation as
discussed by Stigler (1950) and Stinchcombe (1990). This immediately raises the
issues of incentive packages and compensating differentials in wages of professors who — if they make a commercially valuable discovery — will tend to profit
from the discovery as well as do more research and less teaching and collegiality. Normally, we would suppose that markets handle these contracting
issues rather efficiently, although not perfectly compared to a costless world
(Darby and Karni, 1973; Aghion and Tirole, 1994). Possibly the complaints
about conflict of commitments reflect more the feeling of some faculty in other
departments that they work just as hard and should be equally rewarded by the
market.
VI. CONCLUSIONS: A DRAFT RESEARCH AGENDA
The endogenous growth literature assumes that technology is a nonrivalrous
recipe that is costly to discover but costless to replicate. We saw in section IV
that for many industries undergoing metamorphic progress, technology instead
possesses natural excludability, resides in particular individuals, and diffuses by
learning-by-doing-with. That is, breakthrough technologies are better thought of
as rivalrous human capital, not a recipe on a disk capable of free copying. It
follows that the focus of the endogenous growth literature should shift from the
theory of the firm toward understanding the motivations of discovering scientists to report or bootleg discoveries, to found new firms or cooperate with
existing firms in commercializing their discoveries, and, most important, to do the
initial research that creates the opportunity for a commercial breakthrough. Key
issues largely ignored in the current growth literature include compensating
wage differentials, incentive pay, rents and quasi-rents, and moral hazard along
the lines of Aghion and Tirole (1994). Jensen and Thursby (2001), Thursby and
Thursby (2000), and Zucker et al. (2002) explicitly pursue those issues.
If the most important breakthrough technologies are typically embodied in
individual scientists and transferred or diffused by learning-by-doing-with, then
the incentives to discover are considerably higher than conventionally analyzed
even if the university or firm gets nominal ownership of the intellectual property
rights in the discovery through a patent. The discoverers and patent owner have
an interesting bargaining problem because the patent is worthless unless the discoverers cooperate with the licensee(s), often firms in which the discoverers have
founders’ interests. On the other hand, the angel investors and venture capitalists financing discoverers’ firms want to be sure that the intellectual property is
secure and tied down, so the discoverers must either negotiate a reasonable
agreement with their employers (the patent owners) or take extraordinary steps
to document that the discoveries were not made with, say, university resources.
Hence, the plethora of firm laboratories very near campuses and the attraction
of university-adjacent science parks to ensure that follow-up discoveries clearly
belong to the firms and not the universities.
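The bargaining problem has a simple structure worth making explicit. As a toy illustration (our construction, not the authors’ model): if commercialization yields a surplus only when both the patent owner and the discoverer participate, then each side’s outside option is zero, and a symmetric Nash bargain splits the surplus evenly, however the legal ownership of the patent is assigned.

```python
def nash_split(surplus: float, d_outside: float = 0.0, o_outside: float = 0.0):
    """Symmetric Nash bargaining over a surplus that exists only if both
    the discoverer and the patent owner cooperate. Outside options are
    what each earns if bargaining fails: zero here, because the patent
    alone is worthless without the discoverer's tacit knowledge, and the
    discoverer cannot legally use the patented invention alone."""
    joint = surplus - d_outside - o_outside
    return d_outside + joint / 2, o_outside + joint / 2

# With no outside options, the discoverer captures half the value
# despite holding no patent -- the force of natural excludability.
print(nash_split(10.0))  # (5.0, 5.0)
```

Raising either side’s outside option (say, the discoverer’s ability to found a rival firm) shifts the split accordingly, which is how the hypothetical function expresses the leverage discussed in the text.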
Our approach also suggests that the analysis of spillovers (the science and
technology literature’s term for positive externalities) is basically flawed. The
spillovers from the ivory tower that are widely used to explain geographically
localized knowledge (i.e., increased research productivity for firms) in the
neighborhood of great research universities do not hold up to rigorous empirical analysis. Increased research productivity is very large in firms with specific
identifiable links to discovering university scientists and engineers and otherwise nil or insignificant. The more important positive externalities associated
with commercialization of university discoveries have been neglected in the literature. These are the nonlocalized spillovers associated with increased publishing by the university scientists working with the firms and by the scientists
and engineers employed by the firms.19
We know from a great deal of empirical research in the field of growth
accounting that technological progress and growth in the average level
of human capital are the ultimate determinants of growth in output per capita.
The endogenous growth literature has started the important work of understanding the determinants of technological progress in an aggregate model. The
aggregate models to date are oriented toward explaining what we call perfective progress — based on incremental R&D performed by incumbent firms. We
argue that metamorphic progress is an equal or greater source of technological
progress and that most often (but not always) metamorphic progress involves
discoveries made by scientists and engineers external to the existing industry
and involves embodied knowledge that is protected by natural excludability and
diffused by learning-by-doing-with. We believe that building on these ideas will
strengthen both the science and technology and the endogenous growth literatures with the ultimate result that we understand what institutional arrangements
are most conducive to growth in the standard of living.
NOTES
An earlier version of this article was delivered as Michael R. Darby’s presidential address at the
Western Economic Association International Meetings, San Francisco, CA, July 6, 2001. The
research has been supported by grants from the University of California’s Industry – University
Cooperative Research Program, the University of California Systemwide Biotechnology Research
and Education Program, and the Alfred P. Sloan Foundation through the NBER Research Program
on Industrial Technology and Productivity. We are indebted to many coauthors, postdoctoral fellows, and graduate and undergraduate research assistants who have contributed to the development of these ideas over the past decade. This article is part of the NBER’s research program in
productivity. Any opinions expressed are those of the authors and not those of their employers or
the National Bureau of Economic Research.
1. Abba Lerner (1953) also propagated the Crusonia plant.

2. See, for example, the papers collected in Federal Reserve Bank of Kansas City (2001).

3. If it were the point of the article, we would do a full analysis of productivity growth taking account of changes in capital, labor-quality adjustments for the hours worked, and procyclical movements in productivity (see Darby 1984a, 1984b). Before undertaking such an effort, we would want to see evidence of an interesting anomaly in cruder measures of productivity growth. Central bankers saying that the economy works differently from before so that they can ignore the usual signs of monetary overstimulus hardly qualify as an anomaly.

4. Rudebusch (2000) clearly walked a tight line between professional and institutional loyalty: “As noted above, there is, of course, always a large amount of uncertainty about estimates of the growth rate of potential output. Indeed, based on a strict statistical interpretation of Figure 1, there is a one in five chance that there has been no change in the growth of potential output in the 1990s.”

5. Growth in overall industry size can be attributed to movement down an elastic demand curve as more efficient, lower-cost producers replace higher-cost producers. The question is why it takes so long for the low-cost producers to emerge and drive out the others.

6. Zucker (1989), Baum and Powell (1995), as well as the review articles by Baum (1996) and by Singh and Lumsden (1990), raise significant questions about the directions of theory and research in organizational ecology, while also stressing the value of particular empirical studies done under the ecology banner.

7. Compare infant mortality in Wedervang (1965) to liability of smallness, ruling out age effects, in Freeman et al. (1983).

8. New industries may eliminate or greatly reduce the size of other industries previously satisfying the fundamental function — for example, the advent of the automobile industry all but eliminated both the buggy and buggy whip industries. In principle, we could view the present automobile and vestigial buggy industries as a transformed personal land transportation industry, but it is not apparent what would be gained from such semantic niceties.

9. The range and impact of incumbent-enhancing metamorphic change is suggested by Harberger’s ongoing work on major cost reductions in existing industries.

10. This process may interact with waves of optimism and pessimism about the future of an emerging industry. For example, despite a promising and ultimately successful pipeline of drug discoveries, Cetus faced a cash shortage during a phase of biotech pessimism and merged into Chiron.

11. Note that we maintained the decile sorting by level of 1989 employment in Figure 3. If we had instead sorted by employment change along the lines of Harberger (1998), the top and second deciles so defined would account for 75.8% and 17.4% of the net employment change with only 6.7% left for the other firms. The bottom 80% (169 firms) on this basis includes 63 firms with negative change, 10 with no change, and 96 firms with positive employment change.

12. See especially Dorfman (1988), Jones and Vedlitz (1988), Smilor et al. (1988), and Bania et al. (1993). There are, of course, other important sources of geographic agglomeration (see, for example, Head et al., 1995).

13. Publications involving scientists at two firms are extremely rare. Furthermore, the scientists practice serial monogamy, each usually writing with only one firm during his or her career and, in the alternative, writing with only one firm at a time.

14. We introduce major methodological innovations in Zucker et al. (2002), exploiting a substantially broadened database, so that simple comparisons are not possible, although the results are very supportive of the importance of academic-firm linked articles.

15. In Japan, explicit principal status in a firm is forbidden to professors at the national universities. However, continuing unreported cash payments on the order of the scientist’s salary are common (and rarely prosecuted, but see Japan Times, 1999, for a counterexample), as are lucrative corporate directorships promised when the professor “descends from heaven” at age 55 or 60 (i.e., postretirement).

16. Sometimes when an important result is difficult to reproduce in another location, the entire laboratory is reproduced, including the placement of equipment down to the coffee urn. If the result can then be obtained, detective work ensues to figure out what features are crucial. In a similar vein, during our fieldwork we heard one distinguished scientist grumble that another “had stolen [his] best cloner.” This is not a remark applicable to something easy to learn from material written on a floppy disk!

17. Tacitness is indicated by the fact that the bulk of new authors reporting genetic-sequence discoveries for the first time were writing as coauthors with previously published discoverers, and this continued to the end of the data set (1994), as reported in Zucker et al. (2002).

18. Milton Friedman reminds us that economists are not immune to this science as (tax-exempt or conspicuous?) consumption phenomenon: Irving Fisher amassed a fortune inventing a visible file system and founding one of the constituents of Remington-Rand. He used it to hire a sizable staff of assistants to compute (X′X)⁻¹X′y in the days before electric calculators. The ability to estimate multiple regressions was a powerful professional advantage in the 1920s.

19. We are indebted to Milton Friedman for this point.

REFERENCES
Adams, J. D., and A. B. Jaffe. “Bounding the Effects of R&D: An Investigation Using Matched
Establishment-Firm Data.” Rand Journal of Economics, 27(4), 1996, 700 – 21.
Aghion, P., and J. Tirole. “The Management of Innovation.” Quarterly Journal of Economics,
109(4), 1994, 1185 – 209.
Aldrich, H., and U. H. Staber. “Organizing Business Interests: Patterns of Trade Association
Foundings, Transformations, and Death,” in Ecological Models of Organizations, edited by Glenn
R. Carroll. Cambridge: Ballinger, 1988, 111– 26.
Aldrich, H. E., and R. Waldinger. “Ethnicity and Entrepreneurship.” Annual Review of Sociology,
16, 1990, 111– 35.
Arrow, K. J. “Economic Welfare and the Allocation of Resources for Invention,” in The Rate and
Direction of Inventive Activity: Economic and Social Factors, edited by R. R. Nelson. Princeton:
Princeton University Press, 1962.
Audretsch, D. B. “New-Firm Survival and the Technological Regime.” Review of Economics and
Statistics, 73(3), 1991, 441– 50.
Audretsch, D. B., and M. P. Feldman. “R&D Spillovers and the Geography of Innovation and Production.” American Economic Review, 86(3), 1996, 630 – 40.
Bania, N., R. Eberts, and M. Fogarty. “Universities and the Startup of New Companies: Can We
Generalize from Route 128 and Silicon Valley?” Review of Economics and Statistics, 75(4), 1993,
761– 66.
Barnett, W. P., and T. L. Amburgey. “Do Larger Organizations Generate Stronger Competition?,”
in Organizational Evolution: New Directions, edited by J. V. Singh. Newbury Park, CA: Sage,
1990, 78 –102.
Baum, J. A. C. “Organizational Ecology,” in Handbook of Organization Studies, edited by S. R.
Clegg, C. Hardy, and W. R. Nord. London: Sage, 1996, 77–114.
Baum, J. A. C., and W. W. Powell. “Cultivating an Institutional Ecology of Organizations: Comment
on Hannan, Carroll, Dundon, and Torres.” American Sociological Review, 60(4), 1995, 529 – 38.
Becker, G. S. Human Capital: A Theoretical and Empirical Analysis, with Special Reference to
Education. Chicago: University of Chicago Press, 1964.
Bernardo, A. E., and B. Chowdhry. “Resources, Real Options, and Corporate Strategy.” Journal
of Financial Economics, 2002, in press.
Brittain, J. W., and J. H. Freeman. “Entrepreneurship in the Semiconductor Industry.” Paper presented at the 46th Annual Meeting of the Academy of Management, New Orleans, 1986.
Carroll, G. R., and M. T. Hannan. “Density Dependence in the Evolution of Populations of Newspaper Organizations.” American Sociological Review, 54(4), 1989, 524 – 41.
Caves, R. E. “Industrial Organization and New Findings on the Turnover and Mobility of Firms.”
Journal of Economic Literature, 36(4), 1998, 1947– 82.
Cohen, S., and H. Boyer. “Process for Producing Biologically Functional Molecular Chimeras.”
U.S. Patent number 4,237,224, granted December 2, 1980.
Cohen, S., A. Chang, H. Boyer, and R. Helling. “Construction of Biologically Functional Bacterial
Plasmids in Vitro.” Proceedings of the National Academy of Sciences, 70(11), 1973, 3240 – 44.
Darby, M. R. Labor Force, Employment, and Productivity in Historical Perspective. Los Angeles:
UCLA Institute of Industrial Relations, 1984a.
———. “The U.S. Productivity Slowdown: A Case of Statistical Myopia.” American Economic
Review, 74(3), 1984b, 301– 22.
Darby, M. R., and E. Karni. “Free Competition and the Optimal Amount of Fraud.” Journal of Law
and Economics, 16(2), 1973, 67– 88.
Darby, M. R., and L. G. Zucker. “Change or Die: The Adoption of Biotechnology in the Japanese
and U.S. Pharmaceutical Industries.” Comparative Studies of Technological Evolution, 7, 2001,
85 –125.
Darby, M. R., Q. Liu, and L. G. Zucker. “High Stakes in High Technology: High-Tech Market Values as Options.” UCLA Working Paper, April 2002.
Davis, S. J., and J. Haltiwanger. “Gross Job Creation, Gross Job Destruction, and Employment
Reallocation.” Quarterly Journal of Economics, 107(3), 1992, 819 – 63.
Dorfman, N. S. “Route 128: The Development of a Regional High Technology Economy,” in The
Massachusetts Miracle: High Technology and Economic Revitalization, edited by D. Lampe.
Cambridge, MA: MIT Press, 1988, 240 – 74.
Ericson, R., and A. Pakes. “Markov-Perfect Industry Dynamics: A Framework for Empirical Work.”
Review of Economic Studies, 62(1), 1995, 53 – 82.
Federal Reserve Bank of Kansas City. Economic Policy for the Information Economy. A Symposium Sponsored by the Federal Reserve Bank of Kansas City. Kansas City: Federal Reserve Bank
of Kansas City, 2001.
Freeman, J., G. R. Carroll, and M. T. Hannan. “The Liability of Newness: Age Dependence in
Organizational Death Rates.” American Sociological Review, 48(5), 1983, 692 – 710.
Goeddel, D. V., and H. L. Heyneker. “Method of Constructing a Replicable Cloning Vehicle
Having Quasi-Synthetic Genes.” U.S. Patent number 4,342,832, granted August 3, 1982.
Goeddel, D. V., D. G. Kleid, F. Bolivar, H. L. Heyneker, D. G. Yansura, R. Crea, T. Hirose,
A. Kraszewski, K. Itakura, and A. D. Riggs. “Expression in Escherichia coli of Chemically Synthesized Genes for Human Insulin.” Proceedings of the National Academy of Sciences, 76(1), 1979,
106 –10.
Greenspan, A. “Technology and the Economy.” Remarks before the Economic Club of New York.
New York, NY, January 13, 2000a.
———. “Technological Innovation and the Economy.” Remarks before the White House Conference on the New Economy. Washington, DC, April 5, 2000b.
———. “Economic Developments.” Remarks before the Economic Club of New York. New York,
NY, May 24, 2001.
Griliches, Z. “Hybrid Corn: An Exploration in the Economics of Technological Change.” Econometrica, 25(4), 1957, 501– 22.
———. “Hybrid Corn and the Economics of Innovation.” Science, 132(3422), 1960, 275 – 80.
Griliches, Z., and J. Schmookler. “Inventing and Maximizing.” American Economic Review, 53(4),
1963, 725 – 29.
Haltiwanger, J. C. “Measuring and Analyzing Aggregate Fluctuations: The Importance of Building from Microeconomic Evidence.” Federal Reserve Bank of St. Louis Review, 79(3), 1997,
55 – 77.
Harberger, A. C. “A Vision of the Growth Process.” American Economic Review, 88(1), 1998, 1– 32.
Head, K., J. Ries, and D. Swenson. “Agglomeration Benefits and Location Choice: Evidence from
Japanese Manufacturing Investments in the United States.” Journal of International Economics,
38(3 – 4), 1995, 223 – 47.
Henderson, R. “Underinvestment and Incompetence as Responses to Radical Innovation: Evidence from the Photolithographic Alignment Industry.” RAND Journal of Economics, 24(2), 1993,
248 – 70.
Henderson, R., A. B. Jaffe, and M. Trajtenberg. “Universities as a Source of Commercial Technology: A Detailed Analysis of University Patenting 1965 –1988.” Review of Economics and Statistics, 80(1), 1998, 119 – 27.
Jaffe, A. B. “Technological Opportunity and Spillovers of R&D; Evidence from Firms’ Patents,
Profits, and Market Value.” American Economic Review, 76(5), 1986, 984 –1001.
———. “Characterizing the ‘Technological Position’ of Firms, with Application to Quantifying
Technological Opportunity and Research Spillovers.” Research Policy, 18(l), 1989, 87– 97.
Jaffe, A. B., M. Trajtenberg, and R. Henderson. “Geographic Localization of Knowledge Spillovers as Evidenced by Patent Citations.” Quarterly Journal of Economics, 63(3), 1993, 577– 98.
Japan Times. “Professor Guilty in Drugs Bribes Case.” Japan Times International Edition, April
18, 1999.
Jensen, R., and M. Thursby. “Proofs and Prototypes for Sale: The Tale of University Licensing.”
American Economic Review, 91(1), 2001, 240 – 59.
Jones, B. D., and A. Vedlitz. “Higher Education Policies and Economic Growth in the American
States.” Economic Development Quarterly, 2(1), 1988, 78 – 87.
Jovanovic, B. “Selection and the Evolution of Industry.” Econometrica, 50(3), 1982, 649 – 70.
Jovanovic, B., and G. MacDonald. “The Life-Cycle of a Competitive Industry.” Journal of Political
Economy, 102(2), 1994, 322 – 47.
Klepper, S. “Entry, Exit, Growth, and Innovation over the Product Life Cycle.” American Economic
Review, 86(3), 1996, 562 – 83.
Klepper, S., and J. H. Miller. “Entry, Exit, and Shakeouts in the United States in New Manufactured
Products.” International Journal of Industrial Organization, 13(4), 1995, 567– 91.
Knight, F. H. “Diminishing Returns from Investment.” Journal of Political Economy, 52(1), 1944, 26–47.
Lamoreaux, N. R., and K. L. Sokoloff. “Inventive Activity and the Market for Technology in the
United States, 1840 –1920.” Paper presented at the Second Annual Roundtable for Engineering
Entrepreneurship Research, Atlanta, Georgia, March 21– 23, 2002.

40

Michael R. Darby and Lynne G. Zucker

Landefeld, J. S., and B. M. Fraumeni. “Measuring the New Economy.” Survey of Current Business, 81(3), 2001, 23 – 40.
Lerner, A. P. “On the Marginal Product of Capital and the Marginal Efficiency of Investment.” Journal of Political Economy, 61(1), 1953, 1–14.
Lippman, S. A., and R. P. Rumelt. “Uncertain Imitability: An Analysis of Interfirm Differences in
Efficiency under Competition.” Bell Journal of Economics, 13(2), 1982, 418 – 38.
Mansfield, E. “Technical Change and the Rate of Imitation.” Econometrica, 29(4), 1961, 741– 66.
———. “Academic Research Underlying Industrial Innovations: Sources, Characteristics, and
Financing.” Review of Economics and Statistics, 77(1), 1995, 55 – 65.
Marrett, C. B. “Influences on the Rise of New Organizations: The Formation of Women’s Medical
Societies.” Administrative Science Quarterly, 25(1), 1980, 185 – 99.
McCarthy, J. D., M. Wolfson, D. P. Baker, and E. Mosakowski. “The Founding of Social Movement
Organizations,” in Ecological Models of Organizations, edited by G. R. Carroll. Cambridge:
Ballinger, 1988, 71– 82.
McKelvey, B. Organizational Systematics: Taxonomy, Evolution, Classification. Berkeley: University of California Press, 1982.
McKelvey, B., and H. Aldrich. “Populations, Natural Selection, and Applied Organizational Science.” Administrative Science Quarterly, 28(1), 1983, 101– 28.
Miner, A. S., T. L. Amburgey, and T. M. Stearns. “Interorganizational Linkages and Population
Dynamics: Buffering and Transformational Shields.” Administrative Science Quarterly, 35(4),
1990, 689 – 713.
Nelson, R. R. “The Economics of Invention: A Survey of the Literature.” Journal of Business, 32(2),
1959, 101– 27.
Pakes, A., and R. Ericson. “Empirical Implications of Alternative Models of Firm Dynamics.” Journal of Economic Theory, 79(1), 1998, 1– 46.
Romanelli, E. “The Evolution of New Organizational Forms.” Annual Review of Sociology, 17,
1991, 79 –103.
Romer, P. M. “Endogenous Technological Change.” Journal of Political Economy, 98(5), part 2,
suppl, 1990, S71– S102.
Rudebusch, G. D. “How Fast Can the New Economy Grow?” FRBSF Economic Letter, number
2000 – 05, February 25, 2000.
Schumpeter, J. A. The Theory of Economic Development, translated by R. Opie. Cambridge, MA:
Harvard University Press, 1934.

Growing by Leaps and Inches

41

Singh, J. V., and C. J. Lumsden. “Theory and Research in Organizational Ecology.” Annual
Review of Sociology, 16, 1990, 161– 95.
Sleeper, S. D. “The Role of Firm Capabilities in the Evolution of the Laser Industry: The Making of
a High-Tech Market.” Ph.D. dissertation, Carnegie Mellon University, 1998.
Smilor, R. W., G. Kozmetsky, and D. V. Gibson. Creating the Technopolis: Linking Technology,
Commercialization, and Economic Development. Cambridge: Ballinger, 1988.
Stigler, G. J. Employment and Compensation in Education. New York: National Bureau of Economic Research, 1950.
———. “The Economics of Information.” Journal of Political Economy, 69(3), 1961, 213 – 25.
Stinchcombe, A. L. “Organizations and Social Structure,” in Handbook of Organizations, edited
by J. G. March. Chicago: Rand McNally, 1965, 142 – 93.
———. “University Administration of Research Space and Teaching Loads: Managers Who Do
Not Know What Their Workers Are Doing,” in Information and Organizations, edited by A. L.
Stinchcombe. Berkeley: University of California Press, 1990, 312 – 40.
Thursby, J. G., and M. Thursby. “Who Is Selling the Ivory Tower? Sources of Growth in University
Licensing.” Management Science, 48(1), 2002, 90 –104.
Torero, M. “Analyzing the Spillover Mechanism on the Semiconductor Industry in the Silicon Valley and Route 128,” in Essays on Diffusion of Technical Change, edited by M. Torero. PhD diss.
UCLA Economics Department, 1998, 6 – 48.
Torero, M., M. R. Darby, and L. G. Zucker. “The Importance of Intellectual Human Capital in the
Birth of the Semiconductor Industry.” UCLA Working Paper, January 2001.
Tushman, M. L., and P. Anderson. “Technological Discontinuities and Organizational Environments.” Administrative Science Quarterly, 31(1), 1986, 439 – 65.
Wedervang, F. Development of a Population of Industrial Firms: The Structure of Manufacturing
Industries in Norway, 1930 –1948. Oslo: Universitetsforlaget, 1965.
Zucker, L. G. “Combining Institutional Theory and Population Ecology: No Legitimacy, No History.”
American Sociological Review, 54(4), 1989, 542 – 45.
Zucker, L. G., and M. R. Darby. “Virtuous Circles of Productivity: Star Bioscientists and the Institutional Transformation of Industry.” National Bureau of Economic Research Working Paper No.
5342, November 1995.
———. “Star Scientists and Institutional Transformation: Patterns of Invention and Innovation in
the Formation of the Biotechnology Industry.” Proceedings of the National Academy of Sciences,
93(23), 1996, 709 –16.

42

Michael R. Darby and Lynne G. Zucker

———. “Present at the Biotechnological Revolution: Transformation of Technical Identity for a
Large Incumbent Pharmaceutical Firm.” Research Policy, 26(4, 5), 1997, 429 – 46.
———. “Capturing Technological Opportunity via Japan’s Star Scientists: Evidence from Japanese Firms’ Biotech Patents and Products.” Journal of Technology Transfer, 26(1/2), 2001, 37– 58.
Zucker, L. G., M. R. Darby, and J. Armstrong. “Geographically Localized Knowledge: Spillovers
or Markets?” Economic Inquiry, 36(1), 1998a, 65 – 86.
———. “Commercializing Knowledge: University Science, Venture Capital, and Firm Performance in Biotechnology.” Management Science, 48(1), 2002, 138 – 53.
Zucker, L. G., M. R. Darby, and M. B. Brewer. “Intellectual Human Capital and the Birth of U.S.
Biotechnology Enterprises.” American Economic Review, 88(1), 1998b, 290 – 306.
Zucker, L. G., M. R. Darby, and Y. Peng. “Fundamentals or Population Dynamics and the Geographic Distribution of U.S. Biotechnology Enterprises, 1976 –1989.” National Bureau of Economic
Research Working Paper No. 6414, February 1998c.
Zucker, L. G., M. R. Darby, and M. Torero. “Labor Mobility from Academe to Commerce.” Journal
of Labor Economics, 2002, in press.

The Benefits to Society of New Drugs: A Survey of the Econometric Evidence

Frank R. Lichtenberg

Many economists believe that “new goods are at the heart of economic
progress” (Bresnahan and Gordon 1997) and that “innovative goods are
better than older products simply because they provide more ‘product
services’ in relation to their cost of production” (Grossman and Helpman 1991).
An industry whose propensity to generate new goods is among the highest is
the pharmaceutical industry. It is one of the most R&D-intensive industries in
the economy. Moreover, due in part to extensive FDA regulation, we have
unusually good data about the launch and diffusion of new pharmaceutical
goods. I have used these data to perform a number of econometric studies to
assess the health and economic impacts of new drug development and use.
I hypothesize that people may obtain several kinds of benefits from using
newer, as opposed to older, pharmaceutical products: longer life, improved
quality of life, and reduced total medical expenditure. My studies have been
designed to estimate the magnitude and value of these benefits and compare
them with the cost of using newer drugs.
I have used a number of complementary approaches and data sources to
address these issues. One study uses aggregate U.S. data to determine the contribution of new drug approvals to longevity increase. Several others use disease-level data that I constructed to evaluate the effect of pharmaceutical innovation
on hospitalization rates and quality of life indicators (activity limitations, disability
days). And several other studies have used individual-level data — or even data
below the individual level.
Virtually all of my research is based on large, publicly available data sets,
most of which were produced by federal agencies. These include the Vital Statistics — Mortality Detail files, the National Ambulatory Medical Care Survey, the
National Hospital Discharge Survey, the National Health Interview Survey, the
Medical Expenditure Panel Survey (MEPS), and unpublished FDA data obtained
via the Freedom of Information Act. The mortality data are based on a complete
census of deaths in the United States, and most other data sets are based on
large, representative samples of health care providers and households.
My studies are based on data covering all medical conditions (diseases)
and all drugs. They therefore provide evidence about the health and economic
impacts of new drugs in general, not the impacts of specific drugs or on particular diseases. While the methods I use could, in principle, be applied to specific drug classes or diseases, the number of observations about particular drugs
and diseases in publicly available data sets is generally too small to obtain statistically reliable results.
In the first section of this article I will describe some of my research about
the impact of new drugs on longevity. In the next section, I will discuss quality-of-life effects, and in the third section, I will review my findings concerning
the impact of new drugs on medical expenditure.
LONGEVITY
Between 1960 and 1997, life expectancy at birth increased approximately
10 percent, from 69.7 to 76.5 years. Nordhaus (2003) estimates that the value of life
extension during this period nearly equaled the gains in tangible consumption.1
While life expectancy has tended to increase since 1960, as Figure 1 indicates, there have been substantial fluctuations in the rate of increase.

Figure 1
Life Expectancy at Birth, 1960–97: Trend and Fluctuations

Life
expectancy increased at an average annual rate of 0.25 percent; it increased
more than 0.70 percent in 1961, 1974, and 1975, and declined more than 0.25
percent in 1963, 1968, 1980, and 1993. Measurement error is unlikely to account
for much of the fluctuation in life expectancy. As noted in Anderson (1999, 34),
“The annual life tables are based on a complete count of all reported deaths,”
and there are about 2 million deaths per year. Growth in real per capita income
(GDP) also doesn’t account for these fluctuations. The period in which life
expectancy increased most rapidly (1973–75) was a period of dismal macroeconomic performance.
There is, however, a highly statistically significant relationship between the
number of new molecular entities (NMEs) approved by the FDA and longevity
increase: The periods during which the most new drugs have been approved by the
FDA tend to be the periods in which longevity grew most rapidly.2 This is consistent
with the hypothesis that the greater the number of drugs that are available to physicians and consumers, the higher longevity will be.3 The number of drugs available
in a given year is not simply equal to the sum of the number of drugs approved
in all previous years, since the introduction of new drugs may render older drugs
obsolete. I estimate that the obsolescence rate of drugs is about 5 percent per year.
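The stock-with-obsolescence idea in this paragraph can be sketched as a perpetual-inventory recursion. The 5 percent rate is the article's estimate; the approval counts below are made-up illustration values, not FDA data.

```python
# Effective stock of drugs under obsolescence: each year the existing stock
# depreciates by 5 percent (the article's estimated obsolescence rate) and
# that year's new approvals are added.

OBSOLESCENCE = 0.05  # article's estimate: ~5 percent per year

def effective_drug_stock(approvals_by_year):
    """Perpetual-inventory cumulation of approvals with 5% annual decay."""
    stock = 0.0
    path = []
    for approvals in approvals_by_year:
        stock = (1 - OBSOLESCENCE) * stock + approvals
        path.append(stock)
    return path

# With a constant 20 approvals per year (illustrative), the stock converges
# toward 20 / 0.05 = 400 rather than growing without bound.
print(effective_drug_stock([20] * 5))
```

In the long run a constant flow of approvals yields a stationary stock equal to the flow divided by the obsolescence rate, which is why the simple sum of all past approvals overstates the number of drugs actually available.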
Figure 2 displays the relationship between life expectancy and the number of new drug approvals, holding constant life expectancy in the previous
year, which, on theoretical grounds, it is appropriate to do.

Figure 2
New Drug Approvals and Life Expectancy at Birth, 1961–97

The estimates indicate that the average new drug approval increases the life expectancy of people born in the year that the drug is approved by 0.016 years (5.8 days). This
may sound insignificant, but since there are approximately 4 million births per
year in the United States, the average new drug approval increases the total
expected life-years of the cohort by 63.7 thousand years (4 million births times
.016 years/birth). New drug approvals in a given year also increase the life expectancy of people born in future years, but by a smaller amount (due to obsolescence of drugs).4 I estimate that the average new drug approval increases the
total expected life-years of current and future cohorts by 1.2 million. In other
words, current and future generations will live a total of 1.2 million life-years
longer due to the average new drug approval. The cost to the pharmaceutical
industry of bringing a new drug to market is often estimated to be about $500
million. Hence cost per life-year gained is $424 ($500 million/1.2 million life-years). According to Murphy and Topel (2003), this is a small fraction of the economic value of a life-year, which they estimate to be on the order of $150,000.
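The arithmetic in this paragraph can be reproduced as a back-of-the-envelope sketch. All inputs are the article's figures; the small gap between the computed cost per life-year and the published $424 presumably reflects rounding in the underlying estimates.

```python
# Back-of-the-envelope version of the life-years calculation in the text.
BIRTHS_PER_YEAR = 4_000_000   # approximate annual U.S. births
GAIN_PER_BIRTH = 0.016        # years of life expectancy per new approval
OBSOLESCENCE = 0.05           # annual obsolescence rate of drugs
COST_PER_DRUG = 500_000_000   # often-cited cost of bringing a drug to market

# Gain for the cohort born in the approval year (~64,000 life-years; the
# article reports 63.7 thousand, presumably from unrounded estimates):
current_cohort = BIRTHS_PER_YEAR * GAIN_PER_BIRTH

# Future cohorts gain geometrically less as the drug becomes obsolete:
# sum over t >= 0 of (1 - OBSOLESCENCE)**t  =  1 / OBSOLESCENCE
all_cohorts = current_cohort / OBSOLESCENCE   # ~1.28 million life-years

cost_per_life_year = COST_PER_DRUG / all_cohorts   # roughly $390
print(round(current_cohort), round(all_cohorts), round(cost_per_life_year))
```

Either way, the cost per life-year gained is two to three orders of magnitude below the $150,000 value of a life-year cited from Murphy and Topel.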
Increased longevity, while desirable for its own sake, may also have positive
implications for medical expenditure. A recent National Academy of Sciences
study showed that costs in the final two years of life were lower for people who
lived longer. “The older you are when you die, the less expensive the last two
years are,” said the study’s principal author, Kenneth G. Manton, director of the
Center for Demographic Studies at Duke University.5

Figure 3

Average Annual Number of Drugs Brought to Market

Case Studies of Orphan Diseases and HIV
During the last two decades, there have been large, sudden increases in
the number of drugs available to treat two kinds of diseases: “orphan” (rare) diseases and human immunodeficiency virus (HIV). As Figure 3 indicates, the average annual number of drugs for rare diseases brought to market during 1983 – 99
was twelve times as great as it was during 1973 – 82,6 and the average annual
number of HIV drugs brought to market during 1994 – 98 was three times as
great as it was during 1987– 93.
These increases occurred for different reasons and under different circumstances. The increase in drugs for rare diseases occurred because Congress
passed the Orphan Drug Act in January 1983. The increase in drugs for HIV
occurred because AIDS was first reported in 1981, was identified as being
caused by HIV in 1984,7 and (in the 1990s) the average length of time required
to develop a drug was about 15 years.8
Both increases provide a good opportunity to investigate the effect of
pharmaceutical innovation on mortality. In Lichtenberg (2001a, 2003a), I investigated the effect of increases in the number of drugs available to treat these diseases on mortality associated with them.
Before the Orphan Drug Act went into effect (between 1979 and 1984),
mortality from rare diseases grew at the same rate as mortality from other diseases (Figure 4). In contrast, during the next five years, mortality from rare diseases grew more slowly than mortality from other diseases.

Figure 4
Number of Deaths from Rare Diseases and Other Diseases, 1979–98

I estimated that one
additional orphan drug approval in year t prevents 211 deaths in year t + 1 and
ultimately prevents 499 deaths, and that about 108,000 deaths from rare diseases
will ultimately be prevented by all of the 216 orphan drugs that have been
approved since 1983.
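A quick arithmetic check of the orphan-drug total, using only the numbers reported above:

```python
# Deaths ultimately prevented by all orphan drugs approved since 1983.
deaths_per_approval = 499   # ultimate effect of one orphan drug approval
orphan_approvals = 216      # orphan drugs approved since 1983

total_prevented = deaths_per_approval * orphan_approvals
print(total_prevented)  # 107784, which the article rounds to "about 108,000"
```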
Consistent with previous patient-level studies of HIV, I find that new drugs
played a key role in the post-1995 decline in HIV mortality (Figure 5 ). I estimate that one additional HIV drug approval in year t prevents about 6,000 HIV
deaths in year t + 1 and ultimately prevents about 34,000 HIV deaths. HIV drug
approvals have reduced mortality both directly and indirectly (via increased
drug consumption). HIV mortality depends on both the quality and the quantity of medications consumed, and new drug approvals have a sizeable impact
on drug consumption: One additional HIV drug approval in year t results in 1.2
million additional HIV drug units consumed in year t + 1 and ultimately results
in 3.6 million additional HIV drug units consumed.
As summarized in Figure 6, mortality from both diseases declined dramatically following increases in drug approvals.
Figure 5
HIV Drug Approvals and HIV Mortality Reduction

Figure 6
Average Annual Change in Number of Deaths

Effect of Increased Drug Use Associated with Medicare Eligibility

Most people become eligible for Medicare suddenly, the day they turn sixty-five. Although Medicare does not pay for most outpatient drugs, Medicare
subsidizes a service that people must use in order to obtain prescription drugs:
physician care. In Lichtenberg (2002), I show that utilization of ambulatory care
increases suddenly and significantly at age sixty-five, presumably due to Medicare eligibility. The evidence points to a structural change in the frequency of
physician visits precisely at age sixty-five.9 Attainment of age sixty-five marks
not only an upward shift but also the beginning of a rapid upward trend (up
until age seventy-five) of about 2.8 percent per year in annual physician visits
per capita.
The number of physician visits in which at least one drug is prescribed also
jumps up at age sixty-five (Figure 7 ). Data from the 1996 Medical Expenditure
Panel Survey indicate that people between the ages of sixty-six and seventy-five
consume about 66 percent more medicines per person than people between the
ages of fifty-six and sixty-five (Figure 8 ).
I examined whether this increase in utilization leads to an improvement in
outcomes — a reduction in mortality — relative to what one would expect given
the trends in outcomes prior to age sixty-five. The estimates were consistent
with the hypothesis that the Medicare-induced increase in health care utilization
leads to slower growth in the probability of death after age sixty-five (Figure 9 ).
Physician visits (which are highly correlated with prescription drug utilization —
physicians prescribe drugs in over 60 percent of office visits) are estimated to have a negative effect on the male death rate, conditional on age and the death rate in the previous year: A permanent or sustained 10 percent increase in the number of visits ultimately leads to a 5 percent reduction in the death rate.

Figure 7
Number of Physician Visits in Which at Least One Drug Was Prescribed, 1985 and 1989–98

Figure 8
Median Number of Prescriptions, by Age, 1996

Figure 9
Percentage Increase from Previous Year in Probability of Death: Men
Data on age-specific death probabilities every ten years back to 1900 — i.e., before as well as after Medicare was enacted — provide an alternative way to test for the effect of Medicare on longevity. They provide strong support for the hypothesis that Medicare increased the survival rate of the elderly, by about 13 percent.
QUALITY OF LIFE
In this section I present some new evidence about the impact of new
drugs on quality of life (“health”), as measured by ability to work, activity limitations, and disability days. The analysis was performed using a combination
of individual-level and medical-condition-level data. Most of the analysis is
based on samples of over 300,000 observations spanning more than a decade
(1985 – 96). I examined the following health indicators:
• Whether a person is limited in work, and a given condition is the main
or secondary cause of the limitation
• Whether a person is unable to work, and a given condition is the main
or secondary cause of the limitation

• Whether the person has activity limitation, and a given condition is the
main cause of the limitation
• The total number of restricted activity days in the last two weeks for a
given condition
• The number of work-loss days in the last two weeks for a given condition (for currently employed workers ages eighteen to sixty-nine)
I investigated the effect of drug vintage — defined as the year in which the
FDA first approved a drug — on health. In particular, I tested the hypothesis that
a person’s health is an increasing function of the (mean) vintage of the drugs
he consumes, ceteris paribus. If the hypothesis stated above is true, then the
average health of a group of people is an increasing function of the average vintage of the drugs they consume, and the change in average health is an increasing function of the change in average drug vintage.
The estimates indicate that changes in mean drug vintage have highly statistically and economically significant effects on activity limitations and disability days. The magnitudes of these effects can be illustrated by calculating the
costs and health benefits that a ten-year increase in mean drug vintage would
have. Suppose that the average FDA approval year of the drugs consumed by a
person in 1996 increased from 1970 to 1980. Newer drugs generally cost more
than older drugs, and this switch to newer drugs would increase the person’s
drug expenditure by 27 percent, or $71, on average. However, the estimates
imply that the switch to newer drugs would yield a number of benefits, whose
value would exceed the increase in drug cost:
For employed people:
• The mean number of work-loss days per person per year would decline
by 21.3 percent, or 1.02 days. Average daily employee compensation is
about $140, so the value of this reduction is about $143.
For all people:
• The mean number of restricted-activity days per person per year would
decline by 12.0 percent, or 1.74 days.10
• The probability of having an activity limitation would decline by 9 percent.
• The probability of being completely unable to work would decline by
10.8 percent. Average annual employee compensation is about $40,000,
so the value of this reduction could be as high as $300.
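For an employed person, the costs and benefits listed above can be tallied directly. All dollar figures are the article's; the gains accruing to all people (fewer restricted-activity days, lower probabilities of activity limitation and of being unable to work) are left out, so this understates total benefits.

```python
# Cost-benefit tally of a ten-year increase in mean drug vintage (1996 data).
extra_drug_cost = 71.0         # 27 percent higher drug spending, ~$71

work_loss_days_avoided = 1.02  # 21.3 percent fewer work-loss days
daily_compensation = 140.0     # average daily employee compensation
work_loss_benefit = work_loss_days_avoided * daily_compensation  # ~$143

net_gain = work_loss_benefit - extra_drug_cost  # ~$72 before other benefits
print(round(work_loss_benefit), round(net_gain))
```

Even counting only the work-loss-day benefit, the switch to newer drugs roughly pays for itself twice over.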
I am currently engaged in another study of the impact of drug vintage on
quality of life. This study, which is restricted to the Medicare population, is
based on the Medicare Current Beneficiary Survey for the years 1992 – 96, conducted by the Health Care Financing Administration. That survey contains a
number of questions concerning the ability of respondents to engage in various

activities of daily living (ADLs), such as walking two to three blocks, lifting ten
pounds, stooping/kneeling, reaching over head, and writing. I have examined
the relationship between the vintage of the drugs consumed by an individual
and his or her ADL limitations, controlling for the person’s age, sex, race, education, income, insurance status, total medical expenditure,11 medical history,
the therapeutic class of the drug, and other attributes. Preliminary findings indicate that Medicare beneficiaries consuming newer drugs have significantly fewer
ADL limitations than people consuming older drugs.
TOTAL MEDICAL EXPENDITURE
Case studies of a number of specific drugs have shown that these drugs
reduced the demand for hospital care. For example, according to the Boston
Consulting Group (1993), operations for peptic ulcers decreased from 97,000 in
1977, when H2 antagonists were introduced, to 19,000 in 1987; this is estimated
to have saved $224 million in annual medical costs. The Scandinavian Simvastatin Survival Study indicated that “giving the drug simvastatin to heart patients
reduced their hospital admissions by a third during five years of treatment. It
also reduced the number of days that they had to spend in the hospital when
they were admitted, and reduced the need for bypass surgery and angioplasty.”
But treatment with the $2/day pill that lowered cholesterol did not actually save
money: Hospital costs were $8 million lower among the 2,221 volunteers who
got the drug, but the medicine itself cost $11 million (The New York Times
1995a). On the other hand, the clot-dissolving drug TPA “costs $2,000 to administer to each stroke victim, but has the potential to save much more in long-term
care for those who are helped” (The New York Times 1995b).
Other case studies have indicated that government-imposed rationing of
pharmaceuticals led to increased use of hospital care. Soumerai et al. (1991)
analyzed the effect of limits imposed by the New Hampshire Medicaid program
on the number of reimbursable medications that a patient can receive on rates
of admission to nursing homes and hospitals. Imposition of the reimbursement
cap resulted in an approximate doubling of the rate of nursing home admissions
among chronically ill elderly patients.
While these studies are valuable, the extent to which their findings apply
to pharmaceutical use in general is unclear. Moreover, these studies have yielded
mixed results about (or have not addressed) the issue of whether the reduction
in hospital cost was outweighed by the increase in pharmaceutical cost. I have
performed several studies to assess the impact of pharmaceutical use in general
on the demand for inpatient hospital care and overall medical expenditure.
My first study on this issue was based on disease-level data: I constructed
a database containing information about utilization of pharmaceuticals, ambulatory care and hospital care, by disease, at two points in time (1980 and 1991 or

1992). I controlled for the presence of “fixed (diagnosis) effects” by analyzing
relationships among growth rates of the variables. The main findings of this
study were as follows:
• The number of hospital bed-days declined most rapidly for those diagnoses with the greatest increase in the total number of drugs prescribed
and the greatest change in the distribution of drugs.
• An increase of 100 prescriptions is associated with 16.3 fewer hospital
days.
• A $1 increase in pharmaceutical expenditure is associated with a $3.65
reduction in hospital care expenditure (ignoring any indirect cost of
hospitalization), but it may also be associated with a $1.54 increase in
expenditure on ambulatory care.
• Diagnoses subject to higher rates of surgical innovation exhibited larger
increases (or smaller declines) in hospitalization.
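Taken together, the second and third bullets imply a net saving per dollar of additional pharmaceutical spending. A sketch, ignoring (as the article does) any indirect cost of hospitalization:

```python
# Net expenditure effect of $1 of additional pharmaceutical spending,
# using the disease-level estimates above.
drug_cost = 1.00
hospital_saving = 3.65        # associated reduction in hospital expenditure
ambulatory_increase = 1.54    # possible increase in ambulatory expenditure

net_saving = hospital_saving - ambulatory_increase - drug_cost
print(round(net_saving, 2))   # ~$1.11 net reduction per extra drug dollar
```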
My second study on this issue was based on individual-level data, most
of which were obtained from the 1996 Medical Expenditure Panel Survey, a
nationally representative survey of health care use and expenditures for the U.S.
civilian noninstitutionalized population. This survey collected extremely detailed
data from 23,230 people on use and expenditures for office and hospital-based
care, home health care, and prescribed medicines. MEPS contains data at three
different levels of aggregation: the person level, the condition level (77,000 conditions), and the event level. A person may have several conditions (e.g., hypertension, diabetes, and glaucoma); a given condition may be associated with a
number of events.
The unit of observation in my analysis was a prescribed-medicine event.
I had data on over 171,000 prescriptions. Over 90 percent of the prescriptions
are linked to exactly one medical condition, and the 1996 Medical Conditions
file contains summary information about these medical conditions, including the
number of hospital events, emergency room events, outpatient events, office-based events, dental events, and home health events associated with the condition. Expenditure (and charges) associated with each condition, by event type,
can be computed from the records contained in the respective medical event
files. For example, one can compute total hospital expenditure associated with
individual x’s hypertension. In addition to calculating expenditure, by event
type, we calculated total nondrug expenditure — i.e., the sum of expenditures
on the six event types listed above. The MEPS data enable us to control for
many important attributes, including sex, age, education, race, income, insurance status (whether the person is covered by private insurance, Medicare, or
Medicaid), who paid for the drug, the condition for which the drug was prescribed, how long the person has had the condition, and the number of medical conditions reported by the person.

By controlling for condition, we are in effect comparing individuals only
with other individuals with the same condition. We do not control for drug class,
however, since we do not want to rule out comparisons between people consuming drugs in one class (e.g., SSRI antidepressants) and people consuming
drugs in another class (e.g., tricyclic antidepressants) for the same condition.
My objective was to determine the effect of drug age — the number of
years since the FDA first approved the drug’s active ingredient(s) — on outcomes
and expenditure, controlling, in a very nonrestrictive fashion, for all of the factors cited above. But in addition to those observed individual differences, there may
be other, unmeasured determinants, such as the physician’s “practice style”:
Physicians prescribing older drugs might be less well trained, less likely to keep
up with advances in medicine, and more likely to practice in substandard facilities. Fortunately, the fact that many individuals in the sample have both multiple medical conditions and multiple prescriptions means that we can control for
all individual characteristics — both observed and unobserved — by pursuing a
second approach. This involved estimating a model that includes “individual
effects.”
Table 1 shows the number of 1996 MEPS events, by type, and their associated average expenditures. Figure 10 depicts the frequency distribution of MEPS
prescriptions, by the date the active ingredient was first approved by the FDA.
About one-quarter of prescriptions consumed in 1996 were for drugs approved
before 1950; more than half of the drugs consumed were approved before 1980.
First I analyzed the relationship between the age of the drug and the
amount paid for the prescription. Not surprisingly, I found that new drugs are,
on average, more expensive than old drugs prescribed for the same condition.
For example, if a fifteen-year-old drug were replaced by a 5.5-year-old drug, the
cost of the prescription would increase by about $18.
Then I examined the relationship between the age of the drug and the
number and cost of nondrug medical events associated with the condition.

Table 1
Frequency of and Expenditure on MEPS Events

Hospital
stays are the most important of these, since they account for almost 42 percent
of total medical expenditure. The estimates revealed that people consuming
newer drugs had significantly fewer hospital stays than people consuming older
drugs. Replacing an older prescription with a newer drug as in the previous
example would reduce the expected number of hospital stays by 0.0059 — i.e.,
about six fewer stays per thousand prescriptions. Since the average expenditure
on a hospital stay in MEPS is $7,588, one might expect a reduction in hospital
expenditure of $44 (0.0059 × $7,588), compared with an increase in drug cost
of $18. However, the reduction in hospital expenditure from the use of newer
drugs is even larger than this — $56 — because newer drugs are associated with
shorter, as well as fewer, hospital stays.
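The cost offset described above reduces to simple arithmetic; the following sketch reproduces it using only the figures quoted in the text:

```python
# Back-of-the-envelope check of the drug-age cost offset,
# using only the figures quoted in the text.

rx_cost_increase = 18.00    # added cost of the newer prescription ($)
stays_avoided = 0.0059      # reduction in expected hospital stays per prescription
cost_per_stay = 7588.00     # average MEPS expenditure per hospital stay ($)

# Expected saving from fewer stays alone
saving_fewer_stays = stays_avoided * cost_per_stay   # about $44.77

# The estimated saving is larger ($56) because newer drugs also
# shorten the stays that still occur.
saving_incl_shorter_stays = 56.00

net_saving = saving_incl_shorter_stays - rx_cost_increase

print(round(saving_fewer_stays, 2))   # → 44.77
print(round(net_saving, 2))           # → 38.0
```

The net saving from inpatient care alone already exceeds the added prescription cost, before counting the reductions in other nondrug expenditure.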
The estimates indicate that reductions in drug age tend to reduce all types
of nondrug medical expenditure, although the reduction in inpatient expenditure is by far the largest. This reduction of $71.09 in nondrug expenditure is
much greater than the increase in prescription cost ($18), so reducing the age
of the drug results in a substantial net reduction in the total cost of treating the
condition.
I estimated the nondrug medical expenditure model separately for those
under and over sixty-five years of age. Nondrug medical expenditure is positively
related to drug age for both groups, and drug age appears to have similar effects,
in percentage terms, on nondrug expenditures of the elderly and the nonelderly.
Figure 10
Frequency Distribution of MEPS Prescriptions, by Date Active Ingredient
Was Approved by the FDA

Figure 11
Mean Age (in Years) of Drugs Consumed in 1996

It is sometimes suggested that because generic drugs tend to be less
expensive than branded drugs, allowing people to use only generic drugs might
be an effective means of reducing health expenditure. As Figure 11 shows,
generic drugs tend to be much older than branded drugs. Suppose that instead
of consuming the actual mix of 60 percent branded and 40 percent generic
drugs, people had to consume only generic drugs. This would increase the
mean age of drugs consumed by 31 percent, from twenty-nine years to thirty-eight years. My estimates indicate that denying people access to branded drugs
would increase total treatment costs, not reduce them, and would lead to worse
outcomes.
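The 31 percent figure, and the mean age of branded drugs it implies, can be verified in a few lines (the branded-drug derivation assumes the all-generic mean equals the mean age of generics in the current mix, which is the scenario the text describes):

```python
# Check of the all-generic scenario quoted in the text.

mean_age_actual = 29.0       # mean drug age under the actual 60/40 branded/generic mix (years)
mean_age_all_generic = 38.0  # mean drug age if only generics were consumed (years)

pct_increase = (mean_age_all_generic - mean_age_actual) / mean_age_actual * 100
print(round(pct_increase))   # → 31

# Implied mean age of branded drugs, assuming the all-generic mean equals
# the mean age of generics in the current mix:
branded_share, generic_share = 0.60, 0.40
mean_age_branded = (mean_age_actual - generic_share * mean_age_all_generic) / branded_share
print(round(mean_age_branded, 1))   # → 23.0
```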
Drug costs (and changes in drug costs) are visible to the naked eye; identification of drug benefits requires careful analysis of good data. People making
drug policy decisions need to consider the full range of effects, not just the
costs, of newer drugs.
NOTES
1. Nordhaus (2003), along with Murphy and Topel (2003), offers parallel estimates of the value of recent increases in longevity. To the casual observer, it hardly seems possible — and may seem morally offensive — to put a dollar value on human life. But modern economics has devised a credible way around these imponderables, inferring the value people put on life from what they must be “bribed” in everyday settings to incur small but predictable increases in the risk of death. Let’s say that moving from a factory line to outdoor construction increases a worker’s chance of a fatal accident by one in 10,000 each year. In other words, if 10,000 workers made the shift, expected on-the-job fatalities would rise by one per year. Suppose further that to induce 10,000 workers to play this death lottery voluntarily, an employer would have to pay an extra $500 annually to each worker, for a total of $5 million. One of these new construction workers is likely to die in return for the group gaining $5 million. Thus the value of one life in this example is said to be $5 million.
Estimates from the dozen or so work-related studies since the mid-1970s put the value of a statistical life in the relatively narrow $3 million to $7 million range. Using the relatively conservative estimate of $3 million for the average value of avoiding one death to calculate the value of extending life, Nordhaus estimates that in the 1975–95 period, the value of life extension nearly equaled the gains in tangible consumption.
2. The rate of introduction of new drugs fluctuates considerably from year to year. Part of this is due to the inherent randomness of the drug development and approval process. But major changes in government policy have also clearly influenced the number of new drugs approved.
3. Analysis of individual-level data (Lichtenberg 2001b) also indicates that people consuming new drugs are significantly less likely to die within a given period than people consuming older drugs.
4. It takes about 18.5 years for half of the longevity effect of a new drug approval to occur.
5. “Decrease in Chronic Illness Bodes Well for Medicare Costs,” New York Times, May 8, 2001.
6. “More than 200 drugs and biological products for rare diseases have been brought to market since 1983. In contrast, the decade prior to 1983 saw fewer than ten such products come to market.” (Source: www.fda.gov/orphan/History.htm.)
7. www.fda.gov/oashi/aids/miles81.html.
8. DiMasi, J. A., “New Drug Development: Cost, Risk, and Complexity,” Drug Information Journal, May 1995, cited in PhRMA Industry Profile 2000, Chapter 2, www.phrma.org/publications/publications/profile00/index.phtml.
9. Reaching age sixty-five has a strong positive impact on the consumption of hospital services, but most of this impact appears to be the result of postponement of hospitalization in the prior two years.
10. My study of the impact of Medicare, described in the previous section, indicated that average bed-days are lower after age sixty-five than one would expect from the pre-sixty-five trend. Increased use of drugs after age sixty-five may contribute to this.
11. Total medical expenditure can serve as an indicator of the person’s (pretreatment) medical condition or severity.

REFERENCES
Anderson, Robert (1999), “United States Life Tables, 1997,” in National Vital Statistics Reports,
vol. 47, no. 28 (Hyattsville, Md.: National Center for Health Statistics), Dec. 13.
The Boston Consulting Group, Inc. (1993), “The Contribution of Pharmaceutical Companies:
What’s at Stake for America, Executive Summary,” September.

Bresnahan, Timothy, and Robert J. Gordon, eds. (1997), The Economics of New Goods (Chicago:
University of Chicago Press).
Grossman, Gene, and Elhanan Helpman (1991), Innovation and Growth in the Global Economy
(Cambridge, Mass.: MIT Press).
Lichtenberg, Frank R. (1996), “Do (More and Better) Drugs Keep People Out of Hospitals?”
American Economic Review 86 (May): 384–8.
——— (2000a), “The Effect of Pharmaceutical Utilisation and Innovation on Hospitalisation and
Mortality,” in Productivity, Technology, and Economic Growth, ed. B. van Ark, S. K. Kuipers, and
G. Kuper (Boston: Kluwer Academic Publishers).
——— (2000b), “Sources of U.S. Longevity Increase, 1960–1997,” CESifo Working Paper Series
405, CESifo, Munich.
——— (2001a), “The Effect of New Drugs on Mortality from Rare Diseases and HIV,” National
Bureau of Economic Research Working Paper no. 8677 (December).
——— (2001b), “Are the Benefits of Newer Drugs Worth Their Cost? Evidence from the 1996
MEPS,” Health Affairs 20 (5) (September/October): 241–51.
——— (2002), “The Effects of Medicare on Health Care Utilization and Outcomes,” in Frontiers in
Health Policy Research, vol. 5, ed. Alan Garber (Cambridge, Mass.: MIT Press).
——— (2003a), “The Effect of New Drugs on HIV Mortality in the U.S., 1987–1998,” Economics
and Human Biology 1: 259–66.
——— (2003b), “Pharmaceutical Innovation, Mortality Reduction, and Economic Growth,” in
Measuring the Gains from Medical Research: An Economic Approach, eds. Kevin M. Murphy and
Robert H. Topel (Chicago: University of Chicago Press).
Murphy, Kevin M., and Robert H. Topel (2003), “The Economic Value of Medical Research,” in
Measuring the Gains from Medical Research: An Economic Approach, eds. Kevin M. Murphy and
Robert H. Topel (Chicago: University of Chicago Press).
The New York Times (1995a), “Cholesterol Pill Linked to Lower Hospital Bills,” March 27, A11.
The New York Times (1995b), “New Study Finds Treatment Helps Stroke Patients,” Dec. 14, A1.
Nordhaus, William D. (2003), “The Health of Nations: The Contribution of Improved Health
to Living Standards,” in Measuring the Gains from Medical Research: An Economic Approach,
eds. Kevin M. Murphy and Robert H. Topel (Chicago: University of Chicago Press).
Soumerai, S. B., D. Ross-Degnan, J. Avorn, T. J. McLaughlin, and I. Choodnovskiy (1991),
“Effects of Medicaid Drug-Payment Limits on Admission to Hospitals and Nursing Homes,” The
New England Journal of Medicine 325 (15) (Oct. 10): 1072–7.

PART TWO

The Interdisciplinary
Nature of Biotech Research
Harnessing New Technologies
for the 21st Century
Malcolm Gillis

The Convergence of Disruptive Technologies
Enabling a New Industrial Approach
to Health Products
C. Thomas Caskey

Harnessing New Technologies
for the 21st Century
Malcolm Gillis

The 20th century was a very good one for economic growth in the United States:
Real GNP in 1999 was more than twenty times that of 1900. Replicating this
performance — much less improving upon it — in the 21st century will not
be easy. Science-based industry played a crucial part in 20th century economic
expansion. This was most obviously so in the case of the chemical industry, the
first major science-based industry to arise in the United States. Development of
that industry helped accelerate growth in dozens of others, including oil and gas
refining, pulp and paper, textiles, building materials, and, of course, pharmaceuticals.
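To put “more than twenty times” in perspective, the implied average annual growth rate over those ninety-nine years can be computed (a rough sketch that takes the twentyfold figure at face value):

```python
import math

# Implied average annual growth rate if real GNP grew twentyfold
# between 1900 and 1999 (99 years), as the text states.
growth_factor = 20.0
years = 1999 - 1900

annual_rate = growth_factor ** (1 / years) - 1
print(f"{annual_rate:.2%}")   # → 3.07%
```

Sustaining roughly 3 percent real growth per year for an entire century is the benchmark the next paragraph says will be hard to match.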
If 21st century growth rates are to approach those of the past century, new
science-based industries will have to play roles comparable to that of the chemical
industry after 1900. Some of these are already appearing on the scene as infant
industries. David Baltimore, Nobel laureate, now president of the California Institute
of Technology, correctly asserts that biotechnology is one of these infant industries.1 The term infant, as applied to this industry, does not necessarily mean
small; rather, it means that the young biotechnology industry today is not nearly
as large or as pivotal as it is going to be within a few years.
Up until now, the principal application of the biotechnology industry has
been in the development of drugs for the pharmaceutical industry. Credible estimates are that drugs and vaccines developed through biotechnology have
already benefited more than 250 million people.2
Perhaps this is one reason why some tend to view the biotechnology
industry as almost indistinguishable from the pharmaceutical industry, which
itself accounted in 1997 for 1.2 percent of GDP.3 Indeed, in an influential article
as late as 1999, the biotechnology industry was defined essentially as a subsection of pharmaceuticals, specifically as “an industry that uses biotechnology to
produce drugs or diagnostics previously unobtainable.” 4 We will see that this
definition no longer suffices.

Earlier presentations by Darby and Lichtenberg will have provided a comprehensive sketch of the macro importance of biotechnology. Here, I merely
note that by the mid-’90s, the sales volume of the pharmaceutical industry was
probably about fifteen times that of the biotech industry.5 Sales of pharmaceuticals will doubtless grow apace, especially with the progressive graying of our
population.6 Sales of the now infant biotech industry, though, will surely grow
even faster over the next two decades, as biotechnology applications extend further beyond pharmaceuticals, to the commonplace manipulation of DNA, proteins, and cells in fields ranging from agriculture, nutrition, and energy production to tissue engineering and, of course, gene therapy. Clearly, biotechnology
will be an industry serving a very large market beyond that for drugs. For example, the near-term worldwide market for tissue engineering products has been
estimated to be as high as $350 billion per year.7 Moreover, the future flowering
of biotechnology will not be limited only to advances in biosciences. Progress
in biotechnology turns out to be no less dependent upon advances in information technology and nanotechnology. If I succeed today only in portraying the
growing linkages between biotechnology, information technology, and nanotechnology, I will consider it a very good day’s work. I have enlisted in my
cause the testimony of several Nobel laureates, not only in medicine but also in
economics.
My remarks do not purport to cover all subfields of biotechnology but primarily the biotechnology I know best: that found today among Rice faculty and
their research partners at the Texas Medical Center, just across the street. Necessarily, then, my comments are focused somewhat more on the longer-term societal payoffs from activity in research labs than upon near-term market prospects.
BIOTECHNOLOGY, INFORMATION TECHNOLOGY,
AND NANOTECHNOLOGY
Pity those poor economists who will be specializing in national income
accounting in the decades to come. In attributing economic activity to distinct
sectors, how will they distinguish output in biotechnology from that in information technology and nanotechnology? To be sure, in 2050 we will still find
individuals identifying themselves as biologists or information scientists or nanotechnologists. The economists could ask those people. But would those scientists be able to draw clear dividing lines for the economists? Probably not,
because these rapidly evolving fields are becoming ever more closely linked.
Not only that, but the nature of the linkages among the three is itself evolving
rapidly. Figure 1 represents an attempt to depict intersections between them.
The chart was probably obsolete by the time it was constructed in March.
Figure 1
Linkages Between Key Technologies in Science and Engineering

The potential role of these technologies in reshaping our economy and
society cannot be understood by examining each in isolation. We know quite
well from the past two centuries of American economic history that technology-driven economic progress is almost never the result of a single invention or
even a single set of technologies. Rather, rapid economic growth has generally
been the outcome of the interplay between a collection of largely unanticipated
discoveries, clumping and clustering in very different fields, not over months or
years but usually decades. To illustrate: The economic progress often attributed
to the steam engine unfolded over at least a century and a half after Watt. For
example, in the United States even sixty-five years after Watt patented the steam
engine — as you know, he did not discover it — almost all manufacturing was
powered by water.8 The steam engine began to yield truly revolutionary change
only in the 19th century, when it was modified for use in transportation and in
the textile industry’s power looms. Watt’s legacy was vastly magnified by the
invention of the dynamo, pioneered by Faraday and Wheatstone and perfected
by Edison in 1878. Even so, as late as 1940 electricity had not reached large
swathes of the rural South in the United States. Herbert Simon, another Nobel
laureate,9 was clearly right on the mark in postulating that the ramifications of
any one technological innovation depend greatly upon the stimulation it provides to and receives from other, often quite independent, innovations.
Biotechnology Generally
Yet another Nobel laureate, economist-historian Robert Fogel,10 published
a series of papers in the ’80s and ’90s showing vividly the remarkable extent to
which investments in biomedical research made seventy-five and a hundred
years ago are still paying off handsomely today, in affecting how well we live
and how long we live. More recently, the distinguished economist William
Nordhaus has suggested that the medical revolution qualifies, on economic
grounds alone, as the “greatest benefit to mankind.” He has estimated that the
heretofore unmeasured value of “health income” (the value not captured in the
national income accounts) attributable to increases in longevity in the last hundred years is nearly as large as the value of measured income attributable to
nonhealth goods and services.11 Others offer findings paralleling those of Nordhaus. Lichtenberg estimates that just between 1960 and 1997, life expectancy at
birth increased by about 10 percent, to 76.5 years,12 attributable primarily to both
medical innovation and rising expenditures — especially public spending — in
medical care. The conclusions of Fogel, Nordhaus, and Lichtenberg are reinforced
by research from a growing body of economists (Mark McClellan, David Cutler,
Elizabeth Richardson among others) working in new traditions of analysis of
health economics.13 As impressive as were the gains of the past, tomorrow’s
biotechnology holds out the promise of benefits that could make those of the
last century appear pale in comparison.
Only thirteen years have passed since scientist W. French Anderson fired
the biotech shot heard around the world by administering the first artificial gene
to cure a hereditary illness. Since then we have learned more about the workings
of human genes than in the entire half century following the 1953 discovery of
the double helix by two modern counterparts of Prometheus, Watson and Crick.
As a result, biology has been transformed from a discipline centering upon the
passive study of life to one allowing the active alteration of life almost at will.
Virtually all the molecular rungs on the chemical ladders of the human genome
have been identified, providing us with an almost complete parts list for a human.
As David Baltimore says, “Now we can discover all the secrets of nature.” 14
The theoretical understanding developed in genetics and clinical advances
in gene therapy over the past fifty years bid fair to render commonplace medical applications that were once viewed as unthinkable. This new world of possibilities arises from the joining of the insights of the geneticist with advanced
tools of information and computational science and the rapidly growing skills
of biomedical engineers and nanotechnologists.


The promise of biotechnology is, however, not at all unalloyed: The possible blessings are very obvious, while the potential banes are not.15 Moreover,
there is the possibility that both our expectations and our worries over the
biotech revolution have been overinflated. Knowing all the secrets of nature
could bring utopia, but it could also usher in a nightmare world resembling that
limned in Aldous Huxley’s remarkably prescient book Brave New World, published seventy years ago. That world boasted some innovations already here
and one we still lack: genetic engineering of humans in vitro, powerful mood-altering pharmaceuticals, and body implants to complement “feelies,” the ultimate in participatory entertainment. All of these wonders were developed to
assure human happiness. But Huxley’s totally homogenized world could hardly
be either brave or happy, for it allowed no scope at all for the exercise of human
choice or respect for fundamental human values. Huxley’s world has yet to encroach much on ours, but at the very least, it stands as an unsettling reminder
that today’s biotechnology involves ethical thickets and moral issues that society has only just begun to plumb, much less resolve.16 When, for example, does
gene therapy spill over into eugenics? To what extent will the accumulation of
genetic information stigmatize affected people?
The Biotechnology – Information Technology Interface
Mathematical, statistical, and computer methods have become indispensable in the analysis of biological, biochemical, and biophysical data. Moreover,
the interactions also work in reverse: A growing number of projects in computational science are being driven by biological problems. No less a scientist than
David Baltimore flatly asserts: “Biology is today an information science.” 17 He
goes on to note that “the human genome, as it might be recorded in a web site,
is a string of three billion units over four letters.... Only computers can store
such data, only mathematicians can understand how to take sequenced DNA
fragments and put them together in appropriate order.” 18 Indeed, the human
genome has been reconstituted perhaps as much by advanced computational
technology as in wet labs.
The field of bioinformatics weaves together biology and information science.
Although the early commercial promise of bioinformatics, like that of the Internet,
has thus far proven to have been oversold, this should not obscure the fact that
bioinformatics is beginning to usher in another technological revolution. Whereas
classical medical research depended to a great extent upon trial and error, the discipline of bioinformatics allows research to be based upon information about networks of molecular interaction that control diseased, as well as normal, life.
The emergence of bioinformatics is merely the latest testament to the fruitfulness of university-based research in drawing together several disciplines to
work on vital questions. Arguably, the most fundamental advances in biomedicine
have come from advances in basic sciences in the academy. Virtually all
the stunning advances in diagnostic and therapeutic tools of recent years were
based on discoveries from the one place where one finds critical masses of
questing physicists, biologists, chemists, mathematicians, engineers, and computer scientists, as well as clinicians: the research university.
Directly from physics came magnetic resonance imaging and laser surgery.
From chemistry sprang fullerenes, first discovered at Rice University in 1985, as
well as a host of pharmaceuticals. From mechanical engineering came robotics
used in surgery. New insights from researchers in computer science and applied
mathematics led to groundbreaking work in tomography, genomics, and now,
proteomics.19 The interrelationships between biomedicine and information sciences can be seen to be especially strong in the area of medical imaging. At
Rice, at least a half dozen of our computational and mathematical scientists are
involved in joint work in medical imaging with physicians from the Texas Medical Center.
Several subfields in bioinformatics are moving ahead at high speed. Computational physiology is one of these. The virtual heart is a very good example:
A union of form and function on a computer screen, this heart is the result of
the translation of thousands of mathematical equations and data points into a
computer simulation. The Economist calls this a spectacular example of in silico
biology that brings computing power to bear on a much wider range of biological problems, from proteomic analysis to the re-creation of neural networks.20
Another new direction in the bio–info interface lies in computational cancer
research. Clinicians at M.D. Anderson Cancer Center stress that cancer is not a
hundred different diseases, but thousands of different diseases. At least five
mutations may be required to create a cancer cell, each drawn from a repertoire
of several hundred genes. Thus it is apparent that there is an overwhelming
number of possible combinations and permutations of cancer-causing mutations.21 This is exactly the type of problem that can be addressed only by biomathematicians, computational scientists, and biostatisticians — that is to say, those
in bioinformatics.
Finally, it is to be emphasized that new disciplines at the intersection of biotechnology and information technology have ample applicability to the mainstay
of 20th century biotechnology: new pharmaceutical products. The difference is that
for the 21st century, more and more pharmaceutical innovations will be IT-based.
The fusion of computational sciences with biochemistry and pharmacology has already given birth to the new discipline of pharmacogenomics, which
promises to allow the personalization of much of medicine. This new field will
augment and perhaps eventually replace traditional therapy based on the premise that “one drug fits all.”
Pharmacogenomics, like its predecessor pharmacogenetics, deals with the
genetic basis underlying variable drug response in different individuals. Pharmacogenetics
also relies on the study of sequence variations in genes thought
to affect drug response. But pharmacogenomics goes further: It looks at the
entire genome, enabling not only the identification of variant genes governing
different drug responses across patients but also identifying genes that affect
susceptibility to disease. Thus, pharmacogenomics may allow new insights into
disease prevention as well as individualized application of drug therapy.22
For this promise to be realized, scientists must understand fully not only
the genome but the proteome as well. That requires the development of increasingly more sophisticated and powerful computational methods. The Gulf Coast
Consortium for Bioinformatics, embracing Rice and six other institutional members in Houston and Galveston, is one of the venues where such computational
approaches to drug design are beginning to blossom. There, researchers are
developing powerful new tools for use in computer-aided design of drugs.23
Interdisciplinary approaches at the consortium have been very fruitful. Robotic
path planning, developed in engineering, has been applied to modeling biomolecular interactions to help solve problems in drug design.
The Biotechnology – Nanotechnology Interface
Nano is derived from the Greek word for dwarf. Nanotechnology is the
application of findings of the highly interdisciplinary field of nanoscale science,
which deals with objects as small as one billionth of a meter: a nanometer. Nanotechnology refers to activity involving the measurement, manipulation, and fabrication of objects from less than one to about 100 nanometers across. Nanotechnology is not to be confused with the more widely known, top-down
approach called miniaturization. Nanotechnology devices are built from the bottom up, one molecule, or even one atom, at a time.
My thumb is about 30 million nanometers wide. The nanometer, the width
of about ten hydrogen atoms, has come to be the preferred unit of measure
among scientists and engineers working at or very near the atomic scale in biology, electronics, and materials science. Naturally, these individuals have come
to be called nanoscientists and/or nanotechnologists, working in either “dry” or
“wet” nanofields. The dry side is, naturally enough, waterless. The wet side centers on the study of biological systems that exist in a water environment. By
2002, the wet side of nanotechnology had become virtually indistinguishable
from molecular biophysics, structural biology, and biotechnology. Chemistry
Nobel laureate Rick Smalley goes so far as to assert that 21st century biotechnology could be considered a subset of wet nanotechnology.
The nanoworld is where much of nature’s weirdness resides — the borderland
between the world of quantum mechanics and the more familiar macroworld of
classical physics, where different laws apply. Navigation in this landscape is
difficult indeed. Much of the most interesting work focuses on an
intermediate domain between the two worlds, involving structures too large to
be easily understood with ordinary quantum mechanics but not large enough
to escape fully the weirdness of quantum effects.24
Until quite recently, nanoscale science was on the leading edge of research,
while nanotechnology was on the “bleeding” edge of applications: lots of money
going out and not much coming in. That is now beginning to change, as we will
see, as investors and governments have begun to turn on the financial taps.
Government support of nanoresearch has risen sharply in recent years,
growing faster than that from private sources. At the federal level, the total nanotech budget for FY 2003 is proposed to increase by 17 percent, with a striking
57 percent increase for the Department of Energy. A similar pattern may be
found in other countries, where total funding for nanotechnology has jumped
from $316 million four years ago to $835 million last year.25 New nanotechnology centers have recently been established in Cambridge and Oxford,
one focusing on wet nanotech, the other on dry nanotech.
While the dry side of nanotechnology, especially that involving new materials, is not irrelevant for biotechnology, the wet side is by far more significant. Biomedical applications of nanotechnology were given a large boost after
it was established that the two nanoparticles discovered at Rice — carbon 60 (the
Buckyball) and carbon 70 — are nontoxic.26,27 These particles, commonly called
“fullerenes,” possess two other traits that make them especially suited for biomedical applications.
First, they are very, very small — about one nanometer wide. Second, their
surfaces are particularly well suited for attaching therapeutic compounds. In the
words of one of the discoverers, Rick Smalley, “They are molecular pincushions
that can easily be decorated with other chemicals.” Exploitation of these properties of fullerenes is proceeding. One promising anti-AIDS application capitalizes on three features of the Buckyball: its size, its ability to carry chemicals
enabling delivery of drugs to specifically targeted sites, and its unique shape that
facilitates binding with HIV-infected cells.
At least as promising are the efforts under way at Rice and nearby M.D.
Anderson Cancer Center involving nanoparticles other than fullerenes: gold
nanoshells. These are biocompatible devices with a gold surface adhered to a
silica core. At 100 nanometers in width, they easily pass through the circulatory
system. The optical properties of nanoshells may prove extremely useful in both
diagnosis and treatment. Once inserted into the body and delivered to sites of
individual tumors by virtue of antibodies attached to them, they are hit by
infrared light and heated up to 55 degrees centigrade, enough to destroy cancer
cells while leaving intact healthy ones. This highly localized therapy can penetrate
up to 15 cm. in tissues and thus reach all organs without the serious side effects
of chemotherapy or radiation therapy.


The Grand Interface: Bio – Info – Nano
We have come to the juncture of all three of the new technologies: the
design and utilization of nanomaterials for biomedical engineering. Especially
notable is the rapidly growing field of tissue engineering, which focuses primarily upon the development of biological substitutes to restore, maintain, or
improve tissue function. Put another way, tissue engineering will allow fabrication, on a large scale, of a range of spare human parts to replace diseased or
spent ones, with or without the help of embryonic stem cells.
Twentieth century forms of biomedical engineering will doubtless persist
for a time, until the field is largely eclipsed by tissue engineering. Traditional
bioengineering has already brought us biomechanical body parts, including
unduly bulky whole organs, various joints, heart valves, stents, and the like. It
is notable that in 2001, thirty-three years after Christiaan Barnard’s first transplant
of a living heart, an artificial heart has allowed a handful of patients to remain
alive for several weeks.
Veterinarians have contributed as well, drawing on their experience with
large animals to fashion ingenious, highly compact devices that not only augment the activity of damaged human left ventricles but also in some cases even
allow damaged natural heart tissue to heal and resume functioning. Also, innovative research is exploring how metal and ceramics can be used in the fabrication of artificial lungs.
Progress in providing other organs much more complex than the lungs or
highly specialized heart tissue will be longer in coming but is no longer the stuff
of science fiction. The overwhelming share of those advances will come from
newer approaches to tissue engineering, some of which rely on stem cells taken
from adults. Experiments are already under way using living cells to make
bioartificial pancreases and livers. Virtually every other part of the body has
attracted researchers seeking ways to create bioartificial replacements.
Traditional biomedical engineering uses metals, polymers, and ceramics
to construct temporary or permanent replacements of body parts that interact
minimally with surrounding tissue. Tissue engineers take exactly the opposite
approach: They design materials to interact extensively with adjacent tissues in
order to facilitate the regeneration process.28
In bone regeneration, for example, tissue engineers use biodegradable
polymers to create scaffolding shaped like the lost bone. A biopsy is taken from
the patient himself, the bone-forming cells are isolated and expanded in the laboratory and seeded onto a scaffold. The cell/scaffold construct is grown in a
bioreactor and then grafted back into the patient. As the cells integrate with the
body’s own tissue, the polymer scaffold gradually melts away, leaving only living
tissue behind. With the right signals, this newly formed tissue regenerates the
missing bone.
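The scaffold-and-cells sequence just described can be caricatured in a toy simulation. This is purely my own illustration, not from the chapter: the weekly decay and growth rates are invented placeholders, but the sketch shows the intended handoff from polymer scaffold to living tissue.

```python
# Toy model (my own illustration): seeded bone-forming cells proliferate
# logistically while the biodegradable polymer scaffold decays away, so the
# construct transitions from mostly polymer to mostly living tissue.

def simulate(weeks: int, decay: float = 0.15, growth: float = 0.4):
    """Return a list of (scaffold_mass, tissue_fraction) pairs, one per week.
    decay: assumed weekly fractional loss of the polymer scaffold.
    growth: assumed logistic growth rate of the seeded cells."""
    scaffold, tissue = 1.0, 0.05  # start: full scaffold, 5% seeded cells
    history = []
    for _ in range(weeks):
        scaffold *= (1.0 - decay)                  # polymer "melts away"
        tissue += growth * tissue * (1.0 - tissue) # cells fill in the space
        history.append((scaffold, tissue))
    return history

trajectory = simulate(26)  # roughly half a year
final_scaffold, final_tissue = trajectory[-1]
print(f"after 26 weeks: scaffold {final_scaffold:.0%}, tissue {final_tissue:.0%}")
```

The numbers are arbitrary; the design point is only that the two curves cross, matching the text's description of the scaffold gradually disappearing as tissue integrates.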

Malcolm Gillis

Scientists and engineers at Rice and other Texas locations are engaged in
promising research for deploying tissue engineering to deal with a multitude of
other medical problems: atherosclerosis, thrombosis, inflammations, osteoporosis, cartilage regeneration, and repair of tissue.
The objectives of tissue engineering are not limited to bone and organ
replacement. Tissue engineers have already developed quite serviceable blood
substitutes. Most recently, protein engineers at Rice together with industrial collaborators surmounted one of the most vexing problems in the development of
blood substitutes. Recombinant technology was used to design new hemoglobin molecules that eliminate the hypertensive side effects of previously available
blood substitutes, traceable to nitric oxide scavenging. This work is being
carried out under the auspices of the Gulf Coast Consortia, plural because it is
an umbrella organization for cooperative research and education in structural
biology, computational biology, and molecular biophysics.
This type of research requires the most resourceful efforts not only of
biologists but also of information scientists and nanotechnologists. Wet nanotechnology is used to create tissue analogs to grow skin, muscle, and organs. The
computational and structural skills of engineers are required to construct the
scaffolds on which bioscientists build, after having used mathematical models to
image their work.29
Nowhere is the interplay between bio-, nano-, and information technology
more striking than in new forms of health maintenance, diagnosis, and treatment. Already nanometer-sized biosensors can be inserted into the bloodstream.
More advanced nanosensors could eventually monitor all bodily functions.
CONCLUSION: OF PARTS LISTS AND SPARE PARTS
The potential for truly staggering applications of biotechnology in the marketplace is in little doubt. Whether much of this potential will be soon realized
is, however, as yet unclear. Financial constraints on biotechnological transfer may
be shrinking, but legal constraints loom somewhat larger than in past technological revolutions.
On the bright side, biotechnology looks to be the principal arena for an
ongoing, far-reaching synthesis in science and engineering. As the infant industry of biotechnology reaches its adolescence and later adulthood, it can be
expected to provide a wide array of products and services to fuel sharp
increases in living standards in the 21st century. Pharmaceuticals will doubtless
remain prominent in this picture, but other types of new products and services
should grow steadily in importance.
From genomics, biotechnology has already provided us with a complete parts list for both animal and plant life. As a result of advances in wet nanotechnology and information technology, tissue engineering promises to provide widely available, inexpensive, and reliable spare parts for humans. There
is, however, a darker side: a still unresolved and complex welter of ethical —
and perhaps moral — issues raised by our fast-expanding capacities in biotechnology. These are just beginning to be systematically addressed on many university campuses across the nation — including my own — and in boardrooms
and the halls of Congress. Considerable wisdom will be required to ensure that
the potential of the biotechnological revolution is realized without erosion in
fundamental human values. With resolution of these issues, the economic and
social impact could be as profound and as positive as that wrought by any previous revolution in human history.
NOTES
I am grateful for comments from many Rice colleagues, especially Eugene Levy, Kathleen
Matthews, Neal Lane, Tony Mikos, John Olson, Terry Shepard, and Moshe Vardi.
1. Baltimore (2001), 43–45.
2. Feldbaum (2002).
3. Landau (1999), xi.
4. Scriabine (1999), 271.
5. According to Landau, sales of the pharmaceutical industry in 1997 were $122 billion. Given ambiguities over the very meaning of biotech sales, few precise figures are available. Sales values for the biotech industry, according to leading biotech scientists, reached only $6 billion in 1993 and may have grown to $7.5 billion by 1997. See Landau (1999) and Rudolph and McIntire (1996).
6. Nearly 30 percent of biotech products in Phase III of clinical trials are for cancer and another 11 percent for the nervous system, including Alzheimer’s.
7. As a measure of the potential present-day market for tissue-engineering products, consider that organ replacement therapies using standard organometallic devices constitute about one-twelfth of medical spending worldwide, or about $350 billion. See McIntire (2002), chapter 1, 1.
8. Rosenberg and Trajtenberg (2001).
9. Simon (2001).
10. Fogel (1994).
11. Nordhaus (2002).
12. Lichtenberg (2002).
13. Cutler et al. (1998).
14. Baltimore (2001), 49.
15. This line of argument is developed skillfully at some length by Francis Fukuyama (2002) in his new book, Our Posthuman Future: Consequences of the Biotechnology Revolution. See also Wade (2002).
16. See, for example, Rothstein (1996).
17. Baltimore (2001), 44, 49.
18. Baltimore (2001), 48.
19. Proteomics is the study of the proteome, an organism’s total protein set.
20. The Economist (2001).
21. M. D. Anderson Cancer Center, “The Ross and Margot Perot Center for Computational Cancer Research” (undated, but written in January 2002).
22. See Mancinelli, Cronin, and Sadee (2000).
23. See, for example, Finn and Kavraki (1999).
24. Roukes (2001).
25. Stix (2001).
26. Wilson (1999).
27. Researchers at Rice and the Texas Medical Center in 1996 found that carbon 60 (the buckyball) does accumulate in the liver since it cannot be oxidized in mammals. However, no toxic effects were noted.
28. Antonios Mikos, bioengineer at Rice.
29. Jackson (2002).

REFERENCES
Baltimore, David (2001), “How Biology Became an Information Science,” in The Invisible Future:
The Seamless Integration of Technology into Everyday Life, ed. Peter J. Denning (McGraw-Hill).
Cutler, David M., Mark McClellan, Joseph P. Newhouse, and Dahlia Remler (1998), “Are Medical
Prices Declining? Evidence from Heart Attack Treatments,” Quarterly Journal of Economics 113 (4).
The Economist (2001), “The Heart of the Matter,” December 8, 21.
Feldbaum, Carl (2002), “Some History Should Be Repeated,” Science 295 (Feb. 8): 975.
Finn, Paul W., and Lydia E. Kavraki (1999), “Computational Approaches to Drug Design,” Algorithmica 25: 347–71.
Fogel, Robert W. (1994), “Economic Growth, Population Theory, and Physiology: The Bearing of
Long-Term Processes on the Making of Economic Policy,” Presidential Address, American Economic Review 84(3): 369 – 95.
Fukuyama, Francis (2002), Our Posthuman Future: Consequences of the Biotechnology Revolution (New York: Farrar, Straus and Giroux).
Jackson, Shirley A. (2002), “Interdisciplinary Research Is a Wise Investment in Our Future,”
Trusteeship 10(1) (Association of Governing Boards of Universities and Colleges).


Landau, Ralph (1999), “Introduction,” in Pharmaceutical Innovation: Revolutionizing Human
Health, eds. Ralph Landau, Basil Achilladelis, and Alexander Scriabine (Philadelphia: Chemical
Heritage Press).
Lichtenberg, Frank R. (2002), “Sources of U.S. Longevity Increase, 1960 –1997,” NBER Working
Paper Series, no. 8755 (Cambridge, Mass.: National Bureau of Economic Research), January.
Mancinelli, Laviero, Maureen Cronin, and Wolfgang Sadee (2000), “Pharmacogenomics: The
Promise of Personalized Medicine,” AAPS PharmSci 2(1): 1– 20.
McIntire, Larry V. (2002), “Introduction,” in WTEC Panel Report on Tissue Engineering Research
(Washington, D.C.: National Science Foundation), January.
Nordhaus, William (2002), “The Health of Nations: The Contribution of Improved Health to Living
Standards,” NBER Working Paper Series, no. 8818 (Cambridge, Mass.: National Bureau of Economic Research), March.
Rosenberg, Nathan, and Manuel Trajtenberg (2001), “A General Purpose Technology at Work:
The Corliss Steam Engine in the Late 19th Century U.S.,” NBER Working Paper Series, no. 8485
(Cambridge, Mass.: National Bureau of Economic Research), September.
Rothstein, Mark (1996), “Ethical Issues Surrounding the New Technology as Applied to Health
Care,” in Biotechnology: Science, Engineering, and Ethical Challenges for the 21st Century,
eds. Frederick B. Rudolph and Larry V. McIntire (Washington, D.C.: National Academy Press),
199 – 207.
Roukes, Michael (2001), “Plenty of Room Indeed,” Scientific American, September.
Rudolph, Frederick B., and Larry V. McIntire, eds. (1996), Biotechnology: Science, Engineering,
and Ethical Challenges for the 21st Century (Washington, D.C.: National Academy Press).
Scriabine, Alexander (1999), “The Role of Biotechnology in Drug Development,” in Pharmaceutical Innovation: Revolutionizing Human Health, eds. Ralph Landau, Basil Achilladelis, and Alexander Scriabine (Philadelphia: Chemical Heritage Press).
Simon, Herbert A. (1987), “The Steam Engine and the Computer: What Makes Technology Revolutionary,” EDUCOM Bulletin 22(1): 2– 5.
Stix, Gary (2001), “Little Big Science,” Scientific American, September.
Wade, Nicholas (2002), “A Dim View of a Posthuman Future,” New York Times, April 2, D-1.
Wilson, Lon J. (1999), “Medical Applications of Fullerenes and Metallofullerenes,” Electrochemical Society Interface 8 (Winter): 24 – 28.

The Convergence of Disruptive
Technologies Enabling a
New Industrial Approach
to Health Products
C. Thomas Caskey

I have prepared this talk to cover two areas. First, I want to present a convergence of disruptive technologies that are driving new therapeutics. I will
then finish with the situation in the state of Texas with regard to the traction we have in biotechnology, the companies being created, and what strength
our academic community is providing to stimulate the initiative.
Bear with me for a moment while I create a yachting analogy for the biotech industry. The Vendée Globe yacht is a single-person-managed yacht with a maximum speed of about 35 knots and the capacity to go nonstop around the world. In the Vendée race, it goes south, following a southern arc. This is the same arc undertaken at the turn of the century by Shackleton, who needed twenty-four months and a substantial crew.
The difference between the experiences early in the century and the experiences with the Vendée Great South Yacht Race is the difference in technologies,
which enables a single individual to achieve the objective. My message is that
technology empowers. It is the focus on new technology by biotechs that allows
them to sail faster than large pharmaceutical firms.
The Vendée race also has some good analogies to business. Let me illustrate. Four yachts were sunk. Three were sunk under conditions in which they
had full appreciation of conditions and their position. The fourth yacht knew
where it was but not the surrounding conditions. It had lost its technology and
thus was unable to estimate the surrounding conditions.
There were two categories of rescuers for these failing yachts. There was
the individual, high-tech rescuer, as illustrated by Pete Goss, who found the
failing boat in the vast southern ocean. He utilized GPS technology and had full
knowledge of the conditions surrounding both single-man boats. In the second
example, the rescuer was disadvantaged because she lacked GPS position and
knowledge of the surrounding environment. The third situation utilized a large
interdisciplinary team. All came to the rescue of individual entrepreneurs and,
in this case, individual sailors. The outcome of the race was one lost at sea,
one racer-to-racer rescue (I equate that to biotech-to-biotech rescue), and two
rescued by large organizations (I equate that to pharma). The overall outcome
of the Great Southern Race: one winner, eight finished, four sunk, three rescued,
and one lost. That is about the same distribution of outcomes as in biotechnology investment.
I hope you remember this illustration as an example of what it takes for high
tech to succeed.
I will now shift from this introductory illustration to discuss the technologies that are enabling or empowering smaller numbers of scientists to do more.
HIV drugs have been discussed. It was my great pleasure to be senior vice president at Merck as we developed the reverse transcriptase inhibitors and proteinase inhibitors. I cannot recall a time in science that has been so rewarding.
We were able to see these drugs safely introduced and death rates fall. It was
an exciting, exciting time. We focused on the technologies necessary to achieve
that objective in eight and a half years, which was a record time for drug
approval by the FDA. No other drug development effort has ever matched that
effort. It started with the isolation of the HIV virus, then recombinant DNA
technology, cloning, sequencing, and understanding the structure and function
of the HIV virus. Converging disruptive technologies were fundamental to the
project; without them we would have had no product.
Never underestimate the power of cell biology. Those who criticize the
Nixon era war on cancer, saying it achieved little, were wrong. The war on cancer
gave the United States dominance in the area of cell biology. It was the ability
to grow cells and use a virus’s DNA to infect a cell that allowed us to satisfy
Koch’s postulates that the killing of a cell was caused by a virus. Then came the
ability to study drug targets by genomic sequencing and predicting gene function and thus matching the cell biology to drug development.
We had lucky breaks in the first two HIV drugs. There were cancer products in development that were structurally similar to the inhibitors of the reverse
transcriptase and the antihypertensive drugs. We were off and running, as were
many other pharmaceutical companies. We were on target within a short period
of time. Thus by recombinant DNA technology, Koch’s postulates were satisfied
by cell biology, followed by the lucky break of having the lead compounds. We are
now developing new products (integrase inhibitors and CCR5 and 4 blockers).
But this development is more difficult because we lack leads; thus we have to
find them by combinatorial chemistry, leading to a longer development time.
Let me shift to replacement inhibitory proteins. Examples of replacement
proteins would be erythropoietin (EPO), Factor VIII, interferon, growth hormone, and insulin.
Examples of the inhibitory proteins would be monoclonal antibodies now used for anticoagulation, arthritis, and cancer. There are some very novel new protein products that I would put under the classification of a Trojan horse. It looks
pretty friendly, but if you let it in the door to that receptor, it kills the receptor’s
function. Examples include emerging drugs for arthritis and cancer. These are
true recombinant molecules that create a new event at the target receptor.
Recombinant DNA technology and bioinformatics were used to predict functionality of these proteins and thus select them for drug development. We drew
upon an accumulation of data from huge databases and bioinformatics that were
critical to the identification of proteins and their corresponding monoclonal antibodies. If you examine the products from this arena, many of them are fashioned to a native molecule. We can, however, improve on nature for pharmaceutical products. We can make it an injectable, achieve a therapy peak, make
a longer action, and add a safety factor. The study of proteins and monoclonal
antibodies opened a new therapeutic area and created biotechnology.
Let me shift to HIV vaccines. Utilizing DNA technology, viral genome
sequencing as well as delivery of viral vector constructs, HIV vaccine strategy
is directed at making a harmless virus that delivers the immunological challenge
and creates the immune response protective from HIV. Such a safe virus was
made possible by our understanding its genes and predicting what could be
removed from these vectors to create the vaccine. Cell growth and cell transfer
were critical to success.
HIV vaccines have been extremely difficult to develop. We had a poor
understanding of how to protect against HIV infections. We had to discover the
process by which the virus made cell entry, permanently established itself,
avoided the immune protective system, and so on. The creativity of individual
scientists to understand this biology has made possible the new vaccines now
in trial. There have been misjudgments made along the way. We know now that
antibodies alone do not work to protect individuals. What worked for hepatitis
B, human papillomavirus, and other viruses did not work for HIV. The entire
strategy had to be changed to accommodate so-called viral cell killing. Last
week we witnessed the first failure of this strategy that used a canary pox virus
vaccine. Even with knowledge and good design, we still have a challenge on
the use of these disruptive tools to achieve a vaccine. One trial I want you to
follow now is one using a combination of a DNA-injectable, followed by a protected artificial virus construct for T-cell immune stimulation. It has protective
effects for the primate with HIV. Such a vaccine may protect individuals from
infection by clearing infected patients of their residual infected cells, those not
eliminated by drug cocktails.
Next, I will focus on Alzheimer’s. Consider where we were with Alzheimer’s
drug development ten years ago. Our efforts have been absolutely focused and
put into logic by the discovery of disease genes responsible for inherited forms
of Alzheimer’s. The study of human genetics and the discovery of the disease–gene associations set the field to work in a logical manner. Before these discoveries we had no conceptualization of the disease or logic for drug development. There are many more opportunities from human genetics. Humans have
about 5,000 to 6,000 inheritable diseases whose causative genes are yet to be
discovered. The disease – gene relationship represents the starting point for conceptualizing the disease process and removes the biologic chaos. Approximately
forty disease – gene associations are being discovered per month. Ten years ago
it was ten per year. This accelerated discovery rate can be connected to the scientific empowerment of the human genome project completed in 2003.
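As a quick check of the acceleration claimed above (the rates are from the talk; the arithmetic restatement is mine):

```python
# Disease-gene association discovery rates quoted in the talk.
per_month_now = 40   # roughly forty associations discovered per month today
per_year_then = 10   # versus about ten per year a decade earlier

per_year_now = per_month_now * 12
speedup = per_year_now / per_year_then

print(per_year_now)  # 480 associations per year
print(speedup)       # a 48-fold acceleration
```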
High-throughput drug screening and combinatorial chemistry have accelerated the discovery of lead compounds. In the past, pharmaceutical companies
amassed vast collections of chemical compounds, without a strategy to move
from one compound to others efficiently. All changed with the conceptualization and implementation of combinatorial chemistry, where one could begin to
build platforms of space-occupying chemistry. This created large numbers of
compounds. Now, a million-compound collection is not uncommon. Furthermore, rarely do the first compounds have the best properties. Medicinal compounds are modified for safety, drug distribution, dosage, etc. Combinatorial
chemistry has that strategy to achieve such objectives. Scientists can move rapidly. A single scientist can develop from a single lead compound a new set of
100 or 500 related compounds within a week. Such productivity previously
required teams of 100 to 500 chemists. Without high-throughput drug screening
and computer analysis of large numbers of compounds, none of this would
have been achieved. Each company now has adapted this integrated technology for drug development. Alzheimer drug development makes tremendous use
of combinatorial chemistry, high-throughput screening, and human genetic gene
leads to achieve the objective of new drugs.
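The scale economics of combinatorial chemistry described above follow from simple multiplication. The sketch below is my own illustration, not from the talk; the site and building-block counts are assumed for the sake of the example.

```python
def library_size(sites: int, blocks_per_site: int) -> int:
    """Distinct compounds when each of `sites` variable positions on a
    common scaffold can carry any of `blocks_per_site` building blocks."""
    return blocks_per_site ** sites

# Three variable positions with 100 building blocks each already yields a
# million-compound collection, the scale mentioned in the text.
print(library_size(3, 100))  # 1000000

# One scientist producing 500 analogs of a lead compound per week sustains
# a pace that once required a large team of medicinal chemists.
print(500 * 52)  # 26000 analogs per scientist-year
```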
The most important development involving disruptive technologies is that
of disease models and the development of designer species. It makes use of
genome sequences, human genetics, mouse genome sequences, and stem cells.
We have known about the stem cell for over twenty years. It is the cell that can be
manipulated to realize all the mouse models that we constructed over the past
twenty years. Stem cells have been very much in the research forefront and are used
very proactively. We are in the early stage of use of stem cells for human disease.
Let me illustrate a stem cell utility. Let us assume there are five genes involved in Alzheimer’s. We know that for a single patient with Alzheimer’s, we can
identify one of four genes. By transfer of the patient’s cell nucleus into a laboratory-friendly cell type, we would have the capacity to test a single drug for efficacy.
This provides a preclinical test for therapy response and drug choice.
Let’s focus on the creation of mouse disease models. Companies like Lexicon
Genetics, located in The Woodlands, Texas, and several others have carried out
gene knockouts in the mouse to create disease models, allowing a scientist to examine a single gene and determine the disease. The hospitalization and
examination rate of these mouse knockouts is a thousand per year. Thus Lexicon has industrialized disease – gene discovery for mice and man and accelerated drug development.
Let us now focus on Texas. Texas is a big agricultural state. Our northern
neighbor, Canada, is also a large agricultural entity. Canada committed substantial resources to developing special species of advantage in fish, forest, commercial crops, and production animals. I use this as an example of how the
empowering technology in genome science allows you to reach not only the mammal but also very important commercial species. We need to accelerate our Texas initiatives in biotechnology.
What has been achieved by biotechnology? Of the new therapeutics, 35
percent comes from innovative products being developed out of biotech corporations. That is a very aggressive pipeline of new therapeutics. We have
numerous examples of biotech products being licensed and acquired by large
pharma. The point to be made: Biotech can move faster and with more focus than
pharma. Big pharma is that locomotive preparing to have the switches thrown.
Big pharma is powerful in development; biotechs are weak in development.
Biotechs can prove the principle of the therapy, bring the product to the point
of high likelihood of utility, but need big pharma to engage in development.
Rarely can a biotech go to FDA approval with available funding.
What are the novel products coming out of biotechnology? I can assure
you that few large pharmas would have taken on the uncertainty of these new
therapeutics. They just will not take the challenge; there is too high a risk. There
are many biotechs. Some will succeed. Some will fail. It is, however, biotechnology that will discover an important pathway for regulating cancer, controlling cancer, regulating diseases like type 1 diabetes, multiple sclerosis, and others — all based on cytotoxic T lymphocytes (CTLs).
There is not one pharma that would have touched antisense because of its
novelty. The first ophthalmologic products are now in use. Isis Pharmaceuticals started
antisense therapy. Almost all vaccines are licensed from outside biotechs.
Biotechs now lead the development of terrorism vaccines. No pharma would
approach gene therapy—again based on risk. Wonderful opportunities now exist
for organ and cell transplantation, and these will rely upon biotechs for advancement.
Let’s examine the financial drivers for biotechnology in Texas. As shown in
Table 1, the National Institutes of Health is the gorilla. The National Science
Foundation is increasing in importance. Pharma exceeds these two because of
the high cost of development. Venture capital, while small, is a key stimulator.
State contributions are about a million nationally.
Diversification and building high-tech industry are critical to the state. Let’s
examine the fuel for the discovery engine in Texas (Table 2 ). Listed here are

82

C. Thomas Caskey

Table 1

Financial Drivers of Biotechnology in Texas

National Institutes of Health    $23.0 million
National Science Foundation       $4.8 million
Pharmas                          $26.0 million
Venture Capital                   $3.3 million
State                             $1.0 million

NIH numbers from 2001, and they are focused on the biologic sector. This does
not include training, so it gives you an idea of the level of investment. You can
see that Texas is doing extremely well. These are excellent numbers. This is an
important engine for the state and reflects the state’s wisdom in developing
these academic institutions. We need to capitalize on this base. An example of
how to capitalize would be the creation of the ability to transfer this technology
into industrial parks. I favor development of a second Texas Medical Center for
Industry, adjacent to the Texas Medical Center and the size of the Texas Medical
Center. It is estimated such a campus would exceed the income of the Texas
Medical Center within fifteen years. We are underachieving by almost tenfold
in the introduction of new corporations in the state, based upon our investment.
Before we rest on our achievements in our great state, there is competition to
be considered. I have identified two substantial challengers — San Francisco and
Boston. Their numbers (Table 3 ) are terrific. These are very powerful institutions that score beautifully in the ability of their scientists to draw in basic
research numbers.
Table 2

Texas’ 2001 Share of National Research Funds

SOURCE: National Institutes of Health.


Table 3

Texas’ Competition for National Research Funds in 2001

SOURCE: National Institutes of Health.

My favorite project on the West Coast is Mission Bay in San Francisco. It is the site of the old 1906 earthquake that has been reclaimed. This is a real
estate-driven park. The funds are from the private sector. Everything from academic institutions to biotech clusters is located within it. Profit is made on the
commercial side: commercial centers, townhouses, housing, hotels, and, incidentally, biotech parks in academic institutions. It is located adjacent to downtown. It is a beautiful opportunity for San Francisco to accelerate what is already
a very powerful focus in the state.
The opportunities for expansion in our region are to increase the number
of quality start-ups. We need to recruit pharma and biotech into our region, not
just to grow them but to recruit them. We need to be looking for more opportunities for consolidation. The consolidations do not have to come out of Texas.
We could take a Texas base and move a company from Seattle or Baltimore, for
example. Consolidations are critical because of the large numbers of companies
drifting down to smaller numbers of quality organizations.
It is clear that improvement in business plans and management is critical.
The talent and skills are not at an optimal level in our state. To have talent and
the opportunity for expansion, we need to build and recruit. We need larger VC
investment firms. We need regional incentives such as those being offered in Michigan, Ohio, Missouri, and other states. I am extremely pleased with Governor Perry’s
initiative in the state of Texas. When you have the governor leading, others join
the logic. His leadership has been critical. We are fortunate to have this leadership in the state. Finally, we need to advertise, advertise, advertise, and communicate with the VCs that are moving and driving biotech.


Table 4

Publicly Traded Biotech Corporations in Texas
Introgen ($95M)
Lexicon ($460M)
Luminex ($200M)
Tanox ($600M)
Texas Biotechnology ($250M)
Total: $1.605 Billion

Table 4 shows the Texas-based companies traded on the Nasdaq. All these
companies are derivatives of academic institutions. The research engine of the
institutions leads to the ideas that back these companies. We are doing well, but
we can do better.
Lexicon Genetics, in The Woodlands, is an example of one of those Nasdaq-traded companies. It is now a 500-person pharmaceutical company. It has two
divisions; the chemistry company is in New Jersey, and a biology division is in
The Woodlands. Gordon Cain has to be given a lot of credit for this corporation. He stepped up to the plate to singularly fund the company. He told me
several days ago he preferred single ownership. Few had his vision and
resources. The public offering raised the necessary capital from the market to
build a corporation in Texas and New Jersey. We can do more in Texas with
leaders of the quality and confidence of Gordon Cain.
Let me finish with a few comments. First, we have abundant unsolved
medical needs. The new technology enables innovative discovery. Second, we
have an emerging set of new information from the genome project — all investigator empowering. Third, the biotechs can move faster and are more focused
than pharmas. What are the challenges? I see the following challenges for
biotech: Visualize your product. Manage the company to achieve that product.
Value the product properly at the outset, and fund the company sufficiently to
be able to drive to the end point you are trying to achieve in the development.
Consistently look for the opportunity for consolidation, and wish good health
to big pharma.

PART THREE

Legal and Regulatory Issues
Facing Biotechnology
Patents and New Product Development
in the Pharmaceutical and
Biotechnology Industries
Henry G. Grabowski

Reaching Through the Genome
Rebecca S. Eisenberg

Patents and New Product
Development in the Pharmaceutical
and Biotechnology Industries
Henry G. Grabowski

Griliches, in a 1992 survey paper, found that high social returns to R&D
were a major factor underlying the growth in per capita income and consumer welfare during the twentieth century.1 Many of the studies done by
economists on this topic have found that the social returns to R&D are more
than twice the private returns to R&D.2 A primary reason for this finding is the
positive externalities generally associated with industrial innovations. As F. M.
Scherer stated in his leading graduate text in industrial organization, “Making the
best use of resources at any time is important. But in the long run it is dynamic
performance that counts.” 3
The pharmaceutical and biotechnology industries, which are among the most research-intensive industries, have been the focus of several studies of the costs, benefits, and social returns of R&D. Elsewhere in this symposium, Frank Lichtenberg has reported on his findings concerning the impact of new drugs on
increased longevity, worker productivity, and savings in other types of medical
expenditures.4 He finds significant aggregate net benefits to society from new
drug introductions. His analysis is consistent with more microeconomic analyses
targeted to specific medical conditions such as cardiovascular disease, depression, and infectious disease. These studies have also found high incremental
social benefits from new drug innovation.5
Another general finding of the academic literature is that public policy
actions can have a significant influence on the rate of innovation in particular
industries. Among the key industrial policies influencing the innovative process
in pharmaceuticals are the public support of biomedical research, patents, FDA
regulatory policy, and government reimbursement controls.6 The focus of this
paper is on the role and impact of patents and intellectual property protection
in the discovery and development of new pharmaceutical and biotechnical
products.

The importance of patents to pharmaceutical innovation has been reported
in several cross-industry studies by economists. In particular, Richard Levin et
al. and Wes Cohen et al. have undertaken surveys of U.S. R&D managers in a
large cross section of industries to identify which factors are most important
and necessary in appropriating the benefits from innovations.7 These factors
included the competitive advantages of being first in the market, superior sales
and service efforts, and secrecy and complexity of production and product
technology, as well as patents. Both studies found that the pharmaceutical
industry placed the highest importance on patents. By contrast, many other
research-intensive industries, such as computers and semiconductors, placed
greater stress on factors like lead time and learning-by-doing efficiencies in production accruing to first movers.
The findings of these studies are in accordance with an earlier study by
the British economists Taylor and Silberston. Based on a survey of U.K. R&D
managers, they estimated that pharmaceutical R&D expenditures would be
reduced by 64 percent in the absence of patent protections. By contrast, the corresponding reduction was only 8 percent across all industries. Similar findings
were reported by Edwin Mansfield from a survey of the research directors of
100 U.S. corporations.8
In the sections of this paper that follow, we examine the economic characteristics of the R&D process in pharmaceuticals that make patents so critical.
The next two sections consider the costs of innovation relative to imitation in
this industry. The following section considers whether the biotech industry is
different from the pharmaceutical industry in terms of R&D costs. The paper
then considers the distribution of returns on R&D in these industries. The final
section presents conclusions and policy considerations.
R&D COSTS FOR A NEW DRUG INTRODUCTION
The explanation for why patents are more important to pharmaceutical
firms in appropriating the benefits from innovation follows directly from the
characteristics of the pharmaceutical R&D process. In essence, it takes several
hundred million dollars to discover, develop, and gain regulatory approval for
a new medicine. Absent patent protection, or some equivalent barrier, imitators
could free ride on the innovator’s FDA approval and duplicate the compound
for a small fraction of the originator’s costs. In essence, imitation costs in pharmaceuticals are extremely low relative to the innovator’s costs for discovering
and developing a new compound.
One of the reasons R&D is so costly in pharmaceuticals is that most new
drug candidates fail to reach the market. Failure can result from toxicity, carcinogenicity, manufacturing difficulties, inconvenient dosing characteristics,
inadequate efficacy, economic and competitive factors, and various other problems. Typically, less than 1 percent of the compounds examined in the preclinical period make it into human testing. Only 20 percent of the compounds
entering clinical trials survive the development process and gain FDA approval.9
Furthermore, the full R&D process from synthesis to FDA approval involves
undertaking successive trials of increasing size and complexity. The preclinical
and clinical testing phases generally take more than a decade to complete.10
In a recently completed study, Joe DiMasi, Ron Hansen, and I examined
the average R&D cost for drugs introduced into the market in the late 1990s.
Data were collected on R&D costs for a randomly selected sample of sixty-eight
investigational drugs from ten multinational firms. We found the representative
new product approval incurred out-of-pocket costs of over $400 million.11 This
includes money spent in the discovery, preclinical, and clinical phases, as well
as an allocation for the cost of failures.
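The "cost of failures" allocation is easy to make concrete: the expected out-of-pocket cost per approved drug must count the spending on every candidate that washed out along the way. The sketch below is illustrative arithmetic only; the phase costs and transition probabilities are hypothetical round numbers (chosen so that roughly 20 percent of clinical entrants reach approval, as in the text), not figures from the DiMasi, Hansen, and Grabowski study.

```python
# Illustrative sketch of the cost-of-failures allocation. Phase costs and
# transition probabilities are hypothetical assumptions; only the roughly
# 20 percent clinical-to-approval success rate matches the text.

phases = [
    # (name, out-of-pocket cost per candidate entering the phase, $M,
    #  probability of advancing to the next phase)
    ("phase I",   15.0, 0.70),
    ("phase II",  25.0, 0.45),
    ("phase III", 90.0, 0.65),   # 0.70 * 0.45 * 0.65 is about 0.20 overall
]

overall_success = 1.0
for _, _, p in phases:
    overall_success *= p

# Candidates that must enter phase I to yield one approval, then thin the
# cohort phase by phase while accumulating spending on every entrant.
entering = 1.0 / overall_success
cost_per_approval = 0.0
for name, cost, p in phases:
    cost_per_approval += entering * cost   # spend on everyone who enters
    entering *= p                          # only survivors continue

print(f"overall clinical success rate: {overall_success:.1%}")
print(f"expected clinical cost per approval: ${cost_per_approval:.0f}M")
```

With these made-up numbers, roughly $297 million of clinical spending stands behind each approval even though the winning drug itself absorbed only about $130 million; the gap is the cost of failures.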
Figure 1 shows a breakdown of total R&D costs per approved drug that
are incurred during the preclinical and clinical R&D phases. As shown in this
figure, expenditures in the clinical period account for roughly 70 percent of total
out-of-pocket expenditures. This reflects the fact that clinical trials are very
expensive on a per patient basis, many drugs must be tested for every one
approved, and drugs that do make it to the final testing phase and FDA submission typically require premarket testing on thousands of patients.
Figure 1 also shows R&D costs capitalized to the date of marketing at a
representative cost of capital for the pharmaceutical industry of 11 percent. The
average capitalized R&D cost for a new drug introduction during this period is $802 million, or nearly double the out-of-pocket expenditure.

Figure 1

Out-of-Pocket and Capitalized Costs per Approved Drug

SOURCE: Tufts Center for the Study of Drug Development.

Capital costs are
high in this situation because of the long time periods involved in pharmaceutical R&D. More than a decade typically elapses from initial drug synthesis to
final FDA approval. Since preclinical expenditures occur several years prior to
FDA approval, these costs are subject to greater compounding at the industry
cost of capital of 11 percent. Therefore, they account for a greater proportion
of total capitalized costs than of total out-of-pocket costs (42 percent versus
30 percent).
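The compounding arithmetic works as follows. The 11 percent cost of capital comes from the text; the timing assumptions (the average number of years before approval at which each phase's dollars are spent) and the 120/280 split are hypothetical round numbers chosen only to sum to a $400 million out-of-pocket total.

```python
# Sketch: capitalizing out-of-pocket R&D outlays forward to the FDA
# approval date at an 11 percent cost of capital. The timing assumptions
# below are hypothetical; the rate follows the text.

r = 0.11  # representative pharmaceutical cost of capital

outlays = [
    # (phase, out-of-pocket $M, hypothetical average years before approval)
    ("preclinical", 120.0, 11.0),
    ("clinical",    280.0,  4.0),
]

# Compound each dollar forward from the year it is spent to approval.
capitalized = {phase: amt * (1 + r) ** t for phase, amt, t in outlays}
total_oop = sum(amt for _, amt, _ in outlays)
total_cap = sum(capitalized.values())
pre_share = capitalized["preclinical"] / total_cap

print(f"out-of-pocket total: ${total_oop:.0f}M")
print(f"capitalized total:   ${total_cap:.0f}M")
print(f"preclinical share of capitalized cost: {pre_share:.0%}")
```

Because preclinical dollars compound over many more years, capitalization roughly doubles the $400 million out-of-pocket total and lifts the preclinical share well above its 30 percent out-of-pocket share, the same qualitative effect as the 42 percent versus 30 percent comparison in the text.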
R&D costs per new drug approval were observed to have increased at an
annual rate of 7.4 percent above general inflation when compared with the costs
of 1980s introductions. A major factor driving this increase is the size, complexity, and number of clinical trials, which have increased significantly in the
1990s compared with the 1980s.12 One important factor underlying this trend is
the increasing focus of the pharmaceutical industry on chronic and degenerative diseases. These conditions require larger trial sizes to establish their efficacy
and longer time periods for effects to be observed.
A number of factors could operate to alter the growth pattern for future
R&D costs. Emerging discovery technologies may have profound effects on
R&D productivity in the next decade. The mapping of the genome, and related
advances in fields like proteomics and bioinformatics, has led to an abundance
of new disease targets. Nevertheless, some industry analysts have hypothesized
that these developments may actually cause R&D costs to rise in the short run.13
The basic reason is that these new technologies require substantial up-front
investments, and to date they have generated many disease targets that are not yet
well understood. Eventually this expansion in the scientific knowledge base should
lead to substantial efficiencies in the R&D process for new pharmaceuticals.
GENERIC ENTRY AND COMPETITION
In contrast to new product introductions, the development costs of generic
compounds are relatively modest. In the United States, since the passage of the
1984 Hatch–Waxman Act, generic products need only demonstrate that they are
bioequivalent to the pioneering brand to receive market registration. Generic
firms can file an abbreviated new drug application (ANDA). The ANDA process
only takes a few years and typically costs a few million dollars.14 The probability of success is also very high, as reflected by the fact that many generic firms
file to receive FDA approval and enter the market within a short time window
around patent expiration of the pioneer brand.
John Vernon and I have completed studies of generic competition during
the 1980s and 1990s.15 A distinctive pattern of competitive behavior for generic
and brand name firms has emerged in the wake of the 1984 act. First, commercially significant products experienced a large number of generic entrants within a short time after patent expiration. This was in sharp contrast to what occurred
in the pre-1984 period. In the post-1984 period, we also observed a strong positive relation between the size of the market and the number of generic competitors, in accordance with expectations from economic theory.
Second, generics exhibited a high degree of price competition. The initial
generic product entered the market at a significant discount to the brand name
firm, and this discount grew larger as the number of generic competitors for a
particular brand name product expanded over time. For our 1984 to 1989 sample
of commercially significant products, generic prices averaged 61 percent of the
brand name product during the first month of generic competition. This declined to 37 percent by two years after entry.
Third, we observed a more rapid rate of sales erosion by the brand name
products in the case of more recent patent expirations. This is illustrated in
Figure 2. This figure shows the growth in generic market shares during the first
year on the market for four successive time cohorts. Market shares are measured in terms of pills sold for the most popular dosage size. The more recent
time cohorts in Figure 2 are characterized by much more intensive generic competition. The observed trend is particularly striking for the 1994–97 cohort of
brand name products. In particular, generic drugs captured a 64 percent share
of total units sold after one full year on the market. This increased to 73 percent after the second year. Recently, Prozac was subject to its first generic competition, in September 2001. Prozac lost over 80 percent of its U.S. sales to generics within the first month after their entry.

Figure 2

Generic Market Shares One Year After Entry
In sum, price competition and generic utilization have increased dramatically since the Hatch–Waxman Act was passed. In the mid-1980s, generic products accounted for approximately 19 percent of all prescriptions. By 1999, the
figure was 47 percent.16 The growth of managed care and other related demand-side changes also have been important factors underlying the rapid increase in
generic usage that has taken place during the last decade. However, the passage
of the 1984 act played a major role in relaxing the regulatory hurdles for generic
firms and facilitating higher levels of generic entry.
ARE THE INNOVATION AND IMITATION COSTS OF NEW BIOTECH
ENTITIES DIFFERENT?
Most of the analyses of R&D costs for new drug entities and their generic
imitators have focused on small-molecule new chemical entities. This reflects
the fact that the biotech industry is relatively young. New biologic entities were
first introduced in the 1980s. By 1994, only twenty-nine new biologic entities
had been introduced into the U.S. market, but this number has increased dramatically since then. In this regard, forty-one new biological introductions
occurred between 1995 and 2001.
The newest R&D cost study by DiMasi et al. does include seven biotech
compounds in the sample of sixty-eight entities for which data were obtained
from ten major pharmaceutical and biopharmaceutical firms.17 While this sample
of biological entities is too small to say anything definitive about the cost of
biotech drug development, the clinical phase costs in the DiMasi et al. study
were similar for the biotech and pharmaceutical projects.
As discussed above, capitalized R&D costs per new drug introductions are
influenced by a number of factors. These include out-of-pocket costs at the preclinical and clinical phase, the probability of success for new drug candidates at
different stages of the R&D process, and the length of time it takes to move
through all the stages of the R&D process and gain FDA approval. Recent studies of the probability of success and length of the R&D process for biotech drugs
indicate a convergence in these parameters toward the values observed for
small-molecule pharmaceuticals.
Two initial studies of success rates for biotech drugs were performed by
Bienz-Tadmor et al. and Struck.18 Both studies found that success rates for
biotech drugs were substantially higher than success rates for new chemical
entities. In particular, both studies projected success rates for biopharmaceuticals in excess of 50 percent. However, a basic assumption implicit in the
methodology of both studies is that success rates for biotech drugs that entered
development in the late 1980s and early 1990s are the same as for the biotech drugs that entered development in the early to mid-1980s. This was a very
strong, and potentially hazardous, assumption, given that 90 percent of the
drugs in their samples were still under active testing.
Subsequently, Gosse et al. analyzed a comprehensive sample of U.S. biopharmaceutical drugs and compared the success rates of older and newer
biotech entities.19 They found dramatic differences in the time pattern of success
rates observed for early versus later biotech drug cohorts. In particular, for the
investigational new drugs (INDs) filed in the early 1980s, the success rate for
new recombinant entities is 38 percent. For the INDs filed during the late 1980s,
the success rate was only 10 percent, based on approvals to date (i.e., six years
after testing). At a comparable point in time, the new recombinant entities of
the early 1980s had a success rate of 26 percent. In fact, the success curve of
the recent recombinant entities more closely resembles that of new chemical
entities than that of the early biological entities.
This result is consistent with the history of biotech research in the United
States. The first biological entities introduced into the market were naturally
occurring proteins that replaced purified nonrecombinant formulations already
in general use as established therapies (e.g., insulin and human growth hormone). It is reasonable to expect that recombinant versions of established therapies would have high success rates, once the technology to manufacture these
products was proven. Other earlier targets for biotechnology were naturally
occurring proteins with well-known and defined physiologic activity (e.g., erythropoietin and filgrastim). As the biotech drugs moved to targets for which
limited knowledge existed about clinical and pharmacological profiles, it is reasonable to expect that success rates would fall back toward those of conventional drug entities. This is consistent with the findings of the recent Gosse et
al. study. The prospect of a long and uncertain discovery and development
period for a new drug is another factor affecting costs and risks in the drug R&D
process. The longer the development and approval process, the higher the interest and opportunity costs and the overall capitalized R&D costs of a new drug
introduction. Recently, Janice Reichert of the Tufts University Center for the
Study of Drug Development has done a historical analysis of clinical development time for successive cohorts of new biopharmaceuticals.20 The results are
presented in Figure 3. This figure shows that the earliest biopharmaceuticals had
much shorter total clinical development times than more recent introductions.
In particular, the cohort of 2000–01 new biopharmaceutical introductions had a total clinical development time (including FDA approval) of eighty-six months, versus 53.2 months for 1982–89 biopharmaceutical introductions.
Hence the experience with respect to development times parallels the
experience observed with respect to success rates. In particular, there has been
a convergence in clinical trial period times observed for new biological and new
chemical entries.

Figure 3

Historical Comparison: Biopharms

Of course, the biotech industry is still in the early stages of evolution. It may eventually produce higher success rates and shorter development times as a result of new technologies currently emerging in the discovery
period. However, the best evidence at the current time is that biopharmaceuticals, like new chemical entities, are subject to very high rates of attrition and
long gestation periods in the clinical development stage.
One aspect in which biopharmaceuticals may be different from small-molecule new chemical entities concerns the ease of generic entry when patents
expire. To date, there have only been a few patent expirations involving biopharmaceuticals. One case in which there has been entry after patent expiration
is human growth hormone. However, all the entry to date has been by other
big pharma firms that have had experience supplying this product in Europe
and Japan (Pharmacia, Novo Nordisk, and Ares-Serono). There are greater hurdles
in manufacturing biopharmaceuticals at an efficient scale compared with new
chemical entities, and in addition there are greater regulatory requirements
for biologicals associated with the manufacturing process.21 These factors may moderate the degree of imitative competition for biopharmaceuticals compared
with small-molecule chemical entities. Whether or not this is the case will
become more apparent when some of the commercially important biopharmaceuticals are subject to patent expiration and potential competitive entry during
the current decade.
RETURNS ON R&D FOR NEW DRUG INTRODUCTIONS
John Vernon and I have examined the distribution of returns for new drug
introductions.22 This work builds directly on the R&D cost analysis of DiMasi et
al. and considers the sales and net revenues realized over the product life of new
drug introductions during the 1970s, 1980s, and 1990s. A finding of this work is
that the distribution of returns to new drug introductions is highly variable. This
is another source of risks for firms developing new drug introductions.
Figure 4 shows the distribution of the present value of net revenues (revenues net of production and distribution costs but gross of R&D investment outlays) for 1990 to 1994 new drug introductions. The distribution shows very
strong skewness. Roughly one half of the overall present value from this sample of 118 compounds is accounted for by the top-ranked decile of new drug
introductions.

Figure 4

Present Values by Decile: 1990–94 NCEs

Figure 5

Sales Profiles

The top decile of new drug introductions has an estimated after-tax present value that is more than five times the present value of average after-tax R&D costs per approved introduction. Furthermore, only the top three
deciles have present values that exceed average R&D costs.
A major factor underlying the skewed distribution observed in Figure 4 is
the level of sales realized by new drug introductions. Figure 5 shows sales profiles for the top two deciles and also for the mean and median drug introduction for the 1990 to 1994 period. This figure illustrates the highly skewed nature
of the sales distribution for new drug introductions. The sales peak of the top
decile drugs is several times greater than the sales peak of the next decile. In
addition, the mean sales curve is much higher than the median one. This latter
result is also reflective of a highly skewed distribution. John Vernon and I have
investigated other periods and time cohorts of new introductions and found that
they are characterized by similar patterns.23
Our returns to R&D analyses confirm the fact that the search for blockbuster drugs is what drives the R&D process in pharmaceuticals. The median
new drug does not cover the R&D costs of the average compound (including
allocations for the cost of discovery and the candidates that fall by the wayside).
A few top-selling drugs are really key in terms of achieving economic success
in pharmaceutical R&D over the long run. This result implies that larger firms,
which have the resources to develop a diversified portfolio of drugs simultaneously, will have lower overall risk of failure (e.g., bankruptcy) than small firms.
The large fixed costs of pharmaceutical development and the skewed distribution of outcomes help to explain the clustering of biotech firms at the research
stage of the R&D process and the large number of alliances between biotech
and big pharma firms at the development and marketing stages.
In Figure 6, the distribution of worldwide sales in 2000 is presented for
thirty new biological entities introduced into the U.S. market between 1982 and
1994. This includes new biological entities at different stages of their life cycle.
However, all these compounds have been in the market at least seven years,
and therefore they have progressed beyond the initial rapid growth phase of
their life cycle. The sales data presented in Figure 6 indicate that new biopharmaceuticals also exhibit a high degree of skewness, similar to the much larger
cohort of new drug introductions.
The high degree of skewness in the outcomes of pharmaceutical R&D
projects indicates that there are substantial risks in this endeavor, both for big
pharma firms as well as smaller biotech enterprises. Even though many big
pharma firms spend billions of dollars per year on a diversified portfolio of in-house and outsourced projects, this does not guarantee a stable set of outcomes.
In particular, the law of large numbers does not work very well in the case of
skewed distributions.
If a firm invests in a large diversified portfolio of projects that are normally
distributed, we expect that returns can be predicted with some confidence.
When returns are highly skewed, however, individual companies experience
highly volatile outcomes even when they invest in large numbers of independent projects.

Figure 6

New Biotech Introductions, 1982–94
Worldwide Sales in 2000

To illustrate this point, John Vernon and I examined the new product sales for the U.S. drug companies that spent between $300 million and $500
million on their global R&D in the mid-1980s (the top-tier group in that period).
We found subsequent new product sales emanating from these R&D efforts
varied between $100 million and $3 billion (after seven years of market life).24
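A small simulation illustrates why diversification stabilizes a portfolio of normally distributed payoffs but not a portfolio of highly skewed ones. The distribution parameters below are illustrative assumptions, not estimates from any of the studies discussed here; both portfolios have the same mean payoff per project.

```python
# Sketch: the law of large numbers works poorly for skewed payoffs.
# Two 30-project portfolios with the same mean payoff per project, one
# symmetric (normal) and one with a heavy right tail (lognormal).
# All distribution parameters are illustrative assumptions.
import math
import random
import statistics

random.seed(7)

N_PROJECTS, N_TRIALS = 30, 2000
MEAN_PAYOFF = 100.0  # mean payoff per project, $M (illustrative)

def portfolio_totals(draw_one):
    """Total payoff of an N_PROJECTS portfolio, simulated N_TRIALS times."""
    return [sum(draw_one() for _ in range(N_PROJECTS)) for _ in range(N_TRIALS)]

# Symmetric payoffs: normal with standard deviation 50 (illustrative).
normal_totals = portfolio_totals(lambda: random.gauss(MEAN_PAYOFF, 50.0))

# Skewed payoffs: lognormal with the SAME mean but a heavy right tail.
sigma = 1.5
mu = math.log(MEAN_PAYOFF) - sigma ** 2 / 2  # keeps the mean at 100
skewed_totals = portfolio_totals(lambda: random.lognormvariate(mu, sigma))

def cv(xs):
    """Coefficient of variation: volatility of the totals relative to the mean."""
    return statistics.pstdev(xs) / statistics.fmean(xs)

print(f"normal-payoff portfolio CV: {cv(normal_totals):.2f}")
print(f"skewed-payoff portfolio CV: {cv(skewed_totals):.2f}")
```

With symmetric payoffs the thirty-project total is quite stable; with the skewed payoffs the portfolio total remains several times more variable, because a single tail draw can dominate the sum.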
Finally, it is important to note that the distribution of outcomes from pharmaceutical R&D projects has similar characteristics to many other innovation
samples, including venture capital funding of high-tech start-ups. In this regard,
Scherer et al. have examined the size distribution of profits from investments in
innovation projects using a diverse set of data samples.25 Their analysis included
two large samples of high technology venture capital investments, as well as a
comprehensive sample of venture-backed start-up firms that had their initial
public offering in the mid-1980s. A common finding was that the size distribution of profit returns from technological innovation is strongly skewed to the
right. As in the case of new drug introductions, the most profitable cases contribute a disproportionate fraction of the total profits from innovation.
Table 1 summarizes the results from three data sets employed in Scherer’s
analysis. The first two data sets, assembled by Venture Capital Inc. and Horsley
Keough Associates, involve an analysis of several hundred venture capital firm
investments in high-tech start-up companies. Scherer’s analysis indicates that
roughly 60 percent of the returns, measured at the time of the final distributions
to investors, are realized by the top decile of venture capital projects. At the same
time, roughly half of the projects in these samples failed to earn positive returns.
Similarly, an analysis of the stock market performance of the universe of high-tech companies that went public in the mid-1980s found that the top decile of
companies realized 62 percent of the sample’s total market value ten years later.

Table 1

Returns Distribution for Selective Innovation Samples


The corresponding value for our sample of 1990–94 new drug introductions is
52 percent. Hence these samples of risky, high-tech start-up companies exhibit
skewed distributions of returns similar to those observed in the pharmaceutical industry.
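The top-decile share statistic used in these comparisons is straightforward to compute. In the sketch below, the sample payoffs are made up for illustration; none of the numbers come from the Scherer samples or the drug-introduction cohorts.

```python
# Sketch of the concentration statistic used above: the fraction of a
# sample's total contributed by its largest 10 percent of observations.

def top_decile_share(values):
    """Fraction of the sample total contributed by the largest 10 percent.
    Assumes positive values."""
    ordered = sorted(values, reverse=True)
    k = max(1, len(ordered) // 10)  # at least one observation in the top decile
    return sum(ordered[:k]) / sum(ordered)

# Hypothetical project payoffs in $M -- illustrative only.
payoffs = [500, 120, 80, 60, 40, 30, 20, 10, 5, 1]
print(f"top-decile share: {top_decile_share(payoffs):.0%}")
```

Here the single largest payoff accounts for well over half of the sample total, the same qualitative pattern as the 52 to 62 percent top-decile shares reported above.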
CONCLUSIONS AND POLICY CONSIDERATIONS
Economic analyses of the R&D process in pharmaceuticals indicate that it
is a very costly and risky process, even for large established firms. Most compounds in the R&D pipeline never reach the marketplace. The process takes a
long time, and the distribution of profits among those that are marketed is
highly skewed. A few blockbuster successes cover the losses on many other
R&D investment projects.
Overall, then, a key implication of my work with John Vernon and Joe
DiMasi is that the returns of research-intensive pharmaceutical firms are positive
but are highly dependent on a relatively few highly successful new products.
One important implication for public policy is that reimbursement, regulatory,
or patent policies that target the returns to the largest-selling pharmaceuticals
can have significant adverse consequences for R&D incentives in this industry.26
Many of the compounds in the top decile of the returns distribution involve
the first mover, or other early entrants, in a new therapeutic class. The family of
medicines in a given therapeutic class passes through a well-delineated life
cycle. There is dynamic competition involving breakthrough, as well as incremental, advances among the branded products within that class. This dynamic
competition, in turn, produces substantial consumer surplus and social returns,
as discussed above. When the patents for established products expire, consumers also benefit from imitative competition from generic entrants, which
provide social benefits in terms of significantly lower prices.
The patent system is the public policy instrument designed to balance the
trade-offs inherent between these dynamic and generic forms of competition.
Without a well-structured system of global patent protection, neither the
research pharmaceutical industry nor the generic industry would be able to
grow and prosper, as the rate of new product introductions and patent expirations would decline significantly.
Effective patent life (EPL), defined as the patent time remaining at a product’s market
launch date, is an important variable influencing R&D incentives in this industry
because it takes many years to recoup the R&D costs and earn a positive return
for a typical new drug introduction. Because firms apply for patents at the
beginning of the clinical development process, significant patent time is lost by
pharmaceutical products by the time of FDA approval. This implies a significant
reduction in the effective patent life of drugs relative to the nominal life of
twenty years.27 In light of this, the United States, the European Community, and
Japan have all enacted patent term restoration laws.
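The EPL arithmetic can be sketched as follows. The twenty-year nominal term is as stated in the text, and the five-year cap on restored time anticipates the Hatch–Waxman provisions discussed below; the example dates are hypothetical.

```python
# Sketch of effective patent life (EPL): the patent time remaining at a
# product's launch, plus any restored time (capped at five years under
# Hatch-Waxman). The nominal term and cap follow the text; the example
# figures are hypothetical.

NOMINAL_TERM = 20.0    # years of patent life from filing
MAX_RESTORATION = 5.0  # Hatch-Waxman cap on restored time, years

def effective_patent_life(years_filing_to_approval, restoration=0.0):
    """EPL in years for a drug approved the given number of years after
    its patent filing, with a requested restoration period."""
    granted = min(restoration, MAX_RESTORATION)
    return NOMINAL_TERM - years_filing_to_approval + granted

# Hypothetical drug: patent filed at the start of clinical development,
# approved eleven years later, with the 2.33-year average extension
# reported in the text.
print(f"EPL: {effective_patent_life(11.0, 2.33):.2f} years")
```

With these hypothetical dates the EPL is about 11.3 years, close to the 11.7-year mean reported for the 1990–95 introductions.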


The U.S. law in this regard, the Hatch–Waxman Act, has been in existence
since 1984. This law provides for patent term restoration of time lost during the
clinical development and regulatory approval periods, up to a maximum of five
years additional patent life.28 This is also the law that facilitates generic entry by
allowing generic firms to file abbreviated new drug applications, in which
generic firms only have to demonstrate bioequivalence to the pioneer’s products to obtain FDA approval. Prior to the passage of the act, generic firms had
to submit their own proof of a compound’s safety and efficacy, as well as show
bioequivalence.29
John Vernon and I have investigated the effects of the 1984 act on both
generic competition and effective patent lifetimes.30 In this paper, I have summarized our analysis of the significant increases in generic competition that have
taken place since the act’s passage. We have also examined the impact of the
law on effective patent lifetimes. Figure 7 shows the trends in EPLs by approval
year for the new drugs introduced in the first half of the 1990s. This figure indicates that the average EPLs in the 1990s center around an eleven- to twelve-year
range.31 The mean for all 126 new drug introductions in the 1990–95 period is
11.7 years, with an average Hatch–Waxman extension of 2.33 years. In the last
two years of this period, when virtually all of the drugs involve compounds that
entered clinical testing after 1984, the average extension is close to three years
in length. The mode of the frequency distribution of EPLs for this sample of
annual new drug introductions is in the interval of twelve to fourteen years.
Figure 7

Effective Patent Life for 1991–95 NCEs


We also found that relatively few NCEs are marketed with effective patent
lifetimes of less than ten years. The effective patent life on the top decile of
compounds is particularly critical, given the highly skewed nature of the outcome distribution and the vital role that the top compounds play in sustaining
the viability of the entire R&D enterprise. We found that effective patent life for
these compounds tends to be a few years above the mean for the full sample
as a whole. This suggests that firms are able to accelerate the development of
commercially promising compounds by doing R&D in parallel and by undertaking other cost-increasing activities to marginally speed up the development
process.
The Congressional Budget Office (CBO) has also done an analysis of
the economic effects of the act.32 As in our analysis, they found that generic
competition has been a powerful force for price competition since 1984. The
CBO estimated annual savings of $8 billion to $10 billion to consumers by
the mid-1990s. In terms of R&D incentives, however, they found that the 1984
act has had negative consequences on the expected returns on R&D. In this
regard, they estimated that the act, together with the increased demand-side
incentives promulgated by managed-care organizations to utilize generic products in the 1990s, has resulted in steadily accelerating erosion of pioneer brands’
sales over time.
The CBO found that from the perspective of R&D returns, the much more
rapid loss of sales in the period after patent expiration has dominated the patent
term restoration aspects of the law. In particular, the CBO estimated a 12 percent lower expected value for the after-tax profits from R&D for the mean new
drug compound as a consequence of the 1984 act. While the mean compound
is still profitable in this analysis, the increased generic competition since 1984
can have adverse R&D incentives for compounds of above average riskiness or
ones with shorter than average effective patent life.
Overall, the Hatch–Waxman Act has provided a relatively balanced
approach to the trade-offs between pharmaceutical R&D and generic competition. Improvements on the margin could be considered by policymakers, such
as a longer minimum exclusivity period before an ANDA could be filed for new
drug introductions (currently five years in the United States but longer in Europe
and Japan). Nevertheless, the law has provided a reasonably well-structured system of incentives for both innovative and generic firms. Both R&D activities and
generic utilization have increased dramatically in the period since the passage
of the 1984 act. Some groups have suggested that Congress consider changing
the patent restoration aspects of the law in order to further increase generic
competition in pharmaceuticals.33 Given the critical role that patents and effective patent life play in sustaining R&D incentives in this industry, this would not appear to be a desirable course of action on social welfare grounds.

102

Henry G. Grabowski

NOTES
1. Zvi Griliches (1992), "The Search for R&D Spillovers," Scandinavian Journal of Economics 94 (Supplement): 29–47.

2. Ibid., Table 1.

3. F. M. Scherer (1980), Industrial Market Structure and Economic Performance (Chicago: Rand McNally), 407.

4. Frank Lichtenberg, paper on social returns to pharmaceutical R&D, presented at April 19, 2002, Federal Reserve Bank of Dallas conference.

5. See, for example, David M. Cutler and Mark McClellan (2001), "Is Technological Change in Medicine Worth It?" Health Affairs 20 (September/October): 11–29; Jack E. Triplett, ed. (1999), Measuring the Price of Medical Treatments (Washington, D.C.: Brookings Institution).

6. Adrian Towse, ed. (1995), Industrial Policy and the Pharmaceutical Industry (London: Office of Health Economics).

7. Richard C. Levin et al. (1987), "Appropriating the Returns from Industrial Research and Development," Brookings Papers on Economic Activity: 783–820; Wesley Cohen et al. (1997), "Appropriability Conditions and Why Firms Patent and Why They Do Not in the American Manufacturing Sector," Carnegie Mellon University Working Paper (Pittsburgh).

8. C. T. Taylor and Z. A. Silberston (1973), The Economic Impact of the Patent System (Cambridge: Cambridge University Press). In a follow-on study, Silberston sorted industries into three groups for which patents are essential, very important, or less important, based on both survey responses and objective measures (patent and R&D intensity). He concluded that "the first category consists of one industry only, pharmaceuticals." Z. A. Silberston (1987), "The Economic Importance of Patents" (London: Common Law Institute of Intellectual Property). Edwin Mansfield surveyed the R&D directors of 100 U.S. corporations on what fraction of the inventions they introduced between 1981 and 1983 would not have been developed without patent protection. For pharmaceuticals, the value was 60 percent, while the average across all industries was 14 percent. Edwin Mansfield (1986), "Patents and Innovation: An Empirical Study," Management Science 32: 175.

9. Joseph A. DiMasi (1995), "Success Rates for New Drugs Entering Clinical Testing in the United States," Clinical Pharmacology and Therapeutics 58: 1–14.

10. Joseph A. DiMasi (1995), "Trends in Drug Development Costs, Times and Risks," Drug Information Journal 29: 375–84; Kenneth I. Kaitin and Joseph A. DiMasi (2000), "Measuring the Pace of New Drug Development in the User Fee Era," Drug Information Journal 34: 673–80.

11. Joseph A. DiMasi, Ronald W. Hansen, and Henry G. Grabowski (2003), "The Price of Innovation: New Estimates of Drug Development Costs," Journal of Health Economics 22(2): 151–85. For an earlier study using the same methodology for 1980s new drug introductions, see Joseph A. DiMasi (1991), "Cost of Innovation in the Pharmaceutical Industry," Journal of Health Economics 10(2): 107–42.

12. Ibid.

13. Lehman Brothers (2001), "The Fruits of Genomics: Drug Pipelines Face Indigestion Until the New Biology Ripens" (New York, January).

Patents and New Product Development

103

14. U.S. Congressional Budget Office (1998), "How Increased Competition from Generic Drugs Has Affected Prices and Returns in the Pharmaceutical Industry" (Washington, D.C.: U.S. Government Printing Office); U.S. Department of Health and Human Services, Theodore Goldberg et al. (1986), "Generic Drug Laws: A Decade of Trial: A Prescription for Progress" (Washington, D.C.: NCHSR).

15. Henry Grabowski and John Vernon (2000), "Effective Patent Life in Pharmaceuticals," International Journal of Technology Management 19: 98–100. This paper summarizes and extends our analyses of generic competition published in the Journal of Law and Economics, October 1992, and PharmacoEconomics, vol. 10, supplement 2, 1996.

16. PhRMA (2000), Pharmaceutical Industry Profile 2000: Research for the Millennium (Washington, D.C.), 69.

17. Joseph A. DiMasi et al., "The Price of Innovation," op. cit., footnote 11.

18. Brigitta Bienz-Tadmor, Patricia A. D. Cerbo, Gilead Tadmor, and Louis Lasagna (1992), "Biopharmaceuticals and Conventional Drugs Clinical Success Rates," BioTechnology 10 (May): 521–25; M. M. Struck (1994), "Biopharmaceutical R&D Success Rates and Development Times," BioTechnology 12 (July): 674–77.

19. Marilyn E. Gosse, Michael Manocchia, and Toben F. Nelson (1996), "Overview of U.S. Pharmaceutical Development, 1980–1994," Tufts University Center for the Study of Drug Development, May.

20. The data in Figure 3 were provided by Janice Reichert of the Tufts University Center, April 2002.

21. Henry Grabowski and John Vernon (1994), The Search for New Vaccines: The Effects of the Vaccines for Children Program (Washington, D.C.: American Enterprise Institute), 13–35.

22. Henry Grabowski, John Vernon, and Joseph DiMasi (forthcoming), "Returns on R&D for New Drug Introductions in the 1990s," PharmacoEconomics. For earlier studies of new drug introductions in the 1970s and 1980s, see (1994), "Returns to R&D on New Drug Introductions in the 1980s," Journal of Health Economics 13: 383–406; (1990), "A New Look at the Returns and Risks to Pharmaceutical R&D," Management Science 36: 804–21.

23. Ibid.; see in particular Figure 8.

24. Henry Grabowski and John Vernon (2000), "The Distribution of Sales from Pharmaceutical Innovation," PharmacoEconomics 18 (Supplement 1): 21–32.

25. F. M. Scherer, D. Harhoff, and J. Kukies (2000), "Uncertainty and the Size Distribution of Rewards from Innovation," Journal of Evolutionary Economics 10: 175–200.

26. Henry Grabowski and John Vernon (1996), "Prospects for Returns to Pharmaceutical R&D Under Health Care Reform," in Competitive Strategies in the Pharmaceutical Industry, ed. Robert Helms (Washington, D.C.: AEI Press).

27. For data on effective patent time, see the 1998 CBO study cited in footnote 14, as well as my work with John Vernon cited in footnote 15.

28. Title II of the Hatch–Waxman Act provided for partial restoration of the patent time lost during the clinical testing and regulatory approval periods. A formula for patent term restoration was embedded in the law. In particular, new drugs were eligible for an extension in patent life equal to the sum of the NDA regulatory review time plus one-half of the IND clinical testing time. The law capped extensions at five years and also constrained extensions to a maximum effective patent lifetime of fourteen years. Drugs in the pipeline at the time the act was passed, in September 1984, were limited to a maximum extension of two years.
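The restoration arithmetic described in note 28 (NDA review time plus one-half of IND testing time, capped at five years and by a fourteen-year ceiling on effective patent life) can be sketched in a few lines of Python. This is an illustrative sketch of the formula only, not a legal calculator; the function name and parameters are hypothetical.

```python
def hatch_waxman_extension(ind_years, nda_years, effective_life_at_approval,
                           pipeline_drug=False):
    """Sketch of the Title II patent-term restoration formula: extension
    equals NDA regulatory review time plus one-half of IND clinical testing
    time, capped at five years (two years for drugs already in the pipeline
    in September 1984), with effective patent life limited to fourteen
    years. `effective_life_at_approval` is the patent life remaining when
    the drug is approved."""
    extension = nda_years + 0.5 * ind_years
    extension = min(extension, 2.0 if pipeline_drug else 5.0)
    # The fourteen-year ceiling on effective patent life limits how much
    # of the computed extension can actually be used.
    extension = min(extension, max(0.0, 14.0 - effective_life_at_approval))
    return extension

# Example: 6 years of IND testing, 2 years of NDA review, and 10 years of
# patent life remaining at approval. The formula gives 5 years, but the
# fourteen-year effective-life cap trims the extension to 4 years.
print(hatch_waxman_extension(6, 2, 10))  # prints 4.0
```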
29. For new drug products with little or no effective patent life, generic firms are prohibited from filing an abbreviated new drug application within the first five years of the product life. Most European countries prohibit such filing within the first ten years of market life.

30. Grabowski and Vernon, "Effective Patent Life in Pharmaceuticals," cited in footnote 15.

31. This includes any benefits from the international GATT agreement, passed by Congress in 1994, which harmonized U.S. patent laws with foreign countries', including setting the nominal patent life to twenty years from the date of patent application rather than seventeen years from the date of patent grant. It does not include any potential benefits of a six-month extension granted under the FDA Modernization Act in 1997, which can be awarded if the firm does additional testing and gains FDA approval for a pediatric indication.

32. See the CBO study, "How Increased Competition from Generic Drugs Has Affected Prices and Returns in the Pharmaceutical Industry," cited in footnote 14.

33. See, for example, National Institute for Health Care Management Foundation (2000), "Prescription Drugs and Intellectual Property Protection," NIHCM Foundation Issue Brief, Washington, D.C., August.

Reaching Through the Genome
Rebecca S. Eisenberg

The past two decades have been a period of rapid evolution in the science
of biotechnology and therefore in patent strategies, if not in patent law
itself. Patent law takes a long time to catch up with science, and commentators take a long time to catch up with the law, but patent lawyers don’t
have that luxury. They have to keep ahead of the game, figuring out claiming
strategies that allow their clients to capture the value of future discoveries.
I want to discuss some of these strategies today.
The patenting of DNA sequences is hardly a new thing, but rather an
established practice that goes back at least two decades. It began with little fanfare and little controversy, in contrast to other first encounters of the patent
system with new categories of invention in biotechnology and other fields. Considerably more public controversy accompanied the allowance of patents on
microorganisms, animals, computer software, and business methods. The issuance of patents in each of these areas provoked immediate opposition, along
with critical media commentary and congressional hearings.
In recent years we’ve seen similar attention focused on the practice of
patenting genes, but nothing like that happened when people first started
patenting genes in the early 1980s. At the time, public outcry over biotechnology
patents was focused on living organisms rather than genes. We didn’t see any
significant controversy over patenting DNA sequences until the advent of high-throughput DNA sequencing in the early 1990s, when genomics started to look
more like information technology than like chemistry. By this point, patenting
genes was such a well-established practice that questions about whether DNA
should be patentable seemed quaint and out of touch.
Even in the early 1980s, when the courts were still wary of protecting
information technology,1 they viewed DNA as a molecule, a chemical, a composition of matter, rather than as information.2 Perhaps if the Patent and Trademark Office (PTO) and courts had coded DNA as a storage medium for information, a metaphor that is more common in popular understandings today, the
outcome would have been different. Instead, paradoxically, for a while it was
far easier to patent nature’s information technology than it was to patent
human-made, electronic information technology.
Why was the patenting of genes so uncontroversial in the early days, and
why has it become so controversial since then? In the early days, patenting
genes looked like patenting drugs. Now it looks more like patenting scientific
information. We have a clear story about why we should issue patents on drugs.
It is less clear whether we want to issue patents on scientific information.
In fact, it was the scientific community, and not the usual antibiotech
suspects, that first provoked public controversy over the patenting of DNA
sequences. The focus of the controversy was the filing of patent applications
by the National Institutes of Health in the early ’90s on the first random gene
fragments (expressed sequence tags, or ESTs) coming out of the laboratory
of Dr. Craig Venter while he was at NIH.3 But until the era of high-throughput
DNA sequencing, the scientific community did not complain about patenting
DNA.
The first generation of DNA sequence patents was directed toward genes
encoding proteins of interest. They typically claimed:
1. An isolated and purified DNA sequence.
2. A recombinant vector that includes the DNA sequence.
3. A transformed host cell that includes the vector.
These claims all covered tangible materials used to make pharmaceutical
products. The effect was similar to a patent on a drug, although the gene patent
was directed to the recombinant materials used in production of the protein
rather than to the protein product itself. The PTO and the courts treated these
patents the same way they treated patents on new chemical compounds. The
analogy may never have been perfect, but it worked, in the sense that it provided commercially effective patent protection that motivated investment in the
development of new products.
This was important because in the biopharmaceutical industry the patent
system does real work. In some industries, firms report that patents aren’t really
very important to their investment decisions, that other things matter more in
determining the profitability of innovation, such as being first to market, that
patents are just trading currency to get other patent holders to leave you alone.
That is not what one hears in the pharmaceutical industry. Empirical evidence indicates that this is a field where patents really matter.4 Why? The standard account from the pharmaceutical industry is that new drugs cost a fortune
to develop, and there are many costly failures for each successful product. If
generic firms could compete and drive down prices on the successful products


without incurring all the development costs on the full range of successful and
unsuccessful candidates, they would drive the innovators out of business.
Early biotechnology firms saw themselves as “high-tech” pharmaceutical
firms developing therapeutic protein products rather than small molecule drugs.
They, too, wanted patents that would prevent free riders from destroying their
profits. Patents on genes promised to provide that protection and allowed these
new firms to raise capital and sometimes to get pharmaceutical firms to collaborate with them.
But in recent years the biotechnology and genomics industries have
become much more diverse in their research and business strategies. As the
Human Genome Project has generated vast quantities of DNA sequence information, with biological significance yet to be determined, many firms have
emerged in a market niche that requires appropriating the value of information
resources for use in future research and product development. Research that
builds upon today’s bioinformatics platforms can contribute to the development
of products that are several steps removed from the genomic information base
that helped researchers on the path to discovery. It’s not obvious how to use
patents to capture the value that upstream research platform technologies contribute to these discoveries. Firms are seeking strategies for reaching into the
revenues from end product sales, especially drug sales. The introduction of new
pharmaceutical products is typically the point at which bioscience starts to yield
real money.
Needless to say, the pharmaceutical industry is viewing these strategies
with concern. The industry has long relied on patents on drugs to make drug
development profitable, but patents on drugs are not the only patents that
accompany new drugs on the road to market these days. Patents on the prior
“upstream” inventions that explain disease pathways and mechanisms and identify potential drug targets impose costs on drug development. They are like so
many siphons at the feeding trough of new drugs, draining away profits in a lot
of different directions.
From a strategic perspective, the issue for upstream firms is how to use
intellectual property rights in advances that facilitate future research to capture
a share of the commercial value of the future discoveries that they facilitate, and
the issue for downstream drug developers is how to resist these strategies. From
a public policy perspective, we can recast the issue as how to allocate intellectual property claims along the complex course of cumulative innovation in biomedical research.
When researchers identify the disease relevance of a gene or set of genes,
perhaps identifying a new drug target, can or should they be able to get patent
claims that dominate future products that bind that target?
Various strategies are available for achieving that goal. Each of these strategies depends for its viability upon legal rules that might be interpreted or fine-tuned to promote, permit, inhibit, or forbid these strategies, depending on how
we feel about them as a normative matter.
One approach is called “reach-through licensing.” This is primarily a contract strategy, although often the contract involves a license to use a patented
research platform technology or material. The basic idea is that the patent holder restricts access to a patented research-enabling technology to users that
agree, as a term in the license, to share a piece of the action in future products.
Sometimes the piece of the action takes the form of a royalty on future product
sales, and sometimes it takes the form of a license to use future inventions made
in the course of the research. Many institutions resist these strategies, but some
agree to them.5
Pharmaceutical firms will go to great lengths to avoid incurring reach-through royalty obligations, such as inventing around a patent or even going
offshore to conduct drug screens. They will not sign a reach-through license
agreement unless they absolutely have to. Universities are rarely targeted for
reach-through royalties because they are unlikely to develop and sell products
on which royalties might be collected. But firms often seek grant-backs of
licenses to future inventions made in the course of university-based research as
a license term when they provide research tools to universities. For their part,
universities resist agreeing to grant-backs whenever possible. They view these
provisions as compromising their stewardship over future discoveries and
believe that firms should provide them with free access to research tools so that
they can advance the frontiers of knowledge.
Both pharmaceutical firms and universities believe that reach-through
rights overvalue the past contribution of tool providers relative to the work that
remains to be done by themselves as tool users in order to advance the course
of cumulative innovation. But for some institutions, particularly biotechnology
firms, reach-through license terms make sense. Not only do they ask others to
agree to pay them reach-through royalties, but they sometimes agree to pay
them themselves.
The diverse institutions that make up the biopharmaceutical research community do not easily agree on reach-through rights. They incur substantial transaction costs in haggling about them, and if the haggling takes place
far enough upstream, when the profitable end point of the research looks speculative and far away, they might conclude it’s just not worth the costs of getting
to yes. This risk enhances the attractiveness of reach-through strategies that
don’t require ex ante agreements. Two such strategies have been getting attention: reach-through remedies and reach-through claiming.
A reach-through remedy is a damage award for infringement that is measured as a reach-through royalty on sales of products developed through unlicensed use of a research tool. Janice Mueller has recently proposed such a remedy
as a modified "research exemption" from infringement liability.6 Under this proposal, researchers who use a patented tool to develop a commercial product
don’t need to get permission in advance, provided they give notice, but if their
research yields a product, they will be liable for reach-through royalties on that
product. If reach-through royalties become common in license agreements for
research tools, then they would arguably be an appropriate damage remedy
under current law, on the theory that they approximate the value to which a
willing licensor and licensee would have agreed. But in the present environment, with many would-be licensees putting up strong resistance to reach-through royalties, such a remedy seems to substitute a court's evaluation of fair
license terms for that of the market.
Another strategy is called “reach-through claiming,” which means issuing
patents that are broad enough to cover future discoveries enabled by prior
inventions. This strategy depends less on contract and more on patents. If the
claims of a patent cover future products, the owner does not need to get the
user to agree in advance to pay royalties on future product sales but can wait
until the user has a product ready to bring to market before sitting down to bargain. Users that avoid patent owners at the research stage will still have to deal
with them later, perhaps from a weaker bargaining position.
Patent claims that reach beyond the technological accomplishments of the
patent holder are by no means unprecedented. It is common for pioneering
inventions that open up new fields (in which there is little prior art) to receive
broad patents that dominate future advances, including products that require
significant further R&D. An example of an early advance in the biotechnology
field that received broad reach-through claims is the Cohen-Boyer gene-splicing
technique patented by Stanford University. The patent claims covered not only
the enabling technology, subsequently put to use in many different academic
and industrial laboratories across a broad range of R&D problems, but also any
recombinant organisms created through use of the technology. The claims to
recombinant organisms reached through the disclosed technology to cover later-developed starting materials used in recombinant production of proteins, giving
the patent owner a dominant claim over a whole generation of biotechnology
products.
But the history of patent law also includes many examples where the
courts have held that a broad claim on a pioneering invention simply proves too
much, including claims from such pioneering inventors as Morse and Edison.
Today the courts and the PTO seem to be viewing reach-through claims
in genomics with some skepticism, but that doesn’t stop inventors from continuing to pursue such claims, and some of them may be succeeding.
A stylized example illustrates how these reach-through claims work. Suppose
a firm has identified a novel gene encoding a receptor, and based on similarities to previously characterized genes, it appears to be a new member of a
known receptor family. Suppose further that based on what is known about


other members of this family of receptors, the inventor plausibly speculates that
this new receptor might be a drug target. Let’s suppose the applicant wants to
patent:
1. The receptor itself, as an isolated and purified composition of matter.
2. A method of identifying a ligand that binds the receptor through screening procedures described in the specification.
3. Ligands identified by the screening method.
Is the applicant entitled to any of those claims? The applicant’s best hope
is for the first two claims — the claims to the isolated and purified receptor and
the drug-screening method. The biggest obstacle to obtaining these claims is the
requirement of utility (or industrial applicability, as it is known outside the
United States). In order to get a patent, the inventor must have a useful invention and must disclose how to use it. All the claims will fail unless the application discloses a specific and substantial use for the receptor protein.7
Disclosure of a specific and substantial utility will be enough to permit the
inventor to claim the isolated and purified receptor and the screening method
to identify agonists. But it won't permit him to reach through to claim the as-yet-unidentified ligands. The primary obstacle to obtaining these reach-through
claims is the disclosure requirements of the patent laws. To patent an invention,
the inventor must provide a written description that is sufficient to enable a person of ordinary skill in the relevant field to make and use it without undue
experimentation.8 For product claims to meet this standard, one must supply
information about the structure of products covered by the claim, and not just
their function.9 The hypothetical claims to ligands that bind the receptor fail to
meet this standard because all the applicant has disclosed is the function of the
molecules covered by the claim, without saying anything about their structure.
The Federal Circuit has been particularly tough in applying the written description requirement to biotechnology inventions,10 in contrast to its relatively light
touch on the utility11 and nonobviousness12 standards.
Nonetheless, technology has advanced in ways that give some firms a strategy
for addressing this written description problem. Researchers studying new proteins can sometimes crystallize the protein and determine its three-dimensional
structure using X-ray crystallography. They can then obtain Cartesian coordinates
permitting visualization of the target on a computer screen, creating a 3-D
model of the target for use in designing ligands. Some patent claims have been
issued for methods of identifying candidate inhibitor compounds that involve
introducing crystal coordinates for a drug target into a computer program and
superimposing models of inhibitor test compounds to identify those that fit spatially into an active site of the target.13
Might such an inventor also claim compounds identified through this computer visualization technique? Perhaps. Although the written description requirement is a potential problem, one might argue that the requirement is satisfied if
the crystal coordinates provide enough structural information linked to the function of binding the target to permit visualization of the molecules falling within
the scope of the claim. In other words, the applicant is not just claiming any
molecules that do the job but actually describing what such molecules would
look like.
Of course, the speculation might be wrong. Perhaps the shape of the
receptor in the environment in which it interacts (or not) with the rationally
designed compound is quite different than the shape found for the crystallized
protein. Or perhaps a prior art compound will turn out to fall within the scope
of the claims, rendering them invalid. A broad claim to a genus of compounds
fails to meet the novelty standard if even a single member of the genus was disclosed in the prior art, even if the properties of the prior art compound that
make it fall within the scope of the claim were merely inherent and not disclosed. Broad claims make big targets.
So while there are a lot of open questions yet to be resolved, there may
be some claiming strategies that allow upstream inventors to get reach-through
claims that will dominate future pharmaceutical products on the basis of preliminary genomics and bioinformatics work.
How should we be thinking about these reach-through practices as a normative matter? Should the law permit or promote practices that allow early-stage
inventors to reach through to capture a share of the value of future discoveries?
Should it discourage or prohibit these practices?
Critics argue that reach-through rights over-reward those who rest on their
laurels at the expense of those who carry research forward. Moreover, mechanisms that permit leveraging of patents on early discoveries into control of
future inventions raise potential antitrust concerns. To the extent that reach-through rights continuously augment the number of rights-holders at the bargaining table as cumulative research proceeds, they magnify risks of bargaining
failures in a potential “tragedy of the anticommons.” 14
On the other hand, reach-through rights may be a valuable way to permit
early innovators to capture the value that their discoveries contribute to subsequent research. Otherwise the stand-alone value of early innovations may be
too low, undermotivating the initial investment that is necessary to identify and
enable socially valuable research paths. A reach-through remedy may be a solution to the anticommons problem, permitting research to proceed without need
for constant negotiations over access to each proprietary input.
How one weighs these competing concerns depends upon how one views
the relative need for incentives at different points in the course of cumulative
innovation. If one worries more about the adequacy of incentives for early-stage
innovation and less about the adequacy of incentives for later-stage innovation,
then reach-through strategies make a lot of sense. On the other hand, if one


worries more about the adequacy of incentives for downstream research and
product development, then reach-through strategies are cause for concern. Judicial opinions about patent law reflect both of these perspectives.
Patent law sometimes rewards pioneers in a field with broad claims (and
a broad range of equivalents), while giving only narrower claims to those who
make follow-on improvements, even though the improvements may have more
stand-alone commercial value than the primitive versions of the invention developed by the pioneer. Suzanne Scotchmer has cogently argued15 that upstream
research is both riskier and less likely to have a high stand-alone value than
downstream research, which by definition is closer to market. She therefore
argues for giving broad rights to early innovators that allow them to force subsequent improvers to deal with them. Giving a broad patent to the pioneer who
invents, say, a primitive sewing machine allows her to capture some of the
follow-on value created by those who tweak the invention and make it more
user-friendly.
But one gets a very different picture of the relative contributions of early
and subsequent innovators from observing biotechnology and genomics
research. Of course, in biotechnology, as in other fields, there have been pathbreaking, pioneering discoveries that paved the way for lesser discoveries that
were more financially viable. But many of these discoveries were paid for by
NIH, raising questions about the need for strong patents to motivate and reward
the work that generates that sort of basic research. In recent years some private
firms have tried to figure out business models for generating biomedical
research information to provide a platform for downstream discovery, especially
in genomics, but often by the time private firms see such an opportunity, the
so-called upstream research has become relatively mechanical. For example,
when Celera decided they could take on the publicly funded Human Genome
Project and complete their own version of the human genome sequence,16 much
of the pathbreaking work had been done already. Although much work
remained, there was little question but that the job could be done. In this setting, the “upstream” work of sequencing the genome looks relatively routine,
riskless, and uncreative compared with the “downstream” work of figuring out
what it all means and how to use the information to develop new diagnostic
and therapeutic products.
More generally, in the biomedical field, upstream research is relatively
cheap and heavily subsidized with public funding. Downstream research is relatively costly and risky and relies primarily on private funding. This configuration of risk and cost argues for focusing on motivating and rewarding downstream research more than upstream research.
But other factors would support the opposite intuition. The biotechnology
industry, already in its third decade, has mostly been unprofitable, while the
pharmaceutical industry over the same time period has been extremely profitable.

Reaching Through the Genome

Maybe this is partly because the pharmaceutical firms are smarter about
business than the biotechnology firms, but perhaps this gross disparity in the
bottom lines reflects in part a failure of the biotechnology industry to capture
the social value that it has contributed to the pharmaceutical industry. If that
is indeed what is going on, that cautions against disabling biotechnology firms
from using legal strategies to get their fair share.
Persistent bargaining failures in the biomedical research community over
the terms of access to research tools also caution against precluding reach-through strategies. Universities, biotechnology firms, and pharmaceutical firms
describe these problems in different ways, each pointing the finger at the others, but they all report difficulties in agreeing about what is fair and reasonable
when one institution provides another with resources that might facilitate future
discoveries. In this environment it makes little sense for the law to foreclose
options that might help the parties get to yes. Reach-through provisions can
help with two big problems in the licensing of research tools: valuation and
financing. If the parties cannot use reach-through provisions, they need to use
pay-as-you-go terms for access to research tools. This suits the pharmaceutical
industry just fine, because they have plenty of cash and would rather pay a relatively small amount upfront than agree to share profits later. But reach-through
terms are more attractive to biotechnology firms that otherwise would be unable
to pay enough to compete with the pharmaceutical industry to get access to
research tools. In effect, reach-through agreements allow upstream and downstream biotechnology firms and universities to form joint ventures, sharing risks
without draining cash at the research stage.
Reach-through grant-back licenses — i.e., precommitments to license future
discoveries back to the provider of an upstream research platform — are more
troubling but may also be a valuable contract option. What makes these provisions troubling is that they allow early innovators to exercise continuing control
over future research and perhaps to suppress new innovation. On the other
hand, such provisions may be a necessary defensive maneuver to permit early
sharing of research tools without having to worry about facilitating domination
by a competitor. Reach-through licenses can have a “copyleft” aspect to them,
disabling subsequent innovators who have benefited from access to a predecessor’s research platform from monopolizing their own subsequent inventions.
Given that these provisions have the potential to enhance efficiency and
promote exchanges that are currently vexed by bargaining problems, it seems
unwise to preclude the use of reach-through provisions as terms in voluntary
agreements as a matter of law.
On the other hand, the case for reach-through royalties as a remedy for
patent infringement is far weaker. Such a remedy (which amounts to a compulsory license in exchange for court-ordered payment of reach-through royalties) has the advantage of permitting research to proceed without compelling


Rebecca S. Eisenberg

researchers to get licenses in advance, when the transaction costs may loom
large relative to the expected value of research that is far removed from commercial payoffs. But such a remedy presents a danger that the court-ordered royalty will be
too high or too low. The standard worry about compulsory licenses is that the
court-ordered remedy will be too low. In this setting there is also a risk that a
royalty rate determined ex post, when the research has proven successful and
valuable, will be significantly higher than an ex ante valuation arrived at between the parties. If reach-through royalties become common in licenses, the
terms of actual reach-through licenses negotiated ex ante will provide a benchmark to guard against overvaluation ex post. The law should follow, not lead,
actual contracting practices and award reach-through royalties if and when they
become common as a license term for research tools.
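To make the worry about ex post overvaluation concrete, consider a hypothetical illustration (the probability, payoff, and royalty share below are illustrative assumptions, not figures from this chapter). Ex ante, both parties discount the tool's value by the chance the research pays off; a court setting royalties after success sees only the realized value:

```python
# Hypothetical, illustrative numbers only: a research tool with a small
# chance of enabling a valuable drug, and an assumed "fair share" owed
# to the tool's owner.
p_success = 0.1          # assumed ex ante probability the research pays off
payoff = 100_000_000     # assumed value of the resulting drug (dollars)
fair_share = 0.05        # assumed share of value owed to the tool owner

# Ex ante, a negotiated reach-through license prices in the risk...
ex_ante_value = p_success * payoff * fair_share

# ...but an ex post royalty, set only after success, is computed on the
# realized payoff and can exceed the ex ante bargain many times over.
ex_post_award = payoff * fair_share

print(round(ex_ante_value), round(ex_post_award))
```

On these assumed numbers the ex post award is ten times the ex ante expected value, which is exactly the overvaluation risk that a benchmark of actual ex ante licenses would guard against.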
Reach-through claims raise all the problems of reach-through remedies
and more. Approved by the PTO in the course of ex parte patent prosecution,
patent claims are not tied even to a hypothetical agreement between prior and
subsequent innovators. Patent examiners who speak only to patent applicants
without hearing from future innovators may overvalue the importance of the
applicant’s invention relative to potential future discoveries. Multiple overlapping claims, already common even with the PTO viewing reach-through claims
skeptically, will become much more common if reach-through claiming strategies become more commonplace, giving multiple owners hold-up rights over
future products.
Patent law has a tradition of limiting patent protection to actual accomplishments and future variations that can be achieved through work that is routine and predictable. This is a sensible limitation that appropriately guides
patent examiners away from acceding to unreasonable requests for patent claims
that dominate future research that is itself fraught with risk and uncertainty.
There are good reasons for permitting prior innovators to use their intellectual property to capture a fair share of the value that their discoveries contribute to subsequent downstream innovation, but we can be more comfortable
with strategies that are negotiated in the market for licenses than with those that
are negotiated in the course of patent prosecution.
NOTES

1. See generally Diamond v. Diehr, 450 U.S. 175 (1981).

2. See, e.g., Amgen v. Chugai Pharmaceuticals, 927 F.2d 1200 (Fed. Cir. 1991) ("A gene is a chemical compound, albeit a complex one.").

3. For a history of this controversy, see Robert Cook-Deegan (1994), The Gene Wars: Science, Politics, and the Human Genome (W.W. Norton).

4. W. M. Cohen et al. (2000), "Protecting Their Intellectual Assets: Appropriability Conditions and Why U.S. Manufacturing Firms Patent (or Not)," National Bureau of Economic Research Working Paper Series, no. 7552 (concluding on the basis of survey results that "patents are used in substantially different ways across different technologies" and indicating that patent incentives are particularly important in motivating R&D in the pharmaceutical industry).

5. See Rebecca S. Eisenberg (2001), "Bargaining Over the Transfer of Proprietary Research Tools: Is This Market Failing or Emerging?" in Expanding the Bounds of Intellectual Property: Innovation Policy for the Knowledge Society, ed. R. Dreyfuss, H. First, and D. Zimmerman (Oxford University Press), 209.

6. Janice Mueller (2001), "No 'Dilettante Affair': Rethinking the Experimental Use Exception to Patent Infringement for Biomedical Research Tools," 76 Wash. L. Rev. 1. See also Jorge A. Goldstein (2001), "Patenting the Tools of Drug Discovery," Drug Discovery World, Summer, 9–18; James Gregory Cullem (1999), "Panning for Biotechnology Gold: Reach-Through Royalty Damage Awards for Infringing Uses of Molecular Sieves," IDEA 39 (4): 553.

7. 35 U.S.C. §§ 101, 112. See U.S. Patent and Trademark Office, Utility Examination Guidelines, 66 Fed. Reg. 1092 (Jan. 5, 2001).

8. 35 U.S.C. § 112.

9. See U.S. Patent and Trademark Office, Guidelines for Examination of Patent Applications Under the 35 U.S.C. § 112, ¶ 1, "Written Description" Requirement, 66 Fed. Reg. 1099 (Jan. 5, 2001).

10. E.g., Enzo Biochem. v. Gen-Probe, 2002 U.S. App. LEXIS 5642 (Fed. Cir. April 2, 2002); University of California v. Eli Lilly & Co., 119 F.3d 1559 (Fed. Cir. 1997).

11. E.g., In re Brana, 51 F.3d 1560 (Fed. Cir. 1995).

12. E.g., In re Deuel, 51 F.3d 1552 (Fed. Cir. 1995).

13. U.S. Patent 6,083,711.

14. M. Heller and R. Eisenberg (1998), "Can Patents Deter Innovation? The Anticommons in Biomedical Research," Science, May 1, 698–701.

15. See Suzanne Scotchmer (1991), "Standing on the Shoulders of Giants: Cumulative Research and the Patent Law," Journal of Economic Perspectives 5 (Winter): 29–41; Jerry R. Green and Suzanne Scotchmer (1995), "On the Division of Profit in Sequential Innovation," RAND Journal of Economics 26 (Spring): 20–33.

16. J. Craig Venter et al. (1998), "Shotgun Sequencing of the Human Genome," Science, June 5, 1540.

PART FOUR

Financing Biotech Research
Financing Biotechnology Research:
A Firsthand Perspective
Timothy F. Howe

Biotechnology and Government Funding:
Economic Motivation and Policy Models
Michael S. Lawlor

Financing Biotechnology Research:
A Firsthand Perspective
Timothy F. Howe

Drawing on my experience with a health care venture capital firm and on
much of what I teach at Columbia University’s School of Business, I will
focus on several practical aspects of the venture capital funding of biotechnology. These include how venture firms that invest in biotechnology operate,
how this market evolved, and what areas of biotechnology research hold much
promise. I will address these topics from the perspective of Collinson Howe &
Lennox (CHL), a Northeast-based venture firm that my partners and I operate.
HOW CHL OPERATES AND HOW THE VC INDUSTRY HAS EVOLVED
In describing how our venture firm operates and how this form of investing has evolved, I will first provide some background on our company as a way
to describe the talents that venture firms need from senior management and staff
in order to successfully invest in biotechnology.
Some Background on CHL Medical Partners
Our partnership makes early-stage and seed investments in companies
operating in the medical sector, defined to include biotechnology, pharmaceuticals, medical devices, and health care services. We are active, hands-on
investors who are typically responsible for defining strategy at the companies in
which we invest. We often manage these start-ups until we hire a complete
management team to run them, and we are often responsible for their financing before they go public. We believe we add value by bringing top financial,
scientific, and clinical expertise to bear on managing such ventures (Table 1).
Since the emergence of the institutional venture industry in the early 1980s,
venture capital firms have generally become more specialized, a trend that characterizes the nature of the funds that we have managed over the years.

Table 1
Collinson Howe & Lennox: Classic Venture Capital

In line
with most venture capital funds, the ones we have managed are ten-year limited
partnerships using capital raised from outside investors. During the 1980s, our
activities were widely dispersed among leveraged buyouts, specialty retailing,
biotechnology, information technology, communications, and just about anything
one could imagine. Since then, the world has become a lot more complex and
specialization has become necessary in order to identify the best investment opportunities. Our latest fund is called CHL Medical Partners II LP and is $160 million
in size. Over the years, we have been involved in making private investments in
about 150 companies, approximately forty of which have been biotechnology firms.
Since 1989, we have focused 100 percent of our time on the medical sector.
Some of our portfolio companies might be familiar, including Incyte
Genomics, Genetic Systems, Chiroscience, DNA Plant Technology, Procyte,
Leukosite, Neurogen, Alexion, and Nova Pharmaceuticals. About 86 percent of
our companies historically have completed initial public offerings (IPOs) and
been successfully traded (not including the most recent, 1998-vintage fund).
The top one-third of our ventures have generated returns in excess of four times
our initial investments; half of those have generated more than ten times, and
half of those, more than twenty times.
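As a back-of-envelope check on how skewed these returns are, the tier structure above can be run through a short, purely illustrative calculation (the 30-company fund size and the assumption that every company outside the top third returns nothing are mine, not CHL's):

```python
def min_gross_multiple(n):
    """Lower bound on a fund's gross return multiple implied by the tiers
    described in the text: the top third of companies return more than 4x
    the initial investment, half of those more than 10x, and half of those
    more than 20x. Pessimistically assumes every other company returns zero."""
    top = n // 3        # companies returning > 4x
    mid = top // 2      # subset of those returning > 10x
    best = mid // 2     # subset of those returning > 20x
    # Count each company once, valued at the floor of its best tier.
    total = (top - mid) * 4 + (mid - best) * 10 + best * 20
    return total / n

print(min_gross_multiple(30))
```

Even under that pessimistic zero-recovery assumption for two-thirds of the portfolio, a hypothetical 30-company fund clears a 3x gross multiple, which illustrates how a handful of large winners can carry a venture portfolio.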
Our investments in Texas-based companies are Texas Biotech, whose technology came from the Texas Heart Institute, and Gene Medicine (now known as
Valentis), built upon research at Baylor University. We also have invested in two
Texas start-ups: Odyssey Health Care, which provides home health and hospice
care, and SemperCare, which operates long-term, acute-care hospitals.
Senior Management at Health-Oriented Venture Firms
The trend toward sector specialization among venture funds has reinforced
the need for venture firms to couple specialized scientific knowledge with the
managerial and financial expertise needed to develop a new business. At CHL Medical Partners, our technical specialty is medical science. Of the three partners, Ron Lennox, with a Ph.D. in cellular biology and a B.S. degree in molecular biology, has a considerable scientific background, along with an M.B.A.
from Wharton. On the financial side, Jeff Collinson has over twenty-five years
of experience in private equity investing, along with an M.B.A. from Harvard
and a B.A. from Yale; and I have worked in venture capital for about seventeen
years, since earning a B.A. and an M.B.A. from Columbia. We are not atypical.
Among senior management at venture firms, it is quite common to see a blend
of business experience built upon considerable scientific and financial training.
Overall, there are nine people involved with our firm, and there is considerable scientific and business expertise among our future partners as well. For example, Greg Weinhoff is an M.D. who has an M.B.A. from Harvard, and Goga Vukmirovic, our latest addition, majored in molecular biology at Princeton, where
she wrote a senior thesis on a topic in functional genomics. Looking through our
firm, one sees a great depth of venture capital and medical-oriented experience.
Specialization, Diversification, and Sector Selection
At CHL Medical Partners, we try to balance the gains from specialization
with the need to diversify across medical solutions. Although approximately half
our business is biotechnology, our strategy is to diversify across the health care
marketplace because we believe the solutions to medical problems could arise not
only from biotechnology but also from medical devices and services concepts.
Within biotechnology, we have invested in biotechnology tools, biopharmaceutical
development, genomics, proteomics, and drug delivery technology (Figure 1).
Figure 1

Collinson Howe & Lennox: Sector Analysis

Figure 2

U.S. Health Care Expenditures

SOURCE: Centers for Medicare & Medicaid Services, Office of the Actuary.

Figure 3

The Aging of the Population: The Baby Boomers Move Through

SOURCE: U.S. Administration on Aging.

Investing in health care is attractive to us because health care expenditures
have grown to become a large part of the U.S. economy, amounting to nearly
15 percent of GDP. Figure 2 depicts U.S. health care spending in billions of dollars in five-year increments.
Most of these expenditures are service-oriented, with only 10 to 12 percent spent on pharmaceuticals. The primary reason health care expenditures are
growing at an increasing pace is the aging of the population, portrayed in Figure 3.
The right-hand bars depict the percentage increase in the population that is over
age 55 relative to the 1995 levels, while the left-hand bars depict population
growth of those under age 55. This figure shows that the over-55 population segment, which accounts for most medical spending, is growing at about five
times the pace of the rest of the population. This is a fundamental force that is
propelling growth in health care spending. Moreover, with better development
of drugs, this increased demand could be accommodated with much less
expense to the system.
We see the opportunities in biotechnology as building on advances in
molecular biology, genomics, and proteomics, listed in Table 2. Currently, all the
drugs on the market act on a total of about 500 targets, but there are upwards
of 35,000 genes in the human genome, and many new targets for drug therapy
remain to be identified. What we are really trying to do as investors is get a little
bit closer to the ultimate goal of personalized medicine. More specifically, the
goal is to really understand the genetic basis of disease and how an individual’s
genetic makeup influences the effectiveness of drug therapies. Such advances
would hopefully lead to better, more specific therapies with fewer side effects.

Table 2

Human Genome Opportunities

The Role of Universities in Biotechnology Ventures
The critical role of science in the biotechnology arena naturally leads us to
work closely with universities. With our headquarters in Stamford, Conn., Yale University is a natural partner. As Table 3 shows, we have worked closely with the
Yale technology transfer office to establish and fund seven start-up companies.
Mirroring the experience of other medical-oriented venture funds, we have
seen a shift in how the returns from joint ventures are shared with universities.
While Yale-based technologies and expertise led to the creation of two of our
biotechnology companies in the early ’90s, Alexion and Neurogen, the financial
returns to Yale consisted mostly of licensing fees for their technologies as
opposed to equity ownership, which was typical of the environment at the time.
Both of these companies became public, and depending on how their stock
prices have traded, their market values have ranged anywhere from $150 million to over a billion dollars. Recently, in contrast to these initial ventures, our
last five venture deals with Yale, made between 1998 and 2002, involved companies founded with the technology transfer office at Yale, whereby Yale University received an equity stake in exchange for licenses to technologies
invented at Yale. So while it is still too early for these companies to consider
initial public offerings, we are hopeful those days will come and Yale’s returns,
though riskier, could be substantially greater than had they just taken licensing
fees. This exemplifies an important emerging trend of ventures involving technology transfers from universities, and Yale has emerged as one of the leaders in making this transition. Nevertheless, many other institutions remain reluctant to take equity and continue to prefer royalties.

Table 3
CHL Medical Partners: The Yale Relationship

Figure 4
CHL Medical Partners: Portfolio Analysis, Deal Sources
There are other benefits to universities from venture deals. For example,
we tend to situate the companies we fund from Yale around New Haven in
order to benefit from the local talent pool, and Yale has been very pleased
about bringing new companies and all the employment they help generate to
the city of New Haven. From our last five equity-share ventures alone, over
$100 million of capital has been brought into the region thus far, and close to
100 new jobs were created, contributing both to the university’s bottom line and
to local development.
I would like to point out that most of our deals come directly from inventors, academic institutions, or people involved in successful earlier ventures. From
Figure 4, one can see how few opportunities we have generated through intermediaries. It is important for us that the source of technology makes contact
directly with us.
WHERE ARE THE FUTURE OPPORTUNITIES FOR
BIOTECH VENTURE FIRMS?
Later, I will discuss the particular opportunity facing venture capitalists in
the area of proteomics; however, first I’d like to provide the following introduction to how we develop biotech venture opportunities. Broadly speaking,
we have found that important venture investment opportunities continue to
emerge when one considers the vertically integrated pharmaceutical industry
from the perspective of the potential for developing large horizontal players.
Drawing on the transformation of the computer industry illuminated by
Andy Grove in his 1996 book Only the Paranoid Survive, where he describes how the computer industry transformed itself from being vertically integrated to
being dominated by horizontal players, we see some parallels in the pharmaceutical industry. Figure 5 illustrates the vertically integrated pharmaceutical
industry on the left-hand side, with the activities broken down into five categories: sales and distribution, manufacturing, clinical research, compound discovery and development, and research/target discovery.
Historically, large pharma houses have been justifiably proud of their own
research and target discovery capabilities, which they housed internally. Prior to
the advent of combinatorial chemistry technologies, the pharmaceutical companies could also be very proud of their own proprietary compound libraries.
Many big pharmaceutical firms continue to house their own clinical research, to
manufacture all of their own pills, and to maintain huge sales and distribution
forces. Recently, however, a number of venture-backed companies have emerged
as potentially significant horizontal players that are transforming the way pharmaceuticals are discovered, developed, and brought to market.
For example, we founded Incyte Genomics to serve as a source for much
of the research and target discovery for the pharmaceutical industry. Specifically, Incyte Genomics focused on sequencing the human and other species’
genomes and providing access to its database to pharmaceutical industry clients
who pay sizeable annual subscription fees under multiyear contracts. Currently,
most of the top pharmaceutical companies subscribe to Incyte’s databases. In
addition to subscription fees, Incyte Genomics is also entitled to royalty payments on discoveries made using the databases that eventually get developed
and sold in the marketplace. We believe that a company such as Incyte has the
ability to dominate a horizontal segment of the pharmaceutical industry, much
as Intel did within the chip sector of the computer industry.
Similarly, combinatorial chemistry, structure-based design technologies,
and high-throughput screening tools are enabling more efficient compound
development, and companies that have these technical capabilities have the
potential to become horizontal leaders and integral parts of the industry’s drug
discovery and development process. Outsourcing clinical research to clinical
research organizations has already become commonplace, but we expect many
additional opportunities to arise with the emergence of pharmacogenomics and
advances toward personalized medicine.
Within the sales segment, we have yet to see an outsider or newcomer attain a dominant position. Nevertheless, Medco Containment (prior to its
acquisition by Merck) opened up a significant position as a mass distributor of
drugs. We believe that the Internet could fuel further change by giving doctors
easier access to drug information and pharmaceutical firms. In particular, we
can conceive of lowering the cost of selling drugs by enabling pharmaceutical
firms to introduce therapies to physicians at the physicians’ own convenience
over the Internet. Those companies that can provide these efficient systems for pharmaceutical firms to reach physicians may become significant companies; hence we have invested in RxCentric.

Figure 5
The Horizontal View
Opportunities Posed by Advances in Proteomics
We see the coming proteomics era as potentially presenting a number of
exciting venture opportunities. Over the past decade, the sequencing of human
and model organism genomes has resulted in an enormous proliferation of
information, transforming biology from a data-poor to a data-rich science. Historically, to understand the form and function of organisms, scientists have had to define the unit of study ever more narrowly. Looking at the entire animal gave way to a focus on increasingly smaller parts — from organs, to cells, to
molecules. The reductionist approach of dissecting the whole into its constituent parts and studying each part independently has dominated the field of
molecular biology over the past century; with the discovery of the structure of
DNA, the unit and focus of study became the molecule. This approach has been
a powerful strategy that will continue to have a role in scientific research. Yet
today, with the genome sequencing efforts for humans and numerous model
organisms nearly complete, there has been a shift away from the reductionist
approach in research in favor of genomewide, context-dependent, global analysis. The excitement around genomics and proteomics has been fueled by the promise
of holistic, whole-genome- and proteome-based approaches that generate and
integrate sequence data, expression profiles, protein interaction maps, and protein structural and functional information.
Significant opportunities for venture capital investors exist in funding the
development of technologies to understand and exploit this flood of information
and data, and proteomics in particular has the potential to enhance pharmaceutical productivity and tremendously impact the drug discovery and development process.
Over the last twenty years, our biotechnology investing practice has
changed as the focus of, and the technology involved in, the drug R&D process changed (Figure 6). In the 1980s, a single drug target or an interesting drug candidate could be developed and sold at a significant profit, provided it was
an important therapy. Fundamentally, success was defined by a bottom line
driven by a single drug approval and sale.
In the 1990s, our investments shifted more into genomics-based research,
where the success of a venture was based on the development of comprehensive systems and proprietary tools that enabled pharmaceutical companies to
utilize the massive amount of information coming from the genome-sequencing
projects. The overabundance of raw information, and lack of adequate tools to
interpret and leverage it into marketable products, became the bottleneck in
drug development. Hence our companies were designed to provide the tools that mine the genome sequence information and generate a wealth of novel therapeutic targets.

Figure 6
The Transformation of the Drug R&D Process

Genomics offered pharmaceutical firms a promise to decrease time to drug development, increase the success rate in clinical trials, and
lower R&D costs. As investors, we were now taking pure technology risks rather
than drug development risks. This was an important feature of the genomics era
ushered in during the 1990s. Enormous investment returns were available for
those who developed the successful proprietary genomic tools and systems and
sold them effectively to the pharmaceutical industry.
Yet many of the early promises of genomics (such as decreased time to
drug development, increased success rate in clinical trials, and lower R&D costs)
have not materialized. It has become clear that medical breakthroughs do not
follow from the genome sequence information itself. Rather, the breakthroughs
will come from focusing on understanding the function and relationships among
gene products (i.e., proteins) in a changing environment. Going from knowing
the coding sequence to understanding the protein function is not a trivial task.
Identifying a sequence feature does not by itself reveal its function, and
the function of a particular gene product is highly context dependent and is
rarely unique, as there are numerous instances where there is a “backup” system (i.e., redundancies in signaling pathways) in place that can compensate for a particular loss of function. Dynamic, multidimensional analysis is needed to
understand the structure and function of proteins, as particular proteins can be
present in varying amounts at different times and in different locations within a
cell. While the identification, sorting, cataloging, and analysis of structures and
functions of proteins will be more important and difficult to undertake than was
the case with genomics, leveraging proteomic information could alleviate many
bottlenecks in drug discovery and development and potentially enhance pharmaceutical productivity.
Investing in technological tools continues to present attractive investment
opportunities because researchers need increasingly complex and sensitive technologies to carry out proteomic analysis, and the data management requirements alone exceed those of genomics by at least four- to fivefold. The tools
market is attractive because it is unregulated (no FDA approvals necessary), businesses can scale quickly, and products are patent protected. Moreover, for
investors, exit alternatives are wide ranging. There has been substantial M&A
activity by large, publicly traded tool companies seeking new technologies to
complement their existing portfolios, and by numerous pharmaceutical and biotechnology companies seeking technologies that complement and extend their R&D
efforts, offering them unique, proprietary techniques for drug development.
Genomics and proteomics have brought about a revolution in the field of
biology and have the potential to fundamentally transform the pharmaceutical
industry. Increasingly, diseases will be diagnosed and treated based on a greater
understanding of both the disease pathology and the mechanism of particular
drug action, further integrating drug discovery with disease characterization and
diagnosis. With advances in proteomics, we are many steps closer to fulfilling
the promise of personalized medicine to offer more effective and less toxic therapies to individuals. But even before personalized medicine becomes a reality,
opportunities abound for creating significant value by developing enabling technologies and utilizing them in drug discovery and development.

Biotechnology and Government Funding:
Economic Motivation and Policy Models
Michael S. Lawlor

The United States is clearly the world leader in the emerging field of
biotechnology — the application of breakthroughs in biochemistry and
molecular biology to new products and health care therapies. It is no exaggeration to say that this world leadership position is the result of the superiority of the human and physical capital of the U.S. science and technology base
in the nation’s university, government, and nonprofit labs. Most of this base has
been nurtured and sustained since the end of World War II by the generous support of the American taxpayer. The economic and political motivations upon
which the U.S. research system was designed and operates, the special features
of the biomedical research community, its history up to the present era of
tremendous advance, and some lessons that lie therein for public policy toward
science are the subjects of this paper.
OVERVIEW OF R&D FUNDING
Figure 1 shows the total funding for research and development in the
United States from 1953 to 1998 in both current and constant (1992) dollars.
Concentrating on the constant dollar values, roughly four eras of total funding
are evident from these data. From 1953 to 1970, there were large, sustained
increases in research funding, led by federal government efforts that corresponded to the arms and space races. In the ’70s, public support for both these
goals waned. It is important to note, though, that while total funding stagnated
in the 1970s, this decade also saw the advent of new biomedical initiatives, such
as the federally sponsored “war on cancer,” which laid the groundwork for
much of the bioscience of today.
The third era of funding, in the 1980s, saw a brief resurgence of federal
spending, mostly fueled by the Reagan administration’s defense program, especially the effort to develop a space-based missile defense system.

Figure 1
National R&D Funding, by Source, 1953–98

Figure 2
National R&D Expenditures, by Source of Funds

But the real
story of the ’80s is the rapid increase in private research expenditures. As can
be seen in Figure 2, by the end of the decade, the federal role as the leader of
funding had shifted to the private sector. Much of this was spent on drug discovery by the pharmaceutical industry. It was directed at the biological targets
of the pregenetic biotechnological era that public basic science was then discovering. This research is largely responsible for most of the blockbuster drugs
on the shelf today.


Finally, referring to Figure 1 again, a fourth stage in total national research
and development spending began in the late 1980s with the end of the Cold
War. The lack of any ambitious new nondefense initiative at this time saw total
federal funding decline again. In terms of total federal government research
spending, this decline has still not been reversed. The recent military and biological defense spending increases called for by President Bush, in response to the threat of terrorism, may prove to be the next large focus for public support of research spending. An important message to take away from this is the
pervasive influence of current political interests, such as the Cold War science race, on the public funding of research and development activity.
The variability of postwar public funding for science research is largely a story
of changes in this political commitment.
Given the unpredictable nature of science, potentially all past federal science research has contributed to the biotechnology era. Consider, for instance,
the fact that the techniques employed in the Human Genome Project were a
complex combination of basic breakthroughs in theoretical biology, enabled by
developments in imaging stemming from high-energy physics and kept track of
by advanced computing technology (bioinformatics). Yet the original funding
for the basic investigations that went into these seemingly unrelated advances,
if they could all even be traced, came from sources originally thought to be
unrelated to biomedical research.
Nevertheless, it is useful to see a breakout of public funding by category
(Figure 3). First we should set aside defense research, much of which is directed to weapons development, not to basic science issues at all.

[Figure 3: Federal R&D Funding, by Budget Function]

Aside from defense-related research, it is evident that most federal research funding is for "health"-related activities. The majority of the funds in this category support the National
Institutes of Health. If one were to further isolate the specifically biotechnology-related funding in the remainder of this breakout, it would span all the other
categories shown in Figure 3, in addition to the obvious amount labeled
“health.” The majority of nonmedical, basic science training and research funding falls under the “general science” category (predominantly for the National
Science Foundation). Additionally, various small programs that target specific
biotechnology industries and/or types of technology are here included in the
category “other” (for instance, the Department of Agriculture’s research budget).
Overall, because of the dominance of nondefense research by the health share,
it has become customary to focus on the NIH budget when discussing biotechnology. By this measure, a good rough and ready indicator of the current health
of publicly funded biotechnology research is the recent rapid increase in the
NIH budget. If the proposal submitted by President Bush in April 2002 for the
2003 budget is passed by Congress, President Clinton’s 1998 pledge to double
the NIH budget over five years will have been met.
Finally, Figure 4 displays an international comparison of total U.S. science
and technology expenditures. On a country-by-country basis, no single state
comes close to the U.S. level of total spending. Japan, the closest, expends
about half of the U.S. total. All of the non-U.S. G-7 countries combined spend
an amount about equal to what the United States does.

[Figure 4: U.S. and Other G-7 Countries' R&D Expenditures]

If we subtract defense commitments, though, an area in which none of these countries really competes,
the United States does not stand out quite so dramatically. We are outspent on total
nondefense research and development when compared with the whole group of
our fellow G-7 partners. But we still outspend any one country by a large amount.
This does not tell the whole story though, because for reasons discussed below,
we also get more productivity—in terms of new breakthroughs, patents, marketable products, Nobel Prize winners, etc. — per dollar spent on research and
development than any of these countries. Thus at present, the United States is
the undisputed leader in almost every basic science area related to biotechnological research. Our universities, especially those with research-intensive medical
schools, are the preferred places to train for a career in these fields. Every year sees a net inflow of scientists from other countries who wish to work in these institutions. For these reasons, and despite the fact that much valuable
research is conducted in Europe and Japan, all major international pharmaceutical companies feel the need to establish research relationships or laboratory locations in the United States to keep abreast of new developments.
THE ECONOMIC MOTIVATION FOR PUBLICLY FUNDED RESEARCH
The basic economic motivation for public funding of scientific research lies
with what economists call a market failure. The failure is caused by the degree
to which the process of creating new scientific knowledge and technological
innovation may be insufficiently appropriable — i.e., difficult to establish property rights to — to provide profit-seeking investment with sufficient rationale to
pursue such research (Nelson 1959, Arrow 1962). At the root of this failure is
the problem of the ease of spillover effects from new knowledge. Knowledge
production requires real resources, and if the fruits of one’s investment in those
resources are freely available to anyone, then they make poor investment targets. Add to this the uncertainty, risk, and long-term nature of the knowledge
production process and it is likely that many forms of research will not meet
investors' minimum expected return hurdles. Even so, this constitutes a market failure in the provision of an important public good, one requiring public intervention, only if the activity would also yield social returns greater than the cost of the investment in it.
Much empirical research by economists has established that this is in fact
the case for investments in science. Two types of studies have been undertaken.
At the microeconomic level, on an industry or case study basis, specific technological innovations have been demonstrated to display more benefits for the
ultimate users of technology, consumers, than for the original innovating firms
(Mansfield et al. 1977; Scherer 1982; Griliches 1992, 1995; Hall 1996; Jones and
Williams 1998; and Lichtenberg’s contribution to this conference). Thus the public
rates of return on investment in the research to produce such new knowledge
tend to be many multiples of the rates for private investors. This is because as the benefits spill over to other producers and consumers, the private investor is
not compensated. A second type of study, at a macroeconomic level, consistently finds that the rate of productivity growth of the whole economy is importantly linked to the invention and adoption of new technology. (See Ruttan
2001, ch. 2 and Steil, Victor, and Nelson 2002, ch. 1 for useful summaries of this
literature.) Here the proposed linkage is both the effect on productivity of the
spread of new ways of doing things across industries and products and the
increased skill of a workforce trained in the new technology. Note that this evidence also indicates that the investment value of new scientific knowledge is
greater for society in general than for any one firm. Thus both types of evidence
combine in an argument that the private sector can be expected to underinvest,
relative to what would be an optimal level for society, in the types of scientific
research for which the difficulty of appropriating specific new knowledge is
substantial.
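The underinvestment argument can be made concrete with a stylized net-present-value calculation. All the numbers below are hypothetical illustrations, not estimates drawn from the studies cited above; the point is only that when spillovers let most of a project's benefits leak away, a project that is clearly worthwhile for society can fail a private investor's hurdle-rate test.

```python
# Stylized sketch of R&D underinvestment due to spillovers.
# All numbers are hypothetical; they are not estimates from the
# studies cited in the text.

def npv(cost, annual_benefit, years, rate):
    """Net present value: an up-front cost followed by a constant
    annual benefit stream, discounted at the given rate."""
    return -cost + sum(annual_benefit / (1 + rate) ** t
                       for t in range(1, years + 1))

cost = 100.0           # up-front research outlay
social_benefit = 30.0  # total annual benefit to society
appropriable = 0.25    # share the innovator can capture; the rest spills over
years = 10
hurdle = 0.10          # investors' required rate of return

private = npv(cost, social_benefit * appropriable, years, hurdle)
social = npv(cost, social_benefit, years, hurdle)

print(f"Private NPV: {private:.1f}")  # negative: the firm declines the project
print(f"Social NPV:  {social:.1f}")   # positive: society wants the project done
```

Here the social return on the same outlay far exceeds the private one, which is exactly the wedge the micro-level studies document.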
To move closer to a socially optimal level of investment in research, then,
requires that the government intervene in some fashion. One such intervention
could be the establishment of a government-regulated system of property rights
on intellectual “inventions,” such as the patent and copyright laws. This has
been vigorously pursued in the United States since the time of the founding
fathers. (See Grabowski’s contribution to this conference.) A second possible
policy could be to subsidize private research efforts by a tax credit. Research-intensive industries in the United States have lobbied for this incentive for
decades, and since the 1980s it has become a permanent feature of our corporate
tax code. Nevertheless, economists generally view tax incentives as a weak and ineffective tool because of the undifferentiated incentive they present to all kinds of
research and development efforts. The problem is that it is difficult to target just
the kind of basic research for which there is a market failure, and much applied
and developmental research that could attract sufficient private funding on its
own also ends up being subsidized. This is also inefficient from a social point
of view. Thus the most direct and effective tool available to governments to
fund research that is expected to be socially beneficial, but that is not likely to
be done by the private sector, is to fund it directly. A long-term program of such
funding also has the beneficial effect of keeping in place a system that will be
uninterrupted over the long-run cycles of private funding. The infrastructure such a program provides to nurture the cumulative and unpredictable nature of research, and the institutionalized training grounds it maintains for reproducing the next generation of scientists, are additional benefits of this system.
SCIENCE VERSUS TECHNOLOGY, BASIC RESEARCH VERSUS APPLIED
Implicitly, we have assumed above that basic research into scientific questions represents the appropriate target of public research expenditures. Why? Generally because such research is so broadly defined, so generally applicable, and/or so difficult to write into a legally binding patent application that it
cannot be protected from spillover effects in sufficient degree to make it attractive to private investors. Applied research into technology would, in this framework, be such research as can be adequately appropriated to encourage private
investment. But this tells us nothing essential about the qualities of these two
types of activities that can be identified independently of what private initiative
will or will not fund. Since, particularly at the basic science level, one cannot know in advance what scientists pursuing a particular line of research might discover or produce unless we let them try, there will always be a difficulty attached to identifying the correct (optimal) amount of total research
spending. Additionally, public policy is faced with a need for ranking research
goals to provide for the allocation of public funds between particular lines of
research. Thus to some extent basic research will always depend on a degree
of confidence — perhaps even bordering on "faith" — in the possible future life-enhancing usefulness of what must remain at some level the unforeseeable
results of scientific research.
Nevertheless, the distinction between bioscience and biotechnology plays
an important role in both investors’ minds and in thoughtful public policy. Investors are reasonably suspicious of the probability of commercial success of an
early and uncertain line of research. Policymakers, beholden to their constituencies, often want to direct funds to areas of greatest public interest or concern.
Thus here is a good place to briefly review attempts by scholars to draw theoretical distinctions between the activities of science and technology research. We
shall see that though this effort has added insight into the process of scientific
research that is useful and interesting, the distinction is a shifting and slippery
one to maintain in practice.
The most common view of the essential distinction between science and
technology is based on the intended use of the new knowledge that is being
pursued by research. By this view, science is new knowledge intended for
knowledge’s sake alone. Alternatively, technology is the “useful” application of
new ideas for commercial, military, clinical, etc., uses. With regard to basic science
activity, this captures something of the element of curiosity often proclaimed by
scientists as the crucial element of research that has led to new breakthroughs
(see Kornberg 1997). A more economic extension of this distinction was put forward by Dasgupta (1987). He distinguished between the alternative incentive
systems involved in the production of each type of knowledge. Scientific discovery, situated largely in nonprofit settings and conducted by academic scientists, is motivated by the “rule of priority,” according to Dasgupta. She who is
first wins in this contest. A notable aspect of this system is that its focus on early
achievement also serves to encourage rapid and complete disclosure of new
knowledge. Thus this system advances the social function of the diffusion of new ideas. Technology, in Dasgupta's view, is motivated quite differently. A
profit-oriented firm that employs scientists to develop new clinical products, for
instance, is motivated by the “rents” that can be appropriated from the new
knowledge. It is not in the interest of the firm to see a rapid dissemination of
its new knowledge. Private knowledge production, alternatively, encourages
secrecy and hoarding of information.
An interesting feature of the current biotechnology era, where academic
science and commercial development fluidly intermix, is the degree to which
these distinctions have been breaking down. Most new biotechnology is already
a mix of what Ruttan (2001, 536) calls “doing science” and “doing technology”
years before it ever hits the marketplace. (Indeed, most biotechnology has yet
to reach the marketplace!) Both the fundamental idea of the structure of DNA
and the useful method of gene splicing have been integral to the development
of molecular biology, for instance. Thus the distinction between scientific ideas
and technological applications of them is more of an interactive feedback loop.
New ideas often lead to new technology that itself aids the next stage of discovery. Moreover, today the dual feedback loop between bioscience and technology extends quite far into the development stage, where new product development is itself a scientific endeavor. Consider, for example, that the average
biotechnology start-up firm is a small group of scientists and laboratory technicians
trying to develop a commercially viable prototype of a previous scientific result.
The same blurring of lines affects Dasgupta’s taxonomy. Academic researchers
who are winning the priority prize have also been patenting their discoveries
and starting new commercial ventures to bring them to market. (See Darby’s and
Zucker’s contributions to this conference.) For instance, Cohen and Boyer’s
gene-splicing technique, developed in a university lab, became the basis upon
which the most commercially successful biotechnology firm to date, Genentech,
was founded. This was the original model for the now numerous academic-science-entrepreneur firms that have sprung up in the biotech field. This cross-fertilization between basic science and technological development has also
spread to the patenting process. Not only are many scientists and universities
patenting the results of their research with greater frequency, but also many
patent applications are citing scientific publications in their applications to
establish both prior knowledge and the rule of priority. Figure 5 demonstrates this increasing link between new technology and new science in the rapid increase of references to scientific papers on applications for new patents since the mid-1980s. Another aspect of the breakdown of Dasgupta's distinction is the
growing controversy over the clash of academic and commercial interests in disseminating the results of scientists who are also commercially funded or have a
vested interest in future commercial applications. Thus not only are the worlds
of science and technology increasingly intermixed, so are the motivations and
roles of researchers in academic institutions and private industry laboratories.


One important public policy lesson of the ambiguity academic investigation reveals in the distinction between science and technology is that there may be an additional role for public funding in certain cases of what
might seem to be applied research. The rationale in this case concerns the possibility that in relatively new technological areas there is an additional market failure: firms find the task of translating new laboratory science results into industrially viable technology too risky to clear private investment hurdles.
In many cases, as biotechnology firms today are finding, established firms like large pharmaceutical companies consider it an intolerable risk to nurture such new technologies as their own research investments. This is why
the pooling of risk, and the consequent benefits of portfolio diversification, offered by the venture capital fund have become so important in biotechnology.
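The risk-pooling point can be sketched numerically. The success probability, payoff multiple, and portfolio size below are hypothetical, chosen only to show why a diversified fund can hold early-stage projects that no single backer would:

```python
# Stylized sketch of venture-capital risk pooling.
# Probability, payoff, and portfolio size are hypothetical.

p_success = 0.1  # chance any one early-stage project pays off
payoff = 20.0    # multiple returned on a successful project's stake
n = 20           # independent projects in the fund's portfolio

# An investor backing a single project usually loses everything:
p_total_loss_single = 1 - p_success

# The diversified fund loses everything only if all n projects fail:
p_total_loss_fund = (1 - p_success) ** n

# The expected return per dollar is the same either way; only the
# dispersion of outcomes shrinks with diversification.
expected_multiple = p_success * payoff

print(f"P(total loss), single bet: {p_total_loss_single:.2f}")
print(f"P(total loss), fund:       {p_total_loss_fund:.3f}")
print(f"Expected multiple:         {expected_multiple:.1f}")
```

With these illustrative numbers, the chance of losing the entire stake falls from 90 percent for a single bet to about 12 percent for the fund, even though the expected return per dollar is unchanged.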
But even venture capital may not be willing to assume all of the risk of
wholly new science for which there is as yet no technology. Crossing this funding "death valley" on the way from laboratory science to engineering viability is a major hurdle today for many bioscience firms. Part of the problem
is that certain types of premarket, generic, process technologies will possibly
become commonly used by other firms in ways that are difficult for the original
investor to lay claim to. But just as often it is the uncertainty of success in ramping up a basic science result to an industrial scale.

[Figure 5: Number of Citations on U.S. Patents to Scientific and Technical Articles, 1987–98]

An instructive historical example of this difficulty can be seen in the World War II–era attempt to produce penicillin. Penicillin mold was originally identified as an antibiotic by Alexander Fleming in 1928. Then followed a decade of work by British scientists at Oxford University in isolating the essential agent, producing it in laboratory quantities, and proving its clinical efficacy in small numbers of risky experiments on patients. Only when the war arrived and it was expected that a successful antibiotic would save thousands of lives did the effort to industrially produce the product receive attention. When it did, the British turned to America for
help. What turned out to be the crucial technological breakthrough came from
agricultural scientists working for the U.S. Department of Agriculture on fermentation technology in Peoria, Illinois. The expertise of the agriculture scientists and engineers in the seemingly unrelated area of fermented food production led to an economically viable process by minimizing the ratio of
"feedstock" input of mold to output of finished penicillin in industrial-scale fermenters. (See Bud 1993, 103–7.) It was largely from these publicly funded
results that the technological problem of penicillin was solved. This process
then became the basis for a vast commerce in a wide spectrum of commercial
antibiotics after the war.
Though some have called for more of this type of effort in the biotechnology field (Tassey 1999), already there is evidence that the U.S. government
has been moving in the direction of providing some funding for just such
generic technology development. For one, since the mid-1980s, it has been the
policy of the federal government to encourage the transfer of any federally funded
research to the private sector. This encouragement may occur in the form of
cooperative research and development agreements (CRADAs), by which federally funded laboratories are authorized to establish research links for their own profit with commercial firms using their results. Similarly, all federally funded scientists are now
authorized and encouraged to patent the results of their research by the opportunity for the scientists and the institutions to which they belong to share the
royalties such patents might generate in the private sector. More directly, the
Department of Commerce’s National Institute of Standards and Technology has
initiated a small-scale program of directly funding research into the development of new generic processes for emerging high-technology industries. This Advanced Technology Program (ATP) has launched projects on such questions as
laboratory reproduction of stem cells, regeneration of human tissue, and the
possible growth of insulin-producing cells in the pancreases of diabetics (Martin
et al. 1998). Its mission is to investigate the feasibility of such enabling technologies and then turn them over to the private sector for further use. If it can avoid, as it seems to be carefully doing, the problem of favoring particular firms, this small program has the potential to help bridge a crucial gap in the move of biotechnology from public science to viable commercial products.
POLITICAL AND SOCIAL MOTIVATION FOR PUBLICLY FUNDED RESEARCH
As our brief overview of public funding for scientific research earlier in this paper suggests, the market-failure rationale from economics has not been the prime mover of science policy in the United States. Indeed, throughout history it has been much more likely that political considerations such as
national prestige, military security, and social needs have been the main motivators of public funding for scientific research. There is no reason to expect that
this will change in the near future. Since it is particularly the long-term nurturing of the broad basic science base that has produced the United States’ competitive edge in biotechnology, it is instructive to understand how this system
was founded and what qualities are responsible for its many successes.
In fact, it is something of a historical fluke, born of the political conditions
of the 1940s, that the United States established what is an internationally unique
system of noncentralized, government-funded, but largely university-performed,
basic research in science and technology in the postwar era. That fluke is the
story of what has come to be called the Vannevar Bush-inspired era of national
science policy.
Vannevar Bush, formerly the dean of the MIT School of Engineering and then the president of the Carnegie Institution of Washington, was selected by President Roosevelt in 1941 to head the Office of Scientific Research and Development. His
task was to harness the skills of the academic science community for the war
effort. The spectacular successes of this effort in the war — including, to name
just three of many possible examples, the creation of a feasible synthetic rubber to replace the natural supplies cut off by the Japanese, the mass production
of penicillin, and the creation of the atomic bomb — convinced Bush that the
same type of work should be harnessed for peacetime needs in the postwar era.
In 1945 he authored the report Science: The Endless Frontier,
which laid out a plan for establishing a permanent federal science policy. Bush’s
report was couched in the inevitable military rhetoric of its era, with accounts
of logistical needs, personnel available and needing to be trained, chains of
command, budgets, and a plan of action that would have made a field general
proud. No doubt this added to his report’s enormous influence. But more
important is the vision that his plan put forward. It called for an ambitious and
comprehensive National Research Foundation, which would support primarily
basic research in the life sciences, physical science, medicine, and what he
called “basic military research,” meaning research prior to actual weapons
development. (He no doubt had atomic energy in mind.)
All of this research was to be funded at the federal level and performed
largely by state and private university scientists. Most important for Bush, the
scientists themselves would control the allocation of the funds. This would be
accomplished, he suggested, by a peer review system in which grants would be awarded to projects and individuals deemed both capable and scientifically promising by other scientists. Yet though Bush made glowing comments
on the potential applied social uses of scientific knowledge, he implicitly seemed
to reject the rate-of-return reasoning we have just outlined as the economic argument for public funding. In rejecting the "investment" criterion, as he called it,
he articulated the humanistic argument still heard today, that scientists do their
best work when motivated by curiosity alone, not practical application. Curiously, though, this led him, like the economics argument but for different reasons, to reject funding for any applied research. His objection was that too much
political or commercial direction of scientific research would inhibit the free play
of idle curiosity. Thus he did not envision federal support for industrial product
development, and he wanted actual weapons development to be the responsibility of the armed forces separately. Also, curiously, his discussion of medical
research (Bush 1945, appendix 2) only briefly mentions clinical research.
It is fair to say that Bush’s vision of a science establishment innocent of
other interests was politically naïve. Bush’s historical role was to use his enormous influence to lobby for perhaps as much freedom for science from political, commercial, or military control as it has been given anywhere, anytime in
history. In this he was largely pitted against the forces arrayed behind Senator
Harley Kilgore of West Virginia, who sought a more centralized control of government-funded scientific research (see Kleinman 1995). Though Bush is often
credited with being the architect of the postwar federal policy, it is important to
note that his was not an unqualified political success. First of all, his advocacy
of noninterference with scientists by either defense, social, or commercial interests was not to become a complete reality. It would be more correct to say that
the purest example of the Bush model has been the National Science Foundation, founded in 1950. The NSF has strenuously resisted attempts by Congress over the years to direct or widen its mission toward applied research, as well as any outside interference with its peer review system. Yet, possibly because of the NSF's
sponsorship of this pole of pure curiosity-driven science, the NSF’s budget has
remained relatively small throughout the postwar era. It has, for example,
received only a fraction of the NIH’s funding.
In fact, the creation of a separate vehicle for biomedical and clinical
research at the NIH is itself an example of the only partial success of Bush’s
ideal. Additionally, the military establishment was not about to hand over the
direction of its research program to a group of scientists. Consequently, the
Department of Defense’s own applied weapons development projects, working
largely with commercial defense contractors and the Atomic Energy Commission, often focused on big projects like the Oak Ridge facility, which are examples of Kilgore’s favored centralized, state-controlled research policy. Later,
when space exploration became a serious concern of the government, it also
was organized along similar lines — as a centrally controlled, explicitly mission-based, applied research project.1
Bush’s vision, then, is just one of many ways the government can and does
fund science. Figure 6 (borrowed from Ruttan 2001, 537, and altered for this
paper) illustrates, both conceptually and by reference to actual examples, a continuum of possible combinations of ways that the government, the market, applied technological work, and new scientific knowledge might be organized and interact.

[Figure 6: Possible Interactions Between Basic Science and Applied Technology and Between the Market and Government]

In the upper quadrants of this diagram, there are purely applied
attempts, based mostly on trial and error, by commercial “inventors” like Edison
to invent new products. Alternatively, it is possible that investigation of a practical problem, like Pasteur’s work on alcohol fermentation, might inadvertently
lead to significant new scientific knowledge, such as the identification of the
role of microorganisms in organic processes. The government has also been
known — sometimes spectacularly successfully, as in some defense technology,
agricultural science, and the space race; sometimes dismally, as in alternative
energy research in the ’70s — to organize, fund, direct, and either disseminate or
use the skills and methods of science to meet a predetermined social need. This
is illustrated in the lower left quadrant. Finally, Bush’s vision is illustrated in the
lower right quadrant. In this model, the government funds basic research but
otherwise leaves its direction to the curiosity of the funded scientist.
How does this discussion relate to the current biotechnology era? First,
recall that the major player in the fundamental developments of bioscience has
been the NIH. In light of our discussion of the pervasive role of political considerations in generating support for science, the NIH represents a highly successful compromise model. It broadly represents the motivations of the public
by its system of twenty-five institutes organized around body systems (e.g., the
National Heart, Lung, and Blood Institute) and diseases (e.g., the National Cancer Institute). Moreover, as illustrated in Table 1, there is a rough-and-ready concordance between the nation's major health threats and the relative proportions
of the NIH budget devoted to them. Yet within each institute, and in accordance
with the Vannevar Bush vision, an extensive peer review system sorts out the
particular researchers and projects that will be funded.
It would be a mistake to say that NIH policy is perfect. That is an unachievable goal for any public policy. There has continued to be debate in both
the political and scientific communities about the proper balance between NIH
funding of basic science and more applied clinical activities. But in broad perspective, considering both its successes and its more realistic political model (compared with Bush's utopian ideal of science funded by the taxpayer but run only by scientists), the NIH is a fine example of a good policy trumping a theoretically "best" one. Policymakers should be mindful that its success rests on
so delicate a balance of social and intellectual forces. It is this system we have
to thank for the efforts that have made the United States the world leader in biomedical science.
[Table 1: Top Ten Diseases and Conditions by Level of NIH Funding, Fiscal Year 2000]
NOTE
1 A nice illustration of the political and organizational tensions that surround the methods of public research fund allocation is recounted by Ruttan (2001, 568, note 23), who attributes the following comment to former Surgeon General Jesse Steinfeld: "If the space program had been
conducted by NASA on an investigator-initiated project basis, we might now have 60,000 space
scientists, each 80 miles on the way to the moon.” Steinfeld’s comment was made in light of his
efforts to focus more of the NIH budget on disease-oriented clinical research, as opposed to
basic science. Though amusing and true enough about the space race, we should be careful
to note that it doesn’t tell the whole story. The space race had a relatively stable goal in mind
(put a man on the moon) and was working with relatively known scientific tools (rocket technology and planetary physics, etc.). Thus a centrally controlled crash program was feasible but, of
course, not guaranteed. It is not clear that we know enough yet about cancer, for instance, to
justify putting all of our scientific eggs into one basket.

REFERENCES
Arrow, K. J. (1962), “Economic Welfare and the Allocation of Resources for Invention,” in The Rate and Direction of Inventive Activity (New York: National Bureau of Economic Research).
Bud, R. (1993), The Uses of Life: A History of Biotechnology (Cambridge: Cambridge University Press).
Bush, V. (1945), Science, the Endless Frontier: A Report to the President by the Director of the Office of Scientific Research and Development (Washington, D.C.: U.S. Government Printing Office).
Dasgupta, P. (1987), “The Economic Theory of Technology Policy: An Introduction,” in Economic Policy and Technological Performance, ed. P. Dasgupta and P. Stoneman (Cambridge: Cambridge University Press).
Griliches, Z. (1992), “The Search for R&D Spillovers,” Scandinavian Journal of Economics 44: 29–41.
——— (1995), “R&D Productivity: Econometric Results and Measurement Issues,” in Handbook of the Economics of Innovation and Technological Change, ed. P. Stoneman (Oxford: Basil Blackwell), 52–89.
Hall, B. H. (1996), “Private and Social Returns to Research and Development,” in Technology, R&D, and the Economy, ed. B. R. Smith and C. E. Barfield (Washington, D.C.: Brookings Institution Press), 148–55.
Jones, C. I., and J. C. Williams (1998), “Measuring the Social Return to R&D,” Quarterly Journal of Economics 113: 1119–35.
Kleinman, D. L. (1995), Politics on the Endless Frontier: Postwar Research Policy in the United States (Durham, N.C.: Duke University Press).
Kornberg, A. (1997), “Support for Basic Biomedical Research: How Scientific Breakthroughs Occur,” in The Future of Biomedical Research, ed. C. E. Barfield and B. L. Smith (Washington, D.C.: American Enterprise Institute and The Brookings Institution).
Mansfield, E., et al. (1977), The Production and Application of New Industrial Technology (New York: Norton).
Martin, S. A., D. Winfield, A. Kenyon, J. Farris, M. Bala, and T. Bingham (1998), A Framework for Estimating the National Economic Benefits of ATP Funding of Medical Technologies (Research Triangle Park, N.C.: Research Triangle Institute). Available at www.rti.org/publications/cer/6715001.pdf.
Nelson, R. R. (1959), “The Simple Economics of Basic Scientific Research,” Journal of Political Economy 67: 297–306.
Ruttan, V. W. (2001), Technology, Growth and Development: An Induced Innovation Perspective (New York and Oxford: Oxford University Press).
Scherer, F. M. (1982), “Inter-industry Technology Flows and Productivity Growth,” Review of Economics and Statistics 64: 627–34.
Steil, B., D. G. Victor, and R. Nelson, eds. (2002), Technological Innovation and Economic Performance (Princeton, N.J.: Princeton University Press).
Tassey, G. (1999), R&D Trends in the U.S. Economy: Strategies and Policy Implications, A Planning Report of the National Institute of Standards and Technology (Washington, D.C.: National Institute of Standards and Technology, U.S. Department of Commerce). Available at www.nist.gov/director/prog-ofc/report99-2.pdf.

PART FIVE

Local Determinants of
Biotech Research
Commercializing Knowledge:
University Science, Knowledge Capture,
and Firm Performance in Biotechnology
Lynne G. Zucker, Michael R. Darby, and Jeff S. Armstrong


1. INTRODUCTION
Our research program over the past 10 years has focused on the use of basic
science knowledge in commercial firms and the impact of that knowledge on firm
performance. In our earlier research, we found substantial, consistent evidence that top academic science, specifically the star scientists who make most of the defining discoveries, provides intellectual human capital that defines the technology of the firm—at least following scientific breakthroughs. Although there are likely to be considerable spillover effects when knowledge is created or employed (Jaffe 1986, 1989), and perhaps also an important symbolic and legitimating function of high-quality science for commercial activity (Stephan and Everhart 1998), our empirical work identifies main, robust effects on firm performance due to star scientists’ real scientific labor contributions.
To “detect” stars and quantify their labor contributions to firms, we identified 327 “star” bio-scientists worldwide based on their publications of genetic-sequence discovery articles up to early 1990, before gene-sequencing machines were in widespread use. Stars were those cumulatively reporting more than 40 genetic-sequence discoveries, or appearing on 20 or more articles reporting any genetic-sequence discoveries, in GenBank (1990). We identified every “star” article on which
Reprinted by permission, Lynne G. Zucker, Michael R. Darby, and Jeff S. Armstrong, “Commercializing Knowledge: University Science, Knowledge Capture, and Firm Performance in Biotechnology,” Management Science, 48(1), 2002. Copyright 2002 the Institute for Operations Research
and the Management Sciences (INFORMS), 901 Elkridge Landing Road, Suite 400, Linthicum, MD
21090 USA.

the star, or (more frequently) a co-author, was affiliated with a firm. The number of these articles was our measure of the depth of star involvement in the firm.
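The selection rule above can be written as a simple filter. A minimal sketch in Python; the record layout is illustrative, not GenBank’s actual format:

```python
def is_star(discoveries: int, discovery_articles: int) -> bool:
    """Star criterion: more than 40 cumulative genetic-sequence discoveries,
    or authorship on 20 or more articles reporting any such discoveries."""
    return discoveries > 40 or discovery_articles >= 20

def select_stars(scientists):
    """scientists: iterable of (name, discoveries, discovery_articles) tuples
    (a hypothetical layout, not GenBank's record format)."""
    return [name for name, d, a in scientists if is_star(d, a)]
```

Applied to the 1990 GenBank counts, this rule is what yielded the 327 stars used throughout this research program.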
Before turning to new results reported in this article, a brief summary of
our prior results will be useful for readers not already familiar with our work:
• Location of top, “star” scientists predicts location of firm entry into new
technologies (both new and existing firms), shown for the United States
and Japan in biotechnology (Zucker, Darby, and Brewer 1998, Darby
and Zucker 2001) and replicated for the semiconductor industry in the
United States (Torero et al. 2001).
• Ties that involve actual work at the science bench between star scientists (mostly academics) and firm scientists consistently have a significant positive effect on a wide range of firm performance measures in
biotechnology (Zucker, Darby, and Armstrong 1998, Zucker and Darby
2001) and in semiconductors for number and quality of patents (Torero
1998). Ties to stars also shorten the time to IPO (firms are younger) and
increase the amount of IPO proceeds (Darby et al. 2001).
• As the quality of an academic star bio-scientist increases and his/her
research becomes more relevant to commercialization, the probability
increases that the scientist conducts joint research or moves to a firm. As
expected scientific returns increase—measured by citations to other local
star scientists working with firms—the probability that the next star will
begin working with a firm also increases (Zucker et al. 2001). Quality is
also positively related to working with firms in Japan, but only number of
articles predicts significantly with this smaller sample (Zucker et al. 2000).
Our findings on the importance of basic university science to successful
commercialization of important scientific discoveries are confirmed in other
research, especially the importance of intellectual human capital (Di Gregorio
and Shane 2000). Faculty are a key resource in creating and transferring early,
discovery research via commercial entrepreneurial behavior (Yarkin 2000).
Jensen and Thursby (2001) confirm that active, self-interested participation of
discovering professors is an essential condition for successful commercial licensing of university inventions. Thursby and Thursby (2000) find that the sharp
increase in university-industry technology transfer has not resulted so much
from a shift in the nature of faculty research as from an increased willingness of
faculty and administrators to license and increased interest on the part of firms.
In this article, we continue our research program on the economic value
of knowledge, especially tacit knowledge at the time of commercially relevant
scientific breakthroughs. We compare the real effects on the performance of
biotech firms of two overlapping groups of academic scientists who collaborate
with firm scientists: the stars who made significantly more genetic sequence discoveries, and all relevant scientists (including the bulk of the stars) employed at


one of the top 112 U.S. research universities ranked by federal research funding. Our overall results again support the strong effects of academic science on
the success of firms. Both science measures have strong positive independent
effects on most performance measures. The patent panels show that the labor
effort of the stars has a significant incremental impact on firm performance
above and beyond the effects of all scientists from top research universities
working with the firm. In cross-section estimates, we find significant positive
effects from either star or top 112 faculty linkages but efforts to enter both sets
of variables in the same regressions are confounded by multicollinearity. We
conclude that affordable bibliometric measures are good but not perfect substitutes for the costly-to-construct star measures.
The article is organized as follows. In §2 we develop our theoretical
approach to (a) the sources and implications of the information advantage —
common to most scientific breakthrough knowledge — held by the discovering
scientists, (b) the difficulties inherent to the transfer of tacit knowledge that lead
to joint research, and (c) the amount of knowledge capture necessary for firms
to offset sunk commercial development costs. In §3 we sketch the history of
scientific development and the rise of the biotech industry, focusing on the ties
between academic science and commercial firms. Since data are the plural of
anecdote, we present qualitative evidence of the importance of ties to star scientists for the performance of the most successful firms. In §4 we briefly review
the variables and their sources and then present and discuss the empirical
results. We estimate Poisson regressions (and linear least squares for employment) explaining the performance of a panel of biotech firms for patents and citation-weighted patents, and cross-sections for products in development, products on the market, and employment. In these regressions, we systematically test the predictive power of science (stars and top 112 university scientists tied to the firm
via co-authoring of scientific research, as well as all local academic scientific
publishing by stars), venture capital, and other firm characteristics such as use
of the dominant technology (rDNA or genetic engineering). In §5 we offer our
conclusions. Details on the data set and supplementary analyses are compiled
in a separate appendix, which is cross-referenced and available from the authors
on request.
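The Poisson specification used for the patent panels can be sketched numerically. The following is a minimal illustration, not the authors’ estimation code: it fits E[patents] = exp(b0 + b1·x) by Newton-Raphson on made-up data, with a single covariate x standing in for the full set of science, venture-capital, and firm-characteristic controls.

```python
import math

def poisson_fit(xs, ys, iters=50):
    """Fit E[y|x] = exp(b0 + b1*x) by Newton-Raphson maximum likelihood.
    Gradient is X'(y - mu); observed information is X' diag(mu) X."""
    b0 = math.log(sum(ys) / len(ys))  # start at the intercept-only MLE
    b1 = 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * x) for x in xs]
        # score vector
        g0 = sum(y - m for y, m in zip(ys, mu))
        g1 = sum(x * (y - m) for x, y, m in zip(xs, ys, mu))
        # 2x2 information matrix entries
        s00 = sum(mu)
        s01 = sum(x * m for x, m in zip(xs, mu))
        s11 = sum(x * x * m for x, m in zip(xs, mu))
        det = s00 * s11 - s01 * s01
        # Newton step: beta += information^{-1} * score
        b0 += (s11 * g0 - s01 * g1) / det
        b1 += (s00 * g1 - s01 * g0) / det
    return b0, b1
```

In the paper’s actual estimates the linear index contains many covariates; in practice a packaged generalized-linear-model routine would be used rather than this hand-rolled solver.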
2. THE REAL EFFECTS OF KNOWLEDGE CAPTURE
Academic-to-industry technology transfers may be rare, but we believe
they can still account for the bulk of technological progress. These are not pure
“transfers,” but necessarily knowledge captures to the degree necessary to offset
sunk development, marketing, and other costs invested in moving a discovery
into a commercial innovation. Many fundamental industry transformations or
technological breakthroughs can be traced to specific advances in science.


While the industries experiencing technological discontinuity are a distinct
minority in our economy, we argue that a distinct minority of firms within this
distinct minority of industries account for a large part of the aggregate technological progress conventionally measured in productivity studies (Harberger
1998, Darby and Zucker 2002).
Knowledge and the Market for Information
Our argument starts from the classic Stigler (1961) observation that information is a valuable and costly resource and that individuals are thus motivated
to adopt strategies, such as search, that weigh the expected costs and benefits
of acquiring information. For example, if individuals’ search involves unique
goods, then costs of search are sufficiently high that transactions are commonly
localized as a device for identifying potential buyers and sellers. Stigler pointed
out that medieval markets were an example of actual localization; advertising is
an example of a “virtually” localized market.
We argue that another mechanism of “virtual” localization is a profession, or
more commonly, a subspecialty within a profession.1 Here, the buyers and sellers
of knowledge, including new or “breakthrough” discoveries, are brought together
in a highly balkanized market in which the participants share a reasonably similar endowed knowledge base that makes the new knowledge potentially understandable and useable. The size and geographic distribution of that knowledge
base determines the extent of initial demand for the new knowledge. For the
purposes of our argument here, information and knowledge are equivalent.
From Tacit to Codified Knowledge
New information tends to be produced in tacit form, increasing in tacitness
as a function of distance from prior knowledge (hence, especially breakthrough
knowledge), and requires resources to codify. Tacit knowledge tends to be
highly personal, initially known only by one person (or a small team of discovering scientists) and is difficult to transfer to others (Polanyi 1962, Schutz 1962).
As knowledge increases in complexity, the probability increases that deviation from “textbook” description of action will be required (Nelson 1959, Nelson
and Winter 1982). For example, internal bleeding during surgery requires decisions about whether and how to deviate from the textbook that cannot be fully
prescribed in advance. This kind of complexity leads to knowledge remaining
tacit longer, perhaps remaining an “active task” that changes its nature in
response to contingencies in contrast to an “inert task” such as a secretary typing a letter written by his/her boss (Scott et al. 1967).
Knowledge becomes shared (intersubjective) to the extent that codes or
formulas are borrowed from pre-existing knowledge and/or are newly created.


Relevance to earlier knowledge allows borrowing of codes, mathematical expressions and relations, and even machines that “embody” those codes/math.
Such knowledge is cumulative and can be easily understood and transferred,
relying on references to the well-understood prior scientific literature.
But new knowledge that cannot be readily grafted on old is likely to offer
more opportunities. Opportunity can shift incentives — increasing them along a
continuum from incremental change to breakthrough discoveries (Klevorick et
al. 1995). Increased incentives to enter arise from these greater opportunities.
Discovering scientists become important in technology transfer when a
new discovery has both high commercial value and a combination of scarcity
and tacitness that defines natural excludability, the degree to which there is a
barrier to the flow of the valuable knowledge from the discoverers to other scientists. Tacit, complex knowledge provides partial natural protection of information, both separately and jointly with more formal property rights. Those with
the most information about breakthrough discoveries are the scientists actually
making them, so there is initial natural scarcity. To the extent that the knowledge is both scarce and tacit, it constitutes intellectual human capital retained
by the discovering scientists, and therefore they become the main resource
around which firms are built or transformed (Zucker, Darby, and Brewer 1998;
Zucker, Darby, and Armstrong 1998). Hence, tacit knowledge can be viewed as
at least partially rivalrous and excludable information and thus “appropriable”
as long as it remains difficult (or impossible) to learn it.
As tacit knowledge becomes increasingly codified — or translated into
“recipe knowledge” as Schutz (1962) terms it — tacitness decreases and knowledge transfer is easier. But significant barriers stand in the way of codification.
Relevance between old and new knowledge can be difficult to determine
(Schutz 1970), increasing the demand for social construction of new codes, formulae, and machines. The greater the discontinuity, the more difficult it is to
anchor in prior systems of knowledge.
Until there is a reliable indicator of the value of the new knowledge, the
size of the market for codification is unlikely to be large enough to cover the
cost of developing the new codes. Paradoxically, once the value is known,
• If the value is low relative to alternative uses of scientific talent, then
there are few incentives to codify it.
• If it is high, those few scientists who hold the new knowledge will have
to weigh returns to codification against returns to time invested in scientific research, a trade-off that pits knowledge transfer against knowledge creation.
Hence, the average scientific discovery is never codified, and valuable discoveries experience a significant codification lag that tends to increase with their
value.


Knowledge Capture via Team Production
Knowledge that is cumulative builds on an existing set of words and symbols, and hence involves less or no barrier to communication: Listening to a lecture or reading a text can suffice. But tacit knowledge often requires that one
of those already holding that knowledge works with the novices to teach them
in a hands-on process. For example, 81% of new authors enter GenBank by writing with old authors, and (excluding all sole-authored papers) new authors write exclusively with other new authors a significant 36% less often than “old,” experienced authors write exclusively with other “old” authors (Zucker et al. 2001).
Transfer may be very effective — there are well-documented effects of cumulative experience on performance improvement (Pisano et al. 2001) — but it is
slow and requires the active participation of the holder of the tacit knowledge.
Discovering scientists are typically willing to transfer knowledge primarily
in the context of their ongoing laboratory work. At the extreme, when tacitness
is high, it is their collaborators on their research team who are the recipients of
this knowledge; others are excluded through lack of access. Thus, the initial cost
of entry is high. But entry cost tends to decline over time, and the probability
of an error in the initial discovery also declines as others replicate it, thus reducing risk to the new entering scientist.2
This restricted process of transfer will more often than “normal science”
lead to sufficient knowledge capture to justify the cost of commercial development by a firm. Knowledge capture explains why tacit knowledge tends to be
highly localized: It will be concentrated geographically around where the discoveries are made (or where the discoverers move). As shown in Figure 1, there
is considerable concentration of patented inventions, as well as human therapies
and vaccines, in development and on the market. Just two states, California and
Massachusetts, with 14% of the U.S. population, have a disproportionate share
especially of U.S. products in development (49%) and on the market (58%).
Patenting is somewhat less concentrated; since patenting is both an input and
an output of the innovation process, this may suggest a lessening of geographic
concentration, perhaps as the discoveries mature and are codified. Generally
patents provide a useful incentive to the codification of knowledge, but in the
case of patented cell lines, a novel technique — deposit in an approved depositary to be publicly available on patent expiration — acknowledged the difficulty
in codifying exactly how the new organisms could be created.
Understanding the role of scientific teams in tacit knowledge transfer extends the arguments for team production: (a) Team organization makes routine
the transfer of tacit knowledge from the discoverer to other team members, and if
team members cross organizational boundaries, then tacit knowledge is efficiently
transferred — in the present case, most interestingly from university discovering
scientists to firm scientists (Zucker, Darby, and Armstrong 1998). (b) Through team organization, more productive cooperation is often achieved via specialization than is possible through the linking of individual efforts across impersonal markets (Demsetz 1995, p. 17).

Figure 1

The Geographic Distribution of Biotech Patents and New Products as of 1991
The greater the labor effort of the discovering university scientist(s) with
teams containing firm scientists, the greater the amount of tacit knowledge
transfer. In bench level collaboration, you can actually see how the science is
done. As tacit knowledge transfer increases from the discovering scientists, the
success of the firm also increases. Thus, managers of high tech firms have incentives to hire the top-discovering scientists if their discoveries have commercial
value. Discovering scientists also have incentives to found a new firm. In sharp
contrast, in industries where “normal science” reigns, hiring of below average,
acceptably competent scientists at a low wage is the typical practice (Kornhauser 1962). Obviously, each can be a market-value – maximizing strategy for
the firms facing different knowledge frontiers.
3. SCIENTISTS’ LEADERSHIP AND INDUSTRY SUCCESS: COMMERCIALIZING KNOWLEDGE
Biotechnology is a preeminent example of an industry undergoing very
rapid growth associated with radical technological change initiated in academe
and based on basic science breakthroughs. The key attributes can be summarized concisely:
• Breakthrough discovery: Professors Stanley Cohen (Stanford) and Herbert Boyer (University of California–San Francisco) reported the basic


technique for recombinant DNA, also known as rDNA, genetic engineering, or gene splicing (Cohen et al. 1973).
• University scientists: We identified star bio-scientists based on genetic
sequence discoveries reported in GenBank (1990), an online reference
file, and in this article introduce bio-scientists identified in ISI’s electronic file of research articles written by at least one author located at
one of the top 112 U.S. research universities.3 Star articles are (nearly)
a subset of top 112 articles (U.S. stars not in a top 112 university and
conference papers — less than 1% of the total star articles — are not
included in the ISI article files).
• Links/collaborations with firms: Articles that are co-authored by firm
employees and top scientists, including “stars” and the top 112 university scientists, indicate the intensity of involvement with the firm’s
research effort.4 Most of these scientists “wear two hats,” one as professor
at a university and one as a leader or lab head at a firm (confirmed
through interviews at universities and firms on both coasts).
Firm Success and Knowledge Capture
The degree to which an open scientific literature can produce such strong
apparent “knowledge capture” effects on firm success rests on (a) characteristics of tacit, complex knowledge that lead to natural excludability, and (b) selection by firms of discoveries for which the degree of knowledge capture is likely
to offset sunk costs incurred in making the scientific discovery a commercial
innovation.
To provide some intuition for our regression results, we first briefly review
examples of the prominent positions that top academic scientists are given in
the most successful biotech firms, identify their copublishing with the firm, and
finally explore the impact that top scientists’ copublications with firm scientists
have on success.
Top 10 Biotech Firms
Individual scientists are often highlighted in an IPO prospectus.5 These scientists typically achieved prominence in both their university and private sector
appointments. Examples of distinguished academics from the top-112 universities6 who were appointed to corporate officer positions in one of the top 10
biotechnology firms (as of 1994) include: (a) Herbert Boyer to the position of
vice president and director of Genentech Inc.;7 (b) Edward Penhoet, former faculty member of the Biochemistry Department at UC–Berkeley and co-founder
of Chiron, to the position of president, CEO, and director of Chiron; (c) Walter
Gilbert, the American Cancer Society Professor of Molecular Biology at Harvard

University and 1980 Nobel Prize winner, to several of Biogen’s boards, with Phillip Sharp, professor of biology at MIT, and Daniel Wang, professor of chemical and biochemical engineering at MIT, on its scientific board; (d) two founders of Genetics Institute were university faculty who also were executive officers and directors of the company, as well as members of its scientific advisory board; and (e) Amgen included on its scientific board prominent university professors from UCLA, CalTech, and Stanford, all members of the National Academy of Sciences.

Table 1

The 10 Most Highly Valued Biotechnology Firms in 1994:
Leading Academic Scientists Appear on Their IPO Prospectus and as Joint Authors
Table 1 shows that 40% of these top 10 biotech companies reported at
least one star on their team when going public, while 70% had linked articles
(star copublishing with at least one firm employee). Not surprisingly, because
of the much broader coverage of both scientists and universities, these top 10
biotech companies reported a higher percentage of top 112 university scientists:
80% reported at least one top 112 scientist on their team when going public,
and 90% had core collaborations with one or more of these scientists. The
advantage of the publishing measure is that it weights the amount of involvement of the scientist: For example, Centocor had only 1/20 as many core collaborative research articles as Genentech.
IPOs listed many former or current university professors as company
founders, officers, directors, or key members of scientific advisory boards (see
Appendix Table A1). Almost every scientist holding a top management position
had done so since the company’s founding. These scientists were not brought
in as part of the preparation for the IPO to merely “signal” the firm’s success,
contrary to a suggestion in Stephan and Everhart (1998).


Is Success in the Stars?
Certainly, scientists in high-ranking positions in these now public firms
provide scientific control and are important for firm success. However, the
majority of firms in our sample do not go public before the end of our time
period. In any case, we are interested in the actual work that top scientists do
that is joint with the firm. We measure this joint work by the cumulative number of collaborative articles.
Using the total number of joint articles, drawing on both of our science
measures, we can take a preliminary look at our findings by graphing the mean
values of the cumulative number of tied articles: for the stars, articles that
involve a star scientist and a firm scientist (where the star can also be an
employee of the firm) and for scientists at the top 112 universities, articles that
involve joint work by at least one university and one firm scientist. These values
are shown in Figure 2a. The differences are particularly striking at the 10+ article
level. The mean success by tied star articles is consistently and markedly higher
than for top 112 university scientists across our major success measures: patents,
products in development, and products on the market.
Figure 2a

Biotech Firms Are More Successful if Tied to Star Scientists
or if Linked to Top Research University Faculty


Figure 2b presents the comparable data on venture capital funding (data
from Venture Economics). The amount of venture capital funding is less consistent in its effects compared to tied/linked science results. While increasing
cumulative amount of venture financing generally increases both patents and
products in development, the magnitude of differences is small relative to the
tied/linked science effects shown in Figure 2a.
Concentration of Success
Darby and Zucker (2002) argue that much if not most of technological
progress is accounted for by relatively few firms operating in relatively few
industries undergoing rapid change. We will just touch on examples of concentration here:
• Industry Success Concentration: Top-decile biotech firms account for
64% of the total number of human therapies and vaccines in development (485 as of 1991), 43% of all patents, and a dominant 82% of human therapies and vaccines on the market. (See Appendix Figure A1.)
• Geographic Concentration: 64% of the total products in development
are concentrated in the top five states (Appendix Table A2); 58% of the
total products on the market are concentrated in those same five states
(Appendix Table A3).

Figure 2b

Biotech Firms Are More Successful if Funded by Venture Capitalists


4. EMPIRICAL RESULTS
The Data
The Zucker-Darby star-scientists/articles database has been a powerful
tool for exploring the co-evolution of life sciences and biotechnology. However,
that methodology involves an expenditure of resources justifiable only for pioneering academic efforts or sophisticated financial institutions. As the ISI databases are increasingly available, the extent to which electronic bibliometry can
substitute for hand coding and specialized technical knowledge is a question of
practical importance to both academic researchers and industry practitioners.
Here we use the basic tool of copublishing between academic and firm
scientists as a detector of joint research and (often two-way) university-industry
technology transfer. The Institute for Scientific Information (ISI 2000) U.S. University Science Indicators database on CD-ROM has extensive information on all
the scientific articles with at least one author at any of the top 112 U.S. research
universities.
Table 2 defines all the variables used in the empirical estimates and provides summary sample statistics for each. As in Zucker, Darby, and Armstrong
(1998), we classify each article in GenBank of which a star scientist is an author
relative to each firm as affiliated with the firm, as linked to the firm if the star
is unaffiliated but writing with the firm’s employees, and otherwise as untied
to the firm. Aggregating over all stars and time for each firm gives the first six
variables in Table 2. The “local” in local untied articles refers to articles by stars
affiliated with universities or research institutes in the firm’s functional economic
area (metro area plus exurbs as defined by the U.S. Bureau of Economic Analysis).
We attempted to find all articles written by any employee of each of our
biotech firms in the ISI (2000) database; these articles also must have at least
one top 112 university author to be included. Among these joint articles, we
focus on the “core collaborations” in the four central biotech fields catalogued
by ISI: biochemistry and biophysics, cell and developmental biology, molecular
biology and genetics, and microbiology. To control for variation in quality of the
collaborators, we also collected the number of citations in ISI-indexed journals
in the current plus next four years for each article.
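The quality adjustment amounts to summing citations over a five-year window (the publication year plus the next four years); a sketch, assuming the per-year citation counts are available as a mapping:

```python
# Hypothetical sketch: five-year citation window used to quality-adjust
# each collaboration (publication year plus the next four years).

def window_citations(pub_year, cites_by_year):
    """Sum citations received in pub_year .. pub_year + 4."""
    return sum(cites_by_year.get(pub_year + k, 0) for k in range(5))
```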
The firm characteristics and the last five dependent variables were mostly
collected from paper directories and industry studies used by industry participants when looking for suppliers and customers. This methodology is tedious,
but it is one of the few available for analysis of large numbers of privately (as
well as publicly) held firms. As described in other papers referenced in Table 2,
considerable effort was expended in ensuring that uniform coding procedures
were applied to obtain quantitative variables from text records.
Commercializing Knowledge

Table 2
Definitions and Sample Statistics for Variables

The primary exception was the venture-funding data obtained by licensing the Venture Economics database and deflating dollar amounts by the GDP
deflator. We also had the list of licensees of the UC–Stanford Cohen-Boyer
patent as an alternate indicator of the use of recombinant DNA technology. We
bought our list of biotech patents from CHI Research, Inc., in 1997. We verified
that the CHI list included all patents listed in U.S. Department of Commerce,
Patent and Trademark Office (1993), plus other appropriate patents. Counts of
citations to date by other patents were also included.
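Deflating the venture-funding dollar amounts by the GDP deflator, as described above, is a single arithmetic step; a sketch with illustrative (not actual) deflator index values:

```python
# Hypothetical sketch: convert nominal venture-funding amounts to real
# dollars using the GDP deflator. Index values below are illustrative
# placeholders, not the actual deflator series.

GDP_DEFLATOR = {1988: 60.0, 1989: 62.5, 1990: 65.0}  # assumed, base = 100

def real_dollars(nominal, year, deflator=GDP_DEFLATOR):
    """Nominal amount divided by the deflator (base year = 100)."""
    return nominal * 100.0 / deflator[year]
```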

Lynne G. Zucker, Michael R. Darby, and Jeff S. Armstrong

Table 3
Panel Estimates for Patenting-Success Models for All U.S. Firms and Years 1976–1991

The Estimates
In a technology-intensive industry like biotechnology, patents are a crucial
measure of success. Patents serve as a measure of output from a firm’s “knowledge production function” (Griliches 1990). The patent permits knowledge capture by establishing ownership rights to the invention’s commercial rewards
until the patent expiration date and even beyond expiration to the extent the
firm establishes brand recognition. Patenting success also impacts the firm’s
ability to raise public equity capital.8 Because patent acquisition is key to both
financial and nonfinancial measures of success and citations data are available
with which to quality-adjust a firm’s patents, the patenting success models are
a key testing ground for the electronic version of our star methodology.
Table 3 reports standard Poisson regression estimates for panel data on
U.S. patenting by U.S. biotech firms. The standard errors are corrected using the
procedure of Wooldridge (1991).9 Models a and e in Table 3 indicate that simple firm characteristics available for both private and public firms do a good job of explaining patenting. Entrants are generally at a disadvantage, experience
helps, and use of the dominant technology (recombinant DNA or genetic engineering) is a positive factor for both quantity and quality of patenting. As always
with forward-looking financial variables, the positive effect of the cumulative
amount of venture capital investment may confound real R&D productivity of
the investments with forecasting the effects of other, omitted variables.
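The patent-count models are standard Poisson regressions; the fitting step can be sketched with a plain Newton-Raphson maximum-likelihood routine. This sketch omits the Wooldridge (1991) standard-error correction and the actual covariates; the simulated data and variable names are purely illustrative.

```python
import numpy as np

def poisson_fit(X, y, iters=25):
    """Newton-Raphson MLE for a Poisson model E[y | x] = exp(x'beta),
    the 'knowledge production function' form used for patent counts."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)            # conditional mean
        grad = X.T @ (y - mu)            # score vector
        hess = X.T @ (X * mu[:, None])   # Fisher information
        beta = beta + np.linalg.solve(hess, grad)
    return beta

# Illustrative simulation: one firm characteristic driving patent counts.
rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
true_beta = np.array([0.5, 0.3])
y = rng.poisson(np.exp(X @ true_beta)).astype(float)
beta_hat = poisson_fit(X, y)   # should be close to true_beta
```

With count data such as patents, this maximum-likelihood fit is consistent even under mild misspecification, which is why only the standard errors require correction.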
Firms that have many articles with star scientists also tend to have many
articles with top 112 university faculty; indeed, nearly all the linked star articles
are also included in the top 112 core collaborations count of joint faculty-firm
articles. If one adds either the star variables used in Zucker, Darby, and Armstrong (1998) or core collaborations and their mean citations (a quality measure) as in Models b and c or f and g, we see that either set of indicators
improves the explanatory power of the models. In the current case of patents
and patent citations, the fit is a little better with the new variables than with the
star-based variables, but we will see below that just the opposite is true for all
products and for human therapeutics and vaccines on the market. The failure of
local untied star articles to enter with a significantly positive coefficient reaffirms our (1998)
result that localized knowledge impacts of universities on industry are associated with market transactions rather than uncompensated spillovers from the
ivory tower. The coefficients on cumulative venture capital investment are only
mildly reduced by inclusion of either (or both) of the star or top 112-based
measures of the firm’s science base. This suggests that venture capitalists in the
1980s were not discriminating much among biotech firms on the basis of scientific depth, so we obtain independent effects on research productivity of both
intellectual and financial capital. The significance of the knowable science-base
information implies that the capital markets were not fully incorporating it in
allocating capital.
Models d and h in Table 3 experiment with adding both sets of science indicators at once. Since linked star articles are generally included in the top 112 core
collaboration counts, the coefficient on linked articles measures the additional
impact of stars on firm research output over and above that of the “average” joint
authorship with a professor from a top 112 university. The coefficients for all core
collaborations and their mean citations as well as this additional star impact are
positive and significant for patents and patent citations. The negative coefficient
on affiliated star scientists in these full regressions appears to reflect the special
circumstances of one or two firms that have the bulk of affiliated articles.
Unfortunately, the smaller samples for the cross-section results in Tables 4,
5, and 6 — comparable patent cross-sections are in the appendix available on
request — seem more confounded by the near multicollinearity of the science
variables observed cumulatively up to 1990: For the full Models d and h, where
both the star and top 112 faculty-firm article coefficients are significant, they
have opposite signs. We would prefer panel estimates for products in development

164

Lynne G. Zucker, Michael R. Darby, and Jeff S. Armstrong

Table 4

Estimates for Products-in-Development Models for All U.S. Firms

and on the market and employment also, but each observation is very costly to
obtain from old paper directories for these predominantly private start-up firms.
As with the patent panels, we get generally significantly positive coefficients for
linked and affiliated star articles (Models b and f in Tables 4, 5, and 6) or for
top 112 core collaboration articles and their mean citations. Employment is the
one dependent variable without many zeroes; in Table 6 we estimate the log of
1994 employment in accord with Gibrat’s Law (Sutton 1997).
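Estimating log 1994 employment by OLS, in the spirit of Gibrat's-law size regressions, can be sketched as follows. The regressor here is an illustrative stand-in for the firm characteristics actually used, and the data are simulated.

```python
import numpy as np

def ols(X, y):
    """OLS coefficients via least squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Illustrative data: log 1994 employment on a constant and a stand-in
# science-base measure (count of linked articles); values are simulated.
rng = np.random.default_rng(1)
n = 400
linked = rng.poisson(2.0, size=n).astype(float)
X = np.column_stack([np.ones(n), linked])
log_emp = 3.0 + 0.25 * linked + rng.normal(scale=0.5, size=n)
beta_hat = ols(X, log_emp)   # should be close to (3.0, 0.25)
```

Taking the log of employment as the dependent variable is what makes the specification consistent with Gibrat's Law, under which proportional (not absolute) growth is independent of size.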
In summary, the empirical work strongly supports the central message that
university-firm technology transfer for breakthrough discoveries generally involves
detectable joint research between top professors and firms that they own or are
compensated by. We have shown that our electronic bibliometry provides good
but imperfect substitutes for the more costly-to-obtain and difficult-to-operationalize star measures. In particular, in large samples where we can obtain separable impacts, star linkages appear to have a significantly larger effect on firm
research productivity than the average article written jointly by top research university professors and firm employees.


Table 5

Estimates for Products-on-the-Market Models for All U.S. Firms

5. CONCLUSIONS
Breakthrough discoveries in gene splicing set off a revolution in bioscience and created the biotechnology industry. These discoveries thus set the stage for increased opportunity and increased incentives to enter. But significant natural barriers to the communication of new knowledge often exist. New knowledge tends to be developed in tacit form and requires resources to codify. New codes and formulas to describe discoveries develop slowly, with insufficient incentives if the value is low and too many competing opportunities if the value is high. Hence new knowledge tends to remain uncodified, difficult to obtain except through hands-on learning at the lab bench, and therefore naturally excludable and appropriable. Our basic argument is that knowledge close to breakthrough discoveries must be transformed into words, codes, and/or formulas before it can be easily transferred.
Difficulties inherent to the transfer of tacit knowledge lead to joint research: Team production allows more knowledge capture of tacit, complex discoveries by firm scientists. A robust detector of tacit knowledge capture by the firm (and a strong predictor of firm success) is the number of research articles written jointly by scientists working at a firm and the discovering "star" scientists, nearly all working at top universities. For firms to commercialize new discoveries, there must be sufficient knowledge capture by the firm to offset sunk commercial development costs.

Table 6
OLS Estimates for 1994 Employment for All Reporting U.S. Firms, Dependent Variable: Natural Logarithm of Total Employees as of 1994
We find the results reported in Zucker, Darby, and Armstrong (1998) to be
replicated to a major extent in the whole United States. The principal finding in
our earlier paper, covering only California firms, was that research collaborations between firm scientists and university star scientists (the ties) had a robustly
significant positive effect on firm performance. The local pool of bioscience
knowledge generated by nearby but noncollaborating scientists had no positive
effect, providing further evidence for embodied technology transfer through
markets rather than “knowledge spillovers.” But this article is not simply a replication and scale-up.
In this article we add a generalized form of our star measure: the collaborative research articles between firm scientists and top U.S. university scientists.


In panel analyses, firms whose scientists collaborate with stars and/or top 112
U.S. university scientists have more patents and more highly cited patents. Further, star articles have an incremental positive effect above top 112 university
scientists’ articles on the number and quality of patents. Our cross-sectional
analyses of products and employment show a generally similar pattern of positive effects on firms’ success of collaborations with stars or top university scientists, but the incremental effects are less systematic. This nonrobustness appears
to be due to multicollinearity. As predicted, untied star articles are either nonsignificant or oscillate between significant positive and negative effects. Venture
capital funding amounts were always significant and usually positive.
The overall importance of ties, compared with the insignificance or instability of untied star effects, suggests that working jointly at the lab bench is a crucial transfer mechanism when knowledge has a large tacit component. Further, our findings suggest that, as we predicted, tacit knowledge is embodied in individual discovering scientists. Telephone interviews that Jeff Armstrong conducted with university star scientists revealed that their relationships
with firms were governed by tight contractual arrangements, academic scientists
typically being “vertically integrated” into the firm in the sense of receiving
equity compensation and being bound by exclusivity agreements. This evidence
that star scientists were either fully employed by firms or were governed in their
relationships with firms by explicit contracts supported our conclusion that firm
success was not the result of a general knowledge “spillover” from universities
to firms but due to star scientists taking charge of their discoveries.
ACKNOWLEDGMENTS
This research has been supported by grants from the University of California’s Industry–University
Cooperative Research Program, the University of California Systemwide Biotechnology Research
and Education Program, the Alfred P. Sloan Foundation through the NBER Research Program on
Industrial Technology and Productivity, and the National Science Foundation (SES 9012925). The
authors also appreciate very useful comments from Scott Shane, Scott Stern, and other conference participants. Postdoctoral fellow David Waguespack prepared the series on all publishing between firms and the top 112 research universities for biotechnology firms based on ISI
data. This article is part of the NBER’s research program in productivity. Any opinions expressed
are those of the authors and not those of the National Bureau of Economic Research.

NOTES
1. Most commonly, there are multiple virtually localized markets organized around competing perspectives or models employed within the subspecialty. There is also geographic localization within the professions, with advantages to universities or cities with a "critical mass" of scientists who can interact. Thus UCSF, with its critical mass of molecular biologists and related sciences and nearby strong universities, was "ripe" for a breakthrough.


2. Note that when multiple teams are racing for a "ripe" discovery and publish their results almost simultaneously, we have much more rapid confirmation and validation of the discovery, which promotes faster learning by others. Gina Durante, a graduate student at the Anderson School at UCLA, suggested this point.

3. The top 112 universities are defined in terms of rank order on federal research funding received. The top 112 are defined by the Institute for Scientific Information, from which the data were purchased.

4. In 1994, Jeff Armstrong conducted a telephone survey of randomly selected linked stars in California and found that most possess a significant equity or founding interest in the firm.

5. The prospectuses were obtained from Thomson Financial Services. The 10 companies in the table were the top biotechnology firms in 1994 as reported by Lee and Burrill (1995, p. 16).

6. Due to human-subjects restrictions, we cannot reveal the identity of the star scientists. The following scientists may or may not be included in our list of U.S. stars.

7. It is interesting that Genentech, which had the largest number of star scientists of any firm, appeared to avoid mentioning stars in its prospectus resume unless the star had a formal corporate position. The one leading scientist who was listed on the prospectus was Dr. Boyer, who made it a policy never to publish a genetic-sequence discovery article as or with a Genentech employee.

8. See Darby et al. (2001).

9. The significance of key variables in these regressions is generally not sensitive to the Wooldridge correction, but to achieve an estimate of the variance-covariance matrix that is not restricted by first-moment parameter estimates, we apply the Wooldridge method as we did in the California study. An alternative would be to implement a binomial specification, but as explained in Wooldridge (1991), this procedure may bias both first- and second-moment estimates, whereas the Poisson process potentially biases only the second-moment parameters.

REFERENCES
Bioscan. 1989–1998. Volumes 3–12. The Oryx Press, Phoenix, AZ.
Cohen, S., A. Chang, H. Boyer, R. Helling. 1973. Construction of biologically functional bacterial
plasmids in vitro. Proc. Nat. Acad. Sci. 70(11): 3240 – 3244.
Darby, M. R., L. G. Zucker. 2001. Change or die: The adoption of biotechnology in the Japanese
and U.S. pharmaceutical industries. Res. Tech. Innovation, Management, Policy 7: 85 –125.
———, ———. 2002. Growing by leaps and inches: Creative destruction and the Crusonia plant.
Econom. Inq. 40 (forthcoming).
———, I. I. Welch, L. G. Zucker. 2001. Going public when you can in biotechnology. Working
paper, UCLA Anderson School, Los Angeles, CA.
Demsetz, H. 1995. Agency and nonagency explanations of the firm’s organization. The Economics
of the Business Firm: Seven Critical Commentaries, Cambridge University Press, Cambridge, U.K.


Di Gregorio, D., S. Shane. 2000. Why do some universities generate more start-ups than others?
Working paper, University of New Mexico and University of Maryland, College Park, MD.
GenBank. 1990. Release 65.0, machine-readable database. IntelliGenetics, Inc., Palo Alto, CA.
Griliches, Z. 1990. Patent statistics as economic indicators: A survey. J. Econom. Lit. 28(4):
1661–1707.
Harberger, A. C. 1998. A vision of the growth process. Amer. Econom. Rev. 88(1): 1– 32.
Institute for Scientific Information (ISI). 2000. U.S. University Science Indicators. Machine-readable database on CD-ROM. Institute for Scientific Information, Philadelphia, PA.
Jaffe, A. B. 1986. Technological opportunity and spillovers of R&D: Evidence from firms’ patents,
profits, and market value. Amer. Econom. Rev. 76(5): 984 –1001.
———. 1989. Real effects of academic research. Amer. Econom. Rev. 79(5): 957– 970.
Jensen, R., M. Thursby. 2001. Proofs and prototypes for sale: The tale of university licensing.
Amer. Econom. Rev. 91(1): 240 – 259.
Klevorick, A. K., R. C. Levin, R. R. Nelson, S. G. Winter. 1995. On the sources and significance
of interindustry differences in technological opportunities. Res. Policy. 24(2): 185 – 205.
Kornhauser, W. 1962. Scientists in Industry: Conflict and Accommodation. University of California
Press, Berkeley, CA.
Lee, K. B., Jr., G. S. Burrill. 1995. Biotech 95: Reform, Restructure, Renewal. Ernst & Young, San
Francisco, CA.
Nelson, R. R. 1959. The economics of invention: A survey of the literature. J. Bus. 32(2): 101–127.
———, S. G. Winter. 1982. An Evolutionary Theory of Economic Change. Harvard University
Press, Cambridge, MA.
Pisano, G. P., R. M. J. Bohmer, A. C. Edmondson. 2001. Organizational differences in rates of
learning: Evidence from the adoption of minimally invasive cardiac surgery. Management Sci.
47(6): 752–768.
Polanyi, M. 1962. Personal Knowledge: Towards a Post-Critical Philosophy. University of Chicago
Press, Chicago, IL.
Schutz, A. 1962. On multiple realities. Collected Papers 1: 207– 259. Martinus Nijhoff, The Hague,
The Netherlands.
———. 1970. Reflections on the Problem of Relevance. Yale University Press, New Haven, CT.
Scott, W. R., S. M. Dornbusch, B. C. Busching, J. D. Laing. 1967. Organizational evaluation and
authority. Admin. Sci. Quart. 12: 99 –117.


Stephan, P. E., S. S. Everhart. 1998. The changing rewards to science: The case of biotechnology. Small Bus. Econom. 10(2): 141–151.
Stigler, G. J. 1961. The economics of information. J. Polit. Econom. 69(3): 213 – 225.
Sutton, J. 1997. Gibrat’s legacy. J. Econom. Lit. 35(1): 40 – 59.
Thursby, J. G., M. Thursby. 2000. Who is selling the ivory tower? Sources of growth in university licensing. Conf. Tech. Transfer Univ. Entrepreneurship, Georgia Institute of Technology,
Atlanta, GA.
Torero, M. 1998. Analyzing the spillover mechanism on the semiconductor industry in the Silicon
Valley and route 128. Essays on Diffusion of Technical Change. Unpublished Ph.D. dissertation,
UCLA Economics Department, Los Angeles, CA.
Torero, M., M. R. Darby, L. G. Zucker. 2001. The importance of intellectual human capital in the
birth of the semiconductor industry. Working paper, UCLA Anderson School, Los Angeles, CA.
U.S. Department of Commerce, Patent and Trademark Office. 1993. Patent Technology Set:
Genetic Engineering. Machine readable database on CD-ROM. U.S. Department of Commerce,
Office of Information Systems, Washington, D.C.
Wooldridge, J. M. 1991. On the application of robust, regression-based diagnostics to models of
conditional means and conditional variances. J. Econometrics 47: 5 – 46.
Yarkin, C. 2000. Assessing the role of the University of California in the state’s biotechnology
economy. The Economic and Social Dynamics of Biotechnology. Kluwer Academic Publishers,
Boston, MA.
Zucker, L. G., M. R. Darby. 2001. Capturing technological opportunity via Japan’s star scientists:
Evidence from Japanese firms’ biotech patents and products. J. Tech. Transfer. 26(1/2): 37– 58.
———, ———, J. S. Armstrong. 1998. Geographically localized knowledge: Spillovers or markets?
Econom. Inq. 36(1): 65 – 86.
———, ———, M. B. Brewer. 1998. Intellectual human capital and the birth of U.S. biotechnology enterprises. Amer. Econom. Rev. 88(1): 290 – 306.
———, ———, M. Torero. 2000. Determinants of embodied technology transfer from stars to
firms. Working paper, UCLA Anderson School, Los Angeles, CA.
———, ———, ———. 2001. Labor mobility from academe to commerce. J. Labor Econom. 20:
629 – 660.