
The Geography of Research and
Development Activity in the U.S.*
BY KRISTY BUZARD AND GERALD A. CARLINO

In the U.S., metropolitan areas contain the
largest concentrations of people and jobs.
Despite some drawbacks, these so-called
agglomeration economies also have benefits,
such as the cost savings that result from being close to
suppliers and workers. Spatial concentration is even more
pronounced among establishments that do basic research
and development (R&D). In this article, Kristy Buzard
and Jerry Carlino show that geographic concentration of
R&D extends beyond locations such as Silicon Valley.
In fact, many types of R&D establishments are highly
concentrated geographically.

Although metropolitan areas account for less than 20 percent of the
total land area in the United States,
they contain almost 80 percent of
the nation’s population and nearly 85
percent of its jobs. Put differently, the
United States has, on average, 24 jobs
per square mile, but metropolitan areas
average about 124 jobs per square mile.
This high degree of spatial concentration of people and jobs leads to congestion costs, such as increased traffic and pollution, and higher housing costs. Congestion has become so severe in London that in February 2003, the city imposed a fee, currently £8 a day, on all vehicles entering, leaving, driving, or parking on a public road inside the Charging Zone between 7:00 a.m. and 6:00 p.m., Monday through Friday. New York City recently considered a similar plan. To offset these congestion costs, workers must receive higher wages, and higher wages increase firms' costs.

Jerry Carlino is a senior economic advisor and economist in the Research Department of the Philadelphia Fed. This article is available free of charge at www.philadelphiafed.org/research-and-data/publications/.
If congestion costs were the
only thing resulting from the spatial
concentration of firms, firms could
easily disperse to reduce these costs.
Yet they do not. This is because the
negative effects of concentration make up only one side of the urban ledger. The positive effects of agglomeration economies — efficiency gains and cost savings that result from being close to suppliers, workers, customers, and even competitors — make up the other. Other things equal, firms will have little incentive to move if congestion costs are balanced by the benefits of agglomeration economies.

*The views expressed here are those of the authors and do not necessarily represent the views of the Federal Reserve Bank of Philadelphia or the Federal Reserve System.
While economic activity tends
to be geographically concentrated,
spatial concentration is even more
pronounced among establishments
doing basic research and development
(R&D). For example, although the
United States has more than 3,100
counties, the 50 counties that contain the largest number of R&D labs
account for almost 60 percent of all
such labs, while the top 50 counties in
terms of the overall number of plants
across all industries account for only
about one-third of all plants.
More than most economic activity, R&D depends on a particular
byproduct of agglomeration economies
called knowledge spillovers — the
continuing exchange of ideas among
individuals and firms. The high
geographic concentration of R&D labs
creates an environment in which ideas
move quickly from person to person
and from lab to lab. Locations that are dense in R&D activity encourage knowledge spillovers, thus facilitating the exchange of ideas that underlies the creation of new goods and new ways of producing existing goods.

Kristy Buzard is currently a graduate student in economics at the University of California-San Diego. Previously, she was an economic analyst in the Research Department of the Philadelphia Fed.

Business Review Q3 2008
Policymakers view the success
of areas such as Silicon Valley in
California, the Route 128 corridor in
Boston, and North Carolina’s Research Triangle as a miraculous recipe
for local economic development and
growth. But are these examples exceptions rather than the rule? The answer
appears to be no. Equally remarkable
concentrations may be found in many
other types of R&D activity, such
as the concentration of R&D in the
pharmaceutical industry in northern
New Jersey and southeastern Pennsylvania. In this article, we show that
many types of R&D establishments are
highly concentrated geographically.
CLUSTERING OF R&D LABS
Some studies have looked at the
geographic clustering of economic
activity in a particular industry, such
as manufacturing or advertising. A
study by Glenn Ellison and Edward
Glaeser and one by Stuart Rosenthal
and William Strange find evidence of
geographic concentration of employment in many U.S. manufacturing
industries. The geographic concentration of manufacturing jobs is not
simply an American phenomenon, as
Gilles Duranton and Henry Overman
demonstrate in their analysis of manufacturing plants in the UK.
A study by Mohammad Arzaghi
and Vernon Henderson looks at the
location pattern of firms in the advertising industry in Manhattan. They
report that Manhattan accounts for 20
percent of total national employment
in the ad industry, 24 percent of all
advertising agency receipts, and 31 percent of media billings. They show that
for an ad agency, knowledge spillovers
and the benefits of networking with

2 Q3 2008 Business Review

other nearby agencies are large but the
benefits dissipate very quickly with distance from other ad agencies and are
gone after roughly one-half of a mile.
Thomas Holmes and John Stevens take a broader approach. They used employment data for all U.S. industries, not just manufacturing, and not for just a single industry, such as advertising. Among the 15 most concentrated industries, they find that six are in mining and seven are in manufacturing; only two industries fall outside mining and manufacturing (casino hotels and motion picture and video distribution).

Our article differs from past studies in two ways. First, rather than looking at the geographic concentration of firms engaged in the production of goods (such as manufacturing) and services (such as advertising), we consider the spatial concentration of private R&D activity.1 Second, rather than focusing on the concentration of employment in a given industry, we look at the clustering of individual R&D labs.2 To do this, we used 1998 data from the Directory of American Research and Technology to electronically code the addresses and other information about R&D labs. These data were not available in a machine-readable format. Since the directory lists the complete address for each establishment, we were able to assign a geographic identifier (using geocoding techniques) to 3,129 R&D labs in the U.S. in 1998.3

A map of the spatial distribution
of R&D labs reveals a striking clustering of this activity (Figure 1). In places
that have little R&D activity, each dot
on the map represents the location of
a single R&D lab. For example, there
is only one lab in Montana, represented by the dot in Flathead County.
In counties with a dense clustering of
labs, the dots tend to sit on top of one
another, representing a concentration
of labs. A prominent feature of the
map is the high concentration of R&D
activity in the Northeast corridor,
stretching from northern Virginia to

1 There are a number of other studies that look at innovative output across cities, such as the study by David Audretsch and Maryann Feldman. What is unique about our article is that we present information on local private R&D activity, which no one else has done.

2 The study by Paulo Guimarães, Octávio Figueiredo, and Douglas Woodward is one of only a few other studies we are aware of that look at spatial clustering at the establishment level. Specifically, they look at the geographic concentration of over 45,000 plants in 1999 for concelhos (counties) in Portugal. Duranton and Overman use plant-level data to study the locational pattern of UK manufacturing industries.

3 Our data on individual labs were limited to the top 1,000 U.S. public companies in terms of R&D expenditure in 1999. The 1,000 firms cover more than 95 percent of all R&D performed by public companies. Many of these firms have multiple labs. For example, the Lockheed Martin Corporation has 54 labs, and General Electric has 26.

FIGURE 1
Location of Total R&D Labs*

* In counties with relatively little R&D activity, the dots on the map represent the location
of a single R&D lab. In counties having a dense concentration of labs, the dots represent a
concentration of labs.

Massachusetts. There are other concentrations, such as the cluster around
the Great Lakes and the concentration
of labs in California’s Bay Area and in
southern California. But some states
that account for a relatively large share
of the nation’s jobs account for a much
smaller share of the nation’s R&D
labs. For example, Texas ranks second
among states in terms of employment,
but it ranks eighth in the number of
R&D labs. Similarly, Florida ranks
fourth in employment, but 13th in the
number of labs.
However, as already noted, recent
studies have shown that economic
activity, especially manufacturing, also
tends to be geographically clustered.
We will show that R&D activity tends
to be more spatially concentrated than
total employment or manufacturing
employment. There are 3,141 counties in the U.S., and all of them are
engaged in some type of economic
activity. All but 33 counties are engaged in some form of manufacturing
activity. In contrast, only 519 of these
counties have at least one R&D lab,
and far fewer counties have a notable
concentration of labs.
A simple way to quantify the
concentrations of R&D relative
to establishments in general or to
manufacturing establishments in
particular is to first compute each
county’s share of total R&D labs and
rank counties by descending order of
this share. Moving down this ranking,
we compute a cumulative total for the
share of R&D labs. Next, we construct
a similar ranking for establishments
in general and for manufacturing
establishments in particular. The top
50 counties ranked by number of R&D
labs account for 58 percent of all R&D
labs, while the top 50 counties ranked
by number of manufacturing establishments account for only 36 percent of
all manufacturing establishments and
only 32 percent of all establishments. It

appears that R&D labs are more highly
concentrated than economic activity
in general and overall manufacturing
activity in particular. This is important
because it means the concentration
of R&D labs doesn’t simply reflect
the concentration of manufacturing
activity. Since R&D is more concentrated than manufacturing activity,
this suggests that some factors, such as
knowledge spillovers, may be a more
centralizing force for R&D than they
are for manufacturing activity.
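The ranking exercise described above can be sketched in a few lines. The county names and lab counts below are made up for illustration; the article's actual calculation uses all 3,141 U.S. counties.

```python
# Sketch of the cumulative top-k share calculation, with hypothetical data.

def top_k_share(counts, k):
    """Share of the national total held by the k counties with the most labs."""
    total = sum(counts.values())
    top = sorted(counts.values(), reverse=True)[:k]
    return sum(top) / total

# Hypothetical lab counts per county (not the article's data).
rd_labs = {"Santa Clara, CA": 120, "Middlesex, MA": 95, "Cook, IL": 40,
           "Harris, TX": 25, "Allegheny, PA": 15, "Flathead, MT": 1}

top2 = top_k_share(rd_labs, 2)  # share held by the two largest counties, ~0.73 here
```

Applying the same function to counts of all establishments, or of manufacturing establishments only, gives the comparison drawn in the text: the steeper the cumulative share for a fixed k, the more concentrated the activity.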
WHICH R&D LABS CLUSTER?
Paul Krugman and David Audretsch and Maryann Feldman developed a “locational Gini coefficient” to
answer the question of which manufacturing industries cluster geographically. A locational Gini coefficient
shows how similar (or dissimilar) the
location pattern of employment in a
particular manufacturing industry is
from the location pattern of overall
manufacturing employment. It does
this by subtracting a county’s share of
national employment in manufacturing from the county’s share of national
employment in a given manufacturing industry, squaring the result, and
summing over locations to arrive at
a single number. The squaring of the
difference in shares means that larger
differences contribute more than proportionately to the overall value of the
index. If the squared difference takes a
value of zero, employment in a particular industry is allocated across counties
in exactly the same way as employment
in manufacturing. That is, this would
indicate that employment in a given
manufacturing industry is no more or
less geographically concentrated than
overall manufacturing employment.
At the other extreme, the locational
Gini coefficient takes on values close
to one when employment in a given
industry is completely concentrated in
one county.
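The locational Gini calculation just described can be written out directly: for each county, subtract its share of overall manufacturing employment from its share of employment in the industry, square the difference, and sum. The county shares below are hypothetical.

```python
# Sketch of the (unadjusted) locational Gini coefficient described above.

def locational_gini(industry_shares, manufacturing_shares):
    """Sum of squared differences between an industry's county employment
    shares and the county shares of overall manufacturing employment."""
    return sum(
        (industry_shares.get(county, 0.0) - mfg_share) ** 2
        for county, mfg_share in manufacturing_shares.items()
    )

mfg = {"A": 0.5, "B": 0.3, "C": 0.2}  # hypothetical manufacturing shares

# An industry distributed exactly like manufacturing scores zero ...
same = locational_gini({"A": 0.5, "B": 0.3, "C": 0.2}, mfg)

# ... while one packed into a single county scores much higher:
# 0.25 + 0.09 + 0.04, roughly 0.38.
concentrated = locational_gini({"A": 1.0, "B": 0.0, "C": 0.0}, mfg)
```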


Glenn Ellison and Edward Glaeser
have identified a potential problem
with the locational Gini coefficient.
They argue that if an industry consists
of a small number of large establishments, the locational Gini coefficient
may take on large values, suggesting
localization of the industry even if
there is no agglomeration force behind
the industry’s location. They refer
to this as the dartboard approach to
geographic concentration, using the
metaphor of a few darts tossed at a
dartboard randomly creating a cluster.
Ellison and Glaeser have developed an
alternative concentration measure —
called the Ellison-Glaeser, or the EG,
index — that controls for an industry’s
organization.
Recently, Paulo Guimarães, Octávio Figueiredo, and Douglas Woodward (GFW) have generalized the EG
index to include the case where the
data are in the form of establishments
(labs, in our case) rather than employment shares, as in the EG index. The
GFW locational Gini, or the GFW
index, for R&D labs is constructed just
like the locational Gini for employment except each county’s share of the
nation’s labs in a given industry is used
instead of the county’s employment
share for the industry. As before, the
GFW index for a given industry takes
on a value of zero when R&D labs in
the industry are not geographically
more concentrated than is manufacturing employment. Following Ellison
and Glaeser, Guimarães, Figueiredo,
and Woodward adjust the GFW index
to account for the industrial organization of the industry in question.
We use the adjusted GFW index
as our measure of concentration for
R&D by industry.4 We find an adjusted
GFW index of 0.0457 for R&D in the
average industry at the county level. In
studying the agglomeration patterns in
the manufacturing industries, Glenn
Ellison, Edward Glaeser, and William Kerr report an average adjusted Gini
coefficient of 0.03 for manufacturing
in 1997 at the metropolitan area level.
(Since metropolitan areas tend to be
aggregates of counties, there are more
counties than metropolitan areas.)
Thus, our R&D labs appear to be more
spatially concentrated, on average,
than is manufacturing activity.5
Our findings indicate that 256, or
68 percent, of all R&D industries have
an adjusted GFW index greater than
zero, suggesting that R&D labs are
appreciably more concentrated than
manufacturing employment. Earlier
we reported that the top 50 counties
ranked by number of R&D labs account for 58 percent of all R&D labs,
while the top 50 counties ranked by
number of manufacturing establishments account for only 36 percent
of all manufacturing establishments.
Thus, the concentration of labs is
broadly similar when looking at the
top 50 counties or the adjusted GFW
index.
While an adjusted GFW index for
an industry could have a value greater

4 See the article by Paulo Guimarães, Octávio Figueiredo, and Douglas Woodward for details on the construction of the adjusted GFW index used in our article as well as a discussion of the EG index. Our sample consists of 376 four-digit Standard Industrial Classification industries at the county level. We chose to do our analysis based on the number of labs in a county rather than employment in these labs, since we have data on employment for only about one-half of the labs in our data set.

5 By construction, the value of both the EG index and the adjusted GFW index is directly related to the level of aggregation of the geographic area under consideration. That is, for any given industry, the EG indexes and the adjusted GFW indexes take on larger values for metropolitan areas (aggregations of counties) than the indexes do at the county level. Thus, our finding of greater average concentration of R&D labs compared with the average concentration of manufacturing employment reported in Ellison, Glaeser, and Kerr is even more striking, given that the adjusted GFW index is calculated at the county level and still exceeds the average value of the EG index calculated at the MSA level.

than zero, an important question is:
Does this represent a significant departure from the spatial concentration
of manufacturing employment? We
performed a simulation procedure to
determine what value of the adjusted
GFW indexes constitutes a significant
departure from the concentration of
manufacturing employment.6 We find
R&D labs in 129 of the 376 industries considered (34.3 percent) are
significantly more concentrated than
is manufacturing employment. Thus,
of the 256 industries with an adjusted
GFW index greater than zero, only
about one-half — or 129 industries —
represent a significant departure from
the overall concentration of manufacturing employment. This shows the
importance of providing statistical
tests that determine whether labs in a

6 To develop measures of statistical significance
for the adjusted GFW indexes, we partitioned
our industries into six nonoverlapping groups
based on the number of R&D labs in a given
industry. The first group consists of industries
that have between two and nine labs. The
second group consists of industries with 10 to 30
labs, while the third group consists of industries
with between 31 and 50 labs. The fourth group
consists of industries with between 51 and 100
labs, while the fifth group consists of industries
with 101 to 200 labs. The final group consists
of industries with more than 200 labs. For each
group, we performed a simulation procedure to
produce a probability distribution for the adjusted GFW index. In the simulation we randomly
allocated labs to counties while maintaining
the counties’ share of national manufacturing
employment. Therefore, if a given county has a
relatively high share of the nation’s manufacturing jobs, the county is more likely to randomly
be assigned more R&D labs, too. Each simulation produces a value for the adjusted GFW index; for each group, we performed 1,000 simulations and formed a probability distribution for the adjusted GFW indexes.
From the distribution we can calculate critical
values (one that’s positive and one that’s negative) that allow us to say that we are 95 percent
certain that any value that exceeds the critical
value indicates that labs in that grouping are
significantly more concentrated than is the actual distribution of manufacturing employment.
Similarly, any value that falls below the critical
value indicates that labs in that grouping are
significantly more dispersed than is the actual
distribution of manufacturing employment.
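The dartboard simulation described in this footnote can be sketched as follows. The county shares and lab count are hypothetical, and for simplicity the sketch uses the unadjusted squared-difference index and an upper-tail cutoff only, rather than the adjusted GFW index and two-sided critical values used in the article.

```python
import random

def null_distribution(n_labs, mfg_shares, n_sims=1000, seed=42):
    """Throw n_labs 'darts' at counties, each county drawn with probability
    equal to its share of manufacturing employment, and recompute the
    concentration index for every draw to build a null distribution."""
    rng = random.Random(seed)
    counties = list(mfg_shares)
    weights = [mfg_shares[c] for c in counties]
    values = []
    for _ in range(n_sims):
        draws = rng.choices(counties, weights=weights, k=n_labs)
        lab_shares = {c: draws.count(c) / n_labs for c in counties}
        values.append(sum((lab_shares[c] - mfg_shares[c]) ** 2 for c in counties))
    return sorted(values)

def critical_value(values, pct=0.95):
    """Upper-tail cutoff: an observed index above this is 'significantly
    more concentrated' than the random-dart benchmark at the 5 percent level."""
    return values[int(pct * len(values)) - 1]

# Hypothetical manufacturing shares for three counties, 30 labs per draw.
dist = null_distribution(n_labs=30, mfg_shares={"A": 0.4, "B": 0.35, "C": 0.25})
cutoff = critical_value(dist)
```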


given industry are significantly more
concentrated (significantly more dispersed) than is the actual distribution
of manufacturing employment.
Our measure of concentration, the
adjusted GFW index, has a maximum
value of about one for R&D in five industries.7 However, there are only two
R&D labs in each of these industries,
so it’s not surprising to find a large
value for the adjusted Gini index if the
two firms are located in proximity to
one another.8 Among industries with
20 or more labs, R&D tends to be most
concentrated in the oil and gas field
machinery industry, the computer storage devices industry, and the electronic computer industry (see the Table).9
Until now, we have looked at the
concentration of R&D labs relative
to the concentration of manufacturing employment. We would also like
to know whether labs in a particular
industry (such as pharmaceuticals) are
more or less concentrated than overall

TABLE
Concentration of R&D Labs for Selected Industries

Industry                                  Number of Labs   Adjusted GFW Index (a)

Concentrated Industries (b)
Oil & Gas Field Machinery                       22               0.33
Tires and Tubes                                 14               0.12
Crude Petroleum & Natural Gas                   14               0.10
Computer Storage Devices                        34               0.08
Motor Vehicles & Car Bodies                     26               0.06
Electronic Computers                            57               0.06
Semiconductors                                 278               0.03
Prepackaged Software                           359               0.03
Motor Vehicle Parts                            134               0.03
Optical Instruments and Lenses                  36               0.03
Computer-Integrated Systems Design             105               0.02
Radio and TV Communication Equipment           185               0.02

Dispersed Industries (c)
Wood Household Furniture                        11              -0.01
Gaskets, Packing, and Sealing Devices           11              -0.01
Industrial Valves                               14              -0.01
Plastic Plumbing Fixtures                       11              -0.01
Gray and Ductile Iron Foundries                 12              -0.01

(a) The adjusted GFW index for a given industry shows the sum of the squared differences of the share of employment in manufacturing from the share of labs in a given industry, adjusted to account for the industrial organization of the industry under consideration.
(b) R&D labs in the selected industries are significantly more concentrated than manufacturing employment (5 percent level of significance).
(c) R&D labs in the selected industries are significantly more dispersed than manufacturing employment (5 percent level of significance).

7 They are hog production; the production of brooms and brushes; the production of fiber cans, tubes, and drums; the bottled and canned soft drinks and carbonated waters industry; and the rolling mill machinery and equipment industry.

8 There is a negative relationship between the size of the adjusted GFW index and the number of labs in an industry. However, this relationship is not strong: a correlation coefficient of -0.09 that is only marginally significant (at the 10 percent level).

9 In this article, our index of concentration (the adjusted GFW index) compares the concentration of R&D labs in a given industry to the concentration of manufacturing employment in that industry. Instead of using manufacturing employment as the benchmark when constructing the adjusted GFW index, we could have used manufacturing establishments as the benchmark. In general, there's a moderate correlation (a Spearman's rank correlation coefficient of 0.56) between the industry rankings under the two alternative benchmarks for R&D industries with significant adjusted GFW indexes and with 20 or more labs. Following Guimarães, Figueiredo, and Woodward, we report the adjusted GFW index using manufacturing employment as the benchmark in this article to make our findings consistent with past studies, such as the one by Ellison and Glaeser.

R&D labs. To get this information, we
recalculated the adjusted GFW index
to reflect the geographic concentration
of labs in individual industries relative
to the overall concentration of R&D
labs (as opposed to the overall concentration of manufacturing employment).
We find that 314, or 84 percent, of all R&D industries have an adjusted GFW index greater than zero; however, we find
that R&D labs in only 105 of the 376
industries (28 percent) considered are
significantly more concentrated than
overall R&D labs.10 It’s not surprising
to find less concentration of R&D
by industries when the comparison is
to overall R&D labs than when the
comparison is to overall manufacturing employment (34.3 percent), given
that R&D labs already tend to be more
concentrated than manufacturing
employment. Still, for the majority
of industries (72 percent), labs at the
industry level tend not to be more spatially concentrated than labs overall.
Maps of R&D activity for individual industries (for example, software,
Figure 2; pharmaceuticals, Figure 3;
and chemicals, Figure 4) confirm the
findings of the adjusted GFW indexes
in that the location pattern of R&D
activity for the majority of industries is
broadly similar to the location pattern of overall R&D activity. That is,
R&D activity for most industries tends
to be concentrated in the Northeast
corridor, around the Great Lakes, in
California’s Bay Area, and in southern
California.

FIGURE 2
Location of Software R&D Labs

FIGURE 3
Location of Pharmaceutical R&D Labs

10 We performed a simulation procedure to
determine what value of the adjusted GFW
indexes constitutes a significant departure
from the concentration of total R&D labs. The
simulation procedure is similar to the procedure
used when the reference was manufacturing
employment, except we now randomly allocate
labs to counties while maintaining the counties’ share of national R&D labs, as opposed to
the counties’ share of national manufacturing
employment.


FIGURE 4
Location of Chemistry R&D Labs

FIGURE 5
Location of Oil and Gas Field Machinery
R&D Labs

As indicated, there are a number
of exceptions to the general pattern of geographic concentration
just described. One exception is
R&D activity in the oil and gas field
machinery industry, which tends to
be concentrated in Texas, especially
in the Houston area, and accounts
for about 60 percent of the labs doing
R&D in this industry (Figure 5). Another exception is the location of R&D
activity in the motor vehicle and car
body industry, which tends to be concentrated in Michigan, especially in
the Detroit area, and which accounts
for just under 40 percent of the labs
doing R&D in this industry (Figure 6).
This industry comprises establishments
primarily engaged in manufacturing
motor vehicle parts and accessories.
WHY DO R&D LABS CLUSTER?
Economists have developed a number of theories to explain why firms in general (not just R&D labs) tend to cluster. Firms may attempt to minimize
transport costs by locating close to a
natural resource used as an input, or
to their suppliers, or to their markets.
Or firms may cluster to share inputs
such as specialized workers. Finally,
firms may cluster to take advantage
of knowledge that “spills over” when
firms are located near one another.
Among these, input sharing and especially knowledge spillovers are likely to be the most important for R&D firms when choosing a location.
Knowledge Spillovers. Economists have identified two types of
knowledge spillovers thought to be important in understanding the location
pattern of R&D labs: MAR spillovers
and Jacobs spillovers.11 While these
11 MAR spillovers are so-called because in
1890 Alfred Marshall developed a theory of
knowledge spillovers that was later extended
by Kenneth Arrow and Paul Romer — hence,
MAR. In 1969, Jane Jacobs developed another
theory of knowledge spillovers.


FIGURE 6
Location of Motor Vehicle and Car Body
R&D Labs

theories were originally developed to
explain the concentration of industries
in general, we think they are particularly important to an explanation of
the clustering of R&D labs. More than
most industries, R&D depends on new
knowledge. Often, the latest knowledge about technological developments
is valuable to firms but only for a short
time. Thus, it behooves firms to set up
shop as close as possible to the sources
of information. The high spatial concentration of R&D activity facilitates
the exchange of ideas among firms and
aids in the creation of new goods and
new ways of producing existing goods.
MAR spillovers. According to
the MAR theory of spillovers, the
concentration of establishments (labs
in our case) in the same industry in a
common area helps knowledge travel
among labs and their workers and
facilitates innovation and growth.12
Employees from different establishments in the same industry exchange
ideas about new products or new ways
to produce goods. Often, knowledge is
tacit and not easily codified and therefore requires face-to-face contact to be
effectively transmitted. Having firms
concentrated in a particular area is
an efficient way to produce new ideas,
leading to innovation and growth.
People’s ability to receive ideas or
knowledge is then influenced by their
distance from the source of the ideas;
communicating ideas is harder over
longer distances. Stuart Rosenthal and
William Strange consider the importance of input sharing, matching, and
knowledge spillovers for manufacturing firms at the state, county, and ZIP
code levels. They find that the effects
of knowledge spillovers on the agglomeration of manufacturing firms tend to
be quite localized, influencing agglomeration only at the ZIP code level.13
For example, many semiconductor firms have located their R&D facilities in Silicon Valley because the area
provides an environment where semiconductor firms can develop new products and new production technologies.
Often, information about current
developments in the semiconductor industry is shared informally. In her 1994
book, AnnaLee Saxenian describes
how gathering places, such as the
Wagon Wheel Bar located only a block
from Intel, Raytheon, and Fairchild
Semiconductor, “served as informal
recruiting centers as well as listening
posts; job information flowed freely
along with shop talk.” Other examples
include the Route 128 corridor in
Massachusetts, the Research Triangle
in North Carolina, and biotechnology
and medical technology software firms
in suburban Philadelphia.
Jacobs spillovers. Jane Jacobs
believed that knowledge spillovers are
related to the diversity of industries
(diversity of labs in our case) in an
area, in contrast to MAR spillovers,
which focus on firms in a common
industry. Jacobs argued that an industrially diverse environment encourages innovation. Such environments
include knowledge workers with varied
backgrounds and interests, thereby fa-

12
Edward Glaeser, Hedi Kallal, Jose Scheinkman, and Andrei Shleifer, who coined the term
MAR spillovers, pulled these various views on
knowledge spillovers together in their article.
13

Several other studies have found that knowledge spillovers dissipate rapidly with distance.
See, for example, the articles by Mohammad
Arzaghi and J. Vernon Henderson; David
Audretsch and Maryann Feldman; Wolfgang
Keller; and Jed Kolko. The extent to which
innovations in communication technologies are
rendering face-to-face contacts obsolete is not
so clear. Jess Gaspar and Edward Glaeser argue
that improvements in telecommunications technology increase the demand for all interactions.
So while technology may substitute for face-to
face contact, this effect is offset by the greater
desire for all kinds of interactions, including
face-to-face contact.

www.philadelphiafed.org

cilitating the exchange of ideas among
individuals with different perspectives. This exchange can lead to the
development of new ideas, products,
and processes.
As John McDonald points out,
both Jane Jacobs and John Jackson
have noted that Detroit’s shipbuilding
industry was the critical antecedent
leading to the development of the
auto industry in Detroit. In the 1820s,
Detroit mainly exported flour. Because
the industry was located north of Lake
Erie along the Detroit River, small
shipyards developed to build ships for
the flour trade. R&D in the shipbuilding industry led to refinements and the
adaptation of the internal-combustion
gasoline engine to power boats on
Michigan’s rivers and lakes. As it
turned out, the gasoline engine, rather
than the steam engine, was best suited
for powering the automobile. Several
of Detroit’s pioneers in the automobile
industry had their roots in the boat
engine industry. For example, Olds
produced boat engines, and Dodge repaired them. In addition, a number of
other industries in Michigan supported
the development of the auto industry,
such as the steel and machine tool industries. These firms engaged in R&D
that led them to produce many of the
components required to make cars.
While other factors could be at
work, the adjusted GFW indexes appear to support Jacobs’ diversity view,
in that R&D labs for the vast majority
of industries (almost three-quarters)
tend to exhibit a common overlapping pattern of concentration. David
Audretsch and Maryann Feldman used
the U.S. Small Business Administration’s innovation database and focused
on innovative activity for particular
industries within specific MSAs. They
found less industry-specific innovation
in MSAs that specialized in a given
industry, a finding that also supports
Jacobs’ diversity thesis.


The Role of Natural Advantage.
While it’s tempting to argue that the
broadly similar geographic clustering of
R&D labs in many different industries
is suggestive of Jacobs externalities,
this conjecture is simply based on visual inspection of a map (Figure 1). Jacobs spillovers are one possible way to
account for the common overlapping
pattern of concentration among R&D
labs, but other forces might be at work.
One such source is the natural advantages an area offers to firms that locate
there. An area’s natural advantages,
such as climate, soil, and mineral and
ore deposits, could explain the location of some R&D labs. For example,
oil deposits, an essential ingredient
for testing equipment, may be largely
responsible for the concentration
of R&D labs in the oil and gas field
machinery industry (one of the most
highly concentrated industries, according to our adjusted GFW indexes) in
Texas, especially in the Houston area.
But the draw of ore deposits seems to
be industry-specific and is therefore
unlikely to account for the common
overlapping pattern of concentration
among R&D labs in many different
industries. Of course, if R&D labs tend
to be drawn to areas offering amenities
such as pleasant weather, proximity to
the ocean, and scenic views, this could
explain the overlapping concentration
in amenity-rich locations, such as the
concentrations found in California.
While local amenities might explain
some of the concentrations of labs, the
vast majority of R&D labs tend to be
highly concentrated in the country’s
Rust Belt region, an area relatively low
in amenities.
Another natural advantage that
an area may have lies in its workers
and institutions, especially its universities. Universities are key players
not only in creating new knowledge
through the basic research produced
by their faculties but also in supplying a pool of knowledge workers on
which R&D depends. It is well known
that Silicon Valley and the Route 128
corridor became important centers for
R&D as a result of their proximity to
Stanford and MIT. AnnaLee Saxenian
describes how Stanford’s support of
local firms is an important reason for
Silicon Valley’s success. Two of
Stanford’s star engineering professors,
John Linvill and Fred Terman, not
only drew some of the best and brightest students to Stanford, but they also
trained their students (and encouraged
them) to seek careers in the semiconductor industry.
There is also evidence that
an area’s human capital can be an
important type of natural advantage.
In a 2007 paper, Gerald Carlino,
Satyajit Chatterjee, and Robert Hunt
looked at the effect of a metropolitan
area’s human capital (the share of
the adult population with at least a
college education) on the area’s ability
to innovate (measured by patents per
capita). Of the things these authors
considered, by far the most powerful
effect on local innovation is generated
by local human capital. Specifically,
a 10 percent increase in the share of
the adult population with at least a

Business Review Q3 2008 9

college degree is associated with an 8.6
percent increase in patents per capita.
Since the share of a metropolitan area’s
population with at least a college degree varied by a factor of almost six in
the sample used in Carlino, Chatterjee,
and Hunt’s paper, the implied gains in
innovation are substantial.
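To get a rough sense of those magnitudes, consider a back-of-the-envelope sketch (our extrapolation across the sample range, not a calculation from the paper itself): applying a constant elasticity of 0.86 across a sixfold difference in college share implies nearly a fivefold difference in patenting.

```python
# Back-of-the-envelope illustration (not from the paper): if patents per
# capita rise 8.6 percent for every 10 percent rise in the college share
# (an elasticity of 0.86), a sixfold difference in college share implies
# roughly a 4.7-fold difference in patents per capita, all else fixed.
elasticity = 0.86
share_ratio = 6.0  # college share varies by a factor of about six across areas
patent_ratio = share_ratio ** elasticity
print(round(patent_ratio, 1))  # about 4.7
```

This assumes the estimated elasticity holds over the whole range of college shares, which is a strong extrapolation, but it conveys why the implied gains are described as substantial.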
There is also general evidence that
R&D at local universities is important
for firms’ innovative activity. David
Audretsch and Maryann Feldman, and
Luc Anselin, Attila Varga, and Zoltan
Acs found evidence of localized knowledge spillovers from university R&D
to commercial innovation by private
firms, even after controlling for the
location of industrial R&D. However,
Carlino, Chatterjee, and Hunt found
that R&D at local universities has
only modest effects on local innovative
activity. They found that a 10 percent
increase in R&D intensity of local
universities is associated with less than
a 1 percent increase in patent intensity.
Evidence on MAR vs. Jacobs
Spillovers. To more formally address
the issue of the importance of industrial diversity, or, alternatively, specialization, we conducted a simple experiment. Recall that we have only one
adjusted GFW index for each industry.
These industry indexes can, however,
be used to construct an overall adjusted
GFW index for each metropolitan
county. This is done by weighting each
industry’s adjusted GFW index by the
share of the county’s total establishments accounted for by that industry.
The industry-weighted adjusted GFW
indexes for a given county are then
summed to arrive at an overall adjusted GFW index for each metropolitan county. The overall adjusted GFW
index for a county can be correlated
with a widely used index of industrial
diversity.14 By construction, a county
is said to be more highly specialized
or less diversified as the value of the
diversity index increases. Recall that as


the value of the adjusted GFW index
increases, the extent of the spatial
concentration of labs in the industry
also increases. A positive correlation
between the overall county adjusted
GFW index and the specialization index means that as the county becomes
more specialized industrially, its labs
are also becoming more geographically
concentrated. This evidence favors
MAR spillovers.15
On the other hand, if the geographic concentration of labs tends
to increase as the specialization index
decreases — indicating that an area is
more industrially diverse (or less specialized) — this negative correlation
provides evidence in favor of Jacobs
spillovers. We found a positive and
highly significant correlation between
the overall county adjusted GFW
index and the specialization measure,
evidence favoring MAR spillovers.
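The construction just described can be made concrete with a minimal sketch. The industries, index values, and establishment counts below are hypothetical (the real inputs come from the authors’ data); in practice one would compute both numbers for each of the 847 metropolitan counties and correlate them across counties.

```python
# Hypothetical adjusted GFW concentration index per industry
# (higher = R&D labs in that industry are more spatially concentrated).
gfw_index = {"chemicals": 0.8, "machinery": 0.3, "electronics": 0.6}

def county_gfw(counts, gfw_index):
    """Overall adjusted GFW index for one county: each industry's index
    weighted by that industry's share of the county's establishments."""
    total = sum(counts.values())
    return sum(gfw_index[ind] * n / total for ind, n in counts.items())

def herfindahl(counts):
    """Specialization (footnote 14): sum of squared industry shares of
    establishments; a higher value means a more specialized county."""
    total = sum(counts.values())
    return sum((n / total) ** 2 for n in counts.values())

# One hypothetical county with 10 establishments across three industries.
county = {"chemicals": 6, "machinery": 2, "electronics": 2}
print(round(county_gfw(county, gfw_index), 2))  # 0.66
print(round(herfindahl(county), 2))             # 0.44
```

Correlating the two numbers across counties is the experiment described in the text: a positive correlation favors MAR spillovers, a negative one Jacobs spillovers.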
While a more definitive conclusion awaits a more complete analysis,
the evidence provided in this article
tends to support the importance of
both Jacobs spillovers (visual inspection of maps) and MAR spillovers
(statistical correlation) for R&D labs.16
CONCLUSION
Most countries make sustained
economic growth a principal policy
objective. Although many factors
contribute to economic growth, recent
research has found that innovation
and invention play an important role.
Innovation depends on R&D, and
R&D depends on, among other things,
the exchange of ideas among individuals. The high spatial concentration of
R&D labs creates an environment in
which ideas move quickly from person
to person and from lab to lab. That
is, locations that are dense in R&D
activity encourage knowledge spillovers, thus facilitating the exchange
of ideas that underlies the creation of
new goods and new ways of producing

existing goods.
Finally, the study by Saxenian provides a cautionary note for policymakers who view the success of areas such
as Silicon Valley as a recipe for local
economic development and growth.
While investing in science centers to
attract R&D activity is fairly common
in the U.S., Saxenian’s study suggests
that creating the right corporate culture to make the centers successful is
more challenging. Instead of targeting
industries, we suggest that policymakers consider strategies that help to
establish a good business environment
and that are conducive to attracting
and retaining highly skilled workers.
Glaeser and co-authors’ study suggests
that local policymakers need to focus
on life-style issues because they are
important in attracting and retaining
high-skill workers. One such policy is
providing good public schools. Other
policies might focus on reducing urban
crime and providing amenities such as
clean streets and public parks. BR

14 County-level specialization was measured
using a Herfindahl index. A Herfindahl index
measures diversification or, inversely, specialization. It is calculated by squaring and summing
the share of establishments accounted for by
each industry in a given county. The squaring of
industry shares means that the larger industries
contribute more than proportionately to the
overall value of the index. Thus, as the index
increases in value for a given county, this implies that the county is more highly specialized
or less diversified industrially.

15 We have 847 metropolitan counties in our
sample. The correlation coefficient is 0.0148
and is significant at the 1 percent level. The
coefficient is small in magnitude because the
average value for the diversity index is 75 times
as large as the average value for the county
adjusted GFW index. Despite the relatively low
value of the correlation between the county
adjusted GFW index and the diversity index,
the relationship between these variables is
economically significant, displaying an elasticity
of almost one in value.
16 A more complete analysis of the role of MAR
vs. Jacobs spillovers on the clustering of R&D
labs should also control for an area’s natural
advantages as identified in this article.


REFERENCES

Anselin, Luc, Attila Varga, and Zoltan
Acs. “Local Geographic Spillovers
between University and High Technology
Innovations,” Journal of Urban Economics,
42 (1997), pp. 442-48.
Arzaghi, Mohammad, and J. Vernon
Henderson. “Networking Off Madison
Avenue,” unpublished manuscript (2005).
Audretsch, David B., and Maryann
P. Feldman. “R&D Spillovers and the
Geography of Innovation and Production,”
American Economic Review, 86 (1996), pp.
630-40.
Carlino, Gerald A. “Knowledge Spillovers:
Cities’ Role in the New Economy,” Federal
Reserve Bank of Philadelphia Business
Review (Fourth Quarter 2001), pp. 17-23.
Carlino, Gerald A. “The Economic Role
of Cities in the 21st Century,” Federal
Reserve Bank of Philadelphia Business
Review (Third Quarter 2005), pp. 9-15.
Carlino, Gerald A., Satyajit Chatterjee,
and Robert M. Hunt. “Urban Density and
the Rate of Invention,” Journal of Urban
Economics, 61 (2007), pp. 389-419.
Directory of American Research and
Technology, 23rd Edition. New York: R.R.
Bowker, 1999.
Duranton, Gilles, and Henry G. Overman.
“Testing for Localization Using Micro-Geographic Data,” Review of Economic
Studies, 72 (2005), pp. 1077-1106.
Ellison, Glenn, and Edward L. Glaeser.
“Geographic Concentration in U.S.
Manufacturing Industries: A Dartboard
Approach,” Journal of Political Economy,
105 (1997), pp. 889-927.


Ellison, Glenn, Edward L. Glaeser,
and William Kerr. “What Causes
Industry Agglomeration? Evidence from
Coagglomeration Patterns,” Discussion
Paper 2133, Harvard Institute of Economic
Research (April 2007).
Feldman, Maryann P., and David B.
Audretsch. “Innovation in Cities:
Science-Based Diversity, Specialization,
and Localized Competition,” European
Economic Review, 43 (1999), pp. 409-29.
Gaspar, Jess, and Edward Glaeser.
“Information Technology and the Future
of Cities,” Journal of Urban Economics, 43
(1998), pp. 136-56.
Glaeser, Edward, Hedi Kallal, Jose
Scheinkman, and Andrei Shleifer.
“Growth in Cities,” Journal of Political
Economy, 100 (1992), pp. 1126-53.
Glaeser, Edward L., Jed Kolko, and Albert
Saiz. “Consumer City,” Journal of Economic
Geography, 1 (2001), pp. 27-50.
Guimarães, Paulo, Octávio Figueiredo,
and Douglas Woodward. “Measuring the
Localization of Economic Activity: A
Parametric Approach,” Journal of Regional
Science, 47 (2007), pp. 753-74.
Holmes, Thomas J., and John J. Stevens.
“Spatial Distribution of Economic
Activities in North America,” in J.V.
Henderson and J.-F. Thisse, eds., Handbook
of Regional and Urban Economics, Vol. IV:
Cities and Geography. Amsterdam: Elsevier,
2004.
Jackson, John. “Michigan,” in R. Scott
Fosler, ed., The New Role of American
States. New York: Oxford University Press,
1988.

Jacobs, Jane. The Economy of Cities. New
York: Vintage Books, 1969.
Keller, Wolfgang. “Geographic Localization
of International Technology Diffusion,”
American Economic Review, 92 (2002), pp.
120-42.
Kolko, Jed. “Agglomeration and Co-Agglomeration of Services Industries,”
unpublished manuscript (April 2007).
Krugman, Paul. Geography and Trade.
Cambridge: MIT Press, 1991.
Madden, Janice. “Creating Jobs, Keeping
Jobs, and Losing Jobs: Cities and the
Suburbs in the Global Economy,”
unpublished manuscript (2000).
Maurel, Françoise, and Béatrice
Sédillot. “A Measure of the Geographic
Concentration in French Manufacturing
Industries,” Regional Science and Urban
Economics, 29 (1999), pp. 575-604.
McDonald, John F. Fundamentals of Urban
Economics. Upper Saddle River, NJ:
Prentice Hall, 1997.
Rosenthal, Stuart, and William C. Strange.
“The Determinants of Agglomeration,”
Journal of Urban Economics, 50 (2001), pp.
191-229.
Saxenian, AnnaLee. Regional Advantage:
Culture and Competition in Silicon Valley
and Route 128. Cambridge, MA: Harvard
University Press, 1994.
Starr, Paul. Review of Regional Advantage,
by AnnaLee Saxenian, Contemporary
Sociology (May 1995).


Creative Destruction and
Aggregate Productivity Growth*
BY SHIGERU FUJITA

Productivity growth is the engine of economic
growth and is responsible for rising standards
of living. But not all firms partake equally
in the nation’s productivity growth. Rather,
according to economist Joseph Schumpeter’s theory, firms
undergo a process of “creative destruction”: New firms
that adapt to new knowledge cause the decline and eventual demise of incumbent firms. In this article, Shigeru
Fujita surveys recent studies that examine the role of
creative destruction in aggregate productivity growth.

Productivity growth is the engine
of economic growth. Firms constantly
discover and implement new technologies, making it possible for them to
produce new products and services
or to produce existing products and
services more efficiently. Productivity
growth is responsible for rising living
standards in the world.
The figure on page 13, which plots
a common measure of productivity
— labor productivity — for the U.S.,
shows that productivity has grown

Shigeru Fujita
is a senior
economist in
the Research
Department of
the Philadelphia
Fed. This article
is available free
of charge at www.
philadelphiafed.
org/research-and-data/publications/.

steadily in the postwar U.S. economy,1 indicating that the economy has become wealthier over time.
The smooth rise of productivity shown in the figure might suggest
that all firms partake equally in the
nation’s productivity growth. Joseph
Schumpeter (1883-1950), one of the
most influential economists of the
20th century, observed that anyone
who thought so would completely miss
the “essential fact about capitalism,”2
which, he argued, is the process of

1 Labor productivity is defined as the value of
output less intermediate inputs (both values adjusted for inflation) produced per unit of labor
input (measured as man-hours).

“creative destruction.” In his famous
book, Capitalism, Socialism, and Democracy, he summarized this process
as one that “incessantly revolutionizes
the economic structure from within,
incessantly destroying the old one,
incessantly creating a new one.” Of
course, many other economists have
deeply appreciated the importance of
creative destruction in capitalism. Former Federal Reserve Chairman Alan
Greenspan, for instance, argues in his
latest book that creative destruction
is the only way to increase productivity and therefore the only way to raise
average living standards on a sustained
basis. These readings suggest that
the turbulent process of creation and
destruction lurks beneath the smooth
rise in aggregate productivity.
Underlying Schumpeter’s astute
observation is the fact that firms are
very different from each other: They
differ in terms of their managerial abilities, their location, their organization,
and their know-how. These differences
mean that some firms take better
advantage of new knowledge and ideas
than others. New and existing firms
that adapt to new knowledge cause the
decline and eventual demise of other
firms. Schumpeter emphasized that
this process of creative destruction
is an “evolutionary process” whereby
“every element of it takes considerable
time in revealing its true features and
ultimate effects,” and thus “we must
judge its performance over time.”

*The views expressed here are those of the author and do not necessarily represent the views of the Federal Reserve Bank of Philadelphia or the Federal Reserve System.

2 The first view is consistent with Adam Smith’s view of economic growth. See Leonard Nakamura’s article for detailed characterizations of the differences between Smith’s and Schumpeter’s views.


FIGURE
Labor Productivity (output per hour)

[Figure: log of U.S. labor productivity, 1947-2007. Shaded areas indicate recessions.]

This article surveys recent studies that examine the role of creative
destruction in aggregate productivity
growth. These studies seek to understand the link between the productivity of individual business units and
aggregate productivity, paying particular attention to the role that the birth
and death of firms plays in the growth
of aggregate productivity. Although
Schumpeter’s idea of creative destruction has been around for more than 60
years, it is only in the last 20 years or
so that economists have had access to
data that make it possible to quantify
— and establish beyond doubt — this
“essential fact about capitalism.”
LINKING INDIVIDUAL AND
AGGREGATE PRODUCTIVITY
To understand how growth in
aggregate productivity depends on
the process of creative destruction,
we need a way to link individual


productivities to aggregate productivity. Although there are various ways
to make this link, I will focus on the
one proposed by Lucia Foster, John
Haltiwanger, and C.J. Krizan. Their
method takes a weighted average of individual establishment productivities and allows us to express the growth of aggregate productivity as the sum of four components, each of which has an intuitive economic meaning.3
The first component represents
the productivity growth of establishments that continuously exist between
two dates. Obviously, if the productivity of these continuing establishments
grows, aggregate productivity will
grow. This first component is called

3 Note that in the literature I review in this
article, an individual unit is a business establishment (or plant) that may be part of a larger firm.
For this reason, I refer to an individual unit as
an establishment rather than as a firm.

the “within component,” reflecting
the fact that this term captures the
productivity gains that occur within
each continuing establishment.
A second component takes into
account the changes in aggregate
productivity that result from changes
in the relative size of establishments
with different productivity levels.
Even if the productivity of continuing establishments were to remain
constant, aggregate productivity could
change because of changes in the size
of the establishments with different
productivity levels. For instance, if
more productive establishments were
to expand employment over time and
less productive establishments were to
shrink, aggregate productivity (which
is a weighted average of individual productivities) would grow. This component
is called the “between component.”
The two components above
measure the effects of changes in
individual productivities or changes
in employment shares. Because these
two components are calculated by
fixing either the shares or the level of
individual productivities, they do not
capture the effects of how the changes
in the individual productivities and
the changes in shares are correlated.
The “cross component” measures this
correlation: a positive correlation
shows up as a positive contribution,
and a negative correlation as a negative
one. More specifically, if the establishments
with faster-growing productivity are
also the ones increasing their employment
shares, or if the establishments with
slower-growing productivity are also
the ones decreasing their employment
shares, the cross component is positive.
The positive case sounds reasonable:
one may expect establishments with higher
productivity growth to expand their employment
shares over time, while those
with lower productivity growth
shrink theirs. However, it is also
possible that establishments that are reducing
employment faster than others (for example, by restructuring more aggressively)
are improving their productivity more rapidly. When productivity growth and
employment-share changes are negatively
correlated, the cross component
contributes negatively to aggregate
productivity growth.
The last component measures
the effects of the births and deaths of
establishments. If new establishments
have higher-than-average productivity, their presence will contribute to
growth in aggregate productivity. If
exiting establishments have lower-than-average productivity, that, too,
contributes to productivity growth.
The sum of these two subcomponents
is called the “net entry component.”
Clearly, this term is directly related
to Schumpeter’s notion of creative
destruction.
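The four components described above can be sketched in code. The toy numbers and variable names below are illustrative, not the studies’ actual data or code; the point is that the within, between, cross, and net entry terms sum exactly to the change in share-weighted aggregate productivity.

```python
# Sketch of a Foster-Haltiwanger-Krizan-style decomposition with made-up
# numbers. s = employment share, p = productivity; 0 and 1 denote the
# beginning and end dates. Aggregate productivity is the share-weighted
# average of establishment productivities.

def fhk_decompose(continuing, entrants, exiters):
    """continuing: list of (s0, p0, s1, p1); entrants: list of (s1, p1);
    exiters: list of (s0, p0). Returns (within, between, cross, net_entry),
    which sum to the change in aggregate productivity between the dates."""
    p_agg0 = (sum(s0 * p0 for s0, p0, _, _ in continuing)
              + sum(s0 * p0 for s0, p0 in exiters))
    within = sum(s0 * (p1 - p0) for s0, p0, s1, p1 in continuing)
    between = sum((s1 - s0) * (p0 - p_agg0) for s0, p0, s1, p1 in continuing)
    cross = sum((s1 - s0) * (p1 - p0) for s0, p0, s1, p1 in continuing)
    entry = sum(s1 * (p1 - p_agg0) for s1, p1 in entrants)
    exit_ = -sum(s0 * (p0 - p_agg0) for s0, p0 in exiters)
    return within, between, cross, entry + exit_

continuing = [(0.5, 1.0, 0.4, 1.2), (0.3, 0.8, 0.3, 0.9)]
entrants = [(0.3, 1.1)]  # above-average productivity: positive contribution
exiters = [(0.2, 0.6)]   # below-average productivity: exit also contributes
w, b, c, n = fhk_decompose(continuing, entrants, exiters)
print(round(w + b + c + n, 2))  # 0.22, the change in aggregate productivity
```

The first three terms cover continuing establishments; the last is the net entry component that the text links directly to creative destruction.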
The Process of Creative Destruction and the Accounting
Framework. While the net entry
component has a direct connection to
the notion of creative destruction, it is
important to recognize that the other
components are also influenced by it.
For instance, invention of a superior
technology by a new entrant may
encourage incumbent firms to improve
their own technologies. In the accounting framework above, this effect
will show up in the within component.
Another possibility is that the invention of new technologies induces
resource reallocation (for example,
workers change jobs) across incumbent
establishments. This reallocation will
clearly affect the between component.
Of course, the actual effects of
creative destruction are likely to be
more varied and subtle than any accounting framework can fully reveal.4
Nevertheless, this simple framework
can shed considerable light on what


actually happens in each establishment
as new technologies emerge and old
technologies die out.
CREATIVE DESTRUCTION AND
PRODUCTIVITY GROWTH IN
MANUFACTURING
Net Entry Accounts for 30
Percent of Productivity Growth over
a 10-Year Period. Table 1 reports the
contribution to productivity growth
in the manufacturing sector.5 The
first row shows the breakdown over a
10-year period between 1977 and 1987.
The first column of the row shows
that aggregate labor productivity,
defined as real output divided by total
hours (number of workers times hours
worked per worker), grew 21 percent
over this period. The four columns
next to the aggregate growth rate are
the shares of contributions of the four
terms explained above.
According to the first row, the
within component (77 percent) and
the net entry component (29 percent)
are the main contributors to productivity growth over this 10-year period.
This latter finding is consistent with
creative destruction.
The next three rows in Table 1 present
the contribution of the four components for
three five-year periods: 1977-1982,
1982-1987, and 1987-1992. Overall,
it is somewhat difficult to clearly
characterize the results. However, we
can make two observations. First, the
contribution of net entry is always
around 20 percent, regardless of time
period. Note that relative to the result
for 10-year productivity growth, the
contribution of net entry is smaller.
This is consistent with the idea that
the effects of creative destruction are
more apparent over a longer horizon.

4 Deeper understanding of the creative destruction process requires development of the
appropriate theoretical framework. Readers who
are interested in such attempts can refer to a
recent paper by Markus Poschke and the references therein.
5 The data in Table 1 are based on tables in the
article by Lucia Foster, John Haltiwanger, and
C.J. Krizan.

The second observation we
can make from Table 1 is that the
contribution of the between component is higher when overall productivity growth is lower (and vice
versa). Specifically, it is highest during
1977-1982, when productivity growth
is low compared with the other two
periods. This result in Table 1 is based
on coarse data observations, that is,
only three observations of five-year
productivity growth. However, a recent
study by Yoonsoo Lee, which breaks
down the annual productivity growth
in manufacturing from 1973 through
1997 using a similar method, also finds
that the between component is higher
when aggregate productivity growth
is slower. To put this observation into
perspective, we can note that aggregate
productivity tends to move together
with the business cycle, which implies
that reallocation of workers from less
productive establishments to more
productive ones intensifies during the
cyclical downturns.
New and More Productive
Establishments Displace Old and
Less Productive Ones. Now, let’s look


TABLE 1
Productivity Decomposition
(Manufacturing Sector)

              Overall     Within       Between     Cross         Net Entry
              Growth Rate Component    Component   Component     Component
1977 - 1987   21.32       16.42 (77)    1.71 (8)   -2.98 (-14)    6.18 (29)
1977 - 1982    2.54        3.10 (122)   2.16 (85)  -3.23 (-127)   0.51 (20)
1982 - 1987   18.67       15.50 (83)    2.43 (13)  -2.80 (-15)    3.55 (19)
1987 - 1992    7.17        6.74 (94)    2.37 (33)  -3.51 (-49)    1.51 (21)

Source: Foster, Haltiwanger, and Krizan, 2001, Tables 8.4 and 8.7. The sum of the four components equals the overall growth rate. Numbers in parentheses indicate the share of overall productivity growth explained by each component, calculated by dividing the contribution of each component by the overall growth rate (expressed as percent). The share is negative when the component contributes negatively to overall growth.

more closely at the role of net entry in
10-year productivity growth. The four
columns in Table 2 report productivity
levels of the following three types of
establishments: (i) those that existed
in 1977 but disappeared 10 years later,
(ii) those that did not exist in 1977
and appeared 10 years later, and (iii)
those that continued to exist throughout the 10-year period. All numbers
are expressed relative to the average
productivity level in 1977 of the establishments that existed throughout the
10-year period.
The table shows an interesting pattern. The first column (0.83)
indicates that the average productivity
of the establishments that failed to survive the 10-year period was 17 percent
lower in 1977 than that of the establishments that successfully survived


the same 10-year period. One can
think of these displaced establishments
being replaced by new establishments,
which first appeared in 1987. The
average productivity level of these new
establishments in 1987 is given in the
second column of the table. Observe
that these new establishments on average had much higher productivity than
that of the displaced establishments
(1.11 vs. 0.83). This pattern is clearly
consistent with Schumpeter’s creative
destruction insight that new and more
productive firms push out old and less
productive ones.
Learning and Selection Play
Important Roles in the Evolution
of Aggregate Productivity. However,
another important insight from this
table is that these new establishments
do not necessarily enjoy the highest

productivity when they appear in the
market. This is reflected in the fact
that the average productivity of these
entering establishments is lower than
the productivity of establishments
that continue to exist throughout the
10-year period (1.11 vs. 1.20). This
observation is consistent with the
idea that “selection” and “learning”
play important roles in the evolution
of establishment-level productivity.
Note first that those establishments
that continue to exist throughout
the period (that is, survive) do so
because they are able to achieve high
productivity. One can view this as the
“selection” process over time. Further,
even though new entrants presumably
have some advantages (especially over
old, exiting firms) — for example, because they can take advantage of new


TABLE 2
Relative Labor Productivity for
Exiting, Entering, and Continuing Establishments
(Manufacturing Sector, 1977-1987)

Exiting          Entering         Continuing       Continuing
Establishments   Establishments   Establishments   Establishments
in 1977          in 1987          in 1977          in 1987
0.83             1.11             1.00             1.20

Source: Foster, Haltiwanger, and Krizan, 2001, Table 8.9. Each column gives the average productivity level of each type of establishment, relative to the average productivity level in 1977 of establishments that existed throughout the 10-year period 1977-1987. Exiting establishments: establishments that existed in 1977 but disappeared by 1987. Entering establishments: establishments that did not exist in 1977 but appeared by 1987. Continuing establishments: establishments that existed in both 1977 and 1987.

technology or a good location — their
observed productivity is not necessarily
higher than the pre-existing “selected”
establishments, since it takes time for
these new entrants to “learn” the new
technology and building organizational
capability also takes time. Of course,
some of these new entrants may disappear, failing to survive the competition, and only productive establishments will again be selected over time.
These facts are consistent with
Schumpeter’s assertion that creative
destruction is an evolutionary process.
As Schumpeter suspected, the facts
point to the presence of rich micro-level dynamics, whereby the gradual
process of learning and selection plays
a key role in diffusing and propagating
technological improvements.
CREATIVE DESTRUCTION AND
PRODUCTIVITY GROWTH IN
RETAIL TRADE
So far we have looked at the role
of creative destruction in the manu-


facturing sector. But the service sector
employs the bulk of the U.S. workforce. For example, in 2007, 84 percent
of nonfarm business employees were
employed in service-providing industries. Unfortunately, data limitations
prevent us from carrying out a similar
analysis for the entire service sector.
But Foster, Haltiwanger, and Krizan
have made an important attempt to
look at a key segment of the service industry, namely, the retail trade sector.
While the analysis covers only one service industry, it is of particular interest
given that the retail trade sector is
large, employing more than 15 million
workers (2007), which amounts to 11
percent of total nonfarm business employment. Moreover, it has undergone
massive restructuring and reallocation
since the late 1980s. In particular, it
has changed its ways of doing business, mostly because of the adoption of
advanced information technology (for
example, improved inventory and sales
tracking).

Productivity Growth in Retail Trade Is Mostly Driven by Net
Entry. Table 3 considers aggregate
productivity growth in the retail trade
sector over the 10-year period (1987-1997) and the
contributions of the four components.
According to the table, the net entry
component accounts for virtually all
(98 percent) of the productivity growth
over the 10-year period. Compared
with the corresponding figure for the
manufacturing sector, it is much larger,
indicating the importance of net
entry in the retail trade industry. This
finding is consistent with the fact that
job creation and destruction in this
industry are explained mostly by the
entry and exit of establishments.
Another interesting finding in Table 3 is the large negative contribution
of the cross term. As I discussed before, the cross component contributes
positively if establishments with higher
productivity growth also have higher
employment growth or if establishments with lower productivity growth
have lower employment growth. Thus,
a negative contribution of this term
implies that higher productivity growth
at the establishment level is associated
with lower employment growth and
lower productivity growth is associated
with higher employment growth. This
appears counterintuitive if one expects
more productive establishments to
expand employment over time and less
productive establishments to shrink
employment over time. However, causality can go in the other direction as
well: Downsizing of employment may
have enhanced productivity growth for
some establishments over the period.
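The sign logic of the cross term can be illustrated with a one-line calculation (the numbers here are hypothetical, not from Foster, Haltiwanger, and Krizan):

```python
# Cross component for one continuing establishment:
# (change in productivity) x (change in employment share).
# Hypothetical establishment that raises productivity while shedding workers:
d_productivity = +2.0   # productivity rose
d_share = -0.10         # employment share fell
cross_contribution = d_productivity * d_share
print(cross_contribution)  # negative: productivity gains paired with downsizing
```

When many establishments pair productivity gains with downsizing in this way, the summed cross term turns negative, as in the retail trade figures discussed above.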
Establishment Births Are
Driven by Expansion of Continuing
Firms, While Establishment Deaths
Come from Firms’ Deaths. Table 4
presents the breakdown of net entry’s
contribution. The entry and exit columns of the table indicate that entry
and exit account equally for net entry’s large, positive contribution.6

6 As in the case of the manufacturing sector, the pattern of reallocation is consistent with the idea that less productive plants are replaced by more productive plants (selection effects) and those new plants experience more rapid productivity growth than more mature incumbents (post-entry learning effects).

Foster, Haltiwanger, and Krizan’s analysis does not stop there; the authors explicitly consider how much of the entry and exit of establishments reflects the entry and exit of firms as opposed to the entry and exit of establishments. Remember that the unit of observation in the analysis so far has been an “establishment,” which is defined by the physical production site (whether a manufacturing plant or a retail store). This definition leaves ownership of the establishments out of the analysis. However,
bringing the notion of firms into the
analysis, especially for the retail trade
sector, provides a richer picture of
creative destruction. Table 4 indicates
that the positive contribution of entry
comes mostly from entering establishments of continuing firms, whereas the
large contribution of exit comes from
exiting establishments of exiting firms.
The authors further distinguish firms
depending on whether the parent firm

is a single-unit or a multi-unit firm that
operates locally (one state), regionally
(two to five states), or nationally (more
than five states). The findings can be
summarized as follows:
• For continuing establishments, multiunit firms have a large productivity
advantage over single-unit firms.
Establishments operating locally,
regionally, and nationally are, on
average, 10.9 percent, 18.3 percent,
and 24.1 percent more productive
than single units.
• Among exiting establishments, the
least productive are the single units.
These units are 20.9 percent less

TABLE 3
Productivity Decomposition (Retail-Trade Sector)

             Overall       Within       Between      Cross         Net Entry
             Growth Rate   Component    Component    Component     Component
1987-1997    11.43         1.83 (16)    2.74 (24)    -4.46 (-39)   11.20 (98)

Source: Foster, Haltiwanger, and Krizan, 2006, Table 3. The sum of the four components equals the overall growth rate. Numbers in parentheses indicate the fraction of overall productivity growth explained by each component, calculated by dividing the contribution of each component by the overall growth rate (expressed as percent). The fraction is negative when the component contributes negatively to overall growth.

TABLE 4
Productivity Decomposition: Contributions of Firm Entry and Exit
(Retail-Trade Sector 1987-1997)

Net Entry of      Entering Establishments             Exiting Establishments
Establishments    Total   Continuing   Entering       Total   Continuing   Exiting
                          Firms        Firms                  Firms        Firms
98                54      37           17             45      3            42

Source: Foster, Haltiwanger, and Krizan, 2006, Table 3. The numbers indicate the fraction of overall productivity growth explained by each component, calculated by dividing the contribution of each component by the overall growth rate (expressed as percent). The number in the first column (98) corresponds to that in parentheses in the last column of Table 3. Entering establishments (firms): establishments (firms) that did not exist in 1987 and appeared in 1997. Exiting establishments (firms): establishments (firms) that existed in 1987 but disappeared by 1997. Continuing firms: firms that existed in both 1987 and 1997.

productive relative to the continuing single units, on average. The
most productive among the exiting
establishments are those affiliated with a national chain. These
establishments are actually slightly
more productive than the continuing single-unit establishments (+1.9
percent).
• Among entering establishments, those
associated with a national chain
have a very large productivity advantage over single-unit incumbents
(+24.7 percent).
Clearly, these findings are consistent with views in the popular press
that describe the demise of “mom and
pop” stores and the increasing presence of large national chains. The
creative destruction process has played

a crucial role in the productivity gains
in the retail trade industry.
SUMMARY
The availability of rich establishment-level data over the last 15 years
or so has made it possible for researchers to assess Schumpeter’s assertion
regarding the importance of creative
destruction in aggregate productivity
growth.
Recent empirical studies indeed
find that creative destruction plays
a significant role in shaping the
evolution of aggregate productivity:
The evidence shows that new and
relatively more productive establishments displace older and relatively
less productive ones. However, new
establishments are not necessarily the

most productive: While new entrants
have some advantages over existing establishments — for example, they can
take advantage of new technology or a
good location — it takes time for them
to fully exploit these advantages.
The facts reviewed in this article
point to the importance of creative destruction but only hint at how creative
destruction actually works. To fully
appreciate these facts, economists have
begun to build models that explicitly
connect establishment-level decisions
to aggregate outcomes. These models,
together with the accumulating empirical evidence on establishment-level
dynamics, promise to further enrich
our understanding of creative destruction. BR

REFERENCES

Davis, Steven, John Haltiwanger, and Scott Schuh. Job Creation and Destruction. Cambridge, MA: MIT Press, 1996.

Foster, Lucia, John Haltiwanger, and C.J. Krizan. “Aggregate Productivity Growth: Lessons from Microeconomic Evidence,” in Charles Hulten, Edward Dean, and Michael Harper, eds., New Directions in Productivity Analysis. Chicago: University of Chicago Press, 2001.

Foster, Lucia, John Haltiwanger, and C.J. Krizan. “Market Selection, Reallocation, and Restructuring in the U.S. Retail Trade Sector in the 1990s,” Review of Economics and Statistics, 88 (2006), pp. 748-58.

Greenspan, Alan. The Age of Turbulence: Adventures in a New World. New York: Penguin Press, 2007.

Lee, Yoonsoo. “The Importance of Reallocations in Cyclical Productivity and Returns to Scale: Evidence from Plant-level Data,” Federal Reserve Bank of Cleveland Working Paper 05-09 (2005).

Nakamura, Leonard. “Economics and the New Economy: The Invisible Hand Meets Creative Destruction,” Federal Reserve Bank of Philadelphia Business Review (July/August 2000).

Poschke, Markus. “Employment Protection, Firm Selection, and Growth,” IZA Discussion Paper 3164, 2007.

Schumpeter, Joseph. Capitalism, Socialism, and Democracy, Third Edition. New York: Harper and Brothers, 1942.


APPENDIX
Example of Productivity Decomposition
This appendix provides a simple example to better understand the four components of aggregate productivity
growth discussed in the text.
                         Date T                        Date T+1
Establishments   Output per    Number of       Output per    Number of
                 worker        workers         worker        workers
A                3             10              6             20
B                2             10              4             10
C                1             10              ----          ----
D                ----          ----            4             10
Total            2             30              5             40

In this example, I go through the decomposition of
the productivity difference between the two dates T
and T+1. At each point in time, there are only three
establishments: at date T, they are establishments
A, B, and C, and at date T+1, they are A, B, and D.
That is, establishment C, which existed at date T, is
replaced by a new establishment D, at T+1. The first
two columns of the table summarize the information
at date T. Each number in the first column gives
the productivity (output per worker) of the three
establishments, while the next column gives the total
number of workers. The last row is economy-wide
productivity, which can be calculated as a weighted
average of the establishment-level productivities:
Productivity of A * (Employment share of A) +
Productivity of B * (Employment share of B) +
Productivity of C * (Employment share of C) =
3(10/30)+ 2(10/30)+ 1(10/30) = 2.
One can verify that aggregate productivity in period
T+1 is 5 by doing the same calculation. Between the
two dates, aggregate productivity goes up by 3 (=5-2),
which amounts to a 150 percent increase in aggregate
productivity. I now go over how to decompose overall
improvement of productivity into four components.
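The aggregate figures just computed can be reproduced with a short employment-weighted-average helper (a sketch in plain Python; the function name is my own):

```python
# Aggregate productivity is the employment-weighted average of
# establishment-level productivity (output per worker).
def weighted_avg(productivities, workers):
    return sum(p * w for p, w in zip(productivities, workers)) / sum(workers)

print(weighted_avg([3, 2, 1], [10, 10, 10]))  # 2.0: establishments A, B, C at date T
print(weighted_avg([6, 4, 4], [20, 10, 10]))  # 5.0: establishments A, B, D at date T+1
```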
1. Within component: This term measures the
contribution of continuing establishments
to productivity improvements. It is simply a weighted average of changes in the productivity
of continuing establishments, namely, A and
B, in this example. To measure the effect of
productivity changes that occurred “within” those
establishments, employment shares are fixed at the
levels of date T:
(Change in productivity at A) *(A’s employment
share at Date T)+ (Change in productivity at
B) *(B’s employment share at Date T) = (6-3)
(10/30)+(4-2)(10/30)=5/3.
In this example, establishments A and B
experienced productivity gains of 3 and 2,
respectively. They are averaged by using their
employment shares at date T.
2. Between component: This term measures how
much of the overall productivity gain comes from
the shift of employment from less productive
establishments to more productive establishments:
Even if the productivity levels of the existing units
do not change over time, overall productivity
can change simply because reallocating workers
to more productive units improves overall
productivity. It is calculated as a weighted average
of the changes in employment shares, where the
weights are productivity at the initial date, T,
relative to overall productivity.

Business Review Q3 2008 19

APPENDIX (continued)

(Change in A’s share)*(Productivity difference
of A from overall productivity at Date T) +
(Change in B’s share)*(Productivity difference of
B from overall productivity at Date T) = (20/40-10/30)(3-2)+(10/40-10/30)(2-2) = 1/6.
The calculation in the first set of parentheses
shows that the employment share of establishment
A increased from 1/3 to 1/2. Since establishment
A had a productivity level of 3, which is higher
than the average productivity level of 2, this term
contributes positively. Similarly, the calculation
in the second term captures the fact that the
share of establishment B decreased, but it had the
same productivity level as aggregate productivity
and thus makes no contribution to aggregate
productivity.
3. Cross component: This term is less intuitive, but it
is simply computed by multiplying changes in shares
and changes in productivity and summing them
across all continuing establishments:
(Change in productivity at A)*(Change in A’s
share) + (Change in productivity at B)*(Change
in B’s share) = (6-3)(20/40-10/30)+(4-2)(10/40-10/30) = 1/3.
Establishment A increased both its employment
share and its productivity, and thus the first term is
positive. However, part of this positive contribution
is offset by the second term, which is negative
because the share of establishment B decreased.
4. Net entry component: This term represents the
difference between the contributions of the entry
and exit components. The contribution of entry is
expressed as a weighted average of the productivity
of entering establishments relative to overall

20 Q3 2008 Business Review

productivity at the initial date, T. In this simple
example, there is only one entering establishment,
namely, D. It is therefore computed as:
(Productivity difference of D from overall
productivity at Date T)*(D’s share at Date T+1)
= (4-2)10/40=1/2
Note that at the initial period, establishment D
has a higher productivity level, 4, than the overall
productivity level of 2, and this positive contribution
is multiplied by the share of employment at date
T+1. Similarly, the contribution of exits is expressed
as a weighted average of the productivity of exiting
establishments.
(Productivity difference of C from overall
productivity at Date T)*(C’s share at Date T) =
(1-2)10/30=-1/3
The calculation in the parentheses reflects
the fact that the productivity level of establishment
C is lower than the average level at date T. Relative
productivity is weighted by the employment share:
1/3. The net entry term is calculated as a difference
between the two terms:
(Contribution of entry) – (Contribution of exit)
= 1/2-(-1/3) = 5/6.
The exit of establishment C contributes positively to
changes in overall productivity because establishment
C had lower-than-average productivity, while the entry
of establishment D also makes a positive contribution
because it has higher-than-average productivity. This
pattern is consistent with creative destruction.
Summing over all four components we find that
5/3+1/6+1/3+5/6=3, which indeed gives the aggregate
productivity gain observed between the two dates.
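The full four-way decomposition can also be written compactly in code. The sketch below (plain Python; the data layout and variable names are my own) reproduces every number in this appendix using exact fractions:

```python
from fractions import Fraction as F

# Establishment data: (productivity, workers) at date T and at date T+1.
# None marks an establishment that does not operate at that date.
data = {
    "A": ((F(3), 10), (F(6), 20)),
    "B": ((F(2), 10), (F(4), 10)),
    "C": ((F(1), 10), (None, None)),   # exits between T and T+1
    "D": ((None, None), (F(4), 10)),   # enters between T and T+1
}

# Total employment and aggregate (employment-weighted) productivity at each date.
emp = [sum(d[t][1] for d in data.values() if d[t][0] is not None) for t in (0, 1)]
agg = [sum(d[t][0] * d[t][1] for d in data.values() if d[t][0] is not None) / emp[t]
       for t in (0, 1)]

within = between = cross = entry = exit_ = F(0)
for (pT, nT), (pT1, nT1) in data.values():
    if pT is not None and pT1 is not None:        # continuing establishment
        sT, sT1 = F(nT, emp[0]), F(nT1, emp[1])   # employment shares
        within += (pT1 - pT) * sT                 # productivity change at fixed share
        between += (sT1 - sT) * (pT - agg[0])     # share shift at fixed relative productivity
        cross += (pT1 - pT) * (sT1 - sT)          # interaction of the two changes
    elif pT is None:                              # entrant: weighted by its T+1 share
        entry += (pT1 - agg[0]) * F(nT1, emp[1])
    else:                                         # exiter: weighted by its T share
        exit_ += (pT - agg[0]) * F(nT, emp[0])

net_entry = entry - exit_
print(within, between, cross, net_entry)   # 5/3 1/6 1/3 5/6
print(within + between + cross + net_entry == agg[1] - agg[0])  # True
```

Because the arithmetic uses exact fractions, the identity (within + between + cross + net entry = change in aggregate productivity) holds exactly rather than only up to rounding.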


Ten Years After:
What Are the Effects of Business Method Patents
in Financial Services?*
BY ROBERT M. HUNT

In recent years, the courts have determined
that business methods can be patented and
the United States Patent and Trademark
Office has granted some 12,000 patents
of this sort. Has the availability of patents for business
methods increased the rate of innovation in the U.S.
financial sector? The available evidence suggests that
there has been no significant change in the aggregate
trend of R&D investments made by financial firms.
In this article, Bob Hunt discusses how recent court
decisions and proposed federal legislation may change
how firms enforce their patents. In addition, he outlines
some of the remaining challenges that business method
patents pose for financial companies.

Bob Hunt is a senior economist in the Research Department of the Philadelphia Fed. This article is available free of charge at www.philadelphiafed.org/research-and-data/publications/.

*The views expressed here are those of the author and do not necessarily represent the views of the Federal Reserve Bank of Philadelphia or the Federal Reserve System.

A decade has passed since American courts made clear that methods of doing business could be patented. Since then, the U.S. Patent and Trademark Office (USPTO) has granted more than 12,000 of these patents, of which only a small share has been obtained by financial firms. A number of lawsuits have been filed, and a number of financial settlements, some involving significant sums of money, have occurred.

Has the availability of patents for business methods increased the rate of innovation in the U.S. financial sector? This is a difficult question to answer, in part because our official measures are not well suited for estimating research activity in financial services. Nevertheless, the available evidence suggests that there has been no significant change in the aggregate trend of R&D investments made by financial firms.
Business method patents are
probably here to stay. But recent court
decisions and proposed federal legislation are likely to change how firms
enforce their patents. These changes
should mitigate some of the concerns
raised about business method patents:
that the claimed inventions are not
new, are not sufficiently novel to justify
the award of a patent, and are being
enforced in ways that increase business risk to financial firms. Nevertheless, significant challenges remain. In
particular, the boundaries of the rights
being granted in at least some business
method patents are not sufficiently
clear. Ambiguity over these boundaries
creates uncertainty for both the owners of these patents and their competitors.
BACKGROUND
A patent is a grant of the legal
right to exclude others from making,
using, or selling the patented invention
for a limited period of time. If the patent is infringed, the patent owner may
sue the infringer to recover lost profits.
Sometimes the patent owner is able to
obtain an injunction — a court order
that prevents the alleged infringer
from continuing to make, use, or sell
the patented invention. For reasons
described below, an injunction is a
very powerful legal weapon in patent
litigation.
But not all inventions qualify
for patent protection. To qualify, an invention must satisfy a number of
statutory requirements, including what
the law describes as nonobviousness.
This prevents the grant of a patent
for an invention that would have
been obvious to a practitioner in the
relevant field at the time the invention
was made. In other words, a patentable
invention must be more than a trivial
extension of what is already known
(the prior art).
As an example, consider one of
the patents examined in the Supreme
Court decision in Graham v. Deere.1
The claimed invention was a combined sprayer and cap used on bottles
of household chemicals. The essential elements of the sprayer had been
developed by others, but they had
never been assembled in this particular
way, which made possible the use of
automated bottling equipment. As a
result, the product was highly successful. While the Supreme Court
acknowledged that long-felt need and
commercial success might suggest the
invention was nonobvious, in the end
it decided otherwise because the differences between the product’s design
and that of pre-existing products were
minimal.
Patentable Subject Matter. In
the U.S., assuming the criteria just described are also satisfied, any process,
machine, manufacture, or composition
of matter, or any improvement of those
things can be patented. But the courts
have also identified certain categories
of subject matter that cannot be patented, for example, laws of nature and
abstract ideas.
For at least 80 years, it was commonly believed that these limitations
precluded patenting methods of doing
business. This view was suddenly

upended by the Federal Circuit’s State
Street decision in 1998.2 That case
involved a patent on a data processing
system that made possible the pooling
of assets in several mutual funds into
a single portfolio, reducing overhead
costs while maintaining the transaction information necessary for allocating gains, losses, and tax liabilities to
the original funds. The district court
determined that the invention in

question was a business method and was therefore unpatentable. But the Federal Circuit concluded that, under U.S. law, there was no such thing as a subject matter exception for business methods.

In the U.S. any process, machine,
manufacture, or composition of matter, or any
improvement of those things can be patented.
But the courts have also identified certain
categories of subject matter that cannot be
patented.

Business Method Patenting Grows Rapidly. The State Street decision had an almost immediate effect in terms of patenting behavior. About 1,000 patents for computer-implemented business methods were granted in each year after 1999 (Figure 1). Some examples are found in 10 Business Method Patents Granted in 2008. An inspection of random business method patents reveals that many are not directly related to the financial industry (there are many patents on postage-metering systems, for example). Nevertheless, half or more of all the patents depicted in Figure 1 fall into categories of technology directly related to the provision of financial services. In addition, the vast majority of business method patents (roughly four in five) would also qualify as software patents.3

Classifying the industrial mix of the owners of business method patents can be difficult. Nevertheless, it is clear that when compared with firms in the information and communications technology sector (for example, computers, software, and communications equipment), financial institutions are relatively minor players. Very roughly speaking, manufacturers of electronics, computers, instruments, and software account for at least a third, and likely much more, of business method patents granted in the last five years.4 In contrast, and again speaking very roughly, financial firms and providers of consumer payment services account for less than one-tenth of the total. Nevertheless, a number of financial institutions have accumulated a dozen or more of these patents.5

1 The Supreme Court wrote a combined decision for three patent cases. The patent I describe here was at issue in Calmar, Inc. v. Cook Chemical Co.

2 See also the Federal Circuit opinion in AT&T v. Excel Communications. The Federal Circuit is the sole court of appeals from federal district courts for patent cases. Federal Circuit decisions can be appealed to the Supreme Court.

3 See the data appendix for definitions and additional information.

4 The leading recipients include IBM, Sony, Hewlett-Packard, Fujitsu, Hitachi, NCR, and Microsoft.

5 Among others, these include American Express, Citibank, JPMorgan Chase, Capital One, and Goldman Sachs.


TABLE
10 Business Method Patents Granted in 2008

Description* / Company**

• A method and system for predicting changes in interest-rate sensitivity induced by changes in economic factors that affect the duration of assets and liabilities, including core deposits (no. 7,328,179). / McGuire Performance Solutions, Inc.

• A method and system for calculating marginal cost curves for electricity generating plants (no. 7,333,861). / NeuCo, Inc.

• A method of selecting sector weights and particular securities for a stock portfolio (no. 7,340,425). / First Trust Portfolios

• A system and method of calculating prepayment and default risk, loss given default, and default correlations for the purpose of valuing a portfolio of assets (no. 7,340,431). / Freddie Mac

• A machine and computer program that enables the pricing of auto insurance based on the risk associated with driving at particular locations and times (no. 7,343,309). / International Business Machines Corp.

• A system and method for trading pollution emission allowances (no. 7,343,341). / Chicago Climate Exchange, Inc.

• A computer-implemented method of computing price elasticities, choosing from one or more demand models based on goodness of fit (no. 7,343,355). / i2 Technologies US, Inc.

• A method of assessing the capital adequacy of an automotive finance company (no. 7,346,566). / Ford Motor Company

• A method of creating a customized payment card, based on a consumer’s instructions/images, via a website (no. 7,360,692). / AT&T Delaware Intellectual Property, Inc.

• A method of sharing the profits generated by a payment card program, in excess of some target, with users of the card (no. 7,360,693). / JPMorgan Chase Bank, N.A.

* The author’s interpretation, based on the patent’s claims or description of the invention.
** Initial assignee on the patent document.


FIGURE 1
Business Method Patents*

[Bar chart: number of business method patents granted per calendar year, 1976 through 2008 (2008 estimated); vertical axis runs from 0 to 3,000.]

Source: U.S. Patent and Trademark Office and author’s calculations
* These are patents in Class 705 (Data Processing: Financial, Business Practice, Management, or Cost/Price Determination) in the USPTO’s patent classification system. The 2008 total is estimated from five months of data. See the appendix for details.

Between 1997 and 1999, new applications for business method patents
tripled, and they have more than
tripled since then. Today about 11,000
new applications for patents on business methods are filed each year, which
suggests that there will be significant
future growth in the number of patents
granted. Over 40,000 of these applications are currently pending.
ARE FINANCIAL SERVICES
SPECIAL?
An important question to ask is
whether there are characteristics of
the financial services sector that might
make us think differently about how intellectual property influences decisions and outcomes among financial
firms. For example, how do these
firms protect their innovations in the
absence of patents? Are there special
interactions between network effects,
which are important in many areas
of finance, and intellectual property?
What challenges does intellectual
property pose for standard-setting,
which is essential for coordinating the
interactions of hundreds, if not thousands, of financial institutions acting
on behalf of millions of clients?
Protecting Innovations in the
Financial Sector. In many theoretical
papers, patents are considered essential

for protecting the fruits of innovation.
Without them, inventions might be
quickly copied by imitators, leaving the
inventor without a means of recovering her costs. This would reduce the
incentive to do R&D in the first place
and hence the rate of innovation.
In practice, however, firms employ
other means of protecting their innovations. Surveys of manufacturing
companies in the 1980s and 1990s report that only a few industries (chemicals and pharmaceuticals) view patents
as the primary means of protecting
the profits generated by an invention.6
Other factors, such as lead time or
proprietary knowledge maintained as
a trade secret, were typically ranked as
more important than patents.7 In addition, firms in most industries viewed
their investments in specific manufacturing capabilities, reputation, brand
names, and distribution networks as
more important mechanisms than
patents for protecting their innovations. Such investments are sometimes
described as complementary assets.
Consider the example of the semiconductor firm Intel. While the firm invests heavily in patents, much of Intel’s
success is derived from its ability to
design and build new factories (which
produce only the latest CPU chips)

6 See the working paper by Wesley Cohen, Richard Nelson, and John Walsh. Evidence from earlier surveys is found in Edwin Mansfield’s article and the article by Richard Levin and his co-authors.

7 A trade secret is certain confidential information, such as a formula or a production technique, that a firm tries to keep from being disclosed. The firm can sue a person (or another company) for stealing or disclosing this information, but it cannot prevent others from independently discovering and using such knowledge.

8 These are described in William Silber’s article, Peter Tufano’s 1989 article, and John Caskey’s working paper. For a recent review of the literature on financial innovation, see Tufano’s 2003 book chapter.


more rapidly than its competitors.
While those surveys focused on
manufacturing firms, other researchers
have documented similar lessons for
innovations in financial services. For
example, despite rapid imitation there
appear to be persistent first-mover advantages (reflected primarily in market
share) among firms developing new
securities or option contracts.8 Some
studies find that larger investment
banks and mutual fund companies
tend to innovate more frequently than
smaller ones. This pattern is consistent
with the idea that financial firms are
able to leverage complementary assets
to protect their innovations.9
In conclusion, it appears that
financial firms typically protect their
innovations in much the same way as
do manufacturing firms. Historically,
patents have not been a significant
part of the story for financial firms,
and yet their absence has not prevented them from investing in new
products (financial instruments) or the
processes required to offer them. The
question is then whether the addition
of financial patents to the mix can
improve on the existing incentives and
thus increase the rate of innovation in
this sector.
Network Effects and Standards.
Many financial markets are subject to
network effects: Users find the services
provided are more valuable when there
are many other users of the service.
Two obvious examples include payment systems and financial exchanges.
Consumers are more willing to carry a
payment card when they know it will
be accepted by most of the merchants
they frequent. Merchants are more
willing to incur costs to accept a
payment card if they know there are

9 See Tufano’s 1989 article and his 1993 book chapter with Erik Sirri.


many potential customers who want
to use them. In the case of financial
exchanges, efficiency is often determined by the number of active buyers
and sellers of a security. This creates a
tendency to concentrate trading of an
instrument on just a few (or even one)
exchanges. As these examples suggest,
networks are difficult to start, but once

Network effects
also arise from the
requirements of
interoperability, which
is extremely important
in financial services.
they attain a critical mass, they often
enjoy a large market share and generate considerable income.
Network effects also arise from
the requirements of interoperability,
which is extremely important in financial services. Interoperability is accomplished via standard setting, where
industry participants agree on technical features so that their systems can
work together. Two examples are the
specification of the layout and numbering systems of paper checks and the
message formats used by automated
clearinghouse (ACH) networks for
direct deposit of paychecks and other
transactions.
Network effects have two implications for thinking about patents. First,
they are an example of complementary assets that may permit financial
institutions (or networks) to protect
their innovations even in the absence
of strong intellectual property rights.
Second, networks are vulnerable to
hold-up by third parties who own
patents allegedly infringed by members
of the network. Hold-up means that

a patent owner may obtain an injunction, effectively shutting down the
network. This puts the patent owner in
a strong bargaining position in licensing negotiations. It is possible, then,
for the patent owner to obtain income
in excess of the incremental value
created by the underlying invention.
The source of that additional income
is the value created by the size of the
established network.
Consider the case of Research in
Motion (RIM), not only the developer
of the BlackBerry but also the builder
of the servers and software that make
it work. RIM was sued by a patent-holding company, NTP, whose primary
investment was its portfolio of patents.
RIM, on the other hand, had invested
about $1 billion in property, equipment, and R&D. NTP won the case
and was eventually granted an injunction that would shut down the RIM
network in the U.S. This induced RIM
to settle the litigation for about $600
million. Ironically, while NTP was very
successful in court, the U.S. patent office, on re-examination, rejected many
of NTP’s patent claims.10
A similar problem can arise with
standard setting, since firms have limited options to make technical changes
without sacrificing interoperability.
Suppose a third party subsequently
obtains a patent that is infringed by
firms complying with the standard.
The patent owner may enjoy considerable bargaining power. This is especially the case when implementing the
standard requires significant up-front
investments that firms will be hesitant
to abandon simply to avoid infringing
the patent.
A key concern here is the effect
of such risks on dynamic incentives.
10 NTP appealed at least one of those decisions.

Business Review Q3 2008 25

Companies may not be aware of all of the patents that may arise and
who owns them, at the time they are
required to make their investment
decisions. The risk of potential hold-up
may discourage firms from investing
in the first place. Such lost investment
would be particularly costly, since it
would otherwise enhance the network
and, in turn, reinforce the positive externalities that network effects convey.
Alternatively, such risk may increase
the barriers that must be overcome in
order for a network to reach a critical
mass. In other words, some networks
might never form.11
HAS THE AVAILABILITY OF
BUSINESS METHOD PATENTS
INCREASED FINANCIAL
INNOVATION?
It is always difficult to establish a
cause-and-effect relationship between
a policy change and subsequent
economic outcomes. This is especially
difficult in this case because there is
no systematic data on the volume of
these innovations over time. Ordinarily, changes in the number of patents
might be used. But in this case such
changes might simply reflect the
fact that obtaining business method
patents became much easier after the
decision in State Street.
Measuring R&D. If the outputs
of financial innovation are difficult to
measure, another approach is to examine changes in the inputs, specifically
research and development (R&D).
The first items to look at, then, are the
measures of R&D spending obtained
from the National Science Foundation’s (NSF) regular survey of private
firms. The NSF has published these
data for most years since 1958. It began
reporting R&D statistics for firms in
finance, insurance, and real estate (FIRE) only in 1995. Its most recent
estimate (2006) of R&D spending for
this group of industries was only $2
billion, compared with more than $220
billion for all industries.
The NSF reports that the majority
(58 percent) of R&D spending in FIRE
in 2003 was for computer software.
The financial sector’s focus on software R&D is consistent with the mix
of investment goods it purchases. In
1997, for example, companies in FIRE
bought $30 billion in computers and
software, making it the largest business
customer of the information technology sector (accounting for 19 percent
of sales). More than three-quarters of
financial-sector investment, excluding
structures, was devoted to the information, communication, and technology sector.12

12 These statistics are from the article by Douglas Meade and his co-authors.
Economists often examine R&D
by comparing the size of these investments relative to sales or employment
in the industry. According to the NSF
data, financial services, including
real estate, are significantly less R&D
intensive than private industry as a
whole (Figure 2). By these measures,
the private economy as a whole enjoys
a research intensity more than five
times that of financial services. And
while the R&D intensity of the U.S.
economy has risen gradually over time,
there has been no apparent change in
the R&D intensity of financial services
(the obvious spike in 2000 may reflect
intense rewriting of computer code
to address the century-date-change
problem).
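The ratio behind this comparison can be made concrete with a short sketch. The sales figures below are hypothetical, chosen only to reproduce the rough five-to-one gap described in the text; they are not NSF data.

```python
# R&D intensity as economists measure it: R&D spending relative to sales.
# Dollar figures (in $ billions) are hypothetical illustrations.

def research_intensity(rd_spending: float, sales: float) -> float:
    """R&D spending as a share of sales."""
    return rd_spending / sales

all_industries = research_intensity(rd_spending=220.0, sales=6_300.0)
fire_sector = research_intensity(rd_spending=2.0, sales=290.0)

print(f"All industries: {all_industries:.1%}")              # 3.5%
print(f"FIRE:           {fire_sector:.1%}")                 # 0.7%
print(f"Ratio:          {all_industries / fire_sector:.1f}x")  # 5.1x
```

With these stand-in sales figures, the economy-wide intensity comes out roughly five times that of FIRE, matching the gap visible in Figure 2.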
It is quite possible that the NSF’s
estimates for the financial sector do
not reflect all of the R&D activity

FIGURE 2
Research Intensity (R&D/Sales)
[Line chart: R&D spending as a share of sales (percent, 0 to 4.0), 1988-2006, for All Industries and for Finance, Insurance, & Real Estate. Source: National Science Foundation and author's calculations]

11 For a more detailed discussion, see my working paper with Samuli Simojoki and Tuomas Takalo.

that is actually occurring. The NSF’s
methodology and the definition of
R&D employed are derived from a
long tradition of surveying R&D managers at manufacturing firms. In that
sector, R&D facilities are relatively
easy to identify, and members of senior
management know who their R&D
managers are. These factors make it
relatively easy to conduct a survey of
R&D patterns among manufacturing
firms. For most financial institutions,
the terms R&D, R&D lab, and R&D
manager are largely foreign concepts.13
For example, in 2006 only six publicly
traded financial firms reported any
R&D in their financial statements, and
the total amount they reported was
only $65 million.14 No publicly held
bank or insurance company reported
doing any R&D in that year.
The activities associated with
developing new financial products,
or better ways of delivering them,
often fall outside the definition of
R&D applied by official agencies. For
example, a number of tax-court decisions conclude that research carried
out by financial firms does not satisfy
the IRS’s definition of R&D. The NSF
excludes from its definition of R&D
“other nontechnological activities…
and research in the social sciences.”15
The development of a better credit
scoring model or a new derivative
contract would likely fall outside this
definition.

13 For additional discussion of the issues in measuring R&D in finance and other service industries, see the 2005 National Research Council report and the report by Michael Gallaher, Albert Link, and Jeffrey Petrusa.

Measuring Research Workers.
Other data may shed additional light
on both the level and the trend in
R&D being performed in this sector.
To do that, I compare the composition
of the workforce in financial services
with that of the private economy as a whole. This may be a particularly
informative measure for financial
services, since 80 percent of R&D
costs in this sector consist of wages
and fringe benefits.16 The strategy is
to identify those occupations that are
most likely to be used for research and
to count the number of these workers
among financial services firms.
To do that, I rely on the Occupational Employment Statistics produced
by the Bureau of Labor Statistics. I
defined a set of occupations I’ll call
research occupations. This set includes
all types of engineers and computer
programmers and all scientists (including social scientists) and research
managers.17 Physicians, teachers, and
technicians in any of the above fields
were excluded.

14 These data are from Standard & Poor's Compustat.

15 The instructions for the survey forms used in the NSF survey of industrial R&D explicitly exclude the following listed categories: economics, expert systems, market research, actuarial and demographic research, and R&D in law.

16 That statistic is derived from NSF data for 2002. The comparable share for all private firms is 53 percent.

17 See the appendix for a more complete list of occupations included in this definition.

Of course, not all workers in these occupations and employed
by financial firms are actually engaged
in R&D; in fact, most are probably
not. But I expect that this is also true
of other industries. As long as the ratio
of actual R&D workers to my broader
measure remains constant over time,
the broader measure should accurately
capture any trend.
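The logic of this occupation-based proxy can be sketched in a few lines. The occupation list follows the article, but every count below is a hypothetical stand-in, not actual BLS data.

```python
# Sketch of the occupation-based R&D proxy. Worker counts and the
# industry workforce total are hypothetical illustrations.

research_occupations = {
    "computer programmers": 40_000,
    "software engineers": 55_000,
    "actuaries": 15_000,
    "operations researchers": 10_000,
    "social scientists": 5_000,
}
total_employment = 6_000_000  # hypothetical industry workforce

potential = sum(research_occupations.values())
share = potential / total_employment
print(f"Potential research workers: {potential:,}")  # 125,000
print(f"Share of workforce: {share:.2%}")            # 2.08%

# Key assumption: the ratio of true R&D workers to this broader count
# stays constant over time, so movements in `share` track the trend in
# actual R&D employment even though its level is overstated.
```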
For 2005, my occupational data
identify about 3.2 million potential
research workers. In that year, the NSF
identified 1.1 million R&D workers in
all industries (see the first column of
Figure 3).18 In other words, for every
three workers in these research occupations there was an R&D worker in
the NSF counts. In financial services,
my occupational measure identifies
about 147,100 potential research workers in 2005, which is roughly five times
the number of R&D workers (30,200)
found by the NSF (see the second
column of Figure 3).
In the financial sector, about two-thirds of potential research workers
were either computer programmers or
software engineers. The other third
were actuaries, operations researchers, market researchers, or social
scientists — occupations less likely to
be reflected in the NSF counts, since
work in these fields is not counted as
R&D. In contrast, in all industries, 85
percent of potential research workers
were engineers, programmers, or non-social scientists.
The NSF count of R&D workers in the financial sector is likely to
understate the actual number. As
described in the previous section, this may result from the definition of R&D used and the greater difficulty in identifying where R&D is performed in financial organizations. A very crude estimate of the number of additional R&D workers in finance can be constructed using the relationships between my data and the NSF data for all industries. If those relationships also hold true in finance, there might have been another 20,000 R&D workers in that sector in 2005 (see the second column of Figure 3).19 About half of this amount may be attributable to the higher share of nontechnological occupations among workers who may be involved in developing new products or processes.

18 By R&D worker, I mean the count of (full-time equivalent) scientists and engineers engaged in R&D as reported in the NSF survey of industrial R&D. The survey instructions indicate that the count should include "all persons engaged in scientific or engineering work at a level that requires knowledge of physical or life sciences or engineering or mathematics."
Just as with R&D spending, we
can create a measure of research
intensity by calculating the share of an
industry’s workforce that falls into the
occupations included in my definition
of potential research workers (Figure
4). There are several striking patterns.
First, the potential research share of
the financial workforce is about the
same as for private industry as a whole.
Second, after 1999, there is a rising
trend for the entire economy. The
pattern is more mixed in financial
services, with increases in some years
offset by declines in other years.20
The occupation-based measure of
research intensity can be broken down
to examine patterns within different segments of the financial services
sector (Figure 5). Again, there does
not appear to be a consistent trend for
any of these five industries, but there
are persistent differences across them.
Using somewhat older data, we can
examine even finer industry counts.
In 2001, for example, insurance firms

FIGURE 3
Research Workers, 2005
[Bar chart, thousands of workers. All industries (left axis): 3,203.5 thousand potential research workers (BLS) versus 1,098 thousand NSF R&D workers, or 34 percent of the broader measure. Financial industries (right axis): 147.1 thousand potential research workers versus 30 thousand NSF R&D workers (21 percent), plus an estimated 9 thousand undercounted workers in nontechnological occupations and 11 thousand potentially missing workers, for an implied total of 50 thousand (34 percent).]

Source: Author's calculations using data from the Bureau of Labor Statistics and National Science Foundation
* Potential research workers include programmers, software engineers, actuaries, mathematicians, operations researchers, statisticians, architects, cartographers, surveyors, all engineers, and all life, physical, or social scientists. Estimates of additional financial R&D workers assume that the true ratio of NSF R&D workers to potential research workers in financial services is identical to the ratio for all private industries (34 percent). About half of this amount may result from undercounting R&D workers who are actuaries, operations or market researchers, or social scientists. The remainder is categorized as potentially missing. See the appendix for additional information.

FIGURE 4
Potential Research Workers*
[Line chart: potential research workers as a share of the workforce (percent, 0 to 4), 1990-2006, for Financial Services and All Industries. Source: Bureau of Labor Statistics and author's calculations]
* Potential research workers include programmers, software engineers, actuaries, mathematicians, operations researchers, statisticians, architects, cartographers, surveyors, all engineers, and all life, physical, or social scientists.

19 Details on these calculations are found in the data appendix.

20 The BLS introduced a new occupational taxonomy in 1999, so we should be cautious about interpreting the decline from the level of the late 1990s.

accounted for nearly half (48 percent)
of potential research workers, followed
by commercial banks (20 percent) and
investment banks (12 percent).
Do these data suggest that the
financial services sector enjoys the
same research intensity as other parts
of the economy? Probably not. We
know from NSF data that, compared
with all private industries, financial
firms spend significantly less on R&D
per research worker.21 Adjusting for
this difference, it would appear that
financial services has a research intensity (roughly 1.3 percent) that is about
40 percent of that found in private
industry as a whole. Still, this would be
2.5 times higher than reported in the
NSF statistics.
What can we conclude? First, the
financial services sector is likely more
research intensive than is reflected in
the more traditional measures. Second,
there is no clear trend in the research
intensity of this industry. If financial
patenting is having an effect, it is not
easily discerned in any of the R&D
measures presented. Finally, NSF data and my occupation-based measures show that information and communication technologies (ICT), especially software, are important technologies developed and employed in financial services.
PATENT LITIGATION
While there is little evidence of a
change in R&D patterns in the industry, patent litigation involving financial
firms has increased. In perhaps the
first systematic study of suits involving
financial business method patents, Josh
Lerner found that they are litigated
at a rate 27 times higher than patents

in general.22 Defendants in these suits
were typically large financial services
firms or one of the financial exchanges. Plaintiffs were typically not financial companies. In several instances,
they were patent-holding companies.
In other words, they were not actively
engaged in providing goods or services.
Instead, they specialized in asserting,
and sometimes litigating, patents. It
also means they couldn’t be countersued for infringing someone else’s
patents.
Litigious plaintiffs have obtained
significant damage awards and licensing revenues. These are usually paid by
very large financial institutions or the
technology companies that serve them.
For example, in January 2006, the
Lending Tree Exchange was found to
infringe a patent on a method and system for making loan applications and
placing them up for bid by potential
lenders. The jury awarded $5.8 million
in damages to the plaintiff, IMX, an
award that was increased 50 percent in
subsequent proceedings in the district
court. In an unrelated case, the three
American futures exchanges settled
infringement suits, each involving the
same patent, collectively paying about
$50 million in licensing fees.23
Litigation Affecting Consumer
Payments. Another important example of patent litigation involves the
application of new technologies to an
old payment instrument — the paper
check. Check imaging and exchange
technologies are especially important
in the U.S. at this time. The Check
Clearing for the 21st Century Act of

22 See Lerner's working paper.

23 See the article by Mark Young and Gregory Corbett.

FIGURE 5
Potential Research Workers* by Financial Segment
[Line chart: potential research workers as a share of the workforce (percent, 0 to 12), 1990-2006, for five segments: Central Bank, Credit Intermediation, Insurance, Investment Banking, and Financial Exchanges. Source: Bureau of Labor Statistics and author's calculations]
* Potential research workers include programmers, software engineers, actuaries, mathematicians, operations researchers, statisticians, architects, cartographers, surveyors, all engineers, and all life, physical, or social scientists.

21 NSF data for 2003 show that for every dollar of R&D spent per full-time researcher in all industries, financial firms spent less than 40 cents. While some of this disparity may be due to the definitional issues described earlier, it's unlikely they explain the entire difference.

2003 (Check 21) permits banks to
process check transactions without
physically presenting the original
check to the issuing bank, so long as
certain standards are satisfied.24 Financial institutions are making very large
investments in technology in order to
take advantage of the efficiencies afforded by this law.
In January 2006, a company called
DataTreasury sued Wells Fargo, 56
other banks, and a number of other
firms that participate in the check-image clearing process. The company also
sued the Clearing House Payments
Co., which operates a check-image
exchange network. DataTreasury owns
at least six patents on processes for
creating, processing, and storing digital
images of paper checks. In earlier years
it had sued a number of institutions
and obtained licensing agreements
with firms such as JPMorgan Chase,
Merrill Lynch, and ATM manufacturer
NCR Corporation. More recently, the
ATM manufacturer Diebold struck a
licensing agreement with DataTreasury
in part to assuage bank customers who
have grown increasingly concerned
about their potential liability for patent
infringement.25
SHIFTING SANDS?
While financial patents are likely
here to stay in the United States, they
will be affected by a number of recent
Supreme Court decisions and, quite
possibly, new federal legislation. For
the most part, this activity is prompted
by more general concerns about the
efficacy of our patent system, but some
proposals are specifically directed at

24 Public Law 108-100, 12 U.S.C. 5001.

25 See the article by Steve Bills. There are at least 63 issued U.S. patents and 123 published patent applications that contain one or more references to the phrase Check 21.

business method patents. For example,
a patent reform bill (H.R. 1908)
passed by the House of Representatives in 2007 would make tax-planning
methods unpatentable subject matter.
In 2008, the Senate Judiciary Committee reported a bill (S. 1145) that
included an amendment intended to
preclude patent infringement claims
against institutions processing checks
in compliance with the requirements
of Check 21.26
The Supreme Court Speaks. In
2006 the Supreme Court decided a
case involving a patent owned by the
company MercExchange that a federal
district court determined was infringed
by eBay’s “Buy it Now” feature on its
auction website. The question was
whether, in addition to damages,
MercExchange was also entitled to an
injunction preventing eBay’s ongoing
use of this feature. On appeal, the
Federal Circuit concluded that injunctions should be denied to a successful
plaintiff in patent cases only under
exceptional circumstances. The Supreme Court disagreed, pointing to its
traditional balancing test for determining the appropriateness of a permanent
injunction. On retrial, the original
court concluded that an injunction
was not warranted.27
In 2007 the Supreme Court
decided what may become the most
important patent case in at least a
decade. In KSR International v. Teleflex,
the court considered how to determine
whether an invention consisting of a combination of pre-existing elements is obvious and therefore unpatentable.

26 A cost estimate for the bill, prepared by the Congressional Budget Office, suggests that the affected patent holders would likely sue the federal government for a taking of private property. If those suits were successful, CBO estimates that the resulting compensation payments could be as high as $1 billion.
With inventions like this, courts worry
about the problem of hindsight bias:
A novel combination of the elements
seems more obvious once it has been
tried and proven to work. To prevent
this, the Federal Circuit created limitations on how the prior art could be interpreted to suggest that an invention
was obvious. Unless a piece of prior
art actually suggested the combination
of ideas, the Federal Circuit typically
concluded the invention was not obvious. At the extreme, to demonstrate
obviousness, all the relevant aspects of
the new combination must be mentioned in a single piece of prior art.
Such an approach has been criticized for being too permissive, since it
presumes that a person of ordinary skill
in the art has little ability or creativity.
Some legal scholars and economists
have argued that the standard should
be related to the rate of technical
progress in the field. If the standard is
too low, the result is less innovation in
those industries that ought to be the
most innovative.28 Without specifically articulating a more appropriate
standard, a unanimous Supreme Court
concluded that the Federal Circuit had
set the bar too low: “In many fields
there may be little discussion of obvious techniques or combinations, and
market demand, rather than scientific
literature, may often drive design
trends. Granting patent protection to
advances that would occur in the ordinary course without real innovation
retards progress and may, for patents
combining previously known elements,
deprive prior inventions of their value
or utility.”
Business method patents are already feeling the effects of this decision.

27 Permanent injunctions in patent cases have not disappeared. In his article, Keith Slenkovich identifies 22 district court decisions after eBay where an injunction was awarded.

28 See, for example, the article by John Barton and my 2007 law review article.

In one case (In re Trans Texas
Holdings), several issued patents for a
system of inflation-adjusted deposit
and loan accounts were rejected on re-examination, and the Federal Circuit
upheld the decision. The rejection was
based on an allegedly obvious combination of two pieces of prior art. The
first was a book chapter that described
how, in the 1950s, Finnish banks
would adjust their loan and deposit
accounts for the actual inflation that
had occurred. The second was a patent
granted in 1983 that described how to
use a data processor (for example, a
computer) to manage a set of accounts.
In a separate case (Advanceme Inc v.
Rapidpay), a district court invalidated
a patent on a computerized method for
securing a loan using future credit card
receivables, arguing that the claimed
invention was a predictable variation
of at least five card programs already in
existence.
Congress Deliberates. For a number of years, there has been considerable debate over the efficacy of the
patent system in facilitating innovation
in high-technology industries that
tend to innovate cumulatively.29 This
stands in contrast to the view that in
other industries, such as chemicals and
pharmaceuticals, where innovations
tend to be more discrete, the patent
system seems to be functioning reasonably well. From this debate a consensus
is emerging in favor of some limited
reforms. Other proposals are more
controversial.
Two proposals are particularly
relevant for business method patents.
The first is designed to increase the
quality of patents issued by increasing
the information available to the patent
office. That information is likely to

come from interested third parties if
they are afforded the opportunity to
contest the issuance of a pending or recently granted patent (these are called
opposition procedures).
Limited forms of these procedures
exist under current law, but they are
used infrequently. One proposal would
reduce certain disadvantages that a
third party might experience in any
subsequent litigation involving the patent. Under the current post-grant procedure (inter partes re-examination),
a third party is precluded from using
any argument in subsequent litigation
that it could have raised during the re-examination proceedings.30 Under the
proposal, the third party is precluded
from using only the actual arguments
it raised during the opposition.
Another proposal stipulates that
when the patent in question involves
a combination invention, damages for
infringement should be based on the
incremental contribution of the patented feature to the value of the final
product. This proposal is intended to
address the problem of royalty stacking
in information, communication, and
technology industries where products
may embody dozens or even hundreds
of patented inventions. Some researchers and industry participants suspect
that, in such environments, there is
a tendency for courts to overestimate
damages from the infringement of individual patents.31 They fear the resulting conflict over the division of profits
may reduce the incentive to bring new
products to market.
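The royalty-stacking concern behind this proposal can be illustrated with stylized arithmetic; every number below is hypothetical and is meant only to show how stacked hold-up royalties can dwarf incremental-value damages.

```python
# Stylized royalty-stacking example; all numbers are hypothetical.

product_value = 100.0      # value of the final product, per unit
n_patents = 50             # separately patented features it embodies
incremental_value = 0.5    # value each patented feature actually adds

# Damages keyed to each patent's incremental contribution
# (the reform proposal):
reform_total = n_patents * incremental_value

# If each patent holder can instead threaten an injunction, each may
# bargain for a royalty tied to the whole product's value:
holdup_total = n_patents * (0.02 * product_value)  # 2% of product each

print(f"Incremental-value damages: {reform_total:.0f}")  # 25
print(f"Stacked hold-up royalties: {holdup_total:.0f}")  # 100
# In this sketch, stacked royalties exhaust the product's entire value,
# while incremental-value damages leave room to bring it to market.
```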
Concerns about royalty stacking may also arise in the financial sector, especially given its reliance on ICT and the emphasis on software in its R&D. In particular, innovations in the processes used to provide financial services are typically cumulative in nature. As noted earlier, financial markets and payment systems often exhibit network effects. These effects create value for network participants that may complicate the estimation of the incremental benefit attributable to one of many patented inventions employed by a network.

29 See the report by the Federal Trade Commission, the book by Stephen Merrill, Richard C. Levin, and Mark Myers, and the book by Adam Jaffe and Josh Lerner.

30 See 35 U.S.C. § 315(c). The purpose of this restriction is to prevent abuse of the opposition process.
Patent Boundaries. Not all concerns about business method patents
are likely to be resolved. One major
concern about these patents, and
software patents more generally, is that
their abstractness makes it difficult to
determine the actual boundaries of the
property rights being granted. Using
the jargon of patent law, these patents
often suffer from ambiguous “claims.”32
This is problematic because if firms
cannot determine what is protected
and what is not, instances of inadvertent infringement are more likely to
occur.
Consider the analogy to property
rights to land. If the boundary lines
between properties are consistently unclear or frequently reinterpreted over
time, trespassing on another’s property
would be more difficult to avoid. Even
worse, there may be instances in which
a person makes significant improvements to his or her property only to
find he or she has built partially on
another’s land. The result would be
more litigation, and this additional risk
might deter efficient investment in the
first place.

31 The issues are described in the article by Mark Lemley and Carl Shapiro and formally modeled in Shapiro's working paper. For some examples from actual cases, see the testimony by John Thomas.

32 In their 2008 book, James Bessen and Mike Meurer point out that appeals over the definition of claims in a business method patent occur more than six times as frequently as for (litigated) patents in general.

CONCLUSION
There is, at present, very little evidence to argue that business method
patents have had a significant effect on
the R&D investments of financial institutions. It is possible that the availability of business method patents has
encouraged more entry and R&D by
start-up firms or more efficient trading
of technologies. At present, however,
these represent intriguing possibilities
and not outcomes that have actually been measured. In short, we still
cannot determine whether financial
patents are creating value for the U.S.
economy.
Nevertheless, business method
patents are becoming commonplace.

Compared with many other patents,
they are litigated more often. Some of
this litigation has resulted in very large
settlements paid by established providers of financial services. These facts, in
themselves, don’t prove anything. But
combined with the lack of evidence
suggesting a positive effect on R&D investments, they do suggest that there is
likely scope for improving on the current business method patent bargain.
From the standpoint of policy, it
is important to ensure that patents
are granted only for new and nonobvious business methods and that those
standards are rigorous. In this light,
the Supreme Court’s decision in KSR
and the debate over the adoption of

enhanced opposition procedures appear to be positive developments. The
characteristics of financial markets
— in particular, network effects and
the requirements of interoperability
— should affect the choice of appropriate remedies for patent infringement.
At least after the eBay decision, these
factors may influence when a court is
willing to grant an injunction or how
it will determine the damages resulting from infringement. Each of these
changes suggests that we may already
be in the process of increasing the benefits and reducing the costs to society
of financial patents. But there is likely
more work to be done. BR

DATA APPENDIX
Counts of business method patents consist of all patents
assigned to Class 705 (Data Processing: Financial, Business
Practice, Management, or Cost/Price Determination) in the
U.S. Patent Classification System. The United States Patent and
Trademark Office (USPTO) describes Class 705 as a collection
of financial and management data processing areas, including
insurance, trading of financial instruments, health-care management, reservation systems, computerized postage metering,
electronic shopping, auction systems, and business cryptography. For additional information, see http://www.uspto.gov/web/
menu/busmethp/class705.htm.
The estimate of business method patents that are more
financial in nature is based on counts of patents falling into
subclasses of Class 705 based on analysis of patents performed
by CHI research in 2001. These subclasses include 1, 4, 7, 10,
16, 26, 30, 33, 45, 53, and 64-80. These exclude many of the
patents primarily dealing with cryptography, postage metering,
and similar technologies less closely related to the provision of
financial services.
The definition of software patents used to calculate the
software share of business methods is the one specified in the
article by Bessen and Hunt. It is based on the following search
of the USPTO patent full-text database: “SPEC/software OR
SPEC/computer AND program ANDNOT spec/antigen OR
antigenic OR chromatography ANDNOT ttl/chip OR semiconductor OR bus OR circuit OR circuitry AND ISD/$/$/yyyy
AND ccl/705/$.”
The analysis of occupational data is based on the Occupational Employment Statistics compiled by the Bureau of Labor
Statistics (BLS) (see http://www.bls.gov/oes/home.htm). The


BLS has used different industrial and occupational taxonomies
over the years. In particular, industries were defined using the
Standard Industrial Classification (SIC) system up to 2001, when the BLS switched to the North American Industry Classification System (NAICS). The BLS used its own occupational
definitions in these data until 1999, when it adopted definitions
based on the Census Bureau’s Standard Occupational Classification (SOC) system. In the end, I constructed two lists
of industries and three lists of research occupations that were
roughly comparable over time. Note that my definition of financial services excludes real estate and holding companies. Potential research occupations include computer scientists, programmers, software engineers, actuaries, mathematicians, operations
researchers, statisticians, architects, cartographers, surveyors, all
engineers, and all life, physical, and social scientists. Additional
details are available upon request.
In the text, I suggested a potential undercounting of R&D
workers in financial services of about 20,000. This was derived
as follows. For all industries in 2005, the ratio of potential
research workers to R&D workers identified by the NSF was
2.9:1. Dividing the 147,000 potential research workers in
financial services by 2.9 yields about 50,400 jobs, about 20,200
more than found by the NSF. If, however, I exclude workers in
all industries who were actuaries, operations researchers, market
researchers, and social scientists, the ratio of potential research
workers to NSF R&D workers falls to 2.5:1. Excluding jobs in
those occupations in the financial sector leaves about 98,400
potential research workers in 2005. Dividing this number by 2.5
yields about 39,400 jobs, about 9,200 more than reported in the
NSF data.
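The arithmetic above can be reproduced in a few lines of Python. The NSF baseline of roughly 30,200 financial-services R&D workers is implied by the differences reported in the text; small gaps between these computed values and the published figures reflect rounding of the reported ratios.

```python
# NSF baseline implied by the text: about 50,400 - 20,200 = 30,200 R&D workers.
nsf_baseline = 50_400 - 20_200

# All occupations: 147,000 potential research workers at a 2.9:1 ratio.
estimate_all = 147_000 / 2.9                       # ~50,700 (text reports ~50,400)
undercount_all = estimate_all - nsf_baseline       # ~20,500 (text reports ~20,200)

# Excluding actuaries, operations researchers, market researchers, and
# social scientists: 98,400 potential research workers at a 2.5:1 ratio.
estimate_narrow = 98_400 / 2.5                     # ~39,400
undercount_narrow = estimate_narrow - nsf_baseline # ~9,200
```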

www.philadelphiafed.org

REFERENCES

Barton, John H. “Nonobviousness,”
mimeo, Stanford University (2001).
Bessen, James, and Robert M. Hunt. “An
Empirical Look at Software Patents,”
Journal of Economics and Management
Strategy, 16 (2007), pp. 157–89.
Bessen, James, and Michael J. Meurer.
Patent Failure: How Judges, Bureaucrats, and
Lawyers Put Innovators at Risk. Princeton,
NJ: Princeton University Press, 2008.
Bills, Steve. “Diebold Bids to Lift Image
ATMs with Patent Deal,” American
Banker, 172:16 (January 24, 2007).
Caskey, John P. “The Evolution of the
Philadelphia Stock Exchange,” Federal
Reserve Bank of Philadelphia Business
Review (Second Quarter 2004).
Cohen, Wesley M., Richard R. Nelson,
and John P. Walsh. “Protecting Their
Intellectual Assets: Appropriability
Conditions and Why U.S. Manufacturing
Firms Patent (or Not),” NBER Working
Paper 7552 (2000).
Congressional Budget Office. “Cost
Estimate: S. 1145, Patent Reform Act of
2007 as reported by the Senate Committee
on the Judiciary on January 24, 2008.”
(February 15, 2008) at http://www.cbo.gov/
ftpdoc.cfm?index=8981
Federal Trade Commission. To Promote
Innovation: The Proper Balance of
Competition and Patent Law and Policy.
Washington, DC: Federal Trade
Commission (2003).


Gallaher, Michael, Albert Link, and
Jeffrey Petrusa. “Measuring Service-Sector Research and Development,” NIST
Planning Report No. 05-01 (March 2005).
Hunt, Robert M. “Economics and the
Design of Patent Systems,” Michigan
Telecommunications and Technology Law
Review, 13 (2007), pp. 457-70.
Hunt, Robert M., Samuli Simojoki, and
Tuomas Takalo. “Intellectual Property
Rights and Standard Setting in Financial
Services: The Case of the Single European
Payments Area,” Federal Reserve Bank of
Philadelphia Working Paper 07-20 (2007).
Jaffe, Adam B., and Josh Lerner. Innovation
and Its Discontents: How Our Broken
Patent System Is Endangering Innovation
and Progress, and What to Do About It.
Princeton, NJ: Princeton University Press,
2004.
Lemley, Mark A., and Carl Shapiro.
“Patent Hold-Up and Royalty Stacking,”
mimeo, University of California at Berkeley
(2007).
Lerner, Josh. “Trolls on State Street? The
Litigation of Financial Patents, 1976-2005,”
mimeo, Harvard Business School (2006).
Levin, Richard C., Alvin Klevorick,
Richard Nelson, and Sidney Winter.
“Appropriating the Returns From
Industrial Research and Development,”
Brookings Papers on Economic Activity
(1987), pp. 783-831.
Mansfield, Edwin. “Patents and
Innovation: An Empirical Study,”
Management Science, 32 (1986), pp. 173-81.

Meade, Douglas S., Stanislaw J. Rzeznik,
and Darlene C. Robinson-Smith. “Business
Investment by Industry in the U.S.
Economy for 1997,” Survey of Current
Business, 83:11 (2003), pp. 18-70.
Merrill, Stephen A., Richard C. Levin,
and Mark B. Myers, eds. A Patent System
for the 21st Century. Washington: National
Academies Press, 2005.
National Research Council. Measuring
Research and Development Expenditures
in the U.S. Economy. Washington DC:
National Academies Press, 2005.
National Science Foundation. Survey
of Industrial Research and Development.
Washington: NSF (various years).
National Science Foundation. Survey of
Industrial Research and Development Form
RD-1 Instructions (2005); available at
http://www.nsf.gov/statistics/srvyindustry/
surveys/srvyindus_rd1i_2005.pdf
National Science Foundation. U.S.
Business R&D Expenditures Increase in
2006. NSF Info Brief 08-313, 2008.
Shapiro, Carl. “Injunctions, Hold-Up, and
Patent Royalties,” mimeo, University of
California at Berkeley (2006).
Silber, William L. “Innovation and New
Contract Design in Futures Markets,”
Journal of Futures Markets, 1 (1981), pp.
123-55.
Sirri, Erik, and Peter Tufano. “Competition
and Change in the Mutual Fund Industry,”
in Samuel L. Hayes, III, ed., Financial
Services: Perspectives and Challenges.
Boston: Harvard Business School Press,
1993.


Slenkovich, Keith. “Triple Dose of Bad
News to Non-Practicing Patent Holders,”
IPFrontline (August 29, 2007); from
http://www.ipfrontline.com/depts/article.
asp?id=15866&deptid=7.
Thomas, John R. Prepared statement for
U.S. House of Representatives Hearing
on H.R. 1908, The Patent Reform Act of
2007. Hearings before the Subcommittee
on Courts, the Internet, and Intellectual
Property of the Committee on the Judiciary (April 26, 2007).
Tufano, Peter. “Financial Innovation and
First-Mover Advantages,” Journal of Financial Economics, 25 (1989), pp. 213-40.
Tufano, Peter. “Financial Innovation,” in
G.M. Constantinides, M. Harris, and R.
Stulz, eds. Handbook of the Economics of
Finance, Vol. 1a: Corporate Finance. Amsterdam: Elsevier, 2003.


Young, Mark, and Gregory Corbett.
“Futures Patent Litigation: A New
Competitive Force,” Futures Industry
Magazine (January/February 2005); from
http://www.futuresindustry.org/fi-magazinehome.asp?a=979

Cases Cited

Advanceme Inc v. Rapidpay, LLC, et al., Case No. 6:05 CV 424 (E.D. Texas 2007).

AT&T v. Excel Communications, 172 F.3d 1352 (Fed. Cir. 1999).

Calmar, Inc. v. Cook Chemical Co., 336 F.2d 110 (8th Cir. 1964).

DataTreasury Corporation v. Wells Fargo & Co., No. 2:06-cv-00072 (E.D. Texas 2006).

eBay Inc. et al. v. MercExchange, L.L.C., 126 S. Ct. 1837 (2006).

Graham v. Deere, 383 U.S. 101 (1966).

IMX, Inc. v. Lendingtree, LLC, 469 F. Supp. 2d 203 (D. Delaware 2007).

In re Trans Texas Holdings Corp., 498 F.3d 1290 (Fed. Cir. 2007).

KSR International Co. v. Teleflex Inc., 127 S. Ct. 1727 (2007).

State Street v. Signature Financial Group, 149 F.3d 1368 (Fed. Cir. 1998).


Innovation and Regulation in Financial Markets*
A Summary of the 2007 Philadelphia Fed Policy Forum
BY LORETTA J. MESTER

“Innovation and Regulation in Financial
Markets” was the topic of our seventh
annual Philadelphia Fed Policy Forum,
held on November 30, 2007. This event,
sponsored by the Bank’s Research Department, brought
together economic scholars, policymakers, and market
economists to discuss and debate the consequences of
financial innovation and the implications for financial
market regulation. The recent events in financial markets
and their effects on the real sector of the economy
underscore the importance of greater understanding and
further research on these topics.

The planning for our 2007 Policy
Forum began well before the onset of
the financial market disruptions in the
summer of 2007. By the time of our
conference on November 30, 2007, the
timeliness of the topic – innovation
and regulation in financial markets –
could not be denied.

Loretta J. Mester is a senior vice president and director of research at the Federal Reserve Bank of Philadelphia. This article is available free of charge at www.philadelphiafed.org/research-and-data/publications/.

The continued problems in the financial markets, which began with subprime mortgages but expanded to other financial instruments; the ensuing spillovers from the financial market disruptions to the real sector of the economy; and the steps taken by the Federal Reserve
and the U.S. Treasury to help ensure
financial stability have led to various
proposals for new regulatory structures to help limit systemic risk in our
evolving financial markets. Given the
importance of the financial markets to
our economy, it is vital that we get the
reforms right. Better understanding of
the pros and cons of financial innovation and financial market regulation –
the topic of our 2007 Policy Forum – is
an important step in doing so.
Charles Plosser, president of
the Federal Reserve Bank of Philadelphia, provided opening remarks
and outlined the Policy Forum’s three
sessions. He pointed out that whenever
there is innovation, regulation often
follows. By its very nature, innovation is a messy process with winners and losers. Market discipline is an important part of the process, helping to weed out flawed innovations from beneficial ones. But the fact that there are winners and losers sets up an environment that is ripe for regulation.

*The views expressed here are those of the author and do not necessarily represent the views of the Federal Reserve Bank of Philadelphia or the Federal Reserve System.
Our first session addressed issues
in corporate governance. In financial
markets, the innovation of high-yield
bonds contributed to a boom in corporate restructuring and buyouts, which
in turn led to changes in corporate
governance structures. The boom and
bust in technology stocks highlighted
some of the shortcomings of these
new governance structures, leading
to the passage of the Sarbanes-Oxley
Act in 2002. Some have argued that
Congress was too quick to act. Plosser
asked whether there were lessons to be
learned for our current situation.
Our second session examined several innovations in financial markets
and the role regulation may play in
helping innovations yield more efficient economic outcomes. Regulation
and innovation are interrelated – regulation, or the desire to evade regulation, can help spur innovation. Some
of these innovations may be inefficient
and some may fail, causing painful
corrections. The current situation is
a case in point. Better understanding
of the interplay between innovation
and regulation may help us avoid these
types of situations in the future.
Our third session covered the
role of regulation in financial markets.
Technology can spur innovation, but
regulation also affects the way markets function. For example, different
regulatory structures can affect the
competitiveness of financial markets.

As Plosser pointed out, there are
subtle trade-offs in the benefits and
costs of particular types of regulation,
and these have implications for the
health of financial markets. Thus,
assessing the costs and benefits will be
an important part of redesigning our
financial market regulatory structure.
CORPORATE GOVERNANCE1
Roberta Romano, of the Yale
University Law School, began the first
session with a discussion of the Sarbanes-Oxley Act and its effect on corporate governance. Romano pointed
out that the act was passed swiftly with
little opposition, but since then, some
flaws in the act have become apparent
and four major commission reports on
the act have been published. (One of
these, by the Committee on Capital
Market Regulation, was discussed by
committee co-chair R. Glenn Hubbard
in the final session of our Policy Forum.) Two criticisms are that the costs
of compliance are disproportionately
high for smaller public firms and that
there has been an adverse impact on
U.S. capital markets’ competitiveness.
Some have recommended that small
firms and foreign firms be exempted.
1 Some of the presentations reviewed here and background papers are available on our website at www.philadelphiafed.org/research-and-data/events/.

The Securities and Exchange Commission (SEC) has rejected those recommendations, but it has tried to lower
the compliance burden on small firms
and has made it easier for foreign firms
to de-register and leave U.S. markets.
The rationale for the latter is that
foreign firms may be less reluctant to
enter U.S. markets if they feel the costs
of leaving are not too high.
Romano’s research has attempted
to assess the probability that the act
will be revised.2 This involves assessing the political climate for such a
revision, which is usually difficult even
when a piece of legislation’s flaws are
apparent. As part of her assessment,
she has examined how Sarbanes-Oxley
has been covered by the business
press. Coverage of Sarbanes-Oxley has
increased over time, with the national
press focusing more on the issue of
competitiveness and the regional press
covering both issues of competitiveness
and small-firm impact. As coverage
has increased, so have congressional
hearings into Sarbanes-Oxley and the
introduction of bills that call for some
revisions to the act, most of these
focused on exemptions for small firms
and/or community banks. However,
Romano pointed out that it took over
60 years to repeal the Glass-Steagall
Act, which separated commercial
banking from investment banking, and
she thinks it would take a major shift
in the political environment before revision of Sarbanes-Oxley would move
quickly. She said that despite increasing dissatisfaction with the Sarbanes-Oxley Act, it could take some time
before its flaws are addressed, given
the difficulty in altering the status quo
within our political system.
Bengt Holmstrom, of the Massachusetts Institute of Technology,

continued the discussion on corporate
governance. He took two perspectives: the first as a member of a board
of directors and the second as an
academic who has studied the issue.
Holmstrom has been on the boards of
several companies, including a family
firm for 20 years and Nokia, the global
mobile telecommunications company,
for the past nine years. In his view, the
academic literature on the corporate
governance scandals hasn’t understood
the reason the scandals occurred
because it hasn’t understood the role of
the board. The corporate governance
discussion has focused too much on
executive compensation. In Holmstrom’s view, everyone agrees that we
got executive compensation wrong, but
the literature attributes this to weak
boards and their failure to intervene.
It concludes that the corporate governance system is fundamentally flawed
and therefore in need of wholesale
reform, with shareholders gaining
significantly more power. Holmstrom
disagrees. He pointed out that if you
look at the longer record of the U.S.
corporate governance system, it has
performed extraordinarily well and
there is nothing better elsewhere in the
world. He cautioned that a wholesale
change would be difficult to unwind,
as Romano pointed out earlier. Thus,
it is important to understand the
source of the corporate governance scandals in 2000 before advocating a completely different system.3

2 See, for example, Roberta Romano, “The Sarbanes-Oxley Act and the Making of Quack Corporate Governance,” Yale Law Journal, 114 (2005), pp. 1521-1611.
In Holmstrom’s view, flaws in
the design of executive compensation schemes, which gave executives
powerful incentives to act on their
own behalf rather than on the behalf
of stockholders, did contribute to
the accounting scandals. But weak
auditing systems, which allowed the
executives to act in this way, were also
part of the problem. Some argue that
the scandals occurred because the
board members weren’t strong enough
against the executives, and therefore,
stockholders need to be given more
rights in intervening in the running of
the firm. Holmstrom says that there
will be costs associated with such a
system. His experience suggests that
boards should not be watchdogs over
CEOs. Instead, the board’s role is to
evaluate whether the CEO and management know what they are doing
and have the ability to get the firm out
of a crisis should one arise – to evaluate the management team’s capabilities
for running the firm, not to determine
whether the team is pilfering the firm.
The board needs a trusting relationship with the CEO; it needs open
communication to know how the CEO
approaches problems in order to assess
whether this is the right person to be
running the company. It would be
difficult to have such a relationship
if the board were always investigating the CEO. Holmstrom believes
one needs to consider how proposed
reforms would affect the communication relationship between the board

and CEO. If shareholders can intervene significantly in how the firm is run, this could adversely affect that communication relationship. While
Holmstrom believes that shareholders
should have the right to fire the board,
he doesn’t believe they should be able
to fire one member selectively, since
that might prevent board members
from effectively doing their job.
Holmstrom is also skeptical of
some of the reforms being proposed
for executive compensation schemes.
For example, the use of options in
executive compensation schemes arose
because of problems with the performance plans used in the 1960s and
1970s. Now the pendulum has swung
back to such plans in which accounting numbers are used as triggers for
how much to pay people. Holmstrom
prefers payment schemes that are
simpler but more transparent, since he
believes such plans would yield better
incentives. A compensation scheme
that aims to give the executive a sufficiently high stake in the firm over time
should yield better incentives.
Franklin Allen, of the Wharton
School, University of Pennsylvania,
expanded the discussion to corporate
governance outside the U.S. Allen
noted that the notion of a corporation’s purpose differs across countries.
3 For further discussion, see Bengt Holmstrom and Steven N. Kaplan, “Corporate Governance and Merger Activity in the United States: Making Sense of the 1980s and 1990s,” Journal of Economic Perspectives, 15 (Spring 2001), pp. 121-144, and Bengt Holmstrom and Steven N. Kaplan, “The State of U.S. Corporate Governance: What’s Right and What’s Wrong,” Journal of Applied Corporate Finance, 15 (Winter 2003), pp. 8-20.

When you ask executives (or his MBA students) from countries where the spoken language is English and those from non-English-speaking countries whom a company is there for, you get radically different answers. Those
from English-speaking countries say
the company is there for the shareholders. Those from non-English
speaking countries say it is there for all
stakeholders–shareholders, employees,
bondholders, and customers. If you
ask whom the company should look
after if things go bad, you again get
different answers. In Japan, 97 percent
say job security is most important. In
Germany and France, a strong majority also favors maintaining employment. But in the U.S. and the U.K.,
maintaining dividends is significantly
more important.
These differences also mean
there will be differences in corporate
governance structures across countries.
The U.S. and the U.K. have specific
laws stating that the managers’ duty
is to the shareholders’ interests. In
Germany, employees have a 50 percent
representation on the firm’s supervisory board, which oversees the management board. Thus, workers’ interests
are taken into account in the firm’s
strategic decisions. China has recently
introduced mandatory representation
of workers on boards. In France, while
it is not mandatory to have workers
on boards, workers do have the right
to attend board meetings. In Finland,
companies can choose whether to have
workers on their boards, and many
companies have chosen to have worker
representation. Allen points out that
despite the existence of different systems, the corporate governance literature has focused only on shareholder
value, at least until recently.
One question of interest is which
system is better in terms of allocating
society’s resources most efficiently. We
know from economics that if markets
are complete, there is no asymmetric
information, and there is perfect competition, then maximizing shareholder


value is efficient. However, if there are
market imperfections, it isn’t clear this
yields the best outcome. Some of Allen’s ongoing theoretical research with
Elena Carletti and Robert Marquez
indicates that when there are imperfect markets, shareholders as well as
workers may be better off when workers are represented on a firm’s board.4
Worker representation changes the
firm’s incentives toward taking actions
that reduce the chance of bankruptcy.
This leads to less competition, which
takes the form of higher prices (which
hurt consumers), but this in turn leads
to higher expected profits and in some
cases higher overall market value than
when the firm acts only in shareholders’ interests. Thus, it is not always
the case that workers gain at the
expense of shareholders. Allen and his
co-authors are also investigating when
firms will choose to be stakeholder-oriented versus shareholder-oriented,
and what happens in product markets
when firms of each type compete.
The auto industry provides such an
example, with firms from the U.S.,
a shareholder-oriented system, and
Germany, a stakeholder-oriented
system, competing. Questions such as
these become even more important as
countries such as China, with different
corporate governance structures, gain
global economic importance.
INNOVATIONS IN
FINANCIAL MARKETS
Our second session delved into
financial market innovations with
speakers who have academic, policymaking, and market practitioner
experience. John Geanakoplos, of Yale University, discussed his research on the foundations of market liquidity and financial crises.

4 See Franklin Allen, Elena Carletti, and Robert Marquez, “Stakeholder Capitalism, Corporate Governance, and Firm Value,” Wharton Financial Institutions Center Working Paper 07-39, August 4, 2007.

This research
yields several policy implications for
the current period of financial market
distress. Geanakoplos noted that the
interest rate has played a central role
in economics for more than a century,
but during crises, collateral levels and
margins, which he sees as synonymous
with leverage, become paramount.
In these situations, the interest rate
may not move at all, but the economy
is transformed by radical shifts in
margins and collateral levels. Thus, in
his view policymakers may want to pay
more attention to collateral levels and
less attention to interest rates.
Just as supply and demand determine the interest rate in equilibrium,
in Geanakoplos’s theory they also
determine the equilibrium margin.
There is a leverage cycle in which the
economy can go from having too much
leverage to too little. Consider an
economy that has too much leverage,
that is, where margins are very low. If


there is a spate of bad news that lowers
expected values but increases expected
volatility, individuals may demand
more margin to cover their higher
risk, and the situation becomes one in
which there is too little leverage in the
market. Geanakoplos pointed to several historical episodes in which there
were extreme changes in margins: the
1994 derivatives crisis, the 1998 emerging markets debt crisis, and the 2007

subprime crisis (and a possible housing
market crash, which he speculated
might follow).
Geanakoplos is a partner of
Ellington Capital Management, a
mortgage hedge fund, so he spoke as
both an academic and a practitioner
as he elaborated on the subprime
crisis. In his view, the problems in
the subprime market derive from the
margin requirement, that is, the down
payment, which prevents subprime
borrowers from refinancing. Prior to
the current crisis, when a subprime
borrower’s mortgage rate reset at a
higher level, a borrower that was in
good standing was able to refinance
at a lower rate. Now, both the decline
in housing prices and the rise in down
payment requirements have prevented
such refinancings. In Geanakoplos’s
view, the interest rate has not played
the main role.
In Geanakoplos’s theory, a liquidity crisis begins when bad news about
an asset lowers its price. The owners
of this asset had been the most optimistic buyers and they were leveraged
because they wanted to invest more
in the asset than their own resources
permitted. The drop in the asset price
hurts them more than others in the
economy. Thus, wealth is redistributed
away from the asset’s natural buyers,
and this causes the asset price to fall
more, which then causes a further
drop in wealth, and so on. The crisis
reaches its climax only when lenders
then tighten the margin requirements,
that is, the amount of collateral they
require to back a loan. This tightening
of margins may force investors to sell
the asset, which leads to even greater
declines in the asset’s value, and
there may be spillovers to other asset
prices if they also need to be sold. Of
course, those who manage to survive
the crisis can benefit from the buying opportunity provided by the low
prices of the assets.5 In Geanakoplos’s


economic model, even a small piece of
bad news, that is, one that results in
just a small increase in the chance of
a bad outcome, can have a large effect
on the price if it results in driving the
most optimistic buyers (who were the
most leveraged) out of the market and
increasing borrowing margins. The
leverage cycle, then, has broad implications for the economy.
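Geanakoplos’s amplification mechanism can be illustrated with a stylized numerical sketch; the wealth, margin, and price figures below are hypothetical, chosen only to make the arithmetic transparent, and are not drawn from his model:

```python
def equity_after_drop(wealth, margin, price_drop):
    """Buy the largest asset position the margin allows, then mark to market."""
    position = wealth / margin           # leverage is 1 / margin
    debt = position - wealth             # amount borrowed against the asset
    return position * (1 - price_drop) - debt

# A leveraged optimist: $10 of wealth at a 20% margin buys a $50 position (5x).
equity = equity_after_drop(10.0, 0.20, 0.10)   # a 10% price drop leaves $5

# Lenders then tighten margins to 50%: the surviving equity now supports
# only a $10 position instead of $25, forcing sales at depressed prices.
supportable_position = equity / 0.50
```

With 5x leverage, a 10 percent price drop cuts the buyer’s equity in half; the subsequent margin tightening then forces most of the remaining holdings to be sold into a falling market, which is the feedback loop described in the text.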
While crises are, thankfully, rare
events, changes in margins and the
resulting problems happen more frequently. In some cases, bad news isn’t
large enough to drive the optimistic
buyers from the market and create a
financial crisis. Instead, it raises uncertainty and disagreement about the
future among people so that the less
optimistic want to sell their assets, and
the optimists want to buy up the assets
being sold. Because margins have
risen, in order to do that, the optimists
need to sell some of their other assets,
an action that causes their prices to
fall. Thus, there is some contagion.
The optimists also want to hold assets
they can borrow money against, so
they reallocate their portfolio. There
is a flight to quality, a flight away from
illiquidity.
Geanakoplos pointed out that an
important implication of the theory is
that policymakers might want to focus
more attention on regulating margins.
Forcing people to have tighter margins
in normal times and looser margins
during crises can make society better
off.
5 For further discussion, see John Geanakoplos, “Liquidity, Default, and Crashes: Endogenous Contracts in General Equilibrium,” Advances in Economics and Econometrics: Theory and Applications, Eighth World Conference, Volume II, Econometric Society Monographs (2003), pp. 170-205.

Randall Kroszner, member of the Board of Governors of the Federal Reserve System, discussed the role of information in the development of new financial products and lessons to be
learned about risk management and
regulation to help foster productive
financial market innovations. The
economy has benefited from innovations that have allowed capital to be
allocated to its most productive uses
and risks to be dispersed to a wide
range of market participants. But innovations also create challenges when
participants don’t have the necessary
information to value new instruments.
Kroszner described the typical life-cycle of a new instrument. When a new


product is developed, there is usually
an experimentation phase when market participants try to learn about the
product’s performance and risk characteristics. The product’s characteristics
are adjusted in response to market
demand. Information is gathered to
facilitate price discovery, the process
that reveals the market-clearing price
of the asset. Kroszner discussed how
the lack of information, inadequate
due-diligence to verify information,
and lax risk management have created problems in the market for some
structured finance products like SIVs (structured investment vehicles). Their
complexity and the lack of information about where the underlying credit,
legal, and operational risks reside have
made these instruments hard to value.
According to Kroszner, when market
participants realize they lack the nec-

essary information for price discovery,
the price discovery process becomes
disrupted, market liquidity can become
impaired, and it may take a significant
amount of investment in information
gathering and time before the price
discovery process can be revived.
Investment in information gathering may also result in more standardization of the instrument. Kroszner
pointed out several benefits of standardization. It can decrease complexity and increase transparency of the
instruments. More standardization
lowers the information-gathering costs,
and also the transactions costs for the
instrument, which in turn increases
market liquidity. Kroszner suggested
that improvements in standardization
could help address some of the current
challenges in the subprime market,
perhaps facilitating the workout and
loan-modification processes. He said
that the Federal Reserve and other
regulators have been actively encouraging mortgage lenders and servicers to
work with borrowers at risk of losing
their homes. Kroszner noted that the
supervisory agencies and the industry
are addressing the need for improved
risk management, including more
comprehensive due-diligence for new
financial products, and better stress-testing to cover contingent exposures,
market-wide disruptions, and potential
contagion.
John Bogle, founder and former
CEO of The Vanguard Group, Inc.,
and president of Bogle Financial
Markets Research Center – himself a
financial markets innovator – provided
his views on when innovation goes
too far. In his view, innovation and
entrepreneurship are major drivers of
global economic growth, but financial
innovation is unique because of the
sharp dichotomy between the value of
innovation to the financial institution
and the value of innovation to the
institution’s customers. Bogle believes


that institutions have a large incentive to favor complex and costly over
simple and cheap. He estimates that
the costs of the financial sector have
risen from about $100 billion in 1990
to about $530 billion in 2006. These
costs include annual expenses borne
by mutual fund investors, brokerage
commissions, investment banking
fees, fess paid to hedge fund managers,
and legal, accounting, marketing, and
advertising costs. Bogle asks whether
the costs of the financial sector have
reached a level that exceeds the value
of the sector’s many benefits.
In Bogle’s view, two recent innovations in the banking industry –
CDOs (collateralized debt obligations
backed by pools of mortgages) and
SIVs (structured investment vehicles)
– are complex and costly vehicles of
questionable benefit. He discussed a
number of innovations in the mutual fund industry over the years that
brought mutual fund managers high
fees but ultimately losses to investors,
including aggressive growth funds in
the latter half of the 1960s, government-plus funds in the 1970s, adjustable-rate mortgage funds in the 1980s,
and technology funds in the 1990s.
Bogle explained that some mutual
fund innovations have benefited fund
investors. One of these is the money
market fund, which he sees as one of
the greatest innovations in the industry’s history. He also outlined several


of the innovations of his own firm,
Vanguard, which he established in
1974. These include a fund organizational structure that keeps investment
costs down; the first market-index
mutual fund, created in May 1975,
that tracks the returns of the S&P 500
stock index; and tax-managed funds,
introduced in 1993.
Bogle concluded by suggesting
that financial innovations nearly
always create value for their creators,
but that too often, in his view, these
innovations have subtracted value
from investors.
REGULATION AND
COMPETITIVENESS
Bogle’s discussion provided an
excellent segue into the final session,
which addressed the proper role of
regulation in capital markets. Given
financial market disruptions that have
taken place since the Policy Forum,
this session provides particularly useful
insights into thinking about the regulatory structure that is to come.
R. Glenn Hubbard, dean of
the Columbia University Business
School and chairman of the Council
of Economic Advisers from 2001 to
2003, focused his discussion on the
regulation of equity markets, drawing on the work of the Committee on
Capital Markets Regulation, a nonpartisan group co-chaired by Hubbard
and John Thornton, former president
of Goldman Sachs. Hubbard sees
our financial markets as one of the
important sources of the productivity
boom in the U.S. over the past decade,
and thus, it is important to preserve
and enhance the global competitive
position of U.S. capital markets. The
U.S. share of equity raised in global
public markets dropped from about 30
percent in 2002 to about 19 percent in
2007 (through November). There has
been an increase in U.S. companies
doing initial public offerings abroad

and a decrease in the number of firms
that are listing on U.S. equity exchanges. Economic research indicates that,
on average, foreign firms still receive
a benefit from listing in the U.S., but
that listing premium has declined in
recent years. Hubbard argued that one
of the reasons for these trends is that
the U.S. securities regulatory system
does not do an adequate assessment
of the costs and benefits of proposed
and enacted regulations. The implementation of Sarbanes-Oxley could be
improved to lower its costs. Another
cost facing firms doing business in the
U.S. is potential litigation. He cited
issues surrounding auditor and director
liability and securities class-action
lawsuits, which have larger and more
frequent settlements in the U.S. than
in other financial centers.
The committee recommended
that a more risk-based approach be
taken toward securities regulation
to ensure that regulation enhances
shareholder value by improving the
incentives of managers, auditors, and
directors, and the rights of shareholders with respect to corporate control.
This would include the SEC’s performing formal cost-benefit analyses
of regulations. Regarding litigation
reform, the committee recommended
allowing alternative dispute resolution
for class actions, which might include
shareholders waiving their rights to
class actions at the time of the initial
public offering. Hubbard suggested
that the Financial Services Authority in the United Kingdom, which is a
consolidated system of financial supervision, might provide a model worth
considering in the U.S., and indeed
the costs and benefits of such a system
are being assessed as part of the work
regarding regulatory reforms needed in
the aftermath of the current financial
market disruptions.
Annette Nazareth, who at the
time of the Policy Forum was a commissioner at the SEC, was our final
speaker. Nazareth argued that a
well-conceived and balanced system
of securities regulation gives the U.S.
a competitive edge in global financial markets. The SEC’s balanced
approach to securities regulation is
based on the principles of competition,
transparency, investor protection, and


market integrity. Nazareth believes
the approach has worked well and has
been instrumental in establishing confidence in the U.S. securities markets,
which in turn has increased market
liquidity and has attracted business to
the U.S. Indeed, rather than conflicting with market forces, high-quality
regulation, she feels, is a complement
that works with market forces.
In Nazareth’s view, high-quality
regulation is based on clear goals and
standards. It should be minimally
intrusive in the marketplace, allowing
disparate business models to compete
vigorously and effectively, which fosters


innovation. It should be flexible
enough to accommodate different business models. It should promote market
efficiency. Nazareth believes securities
regulation has been most effective in
addressing market externalities, a type
of inefficiency. She outlined four types
of externalities that regulation has successfully addressed: dominant markets,
principal-agent conflicts, collective-action issues, and information asymmetries.
In dominant markets, firms with high market
power may use it anti-competitively. The
U.S. has multiple competing securities markets, and the SEC has used its
authority to enhance the competition
in these markets. For example, the
SEC mandated fair-access rules that
ensured that all market participants
would have access to the market. The
commission also regulated the sharing
of market price data, which is necessary for trading.
As Bogle described in the previous
session, financial intermediaries and
their customers are in a principal-agent
relationship, in which there are sometimes conflicts of interest. Nazareth
discussed the SEC’s regulation of
sales practices, which is intended to
alleviate some of these conflicts. She
pointed out that the U.S. has the highest level of retail investor participation
anywhere in the world and attributed
this to the standards set for sales
practices, which inspire confidence in
our markets.
The SEC has helped solve collective action problems in financial
markets. Nazareth discussed an
example that arose in the over-the-counter credit derivatives markets. As
Nazareth explained, it was discovered
that there was a very serious problem
of incomplete documentation on high
volumes of transactions in this market.
Although the securities firms realized
there was an issue, none individually
had the incentive or the ability to solve

the problem on its own. The SEC has
worked with the firms toward clearing
up this problem.
The SEC has advocated transparency and has mandated standardized
disclosure to alleviate asymmetric
information problems. Globalization
has led to convergence in disclosure
as well as accounting standards across
countries, and this has raised market
efficiency.
In Nazareth’s view, these four
types of market imperfections point
out the need for regulation to ensure
that the evolution of the marketplace
benefits investors and serves the public
interest. If regulation is well-designed,
it will enhance competition, not stand
in its way.
SUMMARY
The 2007 Policy Forum generated
lively discussion among the program
speakers and audience on the consequences of innovation in global financial markets and the implications for
financial market regulation. The recent financial market disruptions, their
effect on the real sector of the economy, and the feedback from the real
economy to financial markets underscore the need for better understanding of financial market innovations,
performance, liquidity, and regulation.
It now appears clear that some reform
of the financial supervisory system in
the U.S. is needed and will take place.
Given the vital importance of financial markets and institutions to our
economic well-being, it is imperative
that rigorous economic modeling and
empirical research be used in developing these regulatory reforms to avoid
unintended consequences and to help
ensure a more efficient and productive
financial system that is less susceptible
to systemic risk. BR


RESEARCH RAP

Abstracts of research papers produced by the economists at the Philadelphia Fed

You can find more Research Rap abstracts on our website at: www.philadelphiafed.org/research-and-data/
publications/research-rap/. Or view our Working Papers at: www.philadelphiafed.org/research-and-data/
publications/.

REAL-TIME DATA ANALYSIS: A
LOOK AT RESEARCH TO DATE
This paper describes the existing
research (as of February 2008) on real-time
data analysis, divided into five areas: (1) data
revisions; (2) forecasting; (3) monetary policy
analysis; (4) macroeconomic research; and
(5) current analysis of business and financial
conditions. In each area, substantial
progress has been made in recent years, with
researchers gaining insight into the impact
of data revisions. In addition, substantial
progress has been made in developing better
real-time data sets around the world. Still,
additional research is needed in key areas,
and research to date has uncovered even
more fruitful areas worth exploring.
Working Paper 08-4, “Frontiers of Real-Time Data Analysis,” Dean Croushore,
University of Richmond, and Visiting Scholar,
Federal Reserve Bank of Philadelphia
INSTITUTIONAL FORM AND
CENTRAL BANK PERFORMANCE:
SOME EMPIRICAL EVIDENCE
Over the last decade, the legal and
institutional frameworks governing central
banks and financial market regulatory
authorities throughout the world have
undergone significant changes. This has
created new interest in better understanding
the roles played by organizational structures,
accountability, and transparency, in
increasing the efficiency and effectiveness of
central banks in achieving their objectives
and ultimately yielding better economic


outcomes. Although much has been written
pointing out the potential role institutional
form can play in central bank performance,
little empirical work has been done to
investigate the hypothesis that institutional
form is related to performance. This paper
attempts to help fill this void.
Working Paper 08-5, “Central Bank
Institutional Structure and Effective Central
Banking: Cross-Country Empirical Evidence,”
Iftekhar Hasan, Rensselaer Polytechnic Institute
and Bank of Finland, and Loretta J. Mester,
Federal Reserve Bank of Philadelphia and The
Wharton School, University of Pennsylvania
FLUCTUATIONS IN
UNEMPLOYMENT AND VACANCIES
In a reasonably calibrated Mortensen
and Pissarides matching model, shocks to
average labor productivity can account for
only a small portion of the fluctuations
in unemployment and vacancies (Shimer
(2005a)). In this paper, the author argues
that if vintage-specific shocks rather than
aggregate productivity shocks are the driving
force of fluctuations, the model does a better
job of accounting for the data. She adds
heterogeneity in jobs (matches) with respect
to the time the job is created in the form of
different embodied technology levels. The
author also introduces specific capital that,
once adapted for a match, has less value in
another match. In the quantitative analysis,
she shows that shocks to different vintages of
entrants are able to account for fluctuations
in unemployment and vacancies and that,


in this environment, specific capital is important to
decreasing the volatility of the destruction rate of
existing matches.
Working Paper 08-6, “Specific Capital and Vintage
Effects on the Dynamics of Unemployment and
Vacancies,” Burcu Eyigungor, Federal Reserve Bank of
Philadelphia
OPTIMAL POLICY IN A CHANNEL SYSTEM
Channel systems for conducting monetary
policy are becoming increasingly popular. Despite this
popularity, the consequences of implementing policy
with a channel system are not well understood. The
authors develop a general equilibrium framework of a
channel system and study the optimal policy. A novel
aspect of the channel system is that a central bank
can “tighten” or “loosen” its policy without changing
its policy rate. This policy instrument has so far been
overlooked by a large body of the literature on the
optimal design of interest-rate rules.
Working Paper 08-7, “Monetary Policy in a Channel
System,” Aleksander Berentsen, University of Basel, and
Cyril Monnet, Federal Reserve Bank of Philadelphia
EXAMINING REVISIONS TO PCE
INFLATION RATES
This paper examines the characteristics of the
revisions to the inflation rate as measured by the
personal consumption expenditures price index both
including and excluding food and energy prices. These
data series play a major role in the Federal Reserve’s
analysis of inflation.
The author examines the magnitude and patterns
of revisions to both PCE inflation rates. The first
question he poses is: What do data revisions look like?
The author runs a variety of tests to see if the data
revisions have desirable or exploitable properties. The
second question he poses is related to the first: Can we
forecast data revisions in real time? The answer is that
it is possible to forecast revisions from the initial release
to August of the following year. Generally, the initial
release of inflation is too low and is likely to be revised
up. Policymakers should account for this predictability
in setting monetary policy.
Working Paper 08-8, “Revisions to PCE Inflation
Measures: Implications for Monetary Policy,” Dean
Croushore, University of Richmond, and Visiting Scholar,
Federal Reserve Bank of Philadelphia
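The predictability result in this abstract can be illustrated with a minimal sketch. The numbers below are hypothetical, and the paper's actual adjustment method is not reproduced here.

```python
# Hypothetical illustration (made-up numbers, not the paper's data): if
# initial inflation releases are systematically too low, the average
# revision is positive, and a naive rule adds that average back to each
# new initial release.

initial = [1.8, 2.1, 2.4, 1.9, 2.2]  # initial releases (percent)
revised = [2.0, 2.2, 2.5, 2.1, 2.3]  # later-vintage values (percent)

revisions = [r - i for i, r in zip(initial, revised)]
mean_revision = sum(revisions) / len(revisions)
print(round(mean_revision, 2))  # positive: initial releases biased low

# Bias-adjusted reading of a new hypothetical initial release of 2.0:
adjusted = 2.0 + mean_revision
```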


COMBINING CPI AND PCE INFLATION
MEASURES: BETTER FORECASTS?
Two rationales offered for policymakers’ focus on
core measures of inflation as a guide to underlying
inflation are that core inflation omits food and energy
prices, which are thought to be more volatile than other
components, and that core inflation is thought to be a
better predictor of total inflation over time horizons of
import to policymakers. The authors’ investigation finds
little support for either rationale. They find that food
and energy prices are not the most volatile components
of inflation and that depending on which inflation
measure is used, core inflation is not necessarily the
best predictor of total inflation. However, they do find
that combining CPI and PCE inflation measures can
lead to statistically significantly more accurate forecasts
of each inflation measure, suggesting that each measure
includes independent information that can be exploited
to yield better forecasts.
Working Paper 08-9, “Core Measures of Inflation
as Predictors of Total Inflation,” Theodore M. Crone,
Swarthmore College; N. Neil K. Khettry, Murray, Devine
& Company; Loretta J. Mester, Federal Reserve Bank
of Philadelphia, and The Wharton School, University of
Pennsylvania; and Jason A. Novak, Federal Reserve Bank
of Philadelphia
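The combining idea in the abstract above can be sketched in a few lines. This is a stylized illustration with made-up numbers and a fixed equal-weight rule; the paper's actual combination weights and statistical tests are not reproduced here.

```python
# Stylized sketch (not the paper's method): a fixed-weight combination of
# CPI and PCE inflation forecasts, compared against a realized value.
# All numbers are hypothetical.

def combine(cpi: float, pce: float, w: float = 0.5) -> float:
    """Weighted average of two inflation forecasts (percent)."""
    return w * cpi + (1.0 - w) * pce

cpi_fcst, pce_fcst, realized = 3.0, 2.2, 2.5
combined = combine(cpi_fcst, pce_fcst)

for name, f in [("CPI", cpi_fcst), ("PCE", pce_fcst), ("combined", combined)]:
    # In this made-up example the combined forecast has the smallest error.
    print(name, round(abs(f - realized), 2))
```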
BUSINESS METHOD PATENTS AND THE
FINANCIAL SERVICES INDUSTRY
A decade after the State Street decision, more
than 1,000 business method patents are granted each
year. Yet only one in 10 is obtained by a financial
institution. Most business method patents are also
software patents.
Have these patents increased innovation in
financial services? To address this question the
author constructs new indicators of R&D intensity
based on the occupational composition of financial
industries. The financial sector appears more research
intensive than official statistics would suggest but less
than the private economy taken as a whole. There
is considerable variation across industries but little
apparent trend. There does not appear to be an obvious
effect from business method patents on the sector’s
research intensity.
Looking ahead, three factors suggest that the
patent system may affect financial services as it
has electronics: (1) the sector’s heavy reliance on


information technology; (2) the importance of standard
setting; and (3) the strong network effects exhibited
in many areas of finance. Even today litigation is not
uncommon; the author sketches a number of significant examples
affecting financial exchanges and consumer payments.
The legal environment is changing quickly. The
author reviews a number of important federal court
decisions that will affect how business method patents
are obtained and enforced. He also reviews a number of
proposals under consideration in the U.S. Congress.
Working Paper 08-10, “Business Method Patents and
U.S. Financial Services,” Robert M. Hunt, Federal Reserve
Bank of Philadelphia
IS THERE A LINK BETWEEN JOBLESS
RECOVERIES AND THE GREAT
MODERATION?
This paper uses new data on job creation and
job destruction to find evidence of a link between
the jobless recoveries of the last two recessions and
the recent decline in aggregate volatility known as
the Great Moderation. The author finds that the last
two recessions are characterized by jobless recoveries
that came about through contrasting margins of
employment adjustment: a relatively slow decline in job
destruction in 1991-92 and persistently low job creation
in 2002-03. In manufacturing, he finds that these
patterns followed a secular decline in the magnitude
of job flows and an abrupt decline in their volatility. A
structural VAR analysis suggests that these patterns are
driven by a decline in the volatilities of the underlying
structural shocks in addition to a shift in the response
of job flows to these shocks. The shift in structural
responses is broadly consistent with the change in job
flow patterns observed during the jobless recoveries.
Working Paper 08-11, “Job Flows, Jobless Recoveries,
and the Great Moderation,” R. Jason Faberman, Federal
Reserve Bank of Philadelphia
WHAT EXPLAINS THE HOME BIAS IN
TRADE?
A large empirical literature finds that there is too
little international trade and too much intra-national
trade to be rationalized by observed international trade
costs such as tariffs and transport costs. The literature
uses frameworks in which goods are assumed to be
produced in just one stage. This paper investigates
whether the multi-stage nature of production helps


explain the home bias in trade. The author shows that
multi-stage production magnifies the effects of trade
costs. He then calibrates a multi-stage production
model to the U.S. and Canada. He solves the model
with measures of trade costs constructed from data
on tariffs, transport costs, and wholesale distribution
margins. The model can explain about three-eighths of
the Canada border effect; this is three times more than
what a calibrated one-stage model can explain. The
model also explains a good deal of Canada’s vertical
specialization trade. Finally, a reverse engineering
exercise suggests that the unknown or unobserved
component of trade costs is smaller than observed trade
costs.
Working Paper 08-12, “Can Multi-Stage Production
Explain the Home Bias in Trade?,” Kei-Mu Yi, Federal
Reserve Bank of Philadelphia
WORKER TURNOVER AND FIRM GROWTH
The authors use establishment data from the Job
Openings and Labor Turnover Survey (JOLTS) to
study the micro-level behavior of worker quits and
their relation to recruitment and establishment growth.
They find that quits decline with establishment growth,
playing the most important role at slowly contracting
firms. They also find a robust, positive relationship
between an establishment’s reported hires and
vacancies and the incidence of a quit. This relationship
occurs despite the finding that quits decline, and
hires and vacancies increase, with establishment
growth. The authors characterize these dynamics
within a labor-market search model with on-the-job
search, a convex cost of creating new positions, and
multi-worker establishments. The model distinguishes
between recruiting to replace a quitting worker and
recruiting for a new position and relates this distinction
to firm performance. Beyond giving rise to a varying
quit propensity, the model generates endogenously
determined thresholds for firm contraction (through
both layoffs and attrition), worker replacement, and
firm expansion. The continuum of decision rules
derived from these thresholds produces rich firm-level
dynamics and quit behavior that are broadly consistent
with the empirical evidence of the JOLTS data.
Working Paper 08-13, “Quits, Worker Recruitment,
and Firm Growth: Theory and Evidence,” R. Jason
Faberman, Federal Reserve Bank of Philadelphia, and Éva
Nagypál, Northwestern University
